Week 2 : May 7th – 13th
Mapping the Landscape
ARVR Innovate Conference
The 5th annual ARVR Innovate conference and expo was on in the RDS, and I took it as a great chance to see the current state of AR applications and technology, and to meet a lot of people working in the area. Overall it was a very interesting experience, with stands and talks from people deeply embedded in AR technologies. There were a variety of topics I intended to find out more about from experts in the field, some of which were:
- The process behind their AR development, how they begin prototyping and what stages of user testing they go through. What techniques and approaches they use, what does and doesn’t work, and how long they spend prior to development.
- What early stage prototyping means to them, and if they have any current methods of validating a concept before they begin developing.
- If they do user testing, what value do they get from it, how long do they spend testing, how do they validate concepts and how do they measure success or failure in their designs.
- If there were any AR user interactions they wanted to test, if they saw any big problems with the current state of AR and if they had any guidelines for designing for AR applications.
Here’s a full list of who I spoke to at the conference:
Pixel Legend – Argo Interactive – Sentireal – VRAI – Daqri – vStream – Kainos – Creatikal – UCD Psychology
Below are a few highlighted conference speakers, and conversations I had with companies:
Dave Lorenzini – youAR
Dave Lorenzini is the creator of keyhole.com, which was acquired by Google and became Google Earth. He is currently CEO & founder of youAR, an AR cloud platform for delivering persistent AR experiences such as synchronous AR viewing across multiple users. This means that if I were to manipulate an object in AR, everyone else around me would see those effects. It also enables persistent (permanent) location-based AR. You can sign up for developer access to the youAR tools on the site.
Dave’s talk was titled “How the AR cloud changes everything”. He described the AR cloud as a digital copy of the world layered over the physical world, with real-time mapping giving you the ability to anchor AR objects to the real world. In his vision, the AR cloud must be able to:
- Localise devices
- Position AR objects
- Sync users and actions
- Visualise remote scenes
“The world’s largest technology companies are in a race to move the internet off screens and out into the world where it belongs.”
The concept of AR synchronisation enables shared interactive experiences, really opening up the idea of social and shared AR applications. Persistence in AR could lead to location based environments allowing advertising, gaming, or information for services and tourism. The AR cloud to me seems like an exciting prospect for how AR could be more heavily adopted by a mass audience, adding realistic use cases and enjoyment to AR. Dave also mentioned in his talk that we need to get people off Unity and provide them with the tools they need to create these experiences, which could be another angle for me to view this project.
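To make the synchronisation and persistence ideas above concrete, here is a minimal sketch (in Python, purely illustrative) of what a shared, persistent AR scene could look like as a data model. None of these names come from youAR’s actual platform; they are assumptions for the sake of the example: an anchor is world-locked by geolocation, every connected user sees updates to it, and late joiners receive the persisted state.

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """A persistent, world-locked AR object (names are hypothetical)."""
    anchor_id: str
    latitude: float        # geolocation gives persistence: the object
    longitude: float       # stays at this real-world position
    payload: dict = field(default_factory=dict)  # e.g. a 3D model reference

class Client:
    """One user's session and their view of the shared scene."""
    def __init__(self, name):
        self.name = name
        self.visible = {}

class SharedScene:
    """Toy in-memory stand-in for an AR-cloud service."""
    def __init__(self):
        self.anchors = {}
        self.clients = []

    def connect(self, client):
        # Late joiners immediately see all persisted anchors.
        self.clients.append(client)
        client.visible = dict(self.anchors)

    def update(self, anchor):
        # Synchronous viewing: one user's manipulation is pushed to everyone.
        self.anchors[anchor.anchor_id] = anchor
        for client in self.clients:
            client.visible[anchor.anchor_id] = anchor

scene = SharedScene()
alice, bob = Client("alice"), Client("bob")
scene.connect(alice)
scene.connect(bob)
scene.update(Anchor("statue", 53.3498, -6.2603, {"model": "statue.glb"}))
```

After the update, both connected clients see the same anchor, and anyone who connects later inherits it from the scene, which is the “persistent, shared” property Dave described.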
Niall Heffernan – Daqri
Niall Heffernan is a European Sales Specialist at Daqri, an AR company providing professional grade AR through their Daqri Smart Glasses and accompanying Worksense software. He likened the Worksense software for AR to the Microsoft Office suite. They intend to make a common set of apps to tackle the tasks that they’ve witnessed in professional environments such as medical and manufacturing.
The Daqri Worksense suite addresses 5 main areas for improving commercial contexts through AR:
Show – Scan – Tag – Model – Guide
Show allows the user to collaborate with their teams and experts through heads up video display.
Scan allows the user to instantly map their environments.
Tag allows the user to create persistent AR objects in their environment.
Model allows the user to layer 3D models.
Guide provides training and instruction.
I also had the opportunity to speak to Pierre Nowacki (Corporate Sales Specialist) at the Daqri stand in the expo. It was a very interesting conversation in which he outlined the importance of user comprehension in AR: users used to experience sickness, or wouldn’t understand the task or purpose of the applications. Pierre emphasised the importance of user testing with the Smart Glasses, mentioning that with their previous product (the Smart Helmet) the menus used to follow the user after they moved their head, and the controls weren’t as consistent. The Smart Glasses have incorporated a huge amount of usability testing into the software, addressing consistency and clear user understanding of functionality.
I was curious about the colour choices for the Daqri Smart Glasses. Pierre mentioned that context is very important: as their product is intended for use in a variety of workplaces, it has to be clear but also unobtrusive. The topic of colour blindness also came up; Daqri attempt to accommodate people with colour blindness in their product, ensuring that colour variation within the menus is identifiable. Daqri are an extremely interesting company and definitely at the forefront when it comes to user experience and usability in AR. I look forward to seeing where they can take this technology.
Philip Nagele – Wikitude
Philip Nagele is the CTO of Wikitude, who provide an SDK for building AR experiences. His talk “Context is King” described the importance of context for AR applications and development. He presented a quote from Matt Miesnieks.
“There’s no point building an AR app unless it interacts with the physical world in some way.”
At first I thought this quote was obvious; of course an AR app has to interact with the physical world. But as I thought about it more, it became clear what he was talking about: the plethora of AR apps which simply act as a heads-up display, or more importantly apps without context, where it doesn’t matter where you are. An example he gave of a bad presentation of AR was from Apple’s ARKit: it didn’t interact with the real world and was an “irrelevant” example of AR.
Some technologies which can deliver context in AR are geolocation, 2D image recognition, markerless tracking and object/scene recognition. Philip gave the motorcycle scan from Terminator as a good example of AR in context. The Terminator scans the motorcycle (a real-world object), gaining real-time, relevant information about the vehicle, and the scan fills in the missing information (occlusion from other objects). He argued that for AR to develop, applications need to reduce information density, increase context and increase their relevance to the user.
Andrew Jenkinson – vStream
Andrew Jenkinson is the director and co-founder of the “experiential technology agency” vStream. They employ a story driven approach to technology, focusing on telling stories through AR and VR to create captivating and informative experiences. One of their main projects was Atria for Pfizer, where they created a VR educational experience to show the importance of a medicine which prevents blood clots. Their philosophy with this project was to entertain first, and the education would follow. They create their content for cross platform viewing (from head mounted displays, to Google Cardboard, to YouTube) in an attempt to democratise the content.
I wonder if there is a need for using VR in this infotainment context, and if it adds anything that a video can’t, as the experience was not interactive (other than where you look). It was apparently a successful piece of content for Pfizer in terms of views, but I’d be interested to see if it was successful in terms of delivering its core message of taking your medicine.
vStream had their own stand at the expo, displaying their Audi AR walkthrough experience on the HoloLens (which involves user interfaces and interactive menus), which I unfortunately didn’t get to try because of how busy it was. However, I did get to speak to one of the employees at vStream about their creative process. I was surprised to hear that they do not incorporate any user testing into their VR or AR experiences. I was told that their clients trust them to come up with great content, and that they trust their developers to deliver it. When producing content for clients, they create storyboards and then jump into development.
“The developers are good enough to know what to do.”
While the primary output of vStream is storytelling, I feel that as they use AR and start to add user interface elements to their experiences, it may become necessary to begin user testing, and perhaps to start “trusting the user”. At this point I had spoken to a few other AR companies & startups and was beginning to realise that, other than at Daqri, UX is not presently a design consideration and development precedes any form of user testing (if user testing ever even occurs).
David Trainor – Sentireal
Sentireal provide software for commercial AR/VR training. David was very passionate and enthusiastic about his product, and I spoke to him about their AR development process. They do not currently do any internal user testing or early prototyping when developing their software; however, their clients provide feedback on any concerns, which are then remedied. An interesting point he made was that the majority of the problematic feedback they get is due to the UX or UI design. Time, expense and a lack of experience with user testing were all factors which impacted their ability to test their products. They do realise the importance of UX and have a willingness to tackle these problems early. They have outsourced their user testing to get early feedback in the past, but this is expensive for them.
“90% of the testing will highlight problems with UI and UX.”
I used their HoloLens demo, which had a medical training program loaded onto it. After commenting on the high colour saturation, we discussed colour choice for AR. David explained that context is very important, and that they need to understand who’s using an application and where it’s being used before they develop it. Essentially, it’s important to understand the use case when designing for AR.
Liam Ferris – Kainos
Liam Ferris is a VR software engineer working for Kainos; a digital technology company based in the UK. His current work involves gathering research through VR for the DVSA (The Driver & Vehicle Standards Agency). The project being showcased was a virtual reality hazard identification test, intended to assess driver awareness and where they look when driving. We discussed his approach to user testing and his methodology in his research.
An interesting concept Liam mentioned was “The Fidelity Contract”: everything in the world needs to be consistent, or you run the risk of confusing the user. He explained that their initial testing involved a physical steering wheel whose motion didn’t translate to the VR experience. People would instinctively turn the wheel, be taken out of the experience, and become confused when the wheel didn’t work as they expected.
“Offering people too much when it isn’t consistent with the world can be really confusing for them.”
Liam also discussed how VR itself could be a factor in testing with people, especially if they’ve never used VR before. It’s common to get false positives by testing with new users and he recommends introducing them to good VR before asking them to test your designs.
UCD School of Psychology
The UCD School of Psychology is currently using virtual reality to test social interactions. I spoke to the researchers regarding their testing metrics and how they were gathering data on their tests through VR. One of the tests they were showing at the expo was a cafe in VR which had another person sitting at a table. They intended to test how people would respond to another person being there when in a virtual setting. Their actual test is a rigorous series of different experiences in order to develop benchmarks.
The metrics they were testing were:
- Time perception
- Task promptness
- Heart rate & other biological measures
They explained that user feedback would be gathered over a series of A/B test trials through questionnaires, observation, biological monitoring and user self-documentation. Although the testing wasn’t for UX, it offered some insight into methodological formation and potentially relevant metrics when testing a 3D space.
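As a rough sketch of how trial data like this could be recorded and compared across A/B conditions, here is a small illustrative example. This is not UCD’s actual protocol or tooling; the metric names (time perception error, task promptness, heart rate) are taken from the list above, and all field names are assumptions.

```python
import statistics

def record_trial(log, condition, time_error_s, task_delay_s, heart_rate_bpm):
    """Append one participant trial to the log for the given A/B condition."""
    log.setdefault(condition, []).append({
        "time_error_s": time_error_s,      # perceived minus actual duration
        "task_delay_s": task_delay_s,      # promptness: delay before acting
        "heart_rate_bpm": heart_rate_bpm,  # biological measure
    })

def condition_means(log, metric):
    """Mean of one metric per condition, for a crude A/B comparison."""
    return {cond: statistics.mean(t[metric] for t in trials)
            for cond, trials in log.items()}

log = {}
record_trial(log, "A", time_error_s=4.0, task_delay_s=1.2, heart_rate_bpm=72)
record_trial(log, "A", time_error_s=6.0, task_delay_s=1.0, heart_rate_bpm=74)
record_trial(log, "B", time_error_s=9.0, task_delay_s=2.1, heart_rate_bpm=81)

condition_means(log, "time_error_s")  # → {"A": 5.0, "B": 9.0}
```

A real study would of course pair this with the questionnaires and observation they mentioned, and with proper statistical testing rather than bare means, but the shape of the data (per-trial records keyed by condition) carries over to testing a 3D space for UX.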
Thoughts on the State of AR
Based on many conversations and demos with AR companies and startups throughout the expo, I felt that the current state of AR is highly technology- and development-driven, with little regard for the user. Most companies had no focus on usefulness to the user, UX considerations, or much beyond the novelty of AR. The Gartner Hype Cycle can provide some insight into the general approach to a lot of AR: big sales pitch – okay demo, never to be used again.
I’ll compare AR as it is now to the World Wide Web in its infancy: generally useless applications that drive future developments. Are they creating what people want? Probably not. Do they seem to want to find out what people want? In most cases, no. It’s interesting how experience and usability in web and mobile have become defining characteristics of good design, while they are largely ignored in AR. It seems to me that the approach to AR design as a whole needs to be re-addressed to allow for some consideration of the user.
The graphic above reflects my experience talking to the majority of AR companies at the expo. Other than Daqri, the “Current AR Workflow” graph was followed: the design is only brought to the user on completion, and users are never brought into the design process. The “Human Centered AR Workflow” is a very quick and crude representation of what AR development might look like when involving people in the process. A longer concept stage allows for early prototyping and user research, and the involvement of users from start to finish should guide the designs in a direction of usefulness and enjoyment. I intend to get in touch with Daqri to further understand how they implement a human centered approach in their designs.
What Does All of This Mean for the Project?
There is a definite need to involve the user in the design of AR applications, while offering designers a means of prototyping and testing their AR concepts in an accessible way. I intend to approach this project from an exploratory point of view, testing different methods and techniques of early stage prototyping for AR.
The expected output from this project is not a finished AR interface, but a test of the validity, effectiveness and methods of early stage prototyping for AR. I intend to explore and evaluate a variety of techniques to guide AR design in the direction of experience & usability for the user from the outset.
The techniques & methods used will be evaluated, and the user experience findings will be documented and expanded upon through actual AR interfaces & video prototyping. I’ve decided to choose a use case relevant to the general public (rather than the professional user segments mentioned above) to allow for greater understanding of what’s required and expected from a usability and experience standpoint. Web usage is an area everyone is familiar with, and it has a variety of challenges (text input, scrolling, viewing text, images and videos, multiple tabs, bookmarks, menus, selection, buttons, etc.).
Use Case: AR Interface Design for Web Use
- Exploration and evaluation of early stage prototyping techniques for AR.
- Establish principles and approaches to prototyping for AR.
- The UX findings portrayed in a semi-functional AR app, video representation or UX development plans.
For this project and early stage prototyping to be successful in AR, the approaches attempted must yield clear results and considerations for UX design before and in tandem with AR development.