The Project Begins

Week 5: May 28th - June 3rd

Web Usage in AR

Assumptions
This project seeks to explore what the future of web usage could be in AR for the everyday user. Testing, research and prototyping methods need to be credible, with well-considered metrics and an understanding of the technology. This will not be a 'speculative design' project, but it will be approached with the following assumptions in mind.

  • AR head-mounted display hardware will have progressed to a much higher standard: headsets will be lighter and more compact, and issues with field of view, colour and resolution will have been solved.
  • These head-mounted displays will have become ubiquitous and will be worn by the average person.

Use Cases
Two use cases of web usage will be explored throughout this project, to keep the work within the project's timeframe while still providing a range of considerations for the final designs. Scenarios will be created around these two use cases to aid in design.

  • Light web usage (outdoor, mobile, short term, specific purpose).
  • Heavy web usage (indoor, laptop, long term, meandering purpose).

Factors
So far, I have come across three important factors that I feel are necessary for user experience design in AR. The fidelity and consideration given to each of these factors can impact how the others are perceived: as shown in McMahan et al. (2012), combining a high-fidelity display or interaction technique with a low-fidelity counterpart caused an overall reduction in both performance and subjective metrics.

  • Interaction (control, manipulation, UI, peripheral devices).
  • Display (content, representation, density, positioning, physical world / mapping).
  • Context (purpose of use, visual obstruction, speed, location, environment).

Metrics
Across other studies involving 3D user interfaces, AR and VR, I've seen subjective and performance metrics consistently measured in tandem, much like typical 2D UX testing. Below are a few examples of generic testing metrics across these categories, though I expect test-specific metrics to be added throughout the project; a rough sketch of how these might be recorded together follows the list.

  • Performance (efficiency, speed, errors).
  • Subjective (enjoyment, presence, understanding, familiarity, comprehension).
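
As a rough illustration (my own sketch, not a fixed study protocol), the record for a single test trial might combine both categories like this; the field names and Likert scale are assumptions for the sake of the example:

```typescript
// Sketch of recording performance and subjective metrics together per trial.
// All names and scales here are illustrative assumptions.

interface PerformanceMetrics {
  taskTimeSeconds: number; // efficiency / speed
  errorCount: number;
}

interface SubjectiveMetrics {
  // e.g. 7-point Likert ratings collected after the trial
  enjoyment: number;
  presence: number;
  familiarity: number;
  comprehension: number;
}

interface TrialRecord {
  participantId: string;
  useCase: "light" | "heavy";
  performance: PerformanceMetrics;
  subjective: SubjectiveMetrics;
  notes?: string; // room for test-specific observations
}

// Example record from a hypothetical light-usage wayfinding task.
const example: TrialRecord = {
  participantId: "P01",
  useCase: "light",
  performance: { taskTimeSeconds: 42, errorCount: 1 },
  subjective: { enjoyment: 6, presence: 5, familiarity: 4, comprehension: 6 },
};
```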

What is the future of web usage in augmented reality (AR)? I want to explore how AR could be interacted with, displayed and experienced across different contexts of web usage for everyday use. This project is about uncovering what web usage could be beyond the constraints of conventional browsers and screens.

-------------------------------------------------------------------------------------------------------

 

Thoughts on Web Usage Contexts

Heavy Usage

I'm defining heavy web usage as a long-term, stationary (seated) approach to content consumption and exploration. Similar to using web browsers on a PC or laptop, heavy web usage involves the use of multiple tabs; video & audio in the background or foreground; content creation & organisation (cloud storage, blog posts); and high information density with a focus on the content being provided (learning, research, compiling, entertainment).

Heavy web usage occurring over a long period of time can have a meandering purpose where digital distraction happens frequently, such as scrolling through social media or forums instead of doing what you actually intended. There may also be no initial intention other than to browse.

Light Usage

I'm defining light web usage as a more short-term, contextual (different environments, on the move) approach to content consumption and exploration. This is similar to how mobile devices are used for the web: a limited number of tabs and a specific purpose, whether finding information or having a short browse while bored. Mobile web usage is also more context-specific: wayfinding (maps, locations, directions); messaging; instant information (restaurant reviews, bookings, times).

Social situations make up a unique part of light web usage. Sharing information in social contexts with the people around you during discussions would be important for some, while others may find comfort in purposeless web usage in socially awkward situations.

Research Potential

Habits: Habits surrounding the first tasks people do after opening a browser on a laptop versus a mobile device would be interesting to research. For example, do they always open their email account, social media and frequented forums on their laptop browser before doing anything else? How people use their homepage and bookmarks could provide insight into what an AR experience could offer for customisation and quick access.

Boredom: Purposeless web usage when bored could be another route to explore: is it the same across both forms of web usage? How does checking various social media or forums on mobile while bored translate to its desktop counterpart? It would be interesting to know how this purposeless web usage could be explored across the contexts in AR, and how the technology should address this behaviour.

Distraction: Just as headphones can be used to augment your hearing, AR can augment both vision and hearing. The use of headphones for focusing on work in busy environments such as workplaces or cafes could be applied to AR. Increasing immersion through visuals and audio has the potential to limit distraction further than headphones alone.

Contextually Relevant: AR has the potential to be context specific and allow for both forms of web usage. It could be fully immersive during a period of intensive web usage and research, then ease away as you get up to walk to the bus stop, still providing relevant, wanted and unobtrusive information about the environment or personal information such as reminders or timekeeping. Once on the bus, another level of immersion could be reached to watch a video or read the latest news. When out for a few drinks with friends, no visual information might be wanted at all; but when trying to figure out who the actor was in the movie being discussed, sharing the answer with those around the table through the AR cloud could be the future of handing your phone across the table. Adapting to the needs of the user and the environment will be of critical importance for the development of contextually relevant AR.
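
To make that idea a little more concrete, here is a minimal sketch of how a headset might map coarse contexts to display behaviour. The context names, immersion levels and allowed content are purely illustrative assumptions on my part, not a proposed design:

```typescript
// Sketch: a coarse policy mapping user context to how immersive the AR
// display should be and what content it may show. Entirely illustrative.

type Context = "focused-work" | "walking" | "commuting" | "social";

interface DisplayMode {
  immersion: "full" | "partial" | "minimal" | "off";
  allowedContent: string[];
}

const contextPolicy: Record<Context, DisplayMode> = {
  "focused-work": { immersion: "full",    allowedContent: ["tabs", "media", "notes"] },
  "walking":      { immersion: "minimal", allowedContent: ["wayfinding", "reminders", "time"] },
  "commuting":    { immersion: "partial", allowedContent: ["video", "news"] },
  "social":       { immersion: "off",     allowedContent: ["shared-lookup-on-request"] },
};

// The headset would re-evaluate this whenever the detected context changes.
function displayFor(context: Context): DisplayMode {
  return contextPolicy[context];
}
```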


Gatwick Airport AR Wayfinding
Credit: installation-international.com

AR Cloud: The lines are blurring between the types of content browsed on desktop and mobile; the difference seems to lie in how it is searched for and in the patterns of use. Light web usage in public spaces could be enhanced through the AR cloud and persistent AR. A recent example is how 'Pokemon Go' tied gyms (its multiplayer aspects) to landmarks that could only be interacted with through physical proximity. At the height of Pokemon Go it was common to see bars, cafes and restaurants taking advantage of their proximity to virtual Pokemon Go content, enhancing that content in a bid to boost sales. A rough sketch of this kind of proximity gating follows the image below.


Pokemon Go Gyms
Credit: imore.com
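
As a sketch of how proximity-gated persistent content like the Pokemon Go gyms could work, the snippet below checks the user's position against a geo-anchored piece of content. The interface names, the 40 m radius and the landmark coordinates are illustrative assumptions, not any real AR cloud API:

```typescript
// Sketch of proximity-gating for persistent AR content anchored to a landmark.
// Names, radius and coordinates are assumptions for illustration only.

interface GeoPoint { lat: number; lon: number; }

interface AnchoredContent {
  id: string;
  anchor: GeoPoint;
  interactionRadiusM: number; // only interactable within this distance
}

// Haversine great-circle distance between two points, in metres.
function distanceM(a: GeoPoint, b: GeoPoint): number {
  const R = 6_371_000;
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function canInteract(user: GeoPoint, content: AnchoredContent): boolean {
  return distanceM(user, content.anchor) <= content.interactionRadiusM;
}

// Example: a gym-like anchor at a landmark, interactable within 40 m.
const landmarkGym: AnchoredContent = {
  id: "gym-001",
  anchor: { lat: 51.5055, lon: -0.0754 }, // Tower Bridge, for illustration
  interactionRadiusM: 40,
};
```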

Ethics: I believe there is a major question of ethics around AR use and content. In essence, AR gives developers, designers & companies the power to replace or augment people's vision and hearing with whatever they deem relevant or necessary. This opens the door to a range of exploitation techniques, such as:

  • Directed advertising - being shown directions to partnered locations over the competition.
  • Hiding information / misinformation - such as negative reviews.
  • Overbearing visual information - aggressive advertising directly in the headset.
  • Lack of control over features - required pre-loaded 3rd party software.
  • Personal data - gathering and exploiting marketing information.


Altered Carbon
Credit: Netflix


Hyper Reality
Credit: Keiichi Matsuda

 

-------------------------------------------------------------------------------------------------------

 

Affordances in AR Interaction

Interaction in augmented reality can be ambiguous: the software and hardware in use shape how an AR application is interacted with, and this can be extremely inconsistent between applications. Controllers or body movements such as hand gestures can be used, but even these controllers and gestures vary greatly. Affordances in AR design are therefore critical in conveying how content can be manipulated and what effect you can have on it. AR applications can provide visual, audio and even haptic (through external devices) affordances and signifiers for use.

Using the Leap Motion and Unity, Brendan Polley has created some interesting methods of visual feedback for his application. The first video shows the waiting time before a pinch is registered, conveyed through colour and motion. Proximity and touch are represented through location-based shadows, and grabbing through a colour change. This sample clearly defines the different modes of interaction available in the app and when they are active. A first-time user may not understand how to create the initial sphere, but reaching out and grabbing it would be a natural and obvious interaction. A rough sketch of how this dwell feedback could be driven follows the clip below.


Credit: https://twitter.com/brenpolley
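
To show roughly how the dwell-before-registration feedback described above might be driven, here is a minimal sketch in TypeScript. The pinch-strength input, thresholds and colours are my own assumptions, not Brendan Polley's implementation:

```typescript
// Sketch of dwell-to-confirm pinch feedback: colour progresses towards a
// "registered" state while the pinch is held. The pinch strength would come
// from a hand-tracking source; here it is just a hypothetical parameter.

interface RGB { r: number; g: number; b: number; }

const IDLE: RGB = { r: 180, g: 180, b: 180 };      // grey while waiting
const CONFIRMED: RGB = { r: 80, g: 200, b: 120 };  // green once registered

function lerpColour(a: RGB, b: RGB, t: number): RGB {
  return {
    r: a.r + (b.r - a.r) * t,
    g: a.g + (b.g - a.g) * t,
    b: a.b + (b.b - a.b) * t,
  };
}

class PinchDwellFeedback {
  private heldMs = 0;
  private confirmed = false;

  constructor(
    private readonly dwellMs = 400,            // how long a pinch must be held
    private readonly onConfirm: () => void = () => {},
  ) {}

  /** Call once per frame with the current pinch strength (0..1) and frame time. */
  update(pinchStrength: number, deltaMs: number): RGB {
    if (pinchStrength > 0.8) {
      this.heldMs = Math.min(this.heldMs + deltaMs, this.dwellMs);
      if (this.heldMs >= this.dwellMs && !this.confirmed) {
        this.confirmed = true;
        this.onConfirm();                      // pinch registered once per hold
      }
    } else {
      this.heldMs = 0;                         // released early: reset the dwell
      this.confirmed = false;
    }
    // Progress towards registration drives the colour shown to the user.
    return lerpColour(IDLE, CONFIRMED, this.heldMs / this.dwellMs);
  }
}
```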

The second video shows another variation of the pinch feedback, representing it through colour and digital forceps. The forceps afford the pinching movement through familiarity and a physical metaphor; they indicate what the user needs to do. Physical metaphors like this could be ideal for introducing gestures to a first-time user; think of them as the hint / tip box you see the first time you open a new mobile app. Metaphors like this would be interesting to test in terms of usability and user understanding.


Credit: https://twitter.com/brenpolley

Affordances for audio and voice commands could also be visually represented to the user. Visuals which prompt a spoken command in order to complete a task could be worth exploring.
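As a quick sketch of that idea, the snippet below pairs an on-screen prompt with the browser's Web Speech API (where available); the prompt text, command phrase and matching logic are illustrative assumptions:

```typescript
// Sketch: a visual affordance that tells the user what to say, paired with
// speech recognition to complete the task. Prompt wording and matching are
// assumptions; only the Web Speech API calls are real browser APIs.

function promptForVoiceCommand(
  promptEl: HTMLElement,
  phrase: string,
  onMatch: () => void,
): void {
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  if (!Recognition) {
    promptEl.textContent = "Voice input not supported here.";
    return;
  }

  // The visible prompt is the affordance: it shows which command is expected.
  promptEl.textContent = `Say "${phrase}" to continue`;

  const recognition = new Recognition();
  recognition.lang = "en-GB";
  recognition.onresult = (event: any) => {
    const heard = event.results[0][0].transcript.trim().toLowerCase();
    if (heard.includes(phrase.toLowerCase())) {
      promptEl.textContent = "Command recognised";
      onMatch();
    } else {
      promptEl.textContent = `Heard "${heard}" - try saying "${phrase}"`;
    }
  };
  recognition.start();
}
```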

 
