Redefining The Project

Weeks 3 & 4: May 14th - 27th


 

New Project Direction

The last two weeks have given me the chance to consider how I want to take this project forward. I feel that a redefinition of the project brief and aims is necessary before continuing further, in order to prepare as effectively as possible for the months ahead.

The initial project title was:

Approaches to Early Stage Prototyping of Augmented Reality Interfaces for Designing Useful & Enjoyable User Experiences

 

The aim of this project was to explore the concept of early stage prototyping for augmented reality, and to evaluate what does and doesn't work for effective UX consideration prior to AR development for head-mounted displays. The project would focus heavily on the prototyping approaches, user testing and evaluation of how successful the approaches were. I wanted to bridge the gap between designers and designing for AR, giving them options to test their designs without needing extensive coding experience. The thought process behind the project was to establish what the AR equivalent of paper prototyping or wireframing could be for design evaluation.

It felt like I was placing a lot of limitations on myself with this project. Evaluating the prototyping techniques would take a lot of time and work, leaving little time or scope for actual design (beyond test designs). Having seen that the actual application of UX in AR was rarer than I had expected, I wanted to open the project up to a human-centered design approach where I could both explore prototyping (with less pressure to evaluate and define success) and explore the application of UX to AR in real prototypes.

The new project title is:

Designing Contextual Web Usage in AR for Useful & Enjoyable User Experiences

 

This project's aim is more focused on applying user experience and usability considerations to AR within the recognisable and accessible context of web usage. Using a human-centered design approach, I want to test a range of prototypes of differing fidelity with people. This will be a more typical design project than the initial one, but it will allow me to explore the design outputs further and more thoroughly. I will also be evaluating my AR workflow in an attempt to provide insight into what did and didn't work for me along the way.

The tools available to me at the moment are an HTC Vive, Leap Motion, Unity, Processing, Rhino 3D, Adobe CC and potentially an Oculus Go before the end of the project. I will still be focusing on head-mounted displays or environmental AR (I won't be exploring mobile AR), although not designing for any device in particular. The idea of integrating hand tracking into AR is very interesting to me and will be explored extensively in testing. This new project approach opens up new avenues of exploration:

  • Coding functional interfaces for AR such as gesture tracking and hand location (a rough sketch of this follows the list below).
  • Exploring what web design considerations could be made for AR. (Augmented Web?)
  • Ergonomic, usability and efficiency testing.
  • Supplementary AR, not removing the PC but enhancing the experience.
  • AR interaction design & visual layout design.
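
As a first pass at that coding avenue, here is a minimal sketch of reading hand data, assuming the Leap Motion C# SDK's polling API (Controller/Frame); the pinch threshold, loop structure and console output are placeholder choices of mine rather than anything prescribed by the SDK.

```csharp
using System;
using Leap; // Leap Motion C# SDK

class HandTrackingSketch
{
    static void Main()
    {
        // The Controller manages the connection to the Leap Motion service.
        var controller = new Controller();

        while (true)
        {
            // Poll the most recent tracking frame.
            Frame frame = controller.Frame();

            foreach (Hand hand in frame.Hands)
            {
                // Palm position in millimetres, relative to the sensor origin.
                Vector palm = hand.PalmPosition;

                // PinchStrength runs from 0 (open hand) to 1 (full pinch) --
                // a candidate primitive for "select" in an AR web interface.
                if (hand.PinchStrength > 0.8f)
                {
                    string side = hand.IsLeft ? "Left" : "Right";
                    Console.WriteLine($"{side} pinch at ({palm.x:F0}, {palm.y:F0}, {palm.z:F0}) mm");
                }
            }

            System.Threading.Thread.Sleep(16); // ~60 Hz polling
        }
    }
}
```

Even something this basic should be enough to start testing where hands sit comfortably relative to floating web content.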

 

-------------------------------------------------------------------------------------------------------

 

Project Assumptions & Questions

This project will be explored from the point of view that AR has become more ubiquitous, and head-mounted displays have become less bulky and more prevalent. Working under the assumption that AR has reached this future-facing stage in its development, it's necessary to ask how it should be interacted with. If AR glasses have the potential to be used at any time by anyone, how would web browsing be controlled and viewed?

Location/Usage Context will need to be explored: what difference is there in required (or expected) control when out and about compared to sitting at home or at work? How does AR function in its environmental context? This opens up a variety of ever-changing ergonomic, efficiency and information density considerations.

Equipment Context ties into the location. Can you control your laptop using AR and enhance the experience of use through a form of hybrid physical/digital interaction? Does the AR system recognise when additional tools are being used and change its functionality? How can AR controls blend with the physical inputs of another device without the two interrupting each other: can I type on a keyboard and then switch tabs with my hand? Does AR function as a complement to a phone, or is the phone replaced? AR interacts with the physical world, so the future of AR could take advantage of these other tools and forms of interaction.

Web Browsers would need to support the AR functionality. How would information be displayed & controlled? Would there be web standards for AR visual presentation? How would a browser facilitate functionality such as 360° video or viewing an item on a shopping website?

Measurement of User Experience for 3D user interfaces will need to be evaluated. What metrics need to be used in addition to, or instead of, the standard 2D UI testing metrics for establishing positive or negative UX (such as physical comfort or presence)? What heuristics need to be considered when evaluating 3D UIs, and are they different? Human factors in information processing such as perception, cognition and physical ergonomics will be important when designing 3D UIs (LaViola et al., 2017).
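
As a point of reference for those 2D baselines, the System Usability Scale (SUS) is one widely used standard questionnaire metric. The sketch below implements its standard scoring; the response data is a made-up example, and this project would likely need to supplement a score like this with 3D-specific measures such as comfort or presence.

```csharp
using System;

class SusScore
{
    // Standard SUS scoring: ten items rated 1-5. Odd-numbered items
    // contribute (score - 1), even-numbered items contribute (5 - score);
    // the sum is scaled by 2.5 to give a 0-100 usability score.
    static double Score(int[] responses)
    {
        if (responses.Length != 10)
            throw new ArgumentException("SUS has exactly 10 items.");

        int sum = 0;
        for (int i = 0; i < 10; i++)
        {
            bool oddItem = (i % 2 == 0); // items 1, 3, 5, 7, 9
            sum += oddItem ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5;
    }

    static void Main()
    {
        // Hypothetical responses from one AR prototype test session.
        int[] responses = { 4, 2, 5, 1, 4, 2, 4, 2, 5, 1 };
        Console.WriteLine($"SUS score: {Score(responses)}"); // 85
    }
}
```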

Leap Motion Single Hand Gestures

While my experience at the ARVR Innovate conference highlighted the lack of UX, usability and functionality considerations in AR, Leap Motion recently wrote a blog post discussing single-hand gestures for controlling VR. They focused on ease of use and comfort over extended periods of use, and explored what interactions in VR could be. They wanted to explore the strengths of abstract gestures and direct manipulation:

  • Abstract gestures are often ambiguous. How do we define an abstract gesture like ‘swipe up’ in three-dimensional space? When and where does a swipe begin or end? How quickly must it be completed? How many fingers must be involved?
  • Less abstract interactions reduce the learning curve for users. Everyone can tap into a lifetime of experience with directly manipulating physical objects in the real world. Trying to teach a user specific movements so they can perform commands reliably is a significant challenge.
  • Shortcuts need to be quickly and easily accessible but hard to trigger accidentally. These design goals seem at odds! Ease of accessibility means expanding the range of valid poses/movements, but this makes us more likely to trigger the shortcut unintentionally (see the sketch after this list for one way that trade-off could be handled).
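
Picking up that last point, one common way to ease the tension between accessibility and accidental triggers is hysteresis (separate arm/release thresholds) combined with a short dwell time. The sketch below is a minimal illustration of that idea in plain C#; the specific thresholds and timing are illustrative assumptions of mine, not values from Leap Motion's post.

```csharp
// Sketch: a pinch "shortcut" trigger with hysteresis and a dwell time,
// so the gesture is easy to perform deliberately but hard to fire by accident.
public class PinchShortcut
{
    const float EnterThreshold = 0.85f; // pinch must exceed this to arm
    const float ExitThreshold  = 0.60f; // and drop below this to reset
    const float DwellSeconds   = 0.15f; // must be held briefly before firing

    bool armed;
    float heldTime;

    // Call once per tracking frame with the current pinch strength (0..1)
    // and the frame's delta time. Returns true on the frame the shortcut fires.
    public bool Update(float pinchStrength, float deltaTime)
    {
        if (!armed)
        {
            if (pinchStrength > EnterThreshold)
            {
                heldTime += deltaTime;
                if (heldTime >= DwellSeconds)
                {
                    armed = true;   // fire once, then wait for release
                    return true;
                }
            }
            else
            {
                heldTime = 0f;      // pinch released early: reset the dwell timer
            }
        }
        else if (pinchStrength < ExitThreshold)
        {
            armed = false;          // hand relaxed: re-arm for the next trigger
            heldTime = 0f;
        }
        return false;
    }
}
```

In testing, the enter/exit gap and the dwell time could themselves become variables to evaluate with users.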

[Embedded GIF via GIPHY]

Leap Motion seem to be proponents of incorporating user experience into AR and VR. They tested hand motion and comprehension with users to ensure that their gestures made sense and were applicable. I found the article extremely interesting, and it helped define a slight shift in the direction of my project. Their exploration of different gestures, and of mapping digital information onto them, is definitely something that I want to involve in this project.

[Embedded GIF via GIPHY]

I can see outputs similar to these gifs being present in my exploration of UX in AR interactions. User testing will be a major part of this project to ensure that the findings are sound and accessible to others. Leap Motion have been producing some really engaging blog posts as of late, and I want to try to get in contact with them regarding their thoughts on AR, UX and their design philosophy.

 

-------------------------------------------------------------------------------------------------------

 

Display & Interaction Fidelity

McMahan, R., Bowman, D., Zielinski, D. and Brady, R. (2012). Evaluating Display Fidelity and Interaction Fidelity in a Virtual Reality Game. IEEE Transactions on Visualization and Computer Graphics, 18(4), pp. 626-633.

In this study of fidelity in a VR first-person shooter game, the fidelity of interactions and displays was evaluated against performance metrics and subjective user metrics. A CAVE (Cave Automatic Virtual Environment) is a six-sided immersive space in which projectors are directed at the walls to create a virtual environment without an HMD. The CAVE was used as the testing space to evaluate four different combinations of fidelity:

  • High Display, High Interaction (HDHI): Surrounded by six projected screens in the CAVE, the user controlled movement with physical body movement while aiming with a six-DOF (degrees of freedom) controller in their hand.
  • High Display, Low Interaction (HDLI): Like HDHI, every screen was in use, but the controller was replaced with a keyboard and mouse on a rotating plinth.
  • Low Display, High Interaction (LDHI): Only a single screen was in use, but the user could control the game through physical movement and aiming with the controller, as in HDHI.
  • Low Display, Low Interaction (LDLI): Like a standard PC game, a single screen was used along with a keyboard and mouse for control.

Image Credit - McMahan et al.

The study examined performance metrics such as remaining health, accuracy and completion speed. Usability, presence (immersion) and engagement were assessed through questionnaires following the experiments. The HDHI and LDLI scenarios achieved the highest performance scores, which the authors attributed to familiarity: HDHI (second highest) resembles real-life motion and aiming a real weapon, while LDLI (highest) matches the current approach to first-person shooters and PC gaming. The mixed scenarios (HDLI & LDHI) were not analogous to real-life situations and performed significantly worse on the performance metrics. Positive subjective user experience responses were found for the HDHI scenario in engagement, presence and usability.

“...the combination of display fidelity and interaction fidelity can determine the familiarity of the overall system, and it is this familiarity that seems to determine overall performance in many cases.”

Choosing the correct display and interaction fidelity will be use-case dependent. A balance of positive user experience and performance will need to be found when designing for different applications, locations and desired outputs.

The idea that familiarity influences performance is something to consider for this project. Similar to the abstract gestures mentioned in the Leap Motion blog post: how recognisable does a system need to be to ensure usability?

-------------------------------------------------------------------------------------------------------

 

Information Processing

LaViola, J., Kruijff, E., McMahan, R., Bowman, D. and Poupyrev, I. (2017). 3D User Interfaces. 2nd ed. Boston: Addison-Wesley, pp. 34-72.

The book “3D User Interfaces” addresses the human factors which must be considered when designing for AR, VR and 3D interfaces. The authors map information processing to three main factors: perception, cognition and physical ergonomics.

“...it is important to have a basic knowledge of how users process information into useful (inter)actions. This process is generally referred to as information processing”

Visual Perception

A scene in AR or VR must be interpreted correctly through visual cues to ensure effective use. Selection, manipulation of content and navigation all require an understanding of what is being shown to the user. Depth is a powerful tool for defining the scene and representing visual information. There is a range of techniques for depicting depth, but they can be broken into two categories: relative depth and absolute depth.

Relative depth cues convey the depths of objects relative to other objects in the scene. Occlusion is when a closer object obstructs the view of an object which is further away, making it clear that the occluded object is more distant. Perspective is another technique for establishing relative depth: parallel lines converge into the distance, highlighting relative sizes and positions to create a sense of depth. Texture, brightness and shadow can be used alongside these techniques for added realism and depth perception.

Absolute depth cues do not rely on other objects, but are relative to the user. Motion parallax means that, as the user moves, far-away objects appear to move more slowly across the view than closer objects, conveying depth and distance very clearly and accurately.
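
To make the parallax relationship concrete: under a small-angle approximation, an observer moving laterally at speed v sees an object at distance d sweep across their view at roughly v/d radians per second. The sketch below simply illustrates that inverse relationship; the speeds and distances are arbitrary example values.

```csharp
using System;

class MotionParallax
{
    // Small-angle approximation: for an observer translating laterally at
    // `lateralSpeed` (m/s), an object at `distance` (m) directly ahead
    // appears to sweep across the view at ~lateralSpeed / distance rad/s.
    static double ApparentAngularSpeed(double lateralSpeed, double distance)
        => lateralSpeed / distance;

    static void Main()
    {
        double walkSpeed = 1.4; // typical walking pace, m/s

        // A nearby object appears to move ten times faster than one
        // ten times further away -- the cue described above.
        Console.WriteLine(ApparentAngularSpeed(walkSpeed, 2.0));  // 0.7 rad/s
        Console.WriteLine(ApparentAngularSpeed(walkSpeed, 20.0)); // 0.07 rad/s
    }
}
```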

When designing for AR throughout this project, I will need to explore when each type of depth cue is applicable and relevant. User comprehension is likely to be a factor when testing these methods; participants will probably need to be introduced to established AR & VR environments prior to testing.

Cognition - Situation Awareness

How we interact with surrounding objects in a 3D user interface or a virtual environment is affected by our situation awareness. Decision making, the processing of information and the choice of action are all shaped by the information base the situation provides. The design of spatially and contextually aware UIs and virtual environments affects the thought processes and comprehension of the user.

Situation awareness will be important to assess when testing AR applications with users. It must be clear that participants understand the context of the test and what it represents (especially for rapid prototyping). False positives are a likely side effect of poor situation awareness and user comprehension, leading to biased testing or inconclusive user feedback.

Physical Ergonomics

When designing 3D user interfaces which require additional body usage or extended range of motion compared to a 2D counterpart, physical ergonomics must be considered. Clarity, ease of use and comfort over prolonged usage are all factors in 3D UI design. Physical ergonomics should be designed around the task and usage context.

“The control task can be characterized by its accuracy, speed and frequency, degrees of freedom, direction and duration, and it is highly affected by the anatomical capabilities of the human body. Thus, task characteristics directly affect the choice of how to map control to the human body.”

Posture and required body positioning can severely impact 3D UI usability. For an interface where a user must raise their arms to interact with menus, fatigue is a common issue. Usage time can be extended considerably by lowering the required arm height or by bringing the hands closer to the body. Fatigue increases the time taken to perform actions and makes errors in use more likely. The graphic below visualises the expected time to reach shoulder muscle fatigue.


Image Credit - LaViola et al.
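
As a sketch of how that guidance might translate into a layout rule, the snippet below clamps a floating panel's position into a hypothetical "comfort zone" below shoulder height and within easy reach. It assumes Unity's API (Vector3, Mathf.Clamp), and the zone bounds are illustrative numbers of my own, not figures from LaViola et al.

```csharp
using UnityEngine;

// Sketch: keep a floating AR panel inside a "comfort zone" relative to the
// user's head -- below shoulder height and within easy reach -- so prolonged
// interaction doesn't require raised arms.
public static class ComfortZone
{
    // Illustrative bounds in head-local metres (not empirically derived).
    const float MinHeight = -0.55f; // roughly waist level
    const float MaxHeight = -0.15f; // just below shoulder level
    const float MinDepth  = 0.30f;  // not uncomfortably close
    const float MaxDepth  = 0.55f;  // within comfortable reach

    // Clamp a desired panel position (in the head's local space) into the zone.
    public static Vector3 Clamp(Vector3 localPosition)
    {
        localPosition.y = Mathf.Clamp(localPosition.y, MinHeight, MaxHeight);
        localPosition.z = Mathf.Clamp(localPosition.z, MinDepth, MaxDepth);
        return localPosition;
    }
}
```

A rule like this could be one of the first things to validate in ergonomic testing: where do people actually want web content to sit during prolonged use?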
 

The Project Begins >