Week 6 : June 4th – 10th

Scenarios, Storyboards & Sketches

 

Early Ideation & Sketching

I came across this blog post by Volodymyr Kurbatov, which explains how he sketches for virtual reality. By sketching on a 360 template, scanning the sketch and opening it in a 360 photo viewer, the sketch can be viewed in 3D by a user wearing a VR headset. I downloaded the template to try it for myself, as I believe sketching could be a very quick and iterative process for early user testing in AR, but the sketches need to feel closer to the real experience than a flat 2D drawing does.

The template lets you sketch in every area the user could look, so rough positioning, scale and density can all be tested. To get a better understanding of how the template should be sketched on and used, I overlaid Mike Alger's content zone diagram.

To test out the technique I sketched a few screens, boxes and icons around the template to see how well it worked. When it was ready I took a photo of the sketch and uploaded it to the Google Street View app, which can create 360 images very quickly. I viewed the result using my phone and a Google Cardboard VR headset and was quite impressed by the results for such a quick piece of work.

The problem is that this technique is intended for VR sketching, so the situational context of AR is lost. Next, I tried using a 360 photo of a location and overlaying my sketch on top of it. While this creates a more contextual experience for the sketch, it is still effectively VR.
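
To make the overlay step repeatable, the compositing can also be scripted rather than done by hand in an image editor. Below is a rough sketch using Python and the Pillow library (the file names are placeholders); it assumes the scanned sketch has been exported as a transparent PNG aligned to the same equirectangular layout as the 360 photo.

    from PIL import Image

    # Equirectangular background photo, typically 2:1 aspect ratio (e.g. 4096 x 2048).
    background = Image.open("360_location_photo.jpg").convert("RGBA")

    # Scanned sketch on the 360 template, exported as a PNG with transparency.
    sketch = Image.open("360_sketch_overlay.png").convert("RGBA")
    sketch = sketch.resize(background.size)

    # Alpha-composite the sketch over the photo and flatten to JPEG; the
    # equirectangular projection is preserved, so any 360 viewer can load it.
    combined = Image.alpha_composite(background, sketch)
    combined.convert("RGB").save("360_sketch_in_context.jpg", quality=90)

The combined image can then be uploaded to the Street View app or any other 360 viewer in the same way as before.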

This sketching technique is simple and displays some context, and I can definitely see a variation of it being useful for this project. Combined with storyboards and scenarios, 360 sketching will be a useful method for early-stage user testing and feedback. There are limitations, though. I'm not sure how AR elements that follow gaze or head orientation could be represented. Depth isn't perceived well, and interaction with the content isn't possible unless image slideshows or Wizard of Oz style prototyping methods can add some form of user interaction to the displays. Google Cardboard also has a relatively narrow field of view (90 degrees) compared with normal vision, which may make it more difficult to test the contextual nature of the sketches.

 

——————————————————————————————————-

 

Web Usage Breakdown

As mentioned in the previous post, the contexts of use I will be exploring are light and heavy web usage. Scenarios will be created across these usage contexts to drive design creativity and to communicate the ideas alongside first-person sketching and concept storyboarding. These scenarios will help highlight user expectations and design requirements.

Heavy Web Usage

What do people need to be able to do while using the web heavily?

  • Search and find what they’re looking for.
  • Explore the web.
  • Consume content and media (text, images, music, video).
  • Organise and manage content (bookmarks, tabs, homepage, cloud storage).
  • Create content (email, Google docs).
  • Communicate (instant messaging, video calls, email, social media).

How should heavy web usage be represented in the world?

  • AR shouldn’t completely remove you from the world; it should complement it.
  • Too much visual information can be overwhelming and confusing, so AR should be used in small, controlled bursts.
  • AR should only show what is needed for what people are trying to achieve, less is more.
  • Control should be multimodal; voice control is more acceptable in private contexts.

Where does heavy web usage happen?

  • Heavy web usage occurs in regular locations such as home or work, allowing for the customisation of these spaces.

Light Web Usage

What do people need to be able to do while lightly using the web?

  • Search and find what they’re looking for quickly.
  • Consume manageable amounts of content and media (text, images, music, video).
  • Receive contextually relevant information at their location.
  • Communicate (instant messaging, video calls, email, social media).

How should light web usage be represented in the world?

  • AR should only show what is needed for what people are trying to achieve, no unnecessary or obstructive information.
  • AR content must not obstruct vision while the user is moving.
  • There are opportunities to add extra purpose and meaning to the physical world.
  • Personal and social (friends, groups) customisation of environments.
  • Voice control and audio feedback may not be as acceptable in some public contexts.

Where does light web usage happen?

  • Public spaces and when moving.
  • When bored or in socially awkward situations.

 

——————————————————————————————————-

 

Self-Documented Research

To aid in the creation of scenarios, a deeper understanding of contextual web usage across different people will be important. I’ve decided to recruit people to document their own web usage over a few days, keeping track of where they are, what they’re doing and what sort of online applications they’re using. The format of this self-documentation will be as follows:

On the bus, sitting : Facebook, writing emails, browsing on Google, Spotify.

The aim is to get a wider understanding of exactly what sort of applications and information people use the web for, and how where they are or what they’re doing changes that usage. I will be documenting my own web usage alongside them over the next few days. At the start of each day I will aggregate this web usage information and graph the differences across web usage types, physical locations and actions. Interviews will be conducted with participants as they self-document.
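
As a rough idea of how that daily aggregation might work in practice, the sketch below parses diary lines in the "context : app, app, ..." format shown above and tallies applications per context with pandas. The file name is a placeholder, and the diary is assumed to be saved as one plain-text line per entry.

    import pandas as pd
    import matplotlib.pyplot as plt

    rows = []
    with open("web_usage_diary.txt") as diary:
        for line in diary:
            if ":" not in line:
                continue
            # e.g. "On the bus, sitting : Facebook, writing emails, Spotify."
            context, apps = line.split(":", 1)
            for app in apps.split(","):
                rows.append({"context": context.strip(),
                             "app": app.strip().rstrip(".")})

    usage = pd.DataFrame(rows)

    # Count how often each application appears in each context and plot the
    # differences as one stacked bar per location/activity.
    counts = usage.groupby(["context", "app"]).size().unstack(fill_value=0)
    counts.plot(kind="bar", stacked=True)
    plt.tight_layout()
    plt.savefig("daily_web_usage.png")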

——————————————————————————————————-

 

Why is All This Relevant?

This project is about embedding contextual AR in everyday life and exploring what it should and shouldn’t do as a technology. I want to focus on what people will get out of contextual AR in the real world, with particular attention to web usage and the requirements of the technology (display, interaction & context).

Ubiquitous AR headsets should celebrate the physical world rather than cover it, providing relevant contextual information only when people want it.

Why?

1. Display up-to-date, contextually relevant information when you want it
I believe that the most compelling part of AR is its ability to be contextually relevant, even allowing users to create their own digital spaces & information in the real world. There is huge potential in displaying this information, but it should only be displayed if the user wants to see it. Cluttering the user’s view or annoying them will only lead to unsuccessful AR implementations.

2. Provide functional interactivity with ergonomic and social ease
Controlling AR in the real world over extended periods of time needs to be quick and simple, and to put minimal strain on the body. In public, it will be necessary to interact with the application while walking and without social stigma.

3. Information retrieval, searching & messaging
The most important and common aspects of web usage will need to remain possible without compromising the flow of AR or losing the ease of use they have on other systems.

4. Provide a superior real-world experience to mobile devices
While AR has the power to be more immersive than mobile devices, I believe it should be less about pure digital content and more about combining that content with the physical world.

5. React to your environment & devices
AR should react to the environments you’re in, such as ensuring safety when crossing roads. I don’t believe that all devices will be replaced by AR headsets, so these displays should be able to work alongside your other devices.

6. Shared social experiences
Social media could be brought from digital timelines, pages and walls to form location-dependent social interactions in the real world. Customising spaces within social groups could add a highly experiential layer to AR, encouraging day-to-day exploration and sharing.

 

Why Not?

1. Display irrelevant or too much information when it’s unwanted
AR could potentially give you information on everything around you at all times. How much of this information would ever be relevant or even wanted? Not much, maybe none. Showing what AR is capable of doing should be approached with a “less is more” mentality. Obstructing the world with useless digital content will only damage the medium.

2. Compose, manage & create large bodies of content or media
The scope of this project will not cover AR as a pure content creation tool. Tasks such as writing large amounts of text, coding, creating or editing digital content will be left to other devices such as PCs for this project. The project will be primarily focusing on implementing contextual AR in the physical world rather than a full PC replacement.

3. Cover your senses and remove you from the real world
As mentioned earlier, I feel that the real world needs to be at the forefront and augmented by the digital world, not the other way around. AR shouldn’t block you from the real world.

4. Social media in its current form
I believe that social media shouldn’t present itself in AR in the form it currently takes on our devices. Those platforms are completely digital and not at all contextually driven. I want to touch on how these social experiences could manifest, but contextual social media is a project in itself.

 

——————————————————————————————————-

 

Scenario Drafting

This scenario is based around the persona of Eric, a 27-year-old civil servant. The scenario doesn’t go into the finer details of how he interacts with the content, but instead describes what he can do with the content and how he uses it contextually.

Eric wakes up on Friday morning and lies in bed for a few minutes. After a while, he puts on his AR glasses and can view the content he has customised for his bedroom on mornings before work. The due times of the next buses he can catch are mapped to the wall at the end of his bed, along with options to view the weather forecast for the day. He has set this content to be displayed there automatically on weekday mornings. He checks the weather forecast and begins to get ready for the day. His headset is linked to his mobile device and he can see that he has new messages; he reads and replies to them on his mobile.

As he leaves the house for work, he notices the reminder he left on the door last night so that he’d remember to buy some bread. He has also set up another persistent bus timetable outside his house so that he can see whether he is still making good time. On the walk to the bus stop he selects some music to listen to for his commute. Eric uses his arm to display any relevant quick-access notifications and information; looking at his arm he checks the time and the current track, and sees that he has another unread message. When he arrives at the bus stop he sees that one of his friends has left some shared content there, but he chooses not to access it because his bus is arriving soon and it’s too early.

When the bus arrives, Eric pulls out his travel card, which displays his remaining credit in real time. After scanning on and getting comfortable, he watches the last 15 minutes of the TV show he started last night. Once the show is finished, he looks up some information about new policies at work and browses through the content. He sees that he has a few more unread messages and responds to them using his mobile phone; the AR content fades away as he uses the other device. He sees an advertisement on the bus and interacts with it to find out more about the product; his stop is next, so he saves this content for later when he has a chance to view it.

After arriving at work, Eric can’t link his AR glasses with his work computer due to workplace regulations. If he could link with the PC, he would be able to view 3D content and extend his screen space beyond the monitor. Eric and some of his friends at work organise to go to a new bar for some food and drinks after work. They create a linked social event which allows everyone involved to check out information about the venue and how to get there. Eric has to stay a bit later than the rest of them, so he tells them to go on ahead and he’ll meet them there.

Eric leaves work about 20 minutes after everyone else. His usual bus timetable outside work has been hidden because of the new social event; instead he can see options relating to the event. Eric isn’t sure exactly where the bar is and decides to use the wayfinding functionality of his AR glasses. The directions map onto the physical world and he follows them through town. Along the way he sees some social content left by the group that went on ahead; he accesses it and finds a funny video directed at him.

At the bar Eric meets the rest of his friends and orders himself some food and a drink. They have a good time, and the conversation turns to a friendly argument about who the actor was in a film that came out a few weeks ago. One of his friends pulls up the information and shares it with everyone in the group to prove that he’s right. Eric has to be up early tomorrow morning and doesn’t want to stay too late and miss the last bus. He sets up the bus times and walking duration on his arm; this information is visible only to him. As it gets close to the time to leave, the display changes to notify him.

Eric says his goodbyes and begins to head home. He knows the way back so he just puts on some music for the walk. He gets on the bus and heads home, remembering that he has forgotten to buy bread again. When he gets home he sets up another reminder on his door, to ensure that he definitely gets the bread when he’s out tomorrow.

 

——————————————————————————————————-

 

Areas for Ideation

Based on the scenario and storyboards above, there are three promising areas that I feel should be explored for this project. Embedding contextual AR in everyday life could be achieved with the following:

1. Personalised Persistent Information
Augmented reality could give people the opportunity to use the environment around them as a personal canvas. Customising the areas you frequent to provide the most relevant and necessary information, only at the times you need it, will be a huge part of making AR useful to the everyday user.

In the example above, Eric created persistent AR information about the bus times to bring a sense of flow and clarity to his mornings. He also used it to set reminders for himself. Other examples of this type of AR could include real-time traffic updates when leaving work, or even decorating your workspace.

Considerations will need to be made for designing these types of AR interactions:

  • How the user sets up persistent information
  • Interacting with the information and modifying or customising it
  • How it will be displayed to the user
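
To make these considerations more concrete, here is a rough data-model sketch of what a single piece of persistent, personalised AR content (like Eric’s bus timetable or door reminder) might need to capture. All of the names and fields are illustrative assumptions, not part of any existing AR platform.

    from dataclasses import dataclass, field
    from datetime import time
    from typing import List, Optional

    @dataclass
    class Placement:
        """Where the content is anchored in the user's environment."""
        location_name: str              # e.g. "bedroom wall", "front door"
        latitude: float
        longitude: float

    @dataclass
    class Schedule:
        """When the content should automatically appear."""
        days: List[str] = field(default_factory=lambda: ["Mon", "Tue", "Wed", "Thu", "Fri"])
        start: time = time(6, 30)
        end: time = time(9, 0)

    @dataclass
    class PersistentInfo:
        """A user-authored, persistent piece of contextual AR content."""
        title: str                      # e.g. "Bus times", "Buy bread"
        source: Optional[str]           # a live data feed URL, or None for a static note
        placement: Placement
        schedule: Optional[Schedule]    # None means always visible at the location
        visible_to_owner_only: bool = True

Even this small model surfaces the design questions in the list above: setup is about authoring the placement and schedule, modification is about editing them later, and display is about how the headset renders the content when the schedule and location conditions are met.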

2. Shared Social Content
There is a huge opportunity for social content in AR, enabling location-dependent social interactions. Like the personalised persistent information discussed above, this could consist of leaving content for select friends or groups, creating context-specific interactive content. The surprise and mystery of this type of content could bring highly interactive, fun and shared experiences to AR.

In the scenario, Eric’s colleagues left him a video on their way to the bar, but there is a wide range of uses for social AR. Viewing and leaving content while on holiday, or even on a normal walk, could enhance the overall experience. There is potential for content like this to be accessible only from its location, though being able to see where content has been left could add to the exploration. Instant messaging is a form of social content left out of the scenario which should also be assessed.

Considerations will need to be made for designing these types of AR interactions:

  • The kind of content that can be shared
  • How the user sets up social content
  • Interacting with the information and modifying or customising it
  • How it will be displayed to the user and others
  • Is there instant messaging?
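
One of the questions above, whether content should only be accessible from its location, can be sketched as a simple access rule: the viewer must be in the intended audience and physically close to the anchor. The sketch below is illustrative only; the names, the distance check and the 30-metre radius are all assumptions.

    from dataclasses import dataclass
    from typing import Set
    import math

    @dataclass
    class SharedContent:
        author: str
        audience: Set[str]              # friends or group members who may open it
        latitude: float
        longitude: float
        unlock_radius_m: float = 30.0   # the viewer must be roughly at the spot

    def distance_m(lat1, lon1, lat2, lon2):
        """Approximate great-circle distance in metres (haversine formula)."""
        r = 6_371_000
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def can_open(content: SharedContent, viewer: str, lat: float, lon: float) -> bool:
        near_enough = distance_m(lat, lon, content.latitude, content.longitude) <= content.unlock_radius_m
        return viewer in content.audience and near_enough

A variation could let people in the audience see that content exists at a location, to encourage exploration, while still requiring them to be there before it can be opened.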

3. Information Retrieval & Consumption
Information retrieval, such as web browsing and viewing videos in much the same way mobile devices are used today, would give an AR platform greater versatility. While this type of interaction is not as contextually driven as the other two, it offers important functionality when it is needed. An AR device capable of information retrieval brings the risks of high information density, overuse and removing the user from the physical world. Too much of this content while moving is potentially more dangerous than looking down at a mobile phone.

In the scenario, information retrieval was used for watching a video on the bus and for wayfinding to direct Eric to the bar. In the bar, one of his friends brought up information to clarify a topic of conversation and shared it with the group (an example of both social content and information retrieval). The intention of this project isn’t to replace the phone; the AR concept doesn’t need the full functionality of mobile information retrieval, it needs to be suitable to the medium and its users.

Considerations will need to be made for designing these types of AR interactions:

  • How content will be searched for
  • What type of content can be accessed
  • Interacting with the information and modifying or customising it
  • How it will be displayed to the user
  • The quantity and density of information capable of being displayed
  • Where the content can be accessed

 

It may be necessary to narrow the focus onto one or two of these areas as the project progresses.

 
