March 2018 - November 2018
We started this project as an Inclusive Design Challenge given by Microsoft in our UX Design class. The challenge was to design for inclusivity in a "deskless workspace". After exploring multiple options, my team decided to work on making digital art in VR accessible and used Google Tilt Brush as the starting point.
Cherisha Agarwal, Joanna Yen, Pratik Jain, Simi Gu, Srishti Kush, Raksha Ravimohan
About 300 million people worldwide have some form of limb disability, and because most VR applications depend on hand controllers, they are inaccessible to these users. People with cerebral palsy, multiple sclerosis, paralysis, or a missing limb are completely excluded from utilizing the full potential of VR.
Applications like Google Tilt Brush, Medium, Blocks and Quill, which require the use of hand controllers, are completely inaccessible to people with limb disabilities.
To understand users' mobility issues and their experience with creating art, we conducted one-on-one interviews at various centers such as the ADAPT Community, hospitals and disability rights centers, and spoke to people who had been diagnosed with cerebral palsy or multiple sclerosis and people with limited mobility due to an injury.
Perception about VR
Most of our potential users were not familiar with VR, so to introduce them to the technology, we had them watch a VR movie. They found the experience exciting and wanted to learn more.
Current methods of drawing
Depending on the medium they were interested in, users mostly drew using a hand-over-hand technique, a Wacom tablet, or a smartphone.
Level of comfort with technology
Most of the experts we interviewed said that voice commands and eye gestures would be an easy way of interacting.
"Interaction using eyes can be helpful and you could explore mixed reality as well"
- Serap Yiggit Elliot, UX researcher at Google
"There is very little work done in accessibility for VR. Tobii is creating headsets with eye tracking built in."
- Todd Bryant, Professor at NYU
To envision the basic interaction of controlling the interface through eye tracking and voice commands, we conducted a role play where the members of the team would behave like the interface elements.
After gaining clarity on the tasks and interactions the tool should support, we created a more detailed prototype that incorporated advanced tools and a menu for other system functions. The interface was inspired by existing design tools such as Sketch and Illustrator.
We tested the paper prototype with both experts and potential users from the ADAPT Community to see whether the interface was intuitive enough. We learned that people with no experience using art tools found it difficult to navigate the interface and understand what the tools meant.
The final prototype was implemented in Unreal Engine and combined all the learnings from the previous rounds of user testing. To help users navigate the system easily, we included an AI assistant that can converse with the user and provide onboarding.
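Hands-free VR interfaces like this one commonly pair eye tracking with dwell-time ("gaze and hold") selection: the user looks at a tool, and holding their gaze on it for a moment triggers it, with no controller needed. The sketch below illustrates that pattern in plain C++; all names are hypothetical, and the actual prototype's Unreal Engine implementation would differ.

```cpp
#include <string>

// Illustrative dwell-time selector: looking at the same UI element for
// `dwellSeconds` triggers it once. Names here are assumptions, not the
// prototype's actual code.
class DwellSelector {
public:
    explicit DwellSelector(double dwellSeconds) : threshold(dwellSeconds) {}

    // Call every frame with the element currently under the user's gaze
    // (empty string = looking at nothing) and the frame's delta time.
    // Returns the element's name on the frame its dwell time crosses the
    // threshold; otherwise returns an empty string.
    std::string update(const std::string& gazedElement, double dt) {
        if (gazedElement != current) {   // gaze moved: restart the timer
            current = gazedElement;
            elapsed = 0.0;
            fired = false;
        } else if (!current.empty() && !fired) {
            elapsed += dt;
            if (elapsed >= threshold) {  // held long enough: fire once
                fired = true;
                return current;
            }
        }
        return "";
    }

private:
    std::string current;
    double elapsed = 0.0;
    double threshold;
    bool fired = false;
};
```

A key design choice in this pattern is that selection fires only once per fixation, so the user can rest their gaze on a tool without triggering it repeatedly; moving the eyes away and back resets the timer. This also suits the feedback noted later about users who cannot move their head much, since dwell selection requires no large movements at all.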
The app will introduce many people into the world of VR and help them experience this new medium.
With onboarding, AI assistance and clear-cut visuals for the tools, the interaction becomes more natural and intuitive.
The different environments within the app provide the feeling of visiting different places.
The user will be able to connect and draw with their friends from across the world.
The different environments and strokes are very pleasing and therapeutic.
The app provides opportunities to create and share artwork, thereby helping users build skills and generate income.
- User, Axis Project
- User, Haym Solomon Home
- VR expert, Mount Sinai
- Roland Dubois, Accessibility expert
Place elements in such a way that people who cannot move their head too much will also be able to use it effectively.
Increase contrast in the menu to make it suitable for people with low vision and allow them to increase the size of the menu.
Instead of free strokes which can be jittery, automate some of the drawing by using pre-existing shapes or replicating patterns.
This project received a $10,000 grant from NYC Media Lab's XR Startup Initiative. During the 12-week intensive program, our team worked on the business model and product-market fit, and conducted about 120 customer and expert interviews. We also showcased our prototype at various exhibits: NYVR Expo '18, Media Lab Summit '18, Exploring Future Reality '18, R-Lab XR Showcase, Science Fair BCG Ventures and Verizon 5G Labs.