The Challenge

In 5 days, using the Google Design Sprint methodology, create a context-aware system that adjusts based on motion, location, or travel mode—balancing attention, navigation, and safety. 

Skills: Literature Review, Competitive Audit, Rapid Prototyping, Affinity Mapping, Storyboarding

Approach
In a team of three, we created an interactive prototype to assist older adults with indoor navigation.

Researcher and Prototyper

Tools
Slack, Miro, FigJam, Figma

View Prototype

Problem Statement

The current state of indoor navigation tools for complex environments, such as airports, shows little consideration for people with mobility disabilities and older adults. Existing tools fail to address the common problems aging populations face while navigating indoor environments (e.g., visual problems, cognitive impairments, unclear guidance, and delayed assistance). Digital tools tend to emphasize assisting blind and visually impaired users, with usability validated through simulated results rather than insights from the lived experiences of those affected. Physical and procedural tools tend to be inaccessible, confusing, and anxiety-inducing, causing high cognitive load and abandonment.

We propose a context-aware system that helps older adults—disabled and able-bodied—better navigate these environments with continued, proper guidance and clear assistance to their destination. 

About the Challenge

Our challenge focuses on developing context-aware systems that enhance indoor navigation for people in motion or traveling, especially in large environments such as airports. Navigating these spaces can be difficult; even if you have done it before, the process remains stressful. Transit in general can be a source of anxiety, but the aids currently available for indoor navigation are limited to unclear signage and confusing maps or guides, with little consideration for people with disabilities and older adults. Moreover, GPS is unreliable indoors, which means an alternative or additional location-detection method is required.

Understanding the Domain (Day 1)

The Research

Studies of wayfinding and indoor navigation applications tend to focus on optimizing these environments for people who are blind or visually impaired. There is limited focus on older adults, persons with mobility or physical disabilities, and those with cognitive decline. Moreover, testing and evaluation methods often rely on simulated results, which causes products to fall short of the unique needs and expectations of their end users. The manner and thought process behind an able-bodied person's execution of a task differ from those of an older adult or a person with disabilities; therefore, participants affected by these disabilities must be included in the evaluation and testing process to fully integrate their experience into the interface.

Related Works

One study developed a context-aware system called IndoorWaze [1] that automatically generates a high-fidelity floor plan to help people with poor spatial awareness, children, senior citizens, and visually impaired users navigate indoor environments. The authors focused on a mall, collecting RSS data (Wi-Fi fingerprints) from each store to act as labeled samples for that store. IMU sensors (accelerometer, compass, gyroscope) were used to extract shoppers' walking traces. Combined with the RSS fingerprints, the application can infer which stores shoppers have passed along their walking traces. Then, based on the shopper's current location and path, the system provides context-aware audio instructions. If a shopper deviates from the path, the sensors alert the system, catching the mistake immediately.

The use of RSS data collection and Wi-Fi fingerprints was particularly interesting. The quick setup would allow stores, or in our case airport locations (check-in gates, in-airport restaurants and stores, parking deck levels, etc.), to give us a signature to label their locations. The traveler could then have a visual pathway to their location, as well as see the places they have already passed. It would allow for better indoor positioning and an accessible floor plan that offers the necessary, continued guidance for our target demographic.
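As a rough illustration of how RSS fingerprinting could work in our airport context, the sketch below matches a live Wi-Fi scan against labeled fingerprints using a nearest-neighbor distance. The access-point names, RSS values, and locations are all hypothetical; a real deployment would average many samples per location and handle missing access points more carefully.

```python
import math

# Hypothetical labeled fingerprints: location -> mean RSS (dBm) per access point.
FINGERPRINTS = {
    "Gate A12":    {"ap1": -45, "ap2": -70, "ap3": -80},
    "Chick-fil-A": {"ap1": -72, "ap2": -48, "ap3": -66},
    "Baggage":     {"ap1": -85, "ap2": -60, "ap3": -50},
}

def locate(scan, fingerprints=FINGERPRINTS):
    """Return the labeled location whose fingerprint is closest
    (Euclidean distance over shared access points) to the live scan."""
    def dist(reference):
        shared = scan.keys() & reference.keys()
        return math.sqrt(sum((scan[ap] - reference[ap]) ** 2 for ap in shared))
    return min(fingerprints, key=lambda loc: dist(fingerprints[loc]))

# A live scan near the food court matches the Chick-fil-A fingerprint.
print(locate({"ap1": -70, "ap2": -50, "ap3": -68}))  # → Chick-fil-A
```

The same lookup, run continuously as the traveler walks, is what lets a system like IndoorWaze infer which labeled locations a person has passed.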

Another study [2] focused on a wearable system using an Arduino, GSM module, speaker, and infrared module to assist the aging population with their normal movement in indoor environments; it was also equipped with a help button to send an SMS alert to a caregiver in an emergency. The authors note that the most common problems among the aging population are a decline in memory (forgetfulness), visual problems (low vision), dependency on others, a shortage of caregivers, and the increased cost of healthcare. Cognitive impairments contribute to difficulty memorizing and finding a path during movement, meaning directions must be specific (knowing exactly how and where to move) to successfully navigate to desired locations. These issues cause older adults to wait for assistance or forfeit altogether, leading to disappointment, frustration, loneliness, and insecurity.

Before crafting their small device that interprets gestures and voice commands to help elderly adults navigate the airport [3], the developers found that poor eyesight, reduced mobility, information complexity, and confusion caused by the lack of continuous guidance from existing navigation tools—like signage and maps—were the most significant issues the device should tackle. To use the device, users first scan their itinerary to process their route and receive guidance to their location. It provides four essential wayfinding functions:

  1. Direction – current position, time left to reach the segment, and details of the next intersection's directions are displayed.

  2. Distance – progress percentage along the route, the distance until the next turn, current floor level, and total route distance are displayed.

  3. Process Information – provides step completion process, subsequent steps, and ongoing airport task information.

  4. Assistance Function – users can communicate with the AI assistant to resolve any wayfinding issues.
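The Distance function above reduces to simple arithmetic over the route's segments. A minimal sketch of those metrics (segment lengths and the traveled distance are hypothetical inputs, not values from the study):

```python
def distance_summary(segment_lengths_m, meters_traveled):
    """Compute metrics a Distance display might show: percent of the
    route completed, meters remaining, and meters to the next turn."""
    total = sum(segment_lengths_m)
    traveled = min(meters_traveled, total)
    covered = 0        # distance covered by fully completed segments
    to_next_turn = 0   # meters left in the current segment
    for length in segment_lengths_m:
        if covered + length > traveled:
            to_next_turn = covered + length - traveled
            break
        covered += length
    return {
        "progress_pct": round(100 * traveled / total),
        "remaining_m": total - traveled,
        "to_next_turn_m": to_next_turn,
        "total_m": total,
    }

# Example: a 400 m route in three segments; the traveler is 160 m in.
print(distance_summary([120, 180, 100], 160))
# → {'progress_pct': 40, 'remaining_m': 240, 'to_next_turn_m': 140, 'total_m': 400}
```

Keeping these values continuously updated and visible is what turns a static map into the kind of continuous, reassuring guidance the research calls for.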

Current challenges in complex environments for older adults include deciphering maps, interpreting signage, and operating digital devices. The mix of physical, digital, and procedural information can lead to high cognitive load, anxiety, and confusion. 

Key Takeaways: 

  • A majority of indoor navigation apps focus on blindness and visual impairments, with little consideration for other disabilities or older adults.

  • Simulated results disregard the lived experiences of those with disabilities and cause products to fall short of their needs/expectations.

  • A combination of IMU sensors and RSS fingerprints would allow better visual navigation and guidance that’s quickly accessible via a phone app.

  • Primary issues among the aging population are forgetfulness, visual problems, and cognitive impairment; the specificity of directions is paramount to the success of the product.

  • Reduced vocal strength makes it harder to verbally request assistance, and the lack of independence exacerbates insecurities/loneliness.

  • Current indoor navigation tools use confusing signage, maps, and hard-to-learn digital devices.

  • Visualization of direction, route distance, task information, and an assistance function would provide continuous guidance that users can easily reference.

Existing Tools and Competitors

Floorplans and Directory Maps

Traditional floor plans and directory maps are typically accessed at stationary terminals or central points throughout the airport (and other complex environments). They often include a legend to differentiate information, with corresponding text or arrows to direct the user. Depending on the size of the location, floor plans and maps can contain large amounts of information with overlapping lines and text that are hard to decipher. While color can help distinguish different pathways, a tangle of colored routes can be overwhelming to look at. Legibility and text size also factor into interpreting these tools; for a person with declining eyesight, they can be a pain to read. Additionally, once you walk away, you must memorize the direction you are supposed to go. There is no continued guidance; you must navigate on your own until you find another map or ask someone for assistance.

Map-Based Navigation Apps (Google, Apple, Waze, etc.)

The benefit of map-based applications is that they offer a visual pathway and continued guidance to your destination. If you deviate from the path, it automatically reroutes and can alert you of any incidents that may cause your travel duration to increase. Audio cues mean you do not always have to pay attention to the screen; instead, you are free to focus on the environment and know that your assistant will alert you when you are nearing the next step. You can also see the total distance necessary to travel, preview the route and all the steps, and customize the frequency of alerts your audio assistant gives you.


The downside of map-based applications is that they are primarily built for outdoor navigation. GPS does not work reliably indoors: dense buildings block and weaken GPS signals, so navigating complex indoor environments would require additional sensors. The apps are great for arriving at a location, but once you are inside, you are on your own. Moreover, digital applications in general can be difficult for older adults and those with impairments to learn.

Environment Signage, Arrows, and Human Guides

Signage and markers are typically found throughout complex environments. They provide general guidance with labels for each destination. Visitors follow one sign to the next along the path, repeating this process until they reach their location; sometimes the arrows are clear, and sometimes they are abandoned too early in the journey. If a place is too crowded, signs may go unnoticed or be inadvertently overlooked, leaving visitors confused or lost. As a result, they may seek assistance from a directory or ask a nearby person, such as a guide or fellow traveler.

While fellow travelers can sometimes help direct visitors to their location, it is just as likely that they are equally lost. Even people who know their way around, or who are employed as official guides, may not articulate directions clearly; conversely, the visitor may not know how to frame their question to receive proper help, or may feel insecure about needing assistance and opt to struggle silently rather than ask for clarification, to avoid embarrassment or shame. If the visitor has a visual impairment or cognitive issues, remembering where to go, how to move, and what to look for along the way can prove challenging.

Keeping this research in mind, we phrased key pain points in the form of “How Might We…” questions:

  1. How might we use ubiquitous computing technology to help older adults and people with disabilities effectively voice their concerns and inconveniences with the system or staff of complex environments (airports, concert halls, parking decks, etc.)?

  2. How might we use ambient cues (sound, light, haptics) to subtly guide individuals without requiring constant screen interaction?

  3. How might we communicate reassurance and progression feedback (e.g., “You’re 20 steps from your gate”) to reduce uncertainty during long navigation paths?

  4. How might we detect a traveler’s physical and cognitive state in real time (e.g., fatigue, stress, disorientation) to automatically adapt navigation assistance accordingly?

Plan and Storyboard (Day 2)

To begin generating creative solutions to this problem, members ran a Crazy 8s session. Instead of sketches, members crafted sentences to convey their ideas. Afterward, we ran an affinity mapping session where ideas were sorted into three categories: Interactivity, Technology, and System Feedback. FigJam link here

Crazy 8s and Affinity Mapping

Storyboard and Persona

To get a better sense of what interactions will be essential to our prototype, we created a storyboard that guides an older adult woman through the Hartsfield-Jackson airport. To reach her gate, she uses our app to scan in her flight information ahead of time. The data is processed and used to create a visual pathway that uses AR to draw a path to her location, utilizing audio cues for added clarity and assistance during her journey. She can preview her route, including directions, distance, and arrival, add stops, and receive visual and auditory instructions with clear confirmation when she arrives at her destination.

User Journey: Andrea is at the Atlanta airport. Prior to her flight, she checked her luggage. As lunch time nears, Andrea grows hungry and searches for a lunch spot before her boarding time. With the complexity of the environment and loud crowd, she quickly becomes disoriented and confused while navigating the interior of the airport.

The affinity mapping session helped us settle on the type of digital product we wanted to create: a mobile application geared towards helping adults aged 40 to 60 navigate through complex and large airports.

Prototype (Day 3)

Low-Medium Fidelity Prototype

In this low-medium fidelity prototype, we tried to capture the essence of context-aware and ubiquitous computing technology by using auditory and visual feedback to provide direction, clarity, and reassurance in Andrea’s disorienting environment.

The storyboard and user journey acted as a guide for the essential interactions and elements required to help Andrea successfully navigate the complex environment. With NavPort, Andrea receives the continuous guidance, control, and clarity that other indoor navigation tools lack. This click-through prototype uses her boarding pass to identify her gate, then helps her navigate the confusing, loud interior of an airport to reach two stops—Chick-fil-A and the boarding gate—without panic or hurry.


Prototype Link

Feedback and Iterate (Day 4)

Peer Feedback from Team 3

Clarity — Is the problem and solution easy to understand?

“The flow was intuitive and met my overall expectations as a solution to the described scenario. The problem is clear, and the solution meets the needs of users attempting to navigate large public spaces by utilizing location services and incorporating varying levels of feedback to gently keep them on track.”

Novelty — What’s fresh, clever, or unexpected here?

“The incorporation of augmented reality and voice assistance was clever. This helps elderly people by allowing them to see exactly where to go. Voice assistance is also helpful in aiding them by acting like a personal aid, so they feel that someone is helping them. The overall focus on building confidence is a clever way of promoting user autonomy. The health monitoring is somewhat unexpected and may seem out of place. The app should focus on navigation, but the inclusion of health information, such as heart rate and pace [may be] confusing.”

Context Alignment — Does it effectively use context-aware or Ubicomp principles?

“This aligns with context-aware and Ubicomp principles by not demanding attention from the users and allowing the users to stay in control. The app lets the user know when they have arrived at their destinations. The user can turn the captions off or on, allowing the user to have autonomy. Audible assistance for users helps them to stay focused on the task at hand without focusing on their phones.”

Feedback and Refinements

  • When users are looking for an airport, display the nearest airports as top options. Most people will be using airports in their area, so this will reduce time spent searching for or typing an airport name.

  • Give the users an option to edit their stops once they start their navigation. Maybe a user wants to add another stop to a different restaurant in the airport before their flight.

  • Once the user starts the navigation process, there is no way to get out of the page. Adding a back arrow or a hamburger menu on the top left or right corner will allow users to still visit other parts of the app once their navigation has begun. With voice assistance in the loop, they do not need to constantly look at their screens.

  • Take out the language discussing mobility, like "I am strong, let's walk," and the rating of the user's walking pace. While we're sure it's unintentional, this comes off as a bit ableist; it puts down those who are less mobile and does not apply to someone using a wheelchair, for example.

  • There was some confusion about whether "listening on" is something you can tap and change while you walk, or whether it simply indicates that headphones are connected to your phone.

  • Forcing you to turn off captions to proceed in the prototype was a bit confusing.

  • Add in a "step by step" that gives a list of the directions as well, similar to how Apple does, for example, rather than just a map view.

  • Add the ‘k’ in Chick-fil-A in the screen where you set the navigation up.

Iterate and Refine

Our team successfully completed a one-week design sprint that proved both productive and insightful. Despite the tight timeline, I went back and refined the prototype to incorporate the valuable feedback from Team 3 (our classmates). The changes included:

  • Instead of toggling to the next step, users can simply tap the center of the screen.

  • Language throughout the application was revised for clarity and to avoid mobility assumptions.

  • Captions were changed to a toggle-only setting.

  • Back arrows and a route-cancellation overlay (with options to add a stop, pause the route, or end the route) were added for clearer exits and navigation.

  • Added a route preview to the map.

Conclusion (Lessons Learned)🤯

One Thing I Learned: Sprints reveal how much ego or pride designers carry, and the process becomes more difficult when everyone is married to their own ideas and mindset instead of being open to new possibilities or methods. On this team, we trusted each other enough to know when to work individually and when to step in or request help without judgment or ego.

One decision I would change: the depth of my competitive analysis. It seems like a small detail, but I could have examined specific map-based applications in more depth instead of generalizing them together. That specificity could have allowed us to develop more detailed requirements or even create a user flow diagram to better guide our prototyping process.

One method or skill I will continue to carry forward: stand-up meetings and checkpoints. They are simple but effective for getting a status report on everyone's progress, and a great time to figure out where and how to pivot as quickly as possible, before further mistakes or issues occur.

View Prototype