
PROJECT GUIDELINE

GOOGLE CREATIVE LAB, New York / GOOGLE / 2021

Awards:

Silver Cannes Lions

Overview

Background

At a hackathon in August 2019, Thomas Panek, an accomplished marathon runner who is blind, challenged our team: “for greater independence while exercising, can we make navigation for a blind runner possible?” According to the International Agency for the Prevention of Blindness, 217 million people in the world have moderate to severe vision loss and 36 million are blind. What’s more, vision loss often leads to declines in physical and mental health. Innovation in accessibility is most powerful when it is built collaboratively with the community and industry it serves. In design partnership with Thomas, we set out to explore whether we could enable independent running and walking for exercise, and with it an increased quality of life, for people who are blind or have low vision.

Describe the creative idea

Typically, when someone who is blind or has low vision runs for exercise, they might use a treadmill, rely on a guide dog, or be tethered to a human guide. In many cases, they have been unable to run independently since becoming blind or low vision. We worked with Thomas, a blind runner, to better understand what he would need to run independently. Could the technology most people already have in their pockets, a consumer smartphone and headphones, plus a line painted on the ground, enable independent running for Thomas? Could we build accessibility technology that is economically accessible and doesn’t rely on specialty hardware? Could we build a solution that runs entirely on-device, without an internet connection? We achieved all of these goals, using consumer hardware to build an on-device machine learning system that enabled Thomas to run independently.

Describe the strategy

We believe in building products that work for everyone. That’s why we invest in and explore technology built primarily together with and for people with disabilities. We designed and developed our technology in direct partnership with Thomas, a blind marathon runner. Starting in 2019 and continuing through 2020, in a series of on-site multi-day sprints (with off-site engineering cycles in between), we tested and refined a number of prototypes for the machine learning system, post-processing system, and audio guidance system. Each sprint gave Thomas the opportunity to provide direct feedback and gave us the chance to make changes on the fly for immediate re-testing. As the technology became more robust, Thomas invited six people who are blind or have low vision from his personal network to try the system with him and provide feedback, helping to shape our technology roadmap going forward.

Describe the execution

Using advances in on-device machine learning, we built a system that enables people who are blind or have low vision to use their Android devices to run or walk independently. Users wear the phone around their waist in a custom harness, with the camera angled to view the path and the pre-painted guideline ahead. A machine learning model receives the camera feed and segments the line from the environment. A post-processing system smooths the model’s output and drives real-time stereo audio feedback that approximates the user’s position and helps them follow the line. As they drift left, they hear a signal that increases in volume and dissonance the further they drift; the same happens if they drift right. When running on or near the line, the signal falls away, creating an effect like an audio tunnel. The system runs entirely on-device, without an internet connection.
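The drift-to-audio mapping described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the shipped system: the function name, the silent-tunnel width, the linear intensity ramp, and the choice of which ear carries the cue are all hypothetical.

```python
def drift_feedback(offset: float, tunnel: float = 0.1) -> dict:
    """Map the runner's lateral offset from the line to audio cues.

    offset: normalized position relative to the line,
            -1.0 = far left drift, +1.0 = far right drift, 0.0 = on the line.
    tunnel: half-width of the silent "audio tunnel" around the line
            (an assumed parameter, for illustration only).
    Returns per-ear volume and a dissonance factor, each in [0, 1].
    """
    offset = max(-1.0, min(1.0, offset))
    drift = abs(offset)
    if drift <= tunnel:
        # On or near the line: no warning signal (the audio tunnel).
        return {"left": 0.0, "right": 0.0, "dissonance": 0.0}
    # Scale intensity from 0 at the tunnel edge to 1 at maximum drift,
    # so both volume and dissonance grow the further the runner drifts.
    intensity = (drift - tunnel) / (1.0 - tunnel)
    # Assumption: the cue plays in the ear on the side of the drift.
    left = intensity if offset < 0 else 0.0
    right = intensity if offset > 0 else 0.0
    return {"left": left, "right": right, "dissonance": intensity}
```

In a real pipeline, `offset` would come from the smoothed segmentation output each frame, and the returned values would modulate a continuously playing stereo tone.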

List the results

Thomas used our system to run independently for the first time in 25 years, having gone blind at age 25. Later, he used the system to independently run a New York Road Runners virtual 5K race in New York City’s Central Park at a 7-minute-mile pace. At launch, we received online, print, and TV coverage from The New York Times, The Times of London, National Post, Al Jazeera, Reuters, Canal+, FOX, CBS, Runner’s World, Stanford AI Lab’s The Batch, Daily Mail, Forbes, VentureBeat, Engadget, and others. More importantly, we received 150+ requests for partnership and trials from leading vision and accessibility non-profits and research institutions, paving the way for us to further develop the technology and get it out to more people.
