By Kristen Halpen
Imagine a young adult starting their day in 2032. They wake up to an alarm playing the latest hit they requested the night before, are greeted by a friendly robot in the kitchen with suggestions on what to have for breakfast to fuel their morning, and receive a handy voice reminder to take their medication.
Another voice chimes in, advising them of what is on their agenda and offering suggestions on what to wear based on the local weather and the day's activities. They leave for work in their driverless car with a bag packed with everything they need while away from the house for the day. The door locks, the heat turns down, and the lights turn off behind them. All of this is made possible by voice technology.
Now imagine this voice-first future not working for people with Down syndrome. The reality is that some of the people who could benefit from this technology the most might be left out.
According to Google, the current speech recognition error rate for people with Down syndrome averages 30 percent. But there is work being done to improve this. A few years ago, Google launched a program called Project Euphonia to make voice technology more accessible to individuals with non-standard speech. One challenge for Google has been recruiting enough people to participate in the data collection process. The Canadian Down Syndrome Society (CDSS) saw this as a great opportunity to collaborate and reach individuals within the Down syndrome community. Between Google's technological expertise and CDSS's connections, the partnership works to further Project Euphonia's research. CDSS wants to ensure that individuals with Down syndrome are well represented in the future of voice technology, and so Project Understood was created.
“For most people, voice technology simply makes life a little easier. For people with Down syndrome, it has the potential for creating greater independence. From daily reminders to keeping in contact with loved ones and accessing directions, voice technology can help facilitate infinite access to tools and learnings that could lead to enriched lives,” says Laura LaChance, Interim Executive Director with CDSS.
Project Understood aims to collect voice data from adults with Down syndrome to improve Google's voice recognition models. "With the help of CDSS we were able to sample a small group to test whether there were enough patterns in the speech of people with Down syndrome for our algorithm to learn and adapt," says Julie Cattiau, Product Manager at Google. "It's exciting to see the success of that test and move into the next phase of collecting voice samples that represent the vocal diversity of the community. The more people who participate, the more likely Google will be able to eventually improve speech recognition for everyone."
“This project has really struck a chord with the Canadian Down syndrome community,” says Glen Hoos, Director of Communications for the Down Syndrome Resource Foundation. “As we’ve shared Project Understood with our families, there has been a great deal of enthusiasm for it. It’s easy to see how the technology can be leveraged to create greater independence for many people with Down syndrome, if it can be successfully adapted to diverse patterns of speech.” So now it is up to the Down syndrome community to take action. Machines learn through data. The more data they get, the more accurate they are.
Source: Project Understood