Discussions on Artificial Intelligence Dominate HR Tech 2019

Collage image of people attending and speaking at the HR tech conference


The HR Technology Conference & Exposition (HR Tech) is one of the top global events showcasing emerging technologies and tools transforming the human resources (HR) industry.

This year, platforms utilizing artificial intelligence (AI) continued to trend at HR Tech.

The conference also infused a new focus on mitigating bias and ensuring that AI tools can align with corporate goals for diverse and inclusive hiring.

Our team observed three key themes for accessible technology and the impact of emerging technology on people with disabilities while participating in HR Tech 2019 in Las Vegas, Nevada.

These themes were AI at the forefront, the essential role of C-suite support, and persistent gaps in disability inclusion within AI discussions.

Artificial Intelligence is Front and Center

HR Tech 2019 featured several sessions related to AI bias in emerging technology. Dmitri Krakovsky, the head of Hire for Google, emphasized that “AI will have the greatest impact when everyone can access it and when it is built with everyone’s benefit in mind.” Several other speakers at HR Tech 2019 outlined key strategies for mitigating the bias that AI can introduce into the hiring process. Himanshu Aggarwal, CEO & Co-founder of the AI-powered platform Aspiring Minds, described strategies for using AI responsibly.

Aggarwal and other speakers also noted that:

  • AI measurements can offer broad indicators about a candidate’s qualifications, but humans need to be involved in the process for selecting data sets and evaluating the data to minimize and mitigate bias.
  • The data measured and collected must be directly job-related; otherwise, companies may face increased risks for discrimination (intentionally or unintentionally).

John Sumser, Editor-in-Chief of HR Examiner (an online magazine), also illustrated the landscape of challenges presented by AI usage by adopting the metaphor of a fruit salad. He noted that AI programs can excel at identifying the individual ingredients that comprise the fruit salad, while missing what he considers the best part entirely: “the mixed-up juice at the bottom.” Sumser also stressed that an AI system could mistakenly identify the polka dots on the fruit salad bowl as another ingredient. He offered this advice for working with AI-focused systems:

  • AI is here, so start planning for it and include your legal team in the discussion.
  • Be aware that machines may make major mistakes when using AI today.
  • Machines can have biases pre-programmed in because they are designed intentionally with algorithms that can reflect the biases of their designers and developers.

C-Suite Support is Essential

HR professionals at HR Tech and elsewhere have increasingly viewed accessibility as a business imperative—but only when it is driven and championed at the executive level. Vendors at HR Tech noted that they only observed enthusiasm for accessibility from HR professionals themselves when corresponding commitment from leaders in the C-suite was also present. These experiences dovetailed with our team’s own insight into driving adoption of accessible technology: forward progress requires solid support from top-level corporate leadership.

Awareness of Disability Inclusion Lags in AI Discussions

When approached with questions about the AI behind their products, HR Tech vendors eagerly described their strategies for mitigating bias. However, most vendors only discussed how their strategies dealt with biases for gender and race. Few vendors showed sufficient knowledge of the issue to respond to this question in depth with respect to disability. We know from our work in promoting accessible workplace technology that people with disabilities represent a very heterogeneous group; this group contains many people considered outliers when compared with people without disabilities.

Thus, we find it critical that these platforms pay careful attention to multi-factored dimensions of inclusion. We also find it imperative to recall wisdom from leaders like Jutta Treviranus, Director of the Inclusive Design Research Centre at OCAD University in Toronto, Canada. She notes that success in reducing disability-related biases in AI requires approaches rooted in the jagged starburst of human data—rather than simple bell curves.

HR Tech 2019 also reminded our team that awareness about the impact of emerging technology on people with disabilities remains the major immediate hurdle—especially when companies increasingly seek to benefit from inclusion of people with disabilities in the workplace. Unless HR systems align with diversity-focused hiring goals, companies will not fully realize the business advantages of hiring people with disabilities. In fact, new research from Disability:IN and the American Association of People with Disabilities showed that businesses leading in inclusion of people with disabilities saw a 28 percent gain in revenue. These organizations also witnessed a doubling of their net income and a 30 percent increase in economic profit margins.

How Emojis are Improving Inclusion


This fall, we can expect an array of new emojis coming to our smart devices, including ones that are more inclusive of differing genders.

The Unicode Consortium announced earlier this year that there would be 62 new emojis coming to smart devices, including 55 emojis that will strive to be more gender inclusive.

Emojis of the transgender flag and of non-binary individuals in occupations previously depicted only as women or men will be just some of the new additions we can expect to see.

 

Some of the new gender inclusive emojis to be released later this year

These emojis will not only allow people of differing gender identities to express themselves through messages and social media in a small but meaningful way; they will also help people of all genders feel validated in who they are.

While these emojis are set to appear on most devices around September or October, some smart devices could receive the new additions early.

New Braille Keyboard Opens Many Doors

Two hands reading a book in braille

With the popularity of the smartphone, many people within the visually impaired community have used the voice dictation feature to write text messages. Within the last few weeks, however, Google has given Android users with visual impairments a second way to communicate.

Google released a new braille keyboard, built into its TalkBack screen reader, for devices running Android 5.0 and later.

The keyboard will be available in braille grades 1 and 2 in English and will use a six-key layout, each key representing one of the six braille dots. The keys, numbered one through six, are pressed in combinations to form letters, words, and sentences, allowing text to be written on the smartphone entirely in braille. Words and spaces can also be deleted with a simple two-finger swipe to the left or right.
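The six-key idea can be illustrated with a minimal sketch (not Google's actual implementation): each braille cell is a set of raised dots numbered 1 through 6, and a chord of pressed keys selects the matching letter. The mapping below covers the first ten letters of Grade 1 English braille.

```python
# Grade 1 English braille: each letter corresponds to a set of raised dots.
BRAILLE_LETTERS = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode_chord(pressed_keys):
    """Translate one chord of pressed keys (dot numbers) into a letter."""
    return BRAILLE_LETTERS.get(frozenset(pressed_keys), "?")

def decode_word(chords):
    """Translate a sequence of chords into a word."""
    return "".join(decode_chord(c) for c in chords)

print(decode_word([{1, 2, 5}, {1}]))  # prints: ha
```

A real keyboard adds timing, contractions (grade 2), and gesture handling on top of this core dot-to-letter lookup.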
As smartphones became more popular, many worried that braille would soon become obsolete for the next generation with visual impairments. In some instances, braille keyboards could be attached to devices to write messages, but that required carrying around a keyboard in addition to a phone. TalkBack will not only make messaging easier and more compact for those with visual impairments but will also help advocate for the importance of learning braille.

TalkBack is only one of the many tools available to those with visual impairments for navigating smart technology through Android’s Accessibility Suite. To learn more about the product, click here, or to learn how to set the system up on your device, click here.

Exploring Tangible Pathways for XR Accessibility at the 2019 W3C Workshop

Women in wheelchair at the access symposium wearing the Oculus headgear

Lately, the Partnership on Employment & Accessible Technology (PEAT) has met many people across industry and academia who are enthusiastically driving a new wave of XR (extended reality) technologies. This community’s commitment to accessible and inclusive XR solutions is essential, and we were excited to join a recent workshop in Seattle, Washington exploring these issues in depth.

Hosted by the World Wide Web Consortium (W3C), the workshop’s goal was to discuss strategies for making web-based XR platforms accessible using principles of inclusive design.

Below, please check out the takeaways our team gathered for tackling the unique accessibility challenges of XR—and how accessible XR can increase employment opportunities for people with disabilities.

XR Is Made for Accessibility

Speakers throughout the day conveyed an overarching theme: XR naturally lends itself to inclusive design. Josh O’Connor from the W3C noted that XR can provide “rich, accessible alternatives” that do more than simply convey text on a web page. In fact, XR can offer a full accessible experience with multiple modes of interaction based around the user’s needs and preferences.

Creating XR tools often means blending physical and digital environments, which creates opportunities for overlaying accessibility features directly onto the world. Common recommendations emerged from several speakers throughout the day, including the importance of building XR hand and eye controllers accessible to people with varied dexterity, and the possibility of plain language guidance to support cognitive accessibility.

Several speakers also discussed exciting research on tangible accessibility solutions:

Meredith Ringel Morris from Microsoft demoed SeeingVR, a set of tools to make virtual reality more accessible to people with low vision.

Wendy Dannels from the Rochester Institute of Technology presented her research to deliver auditory accessibility using XR.

Melina Möhlne from IRT showcased her research on how to display subtitles in 360° media.

Building XR with Meaning

During the event, we discussed the guiding question of how to extend existing web accessibility standards to XR platforms. For example, what factors could we apply from the W3C’s Web Content Accessibility Guidelines (WCAG) and the Web Accessibility Initiative’s ARIA standards (Accessible Rich Internet Applications)?

Taking a step back, this question really concerns how to build XR that conveys meaning to people with and without disabilities. ARIA attributes work for websites because they provide labels for specific features like checkboxes and forms. XR spaces, by contrast, are essentially boundless. Because many more types of objects require labeling, participants generally doubted ARIA’s ability to transfer to XR.

Fortunately, a new file type called glTF could help us assign meaning in XR spaces. Chris Joel from Google presented glTF as a counterpart to JPEG image files: while JPEG is for pictures, glTF is for 3-D objects and scenes. These files can carry labels with rich text information that makes them more accessible. This text can be legible to screen readers, and it can provide plain language guidance to aid with cognitive accessibility.
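Because glTF is JSON at its core, descriptive metadata can ride along with a 3-D scene. The sketch below builds a minimal glTF document whose node carries a label in the spec's `extras` field, the sanctioned spot for application-specific data; the `accessibility` keys are illustrative, not a published standard.

```python
import json

# A minimal glTF 2.0 skeleton: one scene containing one named node.
# The "extras" field holds application-specific data; here we use it to
# attach a hypothetical accessibility label a screen reader could surface.
scene = {
    "asset": {"version": "2.0"},
    "scenes": [{"nodes": [0]}],
    "nodes": [
        {
            "name": "office_door",
            "extras": {
                "accessibility": {
                    "label": "Office door",
                    "description": "A wooden door that opens into the meeting room.",
                }
            },
        }
    ],
}

gltf_text = json.dumps(scene, indent=2)
print(gltf_text)
```

An XR runtime could walk the node tree, read these labels, and hand them to a screen reader or plain-language overlay, much as a browser exposes ARIA attributes.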

However, questions remained about just how much information to feed the user, as opposed to letting them figure out the 3-D space on their own. Participants wondered how we can make this information production easier for the developers creating glTF files— and of course, what other solutions might exist to build XR with meaning.

How Can We Use XR at Work?

XR includes virtual, augmented, immersive, and mixed reality tools, and the applications are staggering. In the workplace, these XR tools can bolster functions like virtual meetings and online training. XR can also provide a platform for workers to engage more intimately with 3-D models, which span domains like architecture, medicine, engineering, and manufacturing.

PEAT looks forward to continued involvement in making XR more accessible through our partnership with the XR Access Initiative. For more on the topic of accessible XR technologies, check out our key takeaways from the 2019 MAVRIC Conference on Achieving Measurable Results with XR.

Google Seeks Help From People With Down Syndrome

A man with voice recognition on his phone

Voice computing is the future of tech— devices like smart-home systems and internet-enabled speakers are leading a shift away from screens and towards speech. But for people with unique speech patterns, these devices can be inaccessible when speech-recognition technology fails to understand what users are saying.

Google is aiming to change that with a new initiative dubbed “Project Understood.” The company is partnering with the Canadian Down Syndrome Society to solicit hundreds of voice recordings from people with Down syndrome in order to train its voice recognition AI to better understand them.

“Out of the box, Google’s speech recognizer would not recognize every third word for a person with Down syndrome, and that makes the technology not very usable,” Google engineer Jimmy Tobin said in a video introducing the project.

Voice assistants — which offer AI-driven scheduling, reminders, and lifestyle tools — have the potential to let people with Down syndrome live more independently, according to Matt MacNeil, who has Down syndrome and is working with Google on the project.

“When I started doing the project, the first thing that came to my mind is really helping more people be independent,” MacNeil said in the announcement video.

Continue on to Business Insider to read the complete article.

Meet The Kenyan Engineer Who Created Gloves That Turn Sign Language Into Audible Speech

Kenyan engineer seated at a work station, holding up the sign language glove with his right hand

Twenty-five-year-old Kenyan engineer and innovator, Roy Allela, has created a set of gloves that will ultimately allow better communication between those who are deaf and those who are hearing yet may not necessarily know sign language. The Sign-IO gloves in essence translate signed hand movements into audible speech.

Allela’s gloves feature sensors on each finger that detect its position, including how much the finger bends into a given position. The glove connects via Bluetooth to an Android phone, which then uses the text-to-speech function to provide translated speech for the hand gestures of the person signing.
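The translation step can be sketched as a simple template-matching problem (Sign-IO's actual firmware is not public, so the sign names, bend values, and tolerance below are hypothetical): five flex-sensor readings, one per finger, are compared against stored templates for known signs, and the closest match within a tolerance is handed to the phone's text-to-speech engine.

```python
# Hypothetical flex-sensor templates: bend values per finger,
# (thumb, index, middle, ring, pinky), where 0 = straight, 100 = fully bent.
SIGN_TEMPLATES = {
    "hello": (10, 5, 5, 5, 5),
    "yes": (80, 90, 90, 90, 90),
    "no": (40, 10, 10, 90, 90),
}

def classify(reading, tolerance=60):
    """Return the sign whose template is nearest the reading, or None."""
    def distance(template):
        # Squared Euclidean distance between reading and template.
        return sum((r - t) ** 2 for r, t in zip(reading, template))
    best = min(SIGN_TEMPLATES, key=lambda sign: distance(SIGN_TEMPLATES[sign]))
    return best if distance(SIGN_TEMPLATES[best]) <= tolerance ** 2 else None

print(classify((78, 88, 92, 91, 89)))  # close to "yes" -> prints: yes
```

A production system would classify sequences of readings over time rather than single snapshots, since most signs involve movement as well as hand shape.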

The inspiration behind the Sign-IO gloves comes from Allela’s personal experience of having a young niece who is deaf. Neither he nor his family knows sign language, and they often struggled to adequately and consistently communicate with her.

“My niece wears the gloves, pairs them with her phone or mine, then starts signing. I’m able to understand what she’s saying,” Allela shared in an interview with The Guardian.

Allela’s vision for the gloves is to have them placed in schools for special needs children throughout his home country of Kenya and then expand from there to positively impact the experiences of as many deaf or hearing-impaired children as possible. His gloves are amongst a number of cutting-edge projects that are contributing to the growing market of assistive technology devices that seek to provide aid to those with specific impairments and limitations.

Continue on to Because of Them We Can to read the complete article.

Wheelchair users may soon have more chances to hail Lyft, Uber rides

Uber driver assisting man in a wheelchair

As Lyft and Uber became part of the nation’s transportation systems, people who use non-folding wheelchairs felt left on the sidelines because the cars couldn’t accommodate them. That’s slowly starting to change. The two San Francisco companies on July 1 began collecting 10 cents on every ride in California to go to an accessibility fund established by the California Public Utilities Commission.

The agency has not yet said how and where that money will be allocated, but its purpose is to make sure that the apps offer sufficient vehicles with “ramps, lifts and adequate space to accommodate users who cannot leave their wheelchairs during a trip.” The fund grew out of a state bill passed last year, SB1376, requiring the companies to provide accessible services.

Meanwhile, Lyft, which so far has referred wheelchair users to call paratransit, taxi companies or other third parties, is starting a pilot on Tuesday in San Francisco and Los Angeles to offer five wheelchair-accessible vehicles in each market. Although the number seems modest, each will operate for 14 hours straight (with different drivers), a time frame spanning the most popular ride-request periods, according to Lyft.

The cars, modified 2019 Toyota Sienna minivans, will be driven by trained employees of paratransit provider First Transit. Lyft riders will be able to summon them via the app and will pay the same prices as for similar Lyft rides.

Lyft offers bonuses to independent-contractor drivers who happen to have wheelchair-accessible vehicles, though the company was unable to say how many people have them.

Continue on to SFC.com to read the complete article.

Elon Musk is making implants to link the brain with a smartphone

image of a memory boost implant inside head

Elon Musk wants to insert Bluetooth-enabled implants into your brain, claiming the devices could enable telepathy and repair motor function in people with injuries.

Speaking recently, the CEO of Tesla (TSLA) and SpaceX said his Neuralink devices will consist of a tiny chip connected to 1,000 wires measuring one-tenth the width of a human hair.

The chip features a USB-C port, the same adapter used by Apple’s MacBooks, and connects via Bluetooth to a small computer worn over the ear and to a smartphone, Musk said.

“If you’re going to stick something in a brain, you want it not to be large,” Musk said, playing up the device’s diminutive size. Neuralink, a startup founded by Musk, says the devices can be used by those seeking a memory boost or by stroke victims, cancer patients, quadriplegics or others with congenital defects.

The company says up to 10 units can be placed in a patient’s brain. The chips will connect to an iPhone app that the user can control.

The devices will be installed by a robot built by the startup. Musk said the robot, when operated by a surgeon, will drill 2 millimeter holes in a person’s skull. The chip part of the device will plug the hole in the patient’s skull.

“The interface to the chip is wireless, so you have no wires poking out of your head. That’s very important,” Musk added.

Continue on to CNN News to read the complete article.

Meet Grace Hopper Celebration 2019’s Honoree Jhillika Kumar

Jhillika Kumar poses outside, wearing a white blouse and smiling

The Student of Vision Abie Award honors young women dedicated to creating a future where the people who imagine and build technology mirror the people and societies for which they build. This year’s winner is Georgia Tech student Jhillika Kumar.

When Jhillika’s parents brought home an iPad for the first time, they could not have predicted how much it would improve their family’s lives. Accessible technology, for the first time ever, allowed her autistic and nonverbal brother to enjoy his passion for music. It distracted his mind from the physical world of disability. She watched her brother swipe and tap swiftly across the interface. The smile that it brought him is the smile she wants to bring to millions of others with disabilities.

Jhillika’s family experience ignited her passion to advocate for disability rights and a career driven by a mission to create an inclusive world. She is a UX/UI designer, aspiring entrepreneur, and a third-year Georgia Tech student with a desire to improve the lives of the differently abled. She advocates to lift the barriers that exist within technology, design, and even policy, and empowers the largest underserved group by bringing attention to the importance of empathy and mutuality in design.

Knowing the impact that UX design could make on someone who once couldn’t communicate, Jhillika decided to pursue a focus in computer science and interaction design through Georgia Tech’s undergraduate Computational Media program and Digital Media master’s program. Over the summer of her sophomore year of college, she interned at Disney, where she created a short film to raise awareness among product teams of their technology’s capacity to empower entire communities of untapped potential purely through improved accessibility. Expanding on this, Jhillika presented a talk at TEDxGeorgiaTech last fall, where she spoke about the importance of accessibility in the industry.

All of Jhillika’s efforts in this space have come together in her current initiative: an on-campus organization she founded called AxisAbility. In order to augment the capabilities of individuals with Autism Spectrum Disorder, AxisAbility is creating a virtual platform to understand family needs and match them with the technology engineered to directly generate physiological changes in the brain to improve cognitive function.

At the School of Interactive Computing, Jhillika currently works in academia, collaborating with Dr. Gregory Abowd and Ivan Riobo to study how non-speaking autistic individuals could use technology-led therapies and assistive technologies to communicate. The study evaluates cognitive competency through eye-gaze tracking software (retinal movement), which could provide vast insight into their cognitive abilities. Jhillika returned to school for her junior year of college filled with the spirit of empathy for the differently abled, and was invited as a speaker at World Information Architecture Day and FutureX Live, as well as Women in XR. Her initiatives won her the Alvin M. Ferst Leadership and Entrepreneur Award for 2019, awarded by Georgia Tech.

Continue on to “How Our College Startup’s Autism App Is Flowering Into Fruition” at Enlighten Mentors to read Jhillika’s personal story and how you can help her mission.

Google announces literacy activities to help kids evaluate and analyze media as they browse the Internet

Mom and daughter are lying in bed together, looking at an iPad and smiling

Google is pleased to announce the addition of 6 new media literacy activities to the 2019 edition of Be Internet Awesome. Designed to help kids analyze and evaluate media as they navigate the Internet, the new lessons address educators’ growing interest in teaching media literacy.

They were developed in collaboration with Anne Collier, executive director of The Net Safety Collaborative, and Faith Rogow, PhD, co-author of The Teacher’s Guide to Media Literacy and a co-founder of the National Association for Media Literacy Education. Because media literacy is essential to safety and citizenship in the digital age, the new lessons complement Be Internet Awesome’s digital safety and citizenship topics.

Overview of new activities:
1. Share with Care: That’s not what I meant!
● Overview: Students will learn the importance of asking the question: “How might others interpret what I share?” They’ll learn to read visual cues people use to communicate information about themselves and to draw conclusions about others.

2. Share with Care: Frame it
● Overview: Students will learn to see themselves as media creators. They’ll understand that media makers make choices about what to show and what to keep outside the frame. They’ll apply the concept of framing to understand the difference between what to make visible and public online and what to keep “invisible.”

3. Don’t Fall for Fake: Is that really true?
● Overview: Students will learn how to apply critical thinking to discern between what’s credible and non-credible in the many kinds of media they run into online.

4. Don’t Fall for Fake: Spotting disinformation online
● Overview: Students will learn how to look for and analyze clues to what is and isn’t reliable information online.

5. It’s Cool to Be Kind: How words can change a picture
● Overview: Students will learn to make meaning from the combination of pictures and words and will understand how a caption can change what we think a picture is communicating. They will gain an appreciation for the power of their own words, especially when combined with pictures they post.

6. When in Doubt, Talk It Out: What does it mean to be brave?
● Overview: Students will think about what it means to be brave online and IRL, where they got their ideas about “brave” and how media affect their thinking about it.

Expanding resources to families
We teamed up with the YMCA across six cities to host bilingual workshops for parents to help teach families about online safety and digital citizenship with Be Internet Awesome and help families create healthy digital habits with the Family Link app. The workshops, designed for parents, coincide with June’s National Internet Safety Month and come at the start of the school summer holidays.

Continue on here to read more.

CSUN Assistive Technology Conference Showcases Innovations For a More Inclusive World

Photo by Lee Choo

By Jacob Bennett

The CSUN Assistive Technology Conference has a specific purpose — to advance knowledge and the use of technology that improves the lives of individuals with disabilities — but its impact is wide-ranging.

In addition to companies that specialize in such things as captioning technology for people who are deaf and hard of hearing and voice-controlled devices for people who are visually impaired, the 34th annual conference, held March 11-15 in Anaheim, was attended by representatives from banks, grocery stores, retail chains, medical companies, airlines and many more companies with vast customer bases.

If attendees weren’t developing assistive technology, they were certainly interested in using it.

At a corner booth in the bustling exhibit hall, the three-person team from Feelif, a tech company from Slovenia, found themselves addressing a steady stream of potential business partners. There was no time to check out other areas of the conference, as the Feelif team was busy showing off their premium tablet for people who are blind and visually impaired, which uses vibrations to simulate the experience of feeling Braille dots.

“It’s very busy,” said Rebeka Zerovnik, the company’s international business development associate. “We don’t have enough people to work the booth.”

The 34th CSUN Assistive Technology Conference — organized by the California State University, Northridge Center on Disabilities, and known in the industry as the CSUN Conference — attracted exhibitors, researchers, consumers, practitioners, government representatives and speakers from around the world.

For the first time, the conference was held at the Anaheim Marriott after a long run in San Diego. The change of venue didn’t seem to hurt attendance — final numbers hadn’t been tallied early this week, but attendance approached 5,000.

Peter Korn, director of accessibility for Amazon Lab126, a research and development team that designs and engineers high-profile consumer electronic devices such as Fire tablets and Amazon Echo, said this was his 28th CSUN Conference, beginning when he was with Berkeley Systems, which developed the outSPOKEN screen reader so that Macintosh computers could be used by people who were blind or partially sighted, and continuing for the past five years with Amazon. In that time, he said, the company has dramatically expanded its footprint at the conference.

“CSUN is the premier assistive technology conference in the world,” Korn said. “Of course we’re here.”

The conference included more than 300 educational sessions, with updates on state-of-the-art technology as well as insights into where the industry is headed. For example, attendees could learn about how artificial intelligence will be critical to improving assistive technology applications, and best practices for including people with disabilities in usability studies.

A seventh annual Journal on Technology and People with Disabilities will be published after the conference and will highlight the proceedings from the conference’s science and research track.

A highlight of the conference was the exhibit hall, where 122 booths showcased time-tested and brand-new solutions. A wristband used sonar to locate obstacles near people with visual impairments, then vibrated to help navigate around the obstacles. An app connected people who are blind or have low vision to trained agents who serve as “on-demand eyes.” A real-time transcription and captioning service helped students who are deaf and hard of hearing access distance-learning courses.

The new venue kept all informational sessions and the exhibit hall on the same floor, which had not been the case in San Diego.

“We were very pleased to see that the attendance stayed strong at our new venue for the 2019 event,” said Sandy Plotin, managing director of the Center on Disabilities. “The benefits of having all the conference activities consolidated on one floor in a ‘mini-convention’ space seems to be providing the positive outcome we were looking for. I’ve heard people say they’ve been able to network even more, and that’s probably the most important component to having a successful conference experience.”

Johanna Lucht, the first NASA engineer who is deaf and who has taken an active role in the control room during a crewed test flight, delivered a keynote address on removing barriers to developing assistive technology. She noted that many of the most beneficial technologies for people with disabilities were not designed with that purpose. As an example, she noted that ridesharing services such as Uber removed potential miscommunications that occurred when people who are deaf and hard of hearing ordered taxis through interpreter services — the new apps have enabled people to type in exact addresses.

Conversely, closed captioning can benefit even people without disabilities: For example, it enables people to understand what sportscasters on TV are saying in a noisy and crowded bar.

Lucht noted that assistive technologies are designed to level the playing field for people with disabilities, which implies a sense of “catching up.” Instead, she advocated for designers to think in terms of “universal design,” identifying potential barriers and fixing them before products are launched. She showed a zoo fence that would disrupt the view for visitors in wheelchairs. An assistive design would install a ramp to see over the fence, she said. A universal-design alternative would be a see-through barrier that provides views for everyone.

“The point I’m making is, society is too hung up on the definition behind assistive technology,” Lucht said. “This technology can also benefit everyone.”
