How Xbox Adaptive Controller Will Make Gaming More Accessible


On Wednesday night, Microsoft unveiled its new Xbox Adaptive Controller for the Xbox One console, aimed at making gaming more accessible for people with disabilities and mobility limitations as part of its Gaming for Everyone initiative.

The device allows for individual customization through a series of peripheral attachments that let gamers tailor the controls to their own specific needs and comfort.

For many, the current Xbox controller design (and those of other consoles’ controllers, like Nintendo’s Switch and Sony’s PlayStation 4) presents a challenge, as it was not designed for individuals with mobility impairments. The Adaptive Controller is a foot-long rectangular unit with a d-pad, menu and home buttons, the Xbox home icon button, and two additional large black buttons that can be mapped to any function.

On its back are a series of jacks for input devices and various peripheral accessories, each of which can be mapped to a specific button, trigger or function on the Xbox controller.
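The remapping idea described above can be sketched in a few lines of code. This is a hypothetical illustration only; the real controller is configured through the Xbox Accessories app rather than programmed, and the jack and action names below are invented:

```python
# Hypothetical sketch of how a remappable input hub works: each jack on
# the back of the unit is assigned to a standard controller action.

class AdaptiveHub:
    def __init__(self):
        self.mapping = {}  # jack name -> controller action

    def assign(self, jack, action):
        """Map a physical jack (e.g. an external button or foot pedal) to an action."""
        self.mapping[jack] = action

    def on_input(self, jack):
        """Translate a signal arriving on a jack into its mapped action."""
        return self.mapping.get(jack, "unmapped")

hub = AdaptiveHub()
hub.assign("jack_left_1", "A")               # large external button acts as 'A'
hub.assign("jack_left_2", "right_trigger")   # foot pedal acts as the right trigger

print(hub.on_input("jack_left_1"))   # -> A
```

The point of the design is that the hub, not the game, owns the mapping: any switch that can close a circuit on a jack becomes whatever button the player needs it to be.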

“Everyone knew this was a product that Microsoft should make,” Bryce Johnson, inclusive lead for product research and accessibility for Xbox, told Heat Vision.

The original inspiration for the Adaptive Controller came during 2015’s Microsoft One-Week Hackathon, an event where employees develop new ideas and tackle issues with their products. Through a partnership with Warfighter Engaged, an all-volunteer non-profit that modifies gaming controllers for severely wounded veterans through personally adapted devices, a prototype was put together that would eventually become the Adaptive Controller.

“We had been doing our own stuff for a couple of years before that, making custom adaptive items for combat veterans, and it was kind of a challenge for even the most basic changes, requiring basically taking a controller apart,” Warfighter Engaged founder Ken Jones said. “Microsoft was thinking along the same lines. It was really just perfect timing.”

As development on the project went on, Microsoft began working with other foundations aimed at making gaming more accessible such as AbleGamers, SpecialEffect, the Cerebral Palsy Foundation and Craig Hospital, a Denver-area rehabilitation center for spinal cord and brain injuries.

While third-party manufacturers have created more accessible peripheral controllers in the past, Microsoft is the first of the major console makers to offer a first-party device.

“I think we’re always open to exploring new things,” Johnson said of Microsoft developing their own peripherals for the Adaptive Controller. “Right now, I think the challenge is that there is a super large ecosystem of devices that we intentionally supported as part of the Xbox Adaptive Controller, and we want people to go out and find that vast array of toggles, buttons, etc. and have those work with that device.”

Continue on to The Hollywood Reporter to read the complete article.

The latest video game controller isn’t plastic. It’s your face.

Dunn playing “Minecraft” using voice commands on the Enabled Play controller, facial-expression controls via a phone and virtual buttons on Xbox’s Adaptive Controller. (Courtesy of Enabled Play Game Controller)

By Amanda Florian, The Washington Post

Over decades, input devices in the video game industry have evolved from simple joysticks to sophisticated controllers that emit haptic feedback. But with Enabled Play, a new piece of assistive tech created by self-taught developer Alex Dunn, users are embracing a different kind of input: facial expressions.

While companies like Microsoft have sought to expand accessibility through adaptive controllers and accessories, Dunn’s new device takes those efforts even further, translating users’ head movements, facial expressions, real-time speech and other nontraditional input methods into mouse clicks, key strokes and thumbstick movements. The device has users raising eyebrows — quite literally.

“Enabled Play is a device that learns to work with you — not a device you have to learn to work with,” Dunn, who lives in Boston, said via Zoom.

Dunn, 26, created Enabled Play so that everyone — including his younger brother with a disability — can interface with technology in a natural and intuitive way. At the beginning of the pandemic, the only thing he and his New Hampshire-based brother could do together, while approximately 70 miles apart, was game.

“And that’s when I started to see firsthand some of the challenges that he had and the limitations that games had for people with really any type of disability,” he added.

At 17, Dunn dropped out of Worcester Polytechnic Institute to become a full-time software engineer. He began researching and developing Enabled Play two and a half years ago; the work initially proved challenging, as most speech-recognition programs lagged in response time.

“I built some prototypes with voice commands, and then I started talking to people who were deaf and had a range of disabilities, and I found that voice commands didn’t cut it,” Dunn said.

That’s when he started thinking outside the box.

Having already built Suave Keys, a voice-powered program for gamers with disabilities, Dunn created Snap Keys — an extension that turns a user’s Snapchat lens into a controller when playing games like “Call of Duty,” “Fall Guys” and “Dark Souls.” In 2020, he won two awards for his work at Snap Inc.’s Snap Kit Developer Challenge, a competition among third-party app creators to innovate with Snapchat’s developer tool kit.

With Enabled Play, Dunn takes accessibility to the next level. With a wider variety of inputs, users can connect the assistive device — equipped with a robust CPU and 8 GB of RAM — to a computer, game console or other device to play games in whatever way works best for them.

Dunn also spent time making sure Enabled Play was accessible to people who are deaf, as well as people who want to use nonverbal audio input, like “ooh” or “aah,” to perform an action. Enabled Play’s vowel sound detection model is based on “The Vocal Joystick,” which engineers and linguistics experts at the University of Washington developed in 2006.

“Essentially, it looks to predict the word you are going to say based on what is in the profile, rather than trying to assume it could be any word in the dictionary,” Dunn said. “This helps cut through machine learning bias by learning more about how the individual speaks and applies it to their desired commands.”
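The profile-constrained matching Dunn describes can be sketched as follows. This is an illustrative approximation, not Enabled Play’s actual code; the command list and the similarity measure are assumptions:

```python
# Illustrative sketch: constrain recognition to the user's command profile
# instead of the whole dictionary, picking the closest-sounding command.
from difflib import SequenceMatcher

def match_command(heard, profile):
    """Return the profile command most similar to the recognized text,
    rather than accepting any dictionary word."""
    return max(profile, key=lambda cmd: SequenceMatcher(None, heard, cmd).ratio())

profile = ["jump", "reload", "crouch", "open map"]
print(match_command("relode", profile))  # closest profile entry: 'reload'
```

Because the search space is a handful of user-chosen commands rather than an open vocabulary, even a noisy or atypical pronunciation usually still lands on the intended action.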

Dunn’s AI-enabled controller takes into account a person’s natural tendencies. If a gamer wants to set up a jump command every time they open their mouth, Enabled Play would identify that person’s individual resting mouth position and set that as the baseline.
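That per-user calibration step can be sketched like this. The numbers and function names are hypothetical; Enabled Play’s real model works on camera-derived facial landmarks rather than a single scalar:

```python
# Hypothetical sketch of per-user baseline calibration: learn a person's
# resting mouth openness, then fire the mapped action only when the live
# reading clearly exceeds that personal baseline.

def calibrate(resting_samples):
    """Baseline = average mouth-openness while the user is at rest."""
    return sum(resting_samples) / len(resting_samples)

def detect_open_mouth(reading, baseline, margin=0.15):
    """Trigger only when openness exceeds the user's own baseline plus a margin."""
    return reading > baseline + margin

baseline = calibrate([0.10, 0.12, 0.11])   # this user rests slightly open
print(detect_open_mouth(0.40, baseline))   # clearly open: fire 'jump'
print(detect_open_mouth(0.13, baseline))   # still resting: no action
```

Calibrating to the individual, rather than to a population average, is what keeps a naturally open resting posture from constantly triggering the command.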

In January, Enabled Play officially launched in six countries — its user base extending from the U.S. to the U.K., Ghana and Austria. One of Dunn’s primary goals was to fill a gap in accessibility and pricing compared with other assistive gaming devices.

“There are things like the Xbox Adaptive Controller. There are things like the HORI Flex [for Nintendo Switch]. There are things like Tobii, which does eye-tracking and stuff like that. But it still seemed like it wasn’t enough,” he said.

Compared to some devices that are only compatible with one gaming system or computer at a time, Dunn’s AI-enabled controller — priced at $249.99 — supports a combination of inputs and outputs. Speech therapists say that compared to augmentative and alternative communication (AAC) devices, which are medically essential for some with disabilities, Dunn’s device offers simplicity.

“This is just the start,” said Julia Franklin, a speech language pathologist at Community School of Davidson in Davidson, N.C. Franklin introduced students to Enabled Play this summer and feels it’s a better alternative to other AAC devices on the market that are often “expensive, bulky and limited” in usability. Many sophisticated AAC systems can range from $6,000 to $11,500 for high-tech devices, with low-end eye-trackers running in the thousands. A person may also download AAC apps on their mobile devices, which range from $49.99 to $299.99 for the app alone.

“For many people who have physical and cognitive differences, they often exhaust themselves to learn a complex AAC system that has limits,” she said. “The Enabled Play device allows individuals to leverage their strengths and movements that are already present.”

Internet users have applauded Dunn for his work, noting that asking for accessibility should not equate to asking for an “easy mode” — a misconception often cited by critics of making games more accessible.

“This is how you make gaming accessible,” one Reddit user wrote about Enabled Play. “Not by dumbing it down, but by creating mechanical solutions that allow users to have the same experience and accomplish the same feats as [people without disabilities].” Another user who said they regularly worked with young patients with cerebral palsy speculated that Enabled Play “would quite literally change their lives.”

Click here to read the full article on The Washington Post.

Diagnosing Mental Health Disorders Through AI Facial Expression Evaluation


From Unite

Researchers from Germany have developed a method for identifying mental disorders based on facial expressions interpreted by computer vision.

The new approach can not only distinguish between affected and unaffected subjects, but can also differentiate depression from schizophrenia, and indicate the degree to which the patient is currently affected by the disease.

The researchers have provided a composite image that represents the control group for their tests (on the left in the image below) and the patients who are suffering from mental disorders (right). The identities of multiple people are blended in the representations, and neither image depicts a particular individual:

Individuals with affective disorders tend to have raised eyebrows, leaden gazes, swollen faces and hang-dog mouth expressions. To protect patient privacy, these composite images are the only ones made available in support of the new work.

Until now, facial affect recognition has been primarily used as a potential tool for basic diagnosis. The new approach, instead, offers a possible method to evaluate patient progress throughout treatment, or else (potentially, though the paper does not suggest it) in their own domestic environment for outpatient monitoring.

The paper states:

‘Going beyond machine diagnosis of depression in affective computing, which has been developed in previous studies, we show that the measurable affective state estimated by means of computer vision contains far more information than the pure categorical classification.’

The researchers have dubbed this technique Opto Electronic Encephalography (OEG), a completely passive method of inferring mental state by facial image analysis instead of topical sensors or ray-based medical imaging technologies.

The authors conclude that OEG could potentially be not just a mere secondary aid to diagnosis and treatment but, in the long term, a potential replacement for certain evaluative parts of the treatment pipeline, one that could cut down on the time necessary for patient monitoring and initial diagnosis. They note:

‘Overall, the results predicted by the machine show better correlations compared to the pure clinical observer rating based questionnaires and are also objective. The relatively short measurement period of a few minutes for the computer vision approaches is also noteworthy, whereas hours are sometimes required for the clinical interviews.’

However, the authors are keen to emphasize that patient care in this field is a multi-modal pursuit, with many other indicators of patient state to be considered than just their facial expressions, and that it is too early to consider that such a system could entirely substitute traditional approaches to mental disorders. Nonetheless, they consider OEG a promising adjunct technology, particularly as a method to grade the effects of pharmaceutical treatment in a patient’s prescribed regime.

The paper is titled The Face of Affective Disorders, and comes from eight researchers across a broad range of institutions from the private and public medical research sector.

Data

(The new paper deals mostly with the various theories and methods currently popular in diagnosing mental disorders, paying less attention than usual to the actual technologies and processes used in the tests and experiments.)

Data-gathering took place at University Hospital Aachen, with 100 gender-balanced patients and a control group of 50 unaffected people. The patients comprised 35 with schizophrenia and 65 with depression.

For the patient portion of the test group, initial measurements were taken at the time of first hospitalization, and a second set prior to discharge, spanning an average interval of 12 weeks. The control-group participants were recruited arbitrarily from the local population, with their own induction and ‘discharge’ mirroring that of the actual patients.

In effect, the most important ‘ground truth’ for such an experiment must be diagnoses obtained by approved and standard methods, and this was the case for the OEG trials.

However, the data-gathering stage obtained additional data more suited to machine interpretation: interviews averaging 90 minutes were captured over three phases with a Logitech C270 consumer webcam running at 25fps.

The first session consisted of a standard Hamilton interview (based on research originating around 1960), such as would normally be given on admission. In the second phase, unusually, the patients (and their counterparts in the control group) were shown videos of a series of facial expressions, and asked to mimic each of these, while stating their own estimation of their mental condition at that time, including emotional state and intensity. This phase lasted around ten minutes.

In the third and final phase, the participants were shown 96 videos of actors, lasting just over ten seconds each, apparently recounting intense emotional experiences. The participants were then asked to evaluate the emotion and intensity represented in the videos, as well as their own corresponding feelings. This phase lasted around 15 minutes.

Click here to read the full article on Unite.

Meet Jonny Huntington – the man set to be the first to solo the South Pole with a significant disability


By Oli Ballard, Business Leader

In November 2023, Jonny will embark on a journey to the South Pole from the continental shelf of Antarctica, a distance of over 900km. He is doing this alone and will become the first disabled person ever to solo the South Pole.

As part of the expedition, Jonny has put together a training timeline that starts in July 2022 along the South West Coast Path. The total distance of the coastal path is 630 miles, and in total he will burn 5,524 calories.

In 2014, Jonny had a brain bleed that left him paralysed from the neck down on his left side. Following extensive rehabilitation and discharge from the Army, he returned to the world of elite sport as a disabled athlete, competing for Great Britain in cross country skiing.

Jonny comments: “I’m ready to go and take on this challenge. First and foremost, I’m an athlete. My injury hasn’t changed this. It may cause me to rethink my approach, but intrinsically the challenge is the same- with the right attitude and hard work, anything is achievable.

“I’m delighted to be working together with Business Leader to have their media support.”

Business Leader is covering Jonny’s expedition and will be hosting a speaking event with him in the coming months.

Click here to read the full article on Business Leader.

Gamifying Fear: VR Exposure Therapy Shown To Be Effective At Treating Severe Phobias

Girl using virtual reality goggles watching a spider. Photo: Donald Iain Smith/Getty Images

By Cassidy Ward, SyFy

In the 2007 horror film House of Fears (now streaming on Peacock!), a group of teenagers enters the titular haunted house the night before it is set to open. Once inside, they encounter a grisly set of horrors leaving some of them dead and others terrified. For many, haunted houses are a fun way to intentionally trigger a fear response. For others, fear is something they live with on a daily basis and it’s anything but fun.

Roughly 8% of adults report a severe fear of flying; between 3 and 15% have a fear of spiders; and between 3 and 6% have a fear of heights. Taken together, along with folks who have a fear of needles, dogs, or any number of other phobias, there’s a good chance you know someone living with a fear serious enough to impact their life. You might even have such a phobia yourself.

There are, thankfully, a number of treatments a person can undergo in order to cope with a debilitating phobia. However, those treatments often require traveling someplace else and having access to medical care, something which isn’t always available or possible. With that in mind, scientists from the Department of Psychological Medicine at the University of Otago have investigated the use of virtual reality to remotely treat severe phobias with digital exposure therapy. Their findings were published in the Australian and New Zealand Journal of Psychiatry.

Prior studies into the efficacy of virtual reality for the treatment of phobias were reliant on high-end VR rigs which can be expensive and difficult to acquire for the average patient. They also focused on specific phobias. The team at the University of Otago wanted something that could reach a higher number of patients, both in terms of content and access to equipment.

They used oVRcome, a widely available smartphone app anyone can download from their phone’s app store. The app has virtual reality content related to a number of common phobias in addition to the five listed above. Moreover, because it runs on your smartphone, it can be experienced using any number of affordable VR headsets which your phone slides into.

Participants enter their phobias and their severity on a scale and are presented with a series of virtual experiences designed to gently and progressively expose them to their fear. The study involved 129 people between the ages of 18 and 64, all of whom reported one or more of the five target phobias. They used oVRcome over the course of six weeks, with weekly emailed questionnaires measuring their progress. Participants also had access to a clinical psychologist in the event that they experienced any adverse effects from the study.
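The gradual-exposure loop described above can be sketched as follows. This is an illustrative sketch, not oVRcome’s actual algorithm; the level names and the advancement threshold are invented:

```python
# Illustrative sketch of graded exposure: the user advances to a more
# intense scene only after reporting low anxiety at the current level.

LEVELS = ["photo of a spider", "video of a spider", "VR spider nearby",
          "VR spider on virtual hand"]

def next_level(current, anxiety_rating, advance_below=4):
    """Advance one step when self-reported anxiety (0-10) falls below the
    threshold; otherwise repeat the current level."""
    if anxiety_rating < advance_below and current < len(LEVELS) - 1:
        return current + 1
    return current

level = 0
for rating in [7, 3, 2, 8, 3]:       # weekly self-reports
    level = next_level(level, rating)
print(LEVELS[level])                  # ends on the most intense tolerated scene
```

The key property is that progression is gated by the user’s own ratings, so a bad week simply holds the program at the current intensity instead of pushing the user forward.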

Participants were given a baseline score measuring the severity of their phobia and were measured again at a follow up 12 weeks after the start of the program. At baseline, participants averaged a score of 28 out of 40, indicating moderate to severe symptoms. By the end of the trial, the average score was down to 7, indicating minimal symptoms. Some participants even indicated they had overcome their phobia to the extent that they felt comfortable booking a flight, scheduling a medical procedure involving needles, or capturing and releasing a spider from their home, something they weren’t comfortable doing at the start.

Part of what makes the software so effective is the diversity of programming available and the ability for individuals to tailor the program to their own unique experience. Additionally, exposure therapy is coupled with additional virtual modules, including relaxation, mindfulness, cognitive techniques, and psychoeducation.

Click here to read the full article on SyFy.

Can Virtual Reality Help Autistic Children Navigate the Real World?

Mr. Ravindran adjusts his son’s VR headset between lessons. “It was one of the first times I’d seen him do pretend play like that,” Mr. Ravindran said of the time when his son used Google Street View through a headset, then went into his playroom and acted out what he had experienced in VR. “It ended up being a light bulb moment.”

By Gautham Nagesh, New York Times

This article is part of Upstart, a series on young companies harnessing new science and technology.

Vijay Ravindran has always been fascinated with technology. At Amazon, he oversaw the team that built and launched Amazon Prime. Later, he joined the Washington Post as chief digital officer, where he advised Donald E. Graham on the sale of the newspaper to his former boss, Jeff Bezos, in 2013.

By late 2015, Mr. Ravindran was winding down his time at the renamed Graham Holdings Company. But his primary focus was his son, who was then 6 years old and undergoing therapy for autism.

“Then an amazing thing happened,” Mr. Ravindran said.

Mr. Ravindran was noodling around with a virtual reality headset when his son asked to try it out. After spending 30 minutes using the headset in Google Street View, the child went to his playroom and started acting out what he had done in virtual reality.

“It was one of the first times I’d seen him do pretend play like that,” Mr. Ravindran said. “It ended up being a light bulb moment.”

Like many autistic children, Mr. Ravindran’s son struggled with pretend play and other social skills. His son’s ability to translate his virtual reality experience to the real world sparked an idea. A year later, Mr. Ravindran started a company called Floreo, which is developing virtual reality lessons designed to help behavioral therapists, speech therapists, special educators and parents who work with autistic children.

The idea of using virtual reality to help autistic people has been around for some time, but Mr. Ravindran said the widespread availability of commercial virtual reality headsets since 2015 had enabled research and commercial deployment at much larger scale. Floreo has developed almost 200 virtual reality lessons that are designed to help children build social skills and train for real world experiences like crossing the street or choosing where to sit in the school cafeteria.

Last year, as the pandemic exploded demand for telehealth and remote learning services, the company delivered 17,000 lessons to customers in the United States. Experts in autism believe the company’s flexible platform could go global in the near future.

That’s because the demand for behavioral and speech therapy as well as other forms of intervention to address autism is so vast. Getting a diagnosis for autism can take months — crucial time in a child’s development when therapeutic intervention can be vital. And such therapy can be costly and require enormous investments of time and resources by parents.

The Floreo system requires an iPhone (version 7 or later) and a V.R. headset (a low-end model costs as little as $15 to $30), as well as an iPad, which can be used by a parent, teacher or coach in-person or remotely. The cost of the program is roughly $50 per month. (Floreo is currently working to enable insurance reimbursement, and has received Medicaid approval in four states.)

A child dons the headset and navigates the virtual reality lesson, while the coach — who can be a parent, teacher, therapist, counselor or personal aide — monitors and interacts with the child through the iPad.

The lessons cover a wide range of situations, such as visiting the aquarium or going to the grocery store. Many of the lessons involve teaching autistic children, who may struggle to interpret nonverbal cues, to interpret body language.

Autistic self-advocates note that behavioral therapy to treat autism is controversial among those with autism, arguing that it is not a disease to be cured and that therapy is often imposed on autistic children by their non-autistic parents or guardians. Behavioral therapy, they say, can harm or punish children for behaviors such as fidgeting. They argue that rather than conditioning autistic people to act like neurotypical individuals, society should be more welcoming of them and their different manner of experiencing the world.

“A lot of the mismatch between autistic people and society is not the fault of autistic people, but the fault of society,” said Zoe Gross, the director of advocacy at the Autistic Self Advocacy Network. “People should be taught to interact with people who have different kinds of disabilities.”

Mr. Ravindran said Floreo respected all voices in the autistic community, where needs are diverse. He noted that while Floreo was used by many behavioral health providers, it had been deployed in a variety of contexts, including at schools and in the home.

“The Floreo system is designed to be positive and fun, while creating positive reinforcement to help build skills that help acclimate to the real world,” Mr. Ravindran said.

In 2017, Floreo secured a $2 million fast track grant from the National Institutes of Health. The company is first testing whether autistic children will tolerate headsets, then conducting a randomized controlled trial to test the method’s usefulness in helping autistic people interact with the police.

Early results have been promising: According to a study published in the Autism Research journal (Mr. Ravindran was one of the authors), 98 percent of the children completed their lessons, quelling concerns about autistic children with sensory sensitivities being resistant to the headsets.

Ms. Gross said she saw potential in virtual reality lessons that helped people rehearse unfamiliar situations, such as Floreo’s lesson on crossing the street. “There are parts of Floreo to get really excited about: the airport walk through, or trick or treating — a social story for something that doesn’t happen as frequently in someone’s life,” she said, adding that she would like to see a lesson for medical procedures.

However, she questioned a general emphasis by the behavioral therapy industry on using emerging technologies to teach autistic people social skills.

A second randomized controlled trial using telehealth, conducted by Floreo under another N.I.H. grant, is underway, in hopes of showing that Floreo’s approach is as effective as in-person coaching.

But it was those early successes that convinced Mr. Ravindran to commit fully to the project.

“There were just a lot of really excited people,” he said. “When I started showing families what we had developed, people would just give me a big hug. They would start crying that there was someone working on such a high-tech solution for their kids.”

Clinicians who have used the Floreo system say the virtual reality environment makes it easier for children to focus on the skill being taught in the lessons, unlike in the real world where they might be overwhelmed by sensory stimuli.

Celebrate the Children, a nonprofit private school in Denville, N.J., for children with autism and related challenges, hosted one of the early pilots for Floreo; Monica Osgood, the school’s co-founder and executive director, said the school had continued to use the system.

Click here to read the full article on New York Times.

Disability In Hollywood: The Road Traveled And The Road Ahead


By Josh Wilson, Forbes

Hollywood plays a massive part in shaping our understanding of different groups and helps us gain insight into worlds and cultures we may never have been able to on our own. The movies and TV series that flood our screens are more than just entertainment; they’re education. But with great power and influence comes great responsibility as there’s always the danger of misrepresentation.

Over the years, Hollywood has faced backlash from several communities and social movements over misrepresentation and underrepresentation. Black Lives Matter, LGBTQ advocacy, the MeToo movement, and protests like the #OscarsSoWhite campaign come to mind.

People with disabilities, and especially racialized groups with disabilities, should also be at the forefront of this conversation, but they aren’t. This is a huge problem, especially considering that about a billion people worldwide live with some form of disability. In the U.S., one in five people have a disability; among adults specifically, the figure is about 26 percent, roughly one in four, according to the CDC.

“It’s almost impossible not to find people living with disabilities in any of these communities that feel let down by the entertainment industry’s depiction of their reality,” Musab said. “The discussion about proper inclusion and authentic depictions of a disabled person’s circumstances can only bode well for these groups and the industry as a whole.”

Disability isn’t new to the entertainment industry
Hollywood and the wider entertainment industry have many popular figures who are on the disability spectrum. Michael J. Fox has been diagnosed with Parkinson’s disease, Jim Carrey has talked about having ADHD, and Billie Eilish was diagnosed with Tourette Syndrome as a child, to mention a few.

Many of Hollywood’s big names have also brought awareness to various disabilities by talking about their condition, advocating for better understanding and acceptance of people with disabilities, or donating to their cause. The industry has also taken steps to shine a light on disabilities by making movies and TV productions focused on varying disabilities, or casting lead characters as people with disabilities.

The problem here is that the bigger picture still tells a story of underrepresentation and a lack of inclusion, with only 3.5 percent of series-regular characters being disabled in 2020, according to GLAAD. Another study found that this number was considerably higher in 2018 (12 percent higher, in fact), but that the majority of these characters were portrayed negatively.

There have been reports over the years of actors, writers, and other workers in entertainment losing their jobs or not being considered for a position due to disability-related issues. So while some of the silver screen’s most loved names play the roles of disabled characters and win awards and recognition, the disabled community isn’t seeing any reasonable increase in inclusion and accessibility in the industry. In fact, about 95 percent of characters with disabilities in Hollywood’s top shows are played by able-bodied actors, and at the 2019 Oscars, only two of the 61 nominees and 27 winners who played disabled characters were actually disabled.

This gives credence to the concern of inauthentic portrayals of any given disability or disabled person. “It has never made sense to me that disabled characters in our shows and movies are played by people who have no disability,” Musab opines. “You can’t give what you don’t have, not optimally anyway. The way I see it, it’s like getting Cameron Diaz to play Harriet Tubman. No matter how pure her intentions and commitment to deliver on the role, she simply won’t be able to do it justice. It is an indictment of the abilities of disabled artists.”

The real focus is not only on disability across the Hollywood spectrum but on the lack of inclusivity for racialized groups within the disabled community. The stories of their lives may have been voiced on several platforms, but never through the lens of the Hollywood industry. It is important for racialized groups within the disabled community not only to be recognized, but to be seen across the full spectrum of representation.

Click here to read the full article on Forbes.

Olney Theatre reimagines ‘The Music Man’ with a deaf Harold Hill

James Caverly plays professor Harold Hill in The Music Man at Olney Theatre Center. (Teresa Castracane Photography)

By The Washington Post

James Caverly was working as a carpenter in Olney Theatre Center’s scene shop some seven years ago when he laid the foundation for an unconventional undertaking: a production of “The Music Man” featuring a blend of deaf and hearing actors.

At the time, the Gallaudet University alumnus was finding roles for deaf actors hard to come by. Having recently seen Deaf West’s 2015 production of “Spring Awakening” — performed on Broadway in American Sign Language and spoken English — Caverly thought the time was right for a D.C. theater to follow suit. So when Olney Artistic Director Jason Loewith encouraged staff to approach him with ideas for shows, Caverly spoke up.

“It’s like when Frankenstein’s monster came up to Dr. Frankenstein and said, ‘I need a wife,’ ” Caverly says during a recent video chat. “That was me with Jason Loewith saying, ‘Hey, I need a production.’ ” (With the exception of Loewith, all interviews for this story were conducted with the assistance of an ASL interpreter.)

The sales pitch worked: Loewith greenlighted a workshop to explore Caverly’s concept, then set the musical for the summer of 2021 before the coronavirus pandemic intervened. During the delay, Caverly’s profile spiked: He booked a recurring role on Steve Martin and Martin Short’s Hulu comedy “Only Murders in the Building,” earning widespread acclaim for a nearly silent episode focused on his morally complicated character.

Equipped with newfound cachet, Caverly has returned to Olney — this time, leaving his carpentry tools behind. Featuring deaf, hearing and hard of hearing actors, with Caverly starring as slippery con man Harold Hill, a bilingual production of “The Music Man” marches onto the theater’s main stage this week.

“What [Caverly] possesses is a presence and a charm and a charisma and a drive and a passion that is, in some way, Harold Hill,” Loewith says. “I mean, think about how he got this production to happen: He totally Harold Hilled me. But he’s a con man that I like.”

Olney’s production of “The Music Man” features a cast that mixes deaf, hard of hearing and hearing actors. (Teresa Castracane Photography)

In fitting Hill fashion, Caverly won over his mark despite some initial skepticism. Although Loewith says his concerns were mostly focused on the logistics of staging what’s traditionally a sprawling show, he also recalled pressing Caverly on the idea’s artistic merits.

“I didn’t want to just do it as, ‘Here’s us being inclusive,’ ” Loewith says. “I wanted to be like, ‘What is a musical that needs this kind of storytelling?’ ”

That’s when Caverly filled in Loewith on the history of Martha’s Vineyard: In the 19th century, a genetic anomaly led to such a prominent deaf population — about 1 in 25 residents — that the island’s native sign language became ubiquitous, and deaf people were fully integrated into the community.

So what if River City, the backwater Iowa town where “The Music Man” unfolds, was like Martha’s Vineyard? Caverly, like many of his deaf peers, also learned to play an instrument in his youth — in his case, the guitar. Thus, the idea of the traveling salesman Hill swindling the locals into investing in a boys’ marching band, with the intent of skipping town before teaching them a note, held up as well.

“The beautiful thing about this story is that Harold Hill never really teaches the kids music,” Caverly says, “so he doesn’t really have to hear music and he doesn’t have to play these musical instruments.”

Click here to read the full article in The Washington Post.

Meet the Black Doctor Reshaping the Industry With Virtual Prosthetic Clinics to Help Amputee Patients

LinkedIn
Dr. Hassan Akinbiyi, the Black Doctor Reshaping the Industry With Virtual Prosthetic Clinics to Help Amputee Patients

By YAHOO! Entertainment

Dr. Hassan Akinbiyi, a leader in physiatry and rehabilitative medicine from Scottsdale, Ariz., is pleased to announce his partnership with Hanger Clinic to provide Virtual Prosthetic Clinics.

Dr. Hassan, a highly esteemed board-certified physiatrist, is preoperatively involved in explaining the process from limb loss to independence with a prosthesis.

Through the Virtual Prosthetic Clinics, he is reshaping the prosthetic rehabilitation program by using telehealth for diagnosis, evaluation, and prosthetic care. As a result, a patient can now afford to receive specialized prosthetic services virtually.

He helps set the patient up for success by assisting with their transition through the post-acute care continuum, overseeing their prosthetic care, and ensuring they are thriving. In addition, his expertise and extensive knowledge as a physiatrist enable him to navigate the insurance process for prosthetic devices and issue all necessary documentation.

Regardless of an amputee patient’s entry point, Dr. Hassan ensures they receive the necessary care to resume their life’s activities when they desire it most.

Click here to read the full article on YAHOO! Entertainment.

Disability Inclusion Is Coming Soon to the Metaverse

LinkedIn
Disabled avatars from the metaverse in a wheelchair

By Christopher Reardon, PC Mag

When you think of futurism, you probably don’t think of the payroll company ADP—but that’s where Giselle Mota works as the company’s principal consultant on the “future of work.” Mota, who has given a TED Talk and has written for Forbes, is committed to bringing more inclusion and access to the Web3 and metaverse spaces. She’s also been working on a side project called Unhidden, which will provide disabled people with accurate avatars, so they’ll have the option to remain themselves in the metaverse and across Web3.

To See and Be Seen
The goal of Unhidden is to encourage tech companies to be more inclusive, particularly of people with disabilities. The project has launched and already has a partnership with the Wanderland app, which will feature Unhidden avatars through its mixed-reality platform at the VivaTech Conference in Paris and the DisabilityIN Conference in Dallas. The first 12 avatars will come out this summer with Mota, Dr. Tiffany Jana, Brandon Farstein, Tiffany Yu, and other global figures representing disability inclusion.

The above array of individuals is known as the NFTY Collective. Its members hail from countries including America, the UK, and Australia, and the collective represents a spectrum of disabilities, ranging from the invisible type, such as bipolar disorder and other forms of neurodiversity, to the more visible, including hypoplasia and dwarfism.

Hypoplasia causes the underdevelopment of an organ or tissue. For Isaac Harvey, the disease manifested by leaving him with no arms and short legs. Harvey uses a wheelchair and is the president of Wheels for Wheelchairs, along with being a video editor. He got involved with Unhidden after being approached by Victoria Jenkins, an inclusive fashion designer who co-created the project with Mota.

Click here to read the full article on PC Mag.

For people with disabilities, AI can only go so far to make the web more accessible

LinkedIn
Woman's Hands Working From Home on Computer while looking at her iPhone

By Kate Kaye, Protocol

“It’s a lot to listen to a robot all day long,” said Tina Pinedo, communications director at Disability Rights Oregon, a group that works to promote and defend the rights of people with disabilities.

But listening to a machine is exactly what many people with visual impairments do while using screen reading tools to accomplish everyday online tasks such as paying bills or ordering groceries from an ecommerce site.

“There are not enough web developers or people who actually take the time to listen to what their website sounds like to a blind person. It’s auditorily exhausting,” said Pinedo.

Whether struggling to comprehend a screen reader barking out dynamic updates to a website, trying to make sense of poorly written video captions or watching out for fast-moving imagery that could induce a seizure, the everyday obstacles blocking people with disabilities from a satisfying digital experience are immense.

Needless to say, technology companies have tried to step in, often promising more than they deliver to users and businesses hoping that automated tools can break down barriers to accessibility. Although automated tools that check website designs for accessibility flaws have been around for some time, companies such as Evinced claim that sophisticated AI not only does a better job of automatically finding and helping correct accessibility problems, but can do it for large enterprises that need to manage thousands of website pages and app content.

Still, people with disabilities and those who regularly test for web accessibility problems say automated systems and AI can only go so far. “The big danger is thinking that some type of automation can replace a real person going through your website, and basically denying people of their experience on your website, and that’s a big problem,” Pinedo said.

Why Capital One is betting on accessibility AI
For a global corporation such as Capital One, relying on a manual process to catch accessibility issues is a losing battle.

“We test our entire digital footprint every month. That’s heavily reliant on automation as we’re testing almost 20,000 webpages,” said Mark Penicook, director of Accessibility at the banking and credit card company, whose digital accessibility team is responsible for all digital experiences across Capital One including websites, mobile apps and electronic messaging in the U.S., the U.K. and Canada.

Even though Capital One has a team of people dedicated to the effort, Penicook said he has had to work to raise awareness about digital accessibility among the company’s web developers. “Accessibility isn’t taught in computer science,” Penicook told Protocol. “One of the first things that we do is start teaching them about accessibility.”

One way the company does that is by celebrating Global Accessibility Awareness Day each year, Penicook said. Held on Thursday, the annual worldwide event is intended to educate people about digital access and inclusion for those with disabilities and impairments.

Before Capital One gave Evinced’s software a try around 2018, its accessibility evaluations for new software releases or features relied on manual review and other tools. Using Evinced’s software, Penicook said the financial services company’s accessibility testing takes hours rather than weeks, and Capital One’s engineers and developers use the system throughout their internal software development testing process.

It was enough to convince Capital One to invest in Evinced through its venture arm, Capital One Ventures. Microsoft’s venture group, M12, also joined a $17 million funding round for Evinced last year.

Evinced’s software automatically scans webpages and other content, and then applies computer vision and visual analysis AI to detect problems. The software might discover a lack of contrast between font and background colors that makes it difficult for people with vision impairments like color blindness to read. The system might find images that do not have alt text, the metadata that screen readers use to explain what’s in a photo or illustration. Rather than pointing out individual problems, the software uses machine learning to find patterns that indicate when the same type of problem is happening in several places and suggests a way to correct it.
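The contrast check, at least, is fully mechanical: the WCAG 2.x guidelines define a contrast ratio over the relative luminance of two colors, and body text is expected to reach at least 4.5:1 for the AA level. As an illustration only (this is the published WCAG formula, not Evinced’s code), a minimal Python sketch:

```python
def srgb_to_linear(c):
    """Convert one 0-255 sRGB channel to linear light (WCAG 2.x formula)."""
    c /= 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """WCAG relative luminance of an (R, G, B) color."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter color on top."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Mid-gray (#777777) on white comes in just under the 4.5:1 AA threshold.
print(contrast_ratio((119, 119, 119), (255, 255, 255)) >= 4.5)  # False
```

A scanner only has to walk the rendered page, compute this ratio for each text/background pair, and flag anything below threshold.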

“It automatically tells you, instead of a thousand issues, it’s actually one issue,” said Navin Thadani, co-founder and CEO of Evinced.
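Evinced’s clustering itself is proprietary, but the idea of collapsing many instances of one template bug into a single report can be sketched with just the Python standard library. In this hypothetical example, images with missing or empty alt text are collected and then grouped by a crude key (the file extension), standing in for the ML-based pattern matching Thadani describes:

```python
from collections import defaultdict
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collect the src of every <img> whose alt text is missing or empty."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt")
        if alt is None or not alt.strip():
            self.missing.append(attrs.get("src", "<no src>"))

def audit(html):
    """Return missing-alt offenders grouped into 'one issue' buckets."""
    auditor = AltTextAuditor()
    auditor.feed(html)
    groups = defaultdict(list)
    for src in auditor.missing:
        ext = src.rsplit(".", 1)[-1] if "." in src else "?"
        groups[ext].append(src)
    return dict(groups)

page = """
<html><body>
<img src="logo.png" alt="Company logo">
<img src="promo1.jpg">
<img src="promo2.jpg" alt="">
</body></html>
"""
print(audit(page))  # {'jpg': ['promo1.jpg', 'promo2.jpg']}
```

Instead of two separate findings, the report surfaces one bucket of promo images sharing the same flaw, which is the shape of the “a thousand issues is actually one issue” claim.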

The software also takes context into account, factoring in the purpose of a site feature or considering the various operating systems or screen-reader technologies that people might use when visiting a webpage or other content. For instance, it identifies user design features that might be most accessible for a specific purpose, such as a button to enable a bill payment transaction rather than a link.

Some companies use tools typically referred to as “overlays” to check for accessibility problems. Many of those systems are web plug-ins that add a layer of automation on top of existing sites to enable modifications tailored to peoples’ specific requirements. One product that uses computer vision and machine learning, accessiBe, allows people with epilepsy to choose an option that automatically stops all animated images and videos on a site before they could pose a risk of seizure. The company raised $28 million in venture capital funding last year.

Another widget from TruAbilities offers an option that limits distracting page elements to allow people with neurodevelopmental disorders to focus on the most important components of a webpage.

Some overlay tools have been heavily criticized for adding new annoyances to the web experience and providing surface-level responses to problems that deserve more robust solutions. Some overlay tech providers have “pretty brazen guarantees,” said Chase Aucoin, chief architect at TPGi, a company that provides accessibility automation tools and consultation services to customers, including software development monitoring and product design assessments for web development teams.

“[Overlays] give a false sense of security from a risk perspective to the end user,” said Aucoin, who himself experiences motor impairment. “It’s just trying to slap a bunch of paint on top of the problem.”

In general, complicated site designs or interfaces that automatically hop to a new page section or open a new window can create a chaotic experience for people using screen readers, Aucoin said. “A big thing now is just cognitive; how hard is this thing for somebody to understand what’s going on?” he said.

Even more sophisticated AI-based accessibility technologies don’t address every disability issue. For instance, people with an array of disabilities either need or prefer to view videos with captions, rather than having sound enabled. However, although automated captions for videos have improved over the years, “captions that are computer-generated without human review can be really terrible,” said Karawynn Long, an autistic writer with central auditory processing disorder and hyperlexia, a hyperfocus on written language.

“I always appreciate when written transcripts are included as an option, but auto-generated ones fall woefully short, especially because they don’t include good indications of non-linguistic elements of the media,” Long said.

Click here to read the full article on Protocol.

Air Force Civilian Service

American Family Insurance

United States Postal Services-Diversity

Alight

Leidos

Upcoming Events

  1. City Career Fair
    January 19, 2022 - November 4, 2022
  2. The Small Business Expo–Multiple Event Dates
    February 17, 2022 - December 1, 2022
  3. Join us in D.C. for Tapia 2022!
    September 6, 2022 - September 10, 2022
  4. The 2022 Global ERG Summit
    September 19, 2022 - September 23, 2022
  5. ROMBA Conference
    October 6, 2022 - October 8, 2022
