The 3D-printed wheelchair: a revolution in comfort?


If Benjamin Hubert gets his way, the days of the one-size-fits-all wheelchair could be numbered.

For millions of users, the essential mobility device can feel clinical, mechanical and uncomfortable. Now the British industrial designer thinks he’s found the answer: 3D printing.

His agency, Layer, uses 3D digital data to map biometric information. The technology allows the design team to create a bespoke wheelchair that fits an individual’s body shape, weight and disability.

“This… object performed better, decreased injuries and expressed the individual’s sense of style, movement and emotions,” Hubert said.

The resin and plastic material used for the seat absorbs shocks and the design ensures the user has the best center of gravity.

Layer, which once focused largely on creating high-end home furnishing products, has shifted its attention to projects that have more social impact.

It involved users in the design process by creating an app where they could specify optional elements, patterns and colors.

It also interviewed users to find out about pain points and frustrations before they began developing prototypes.

“It’s a very stigmatized, emotionally charged area of thinking so it became really important for us that it embodied points and views and opinions and voices of the people who we were designing for,” Hubert said.

Continue on to CNN Tech to read the complete article.

Your first career move, powered by Netflix


Netflix is partnering with Formation to build a world where people from every walk of life have a seat at the table in tech.

Our program will be completely free of charge for students accepted. It is designed to unlock your engineering potential with personalized training and world-class mentorship from the best engineers across the tech industry.

The application asks for the information below, along with why you want to land a New Grad Engineering role at Netflix.

The application requires:

Info about your experience, education, and background

Info regarding your eligibility for the program

A one-minute video telling us about yourself

Apply today at https://formation.dev/partners/netflix

Application deadline is March 5, 2023.

Cracking the code: Working together to engage and empower female technologists at Bloomberg


To create products that serve increasingly diverse customers and solve a wider range of social problems, technology companies need women engineers. However, only 25 percent of math and computer science jobs in the United States are filled by women, and one-third of women in the U.S. and China quit these jobs mid-career due to factors like social isolation, a lack of access to creative technical roles and difficulty advancing to leadership positions.

At Bloomberg, we’ve established a company culture that supports gender equality in a multitude of ways – from company-wide Diversity & Inclusion business plans to a newly expanded family leave policy. But we know that’s not enough. In recent years, we’ve adopted a system-wide approach to increasing the number of women in technical roles, taking steps to remove barriers to advancement both inside our organization and beyond Bloomberg, supporting female talent from middle school through mid-career.

While the number of women in technical jobs at Bloomberg is growing, we’re committed to making progress faster and completing all the steps needed to solve the equation. Here are some of the ways we’re tackling this important deficit – and making quantifiable change.

Early engagement

Bloomberg supports organizations that help increase women’s participation in STEM and financial technology, exposing students to various career options through Bloomberg Startup and encouraging our female engineers to engage with the next generation of talent.

Collaboration, creativity, and a love of problem-solving drew Chelsea Ohh to the field of engineering. Now she works at Bloomberg as a software engineering team lead, helping to provide critical information to financial decision makers across the globe.

Recruitment

We target our entry-level engineering recruiting efforts at colleges that have achieved or are focused on gender parity in their STEM classes. And because the best talent doesn’t all come from the same schools or share the same experiences, Bloomberg actively seeks women engineers with non-traditional backgrounds or career paths.

Talent development

Women engineers can sharpen their technical skills through open courses, on-site training sessions, and business hackathons held throughout the year. Bloomberg is committed to inspiring our female employees, eliminating barriers like impostor syndrome, and encouraging them to pursue opportunities in engineering.

Community & allies

To strengthen the company’s network of female engineers, Bloomberg’s global BWIT (Bloomberg Women in Technology) chapters organize more than 150 events, mentoring sessions, and meet-ups a year. The community also engages male allies and advocates, sharing strategies to help them support their female colleagues.

Click here to read the full article on Bloomberg.

The latest video game controller isn’t plastic. It’s your face.

Dunn playing “Minecraft” using voice commands on the Enabled Play controller, facial-expression controls via a phone and virtual buttons on Xbox’s adaptive controller. (Courtesy of Enabled Play Game Controller)

By Amanda Florian, The Washington Post

Over decades, input devices in the video game industry have evolved from simple joysticks to sophisticated controllers that emit haptic feedback. But with Enabled Play, a new piece of assistive tech created by self-taught developer Alex Dunn, users are embracing a different kind of input: facial expressions.

While companies like Microsoft have sought to expand accessibility through adaptive controllers and accessories, Dunn’s new device takes those efforts even further, translating users’ head movements, facial expressions, real-time speech and other nontraditional input methods into mouse clicks, key strokes and thumbstick movements. The device has users raising eyebrows — quite literally.

“Enabled Play is a device that learns to work with you — not a device you have to learn to work with,” Dunn, who lives in Boston, said via Zoom.

Dunn, 26, created Enabled Play so that everyone — including his younger brother with a disability — can interface with technology in a natural and intuitive way. At the beginning of the pandemic, the only thing he and his New Hampshire-based brother could do together, while approximately 70 miles apart, was game.

“And that’s when I started to see firsthand some of the challenges that he had and the limitations that games had for people with really any type of disability,” he added.

At 17, Dunn dropped out of Worcester Polytechnic Institute to become a full-time software engineer. He began researching and developing Enabled Play two and a half years ago; the work initially proved challenging, as most speech-recognition programs lagged in response time.

“I built some prototypes with voice commands, and then I started talking to people who were deaf and had a range of disabilities, and I found that voice commands didn’t cut it,” Dunn said.

That’s when he started thinking outside the box.

Having already built Suave Keys, a voice-powered program for gamers with disabilities, Dunn created Snap Keys — an extension that turns a user’s Snapchat lens into a controller when playing games like “Call of Duty,” “Fall Guys,” and “Dark Souls.” In 2020, he won two awards for his work at Snap Inc.’s Snap Kit Developer Challenge, a competition among third-party app creators to innovate Snapchat’s developer tool kit.

With Enabled Play, Dunn takes accessibility to the next level. With a wider variety of inputs, users can connect the assistive device — equipped with a robust CPU and 8 GB of RAM — to a computer, game console or other device to play games in whatever way works best for them.

Dunn also spent time making sure Enabled Play was accessible to people who are deaf, as well as people who want to use nonverbal audio input, like “ooh” or “aah,” to perform an action. Enabled Play’s vowel sound detection model is based on “The Vocal Joystick,” which engineers and linguistics experts at the University of Washington developed in 2006.

“Essentially, it looks to predict the word you are going to say based on what is in the profile, rather than trying to assume it could be any word in the dictionary,” Dunn said. “This helps cut through machine learning bias by learning more about how the individual speaks and applies it to their desired commands.”

Dunn’s AI-enabled controller takes into account a person’s natural tendencies. If a gamer wants to set up a jump command every time they open their mouth, Enabled Play would identify that person’s individual resting mouth position and set that as the baseline.
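That calibration step can be sketched roughly as follows; the openness values, threshold margin, and function names are illustrative assumptions, not Enabled Play’s real logic:

```python
# Hypothetical sketch: calibrate a user's resting mouth openness, then
# trigger "jump" only when they open noticeably wider than their baseline.

def calibrate(samples: list[float]) -> float:
    """Average mouth-openness (0.0-1.0) measured while the face is at rest."""
    return sum(samples) / len(samples)

def should_jump(openness: float, baseline: float, margin: float = 0.2) -> bool:
    """Fire only when openness exceeds the personal baseline by a margin."""
    return openness > baseline + margin

resting = [0.10, 0.12, 0.11, 0.09]   # this user rests with mouth slightly open
base = calibrate(resting)
print(should_jump(0.15, base))  # False: within this user's resting range
print(should_jump(0.45, base))  # True: a deliberate open-mouth gesture
```

The point of the per-user baseline is that the same 0.15 reading would be a deliberate gesture for one person and a resting position for another.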

In January, Enabled Play officially launched in six countries — its user base extending from the U.S. to the U.K., Ghana and Austria. For Dunn, one of his primary goals was to fill a gap in accessibility and pricing compared to other assistive gaming devices.

“There are things like the Xbox Adaptive Controller. There are things like the HORI Flex [for Nintendo Switch]. There are things like Tobii, which does eye-tracking and stuff like that. But it still seemed like it wasn’t enough,” he said.

Compared to some devices that are only compatible with one gaming system or computer at a time, Dunn’s AI-enabled controller — priced at $249.99 — supports a combination of inputs and outputs. Speech therapists say that compared to augmentative and alternative communication (AAC) devices, which are medically essential for some with disabilities, Dunn’s device offers simplicity.

“This is just the start,” said Julia Franklin, a speech language pathologist at Community School of Davidson in Davidson, N.C. Franklin introduced students to Enabled Play this summer and feels it’s a better alternative to other AAC devices on the market that are often “expensive, bulky and limited” in usability. Many sophisticated AAC systems can range from $6,000 to $11,500 for high-tech devices, with low-end eye-trackers running in the thousands. A person may also download AAC apps on their mobile devices, which range from $49.99 to $299.99 for the app alone.

“For many people who have physical and cognitive differences, they often exhaust themselves to learn a complex AAC system that has limits,” she said. “The Enabled Play device allows individuals to leverage their strengths and movements that are already present.”

Internet users have applauded Dunn for his work, noting that asking for accessibility should not equate to asking for an “easy mode” — a misconception often cited by critics of making games more accessible.

“This is how you make gaming accessible,” one Reddit user wrote about Enabled Play. “Not by dumbing it down, but by creating mechanical solutions that allow users to have the same experience and accomplish the same feats as [people without disabilities].” Another user who said they regularly worked with young patients with cerebral palsy speculated that Enabled Play “would quite literally change their lives.”

Click here to read the full article on The Washington Post.

Diagnosing Mental Health Disorders Through AI Facial Expression Evaluation


By Unite

Researchers from Germany have developed a method for identifying mental disorders based on facial expressions interpreted by computer vision.

The new approach can not only distinguish between unaffected and affected subjects, but can also correctly distinguish depression from schizophrenia, as well as the degree to which the patient is currently affected by the disease.

The researchers have provided a composite image that represents the control group for their tests (left) and the patients who are suffering from mental disorders (right). The identities of multiple people are blended in the representations, and neither image depicts a particular individual.

Individuals with affective disorders tend to have raised eyebrows, leaden gazes, swollen faces and hang-dog mouth expressions. To protect patient privacy, these composite images are the only ones made available in support of the new work.

Until now, facial affect recognition has been primarily used as a potential tool for basic diagnosis. The new approach, instead, offers a possible method to evaluate patient progress throughout treatment, or else (potentially, though the paper does not suggest it) in their own domestic environment for outpatient monitoring.

The paper states:

‘Going beyond machine diagnosis of depression in affective computing, which has been developed in previous studies, we show that the measurable affective state estimated by means of computer vision contains far more information than the pure categorical classification.’

The researchers have dubbed this technique Opto Electronic Encephalography (OEG), a completely passive method of inferring mental state by facial image analysis instead of topical sensors or ray-based medical imaging technologies.

The authors conclude that OEG could potentially be not just a mere secondary aide to diagnosis and treatment, but, in the long term, a potential replacement for certain evaluative parts of the treatment pipeline, and one that could cut down on the time necessary for patient monitoring and initial diagnosis. They note:

‘Overall, the results predicted by the machine show better correlations compared to the pure clinical observer rating based questionnaires and are also objective. The relatively short measurement period of a few minutes for the computer vision approaches is also noteworthy, whereas hours are sometimes required for the clinical interviews.’

However, the authors are keen to emphasize that patient care in this field is a multi-modal pursuit, with many other indicators of patient state to be considered than just their facial expressions, and that it is too early to consider that such a system could entirely substitute traditional approaches to mental disorders. Nonetheless, they consider OEG a promising adjunct technology, particularly as a method to grade the effects of pharmaceutical treatment in a patient’s prescribed regime.

The paper is titled The Face of Affective Disorders, and comes from eight researchers across a broad range of institutions from the private and public medical research sector.

Data

(The new paper deals mostly with the various theories and methods that are currently popular in patient diagnosis of mental disorders, with less attention than is usual to the actual technologies and processes used in the tests and various experiments)

Data-gathering took place at University Hospital Aachen, with 100 gender-balanced patients and a control group of 50 non-affected people. The patients included 35 sufferers from schizophrenia and 65 people suffering from depression.

For the patient portion of the test group, initial measurements were taken at the time of first hospitalization, and a second set prior to their discharge from hospital, spanning an average interval of 12 weeks. The control group participants were recruited arbitrarily from the local population, with their own induction and ‘discharge’ mirroring that of the actual patients.

In effect, the most important ‘ground truth’ for such an experiment must be diagnoses obtained by approved and standard methods, and this was the case for the OEG trials.

However, the data-gathering stage obtained additional data more suited for machine interpretation: interviews averaging 90 minutes were captured over three phases with a Logitech c270 consumer webcam running at 25fps.

The first session consisted of a standard Hamilton interview (based on research originating around 1960), such as would normally be given on admission. In the second phase, unusually, the patients (and their counterparts in the control group) were shown videos of a series of facial expressions, and asked to mimic each of these, while stating their own estimation of their mental condition at that time, including emotional state and intensity. This phase lasted around ten minutes.

In the third and final phase, the participants were shown 96 videos of actors, lasting just over ten seconds each, apparently recounting intense emotional experiences. The participants were then asked to evaluate the emotion and intensity represented in the videos, as well as their own corresponding feelings. This phase lasted around 15 minutes.

Click here to read the full article on Unite.

Gamifying Fear: VR Exposure Therapy Shown To Be Effective At Treating Severe Phobias

Girl using virtual reality goggles watching spider. Photo: Donald Iain Smith/Getty Images

By Cassidy Ward, SyFy

In the 2007 horror film House of Fears (now streaming on Peacock!), a group of teenagers enters the titular haunted house the night before it is set to open. Once inside, they encounter a grisly set of horrors leaving some of them dead and others terrified. For many, haunted houses are a fun way to intentionally trigger a fear response. For others, fear is something they live with on a daily basis and it’s anything but fun.

Roughly 8% of adults report a severe fear of flying; between 3 and 15% endure a fear of spiders; and between 3 and 6% have a fear of heights. Taken together, along with folks who have a fear of needles, dogs, or any number of other life-altering phobias, there’s a good chance you know someone who is living with a fear serious enough to impact their lives. You might even have such a phobia yourself.

There are, thankfully, a number of treatments a person can undergo in order to cope with a debilitating phobia. However, those treatments often require traveling someplace else and having access to medical care, something which isn’t always available or possible. With that in mind, scientists from the Department of Psychological Medicine at the University of Otago have investigated the use of virtual reality to remotely treat severe phobias with digital exposure therapy. Their findings were published in the Australian and New Zealand Journal of Psychiatry.

Prior studies into the efficacy of virtual reality for the treatment of phobias were reliant on high-end VR rigs which can be expensive and difficult to acquire for the average patient. They also focused on specific phobias. The team at the University of Otago wanted something that could reach a higher number of patients, both in terms of content and access to equipment.

They used oVRcome, a widely available smartphone app anyone can download from their phone’s app store. The app has virtual reality content related to a number of common phobias in addition to the five listed above. Moreover, because it runs on your smartphone, it can be experienced using any number of affordable VR headsets which your phone slides into.

Participants enter their phobias and their severity on a scale and are presented with a series of virtual experiences designed to gently and progressively expose the user to their fear. The study involved 129 people between the ages of 18 and 64, all of whom reported at least one of the five target phobias. They used oVRcome over the course of six weeks, with weekly emailed questionnaires measuring their progress. Participants also had access to a clinical psychologist in the event that they experienced any adverse effects from the study.

Participants were given a baseline score measuring the severity of their phobia and were measured again at a follow up 12 weeks after the start of the program. At baseline, participants averaged a score of 28 out of 40, indicating moderate to severe symptoms. By the end of the trial, the average score was down to 7, indicating minimal symptoms. Some participants even indicated they had overcome their phobia to the extent that they felt comfortable booking a flight, scheduling a medical procedure involving needles, or capturing and releasing a spider from their home, something they weren’t comfortable doing at the start.

Part of what makes the software so effective is the diversity of programming available and the ability for an individual to tailor their experiences based on their own unique experience. Additionally, exposure therapy is coupled with additional virtual modules including relaxation, mindfulness, cognitive techniques, and psychoeducation.

Click here to read the full article on SyFy.

Can Virtual Reality Help Autistic Children Navigate the Real World?

Mr. Ravindran adjusts his son’s VR headset between lessons. “It was one of the first times I’d seen him do pretend play like that,” Mr. Ravindran said of the time when his son used Google Street View through a headset, then went into his playroom and acted out what he had experienced in VR. “It ended up being a light bulb moment.”

By Gautham Nagesh, New York Times

This article is part of Upstart, a series on young companies harnessing new science and technology.

Vijay Ravindran has always been fascinated with technology. At Amazon, he oversaw the team that built and launched Amazon Prime. Later, he joined the Washington Post as chief digital officer, where he advised Donald E. Graham on the sale of the newspaper to his former boss, Jeff Bezos, in 2013.

By late 2015, Mr. Ravindran was winding down his time at the renamed Graham Holdings Company. But his primary focus was his son, who was then 6 years old and undergoing therapy for autism.

“Then an amazing thing happened,” Mr. Ravindran said.

Mr. Ravindran was noodling around with a virtual reality headset when his son asked to try it out. After spending 30 minutes using the headset in Google Street View, the child went to his playroom and started acting out what he had done in virtual reality.

“It was one of the first times I’d seen him do pretend play like that,” Mr. Ravindran said. “It ended up being a light bulb moment.”

Like many autistic children, Mr. Ravindran’s son struggled with pretend play and other social skills. His son’s ability to translate his virtual reality experience to the real world sparked an idea. A year later, Mr. Ravindran started a company called Floreo, which is developing virtual reality lessons designed to help behavioral therapists, speech therapists, special educators and parents who work with autistic children.

The idea of using virtual reality to help autistic people has been around for some time, but Mr. Ravindran said the widespread availability of commercial virtual reality headsets since 2015 had enabled research and commercial deployment at much larger scale. Floreo has developed almost 200 virtual reality lessons that are designed to help children build social skills and train for real world experiences like crossing the street or choosing where to sit in the school cafeteria.

Last year, as the pandemic exploded demand for telehealth and remote learning services, the company delivered 17,000 lessons to customers in the United States. Experts in autism believe the company’s flexible platform could go global in the near future.

That’s because the demand for behavioral and speech therapy as well as other forms of intervention to address autism is so vast. Getting a diagnosis for autism can take months — crucial time in a child’s development when therapeutic intervention can be vital. And such therapy can be costly and require enormous investments of time and resources by parents.

The Floreo system requires an iPhone (version 7 or later) and a V.R. headset (a low-end model costs as little as $15 to $30), as well as an iPad, which can be used by a parent, teacher or coach in-person or remotely. The cost of the program is roughly $50 per month. (Floreo is currently working to enable insurance reimbursement, and has received Medicaid approval in four states.)

A child dons the headset and navigates the virtual reality lesson, while the coach — who can be a parent, teacher, therapist, counselor or personal aide — monitors and interacts with the child through the iPad.

The lessons cover a wide range of situations, such as visiting the aquarium or going to the grocery store. Many of the lessons involve teaching autistic children, who may struggle to interpret nonverbal cues, to interpret body language.

Autistic self-advocates note that behavioral therapy to treat autism is controversial among those with autism, arguing that it is not a disease to be cured and that therapy is often imposed on autistic children by their non-autistic parents or guardians. Behavioral therapy, they say, can harm or punish children for behaviors such as fidgeting. They argue that rather than conditioning autistic people to act like neurotypical individuals, society should be more welcoming of them and their different manner of experiencing the world.

“A lot of the mismatch between autistic people and society is not the fault of autistic people, but the fault of society,” said Zoe Gross, the director of advocacy at the Autistic Self Advocacy Network. “People should be taught to interact with people who have different kinds of disabilities.”

Mr. Ravindran said Floreo respected all voices in the autistic community, where needs are diverse. He noted that while Floreo was used by many behavioral health providers, it had been deployed in a variety of contexts, including at schools and in the home.

“The Floreo system is designed to be positive and fun, while creating positive reinforcement to help build skills that help acclimate to the real world,” Mr. Ravindran said.

In 2017, Floreo secured a $2 million fast track grant from the National Institutes of Health. The company is first testing whether autistic children will tolerate headsets, then conducting a randomized controlled trial to test the method’s usefulness in helping autistic people interact with the police.

Early results have been promising: According to a study published in the Autism Research journal (Mr. Ravindran was one of the authors), 98 percent of the children completed their lessons, quelling concerns about autistic children with sensory sensitivities being resistant to the headsets.

Ms. Gross said she saw potential in virtual reality lessons that helped people rehearse unfamiliar situations, such as Floreo’s lesson on crossing the street. “There are parts of Floreo to get really excited about: the airport walk through, or trick or treating — a social story for something that doesn’t happen as frequently in someone’s life,” she said, adding that she would like to see a lesson for medical procedures.

However, she questioned a general emphasis by the behavioral therapy industry on using emerging technologies to teach autistic people social skills.

A second randomized controlled trial using telehealth, conducted by Floreo using another N.I.H. grant, is underway, in hopes of showing that Floreo’s approach is as effective as in-person coaching.

But it was those early successes that convinced Mr. Ravindran to commit fully to the project.

“There were just a lot of really excited people,” he said. “When I started showing families what we had developed, people would just give me a big hug. They would start crying that there was someone working on such a high-tech solution for their kids.”

Clinicians who have used the Floreo system say the virtual reality environment makes it easier for children to focus on the skill being taught in the lessons, unlike in the real world where they might be overwhelmed by sensory stimuli.

Celebrate the Children, a nonprofit private school in Denville, N.J., for children with autism and related challenges, hosted one of the early pilots for Floreo; Monica Osgood, the school’s co-founder and executive director, said the school had continued to use the system.

Click here to read the full article on New York Times.

Meet the Black Doctor Reshaping the Industry With Virtual Prosthetic Clinics to Help Amputee Patients


By YAHOO! Entertainment

Dr. Hassan Akinbiyi, a leader in physiatry and rehabilitative medicine from Scottsdale, Ariz., is pleased to announce his partnership with Hanger Clinic, to provide Virtual Prosthetic Clinics.

Dr. Hassan, a highly esteemed board-certified physiatrist, works with patients preoperatively, explaining the process from limb loss to independence with a prosthesis.

Through the Virtual Prosthetic Clinics, he is reshaping the prosthetic rehabilitation program by using telehealth for diagnosis, evaluation, and prosthetic care. As a result, a patient can now afford to receive specialized prosthetic services virtually.

He helps set the patient up for success by assisting with their transition through the post-acute care continuum, overseeing their prosthetic care, and ensuring they are thriving. In addition, his expertise and extensive knowledge as a physiatrist enable him to navigate the insurance process for prosthetic devices and issue all necessary documentation.

Regardless of an amputee patient’s entry point, Dr. Hassan ensures they receive the necessary care to resume their life’s activities when they desire it most.

Click here to read the full article on YAHOO! Entertainment.

Disability Inclusion Is Coming Soon to the Metaverse


By Christopher Reardon, PC Mag

When you think of futurism, you probably don’t think of the payroll company ADP—but that’s where Giselle Mota works as the company’s principal consultant on the “future of work.” Mota, who has given a TED Talk and has written for Forbes, is committed to bringing more inclusion and access to the Web3 and metaverse spaces. She’s also been working on a side project called Unhidden, which will provide disabled people with accurate avatars, so they’ll have the option to remain themselves in the metaverse and across Web3.

To See and Be Seen
The goal of Unhidden is to encourage tech companies to be more inclusive, particularly of people with disabilities. The project has launched and already has a partnership with the Wanderland app, which will feature Unhidden avatars through its mixed-reality platform at the VivaTech Conference in Paris and the DisabilityIN Conference in Dallas. The first 12 avatars will come out this summer with Mota, Dr. Tiffany Jana, Brandon Farstein, Tiffany Yu, and other global figures representing disability inclusion.

The above array of individuals is known as the NFTY Collective. Its members hail from countries including America, the UK, and Australia, and the collective represents a spectrum of disabilities, ranging from the invisible type, such as bipolar disorder and other forms of neurodiversity, to the more visible, including hypoplasia and dwarfism.

Hypoplasia causes the underdevelopment of an organ or tissue. For Isaac Harvey, the condition manifested by leaving him with no arms and short legs. Harvey uses a wheelchair and is the president of Wheels for Wheelchairs, along with being a video editor. He got involved with Unhidden after being approached by Victoria Jenkins, an inclusive fashion designer who co-created the project with Mota.

Click here to read the full article on PC Mag.

For people with disabilities, AI can only go so far to make the web more accessible


By Kate Kaye, Protocol

“It’s a lot to listen to a robot all day long,” said Tina Pinedo, communications director at Disability Rights Oregon, a group that works to promote and defend the rights of people with disabilities.

But listening to a machine is exactly what many people with visual impairments do while using screen reading tools to accomplish everyday online tasks such as paying bills or ordering groceries from an ecommerce site.

“There are not enough web developers or people who actually take the time to listen to what their website sounds like to a blind person. It’s auditorily exhausting,” said Pinedo.

Whether struggling to comprehend a screen reader barking out dynamic updates to a website, trying to make sense of poorly written video captions or watching out for fast-moving imagery that could induce a seizure, the everyday obstacles blocking people with disabilities from a satisfying digital experience are immense.

Technology companies have tried to step in, often promising more than they deliver to users and businesses hoping that automated tools can break down barriers to accessibility. Automated tools that check website designs for accessibility flaws have been around for some time, but companies such as Evinced claim that sophisticated AI not only does a better job of automatically finding and helping correct accessibility problems, but can also do it for large enterprises that need to manage thousands of website pages and app content.

Still, people with disabilities and those who regularly test for web accessibility problems say automated systems and AI can only go so far. “The big danger is thinking that some type of automation can replace a real person going through your website, and basically denying people of their experience on your website, and that’s a big problem,” Pinedo said.

Why Capital One is betting on accessibility AI
For a global corporation such as Capital One, relying on a manual process to catch accessibility issues is a losing battle.

“We test our entire digital footprint every month. That’s heavily reliant on automation as we’re testing almost 20,000 webpages,” said Mark Penicook, director of Accessibility at the banking and credit card company, whose digital accessibility team is responsible for all digital experiences across Capital One including websites, mobile apps and electronic messaging in the U.S., the U.K. and Canada.

Even though Capital One has a team of people dedicated to the effort, Penicook said he has had to work to raise awareness about digital accessibility among the company’s web developers. “Accessibility isn’t taught in computer science,” Penicook told Protocol. “One of the first things that we do is start teaching them about accessibility.”

One way the company does that is by celebrating Global Accessibility Awareness Day each year, Penicook said. Held annually on the third Thursday of May, the worldwide event is intended to educate people about digital access and inclusion for those with disabilities and impairments.

Before Capital One gave Evinced’s software a try around 2018, its accessibility evaluations for new software releases or features relied on manual review and other tools. Using Evinced’s software, Penicook said the financial services company’s accessibility testing takes hours rather than weeks, and Capital One’s engineers and developers use the system throughout their internal software development testing process.

It was enough to convince Capital One to invest in Evinced through its venture arm, Capital One Ventures. Microsoft’s venture group, M12, also joined a $17 million funding round for Evinced last year.

Evinced’s software automatically scans webpages and other content, and then applies computer vision and visual analysis AI to detect problems. The software might discover a lack of contrast between font and background colors that makes it difficult for people with vision impairments like color blindness to read. The system might find images that do not have alt text, the metadata that screen readers use to explain what’s in a photo or illustration. Rather than pointing out individual problems, the software uses machine learning to find patterns that indicate when the same type of problem is happening in several places and suggests a way to correct it.
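
The two checks mentioned above, low text contrast and missing alt text, can be sketched with nothing but the Python standard library. This is an illustrative sketch, not Evinced’s implementation: it flags `<img>` tags lacking a non-empty `alt` attribute and computes the WCAG 2.x contrast ratio, where 4.5:1 is the AA minimum for normal body text.

```python
from html.parser import HTMLParser

def relative_luminance(hex_color):
    """WCAG 2.x relative luminance of an sRGB hex color like '#336699'."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    def channel(c):
        # Linearize the gamma-encoded channel per the WCAG formula.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors (1.0 to 21.0)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt", "").strip():
                self.missing_alt.append(attrs.get("src", "?"))

html = '<img src="logo.png"><img src="chart.png" alt="Quarterly revenue chart">'
checker = AltTextChecker()
checker.feed(html)
print(checker.missing_alt)                  # ['logo.png']
print(round(contrast_ratio("#777777", "#ffffff"), 2))  # ≈ 4.48, just under AA
```

Production scanners render the page first, since effective colors and alt text often come from CSS and JavaScript rather than the raw HTML, but the pass/fail math is the same.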

“It automatically tells you, instead of a thousand issues, it’s actually one issue,” said Navin Thadani, co-founder and CEO of Evinced.
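
Thadani’s point, collapsing many reported instances into one root cause, can be sketched with a simple grouping heuristic. The findings and selector format below are hypothetical, and a trivial regex stands in for the machine-learned pattern matching the article describes:

```python
import re
from collections import defaultdict

# Hypothetical scanner findings as (issue_type, DOM path) pairs.
findings = [
    ("missing-alt", "ul.products > li:nth-child(1) > img"),
    ("missing-alt", "ul.products > li:nth-child(2) > img"),
    ("missing-alt", "ul.products > li:nth-child(3) > img"),
    ("low-contrast", "footer > p.legal"),
]

def root_cause_key(issue_type, path):
    # Strip positional selectors so repeated instances of the same
    # template element fold into a single pattern.
    return issue_type, re.sub(r":nth-child\(\d+\)", "", path)

grouped = defaultdict(list)
for issue_type, path in findings:
    grouped[root_cause_key(issue_type, path)].append(path)

for (issue_type, pattern), instances in grouped.items():
    print(f"{issue_type} @ {pattern!r}: {len(instances)} instance(s)")
# The three missing-alt findings collapse into one 'ul.products > li > img' issue.
```

Fixing the one template that renders `ul.products > li > img` resolves all three reported instances at once, which is the “thousand issues, actually one issue” effect.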

The software also takes context into account, factoring in the purpose of a site feature or considering the various operating systems or screen-reader technologies that people might use when visiting a webpage or other content. For instance, it identifies user design features that might be most accessible for a specific purpose, such as a button to enable a bill payment transaction rather than a link.

Some companies use tools typically referred to as “overlays” to check for accessibility problems. Many of those systems are web plug-ins that add a layer of automation on top of existing sites to enable modifications tailored to peoples’ specific requirements. One product that uses computer vision and machine learning, accessiBe, allows people with epilepsy to choose an option that automatically stops all animated images and videos on a site before they could pose a risk of seizure. The company raised $28 million in venture capital funding last year.

Another widget from TruAbilities offers an option that limits distracting page elements to allow people with neurodevelopmental disorders to focus on the most important components of a webpage.

Some overlay tools have been heavily criticized for adding new annoyances to the web experience and providing surface-level responses to problems that deserve more robust solutions. Some overlay tech providers have “pretty brazen guarantees,” said Chase Aucoin, chief architect at TPGi, a company that provides accessibility automation tools and consultation services to customers, including software development monitoring and product design assessments for web development teams.

“[Overlays] give a false sense of security from a risk perspective to the end user,” said Aucoin, who himself experiences motor impairment. “It’s just trying to slap a bunch of paint on top of the problem.”

In general, complicated site designs or interfaces that automatically hop to a new page section or open a new window can create a chaotic experience for people using screen readers, Aucoin said. “A big thing now is just cognitive; how hard is this thing for somebody to understand what’s going on?” he said.

Even more sophisticated AI-based accessibility technologies don’t address every disability issue. For instance, people with an array of disabilities either need or prefer to view videos with captions, rather than having sound enabled. However, although automated captions for videos have improved over the years, “captions that are computer-generated without human review can be really terrible,” said Karawynn Long, an autistic writer with central auditory processing disorder and hyperlexia, a hyperfocus on written language.

“I always appreciate when written transcripts are included as an option, but auto-generated ones fall woefully short, especially because they don’t include good indications of non-linguistic elements of the media,” Long said.

Click here to read the full article on Protocol.

A specialized video game could help children on the autism spectrum improve their social skills


By Emily Manis, PsyPost

Are video games the future of treatment for children on the autism spectrum? A study published in the Journal of Autism and Developmental Disorders suggests they could be. Video game-based interventions may be a cheap, easy, and effective alternative to face-to-face treatment.

Many people on the autism spectrum have trouble with social skills, which can lead to adverse effects including isolation and social rejection, and in turn a higher risk of anxiety and depression. Interventions often focus on building social skills and can draw on a variety of techniques. Previous research experimented with using video games as a tool for this type of intervention but did not have a control group. This study sought to address the limitations of past research and expand the literature on the topic.

Renae Beaumont and colleagues recruited a sample of 7- to 12-year-old children in Queensland with Autism Spectrum Disorder, all of whom had to refrain from other treatment for the duration of the study. Seventy children participated, including 60 boys and 10 girls. They were randomly assigned to either the social skills video game condition or the control condition, a similar video game without any social or emotional skill component. (The social skills video game is called Secret Agent Society.)

Parents were asked to rate their children on social skills, emotional regulation, behavior, and anxiety, and to rate their own satisfaction with the program. Participants completed 10 weeks of their assigned program, completed post-trial measures, and then completed follow-up measures six weeks later.

Results showed that the social skills intervention was successful, with the children in that condition showing significantly larger improvements in their social and emotional skills. These positive results were maintained during follow-up a month and a half later. Parents of children in the control condition noted improvements as well, but not as much as in the experimental condition. This could be due to the increased time spent with the children. The results did not show any significant effects of the intervention on the children’s anxiety but did show a reduction in behavioral issues.

Though this study took strides toward understanding whether video game-based social and emotional treatment is effective, it also has limitations. First, the parents were the raters and are susceptible to bias, as suggested by the improvements perceived by parents of children in the control group. Additionally, the sample had a very uneven gender split, which could skew the results.

Click here to read the full article on PsyPost.

Upcoming Events

  1. City Career Fairs Schedule for 2023
    June 6, 2023 - December 12, 2023
  2. Small Business Expo 2023 Business Networking & Educational Events Schedule
    June 23, 2023 - February 22, 2024
  3. Chicago Abilities Expo 2023
    June 23, 2023 - June 25, 2023
  4. B3 2023 Conference + Expo: Register Today!
    June 29, 2023
  5. 2023 Strategic ERG Leadership Summit
    August 3, 2023 - August 4, 2023
