InnerVoice, the app that helps kids with autism improve their interaction with the world
From a very young age, neurotypical children learn social, cognitive, and communicative abilities from their primary caregivers by listening to speech and looking at faces. Recognizing, interpreting, and mimicking their caregivers' facial expressions allows infants to identify socially important people and understand others' internal mental states.
When a caregiver establishes eye contact and communicates with their child, specific neurological structures in the baby's brain are activated, allowing the child to develop essential social, cognitive, and communication skills. When these structures develop slowly or activate atypically, the global effects on a child can be profound, as seen in autism spectrum disorder.
Speech is a motor activity that requires extensive practice and imitation to master. Research shows that video self-modeling is an effective way to teach skills to people with autism, possibly because it stimulates mirror-neuron activity in the brain.
This is why Lois Jean Brady and Matthew Guggemos founded iTherapy, a company focused on helping people on the autism spectrum communicate more easily and naturally. They entered the market with the InnerVoice app and recently received a grant from Microsoft's AI for Accessibility program to develop InnerVoice's new artificial-intelligence feature: Visual Language. Visual Language uses Microsoft's Azure artificial intelligence technology to teach language and literacy skills in a unique way: take a picture of what you're looking at, and InnerVoice's AI system labels the picture with text and describes it with speech, allowing users to see the relationships shared among the environment, speech, language, and text.
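The article does not describe InnerVoice's actual integration, but the picture-to-caption step can be sketched against Azure's public Computer Vision "describe" REST endpoint. This is a minimal illustration, assuming the v3.2 REST API; the endpoint URL and subscription key below are placeholders, and the helper names are hypothetical.

```python
# Hypothetical sketch of captioning a photo with Azure Computer Vision,
# the kind of Azure AI service a feature like Visual Language could build on.
# Endpoint, key, and function names are placeholders, not InnerVoice's code.
import json
from urllib import request


def build_describe_request(endpoint: str, key: str, image_bytes: bytes) -> request.Request:
    """Build a POST request asking the v3.2 'describe' endpoint to caption an image."""
    url = f"{endpoint}/vision/v3.2/describe?maxCandidates=1&language=en"
    return request.Request(
        url,
        data=image_bytes,
        headers={
            "Ocp-Apim-Subscription-Key": key,       # placeholder credential
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )


def caption_from_response(body: bytes) -> str:
    """Pull the top caption text out of the JSON the service returns."""
    data = json.loads(body)
    return data["description"]["captions"][0]["text"]
```

The returned caption text would then be handed to a text-to-speech engine so the user hears the description as well as seeing it, which is the speech half of the loop the article describes.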
At Inspiralia, we are keen to help our clients secure the grants they need to make their innovative ideas a reality. iTherapy now benefits from our expertise in public funding as well as our technology-development services. Our team is working closely with Jeanie and Matthew on new projects for the National Science Foundation and the National Institutes of Health, all aimed at helping children diagnosed with speech impediments learn important language skills. The work ranges from validating methodologies for curating teaching stimuli so that children can absorb more information, to creating a physician support tool that uses deep learning to help differentiate between the language disorders that may be present.