What if anyone on the internet could learn to read and speak English with a free, microphone-enabled application in their web browser or on their mobile phone or tablet, just by reading aloud from the screen to choose a path through a series of adventure stories? Please help make it a reality.
This summer, expert software engineers are building a free, open-source, speech recognition-based game that teaches beginning and intermediate English reading and pronunciation to learners of all ages and backgrounds. As learners read phrases aloud to work through the game, their speech and pronunciation fluency will be scored at the phoneme, biphone, word, and phrase levels, and aggregated phrase scores will track and chart their progress. The entire system will be published as open source and made available for free on the web, on Android and Apple phones and tablets, and on One Laptop Per Child (OLPC XO) laptops, initially for the world's English language learners but adaptable to other languages.
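To illustrate the multi-level scoring idea, here is a minimal sketch of how per-phoneme scores might roll up into word and phrase scores. The function names, the 0.0-1.0 score scale, and the phoneme-count weighting are illustrative assumptions for this sketch, not the project's final design:

```python
# Hypothetical sketch: aggregating per-phoneme pronunciation scores
# (assumed to be on a 0.0-1.0 scale, as produced by some acoustic
# scoring step) into word-level and phrase-level scores.

def word_score(phoneme_scores):
    """Average the per-phoneme scores for one word."""
    return sum(phoneme_scores) / len(phoneme_scores)

def phrase_score(words):
    """Aggregate word scores into one phrase score, weighting each
    word by its phoneme count so longer words count proportionally
    more toward the total."""
    total_phonemes = sum(len(scores) for scores in words.values())
    weighted = sum(word_score(scores) * len(scores)
                   for scores in words.values())
    return weighted / total_phonemes

# Example: made-up scores for the phrase "open the door"
phrase = {
    "open": [0.9, 0.8, 0.95, 0.85],   # OW P AH N
    "the":  [0.7, 0.9],               # DH AH
    "door": [0.95, 0.9, 0.85],        # D AO R
}
print(round(phrase_score(phrase), 3))  # → 0.867
```

Tracking these scores over time at each level (phoneme, word, phrase) is what lets the game chart a learner's progress and choose practice material at the right difficulty.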
To see how useful and important this technology is for teaching reading and spoken language, please see Aist, Mostow, et al. (2001), "Computer-assisted oral reading helps third graders learn vocabulary better than a classroom control — about as well as one-on-one human-assisted oral reading," in Moore, et al. (eds.), Artificial Intelligence in Education: AI-ED in the Wired and Wireless Future, pp. 267-277 (Amsterdam: IOS Press). This research shows that speech recognition-based reading instruction can be more effective per unit of time spent than instruction from a teacher teaching only two students at once.
What we need to reach this goal:
We have secured generous corporate sponsorship to pay for two of the engineers working on the team, but we need to raise enough to bring one more engineer on board to make sure that:
- (1) the game can be delivered as a stand-alone application on a wide variety of mobile platforms;
- (2) we can collect enough exemplar pronunciation data for practice material sufficient to challenge and engage ESL and beginning learners with diverse reading levels and fluency;
- (3) we can create sufficient game play scenarios, graphics and animations to make the game fun and exciting for learners of all ages and backgrounds; and
- (4) we can secure our project server hosting and development hardware needs. We already have a generous donation of server hosting through August, but we need to extend it to handle much larger expected bandwidth.
Your contributions will be used to fund those needs. Can we use excess funds? YES! Items (1), (2), and (3) are essentially open-ended up through $50,000, so we very much hope to substantially exceed our minimum fundraising goal.
Srikanth Ronanki is a fifth-year B.Tech and Master's student at the International Institute of Information Technology, Hyderabad, in India. He is accomplished in using the open-source CMU Sphinx3 speech recognition system for pronunciation evaluation. See Ronanki's accepted proposal for this project. FUNDED.
Li Bo (English name: Troy Lee) is a fifth-year Ph.D. student at the National University of Singapore Computational Linguistics Lab. His primary research interests include acoustic modeling and speaker adaptation for automatic speech recognition. He has also been using Sphinx3 for language learning. See Troy's accepted proposal for this project. FUNDED.
Guillem Perez is studying Computer Engineering at Pompeu Fabra University in Barcelona, Spain. He is interested in speech recognition technology, computer vision, and audio signal processing. His thesis is on speech recognition evaluation for improving second-language learning. He has extensive experience with mobile technology, Android phones and tablets in particular, as well as Windows Phone. NEEDS YOUR FUNDING FOR THIS PROJECT. Please donate generously. Thank you.
James Salsman, project mentor: full bio and links at right. Shown here in a 14-minute 2010 video demonstration and discussion with Robert Scoble. TENTATIVELY FUNDED, but funds raised will allow more time to be devoted to this project and the work described above. Please donate generously. Thank you again.
[Track this project on Kicktraq.]