Multimodal Learning: An App to Improve Human Reading with Active Eye-Tracking
The aim of our project is to support people in learning a language. There are many reasons to learn languages in our networked world, be they professional, social, or political. We intend to implement an app that enables users to read continuous texts on different topics. These previously edited texts will be analyzed with the aid of machine learning, and each paragraph will be supplemented with a suitable context-related picture. The app will be operated via eye-tracking to facilitate barrier-free use. Furthermore, we want to detect comprehension problems: if a user looks at a word for a longer time, it will be illustrated with a picture. In this way, intuitive learning is supported.
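The dwell-time trigger described above could be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation: the threshold value and the form of the gaze events (word-mapped fixation durations from the eye tracker) are assumptions.

```python
from collections import defaultdict

# Hypothetical dwell-time threshold before a word gets an illustration.
DWELL_THRESHOLD_MS = 800

def words_to_illustrate(gaze_samples, threshold_ms=DWELL_THRESHOLD_MS):
    """Accumulate per-word gaze dwell time and return words exceeding the threshold.

    gaze_samples: list of (word, duration_ms) fixation events, assumed to be
    produced by mapping raw eye-tracker coordinates onto rendered words.
    """
    dwell = defaultdict(int)
    for word, duration_ms in gaze_samples:
        dwell[word] += duration_ms
    return [word for word, total in dwell.items() if total >= threshold_ms]

samples = [("Haus", 300), ("lesen", 200), ("Haus", 600), ("Buch", 100)]
print(words_to_illustrate(samples))  # ["Haus"] — cumulative dwell of 900 ms
```

In practice, the app would also need to reset or decay the accumulated dwell time once a word has been illustrated, so the same picture is not triggered repeatedly.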
Research group
- Ali Ebrahimi Pourasad
- Daniel Djahangir
- Robert Geislinger
Mentor
- Prof. Dr. Chris Biemann