Wearable devices are a great way to display content, but they lack processing power. With so many on-board sensors generating data, the question is: can we process this data in real time on a wearable device? In most cases the answer is no. COSMONiO has addressed this problem by integrating its NOUS supercomputer with wearable devices to offer real-time data processing.
Let’s take Google Glass as an example. With its on-board camera it can capture video and stills. What if we could run a Deep Learning application to recognise patterns in these images? Currently, Glass or any paired mobile device would not have enough power to perform this task. This is where NOUS comes in to offer real-time video stream analysis.
Many people assume that the main problem that visually-impaired people experience is avoiding obstacles. As a result, a lot of research has focussed on developing 3D mapping systems to help with navigation in unknown areas. However, an expert team from the University of York led by Professor Helen Petrie identified a different kind of problem as the leading one. Visually-impaired people can be trained to use a walking stick or a guide dog very efficiently. But if they need to go to the post office, how do they navigate the last few meters between the bus stop and the post office door?
We asked ourselves: could we develop a system that helps visually-impaired people navigate the last few meters of their journey?
If the visually-impaired person uses a wearable device such as Google Glass, we can use its camera to capture images of the street. Since we know the user’s GPS location, we can also gain access to all the Google Streetview data in the area.

Using computer vision, we can then compare the camera images to the Streetview images.

Since we know the locations from which the Streetview images were captured, we can work out the user’s position and give navigation instructions.
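To make the matching step concrete, here is a deliberately simplified sketch of the idea, not COSMONiO's actual implementation: each Streetview reference image is reduced to a feature vector tagged with the GPS position it was captured from, and the camera frame's vector is matched against them. The descriptors, coordinates, and function names below are all illustrative assumptions; a real system would use proper visual descriptors and geometric verification.

```python
import math

# Each hypothetical Streetview reference image is represented by a short
# feature vector (a stand-in for a real visual descriptor) plus the GPS
# position it was captured from. The values here are made up.
REFERENCE_IMAGES = [
    {"pos": (57.1497, -2.0943), "descriptor": [0.9, 0.1, 0.3]},
    {"pos": (57.1499, -2.0941), "descriptor": [0.2, 0.8, 0.5]},
    {"pos": (57.1501, -2.0939), "descriptor": [0.4, 0.4, 0.9]},
]

def descriptor_distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_position(camera_descriptor, references):
    """Return the GPS position of the best-matching reference image."""
    best = min(
        references,
        key=lambda ref: descriptor_distance(camera_descriptor, ref["descriptor"]),
    )
    return best["pos"]

# A camera frame whose descriptor closely resembles the second reference:
position = estimate_position([0.25, 0.75, 0.5], REFERENCE_IMAGES)
print(position)  # → (57.1499, -2.0941)
```

From an estimate like this, the system can compute the offset between the user and a known landmark (such as the post office door) and turn it into a navigation instruction.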
Since Google Glass does not yet have enough power to perform this visual navigation task, we developed a proof-of-concept prototype using a more powerful iPhone. The following video demonstrates the concept.
Wearable devices with on-board cameras offer a wide range of possibilities for computer-vision and machine-learning applications. We have already developed an architecture for real-time video transmission between wearable devices and the NOUS supercomputer, which allows computationally demanding applications to run on wearables with no perceptible lag.
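As a rough illustration of what such a transmission layer involves, the sketch below streams a single length-prefixed frame to an analysis server over TCP and reads back a per-frame result. The framing format, the echo-style server, and all names are assumptions for demonstration only; they are not COSMONiO's actual protocol.

```python
import socket
import struct
import threading

def send_message(sock, payload: bytes):
    """Send a payload prefixed with its length as a 4-byte big-endian int."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n):
    """Read exactly n bytes from the socket, or raise if it closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def recv_message(sock) -> bytes:
    """Read one length-prefixed message."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

def analysis_server(server_sock):
    """Stand-in for the processing side: reply with a result per frame."""
    conn, _ = server_sock.accept()
    with conn:
        frame = recv_message(conn)
        send_message(conn, b"frame:%d bytes" % len(frame))

# Demo over a loopback connection:
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=analysis_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
send_message(client, b"\xff\xd8 fake jpeg bytes")  # stand-in camera frame
result = recv_message(client)
client.close()
print(result.decode())  # → frame:18 bytes
```

In a real deployment the "result" would be the output of the vision pipeline (a recognised pattern, or a position estimate) rather than a byte count, and frames would be sent continuously rather than one at a time.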