Why we invested in Opteran
The team at Opteran is planning to put a honeybee brain on a chip. No, this is not an attempt to replace our endangered pollinators with robot equivalents, and while it might sound like the stuff of science fiction, it turns out to be a very useful idea, and just the thing to unlock the next generation of digital applications, increasingly deployed at the edge of the Internet.
Autonomous vehicles, drones and robots have to make decisions all the time in response to rapidly evolving situations: how do I avoid that bird? The left-hand turn on my route is blocked; there is an obstacle in front of me on the street; and so on. Current deployment models tend to capture situational information and then either send it to the cloud for processing or rely on deep learning models deployed on the device. This is inefficient, slow, expensive in terms of compute time and power, and often fails to meet the demands of edge use cases. SLAM (Simultaneous Localization and Mapping) technology, heavily invested in by Google, Apple and Amazon among many others, is seen as a core tool for this next generation of applications, but still faces many outstanding challenges.
Insects, on the other hand, are remarkably good at identifying obstacles, avoiding collisions, and taking evasive action when necessary. What's more, they do this with striking efficiency: at around one million neurons, a bee's brain is far more efficient than the current machine learning equivalents. These are the properties the Opteran team is focusing on. Led by James Marshall and a top team of academics from Sheffield University, the team has spent many years working to understand insect brains, and is now beginning to deploy that research against real-world problems through the university spin-out Opteran.
Initially, the team is offering technology that can rapidly analyse a scene from video images, captured by standard cameras for example. Opteran's technology can classify activity and very quickly identify objects or situations that require a response, much faster and more efficiently than existing smart cameras can. The team is also working on enabling decision making, evasive action and, ultimately, full autonomy, all within the same compact compute footprint. The vision of replicating efficient natural capabilities in digital applications is compelling, and the value of this technology to a broad range of use cases is clear.