Neuromorphic Computing

Why we're excited about it

The application of machine learning is exploding but hampered by existing von Neumann architectures, both in the cloud and at the edge. Even the latest NVIDIA GPUs, Google TPUs and other dedicated ASICs for mobile ML are still fundamentally limited both by the diminishing returns of ever smaller transistors and by the need to shuttle information back and forth between memory and processor. This is very clearly demonstrated by the fact that AlphaGo Zero required roughly 5,000 TPUs (around £25,000 per hour at current cloud prices) to reach superhuman play at Go.
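The memory-shuttling point is easier to see with some rough arithmetic. The sketch below uses frequently cited, order-of-magnitude energy figures for arithmetic versus memory access; the exact picojoule values are illustrative assumptions, not measurements from any of the chips named above.

```python
# Back-of-envelope comparison of arithmetic vs. data-movement energy.
# The picojoule figures are rough, commonly quoted order-of-magnitude
# estimates -- treat them as illustrative assumptions, not measurements.

ENERGY_PJ = {
    "fp32_mac":        4.6,    # one 32-bit floating-point multiply-accumulate
    "sram_read_32bit": 5.0,    # operand already in on-chip SRAM
    "dram_read_32bit": 640.0,  # operand fetched from off-chip DRAM
}

def energy_per_mac(sram_hit_rate: float) -> float:
    """Average energy (pJ) per MAC when a fraction `sram_hit_rate` of
    operands come from on-chip SRAM and the rest from off-chip DRAM."""
    fetch = (sram_hit_rate * ENERGY_PJ["sram_read_32bit"]
             + (1 - sram_hit_rate) * ENERGY_PJ["dram_read_32bit"])
    return ENERGY_PJ["fp32_mac"] + fetch

for hit_rate in (1.0, 0.9, 0.5):
    print(f"on-chip hit rate {hit_rate:.0%}: "
          f"{energy_per_mac(hit_rate):6.1f} pJ per operation")
```

Even with 90% of operands served on-chip, moving data costs more than an order of magnitude more energy than the arithmetic itself, which is the bottleneck in-memory (neuromorphic) computation aims to remove.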

This leaves machine learning, and the broader AI picture, stuck in something of a brute-force local optimum. Back-propagation has enabled a wave of impressive, yet fragile, pattern matching that requires near-artisanal hand-crafting and offers little hope of a more generalised approach.

In theory, neuromorphic architectures – inspired by the analogue, integrated memory-processing architecture of the brain – could provide power efficiency 8–9 orders of magnitude higher than von Neumann architectures. However, to date, implementations fall largely into three camps: (1) memristor-based hardware implementations of traditional ANN algorithms which, perhaps unsurprisingly, fail to show any real performance increase; (2) hyper-realistic simulations of parts of the brain; and (3) nano-scale manufacturing to create structures and materials that behave somewhat like a synapse.
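To make the event-driven, integrated memory-processing idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic unit that most spiking and neuromorphic designs approximate in one way or another: state lives with the unit, and work happens only at sparse spike events. All parameter values are illustrative assumptions, not taken from any particular hardware.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays toward
    rest, integrates incoming current, and emits a spike (an event) only when
    it crosses the threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:              # threshold crossing -> emit an event
            spike_times.append(step * dt)
            v = v_reset                # reset after the spike
    return spike_times

# Constant drive above threshold produces a regular spike train; computation
# (and hence energy use) is concentrated at these sparse events rather than
# in a continuous stream of memory fetches.
spikes = simulate_lif(np.full(1000, 1.5))
print(f"{len(spikes)} spikes in 1 s, first few at {spikes[:3]}")
```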

Unfortunately, very few of these camps take a full-stack approach that runs from a flexible substrate through to a meaningful leap forward in a given application.

We believe that bio-inspired architectures – in software, in hardware, and in full-stack combinations of the two – have the potential to massively reduce the power required to train models and to open up the space beyond brute force, creating truly adaptive systems. This will be a long transition, but we believe there are very realistic niches available now across autonomous systems, from cars and robotics to networking, security, finance, and scientific computation. There is a much quicker path to low-power, highly scalable bio-inspired computing; we just need to find it.

Join

We’re currently recruiting for a Founding Analyst to lead our venture creation activity in Neuromorphic Computing. More info here.

If you are interested in joining a project, pitching an idea, investing, or other collaboration opportunities in this area, please get in touch via hello@dsv.io.