
Our work in Computation

Scaling intelligence.

A working thesis as we seek to understand and build in this area.

TL;DR

We’ve come to think of computation as a tool that we use to complete our work and extend the bounds of our social and entertainment worlds. At its core, however, computation is simply a mechanism to scale the manipulation and transfer of information. This is something that humans have been doing for thousands of years, from the first languages to factory automation. Right now we’re moving from a world where humans use computation as a tool, to one in which computation is beginning to inform its own design, connect without humans in the middle, interact coherently with the real world and, overall, escape the limitations of human oversight.

 

We see five main areas of opportunity for new companies:

1. Removing the bottleneck of human design on semiconductor progress. Semiconductors are the substrate that limits all computation. At the same time, this should allow us to escape (what we believe is) a local optimum in deep learning by finding the right abstraction between brain-inspired networks and rapidly iterable primitives that underlie common functions.

2. Creating fluid interactions between machines manipulating complex data sets in knowledge-heavy, low-repeatability environments.

3. Extending this to interactions with the physical world.

4. Joining the dots between the capabilities available and the potential impact across high-tech sectors from drug development to clean energy, an area we’re already heavily focused on.

5. Looking at where uniquely human capabilities sit, and how they could be further augmented, in a far more automated world.

 

Areas for venture creation:
Quantum computing

We are developing a new application-driven framework across both the hardware and software layers, using the dual particle-wave nature of light to solve a larger class of problems. We’re currently recruiting individuals with broad technical knowledge at a doctoral level – more on this opportunity here.

Reconfigurable compute acceleration

Until recently it has been infeasible to create architectures that escape the inherent trade-offs in CPU, ASIC and FPGA design, at both the materials and software levels, whilst maintaining compatibility with existing scalable manufacturing processes.

We have laid groundwork which suggests that speed-ups of many orders of magnitude are possible on common computationally intensive tasks by progressing all of these elements in parallel.

We’re currently recruiting individuals with technical knowledge across circuit design, memory materials innovation and software – more on this opportunity here.

Why scale intelligence

At its core, compute is simply a mechanism to scale the manipulation and transfer of information; that is, information in the purest sense of the word, from the meaningful arrangement of atoms through to data. 

 

For thousands of years human brains were the only real game in town. We then began to leverage tools to carry messages beyond our immediate time and space (language, books, the internet and now ubiquitous connectivity), to increase abstraction and enable the manipulation and transfer of greater amounts of information (more advanced languages, designs, protocols, pattern-matching neural nets), and to allow for calculations beyond our immediate working memory and perception (the abacus, card-fed computers and now dedicated compute accelerators).

 

We believe that we are graduating from an age of compute as a tool used by humans, with much of our time spent manipulating information into compute-compatible forms, to self-orchestrated computation running large parts of the economy with little human engagement beyond macro-level resource allocation. In some areas, such as finance, as much as 80% of trading is already undertaken by algorithms; in areas where data is less directly mappable to models, guarded by rent-keeping business models or low-frequency in nature, this will take longer.

 

We believe that happier and more productive lives exist in a largely automated world, one where we no longer spend our days translating between various sources of information or actuating on behalf of larger software-driven systems. However, this is the subject of many philosophical debates that we don’t have space to address here. This change is coming, and we’re focused on how to move to a world where the full breadth of society benefits from the step change of super-intelligence and automation across health, food, energy, happiness and abundant capital.


From pattern matching to independence

Right now machine learning is a tool that can accelerate tasks which are largely based on matching patterns and rules to data representations. If computers are ever going to handle anything beyond the most basic human-supervised tasks, including many of the areas below, we will need to move beyond this paradigm.

 

The constraints on progress in this area are twofold. Firstly, semiconductor design cycles are long and expensive, which in turn limits the algorithms that can be used to those that work with hardware designed for last season’s challenges. Despite the fact that the nature of computation has changed significantly over the last 30 years, all problems are nevertheless forced into decades-old Von Neumann architectures that are either flexible but slow, or fast but rigid. We need a fast and flexible substrate that can evolve alongside algorithmic advancement.

 

Secondly, algorithm design is stuck in a local optimum of its own, siloed between warring factions. In the first camp: those who believe in ‘one giant deep-net to rule them all’; in the second: the bio-inspired fanatics who want to recreate neural spike timing to the microsecond. In all likelihood the answer lies somewhere in between, and quite possibly beyond our own creativity to find, potentially requiring a meta-approach that combines human creativity with computational optimisation and evolution to search the combinatorial space of possible configurations. This represents an opportunity in itself.
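To make that meta-approach concrete, here is a deliberately toy sketch of a hybrid search: human-chosen seed configurations, then machine-driven mutation and selection over the space between dense deep nets and sparse, spike-driven designs. The configuration fields, fitness function and seed values are all hypothetical placeholders rather than a description of any real system.

```python
# Toy hybrid search: human creativity supplies seed configurations, the machine
# explores around them by mutation and selection. Everything here is illustrative.
import random

SEARCH_SPACE = {
    "layers":         [2, 4, 8, 16],
    "sparsity":       [0.0, 0.5, 0.9, 0.99],  # dense deep-net -> sparse, brain-like
    "spike_coding":   [False, True],          # rate-coded vs event/spike-driven
    "local_learning": [False, True],          # backprop vs local plasticity rules
}

def random_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(config):
    child = dict(config)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def fitness(config):
    # Placeholder score; in practice this would be accuracy and energy measured
    # on real (ideally reconfigurable) hardware, not a toy formula.
    score = config["layers"] * (1 - config["sparsity"])
    score += 2 if config["spike_coding"] else 0
    score += 1 if config["local_learning"] else 0
    return score

def evolve(human_seeds, generations=20, population=16):
    pool = list(human_seeds) + [random_config() for _ in range(population - len(human_seeds))]
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)
        survivors = pool[: population // 2]
        pool = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pool, key=fitness)

print(evolve([{"layers": 8, "sparsity": 0.5, "spike_coding": True, "local_learning": False}]))
```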


High-level abstraction of machine-to-machine communication

Many people are currently employed in roles that largely involve taking one digitised data source, applying a set of rules or translations, and outputting the result in another form. This is true from tax auditing to marketing. It is laborious for people and challenging for machines, due to the number of edge cases and the need to consider a wide context beyond the immediate task at hand.

 

Existing APIs, following mostly linear, one-directional programme flows, are v1 of this world: the ability to share simple values and numbers between applications. V2 likely lies in creating a more fluid back-and-forth interaction (see the TEC example below) in which programmes automatically interrogate each other before reaching a more abstracted conclusion and sharing it. V3 is likely the unification of standards for sharing complex information such as context, or the graph of information that has gone into the decision so far.
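As a rough illustration of that v2/v3 pattern (not any particular product’s API), the sketch below has one hypothetical service interrogate another and return a conclusion bundled with its confidence and the provenance graph behind it. Every class, question and figure is invented for the example.

```python
# Toy v2/v3 interaction: rather than one service pushing a single number to another
# (v1), the consumer interrogates the producer, and the final answer travels with
# its confidence and provenance graph. All names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Conclusion:
    value: str
    confidence: float
    provenance: list = field(default_factory=list)  # the graph of inputs behind the answer

class SupplierService:
    """Hypothetical producer that can be interrogated, not just queried once."""
    def ask(self, question: str) -> Conclusion:
        answers = {
            "lead_time_days": Conclusion("18", 0.9, ["factory schedule export"]),
            "unit_cost":      Conclusion("4.20", 0.7, ["last quote", "commodity index"]),
        }
        return answers[question]

class ProcurementAgent:
    """Hypothetical consumer: reaches an abstracted conclusion (v2) and shares
    the merged provenance alongside it (v3)."""
    def __init__(self, supplier: SupplierService):
        self.supplier = supplier

    def conclude(self, units: int) -> Conclusion:
        lead = self.supplier.ask("lead_time_days")   # back-and-forth interrogation
        cost = self.supplier.ask("unit_cost")
        total = units * float(cost.value)
        return Conclusion(
            value=f"Order {units} units, ~£{total:.2f}, arriving in {lead.value} days",
            confidence=min(lead.confidence, cost.confidence),
            provenance=[("lead_time_days", lead.provenance),
                        ("unit_cost", cost.provenance)],
        )

print(ProcurementAgent(SupplierService()).conclude(units=500))
```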

 

Portfolio company The Engineering Company (TEC) is a great example of a leap forward in this space. Computer-Aided Design (CAD) is already a highly digitised paradigm, yet it takes hundreds of people many years to design complex hardware, principally because the layers and languages don’t fully connect. TEC has joined up this space, reducing design time from years to minutes in many cases.

 

This has been a monumental undertaking for TEC in an already digitised space; it’s an even larger challenge in spaces with messy data spread across calls, notepads and heuristics. Take, for example, the application of tax rules: in theory these are easily encodable from law; in reality they are a moving picture shaped by hundreds of interactions with HMRC (in the UK). Somehow the dynamic nature of an individual’s situation needs to be assessed against a dynamic macro picture and then communicated in a way that gives confidence that this is the best answer available. There are a lot of ‘soft’ problems to answer on the journey to full machine-to-machine communication.
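To show how thin the “theoretically encodable” half is, here is a minimal sketch of a statutory rule written directly as code. The band thresholds and rates are illustrative rather than a current HMRC schedule, and the sketch deliberately leaves out exactly what makes the real problem hard: allowance tapering, reliefs, and the moving picture of case-by-case rulings.

```python
# The easily encodable half of the tax example: a banded rate rule as code.
# Figures are illustrative, not a current HMRC schedule.
BANDS = [
    (12_570, 0.00),        # personal allowance (illustrative figure)
    (50_270, 0.20),        # basic rate
    (125_140, 0.40),       # higher rate
    (float("inf"), 0.45),  # additional rate
]

def income_tax(income: float) -> float:
    tax, lower = 0.0, 0.0
    for upper, rate in BANDS:
        taxable = max(0.0, min(income, upper) - lower)
        tax += taxable * rate
        lower = upper
        if income <= upper:
            break
    return tax

print(income_tax(60_000))  # the simple case is easy; the edge cases are the real job
```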


Enabling reliable real-world interaction

We currently sit in a somewhat dystopian time in which many humans work as the actuators of machines: think how surreal it is for the pickers being sacked by software in Amazon’s warehouses, or how debasing it is for the thousands of people working as “mechanical turks”, labelling data for the machine learning that will subsequently render them obsolete. Full digitisation will occur when solutions are designed from the outset to handle the diversity of outcomes across the problem space, advance the fidelity of physical, tactile manipulation, cope with the expense and difficulty of capturing edge cases in training data, and provide strong yet unobtrusive security. No big deal.

 

One of the most impressive efforts in this space is Vicarious (not a DSV co), whose robots are robust to changes in the environment specifically because they look for high-level abstractions: their papers are definitely worth a read. Portfolio company Cytera.bio is also a great example of how to create solutions that free up time for more human work: in their case, removing the need for researchers to feed and monitor cells, with all of the intricate decisions and subtleties of this work handled reliably and, critically, repeatably by the system.


Maximising the impact of advancements in compute on health, agriculture and energy

With respect to compute domain expertise, both talent and capital are siloed into specific applications and sectors, and excessive hype leaves incumbents confused and investors piling money into approaches that aren’t necessarily the best route to solving the challenge. The challenges in the computational drug development space were well covered by our friend Tom at MMC here and in our pharma thesis. Meanwhile, quantum supremacy is being declared for a machine that is in essence a random number generator (randomness being an inherent consequence of the central problem with quantum computers: noise), one that is simulatable in classical environments and has no clear route to the grandiose claims of complex chemical simulation, etc. (more detailed post on this to follow shortly).

 

Many of the challenges that these new compute paradigms may be capable of addressing, across pharma, clean energy and food resilience, are too important to be lost in this gap. We will be looking at how we can create more effective cross-disciplinary coalitions to solve them. Portfolio company Antiverse is a good example of our first steps in this space. With a mixed bio-computational team from day one, and backed by the founder of AbCam, they have focused not on the noisy market of protein optimisation, like so many others, but on solving hard targets that are impossible with existing tools (we have some incredibly exciting results to share in the coming weeks).

 

HolyGrail represents another step in this direction, in an area that’s close to our heart. They equip researchers across domains, from materials to bio, with the tools to quickly and easily find the global optimum in the enormous search space of experimental possibilities, often turning years of relatively undirected trial and error into a few days’ work. For example, one of their current customers had spent several years attempting to optimise a battery in a consumer product to meet requirements; once the parameters were entered into HG’s system, it designed a set of experiments that found the optimum within an hour, and the product was tested and completed just weeks later.
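As a very rough sketch of that experiment-design loop (with made-up battery parameters and a made-up objective, not HolyGrail’s actual method), the code below does a broad screen of the design space followed by local refinement around the best result.

```python
# Toy "propose experiments -> run -> refine" loop over a hypothetical battery design
# space. A real system would use proper surrogate models and batch selection.
import random

# Hypothetical design space: electrolyte concentration (M), electrode thickness (um),
# charge rate (C).
BOUNDS = {"conc": (0.5, 2.0), "thickness": (20, 120), "charge_rate": (0.2, 3.0)}

def run_experiment(params):
    # Stand-in for a real lab measurement, e.g. usable capacity after 500 cycles.
    return -((params["conc"] - 1.2) ** 2
             + ((params["thickness"] - 70) / 50) ** 2
             + (params["charge_rate"] - 1.0) ** 2)

def propose(around=None, spread=1.0):
    point = {}
    for key, (lo, hi) in BOUNDS.items():
        if around is None:
            point[key] = random.uniform(lo, hi)      # broad screen of the space
        else:
            width = (hi - lo) * 0.1 * spread         # local refinement around the best
            point[key] = min(hi, max(lo, around[key] + random.uniform(-width, width)))
    return point

def optimise(n_screen=20, n_refine=20):
    results = [(p, run_experiment(p)) for p in (propose() for _ in range(n_screen))]
    best = max(results, key=lambda r: r[1])
    for i in range(n_refine):
        candidate = propose(around=best[0], spread=1.0 - i / n_refine)
        score = run_experiment(candidate)
        if score > best[1]:
            best = (candidate, score)
    return best

print(optimise())
```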


Expanding human potential

Maybe one day all knowledge work will be machine-led. Knowledge = data + understanding. Currently, computers are vastly better than humans at manipulating data, but lack understanding unless the task is highly repeatable, domain-specific and at enormous scale. Because of this, for likely the next ten years the computer will remain the tool of the human, even if the manner in which we use the tool changes beyond recognition. This will include areas such as better decision-making based on complex data, increasing creativity, enabling new working cultures, especially working across multiple organisations (beyond just the bottom and top of the job market), and seamless remote presence that feels the same as being in the room with someone.


Our first mission

Beyond Von Neumann and Moore’s law

Computing architectures are increasingly specialist, from custom-designed High-Performance Computing (HPC) clusters to cloud and local ASICs. The aim has historically been to accommodate the shift towards specialist processing loads, such as neural nets, which lean heavily on a small set of repeated operations such as matrix multiplications. However, the process remains rigid, extremely costly and laden with power-speed trade-offs. And whilst there are no doubt heuristic optimisations to be made in many cases (trading off optimality, completeness, accuracy or precision for speed), there is far more to be gained from working with a substrate that could optimise for this at both the software and hardware levels.
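To illustrate the precision-for-speed trade-off in the simplest possible terms, the snippet below compares a matrix multiplication at full precision with one using crudely 8-bit-quantised weights. NumPy is used here only to show the numerical effect; the speed and energy savings come from specialised hardware that executes the low-precision path natively, which this sketch does not attempt to model.

```python
# Same matrix multiplication, full precision vs 8-bit-quantised weights.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.standard_normal((64, 256)).astype(np.float32)
weights = rng.standard_normal((256, 128)).astype(np.float32)

reference = activations @ weights  # full-precision result

# Symmetric 8-bit quantisation of the weights: scale to [-127, 127], round, rescale.
scale = np.abs(weights).max() / 127.0
weights_q = np.round(weights / scale).astype(np.int8)
approx = activations @ (weights_q.astype(np.float32) * scale)

rel_error = np.linalg.norm(approx - reference) / np.linalg.norm(reference)
print(f"relative error from 8-bit weights: {rel_error:.2%}")  # typically around 1%
```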

 

Research has been on the verge of escaping the Von Neumann architecture and Moore’s law for 20+ years: neuromorphic designs have reached production, in-memory compute has moved the memory closer to the processing, non-volatile memory always sits tantalisingly on the horizon and quantum hardware companies promise huge increases in speed by recreating gates in the analogue quantum domain.  However, until recently it has not been feasible to create architectures that can escape the inherent tradeoffs between power, speed and cost, whilst maintaining forward compatibility and manufacturability. 

 

Even the much-hyped quantum platforms are only theoretically useful for a subset of tasks and can be easily simulated on a classical computer simply by adding noise. We believe we can see a route to bypassing the traditional lock-step progression of hardware, reducing iteration cycles from years to perhaps minutes, and are currently seeking co-founders to take this opportunity forwards. If that sounds good to you, check out our latest opportunities.