digiLab was founded to provide top-tier data science to the engineering industries. A spinout from the University of Exeter, digiLab uses pioneering machine learning to transform the efficiency, resilience and environmental sustainability of its customers. This conversation with the Co-founders of digiLab, Dr Anhad Sandhu and Professor Tim Dodwell, explores the genesis of the company and reveals how digiLab is already optimising approaches to water treatment and nuclear decommissioning.
What led you to create digiLab?
Tim Dodwell: I lead the Data-Centric Engineering group at the University of Exeter, and lots of engineering companies were contacting us to ask if we could help them solve key problems by exploiting their underutilised data. These were often important, impactful problems that I knew could be addressed with state-of-the-art machine learning approaches. But they offered nothing new from a purely academic perspective, so we were unable to collaborate through the traditional university/industry model.
So Anhad, who was then a member of my academic group, and I decided to join forces in a spinout to work with these companies and deliver the real-world, transformative impact that we know machine learning can have.
How confident are you that there’s a market for what you are offering?
Anhad Sandhu: Engineering companies are crying out for it. We saw this many times over before we launched, during our participation in a customer discovery programme called Innovation to Commercialisation of University Research (ICURe), funded by Innovate UK. It is designed to take university research, and the academics involved, through a rigorous market research phase. We spoke to over 100 companies across seven continents – even Antarctica! – and the response from these potential customers was overwhelming.
Many engineering, utility or power companies have already spent a lot of money and other resources collecting data about their processes. They’ve put in that early effort, and they've been doing it for years. So they've got all this warehoused data lying dormant and making no real impact on their business. But this data is gold, if only they knew how to mine it effectively. And that’s why they come to us.
How do you like to work with your customers?
AS: Most traditional companies simply don't have data scientists working for them. We come with the integrity of a top-tier academic group and we partner with our clients, meaning they don’t need the big capital outlay of building a data science team without really knowing what they're trying to do. So our working model starts with consultancy, and that may lead to the development of bespoke, machine-learning-based software products to solve their industrial challenges.
TD: We work at a personal level, face-to-face, and we’ll take a deep dive into your business challenges, look at your data, and then build the solution with you. But if there isn't a data science/AI solution for your problem – if you can do it using a ruler rather than a supercomputer – I'd like to be the person who clarifies that for you. Unlike big consultancies like PwC or McKinsey, we won’t push you towards a predefined range of essentially off-the-shelf products.
How would you describe your technical approach?
TD: Fundamentally what we do is probabilistic forecasting. So we're taking a customer’s data – often time-series or sequential data – and then saying, based on this data and what we're observing right now, that this is how to be more intelligent about your engineering, industrial or resourcing decisions; about what you do next as a company. We call it decision intelligence.
As Anhad mentioned, companies that we want to help are likely sitting on big, complicated datasets, often unstructured, that they don't know how to exploit. We can work with them to develop or deploy the latest AI algorithms to give them that decision intelligence. We enable them to make good strategic decisions to be more sustainable, robust and profitable.
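To make "probabilistic forecasting" concrete, here is a minimal, hypothetical sketch in Python (not digiLab's actual code): it fits a simple AR(1) model to a time series and returns a forecast mean together with a widening ~95% predictive interval, the kind of uncertainty-aware output that decision intelligence builds on.

```python
import numpy as np

def ar1_forecast(series, horizon=5, z=1.96):
    """Fit a simple AR(1) model to a time series and return probabilistic
    forecasts: a mean path plus a widening ~95% predictive interval."""
    x = np.asarray(series, dtype=float)
    prev, curr = x[:-1], x[1:]
    # Least-squares estimates of the AR(1) coefficient and intercept.
    phi, c = np.polyfit(prev, curr, 1)
    resid = curr - (phi * prev + c)
    sigma = resid.std(ddof=2)  # residual noise scale

    means, lowers, uppers = [], [], []
    mean, var = x[-1], 0.0
    for _ in range(horizon):
        mean = phi * mean + c
        var = phi ** 2 * var + sigma ** 2  # forecast variance grows each step
        half = z * np.sqrt(var)
        means.append(mean)
        lowers.append(mean - half)
        uppers.append(mean + half)
    return np.array(means), np.array(lowers), np.array(uppers)
```

The interval widens with the forecast horizon, which is what lets a decision-maker weigh how confident the model is at each step, rather than acting on a single point estimate.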
Can you give me an example?
TD: Take one of our earliest clients, South West Water. They use a variety of sensors to measure the water quality of the River Exe. The company discharges treated water into the river and monitors the water quality downstream. Depending on the composition of the treated water and the downstream readings, the discharged water is dosed with coagulants, which improve water quality. But coagulants also affect the environment, and they are costly. Before we came in, they were using a simple equation and expert human judgment to make their water-dosing decisions. That was working, but they felt they could improve on this process.
So we took all their sensor readings and their unstructured data, and we designed a state-of-the-art algorithm that used those data to teach itself how South West Water could optimise the dosing while minimising the amount of coagulant used. Our system quickly learned how South West Water could maintain water quality at the desired level while reducing the amount of coagulant by 40%. A huge difference.
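The dosing decision can be illustrated with a deliberately simplified sketch. The dose-response data, curve shape, and numbers below are invented for illustration and are not South West Water's or digiLab's: fit a model of water quality against coagulant dose, then pick the smallest dose predicted to meet the quality target.

```python
import numpy as np

# Hypothetical historical records: coagulant dose vs. measured downstream
# water quality (higher is better). All values are illustrative only.
doses = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
quality = np.array([0.55, 0.72, 0.84, 0.91, 0.95, 0.96])

# Fit a simple saturating dose-response curve (a quadratic in log-dose,
# chosen purely for illustration) to the historical data.
coeffs = np.polyfit(np.log(doses), quality, 2)

def min_dose_for(target, lo=2.0, hi=12.0, steps=1000):
    """Smallest dose whose predicted quality meets the target, or None."""
    candidates = np.linspace(lo, hi, steps)
    predicted = np.polyval(coeffs, np.log(candidates))
    ok = candidates[predicted >= target]
    return ok[0] if ok.size else None

# e.g. maintain quality at 0.90 using the least coagulant:
best = min_dose_for(0.90)
```

The point of the toy is the shape of the problem: once a model links dose to quality, "minimise coagulant subject to a quality constraint" becomes a search over candidate doses rather than a judgment call.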
AS: That was great news from a sustainability perspective, because it minimises adverse effects on the river and its environment. Addressing water quality issues is high on the agenda of utility companies nationwide, particularly after a different water company, Southern Water, was fined a record £90 million last year for its unsustainable practices.
What other sectors are you working in?
AS: We are also working with the engineering giant Jacobs as they tender for decommissioning contracts for Magnox-type nuclear reactors. Decommissioning is a growing market as more reactors reach the end of their operational lives and must be returned to brownfield condition.
Where does data science come into nuclear decommissioning?
AS: Nuclear decommissioning costs billions of pounds, and to secure a government contract to perform this work, a company must show how it can keep costs under control.
For a nuclear reactor commissioned in the 1970s, say, there’s a lot of information on paper or in analogue form. And there will have been spot checks on the reactor’s radioactivity field over time, but this data is sparse, with lots of uncertainty. And you need to know the current radioactivity to plan the costly series of operations required to disassemble and decommission the reactor safely. This means going into the reactor to take sample readings in various places. Each sampling is expensive, and there is a lot of inefficiency in the industry-standard ways of sampling. What we can do is take their existing data and use it to tell them precisely where they need to sample next to get the most bang for their buck.
So you are nuclear experts, as well as data scientists?
TD: The funny thing is, we don't need to be. For example, I recently published a paper called Where to Drill Next?, in which I addressed this challenge from an academic viewpoint: where should you sample next to maximise your information gain? Using the latest Bayesian machine learning techniques, the method identifies the regions of greatest uncertainty and prioritises sampling in those locations, learning as it goes, to construct an accurate picture of a system from limited data. It is a perfect approach to this decommissioning sampling problem, and a massive money saver in a nuclear industry that often employs methods developed in the 1960s.
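The "sample where uncertainty is greatest" idea can be sketched with a toy Gaussian-process example (illustrative only, not the paper's implementation). A useful property: a GP's predictive variance depends only on *where* you have already sampled, not on the values measured, so the next reading simply goes wherever that variance peaks.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between two sets of 1-D locations."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def next_sample(x_obs, candidates, noise=1e-4):
    """Return the candidate location with the largest Gaussian-process
    predictive variance, i.e. where we currently know the least."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(candidates, x_obs)
    # Predictive variance: diag(Kss) - diag(Ks K^-1 Ks^T); diag(Kss) = 1.
    var = 1.0 - np.sum((Ks @ np.linalg.inv(K)) * Ks, axis=1)
    return candidates[np.argmax(var)]
```

Run in a loop (sample, add the new reading to `x_obs`, repeat), this is the learn-as-you-go strategy described above: each expensive measurement is placed where it buys the most information.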
What other sorts of companies are you looking to work with?
AS: At this stage, we are mostly interested in big organisations because they've got the big datasets and infrastructure. They represent an opportunity for digiLab to make a greater impact in terms of industrial and environmental sustainability, resilience, and simply making the most of finite resources.
But as we grow we will be keen to supercharge the offering of other start-ups that are also looking to make a big impact on environmental sustainability.
How do you plan to build your team in 2022?
TD: This is really exciting. Our funding is secured, so now we are building our team. We will be one of the few high-end, supercharged tech companies based in Devon, though we expect a remote-working element too, of course. For me, a keen surfer, it’s an ideal setup that also offers a great work and family-life balance.
We will be advertising specific roles, but we’re also interested to hear from enthusiastic data scientists who are full of ideas and passionate about problem-solving. We plan to build an agile and diverse team that gives people real opportunity. That's how I built my academic research group – if someone flourishes, they’ll be given room to run.
Dr Anhad Sandhu is Co-founder & CEO of digiLab.
Professor Tim Dodwell is Co-founder & CTO of digiLab, Professor in Applied Mathematics at the University of Exeter, and a Turing AI Fellow at The Alan Turing Institute in London, the UK’s national data science institute.