IoT and associated technologies such as AI depend on accurate data. But can every sensor be trusted?
We are coming out of an era where cleaning poor-quality data has been a costly exercise – the era of ERP implementations and migrations.
Now, with IoT, machines exchange data messages with other machines via cloud storage, and organizations use AI techniques, including Machine Learning (ML), to process that data at incredible speed.
So how, in this new era, are we managing the quality of IoT data?
The short answer is: we’re not. And we’re potentially sleepwalking into a situation where we ask AI to learn from flawed, poor-quality data, which can very quickly lead to exponentially flawed results.
For example, at a basic level, IoT sensors are designed in fundamentally different ways depending on the vendor: vendors in different countries will program sensors to report temperature in Celsius or Fahrenheit. And when you pour data from these disparate sources into a single cloud, the readings cannot simply be merged. That mismatch may be simple enough to fix.
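To make that concrete, here is a minimal sketch in Python of normalizing mixed-unit readings before they are merged. The record layout, field names and sample values are illustrative assumptions, not any vendor’s actual payload format.

```python
# Minimal sketch (illustrative schema, not a real vendor payload):
# normalize mixed-unit temperature readings before merging them in the cloud.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float
    unit: str  # "C" or "F" -- assumed metadata field; many real payloads omit it

def to_celsius(reading: Reading) -> Reading:
    """Convert a reading to Celsius so records from different vendors are comparable."""
    if reading.unit == "F":
        return Reading(reading.sensor_id, (reading.value - 32.0) * 5.0 / 9.0, "C")
    return reading

readings = [Reading("eu-01", 21.5, "C"), Reading("us-07", 70.7, "F")]
normalized = [to_celsius(r) for r in readings]
print(normalized)  # both values now expressed in Celsius
```

Of course, the fix is only this simple if every message actually carries its unit; agreeing on that metadata up front is the real work.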
But what if individual sensors start to drift in their readings? In the world of physical machinery there are specific rules for calibrating equipment unit by unit. But imagine you have thousands or millions of sensors – no human calibration or maintenance operative can ever keep up. And when you factor in that 10–15% of IoT sensors fail in any given installation, the task looks even more daunting.
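No human can recalibrate millions of units, but readings can at least be cross-checked against each other automatically. The sketch below shows one illustrative way to flag candidates for recalibration or replacement; the peer grouping and the 2.0-degree threshold are assumptions made for the example, not recommended values.

```python
# Illustrative sketch only: flag sensors whose readings drift away from the
# median of their peers, as a stand-in for unit-by-unit human calibration.
from statistics import median

def flag_drifting(latest_by_sensor: dict[str, float], threshold: float = 2.0) -> list[str]:
    """Return sensor IDs whose latest reading deviates from the peer median by more than threshold."""
    peer_median = median(latest_by_sensor.values())
    return [sensor_id for sensor_id, value in latest_by_sensor.items()
            if abs(value - peer_median) > threshold]

# Assumed example: four sensors measuring the same room.
room_temps = {"s-001": 21.3, "s-002": 21.6, "s-003": 27.9, "s-004": 21.1}
print(flag_drifting(room_temps))  # ['s-003'] -- a candidate for recalibration or replacement
```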
As more things become connected and machines rely on accurate data to take the right actions, calibrated, trustworthy sensors are a prerequisite for delivering on the promise of IoT.