Process historians are time-series data collection systems that record important parameters to be used for (among other things) reporting, regulatory record-keeping, and diagnostic troubleshooting. Purpose-built historians have been commercially available for about 40 years now, and MANY manufacturing facilities have them as a part of their control network. If you’re in the manufacturing space, there’s a good chance that your plants have them.
Have you ever wondered how much data has been collected by process historians over the last 40 years? To visualize how quickly the numbers escalate, consider a single historian with 10,000 tags (not a very large number for a modern historian database) that change two times per minute. In one decade, this system would generate over 100 BILLION records.
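The arithmetic behind that claim is easy to check. A minimal sketch, using the figures stated above (10,000 tags, each changing twice per minute, over ten years, ignoring leap days):

```python
# Back-of-the-envelope check of the record count above.
# All figures come from the scenario in the text.
tags = 10_000
changes_per_minute = 2
minutes_per_year = 60 * 24 * 365  # leap days ignored for a rough estimate
years = 10

records = tags * changes_per_minute * minutes_per_year * years
print(f"{records:,}")  # prints 105,120,000,000
```

Roughly 105 billion records, consistent with the "over 100 billion" figure, and that is for a single, modestly sized historian.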
A Mountain of Data
In an article published in 2013 by McKinsey, authors Martin Baily and James Manyika asserted that “Manufacturing stores more data than any other sector – an estimated two exabytes (two quintillion bytes) in 2010.”1 In case you don’t have it committed to memory, an exabyte is a million terabytes. And that data volume has undoubtedly increased significantly over the last decade. Stored process data must make up a sizeable percentage of that, but how much? I spent some time online trying to find an estimate of the data volume currently stored in historians, but eventually gave up. No matter the actual number, it’s undeniable that a mountain of data has been collected into historians in the last few decades.
That thought surfaces some other questions for me:
- In any given facility, how many people have access to that data?
- What are the hurdles to making it more widely available?
- How much of the available value has been extracted from the collected data?
- Can process history be used to predict the future of that process?
- What if there was a simple and secure way to get access to this data and add the ability to point and click your way to AI Analytics?
AI & Machine Learning
A convergence of factors has made artificial intelligence and machine learning more accessible than ever and, in the last few years, the ease with which that power can be deployed has improved by orders of magnitude. The advent of inexpensive cloud computing has made it very easy to aim the power of machine learning at the kinds of complex challenges that face manufacturers. But machine learning models need lots of data for training and validation. Where could one get such data? From your historian, of course! Put that mountain of data to work to train a model (or models) that can predict your plant’s final quality, or throughput, or energy consumption, or remaining useful life, or probability of failure, or… You get the idea.
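To make the idea concrete, here is a minimal sketch of that workflow on synthetic "historian" data. The tag names, values, and relationships are all invented for illustration; a real project would pull actual tag history and likely use a more sophisticated model than ordinary least squares.

```python
# Sketch: fit a model on synthetic historian-style data to predict a
# quality metric from two process tags. Everything here is invented.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000  # one synthetic sample per historian scan

# Two hypothetical process tags: reactor temperature and feed rate.
temperature = rng.normal(350.0, 5.0, n)
feed_rate = rng.normal(120.0, 10.0, n)

# Invented ground truth: quality depends linearly on both tags, plus noise.
quality = 0.8 * temperature - 0.3 * feed_rate + rng.normal(0.0, 1.0, n)

# Fit ordinary least squares (a stand-in for fancier machine learning).
X = np.column_stack([temperature, feed_rate, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)

# Score the fit: R^2 close to 1 means the tags explain quality well.
predicted = X @ coef
r2 = 1 - np.sum((quality - predicted) ** 2) / np.sum((quality - quality.mean()) ** 2)
print(f"Coefficients: {coef[:2].round(2)}, R^2 = {r2:.3f}")
```

The point of the sketch is the shape of the task, not the model: historian tags become training features, a measured outcome becomes the target, and the fitted model can then score fresh data as it arrives.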
What would the impact on your organization be if you could predict a future problem and take action to correct it before it actually happens? Join me for a look at a continuum of modern, cloud-based predictive analytics tools that could do just that.
AVEVA Insight guided and advanced analytics