As part of the crescendo of noise surrounding the Industrial Internet of Things (IIoT), predictive maintenance is generally viewed as one of the most compelling initial arguments for moving forward. As manufacturers make their way into implementations, however, they’re finding that collecting data from connected assets is just the tip of the iceberg. Identifying the right data, contextualizing that data so it’s mapped to desired objectives, and linking the entire process back into existing workflows is where the real heavy lifting comes in, creating obstacles for all but the most progressive companies.
The sheer magnitude of the leap—moving from decades-old, clipboard-based data collection and maintenance processes performed by onsite plant personnel to digital workflows that can be automated and orchestrated by remote workers—requires a certain level of confidence and digital infrastructure maturity not yet pervasive among a majority of manufacturers. Many legacy industrial assets are still not outfitted with sensors, let alone connected, which impedes any potential data collection. In addition, most manufacturers don’t yet have a clear picture of how to create and apply machine learning and predictive models to drive these next-generation maintenance workflows.
“Despite the IIoT buzz, there’s a lot of FUD (fear, uncertainty and doubt),” says Kevin Starr, advanced service global program director for ABB. “Companies know there really is an industrial revolution on the horizon and there’s a lot of discussion, but they don’t want to make a mistake and have to redo their efforts.”
Manufacturers have latched on to the concept of predictive maintenance as a strong return on investment (ROI) use case for IIoT, hoping to benefit by eliminating downtime, streamlining service calls and reducing maintenance costs. Although those metrics are a viable upside to IIoT-enabled predictive maintenance applications, manufacturers are realizing that it’s a lot harder to scale from pilot project to production, acknowledges Rob Patterson, vice president of strategic marketing for the ThingWorx IoT platform at PTC.
“Many companies get stuck in the pilot purgatory of IoT projects,” Patterson says, explaining that manufacturers can get tripped up by miscalculating what it takes to design and build the predictive maintenance models while lacking the on-staff data scientists and domain experts who are critical to getting projects off the ground. “People have the perception that it’s a magic black box that ultimately produces outcomes and predictions with very little involvement from human hands. But this is a fundamentally flawed perception of what machine learning entails.”
The predictive maintenance stack
As with any new and transformative initiative, the fact that there isn’t a packaged, out-of-the-box solution for predictive maintenance is a hurdle for most manufacturers. Mature predictive maintenance applications require more than just connected and sensored industrial assets, which in itself is no easy task. An effective solution is architected around several layers: a cloud platform for ingesting and aggregating all the various data points; analytics and machine learning capabilities for combing through data and unearthing insights; integration capabilities for access to other core enterprise systems like enterprise resource planning (ERP) or service platforms; a well-designed user experience, based on dashboards, that empowers the maintenance worker to easily make the required fixes; and process changes to tie the predictive insights into existing maintenance service systems and workflows.
“This requires a fairly decent level of IoT technical maturity,” notes Alan Griffiths, senior industry analyst specializing in digital transformation and IoT for Cambashi, a global research firm. “The technology is coming along, and some companies are using it effectively. But not many are putting implementations together at a mass production level.”
Griffiths says some manufacturers have been doing a form of predictive maintenance for some time—tracking assets via sensor data to understand, for example, when a particular motor type should be replaced. But this is typically done more at an aggregated level as opposed to getting a window into the condition and failure potential of one specific asset. As companies move into the realm of IoT-enabled predictive maintenance, they typically first do simple monitoring to issue alerts if a temperature goes above a certain threshold, for example, and then move on to control and optimization, whereby they apply analytics to the sensor data to gain insights into failure patterns and preemptive fixes, he explains.
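The first rung Griffiths describes, simple threshold monitoring, can be sketched in a few lines. The threshold values, asset names, and alert format below are illustrative assumptions, not values from any vendor platform:

```python
# Minimal threshold-alert sketch: the simple monitoring stage that
# typically precedes control and optimization. All limits here are
# hypothetical examples, not recommended operating values.

TEMP_LIMIT_C = 85.0          # hypothetical over-temperature threshold
VIBRATION_LIMIT_MM_S = 7.1   # hypothetical vibration velocity limit

def check_reading(asset_id, temp_c, vibration_mm_s):
    """Return a list of alert strings for one sensor reading."""
    alerts = []
    if temp_c > TEMP_LIMIT_C:
        alerts.append(f"{asset_id}: temperature {temp_c} C exceeds {TEMP_LIMIT_C} C")
    if vibration_mm_s > VIBRATION_LIMIT_MM_S:
        alerts.append(f"{asset_id}: vibration {vibration_mm_s} mm/s exceeds {VIBRATION_LIMIT_MM_S} mm/s")
    return alerts

print(check_reading("motor-12", temp_c=91.0, vibration_mm_s=4.2))
```

The point of the sketch is how little intelligence this stage contains: each reading is judged in isolation, with no memory of past behavior, which is why companies then move on to applying analytics across historical data.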
Though many customers of Fluke, which provides computerized maintenance management software (CMMS) and enterprise asset management (EAM) software, are talking about predictive maintenance, they don’t fully understand the concept, nor have they built a proper condition monitoring foundation to capture the base data and provide context for whether something is about to fail or deteriorate, says Kevin Clark, vice president of Accelix. Accelix is a new integration layer that connects Fluke’s eMaint CMMS with a variety of connected tools such as sensors as well as third-party systems.
Customers can easily start collecting and aggregating data on their assets via the Fluke Connect Cloud, Clark explains. The hard part is knowing what that data is telling you. “The basics of predictive is understanding the data,” he says. “This is why predictive hasn’t taken off—because it hasn’t been simple enough.”
Condition-based data in the Fluke Connect Cloud can send alarms if an asset is running too hot or vibrating too much, triggering a workflow to an operator, which lets companies move from condition-based monitoring to condition-based maintenance. Even so, Clark admits that’s still not predictive. To get there, companies need to go through the proper exploration to figure out what kinds of failure modes they want to detect and then what kind of data, beyond what’s collected by sensors, is critical for creating context for a more actionable analysis. “It takes a study of your equipment and the processes around that asset to truly understand what a predictive point is,” he says. “Most haven’t gotten there yet because it’s hard.”
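The alarm-to-workflow step Clark describes can be sketched as a simple mapping from an alarm condition to a maintenance work order. The `WorkOrder` structure, condition names, and failure modes below are assumptions for illustration, not the eMaint or Accelix data model:

```python
# Hedged sketch of condition-based maintenance: an out-of-range
# condition opens a work order routed to an operator. The fields
# and the condition-to-failure-mode table are made-up examples.

from dataclasses import dataclass

@dataclass
class WorkOrder:
    asset_id: str
    failure_mode: str
    priority: str

# Hypothetical mapping from alarm condition to suspected failure mode.
FAILURE_MODE_BY_CONDITION = {
    "over_temperature": "bearing lubrication breakdown",
    "high_vibration": "shaft misalignment",
}

def alarm_to_work_order(asset_id, condition):
    """Turn a triggered alarm into a work order for a technician."""
    mode = FAILURE_MODE_BY_CONDITION.get(condition, "unknown - inspect")
    priority = "urgent" if condition in FAILURE_MODE_BY_CONDITION else "routine"
    return WorkOrder(asset_id, mode, priority)
```

Note that the mapping table is exactly the artifact Clark says is hard to build: it only exists after a company has studied its equipment and identified which failure modes it wants to detect.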
Manufacturers should first take a step back and identify what assets are the most critical, and thus ripe for predictive maintenance. Implementers should distill the equipment by manufacturer and equipment type, understand which equipment impacts the business most and clarify what possible failure modes are the most dominant, says Joe Nichols, chief operating officer of industrial applications for GE Digital. Once that exercise is complete, there’s a need to identify data that can contextualize the findings—whether that’s information from another enterprise system that can shed light on past maintenance records or on-going quality issues, for example, or even external resources such as weather or geospatial information.
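The criticality exercise Nichols describes amounts to scoring each asset and ranking the results. A minimal sketch, with made-up assets and a simple impact-times-likelihood risk score standing in for a fuller failure-mode analysis:

```python
# Sketch of an asset-criticality ranking: score each asset by
# business impact and dominant failure-mode likelihood, then rank
# to pick predictive maintenance candidates. Assets and scores
# are illustrative, not drawn from any real plant.

assets = [
    {"id": "press-01", "impact": 9, "failure_likelihood": 7},
    {"id": "pump-07", "impact": 4, "failure_likelihood": 8},
    {"id": "conveyor-3", "impact": 8, "failure_likelihood": 3},
]

def criticality(asset):
    # Simple risk score: business impact x likelihood (1-10 scales).
    return asset["impact"] * asset["failure_likelihood"]

ranked = sorted(assets, key=criticality, reverse=True)
print([a["id"] for a in ranked])  # highest-risk assets first
```

In practice the scores would come from the distillation Nichols outlines, by manufacturer, equipment type, and dominant failure mode, and the contextual data gathering would then focus on the assets at the top of the list.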
Subject matter expertise on issues like when to make repairs, the most common periods for asset downtime, and the root cause of common failures is also critical to creating predictive maintenance models, according to Phil Bush, product manager for remote monitoring and analytics services at Rockwell Automation. “You need to look at patterns to develop correlations between certain performance data and events you’ve seen in the past and what normal patterns of behavior are,” he explains. “From there, you can tell if something is wrong or not.”
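Bush’s point about learning normal patterns and flagging deviations can be sketched with a simple statistical baseline. This uses a mean and standard deviation over historical readings; real predictive models are multivariate and far richer, and the readings below are hypothetical:

```python
# Sketch of pattern-based anomaly flagging: learn what "normal"
# looks like from past performance data, then flag readings that
# deviate from it. A z-score baseline stands in for the richer
# models domain experts and data scientists would actually build.

import statistics

def fit_baseline(history):
    """Learn normal behavior from historical readings."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(reading, mean, std, z_limit=3.0):
    """Flag readings more than z_limit standard deviations from normal."""
    return abs(reading - mean) > z_limit * std

history = [70.1, 69.8, 70.4, 70.0, 69.9, 70.2]  # hypothetical temperatures
mean, std = fit_baseline(history)
print(is_anomalous(78.0, mean, std))  # a large deviation is flagged
```

The subject matter expertise Bush describes enters when deciding which signals to baseline, over what window, and which deviations actually correlate with the failure events seen in the past.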
The data dilemma
Getting the underlying data right is also critical to fueling the right analytics. Not only is it important to identify the right data resources, you also need to ensure the data is accurate and at a granular enough level to enable predictions, notes Michael Donohue, vice president for thermal energy at Uptake. Uptake markets a cloud-based system that overlays existing ERP or supervisory control and data acquisition (SCADA) systems, ingesting and analyzing sensor, enterprise and even contextual data such as weather and lightning strikes to deliver insights related to performance, energy optimization and predictive maintenance.
MidAmerican Energy, an energy provider serving nearly 1.5 million electric and natural gas customers and a wholly owned subsidiary of Berkshire Hathaway Energy, turned to the Uptake predictive analytics suite to increase availability and optimize performance for turbines in one of its wind parks. By ingesting and analyzing data from the turbines, Uptake was immediately able to identify that something was off with one of the main bearings of a particular turbine because it bore a signature similar to prior failures that had led to catastrophic gearbox damage, Donohue explains. Uptake alerted the MidAmerican Energy engineers, and the team was able to issue a quick fix for less than $5,000. Now, months into the Uptake initiative, the company has saved $250,000 by finding a handful of similar anomalies among the more than 10 percent of its turbines now under management.
“The ability to plan ahead and see anomalies and failures ahead of time, be able to plan those, get technicians into the tower one time to make repairs—that’s the leading edge of maintenance programs,” says Mark Jeratowski, maintenance manager for MidAmerican Energy.
Just as important as the data is creating closed-loop processes and integration into other enterprise systems so you can automatically initiate a maintenance action when required. To that end, GE is tightly coupling its ServiceMax cloud-based field service management with its asset performance management (APM) software to close the gap, facilitating work orders so problems get fixed proactively and providing contextual information in areas like service history, including previous failure information and frequency of break-fix issues.
“We’ve seen people implement predictive systems, manage their data, uncover anomalies they don’t like—but they don’t have a systematic way to close the loop on alerts coming out of that,” Nichols explains.
In the end, predictive maintenance projects will span a range of maturity levels, from mapping a simple temperature threshold to a critical failure mode to multivariable analysis of changes in patterns. “They will go from the super simple to solving really big problems or the super complex to solving really big problems,” Nichols says. “Customers need to know what they are trying to solve and what level of sophistication they can handle.”