Case Study

Using SQL to increase uptime

An East Coast processor worked with a system integrator to learn what was causing downtime and to pinpoint the equipment involved.

When an East Coast processor wanted to know what was causing downtime, which equipment was involved, and how much downtime it was actually experiencing, it turned to Andy Baughman, an M.G. Newell control systems engineer.

M.G. Newell is a certified member of the Control System Integrators Association (CSIA).

Baughman upgraded the controls and programmed the system to collect all relevant data about the process. He then developed a solution based on SQL (Structured Query Language) programming that allowed the processor to data-mine the collected process information. In the end, the processor was able to:

• See previously undetected trends in the equipment and process.

• Tailor preventive maintenance using these trends.

• Allow users to integrate web features, such as talk-to-text.

• Enable its West Coast office to monitor the process in real time.

• Take advantage of SQL’s versatility to add other options, such as bar-coding, in the future.

• Improve monitoring and data collection. (For example, the processor discovered it was running more than 50 units per minute instead of the 30 units per minute previously thought.)

SQL is a programming language used for storing, manipulating and retrieving data from databases. Database tables collect and store process and product data in a way that can be retrieved and used later. SQL allows users to describe, define and manipulate that data, and it can even be embedded within other programming languages.
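Those three operations — defining, manipulating and retrieving data — can be sketched with Python’s built-in sqlite3 module. The table and column names here are illustrative, not the processor’s actual schema:

```python
import sqlite3

# In-memory database for illustration; a plant system would use a
# server-based engine (SQL Server, PostgreSQL, etc.) instead.
conn = sqlite3.connect(":memory:")

# Define: describe the shape of the data.
conn.execute("CREATE TABLE product (sku TEXT PRIMARY KEY, name TEXT, rate_upm REAL)")

# Manipulate: insert and update rows.
conn.execute("INSERT INTO product VALUES ('A100', 'Widget', 30.0)")
conn.execute("UPDATE product SET rate_upm = 52.5 WHERE sku = 'A100'")

# Retrieve: query the stored data back out.
rate = conn.execute("SELECT rate_upm FROM product WHERE sku = 'A100'").fetchone()[0]
print(rate)  # 52.5
```

Because the queries are plain strings, the same statements could be issued from any host language, which is what makes embedding SQL in other programs straightforward.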

A “flat file” stores data in columns and rows, like an Excel spreadsheet. “It is preferable to collect data in database tables rather than flat files because downtime doesn’t fit the flat file model,” says Baughman. “For any given production run, the number of downtime events can vary from zero to infinity. A flat file would need enough columns to store the maximum number of downtime events. This would not be efficient or practical because any production run that concluded with no downtime would have zeros or ‘null’ values in every column assigned for downtime events. The file itself would be huge and contain very little to no information. Likewise, retrieving any information would be tedious.”

A better solution, says Baughman, is to store the data in multiple “tables.” A “job” table could store all the information regarding a unique production run. A job marries a specific product to a specific production line. It documents details about that job, such as start/stop times, product information, production rate, etc. A “downtime” table could store all the information regarding each downtime occurrence. The two tables could then be linked to each other in a database. A new field could be added to the job data to represent the number of downtime events that occurred during each production run.  
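This two-table design might look roughly like the following sketch, again using sqlite3 with hypothetical table and column names. A run with many downtime events and a run with none each occupy exactly one row in the job table, and a join recovers the per-run downtime counts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# "job" table: one row per production run, marrying a product to a line.
conn.execute("""
    CREATE TABLE job (
        job_id     INTEGER PRIMARY KEY,
        product    TEXT,
        line       TEXT,
        start_time TEXT,
        stop_time  TEXT
    )
""")

# "downtime" table: one row per downtime event, linked back to its job.
conn.execute("""
    CREATE TABLE downtime (
        event_id INTEGER PRIMARY KEY,
        job_id   INTEGER REFERENCES job(job_id),
        reason   TEXT,
        minutes  REAL
    )
""")

# One run with two downtime events, one run with none.
conn.execute("INSERT INTO job VALUES (1, 'Widget', 'Line 1', '08:00', '16:00')")
conn.execute("INSERT INTO job VALUES (2, 'Widget', 'Line 2', '08:00', '16:00')")
conn.execute("INSERT INTO downtime VALUES (1, 1, 'jam', 12.0)")
conn.execute("INSERT INTO downtime VALUES (2, 1, 'changeover', 30.0)")

# A LEFT JOIN keeps jobs with zero downtime events -- no wasted columns,
# no rows full of nulls.
rows = conn.execute("""
    SELECT j.job_id,
           COUNT(d.event_id)            AS events,
           COALESCE(SUM(d.minutes), 0)  AS total_minutes
    FROM job j
    LEFT JOIN downtime d ON d.job_id = j.job_id
    GROUP BY j.job_id
    ORDER BY j.job_id
""").fetchall()
print(rows)  # [(1, 2, 42.0), (2, 0, 0)]
```

The derived event count and total illustrate the new field Baughman describes: rather than reserving columns for a maximum number of events, the number of downtime events per run is simply computed from the linked table.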

Additionally, database tables can be added or deleted at any time. For example, the East Coast processor plans to add bar-coding functionality in the future. By collecting as much data as possible now, the processor ensures the information can be data-mined later.
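Adding a table later is a non-disruptive operation: the existing schema and its data are untouched. A hypothetical sketch of bolting a bar-code table onto an existing job table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (job_id INTEGER PRIMARY KEY, product TEXT)")

# A new table can be created at any time without touching existing ones.
# (The barcode_scan name and columns are illustrative.)
conn.execute("""
    CREATE TABLE barcode_scan (
        scan_id    INTEGER PRIMARY KEY,
        job_id     INTEGER REFERENCES job(job_id),
        code       TEXT,
        scanned_at TEXT
    )
""")

# ...and dropped just as easily if the feature is retired.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['barcode_scan', 'job']
```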
