
PREDICTIVE ANALYTICS
Three Principles of Leveraging Predictive Analytics for Safety
BY NICK BERNINI
Ask anyone who is looking for more insight into their operations and they will say they need more analytics. But do they know what they really want? Analytics has become such a generic term with such a lack of definition that many who ask for analytics don't really know what they are asking for. At its most basic, analytics is the use of data to make informed decisions.
How can a safety professional benefit from implementing an analytics program? Think of the sea of information your teams have entered into multiple tools and systems. Hidden in all those disparate data sources is a lot of valuable information. Now, consider the time, effort, and resources already invested in measuring and reporting risk across your organization's environment, and in understanding that data. That's where analytics comes into play.
A form of advanced analytics is predictive modeling. With predictive modeling, data is used to predict an outcome, and the toolset used to do so is machine learning. Machine learning uses the sea of data you are currently gathering to predict where risk is going to be higher. This takes your analytics to the next level.
How does machine learning accomplish the "next level" of analytics? Studies show that the human brain can handle at most four pieces of information at a given time. To better assess risk, it is typically necessary to leverage many more than four pieces of information, which leaves safety professionals between the proverbial rock and a hard place. Additionally, when trying to mitigate risk, organizations tend to increase the amount of data gathered and metrics tracked, which only deepens the already unmanageable sea of data. How can all this data be leveraged together, at the same time, to understand where risk is likely to occur? The answer is predictive modeling.
One of the many benefits of predictive modeling is the capability to feed tens, hundreds, or even thousands of disparate data points into software to identify patterns. The hope is that the patterns found can be used to make informed decisions that sway the outcome favorably (i.e. mitigate risk). But what is a model? At its core, a predictive model is just an algorithm. The algorithm is given data, such as leading indicators, told what events to look for (i.e. risk or lagging indicators), and then finds the patterns that correlate the two. There are no emotions. No biases. Just 1s and 0s. These algorithms have been proven to work with up to 97 percent accuracy. Empirically, these algorithms can forecast risk with better than random chance. Another way to think of it is that predictive modeling ensures you get the most
value out of the sea of data you gather by clearly identifying the areas that need further attention. That is the real appeal of predictive modeling. It is a tool in the safety toolbox specifically suited to making sense of the data that's already been collected.
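As a rough illustration of the idea described above, the sketch below feeds leading-indicator data into an algorithm and asks it to learn the patterns that precede incidents (the lagging indicator). The file name, column names, and the choice of scikit-learn's random forest are assumptions made purely for illustration; the article does not prescribe a particular toolset.

```python
# Minimal sketch: a predictive model that learns patterns linking
# leading indicators to incidents. The data source and column names
# are hypothetical placeholders, not from the article.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

data = pd.read_csv("site_safety_records.csv")            # hypothetical export

features = ["near_misses", "overtime_hours", "training_gap_days",
            "open_corrective_actions"]                    # leading indicators (assumed names)
X = data[features]
y = data["incident_occurred"]                             # lagging indicator: 1 = incident, 0 = none

# Hold out part of the history to check predictions against what really happened.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)                               # the algorithm finds the patterns

print("Holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice, the inputs would come from whatever tools and systems the organization already uses to track leading and lagging indicators.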
This predictive modeling technology centers on three foundational principles, each of which encompasses technical and ideological concepts. These foundations are key for any organization seeking to gain the utmost value from predictive analytics. These pillars include:
Trusting the model
If the goal is to mitigate risk by leveraging data and analytics, the foundation for that goal is having trust in the model. As all decisions made afterwards will be based on the outcome of this model, there needs to be faith that the foundation of model-based decisions is strong and valid. In validating trust in a model, there are three items that must be addressed:
Accuracy. Simply put, when a model makes a prediction, that prediction should be accurate. Accuracy is typically assessed by testing the model against your historic data and checking whether the incidents it flags actually occurred. Working with your internal data team or with external consultants can then translate the model's accuracy back into terms that are meaningful to your organization.
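To make that accuracy check concrete, a sketch along these lines compares what a model predicted for past periods against whether an incident actually occurred. The numbers are made-up placeholders intended only to show the calculation, not real results.

```python
# Sketch of an accuracy check: compare the model's historic predictions
# against the incidents that actually occurred. Values are illustrative only.
from sklearn.metrics import accuracy_score, confusion_matrix

actual    = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0]   # 1 = incident really happened
predicted = [0, 1, 0, 1, 1, 1, 0, 0, 0, 0]   # model's call for the same periods

print("Accuracy:", accuracy_score(actual, predicted))        # share of correct calls
tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()
print(f"Caught incidents: {tp}, missed incidents: {fn}")
print(f"False alarms: {fp}, correctly quiet periods: {tn}")
```

When incidents are rare, raw accuracy alone can look deceptively high, which is one reason translating the model's performance back into organizational terms matters.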
Stability. The next metric used to gain trust in the model is stability. It is necessary to have an accurate model, but if the model is only accurate one time, the efforts are wasted. The goal is to have an accurate model that is also stable, which is where machine learning comes into play. Machine learning refers to the process by which the computer (i.e. machine) is not told what to look for; rather, it is given a large quantity of data and, in this instance, finds the patterns that predict risk.
The intent is for the machine to learn from the data. This trains your organization's unique algorithm while validating that the model has learned rather than memorized. For example, if three years of data were acquired to build the model, the first two years would be used to train the model. Over this partition, the model is learning to find patterns that predict risk. Once a suitable model is found, we then evaluate this model across the third year of data that was held out of the training process. Here, we are looking for consistency of the model's performance across both the training partition of data and the holdout partition of data. If accuracy remains consistent between these two partitions, we can conclude that we have a stable model, and that this model can be trusted.
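A minimal sketch of that train-and-hold-out check, under the same hypothetical file and column names used earlier, might look like the following. The cutoff date assumes the three years of data run from 2016 through 2018; the essential point is the time-based split and the comparison of accuracy across the two partitions.

```python
# Sketch of the stability check described above: train on the first two
# years of data, hold out the third year, and compare accuracy on both
# partitions. File, columns, and dates are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

data = pd.read_csv("site_safety_records.csv", parse_dates=["record_date"])

train = data[data["record_date"] < "2018-01-01"]      # first two years
holdout = data[data["record_date"] >= "2018-01-01"]   # third year, never seen in training

features = ["near_misses", "overtime_hours", "training_gap_days",
            "open_corrective_actions"]

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(train[features], train["incident_occurred"])

train_acc = accuracy_score(train["incident_occurred"],
                           model.predict(train[features]))
holdout_acc = accuracy_score(holdout["incident_occurred"],
                             model.predict(holdout[features]))

# A stable model shows similar accuracy on data it learned from and on data
# it has never seen; a large gap suggests it memorized rather than learned.
print(f"Training accuracy: {train_acc:.2f}  Holdout accuracy: {holdout_acc:.2f}")
```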