What is a Good SmartSignal Model? Presented by Joe Milton Engineering Manager Reliant
Outline
– Reliant generating fleet and OSIsoft PI infrastructure
– Virtual M&D concept
– Human factors
– What is a good model?
– KPIs and other metrics
Generating Fleet Location and Fuel Type
Common Plant Historian – OSIsoft PI
– 29 PI servers
– 203 interfaces to various systems, process displays & reports
– 350K tags (real-time data points)
OSIsoft PI took care of history and current values, but not what the values should be… We selected SmartSignal to leverage our existing OSI PI infrastructure and predict…
SmartSignal Predictive Maintenance Tool
SmartSignal takes data from our PI infrastructure (vibration, temperature, pressure, etc.) and feeds it into a software model. The model shows how a typical piece of equipment should operate over various loads and ambient temperatures. If the data goes outside the predictive model, an alarm is sent to the station for further investigation. In essence, we make the computer do the work of scanning all of the plant's sensors and instruments for changes in known equipment behavior.
Defining WatchList Graphing Terms: Actual, Estimate, Residual, Alert, Incident
– Actual (blue): from OSI PI
– Estimate (green): generated by SSC models
– Residual = Actual − Estimate
– Alert (red X): statistically significant residual value
– Incident (diamond): multiple samples or sensors showing alerts; drives the WatchList
Typical Results & Additional Terms
– VSG (green diamond): Virtual Signal Generation was enabled
– Missing Value (blue X): signal gave a NaN value
SmartSignal – Site WatchList Models
Incidents can be viewed by clicking the arrow button.
SmartSignal – Site Model Detail
– Site: the site you are on.
– Incident List: a list of incidents on the models.
– Chart View: the blue line is the actual PI data; the green line is the model estimate.
SmartSignal – Reliant Scope
– 67 coal & natural gas power units across the U.S., totaling 13,450 MW
– Rotating & non-rotating balance-of-plant assets monitored: 411 assets, 1,174 models
– 30K+ sensors turned into exception-based monitoring
Goals include:
– Early warning of equipment faults & process problems
– Maximize availability
– Minimize forced outages
– Improve unit heat rates
Reliant's "Virtual" M&D Center
Staff of three working 5×8 at their own cubes:
– Review the WatchList
– Notify plants when equipment issues are detected
– Tune models
– Build new models
– Teach WatchList classes
– Document issues and model changes
Plant champions:
– Look at the WatchList
– Use the log
– Deal with the easy items
Typical E-mails to Plant
Typical Plant Response
The Plant Champion will commonly use PI ProcessBook or PI DataLink to confirm the item and issue a work order.
Example of a Full Cycle of a Catch
From: Sent: Thursday, November 30, :34 PM
"This is a pretty significant movement on the FD Fan Motor outboard bearing (about 17 deg above expected currently)."
Plant response: The oil levels are all good and the filters have been changed. The filters were dirty, and the temps are dropping on the motor after the changeout.
What is a Good Model?
Good Model Brainstorm
Truth Table (Logic)
– SS catch + machine issue = true catch
– SS no catch + machine issue = missed catch
– SS no catch + no machine issue = true negative
– SS catch + no machine issue = false catch
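The truth table can be written directly as a small function. This is a plain restatement of the four cells above; the label strings are the slide's own terms.

```python
# Sketch: the catch/no-catch truth table as a lookup.
def classify(ss_catch: bool, machine_issue: bool) -> str:
    """Map a (SmartSignal result, ground truth) pair to its truth-table cell."""
    if ss_catch and machine_issue:
        return "true catch"
    if ss_catch and not machine_issue:
        return "false catch"
    if not ss_catch and machine_issue:
        return "missed catch"
    return "true negative"
```

Counting each cell over a tracking log gives the raw material for the metrics discussed later.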
What is a Good Model So Far? Modeling elements – It has catches (detects changes) – It has no misses (or few) – It uses as-found data Human elements – It has engaged users
Metrics
– True catches: review the tracking log for equipment/instrumentation catches
– Missed catches (long term): use Power GADS to review major events. Was the system modeled? If not, should it be? Build it. If it was modeled, review and improve the model.
– No catches & no machine problems: could be a sign of a good program, or could mean "just wait"
– False catches: track issues; use log statistics and target problematic models
Catches (*2006 catches do not cover a full year)
The table shows true catches and false catches.
False Catch
SmartSignal alerts indicate a change from the model data set (state matrix):
– A repair could cause a change
– A new season
– An operational mode change
– An abnormal configuration
– Or bad modeling data
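The "state matrix" idea can be illustrated with a minimal similarity-based sketch: the estimate for a new sample is taken from the historical normal state most like it, so any condition absent from the training set (a new season, a new mode, a repair) shows up as a large residual. This is only a nearest-neighbor illustration of the general concept, not SmartSignal's proprietary algorithm; all names and numbers are assumed.

```python
import math

# Sketch: nearest-neighbor estimate from a state matrix of normal operation.
def estimate(sample, state_matrix):
    """Return the training state closest (Euclidean distance) to the sample."""
    return min(state_matrix, key=lambda s: math.dist(s, sample))

# Assumed training states: (load MW, inlet temp F, bearing temp F), all normal.
states = [(100, 70, 150), (200, 70, 160), (300, 70, 170)]

sample = (210, 71, 180)            # new reading with an elevated bearing temp
est = estimate(sample, states)     # nearest normal state: (200, 70, 160)
resid = sample[2] - est[2]         # residual on the bearing temperature: 20
```

If the plant enters a mode that was never in `states`, the nearest "normal" state is a poor match and residuals stay high, which is exactly how a stale state matrix produces false catches.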
Model with False Alerts
Detailed review of this model's data set found that it had two operating modes, A and B, with the same inlet temperature and with A downstream of B.
Model with False Alerts
Detailed review of this model's data set found two spikes that had been included in the model.
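Spikes captured in the training history can be screened out before a model rebuild. A minimal sketch, assuming a simple median/MAD outlier rule (the function name and threshold are illustrative, not SmartSignal's data-cleaning procedure):

```python
import statistics

# Sketch: drop spike samples from training data using a robust median/MAD rule.
def drop_spikes(samples, k=5.0):
    """Keep samples within k median-absolute-deviations of the median."""
    med = statistics.median(samples)
    mad = statistics.median([abs(s - med) for s in samples])
    return [s for s in samples if abs(s - med) <= k * mad]

raw = [100, 101, 99, 100, 250, 101, 100]  # 250 is a spike caught in history
clean = drop_spikes(raw)                  # spike removed, normal data kept
```

The median/MAD rule is used here instead of mean/standard deviation because a single large spike inflates the standard deviation enough to mask itself.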
Human Factors
– How can acceptance be measured?
– Is the user engaged?
– What are good KPIs?
– How can it be made "fair" for peaking and base-load units?
Levels of SmartSignal Utilization
– Basic: the WatchList Team (WLT) serves as an M&D center; the plant responds to WLT questions about their units and WatchList items.
– Engaged: the user logs into the WatchList at least once a week (on average).
– Committed: the user logs in several times a week (on average), adds information to the log, and clears alerts from the WatchList.
– Power user: exceeds the committed level by pressing model issues and providing a dialogue about known plant repairs and changes.
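The login-frequency part of these levels can be binned mechanically. A sketch under stated assumptions: "several times a week" is taken as 3 here, and the qualitative power-user criteria (pressing model issues, dialogue about repairs) are deliberately left out because they cannot be inferred from login counts alone.

```python
# Sketch: map average weekly WatchList logins to a utilization level.
# Thresholds are assumptions; "power user" needs qualitative evidence
# (model dialogue, repair feedback) and is not assignable from logins alone.
def utilization_level(logins_per_week: float) -> str:
    if logins_per_week >= 3:     # "several times a week" assumed as 3+
        return "committed"
    if logins_per_week >= 1:     # at least weekly
        return "engaged"
    return "basic"
```
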
Levels of SmartSignal Utilization 2007 – ‘08
KPI Issues
Terms like "basic" and "power user" work well over a full year, but they did not track seasonal units well. We moved to a new metric that is much closer to the desired outcome: if the unit is running, log in.
SS Login Ratio “Average number of service hours between logins”
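The ratio is straightforward to compute, and it is naturally "fair" to peaking units because a unit that is not running accrues no service hours. A minimal sketch (names are hypothetical):

```python
# Sketch: "average number of service hours between logins" for one unit.
# Lower is better; a running unit with no logins scores worst.
def login_ratio(service_hours: float, login_count: int) -> float:
    if login_count == 0:
        return float("inf")      # unit ran but was never reviewed
    return service_hours / login_count

ratio = login_ratio(720.0, 12)   # 720 service hours, 12 logins -> 60.0 h/login
```
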
Log Entries
– % of plants had no log entries
– 78% of log entries were at one plant ("C")
2008:
– Only 10% of plants have no log entries for 2008
– Log entries more evenly spread across the fleet
– 119% increase in total log entries
Lessons
– The knowledge of how the plant should run comes from how it has run (PI data)
– Keeping the messenger alive is part of the task (instruments)
– Eliminate variables (do not add them)
– Model to detect changes; exception-based
– Statistics-based technologies are not common in plant environments
– Tools are not complete solutions; people still have to act