Standard definitions for the benchmarking of availability and utilization of equipment

A standard has been created for the surface mining industry to help in comparing equipment availabilities, operating parameters, and utilization. The standard definitions were reported at the Annual General Meeting of the Canadian Institute of Mining, Metallurgy and Petroleum.

The results of the research project, intended to enable comparison of equipment performance across the mining industry, are presented in the following report, "Standardization of Definitions for Benchmarking," by Zoltan W. Lukacs.



Standardization of Definitions for Benchmarking - Zoltan W. Lukacs, P. Eng.

Dedication

This report is dedicated to the memory of Ian Muirhead, who at the time of his untimely passing was Director of the Department of Mining and Petroleum Engineering at the University of Alberta. Ian understood industry's need to conduct this study, and provided the impetus to get the project off the ground.

Credit goes to Blair Tuck, whose research for his Master's project formed the basis for the conclusions reached within this report.

Credit is also extended to Chris Barclay of Luscar Ltd. and Denise Duncan of Syncrude Canada Ltd., who, as members of the industry steering committee for the project, provided the initial direction and assistance in conducting the surveys.

Executive Summary

The desire to engage in collaborative relationships to gain competitive advantage at a global level was the driver for a proposal by the Surface Mining Association for Research and Technology (SMART) to commission a research project intended to enable comparison of equipment performance across the mining industry.

As a starting point, it was decided to focus on the development of common definitions for availability and utilization. A University of Alberta graduate student, in fulfillment of a Master's degree project requirement, conducted a survey of twenty-five mining operations in Canada and the United States. A methodology was developed to capture industry practice and classify the responses. Several typical operating events encountered in the normal operation of a mine were identified and included in the survey for each operation to classify.

The survey found that the formulas and definitions for availability and utilization parameters were similar; however, differences in the meanings behind the formulas and in the classification of events occurring in the course of operating a mine created inconsistencies in reporting. While it is possible to derive common definitions for operating parameters, comparison is meaningless without addressing the discrepancies that occur at a more fundamental level: the classification of operating events into time categories.

With this finding, the project objectives shifted to identifying the fundamental differences in the way operations classified normal operating events. The results are summarized in this report.

As the operations surveyed were understandably reluctant to change their formulas or data collection practices, a system that uses existing data collection infrastructure to build a parallel benchmarking database has been proposed for operations wishing to participate in a benchmarking initiative. A central database is proposed in which participants would have access to the accumulated data, allowing comparison using either their own formulas and definitions (preserving comparison to historical data) or standardized benchmarking formulas developed for industry-wide comparison. The benchmarking definitions derived for this purpose would be the first step toward an industry standard for selected operating measures.

A portion of the survey was dedicated to determining the extent to which maintenance performance parameters were used, and whether interest exists in benchmarking maintenance performance. Most operations recognize the need to improve maintenance processes and performance management systems, and are actively working toward this end. There appears to be little collaborative effort in this area; as a result, most operations seem to be "reinventing the wheel". As there is interest in pursuing some form of maintenance information sharing, a study comparing maintenance practices and the development of performance standards for maintenance would be of value to the mining industry.

Background

The motivation for this study was to enhance the collective efficiency of the Canadian mining industry by enabling sharing of information on operating performance.

Benchmarking has become one of the methods by which mining companies across North America are attempting to improve their fleet operations and maintenance practice. This follows the success of benchmarking initiatives in other industries.

Some of the benchmarking applications identified within the mining industry include:

One success story in mining industry collaboration is the Large Tire User Group. Under the auspices of the Surface Mining Association for Research and Technology (SMART), the Large Tire User Group established a multi-company large tire database, which succeeded in establishing consumption information, sharing large tire testing data, and sharing procedures for tire and rim maintenance.

Despite its benefits, the mining industry in general has lagged other industries in the adoption of benchmarking. Some of the barriers limiting the application of benchmarking are:

The results of previous benchmarking relationships have been mixed, as comparison was complicated by inconsistencies in the interpretation and reporting of data between operations.

In response to industry interest to develop meaningful indicators with which comparison was possible, SMART commissioned a project to develop a proposal for standardized performance indicators through the University of Alberta.

This paper outlines the project methodology, summarizes the results of a survey of industry practice, and presents a proposal to advance the project to the next stage.

Project

The project was initiated in January of 1998. Project participants, members of SMART, provided funding for the initial phase.

Project Sponsors were:

The project was coordinated through the University of Alberta, with the research forming the basis for a graduate-level thesis. A steering committee was formed from among the project participants to direct the project.

The project was to be carried out in three phases:

The second and third phases were contingent on successful execution of Phase 1, which required acceptance of the proposed standard reporting definitions.

This report summarizes the conclusions of Phase 1, the survey of current practice, and provides recommendations to enable action toward Phases 2 and 3.

The survey consisted of three sections:

The first stage of data collection took place in late February and early March of 1998. This stage consisted of site visits to eight surface mines, allowing participants to elaborate on their responses.

The original survey was revised into a mail survey and sent to forty-four large surface mines in Canada and fifty-five in the United States. Seventeen additional responses were received, for a total of twenty-five.

Results

For illustrative purposes, the flow of information from which the performance definitions are derived is reflected in Figure 1. During the course of a day, various planned and unplanned events occur. These events are recorded either manually or electronically through an automated data collection system. Based on established rules and guidelines developed over the history of the operation, these events are coded to defined time classifications, again either manually or electronically. These classifications are for the most part common to the mining industry, and they make up the terms of the definitions of the performance measures used in the industry.

Figure 1. Performance Reporting Information Flow
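As a rough sketch of this flow, the snippet below maps recorded events to time classifications with a lookup table. The event names, class labels, and mapping are hypothetical illustrations, not a standard coding scheme; each operation maintains its own rules, which is the source of the inconsistencies discussed below.

```python
# Minimal sketch of the event-to-classification coding step in Figure 1.
# Event names and the mapping are hypothetical examples.

EVENT_CLASSIFICATION = {
    "hauling":          "net_operating",
    "fuelling":         "operating_delay",
    "shift_change":     "operating_delay",
    "no_operator":      "standby",
    "planned_shutdown": "scheduled_outage",
    "engine_repair":    "down",
}

def classify(events):
    """Roll (event_name, hours) records up into time-class totals."""
    totals = {}
    for name, hours in events:
        time_class = EVENT_CLASSIFICATION.get(name, "unclassified")
        totals[time_class] = totals.get(time_class, 0.0) + hours
    return totals

# Example: one truck's 24-hour day, recorded manually or by dispatch.
day = [("hauling", 18.5), ("fuelling", 0.5), ("shift_change", 1.0),
       ("no_operator", 2.0), ("engine_repair", 2.0)]
print(classify(day))
```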


The survey initially focused on the definitions of availability and utilization used by industry. The general intent of the definitions was the same, i.e., how many hours are available to the operation. Availability formulas generally represented a ratio of equipment hours available to the operation to total hours. While there appeared to be consensus on the definitions of availability, inconsistencies in the allocation of events to time classifications diminished the validity of any comparison of operating parameters.

In most cases total hours consisted of scheduled hours (or a sum of operating, delay, standby, and down hours) or calendar hours.
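In its general form, the reported availability measure reduces to the ratio below, with total hours standing for scheduled hours or calendar hours depending on the operation:

\[
\text{Availability} = \frac{\text{Available Hours}}{\text{Total Hours}} = \frac{\text{Total Hours} - \text{Down Hours}}{\text{Total Hours}}
\]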

One of the differences encountered was in the use of mechanical versus physical availability, with the majority of operations using mechanical availability. Six of the surveyed operations used both physical and mechanical availability; however, some of the operations using mechanical availability exclusively defined it in terms similar to physical availability at other mines.

The other significant difference was in the use of the term "operating hour". Several operations made the distinction between a net, or "pure", operating hour and a gross operating hour, which includes operating delay. Among respondents using the term "operating hour" only, the meanings varied from a pure operating hour (similar to a net operating hour) to an operating hour that includes delay.

The most significant difference between operations affecting the ability to compare results is the allocation of events to the time classification terms making up the formulas. For example, operations comparing on the basis of mechanical availability, which excludes standby or idle time, may be affected by differences between what is considered operating delay and what is considered standby time at the individual operations. Similarly, operations that include planned downtime in idle or standby time report a different availability than operations that classify planned outages as scheduled outages.
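A small worked example, using invented figures, illustrates the effect: the same machine week produces different availability numbers under two classification conventions.

```python
# Invented figures: a 168 h calendar week with 20 h of repairs and an
# 8 h planned shutdown, classified two different ways.
CALENDAR_H, DOWN_H, SHUTDOWN_H = 168.0, 20.0, 8.0

# Mine A books the shutdown as a scheduled outage, removing it from
# total hours before computing availability.
scheduled_h = CALENDAR_H - SHUTDOWN_H               # 160 h
avail_a = (scheduled_h - DOWN_H) / scheduled_h      # 140/160 = 87.5%

# Mine B has no scheduled-outage class and books the shutdown as
# standby, so total hours remain the full calendar.
avail_b = (CALENDAR_H - DOWN_H) / CALENDAR_H        # 148/168 = 88.1%

print(f"Mine A: {avail_a:.1%}   Mine B: {avail_b:.1%}")
```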

Utilization-related parameters showed even greater variation in application and intent. The general intent of utilization parameters was to measure the use of the equipment, in some cases against available time and in others against total time. The basic formulas were also found to be quite similar for utilization parameters; however, there was a large variation in terminology, from "utilization" to "use of availability", "effective utilization", and "operating efficiency". Many of these terms were used interchangeably by survey participants to reflect the same measure. As an example, the same formula was used to describe "Utilization" at one operation and "Overall Efficiency" at another; however, the definition of the NOH (net operating hours) term used in the two formulas was different.
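In general form the reported measures reduce to one of the two ratios below; the definition of the NOH term is precisely where operations diverged:

\[
\text{Utilization} = \frac{\text{NOH}}{\text{Available Hours}} \qquad \text{or} \qquad \text{Utilization} = \frac{\text{NOH}}{\text{Total Hours}}
\]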

Utilization measures are likewise influenced by the classification of events to standby versus operating delay, and non-scheduled time versus standby time.

In order to enable comparison between operations, discrepancies would have to be addressed at a more fundamental level, specifically the allocation of operating events (such as lunch breaks, fuelling, queuing) to commonly accepted time classifications (such as operating hours, operating delay, standby, down, etc.).

In order to identify the differences in the way operations classify typical events, Table 1 was developed to reflect the major time classifications used within the mining industry.

All events encountered in the course of operating a mine would fall into one of the time classifications. Table 2 summarizes how the survey participants classified some of the most common events.

Table 1. Major Time Classifications

Total Hours

Total hours is not used as a classification for this study; however, because it is used by many of the operations participating in the study, its relationship to the other parameters is noted.

The definition of total hours varied depending on how the operation classified scheduled outages.

Where scheduled outages were part of the operation, total hours were generally equal to scheduled hours, defined as calendar hours less scheduled outages. Where there were no scheduled outages, or where scheduled outages were considered part of operating or standby time, total time equated to calendar hours.

Calendar Hours

Calendar hours varied depending on the operation. Twenty-two respondents defined calendar hours as 8760 hours per year. One operation removed statutory holidays and defined the year as 8520 hours. Another removed from calendar hours the time allocated to replacing a manufacturer defect. Comparisons affected by statutory holidays should be viewed with caution, as not all provinces and states have the same statutory holidays.

Scheduled Outages

In most cases, scheduled outages, when used, included statutory holidays, planned shutdowns, and scheduled down shifts. Scheduled outages were in some instances used to capture unforeseen events that are not easily classified into the normal operating classifications, such as major weather-related outages, Acts of God, and labour disruptions.

Scheduled hours are calculated as calendar hours less scheduled outages.

Almost half the mines surveyed did not classify scheduled outages separately; of those, planned shutdowns and scheduled down shifts were classified as idle or standby. This difference affects the calculated standby time.

Down Time

The distinction between down and available was quite clear throughout: in most cases the unit was either mechanically operable or it was not. Opportune maintenance, or maintenance taking place during planned shutdowns, was in almost all cases classified as down time.

In the majority of operations surveyed, consumables changes (ground engaging tools, hoist ropes, etc.) were considered part of down time, regardless of whether mechanics or operations personnel were involved in the change.

Available hours were then calculated as total or scheduled hours less down time.

Idle (or Standby – these terms were used interchangeably)

Idle or standby time was in most cases considered the time the equipment was available, but not manned or used.

The major discrepancies affecting idle time were the classification of planned outages (as discussed above) and of safety and crew meetings, which were just as often defined as operating delay, and, to a lesser extent, lunch breaks and power outages.

Table 2. Summary of Event Classifications

Operating Hours

The majority of discrepancies occurred in the definition of operating hours, and in the allocation of events among Operating Delay, Gross Operating Hours, and Net Operating Hours. Several operations had one classification, Operating Hours. In some cases Operating Hours incorporated delay, reflecting the entire time the unit operated, while in others Operating Hours referred strictly to the time the unit was producing.

Gross Operating Hours (GOH) were generally calculated as available hours less idle or standby time. GOH was generally defined as operating time plus operating delay.

Net Operating Hours (NOH), also referred to as operating time or production hours, is the difference between GOH and operating delay.

Operating Delay generally referred to activity where the unit was available and manned, but not involved in production.

Working hours was a term used by a number of mines, also with multiple meanings; at one operation it equated to a GOH definition, while at another the definition reflected a Net Operating Hour.
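Pulling these definitions together, the sketch below computes the hierarchy under the most commonly reported classification (scheduled outages tracked separately; GOH equal to NOH plus operating delay). It is a simplified model, not any one mine's reporting system.

```python
# Simplified time model based on the classifications above, assuming
# scheduled outages are tracked separately and GOH = NOH + delay.
# Individual mines differ, as the survey found.

def time_model(calendar_h, scheduled_outage_h, down_h, standby_h, delay_h):
    scheduled_h = calendar_h - scheduled_outage_h  # scheduled hours
    available_h = scheduled_h - down_h             # mechanically operable
    goh = available_h - standby_h                  # gross operating hours
    noh = goh - delay_h                            # net operating hours
    return {
        "scheduled": scheduled_h,
        "available": available_h,
        "GOH": goh,
        "NOH": noh,
        "availability": available_h / scheduled_h,
        "utilization": noh / available_h,          # one variant of many
    }

# Example: 168 h week, 8 h planned shutdown, 20 h down, 12 h standby,
# 10 h operating delay.
print(time_model(168.0, 8.0, 20.0, 12.0, 10.0))
```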

One of the major areas of disagreement is the classification of queue time as delay or operating time. It was found that operations with manual time and data collection tended to count queuing as operating time up to an upper limit, beyond which it was classed as delay. Operations with automated data collection systems were more likely to classify any queuing as delay. Further discrepancies resulted from the definition of a queue: in some cases, truck waiting caused by shovel repositioning or face cleanup was not defined as a queue, or the delay was not considered a queue until more than one truck was waiting. These discrepancies came to light after the surveys were completed and were not further addressed.
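The sketch below contrasts the two conventions; the 5-minute cap used for the manual rule is an invented value for illustration only.

```python
# Two queue-classification conventions reported in the survey. The
# 5-minute cap for the manual rule is an invented value.

QUEUE_CAP_MIN = 5.0

def classify_queue_manual(queue_min):
    """Manual collection: queue counts as operating time up to a cap."""
    operating = min(queue_min, QUEUE_CAP_MIN)
    return {"operating": operating, "delay": queue_min - operating}

def classify_queue_automated(queue_min):
    """Automated collection: all queuing is classed as delay."""
    return {"operating": 0.0, "delay": queue_min}

# The same 12-minute queue lands very differently under the two rules.
print(classify_queue_manual(12.0))     # 5.0 operating, 7.0 delay
print(classify_queue_automated(12.0))  # 0.0 operating, 12.0 delay
```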

Maintenance Survey

The most common maintenance indicator is mechanical availability, while some operations use reliability to varying degrees. Other indicators include maintenance ratio (maintenance hours to operating hours), cost per hour, backlog, and PM compliance.
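For reference, the maintenance ratio as parenthetically defined above is simply:

\[
\text{Maintenance Ratio} = \frac{\text{Maintenance Hours}}{\text{Operating Hours}}
\]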

All operations have maintenance management systems, though some are limited to work order generation and history. Retrieval of historical information has been raised as an issue at some operations.

All operations keep component histories; however, in most cases the history is limited to hours at replacement. Half the operations surveyed keep a failure history, though in most cases this records only the failure cause. Two respondents indicated they kept records of failure analyses on major components or accidents.

Component changeouts were based on operating hours or, depending on the component, service meter hours. Condition monitoring was used to varying degrees by all respondents. The most commonly used condition monitoring methods included oil sampling, vibration analysis, visual inspections, gear inspections, and thermography.

Some operations were either in the process of moving toward a function- or usage-based metric for replacement as an alternative to hours, or were strongly considering it. Examples include tonne-km for tires, tonnes for hoist ropes, and BCM for buckets.
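A minimal sketch of such a usage-based trigger for tires follows; the tonne-km threshold and cycle figures are invented for illustration, not survey results.

```python
# Usage-based replacement sketch: accumulate tonne-km per tire and flag
# replacement at a threshold. Threshold and cycles are invented figures.

TONNE_KM_LIMIT = 120_000.0  # hypothetical replacement threshold

def tonne_km(haul_cycles):
    """Sum payload (tonnes) x distance (km) over recorded haul cycles."""
    return sum(payload_t * dist_km for payload_t, dist_km in haul_cycles)

cycles = [(220.0, 4.5), (218.0, 4.5), (225.0, 5.1)]  # (tonnes, km)
usage = tonne_km(cycles)
if usage >= TONNE_KM_LIMIT:
    print(f"Replace tire: {usage:,.0f} tonne-km accumulated")
else:
    print(f"Tire OK: {usage:,.0f} of {TONNE_KM_LIMIT:,.0f} tonne-km")
```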

About half had some form of downtime analysis, most relating to the distribution of downtime by equipment component or system. About a third documented maintenance time by activity, i.e., wait on labour, wait on shop space, cleaning, preventive, breakdown, warranty, etc.
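The simpler of the two analyses can be sketched as a roll-up of downtime records by component; the record layout and figures below are assumptions for illustration.

```python
# Downtime-distribution sketch: roll downtime records up by component.
# The record layout (component, hours) and figures are assumptions.
from collections import defaultdict

records = [("engine", 12.0), ("hydraulics", 6.5), ("engine", 3.0),
           ("electrical", 2.5), ("hydraulics", 4.0)]

by_component = defaultdict(float)
for component, hours in records:
    by_component[component] += hours

total = sum(by_component.values())
for component, hours in sorted(by_component.items(), key=lambda kv: -kv[1]):
    print(f"{component:12s} {hours:6.1f} h  ({hours / total:.0%})")
```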

The majority expressed interest in some form of maintenance information sharing, although many were not sure what form it should take. Interest was expressed in sharing component histories or common equipment problems.

Conclusions

1. Any comparison is meaningless due to the lack of consistency in the way operating events are classified. Until this is resolved, there is limited value in proposing common definitions for availability and utilization.

The focus moving ahead must then be on the consistent allocation of operating events to agreed-upon time classifications.

2. The consensus through the survey interviews was that there is strong interest in information sharing and comparison; however, none of the operations felt they would be willing to adopt new definitions for operating parameters or new standards for the allocation of operating events in order to enable information exchange.

To enable comparison of data, information sharing must take place in such a way that existing operating data collection and reporting systems at individual mines can operate unaffected, and that access to historical data is protected.

To satisfy these constraints, a solution that utilizes the data storage and manipulation capability of existing data collection systems could be implemented.

3. There is interest in pursuing some form of maintenance information sharing. Most operations recognize the need to improve maintenance management systems and processes. The development of maintenance performance management systems lags that of other production tracking systems in mining. There appears to be little collaborative effort in this area; as a result, most operations seem to be "reinventing the wheel". A study comparing maintenance practices and the development of performance standards for maintenance would be of value to the mining industry.

4. Once a benchmarking data collection infrastructure is established, other applications could benefit. Some of these include:

Path Forward

A decision to proceed is required; the benefits must be weighed against the resources needed to establish the benchmarking infrastructure, as well as the ongoing upkeep of the system. The value of the initiative will be realized through the ongoing participation of several operations.

The path forward is to develop a process that makes use of existing data management systems to collect data on operating events, and to establish the infrastructure to feed event-based data from participating operations into an independent, central benchmarking data warehouse. The proposed data management structure is reflected in Figure 2. Participating operations will have access to the data at a high level, which can either be reported in the definitions agreed to by a benchmarking steering committee, or inserted into their own formulas to enable comparison with their own historical data. The definitions developed for the purpose of comparison will represent a "straw dog" for industry-wide standardization. Participants will not be obligated to adopt the proposed standards, as they will have access to the underlying data. The development requirements and costs of the proposal are summarized in Figure 3 and Tables 3 and 4.
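A minimal sketch of the proposed parallel structure follows, assuming mines submit hours already rolled up by time class. The schema, class names, and formulas are illustrative assumptions, not the definitions a steering committee would adopt.

```python
# Sketch of the central benchmarking warehouse: mines submit per-period
# hours by time class; any formula can then be applied to the shared
# data. Schema, class names, and formulas are illustrative assumptions.

warehouse = []  # submissions: (mine, period, {time_class: hours})

def submit(mine, period, hours_by_class):
    warehouse.append((mine, period, dict(hours_by_class)))

def availability_standard(h):
    """One candidate benchmarking standard: available over scheduled."""
    return (h["scheduled"] - h["down"]) / h["scheduled"]

def availability_house(h):
    """A participant's own historical (calendar-hour) formula."""
    return (h["calendar"] - h["down"]) / h["calendar"]

submit("Mine A", "1998-03", {"calendar": 744, "scheduled": 720, "down": 80})
submit("Mine B", "1998-03", {"calendar": 744, "scheduled": 744, "down": 95})

for mine, period, h in warehouse:
    print(mine, period,
          f"standard={availability_standard(h):.1%}",
          f"house={availability_house(h):.1%}")
```

Because the warehouse stores event-class hours rather than precomputed measures, each participant can keep its historical formulas while the standard definitions are evaluated side by side.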

For the initiative to advance, the following actions are recommended:

Table 3. System Development Requirements

SMART/Steering Committee

(i.e., Availability and Utilization)

Participants

Data Administrator

Ongoing Support Requirements

Table 4. Cost Estimate

Development Cost
Participant Direct (40 hrs @ $100/hr): $4,000
Data Infrastructure (160 hrs @ $100/hr): $16,000
Implementation (40 hrs @ $100/hr): $4,000
(distributed between participants): $20,000

Support Costs (distributed between participants)
Initial (40 hrs @ $35/hr): $1,400/month
Ongoing (16 hrs @ $35/hr): $560/month

(Estimates assume contract labour; any in-kind support by participants will reduce cost)