In his Quality Handbook, Juran predicted that "The 20th century will be remembered as the Century of Productivity, whereas the 21st century will come to be known as the Century of Quality" (Juran, 1999). Perhaps the world needs another decade for his prediction to come true.
Engineering companies in the Oil and Gas industry face an enormous challenge in their effort to achieve client success in a dynamic and competitive environment.
In the effort to become fact-driven companies, they implement Key Performance Indicators (KPIs). However, one can find more than 20 articles describing firms that use bar charts to monitor and display these indicators on a bulletin board.
Shewhart stressed the importance of controlling the process as early as the 1920s. Yet these firms overlook the quality control charts that should be used to determine whether a variable or attribute is predictable and stable over time.
Many professionals react to natural variation as though it were a special cause. Even an out-of-specification situation, where the goal or target is not being met, is not a special cause (Breyfogle, 2008). Therefore, there is nothing to worry about unless the variable is out of control.
Last but not least, these kinds of improvement initiatives lack a holistic view and the necessary alignment, linkage, and replication around the business strategy (Juran, 1999). They lead only to firefighting: a reactive culture instead of the preventive one that organizations need to ensure accountability and value creation across engineering disciplines and project phases.
The objectives of the paper are as follows:
- To develop a continual improvement framework that closes the loop, translating problems into an action plan and preventing their recurrence.
- To design a continual improvement framework, describing its implementation steps.
- To develop a Policy Deployment that aligns project objectives to client expectations.
- To design a dashboard performance metric system (KPI) that leads to root cause analysis and improvement action plans.
Continual Improvement Approach
Almost all engineering firms in North America hold ISO 9001 certification, mostly for marketing purposes. Consequently, only a few have succeeded in implementing continual improvement principles, given the complexity of project management activities. The following study, summarized in Figure I, provides an improvement model for such a complex environment.
Figure I describes a continual improvement approach designed for EPC firms delivering engineering services in the Oil and Gas sector. It embeds nested PDSA cycles throughout all stages.
The challenges engineering firms face are immense, because each method of contracting affects in its own way the allocation of responsibility and the demands on the client side for coordinating and integrating the project's flow of knowledge and information in three dimensions: vertically, horizontally, and longitudinally.
EPC Continual Improvement Approach – To Plan
The Project Scope of Work (SOW) and the Contract are among the inputs for starting the continual improvement planning phase. The first step is the development of a Project Policy Deployment, a strategic planning methodology developed by Dr. Yoji Akao (1988).
Engineering companies will find that Policy Deployment's greatest strength is its ability to translate qualitative, executive-level project goals into quantitative, achievable actions (see Table I below).
Effective Policy Deployment starts with the client requirements of the specific Oil & Gas project being designed. It begins with the Mission and Vision statements and develops the Quality Policy, Critical Success Factors, objectives, and metrics. A future paper will explain the steps in defining Key Performance Metrics for an EPC project across its various stages and methods of contracting.
Performance feedback: Individual Control Charts (XmR)
The step above creates the foundation for managing, measuring, and monitoring performance, while the next step (the "Feedback" block) is the improvement driver where all of the project's proactive initiatives originate.
The objective of this step is to identify when a metric is out of control and to conduct a detailed analysis of the special causes of variation to find the root cause.
Most of the metrics collected in an EPC engineering firm are 'variables' rather than 'attributes'. The Individuals control chart (XmR) is the one suitable for this kind of observation (conforming to ANSI/ASQC B1-B3-1996).
The term special or assignable causes, as opposed to chance or common causes, was used by Shewhart (1939) to distinguish a process that is in control, with variation due to random (chance) causes only, from a process that is out of control, with variation due to some non-chance, special (assignable) factors (cf. Montgomery, 1996, p. 102).
The XmR chart uses three-sigma control limits and signals an out-of-control condition when a single point falls beyond those limits. The derivation of the equations for computing the control chart and its limits can be found in Juran (1999).
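The XmR computations just described can be sketched in a few lines. This is an illustrative example with made-up SPI values, not the paper's project data; the constants 2.66 and 3.267 are the standard factors for moving ranges of size two.

```python
# Sketch: XmR (individuals & moving range) control limits.
# The SPI series below is illustrative, not the paper's actual project data.
spi = [0.98, 1.02, 0.95, 1.01, 0.99, 0.97, 1.03, 0.96]

n = len(spi)
x_bar = sum(spi) / n                                  # process mean
moving_ranges = [abs(spi[i] - spi[i - 1]) for i in range(1, n)]
mr_bar = sum(moving_ranges) / len(moving_ranges)      # average moving range

# Three-sigma limits for the individuals chart (2.66 = 3 / d2, with d2 = 1.128)
ucl_x = x_bar + 2.66 * mr_bar
lcl_x = x_bar - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar                               # D4 = 3.267 for the mR chart

out_of_control = [x for x in spi if x > ucl_x or x < lcl_x]
print(f"X-bar = {x_bar:.4f}, mR-bar = {mr_bar:.4f}")
print(f"Individuals limits: [{lcl_x:.4f}, {ucl_x:.4f}]")
print(f"Out-of-control points: {out_of_control}")
```

A point outside `[lcl_x, ucl_x]` would be the out-of-control signal that triggers the root cause analysis described below.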
Problem Definition and Description:
The best problem statements make no assumptions; they simply document the current state.
Craig Cochran (2006) stated that crafting a problem statement is one of the most important steps in problem-solving.
EPC Continual Improvement Approach – To Do
The root cause analysis by means of an Ishikawa Diagram will generate several possible solutions.
This phase ends with the Improvement Action Plan that will be followed until its final resolution.
EPC Continual Improvement Approach – To Study & To Act
Study implies understanding the sources of variation in the process (Common vs. special causes). Therefore, it requires executing the nested PDSA.
During this stage it is necessary to evaluate the benefits of the solution (the "benefits expected" shall be defined in the Improvement Action Plan prepared previously): compare what has been found with what was expected to happen.
If the expectations have been met, the solution shall be standardized and the Lessons Learned database updated with it. The project learns when someone other than the initial learner adopts and adapts the new learning or prevention (Wieneke, 2008).
Then the organization is doing Knowledge Management, transforming its own history into a list of "Best Practices".
Example of a Performance Metric System (KPI)
Let's continue working with the project called 'SMART' and the definition of one KPI:
7- Schedule Performance Index (SPI): a measure of schedule efficiency on a project.
An SPI equal to or greater than one indicates a favourable condition: the project is on or ahead of schedule (SPI ≥ 1).
An SPI less than one indicates an unfavourable condition: the project is behind schedule (SPI < 1).
(I) SPI = Actual Days / Baseline Days
SPI can also be calculated using the formula:

(II) SPI = BCWP / BCWS

where:
BCWP = Budgeted Cost of Work Performed
BCWS = Budgeted Cost of Work Scheduled

As before, SPI < 1 means the project is behind schedule.
The former is used in this example because of its simplicity.
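Both formulations can be sketched as follows. The function names and numbers are illustrative, and "Actual Days" is read here as days of work accomplished (earned) to date, so that SPI > 1 indicates a project ahead of schedule, consistent with the definitions above.

```python
# Sketch: the two SPI formulations from the text, with illustrative numbers.

def spi_from_days(actual_days, baseline_days):
    """SPI = Actual Days / Baseline Days (formula I in the text)."""
    return actual_days / baseline_days

def spi_from_earned_value(bcwp, bcws):
    """SPI = BCWP / BCWS (the earned-value formulation)."""
    return bcwp / bcws

# A month in which 45 days' worth of work was done against a 50-day baseline:
print(spi_from_days(45, 50))                    # 0.9 -> behind schedule
print(spi_from_earned_value(90_000, 100_000))   # 0.9 -> same conclusion
```

Either way, a value below one flags the unfavourable condition that the control chart then puts into the context of natural variation.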
Using a color-coded dashboard leads to firefighting, creating a finger-pointing culture among the project members (Breyfogle, 2008). It is necessary to use a professional quality tool: the individuals control chart.
Individual Control Chart (XmR)
It is important to emphasize that there are certain crucial assumptions that allow the use of this technique:
1. The process is in statistical control.
2. The distribution of the process considered is Normal.
If these assumptions are not met, the resulting statistics may be highly unreliable.
The results obtained in Excel are similar to those from professional statistical software: the variable SPI is in statistical control.
To prove the variable ‘SPI' follows a normal distribution, the hypothesis is:
H0: The SPI sample follows a Normal distribution.
Ha: The SPI sample does not follow a Normal distribution.
Figure V shows the results of the Anderson-Darling normality test. As the computed p-value (0.317) is greater than the significance level alpha = 0.05, one cannot reject the null hypothesis H0. The risk of rejecting the null hypothesis H0 when it is true would be 31.7%. The variable follows a normal distribution.
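The same kind of check can be sketched without statistical software. The sketch below computes the adjusted Anderson-Darling A² statistic using only the standard library and compares it to the approximate 5% critical value (about 0.752 when mean and variance are estimated from the sample). The SPI series is illustrative; the paper's p-value of 0.317 comes from its own project data, which is not reproduced here.

```python
# Sketch: Anderson-Darling normality test with the standard library only.
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def anderson_darling_a2(data):
    """Adjusted A-squared statistic for normality, parameters estimated from data."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)   # sample variance
    z = sorted((x - mean) / math.sqrt(var) for x in data)
    s = sum((2 * i + 1) * (math.log(normal_cdf(z[i]))
                           + math.log(1.0 - normal_cdf(z[n - 1 - i])))
            for i in range(n))
    a2 = -n - s / n
    return a2 * (1.0 + 0.75 / n + 2.25 / n ** 2)          # small-sample adjustment

spi = [0.98, 1.02, 0.95, 1.01, 0.99, 0.97, 1.03, 0.96]   # illustrative data
a2_star = anderson_darling_a2(spi)
print(f"Adjusted A-squared = {a2_star:.3f}")
print("Normality not rejected at 5%" if a2_star < 0.752 else "Reject normality")
```

With a statistic below the critical value, normality is not rejected, which is the precondition for the capability analysis that follows.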
The metric is stable. Therefore, the process capability index (Cpk) can be determined.
Minimum accepted capability:
Process capability attempts to answer the question: can we consistently meet customer requirements?
In a capable process, Cpk is 1 or greater. Cpk is higher when the variable meets the target consistently, with minimal variation.
However, considering the characteristics of the EPC industry already mentioned, it is recommended to accept a minimum capability of Cpk = 1. After the organization has applied this approach across a whole project phase, it is ready to replicate the knowledge, reducing variation and raising the minimum capability target.
Figure VII presents all the results in a single graph.
The value of Cpk (0.08) in both Figures VI and VII means there is a great deal of variation, and the organization probably cannot consistently meet customer requirements in the future. With 0.96 as the Lower Specification Limit for the metric "SPI", performance has clearly been behind schedule in 5 of the last 8 months (SPI < 1). However, this is just part of the natural variation. Therefore, the current process/system is incapable of staying within the specification limits; it has to be redesigned in order to reduce variation.
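The Cpk figure above can be reproduced with the one-sided formula, since only a Lower Specification Limit (LSL = 0.96) applies to SPI. The standard deviation below is the one reported in the text; the mean is an assumption chosen to be consistent with the reported Cpk of 0.08.

```python
# Sketch: one-sided Cpk for the 'SPI' metric (lower spec limit only).

def cpk_lower(mean, sigma, lsl):
    """Cpk with a lower spec limit only: (mean - LSL) / (3 * sigma)."""
    return (mean - lsl) / (3.0 * sigma)

mean_spi = 0.981          # assumed mean, consistent with the reported Cpk
sigma_spi = 0.0875325     # standard deviation reported in the text
lsl = 0.96                # lower specification limit from the text

cpk = cpk_lower(mean_spi, sigma_spi, lsl)
print(f"Cpk = {cpk:.2f}")  # well below the minimum accepted value of 1
```

A Cpk this far below 1 confirms that the process, although stable, cannot reliably stay above the specification limit without redesign.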
Analysis of ‘SPI' Variation
The Project Management Institute has found that mature companies have a schedule performance index (SPI) variation of 0.08 and a cost performance index (CPI) variation of 0.11. Less mature companies have corresponding values of 0.16 for both indices.
The 'SPI' standard deviation in the "SMART Project" is 0.0875325 (see Figure V). This is close to the industry value for mature companies, and the variation is due to common causes.
1. The proposed framework closes the improvement loop, integrating lessons learned and replicating them throughout the whole organization, while achieving the desired alignment with the project's strategic deployment.
2. If applied properly, it will solve critical problems, improving communication between project team members and increasing the linkage and integration between disciplines and project phases.
ANSI/ASQC B1-B3-1996: Quality Control Chart Methodologies.
Akao, Y. (ed.) (1991). Hoshin Kanri: Policy Deployment for Successful TQM (trans. from Japanese, orig. 1988). New York: Productivity Press (originally Japanese Standards Association). pp. xiii. ISBN 1-56327
Breyfogle, F. W. (2008). Integrated Enterprise Excellence, Vol. III, Improvement Project Execution: A Management and Black Belt Guide for Going Beyond Lean Six Sigma and the Balanced Scorecard. Austin, TX: Bridgeway Books.
Construction Sector Council (2006). 2006-2010 Alberta Construction Workforce Supply/Demand Forecast. May 11, 2006.
Cochran, C. (2006). Becoming a Customer-Focused Organization. Paton Press.
Crosby, P. B. (1979). Quality is Free. New York: McGraw-Hill.
Deming, W. E. (1986). Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology, Centre for Advanced Engineering Studies.
ISO 9001:2008. Quality management systems – Requirements.
Juran, J. M. and Gryna, F. M. (1999). Juran's Quality Handbook, 5th Edition. New York: McGraw-Hill.
Maynard's Industrial Engineering Handbook (2001). 5th Edition. New York: McGraw-Hill. pp. 4.12-4.113.
Montgomery, D. C. (1996). Statistical Quality Control (3rd Edition). New York: Wiley.
Shewhart, W. A. (1939). Statistical Method from the Viewpoint of Quality Control. ISBN 0-486-65232-7.
Wieneke, S. (2008). Replacing a Lessons Learned Database with a Visible Learning Process. National Contract Management Association World Congress 2008, Cincinnati, Ohio, April 13-16.