Six Sigma was being implemented corporate-wide at the insistence of some highly placed IBM executives. There were complaints and discussions throughout IBM until the leading technologist in the company called 15-20 statisticians and quality managers together to publish a position paper on Six Sigma. We were encouraged to believe that our opinions and factual evidence were going to get a hearing.
We expressed concern with Motorola's misuse of statistical terms, the thin theoretical and practical evidence for the 1.5 sigma shift, and the dubious means of counting defects and opportunities for defects. Our position paper was finally regarded as too disruptive to IBM's progress in defect reduction, which management wanted to credit to Six Sigma policies. The position paper was never distributed beyond the team that created it.
Six Sigma is rarely mentioned around IBM anymore. It quietly disappeared with the radical downsizing that took place from 1991-93, even though it was always touted as not just another quality program. I believe its disappearance occurred primarily because many of its champions either left IBM or had too many higher priorities to cover. I left IBM in the downsizing, along with 80% of the quality improvement experts (mostly statisticians).
Most interestingly, when I have run across Motorola employees in the years since, they consistently state that there is still a passionate pursuit of defect reduction and quality improvement at Motorola, which more or less still occurs under the banner of Six Sigma. We might dismiss the whole Six Sigma approach as sloganism, but we must realize that large corporations necessarily put a simple label on programs that they want to implement corporate-wide. Seemingly, everyone in Motorola knows just what you are talking about when you mention Six Sigma, even if it is different from what we quality experts and statisticians know it to be. Their quality improvement process has stood the test of time.
IBM could not sustain its Six Sigma program, probably because of business factors. Every organization and its circumstances are a little different. I respect General Electric's CEO and their attempt to fully embrace quality improvement. They may succeed if they get their entire workforce to approach quality improvement with a simple, tools-oriented, common-sense process underneath the slogan of Six Sigma.
Date: Mon, 10 Jun 1996 19:30:46 -0700
I would appreciate advice and comments on the use of control charts
in quality programs based on "6 sigma".
The intent of "6 sigma" programs is to reduce defect rates from around
3 in 1000 to 3 in 1000000. The effect of this would appear to primarily
be in halving the Process Capability Index. I've got no problems with this.
However, it is difficult to see how control chart calculations and use
would be affected. Broadening the control limits would render what was normally
considered a special cause (outside 3 sigma and inside 6 sigma) a common
cause. (Here the term 6 sigma actually refers to a 12 sigma span on an
X-bar chart.) This does not seem correct, although it may well be so.
The use of "6 sigma" also appears to effect the "normal" definition
of a process that is "capable" - normally this refers to a process that
is "in control" and in specification. If 3 sigma control limits are used
with a 6 sigma process, this cannot be a correct definition.
Can anyone please shed some light on both the theory and practice of
6 sigma??
Date: Mon, 10 Jun 1996 09:43:37 -0500
------------------------------
I don't know if this will help, but I'll try. Problem one may be that you are
confusing your sigmas. It is very easy to do. Even some of the FORD material
adds to the confusion by sometimes referring to the process standard deviation
as sigma, hat-sigma, and prime-hat-sigma, and then in the same material talking
about 3-sigma control limits (a different sigma).
------------------------------
Date: Mon, 10 Jun 1996 19:03:16 -0400
------------------------------
===================================================
Search on "Motorola and six-sigma." Motorola allows for a shift in the process
average when determining the probability of exceeding six-sigma. They imply/
state the shift is "normal" and then provide for the shift. Perhaps it
was "normal" for Motorola to experience a process average shift of 1.5
sigma??? In the materials I have read on this topic, the reasoning behind
why they chose "1.5 sigma" is vague at best, if not nonexistent.
------------------------------
> Does anyone know where the 3.4 defects per million for 6 sigma originates???
------------------------------
Subject: Re: 6 sigma
------------------------------
Jim Ayers responds:
Getting to the real world, every process mean drifts some if our rational
subgroup size is too small to capture incremental process changes. We might
have some raw-material lot-to-lot variability that we have to live with
(but that is not causing us a problem). Each lot may take one day to use up;
hence, we might technically need a rational subgroup of a week to average
out this normal process variability, if it is not causing us a problem.
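A minimal Python sketch (hypothetical numbers, not from the original post) of the rational-subgroup point above: if subgroups are drawn entirely within one raw-material lot, the within-subgroup ranges never "see" the lot-to-lot shifts, so a sigma estimated from R-bar/d2 understates the total spread the parts actually show.

import random
import statistics

random.seed(1)
D2_N5 = 2.326              # d2 factor for subgroups of n = 5

lots = 30                  # hypothetical: one raw-material lot per day
parts_per_lot = 50
lot_sigma = 1.5            # lot-to-lot drift of the mean (tolerated, "not a problem")
within_sigma = 1.0         # within-lot common-cause variation

data, ranges = [], []
for _ in range(lots):
    lot_mean = random.gauss(100.0, lot_sigma)
    lot = [random.gauss(lot_mean, within_sigma) for _ in range(parts_per_lot)]
    data.extend(lot)
    # subgroups of 5 consecutive parts, all taken inside the same lot
    for i in range(0, parts_per_lot, 5):
        sub = lot[i:i + 5]
        ranges.append(max(sub) - min(sub))

sigma_hat_within = statistics.mean(ranges) / D2_N5   # what the X-bar/R chart "sees"
sigma_total = statistics.pstdev(data)                # what the parts actually do

print(f"sigma-hat from R-bar/d2 (within-lot subgroups): {sigma_hat_within:.2f}")
print(f"overall standard deviation (lot drift included): {sigma_total:.2f}")

With these made-up numbers the R-bar/d2 estimate should come out near 1.0 while the overall spread is closer to 1.8, which is exactly why a week-long rational subgroup may be needed to average the lot-to-lot variability in.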
From: QUALITY List Editor
Subject: 6 Sigma and Control Charts
From: QUALITY List Editor
Subject: Re: 6 Sigma and Control Charts
You have not clearly distinguished control limits and specification
limits. Control limits are almost always 3 standard deviations (sigma)
away from the mean. Six sigma means that the specification limits or tolerance
limits are 6 standard deviations away from the mean. Specification limits
indicate acceptable versus defective product. They are specified in absolute
measures (like inches). Control limits are statistical in nature and a
function of how big sigma is. The control limits are called control limits
because you use them to decide when to intervene in or adjust the process.
By reducing variation (and thus sigma) your unchanged specification limits
become further apart in terms of standard deviations. When specification
limits are six standard deviations apart, the control limits are 3 standard
deviations closer to the mean than are the specification limits. This means
you get out-of-control points and can adjust the process before any defects
occur. That is the theory, at least; real life is slightly more complicated.
John Grout jgrout@mail.cox.smu.edu
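A small illustrative sketch (hypothetical values, standard-library Python) of the distinction drawn above: control limits sit 3 sigma either side of the mean, while "six sigma" places the specification limits 6 sigma away, leaving a 3-sigma buffer in which the chart can signal before defective product is made.

# Hypothetical process: mean and sigma chosen so the specs sit 6 sigma out.
mean = 10.000              # process average, in the spec's absolute units (e.g. inches)
sigma = 0.005              # process standard deviation
lsl, usl = 9.970, 10.030   # specification limits, 6 sigma either side of the mean

lcl, ucl = mean - 3 * sigma, mean + 3 * sigma   # 3-sigma control limits

print(f"control limits:       {lcl:.3f} to {ucl:.3f}")
print(f"specification limits: {lsl:.3f} to {usl:.3f}")
print(f"buffer beyond the UCL before a defect: {(usl - ucl) / sigma:.1f} sigma")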
Date: Mon, 10 Jun 1996 10:59:43 -0500
From: QUALITY List Editor
Subject: Re: 6 Sigma and Control Charts
Check this article, which may answer some of your questions:
Suzanne de Treville, Norman M. Edelson, and Rick Watson, "Getting Six
Sigma Back to Basics," Quality Digest, May 1995, pp. 42-47.
If you need to get a copy, call 800 527-8875.
Hope this helps, John Woods jwoods@execpc.com
Date: Mon, 10 Jun 1996 12:24:10 +0000
From: QUALITY List Editor
Subject: Re: 6 Sigma and Control Charts
Originally the stat pioneers decided to use Greek symbols for stats
describing populations, and Roman alpha codes for stats describing samples.
If that agreement had remained in force, then if someone mentioned "sigma"
you'd know they were discussing a population stat; if they talked "s.d."
you would know they were discussing a sample stat. Unfortunately it is
not like that anymore. Also, the difference between inferential and
descriptive statistics compounds the situation, and sometimes we tend to
think of samples when technically we may be dealing with populations.
To address your problem let's work backwards: let's say you do have a
6 sigma process and you are charting it on a good ol' X-bar and R chart with
n=5. You know it is a six sigma process because you have calculated the
mean average range (bar-R) and divided it by your d-sub-2 factor, in this
case 2.326, and thus derived your hat-sigma, or the standard deviation of
the population or process that produced the data you have been plotting.
(Source: FORD Continuing Process Control, Sept 1985, page 51.) You have
also discovered that when you calculate your Zmin or your Zupp, in
neither case is a specification limit any closer than 6 hat-sigmas to your
double-bar-X. Thus you have a 6 sigma process.
Now you take that same bar-R and multiply it by A-sub-2 (which would
be 0.577 in this case), then add this to and subtract it from your double-bar-X
to derive your control limits.
You have calculated 3-sigma control limits based on the distribution
that is producing your bar-X's, which is different from the hat-sigma that
was trying to describe the distribution that produced the individual readings
that you were massaging to produce your bar-X's.
Engineering specs usually deal with individual values and really don't
care about mean averages of bunches of parts. Thus ultimately you care
about what we have been calling here hat-sigma.
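A short numeric sketch (illustrative figures, not from the post) that follows the recipe above: estimate hat-sigma from R-bar/d2, set the X-bar chart limits from A2 * R-bar, and check the Z values against the specification limits.

D2 = 2.326   # d2 for subgroup size n = 5
A2 = 0.577   # A2 for n = 5

r_bar = 0.0116           # hypothetical mean of the subgroup ranges
x_double_bar = 5.000     # hypothetical grand average
lsl, usl = 4.970, 5.030  # hypothetical specification limits

sigma_hat = r_bar / D2                  # estimate of the individuals' sigma
lcl = x_double_bar - A2 * r_bar         # X-bar chart control limits
ucl = x_double_bar + A2 * r_bar

z_upper = (usl - x_double_bar) / sigma_hat
z_lower = (x_double_bar - lsl) / sigma_hat

print(f"hat-sigma            = {sigma_hat:.4f}")
print(f"X-bar control limits = {lcl:.4f} to {ucl:.4f}")
print(f"Z-lower = {z_lower:.1f}, Z-upper = {z_upper:.1f}  (both >= 6, so a 'six sigma' process)")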
The beauty of control charts is that it is the process talking to you.
The control limits tell you what the process is happy doing. The capability
analysis tells you whether you should be happy or not. Although your
control limits in the instance we are discussing are going to be very tight,
the process has just told you not to worry, that it is perfectly happy
maintaining them, and that if it can't, you will know it by the predictable
pattern of points, which will mean that the process has gotten sick and
needs your help.
I hope I understood your question and that this is a helpful answer.
I had been teaching SPC for quite some time before I finally really understood
the foregoing. (What was I doing teaching SPC and NOT understanding the
foregoing? Sort of an "in the land of the blind the one-eyed man is king"
syndrome.)
Regards,
L.H. Garlinghouse, C.Q.E. Waterloo Industries, Pocahontas, AR (501)
892-4586 ext 7659 garlingh@PHS.K12.AR.US
Date: Mon, 10 Jun 1996 16:54:14 -0400
From: "Bill Casti, CQA (Moderator)"
Subject: Re: 6 Sigma and Control Charts (fwd)
---------- Forwarded message ----------
Date: Mon, 10 Jun 96 14:43:21 EDT
From: Dave Bigham
You have made a very basic and dangerous error in assumptions. 6 sigma
does not refer to broadening the control limits or the specifications.
It refers to the reduction of the variability. Same specs, same tolerance,
same range of allowed variation. The only difference is that the process
standard deviation has been reduced by half.
Dave Bigham
------------------------------
"6 sigma" doesn't involve a redefinition of control limits; it refers
to the relationship of the process's control limits (which are calculated
as always, and can be changed only by changing the process) to the specification
limits for the process's outputs (which are determined by customer requirements,
and can be changed only if customer requirements change). It means that
you design the process so that the amount of actual common-cause variation
in the output is less than half the allowable amount of variation in the
output. Remember the Taguchi loss function; there's a loss associated with
any deviation of a particular unit of process output from the nominal or
"ideal" value, even if that unit of output is still within spec, and the
farther from the nominal, the greater the loss.
: The use of "6 sigma" also appears to effect the "normal" definition
of a : process that is "capable" - normally this refers to a process that
is "in : control" and in specification. If 3 sigma control limits are used
with a 6 : sigma process, this cannot be a correct definition.
A 3-sigma process is capable, but a 6-sigma process is more capable.
If the control limits (what the process can do) exactly match the specification
limits (what's needed), you're pushing it: you're going to be unavoidably
producing about 3 in 1000 out-of-spec items, which is more costly than
it sounds (imagine 3 in 1000 flights crashing), and your in-spec output
is going to be all over the acceptable range, which imposes costs of its
own, because the loss associated with variation isn't a step function (none
inside the spec limits, total outside them). (If you have a less than 3-sigma
process, i.e. the control limits are outside the specification limits,
then you're relying on luck to be able to produce in-spec output.) The
less variation in output, the easier it's going to be for subsequent processes
to use that output (think of the story of the Ford and Mazda transmissions).
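A brief sketch of the arithmetic behind "about 3 in 1000": for a centered normal process, the two tails beyond +/- 3 sigma total roughly 0.27%, while the tails beyond +/- 6 sigma total about 2 parts per billion. (Plain standard-normal tail areas; no Motorola 1.5-sigma shift is applied here.)

import math

def phi(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for k in (3, 6):
    out_of_spec = 2.0 * (1.0 - phi(k))   # both tails beyond +/- k sigma
    print(f"specs at +/- {k} sigma: {out_of_spec:.2e} out of spec "
          f"({out_of_spec * 1e6:.4f} per million)")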
It's easier to grasp this if you think in terms of a variables chart
rather than an attributes chart. The whole point of 6-sigma is that you
don't use SPC solely to look for special causes of variation; you also
use it to find out whether the common-cause variation is causing losses
and, if so, to find out whether your attempts to reduce common-cause variation
are working.
Eric Bohlman (ebohlman@netcom.com)
------------------------------
From: QUALITY List Editor
Subject: Re: 6 Sigma and Control Charts
The reason for having control charts is to determine whether you have
enough information about a system to begin bringing about improvements.
If your system is "under control" within 3 times sigma, then you probably
have enough information to do this. Otherwise, you probably do not.
Six times sigma means your probability of having defects is quite remote.
Duncan Kinder dckinder@ovnet.com
------------------------------
Date: Mon, 10 Jun 1996 17:57:56 +0500
From: QUALITY List Editor
Subject: 6 Sigma and Control Charts
Dave, if you are talking about control charts I would agree with your
statements, all of them.
We use the term "6 Sigma Design." In this case we mean we design the
product so that it stays inside the necessary operating parameters at the
6 Sigma limits. Essentially we have reduced the variability of the product
by design. This is a somewhat simplistic statement because of course we
also must make sure that the variability of the production process is accounted
for in our design.
It gets complex!!
Tony Rinaldi arinaldi@modicon.com Compuserve 75074,255
------------------------------
From: QUALITY List Editor
Subject: Re: Non-normal Distribution
> What's a sound and practical way to estimate the process capability of a
> process having a non-normal distribution? Any opinion on using the Pearson
> curve such as in SPC1PRO software?
Dr. Breyfogle wrote:
> talk to your customers about making a process capability statement that has
> more physical meaning directly from the plot, even if the data is not
> normally distributed.
> The data could be evaluated for distribution fit using normal, Weibull, or
> 3-parameter Weibull coordinate systems. This would be good also to assess
> whether there are outliers and bi-modal characteristics in your data, an
> important consideration that many seem to skip when making process
> capability statements.
> I personally prefer making process capability statements directly from a
> best estimate probability plot (or at least supplementing any Cp or Cpk
> process index statement). With this approach you might make a statement
> that, for example, 1.5% of the product is estimated to be below the
> specification limit (or some other interesting parameter criterion).
He is quite correct both in suggesting a relevant physical statement,
i.e., estimates of ppm defective, and in seeking to identify modality and outliers.
The sad larger truth is that many customers have settled for the moment
on the idée fixe of a Cpk or Cpm, etc. In many cases this fixation is,
in turn, caused by the next level of customer. The image of a series of
fish swallowing the next smaller fish comes to mind.
I would supplement his remarks by suggesting that one could link an
estimated ppm defective with that from a normally distributed variate that
would give the same ppm and report an "equivalent Cpk" along with some
measures of location besides the mean.
Dr. Harper stated:
>Why assume any distribution? Get a direct measurement of the proportion
>falling out of spec. I believe Don Wheeler in one of his books comments
>on this - I think he calls it the fraction nonconforming. I would prefer
>to drop distributional assumptions that are hard to verify.
I also agree with his position of assuming no a priori distribution
but part company with the implicit methodology. (Forgive me for assuming
facts not in evidence.) Mr. David did not state how much data he had;
estimating the fraction nonconforming (or ppm defective) with no
distributional assumption requires relatively large amounts of data to
achieve the relatively narrow statistical tolerance intervals needed.
I suggest that fitting some distributions, as implied by Dr. Breyfogle,
is an important early step; given a good fit (perhaps a dubious assumption),
proceed from there. If there is no such fit, then the use of resampling
statistics (bootstrap techniques) might do Mr. David considerable good.
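A minimal sketch (made-up data and spec limit) of the resampling idea just mentioned: estimate the fraction nonconforming directly, with no distributional assumption, and bootstrap it to get an interval around the point estimate.

import random

random.seed(0)
usl = 12.0
# hypothetical, visibly non-normal (right-skewed) measurements
data = [8.0 + random.expovariate(1.0) for _ in range(500)]

def frac_nc(sample):
    # fraction of the sample beyond the upper specification limit
    return sum(x > usl for x in sample) / len(sample)

point = frac_nc(data)
boot = sorted(frac_nc(random.choices(data, k=len(data))) for _ in range(2000))
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]

print(f"fraction nonconforming: {point:.4f} (about {point * 1e6:.0f} ppm)")
print(f"95% bootstrap interval: {lo:.4f} to {hi:.4f}")

As noted above, the interval will be uselessly wide unless the data set is large relative to the defect rate being claimed.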
Furthermore, "grantblair"s point concerning autocorrelation should
be addressed in any analysis or presentation.
Lastly, I would be remiss if I did not mention the possibility of using
a normalizing transform to achieve the desired end. This, if it worked,
would be the only technique that would give Mr. David a "Cpk" he could
provide in his report without holding his nose and doing an imitation of
Lady Macbeth.
Dr. Frank Isackson re:SOLUTIONS 205 West Walnut Avenue Unit A Monrovia,
CA 91016 (818) 358-1340 (Voice/FAX) FIsackson@aol.com
Date: Mon, 10 Jun 1996 16:14:09 -0700
From: QUALITY List Editor
Subject: Re: Non-normal Distribution
At 06:07 PM 6/6/96 EDT, QUALITY List Editor wrote:
> Hi Guys,
> Need your advice on this fundamental but irksome problem:
> What's a sound and practical way to estimate the process capability of a
> process having a non-normal distribution? Any opinion on using the Pearson
> curve such as in SPC1PRO software?
Am I missing something here? If you were to use a standard Xbar R chart,
with reasonable sample sizes, doesn't the central limit theorem assure
you that your averaged data will be approximately normally distributed,
regardless of the distribution of the population? I thought this was the
advantage of using the Xbar R chart.
----------------------------- Michael A. Solinas Staff Quality Engineer,
Exar Corporation 48720 Kato Road Fremont, CA. 94538 Voice: (510)668-7369
Fax: (510)668-7022
michael.solinas@exar.com
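A quick sketch of the central-limit point raised above: subgroup averages of a plainly skewed population look far more symmetric than the individual readings do. (This helps the X-bar chart itself; as noted earlier in the thread, capability statements against spec limits still concern the individual values.)

import random
import statistics

random.seed(2)
individuals = [random.expovariate(1.0) for _ in range(5000)]   # heavily right-skewed
xbars = [statistics.mean(individuals[i:i + 5]) for i in range(0, 5000, 5)]

def skewness(xs):
    # simple moment-based skewness estimate
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

print(f"skewness of the individual readings: {skewness(individuals):.2f}")
print(f"skewness of the subgroup averages:   {skewness(xbars):.2f}")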
The 3.4 dpmo for 6 sigma is derived from Motorola's 1.5 Sigma Shift.
So when a Motorola process achieves 3.4 dpmo, the nearest specification
limit actually sits only 4.5 standard deviations from the (shifted) mean.
The numbers you've quoted in your message are correct, but 6 Sigma per
Motorola's definition is different. The 1.5 sigma rationale is based on
some statistical studies of long-term records, supposedly kept by the Navy
(my trainer couldn't point out which one). It states that over time a
well-controlled process will fluctuate by about 1.5 sigma?? :)
Several years ago, I contacted an author of a JQT paper which addressed
the "six-sigma" concept. The article portrayed a close knowledge of the
concept. He very quickly said he had only "parroted" what Motorola had said
in their literature. He could not answer my questions. Interesting discovery?
-Andy -- William A. (Andy) Hailey, DBA =====================================================================
Home Address: 591 Appalachian Dr. Boone, NC 28607 704-265-3989 (P)
Work address: Department of Decision Sciences Raley Hall Appalachian
State University Boone, NC 28608 704-262-6504 (P)
The 3.4 defects per million is arrived at by allowing the nominal of
the sample distribution to shift up to 1.5 sigma from the target value.
This is most easily demonstrated by drawing the shifted distribution and
recalculating the area in the defect portion of the tails.
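A short sketch of that recalculation: shift the mean of a standard normal process 1.5 sigma toward one specification limit and recompute the tail areas beyond +/- 6 sigma. The dominant tail comes out at about 3.4 defects per million, versus roughly 0.002 per million for the centered case.

import math

def phi(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

spec, shift = 6.0, 1.5   # specs at +/- 6 sigma; mean shifted by 1.5 sigma

centered = 2.0 * (1.0 - phi(spec))
shifted = (1.0 - phi(spec - shift)) + phi(-(spec + shift))

print(f"centered mean:          {centered * 1e6:.5f} defects per million")
print(f"mean shifted 1.5 sigma: {shifted * 1e6:.2f} defects per million")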
The concept for doing this was introduced by Motorola some years ago.
Their expert is Mikel J. Harry. Since the mid-80s he has written extensively
on the subject and one should be able to find references in older issues
of Quality Progress.
The very best of Harry's articles on the subject, IMHO, is "The Nature
of Six Sigma Quality." It was originally published by Motorola University
Press, located in Rolling Meadows, IL. Their number is (708) 576-7507.
Addison-Wesley Publishing has assumed responsibility for publishing Motorola
University documents so it may also be available from them.
David T. Novick dtnovick@anet.rockwell.com ===================================================
Harry, Mikel J., Ph.D. The Nature of Six Sigma Quality. Motorola Government
Electronic Group, 8201 E. McDowell Rd., Scottsdale, AZ 85257, Att: Public
Relations Rm H2184, Phone 602-990-5716 ======================================================
Yes-since I fell into that same mystery pit a couple years ago:
Some references to 6 sigma quality as 3.4 defects/million are taken
from the Motorola approach. See book, Six Sigma Producibility Analysis
and Process Characterization, by M. J. Harry and J. R. Lawson, Motorola
University Press, Chapter 6. They list earlier research articles on subject.
To summarize: the 3.4/million applies to what they call the influence
of Long-Term Dynamic Mean Variation on process capability, where the process
average is assumed to drift and vary over time. So the short-term (or static)
value for standard deviation is multiplied by a factor of 1.5 to get estimates
of long-term capability. The 1.5 factor, the authors state, has been validated
by their research and is the value to use when "ambiguous manufacturing
circumstances prevail," which I take to mean that you might fine-tune your
actual factor with adequate process data--though they state the factor
will fall between 1.4 and 1.6 in any case.
Your calculations were correct, inasmuch as they apply to static capability
and a standard z-transform, and should yield a value around 2 parts per billion
(your 0.000197), both tails considered.
Wow, I love this stuff. Thanks for asking.
Jim Ayers, CQE =============================================== The
following is taken from a paper originally presented by Mikel J. Harry,
Ph.D. Principal Staff Engineer, Government Electronics Group, Motorola
Inc.
"Since Motorola Inc. first introduced the Six Sigma program, people
have asked why the various charts, graphs, etc. indicate 6 sigma is equal
to a defect rate of 3.4 ppm. The broad answer, as established by the Corporation,
assumes a 1.5 sigma change in the population average. When such a change
is taken into consideration, the end result is a defect rate of 3.4 ppm,
as compared to 0.002 ppm."
"The 0.002 ppm represent the *steady state* level and 3.4 denotes the
dynamic *real world* state of affairs. Thus a *six sigma capability* may
be defined as Cp = 2.0, Cpk = 1.5 and ppm = 3.4."
Dr. Harry then proceeds, in the ensuing 22 pages of his paper, to explain
why 6 sigma should represent 3.4 ppm...and not 0.002.
Don't know if this helps, but those are the facts.
Jerry Kutcher Over 60 and still learning jkutcher@aol.com
------------------------------
In practice, fewer than three defects per million can be achieved by
reducing the process variance. Since control limits are statistical and
are typically 3 standard deviations from the mean, reducing the variance
(and its square root -- the standard deviation) makes the control limits
closer to each other (narrower). Specification limits tend to be more absolute
and unchangeable. If the control limits become narrower and the specification
limits are unchanged, then there are more standard deviations' worth of
room for the process to wander without going out of specifications and
thus making defects.
> That is, how is the "cushion" used?
The "cushion" is the distance measured as number of standard deviations
between the control limits and the specification limits. The cushion is
used to allow the process to go out of control, to be detected by control
charts, and to be brought back into control with appropriate remedial action
without creating defects.
> One might expect that if corrective actions were taken to the warnings
> provided by control charts (such as 7 points on the same side of the
> centreline), perhaps around 3 defects per 1000 might be produced. (Whilst
> control limits are set at 3 defects per 1000 this would not appear to give
> any guarantee of overall defect rates.) How is 3 per million guaranteed?
Why would 7 points above (or below) the mean (a commonly accepted indication
of being out-of-control) lead to 3 defects per 1000? You must make an implicit
assumption about the location of the specification limits in order to make
this statement. Further, it is not typical to set control limits based
on the defect rate per 1000 for measurement charts. This practice is common
for attribute charts (like a p-chart). If I may venture a guess, I'd say
that the 3 per 1000 represents the two tails of the normal distribution
that lie outside 3 standard deviations from the mean. These 3 out of 1000
are the number of out-of-control points that are expected due strictly
to random causes. When these occur you will look for a special cause and
not be able to find it. Whether these points are defects depends on where
the specification limits are. These 3 out-of-control points per 1000 do
not represent a defect rate. They may be perfectly acceptable (yet statistically
unusual) products.
3 defects per million is not "guaranteed." A more accurate word would
be "expected" (like a statistical average).
John Grout jgrout@mail.cox.smu.edu
Although I cannot speak for Motorola (and no doubt they are quite pleased
about that state of affairs) I suspect that this 1.5 sigma "shift" is employed
for processes that depend upon raw materials of variable, uncontrollable
quality or environmental constraints that are uncontrollable.
Imagine a refining process that is intended to produce a material of
a certain purity. If the starting raw material is relatively "dirty" the
process could be in control and still produce suitable product toward the
lower end of the specification; likewise if the raw material was relatively
"clean" the process could be in control and produce material toward the
higher end of the specification.
Imagine further a process that is so sensitive to RH (or some other uncontrollable
environmental parameter) that under very dry conditions it produces product
toward one end of the specification and under moist conditions toward the other
end of the specification.
Under either scenario an in-control process can persist with a meandering
mean.
Can somebody out there say robust process?
I knew you could.
Frank Isackson
Please do not mistake my attempt to explain as advocacy. Like Mr. Novick,
I take no position on Motorola 6 Sigma.
It is (was) common practice for a company to run a job, prepare control
charts, and use this data to estimate Process Capability (Cp & Cpk).
The resulting values are accurate for that run, but may or may not accurately
predict performance when that job is run again, maybe 6 months from now.
Of course if I could combine several of these discrete runs, each with
its own (perhaps not identical) mean and std. dev., into one massive database,
then perhaps I could determine a more reliable estimate for "any and all"
future runs. This is what Motorola 6 Sigma attempts to estimate.
In practice you may only have data from one run, or very limited history.
Since Cpk estimates the "worst case scenario," that being the probability
of one tail (at 3 std. dev. from the mean) punching through the nearest
specification limit, it may be helpful to view the Motorola estimate as
making the "worst case scenario worser"- - that is, even if they had data
from only one run, and the process was in control, they predict that subsequent
runs will not vary from these measured values by more than +/- 1.5 std.
dev. and this becomes a factor in their estimates.
To me it just seems that Motorola raised the statistical bar: mathematical
6 sigma occurs at Cp = 2.0 and Cpk = 2.0; Motorola sets it at Cp = 2.0,
Cpk = 1.5, which essentially underrates supplier's claims of defect-free
performance. Whether this is fair to the supplier is another question.
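A small sketch of that "raised bar": the same +/- 6-sigma specification gives Cp = 2.0 and Cpk = 2.0 if the mean stays centered, but Cpk drops to 1.5 once a 1.5 sigma long-term shift of the mean is assumed, which is the figure quoted alongside 3.4 ppm in the Harry material.

sigma = 1.0
lsl, usl = -6.0, 6.0     # spec limits at +/- 6 sigma of a centered process

def cp_cpk(mean):
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3.0 * sigma)
    return cp, cpk

for label, mean in (("centered", 0.0), ("shifted 1.5 sigma", 1.5)):
    cp, cpk = cp_cpk(mean)
    print(f"{label:>18}: Cp = {cp:.1f}, Cpk = {cpk:.1f}")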
In my experience, improvements in Cpk from values less than 1.0 to
up around 1.33 will produce real money-in-the-bank improvements through
reduced (common cause) defect rates. Further improvements, to say 1.67,
will have less impact on cost, but can inspire customer confidence. Add
control charts and operators trained to detect and fix assignable cause
variation and you can prevent virtually all manufacturing defects from
entering the process stream.
Now about those Peacock Feathers: As a practical matter, the process
will have stopped producing out-of-spec defects from common cause variation
long before it reaches 6 Sigma Quality, Cpk = 2.0. It does not matter much
whether you estimate the probability of defects at 2 parts-per-billion or the
more severe (Motorola) rate of 3.4 parts-per-million--ya ain't making common
cause defects anymore. Remember, we are talking about a probability here,
not an actual defect rate. There is a similar probability that I will
win the lottery, but I never do.
I have worked with clients to improve processes up to Cpk = 3.5 (probability
of defects around 1 part-per-trillion) and it seemed to me like peacock
feathers at that point: the numbers are impressive and dazzling in their
statistical brilliance, good for bragging rights and mating rituals perhaps,
but the actual defect rate was no longer improving--it had been at zero
since before Cpk hit 1.5.
My opinions are representative of my entire organization, since I am
the only one who works here.
Jim Ayers, CQE AQMS, Seattle, WA JAyersAQMS@aol.com
------------------------------
In addition, often it is not easy to detect a shifting mean with typically
used control charts.
The real question relative to six sigma is whether the process is capable.
If a process has an overall average of 1 inch, its mean drifts .001 inch day
to day about that average, and it has a very low within-day standard deviation,
the process capability would be very good (a good "six sigma") given
a specification limit of 1 inch +/- .05 inch.
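A minimal simulation (with made-up sigma values) of this example: a mean of 1 inch drifting about .001 inch day to day, very small within-day variation, and specification limits of 1 +/- .05 inch. The overall spread stays tiny relative to the tolerance, so the capability remains excellent despite the drift.

import random
import statistics

random.seed(3)
lsl, usl = 0.95, 1.05      # specification limits, inches
days, parts_per_day = 250, 40
drift_sigma = 0.001        # day-to-day drift of the process mean
within_sigma = 0.0005      # hypothetical within-day standard deviation

data = []
for _ in range(days):
    day_mean = random.gauss(1.0, drift_sigma)
    data.extend(random.gauss(day_mean, within_sigma) for _ in range(parts_per_day))

mean = statistics.mean(data)
sigma_overall = statistics.pstdev(data)
cpk = min(usl - mean, mean - lsl) / (3.0 * sigma_overall)

print(f"overall sigma (drift included): {sigma_overall:.4f} inch")
print(f"Cpk against 1 +/- .05 inch:     {cpk:.1f}")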
Forrest W. Breyfogle III Author: Statistical Methods for Testing, Development,
& Manufacturing, Publisher: John Wiley and Sons