Archive for Manufacturing Improvement

Meaningful Manufacturing Metrics

Posted in Manufacturing Improvement on June 29, 2014 by manufacturingtraining

A question I frequently hear from clients is this:

     What metrics should I use for managing manufacturing?

The answer depends on the nature of your business.   Whatever you do, though, your metrics should meet these paramount requirements:

  • Your metrics should convey an honest sense of how the business is doing at a level that can be influenced by those who see the metric.
  • Your metrics should be posted where the performance is being measured.
  • The meaning of your metrics should be clear (simple is better).
  • Your metrics should be prepared by the folks doing the work.

What I usually see in client facilities are artfully-crafted Excel or PowerPoint plots that purport to show company performance.  It’s always at the company level, and the chartsmanship is always impressive.  Not the contents or the information contained in the charts, mind you, but the charts are beautiful.  There must be an army of folks out there earning good livings churning out charts using everything MSOffice has to offer.  No kidding…the charts are awesome.  There are usually lots of charts in a central location (it seems like there are always more than a dozen, sometimes many more than that).  Like I said, they’re beautiful…a true testament to the capabilities of Excel and PowerPoint.

Usually, I’m the only one examining the MSOffice artistry…I never see anyone else examining them.   If you’re smiling while visualizing this image and my comments, consider this:   I often stop the next person who walks by (it doesn’t matter if it’s the CEO or a machine operator) and I ask this question:   What do the charts mean?   If it’s a productivity chart, I ask how it’s calculated.   If it’s on time delivery performance, I’ll ask how they measure it.  I can pick any chart on the wall, and after an embarrassed silence, the response is always the same:   I’m not sure.

I’m going to suggest just three metrics that I know will make a difference in your organization’s profitability and on time delivery performance:

  • Shipments Against Plan
  • Percent of Work Orders Completed On Time
  • MRB Aging

Let’s consider each of these.

Shipments Against Plan

The first one, shipments against plan, is a monthly x-y plot that shows a cumulative shipping plan (in dollars) for the month, with another line showing actual shipments (again, in dollars).   Here’s what it looks like:

PlanVsActuals
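
Here’s a minimal sketch of how a plot like this can be generated from the order book; the column names, dates, and dollar figures below are made up for illustration:

```python
# Minimal sketch: cumulative planned vs. actual shipments for one month.
# Column names ("due_date", "ship_date", "value") are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

orders = pd.DataFrame({
    "due_date":  pd.to_datetime(["2014-06-03", "2014-06-10", "2014-06-17", "2014-06-24"]),
    "ship_date": pd.to_datetime(["2014-06-04", "2014-06-12", None, None]),
    "value":     [12000, 8500, 15000, 9000],   # contract value in dollars
})

days = pd.date_range("2014-06-01", "2014-06-30", freq="D")

# Planned line: cumulative dollars due on or before each day of the month.
planned = [orders.loc[orders["due_date"] <= d, "value"].sum() for d in days]

# Actual line: cumulative dollars actually shipped on or before each day.
actual = [orders.loc[orders["ship_date"] <= d, "value"].sum() for d in days]

plt.plot(days, planned, label="Plan (cumulative $)")
plt.plot(days, actual, label="Actual (cumulative $)")
plt.legend()
plt.title("Shipments Against Plan")
plt.show()
```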

The beauty of the above metric lies in several areas:

  • If the product area manager prepares it (and I’ve always made that the case in any manufacturing organization I’ve ever managed), they know every day where the shipments are with respect to the contract due date.  They don’t have to wait until the end of the month to find out where they are.
  • If you base the dollars on the product values and their contract due dates, you get a true plan of what the shipments (both planned and actual) should look like.  I often hear manufacturers claim they have to plan by revenue rather than the product due dates, but that’s a mistake from several perspectives (and it will be the subject of a future blog entry).  A bit of a prelude on that one:  If you start pulling in anything you can to make the monthly sales figure (i.e., ship product earlier because it’s closer to being ready to ship than what is actually due), you’re doing serious damage to next month’s shipping schedule.  Like I said, more on this topic later.
  • If you put this metric in the factory in the final assembly area (and especially if the product manufacturing manager is located in this area), everyone sees exactly where the company is.
  • It avoids the typical “hockey stick” shipping profile, where little goes out of the factory during the first three weeks of the month, and there’s a mad dash to ship everything during the last week of the month.

Percent Of Work Orders Completed On Time

This is another simple chart, and it’s one that should be prepared for and prominently displayed in every work center in your factory.  It looks like this:

PercentOnTime

The premise here is that something or someone assigns work orders to each work center, and that the assignment includes a required completion date.   In companies with an MRP or ERP system, it’s usually called the “dispatch report” or the “to do” list.  It almost goes without saying, but I’ll say it anyway:  If the company is to deliver its products on time, each work center must strive to complete its dispatch-report-assigned work orders on time.

The metric here is simple:  It just shows the percent of work orders the work center completes on schedule each day.    The work center supervisor should prepare it at the end of the day, and post it in a prominent location so the folks assigned to each work center know how they’re doing.   It used to take me no more than 5 minutes to do this.  It was the essence of what I was being paid to do (manage the work center).
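
If the dispatch report lives in a spreadsheet or can be exported, the daily calculation is trivial.  Here’s a minimal sketch; the work order numbers, column names, and dates are made up for illustration:

```python
# Minimal sketch: percent of dispatch-report work orders completed on time.
# Column names ("required_date", "completed_date") are hypothetical.
import pandas as pd

dispatch = pd.DataFrame({
    "work_order":     ["WO-101", "WO-102", "WO-103", "WO-104"],
    "required_date":  pd.to_datetime(["2014-06-27", "2014-06-27", "2014-06-27", "2014-06-27"]),
    "completed_date": pd.to_datetime(["2014-06-27", "2014-06-26", "2014-06-28", None]),
})

# A work order counts as on time if it was completed on or before its required date.
on_time = dispatch["completed_date"] <= dispatch["required_date"]
percent_on_time = 100.0 * on_time.sum() / len(dispatch)

print(f"Work orders completed on time: {percent_on_time:.0f}%")   # 50% for this made-up data
```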

The beauty of this metric is that it is simple (everyone in the work center will understand it), it only takes a few minutes each day to prepare, and it naturally encourages the work center to improve performance.   Hitting that 100% on time in the work center is manageable and achievable.

Sometimes I hear folks tell me this:  We can’t do this because everything in the work center is late, so there’s no way we can hit 100%.  If that’s the case in any of your work centers, you need to replan the work.   That’s important for several reasons, the most significant of which is that the master production schedule should define who needs to do what and by when.  If your master production schedule doesn’t assign the work order completion dates and everything (or nearly everything) in the work center is late, the folks in the work center will decide which jobs they work.   That’s not a formula for success.

MRB Aging

The exhortations about 6 Sigma and other management fads du jour notwithstanding, anyone who’s ever worked in a manufacturing company knows that nonconformances occur.   Yes, we want robust processes and we’d like to have zero defects, but I’ve never been in a factory that doesn’t experience rejections (and I’ve been in a lot of factories).  What governs our success is how we respond to them.

When items are rejected, they enter a material review process that determines nonconformance disposition:  Should the nonconforming item be scrapped, reworked, repaired, or used as is?

The above is interesting and you could write a book about the nuances associated with managing nonconforming material (I know because I actually did write a book that addresses this topic).   In my experience, strong root cause corrective action is essential for the obvious reasons (please see our Root Cause Failure Analysis training program), and so is rapid nonconformance disposition for a less obvious reason I’ll get to in a second.   Root cause failure analysis means finding out why the nonconformance occurred and taking steps to preclude recurrence.

Nonconformance disposition means what we do with the nonconforming item, and whatever we do, it’s important that we do it quickly.   Very quickly, in fact.  From a delivery performance perspective, here’s a little-known fact with a huge impact:   Stuff in MRB is invisible to MRP.   The MRP system thinks the rejected items in MRB are still available.   What that means to us is this:   When rejected items languish in MRB, they interfere with on time deliveries.   Items in MRB need to be dispositioned rapidly.   In the plants I’ve managed, I’ve put that limit at 1 day.   I’ve scrapped stuff that was hanging around too long even if it could be reworked.   My reasoning was that I was in a better position letting MRP know the material was gone so we could get on with fabricating replacement material.   It drove the Materials folks nuts, but I only had to do it a couple of times before they became world-class proponents of dispositioning rejected material in less than a day.

This brings us to the third metric, and that’s a simple list of what’s in MRB, with a requirement that anything in there for more than 24 hours be highlighted in red.   You can set up an Excel spreadsheet with conditional formatting to compare the time something entered MRB to the current time, and highlight it automatically if it goes over 1 day.   I posted that list outside the MRB bond area so that anyone walking by the area (which always included me at least once daily) could immediately see if things were growing whiskers in there.   It worked well.
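
Here’s a minimal sketch of the same aging check done outside Excel; the tag numbers, column names, and timestamps are made up for illustration:

```python
# Minimal sketch: flag anything that has sat in MRB for more than 24 hours.
# Column names ("tag", "entered_mrb") are hypothetical; this mirrors the
# Excel conditional-formatting rule described above.
import pandas as pd

mrb = pd.DataFrame({
    "tag":         ["NCR-0041", "NCR-0042", "NCR-0043"],
    "entered_mrb": pd.to_datetime(["2014-06-27 08:00", "2014-06-28 15:30", "2014-06-29 09:15"]),
})

now = pd.Timestamp("2014-06-29 10:00")
mrb["hours_in_mrb"] = (now - mrb["entered_mrb"]).dt.total_seconds() / 3600.0
mrb["over_24_hours"] = mrb["hours_in_mrb"] > 24.0   # these are the rows to highlight in red

print(mrb.sort_values("hours_in_mrb", ascending=False))
```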

If your company is not delivering on schedule, the above metrics will have a rapid impact on highlighting where it hurts and where the improvement opportunities lie.   It’s a great start at putting control of the factory in the hands of the folks who can make a difference in how your plant performs.   There’s much more to getting on schedule and staying there, of course, but the above is a good start.

If you’d like to learn more about on time delivery performance, please pick up a copy of Manufacturing Delivery Performance Improvement, available from Amazon.

Manufacturing-Delivery-Performance-Improvement

If you have any questions or suggestions, please give us a call at 909 204 9984; we’d love to hear from you.


Applying Taguchi to Load Development

Posted in Manufacturing Improvement on August 4, 2013 by manufacturingtraining

This blog entry describes the application of the Taguchi design of experiments technique to .45 ACP load development in a Smith and Wesson Model 25 revolver.

Taguchi testing is an ANOVA-based approach that allows evaluating the impact of several variables simultaneously while minimizing sample size.  This is a powerful technique because it allows identifying which factors are statistically significant and which are not.   We are interested in both from the perspective of their influence on an output parameter of concern.

Both categories of factors are good things to know.  If we know which factors are significant, we can control them to achieve a desired output.   If we know which factors are not significant, it means they require less control (thereby offering cost reduction opportunities).

The output parameter of concern in this experiment is accuracy.   When performing a Taguchi test, the output parameter must be quantifiable, and this experiment provides this by measuring group size.   The input factors under evaluation include propellant type, propellant charge, primer type, bullet weight, brass type, bullet seating depth, and bullet crimp.  These factors were arranged in a standard Taguchi L8 orthogonal array as shown below (along with the results):

Taguchi-1
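
For reference, here’s a minimal sketch of the standard L8 (two-level, seven-factor) orthogonal array with this experiment’s factors assigned to its columns.  The particular column assignment shown is illustrative, not a reproduction of the table above:

```python
# Minimal sketch: the standard Taguchi L8 (2^7) orthogonal array.
# The factor-to-column assignment below is illustrative, not the post's table.
import pandas as pd

L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

factors = ["propellant type", "propellant charge", "primer type",
           "bullet weight", "brass type", "seating depth", "crimp"]

design = pd.DataFrame(L8, columns=factors, index=[f"load {i+1}" for i in range(8)])
print(design)   # each column is balanced: four runs at level 1, four at level 2
```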

As the above table shows, three sets of data were collected.  We tested each load configuration three times (Groups A, B, and C) and we measured the group size for each 3-shot group.

After accomplishing the above, we prepared the standard Taguchi ANOVA evaluation to assess which of the above input factors most influenced accuracy:

Taguchi-2
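
Here’s a minimal sketch of how the effect of any one factor falls out of the array; the group sizes and the crimp column assignment below are made up for illustration.  Repeating the calculation for each column and ranking the absolute effects produces the kind of ranking shown above:

```python
# Minimal sketch: the effect of one factor (crimp) on mean group size.
# The group sizes (inches) and the crimp column are placeholders, not the post's data.
import numpy as np

group_size = np.array([0.85, 0.92, 0.78, 0.88, 0.95, 0.81, 0.90, 0.76])

# Level of the crimp factor in each of the eight load configurations
# (1 = crimped, 2 = no crimp), per an illustrative L8 column assignment.
crimp_levels = np.array([1, 2, 2, 1, 2, 1, 1, 2])

effect = group_size[crimp_levels == 2].mean() - group_size[crimp_levels == 1].mean()
print(f"crimp effect on group size: {effect:+.3f} inch")   # negative: no-crimp loads shoot smaller groups
```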

The above results suggest that crimp (or lack thereof) has the greatest effect on accuracy.   The results indicate that rounds with no crimp are more accurate than rounds with the bullet crimped.

We can’t simply stop here, though.  We have to assess if the results are statistically significant.   Doing so requires performing an ANOVA on the crimp versus no crimp results.  Using Excel’s data analysis feature (the f-test for two samples) on the crimp-vs-no-crimp results shows the following:

Taguchi-3

Since the calculated f-ratio (3.817) does not exceed the critical f-ratio (5.391), we cannot conclude that the findings are statistically significant at the 90% confidence level.  If we allow a lower confidence level (80%), the results are statistically significant, but we usually would like at least a 90% confidence level for such conclusions.
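
Here’s a minimal sketch of that kind of significance check; it uses a one-way ANOVA (rather than Excel’s two-sample f-test) on made-up group sizes, but the comparison of a calculated f-ratio against a critical f-ratio at 90% confidence is the same idea:

```python
# Minimal sketch: is the crimp vs. no-crimp difference significant at 90% confidence?
# The group sizes below are placeholders, not the post's data.
from scipy import stats

no_crimp = [0.78, 0.95, 0.81, 0.88, 0.76, 0.92]   # group sizes, inches
crimped  = [0.90, 0.85, 0.99, 0.83, 0.96, 0.87]

f_calc, p_value = stats.f_oneway(no_crimp, crimped)

# Critical f-ratio at alpha = 0.10 (90% confidence), 1 and n1 + n2 - 2 degrees of freedom.
f_crit = stats.f.ppf(1 - 0.10, 1, len(no_crimp) + len(crimped) - 2)

print(f"calculated F = {f_calc:.3f}, critical F = {f_crit:.3f}, p = {p_value:.3f}")
print("significant at 90% confidence" if f_calc > f_crit else "not significant at 90% confidence")
```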

So what does all the above mean?   Here are our conclusions from this experiment:

  • This particular revolver shoots any of the loads tested extremely well.  Many of the groups (all fired at a range of 50 feet) were well under an inch.
  • Shooter error (i.e., inaccuracies resulting from the shooter’s unsteadiness) overpowers any of the factors evaluated in this experiment.

Although the test shows that the results are not statistically significant, this is good information to know.  What it means is that any of the test loads can be used with good accuracy (as stated above, this revolver is accurate with any of the loads tested).  It suggests (but does not confirm to a 90% confidence level) that absence of a bullet crimp will result in greater accuracy.

The parallels to design and process challenges are obvious.   We can use the Taguchi technique to identify which factors are critical so that we can control them to achieve desired product or process performance requirements.   As significantly, Taguchi testing also shows which factors are not critical.  Knowing this offers cost reduction opportunities because we can relax tolerances, controls, and other considerations in these areas without influencing product or process performance.

If you’d like to learn more about Taguchi testing and how it can be applied to your products or processes, please consider purchasing Quality Management for the Technology Sector, a book that includes a detailed discussion of this fascinating technology.

And if you’d like a more in depth exposure to these technologies, please contact us for a workshop tailored to your needs.

Statistical Tolerance Analysis

Posted in Creativity, Manufacturing Improvement on June 17, 2013 by manufacturingtraining

Dimensional tolerances specify allowed variability around nominal dimensions.   We assign tolerances to assure component interchangeability while meeting performance and producibility requirements.  In general, as tolerances become smaller manufacturing costs become greater.  The challenge becomes finding ways to increase tolerances without sacrificing product performance and overall assembly conformance.  Statistical tolerance analysis provides a proven approach for relaxing tolerances and reducing cost.

Before we dive into how to use statistical tolerance analysis, let’s first consider how we usually assign tolerances.  Tolerances should be based on manufacturing process capabilities and requirements in the areas of component interchangeability, assembly dimensions, and product performance.  In many cases, though, tolerances are based on an organization’s past tolerance practices, standardized tolerance assignment tables, or misguided attempts to improve quality by specifying needlessly-stringent tolerances.  These latter approaches are not good; they often induce fit and performance issues, and organizations that use them often leave money on the table.

There are two approaches to tolerance assignment – worst case tolerance analysis and statistical tolerance analysis.

In the worst case approach, we analyze tolerances assuming components will be at their worst case conditions.  This seemingly innocuous assumption has several deleterious effects.  It requires larger than necessary assembly tolerances if we simply stack the worst case tolerances.   On the other hand, if we start with the required assembly tolerance and use it to determine component tolerances, the worst case tolerance analysis approach forces us to make the component tolerances very small. Here’s why this happens:   The rule for assembly tolerance determination using the worst case approach is:

Tassy = ΣTi

where

Tassy = assembly tolerance

Ti = individual component tolerances

The worst case tolerance analysis and assignment approach assumes that the components will be at their worst case dimensions; i.e., each component will be at the extreme edge of its tolerance limits.  The good news is that this is not a realistic assumption.  It is overly conservative.

Here’s more good news:  Component dimensions will most likely be normally distributed between the component’s upper and lower tolerance bounds, and the probability of actually being at the tolerance limits is low.   The likelihood of all of the components in an assembly being at their upper and lower limits is even lower.  The most likely case is that individual component dimensions will hover around their nominal values.  This reasonable assumption underlies the statistical tolerance analysis approach.

We can use statistical tolerance analysis to our advantage in three ways:

  • If we start with component tolerances, we can assign a tighter assembly tolerance.
  • If we start with the assembly tolerance, we can increase component tolerances.
  • We can use combinations of the above two approaches to provide tighter assembly tolerances than we would use with the worst case tolerance analysis approach and to selectively relax component tolerances.

Statistical tolerance analysis uses a root sum square approach to develop assembly tolerances based on component tolerances.   In the worst case tolerance analysis approach discussed above, we simply added all of the component tolerances to determine the assembly tolerance.  In the statistical tolerance analysis approach, we find the assembly tolerance based on the following equation:

Tassy = (ΣTi²)^(1/2)

Using the above formula is straightforward.  We simply square each component tolerance, take the sum of these squares, and then find the square root of the summed squares to determine our assembly tolerance.

Sometimes it is difficult to understand why the root sum square approach is appropriate.   We can think of this along the same lines as the Pythagorean theorem, in which the distance along the diagonal of a right triangle is equal to the square root of the sum of the squares of the triangle’s sides.  Or we can think of it as distance from an aim point.  If we have an inch of vertical dispersion and an inch of horizontal dispersion, the total dispersion is 1.414 inches as we see below:

STA-2

To continue our discussion on statistical tolerance analysis, consider this simple assembly with three parts, each with a tolerance of ±0.002 inch:

STA-1

The worst case assembly tolerance for the above design is the sum of all of the component tolerances, or ±0.006 inch.

Using the statistical tolerance analysis approach yields an assembly tolerance based on the root sum square of the component tolerances.  It is (0.002² + 0.002² + 0.002²)^(1/2), or 0.0035 inch.  Note that the statistically-derived tolerance is 42% smaller than the worst case tolerance.   That’s a very significant decrease from the 0.006 inch worst case derived tolerance.

Based on the above, we can assign a tighter assembly tolerance while keeping the existing component tolerances.  Or, we can stick with the worst case assembly tolerance (assuming this is an acceptable assembly tolerance) and relax the component tolerances.   In fact, this is why we usually use the statistical tolerance analysis approach – for a given assembly tolerance, it allows us to increase the component tolerances (thereby lowering manufacturing costs).

Let’s continue with the above example to see how we can do this.   Suppose we increase the tolerance of each component by 50% so that the component tolerances go from 0.002 inch to 0.003 inch.   Calculating the statistically-derived tolerances in this case results in an assembly tolerance of 0.0052 inch, which is still below the 0.006 inch worst case assembly tolerance.   This is very significant:  We increased component tolerance 50% and still came in with an assembly tolerance less than the worst case assembly tolerance.  We can even double one of the above component’s tolerances to 0.004 inch while increasing the other two by 50% and still lie within the worst case assembly tolerance.  In this case, the statistically-derived assembly tolerance would be (0.003² + 0.003² + 0.004²)^(1/2), or 0.0058 inch.  It’s this ability to use statistical tolerance analysis to increase component tolerances that is the real money maker here.
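
Here’s a minimal sketch of the arithmetic used in this example:

```python
# Minimal sketch: worst case vs. root-sum-square (RSS) assembly tolerances.
import math

def worst_case(tolerances):
    return sum(tolerances)

def rss(tolerances):
    return math.sqrt(sum(t ** 2 for t in tolerances))

print(worst_case([0.002, 0.002, 0.002]))   # 0.006 inch
print(rss([0.002, 0.002, 0.002]))          # ~0.0035 inch
print(rss([0.003, 0.003, 0.003]))          # ~0.0052 inch, still under 0.006 inch
print(rss([0.003, 0.003, 0.004]))          # ~0.0058 inch, still under 0.006 inch
```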

The only disadvantage to the statistical tolerance analysis approach is that there is a small chance we will violate the assembly tolerance.   An implicit assumption is that all of the components are produced using capable processes (i.e., the process capability is such that the ±3σ spread of the parts produced lies within the tolerance limits for each part).  This really isn’t much of an assumption (whether you are using statistical tolerance analysis or worst case tolerance analysis, your processes have to be capable).  With a statistical tolerance analysis approach, we can predict that 99.73% (not 100%) of all assemblies will meet the required assembly dimension.  This relatively small predicted rejection rate (just 0.27%) is usually acceptable.  In practice, when the assembly dimension is not met we can usually address it by simply selecting different components to bring the assembly into conformance.
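
Here’s a minimal sketch illustrating where the 99.73% figure comes from.  It assumes each component dimension is normally distributed about nominal with its tolerance equal to ±3σ, which is the capable-process assumption described above:

```python
# Minimal sketch: simulate assemblies whose component dimensions are normally
# distributed (tolerance = +/-3 sigma) and count how many assemblies stay within
# the RSS assembly tolerance.
import numpy as np

rng = np.random.default_rng(0)
tolerances = np.array([0.003, 0.003, 0.004])    # component tolerances, inches
sigmas = tolerances / 3.0                       # capable process: tolerance = 3 sigma
t_assy_rss = np.sqrt(np.sum(tolerances ** 2))   # ~0.0058 inch

n = 1_000_000
# Each assembly's deviation from nominal is the sum of its component deviations.
deviations = rng.normal(0.0, sigmas, size=(n, 3)).sum(axis=1)
within = np.abs(deviations) <= t_assy_rss

print(f"RSS assembly tolerance: {t_assy_rss:.4f} inch")
print(f"Assemblies within tolerance: {100 * within.mean():.2f}%")   # ~99.73%
```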

Manufacturing Delivery Performance Improvement

Posted in Manufacturing Improvement on July 5, 2012 by manufacturingtraining

Our newest book, Manufacturing Delivery Performance Improvement, is now available from Amazon.com!

If your company has ever struggled with shipping products on schedule, this book cuts through all the theory and software mysticism the MRP and ERP companies push…it’s what you need to know if you want to eliminate your delinquencies and stay on schedule.  It’s also the book we’ll be using in the University of Kansas online Manufacturing Performance course series, and you can learn more about the KU program right here.

Leaving money on the table…

Posted in Manufacturing Improvement on April 10, 2012 by manufacturingtraining

On the subject of drawing tolerances, many organizations leave a lot of money on the table.   This is an important area from both cost reduction and quality perspectives.  Here’s a question for  you:  How does your organization assign tolerances?

Common approaches for tolerance selection include the following:

  • In some organizations, tolerances are based on the nominal dimension.  Dimensions up to 1 inch might get a tolerance of ± 0.001 inch, dimensions up to 5 inches might get a tolerance of ± 0.01 inch, and everything above 5 inches might get a tolerance of ± 0.05 inch.  This makes the designer’s work easy, but it is a poor practice.
  • In some organizations, tolerances are based on decimal places.  If the designer specifies a nominal dimension of, say, 1.000 inch (3 decimal places), the tolerance might be ± .001 inch (all 3-decimal-place dimensions are assigned a ± .001 inch tolerance).  If the designer specifies a nominal dimension of 1.00 inch (2 decimal places), the tolerance is ± .01 inch.  The tolerances are restricted to fixed steps, and it’s not likely the steps correspond to fit, function, or process capabilities.
  • In some cases, designers assign tight tolerances to parts in an effort to improve quality.  This practice is misguided and builds unnecessary cost into the product.
  • In some cases, the designers assess how the parts fit together, what the parts have to do, and how the parts will be manufactured, and base the tolerances on these factors.

That last approach is the best approach.  Based on our observations of many organizations, though, it’s not what usually happens.

Cost Reduction Opportunities

The best point for reducing cost is during the design process.   A good approach is to include the manufacturing folks in the design process, assess the production approach as designs emerge, and identify processes and process capabilities for each part.  It’s the engineering organization’s responsibility to select dimensions and assign tolerances that will assure fit and function; it’s the manufacturing organization’s responsibility to raise a red flag where tight tolerances mandate expensive processes or a high likelihood of nonconformances.

If you didn’t do the above during the design process and you have tightly-toleranced parts in production, you can still reduce cost by targeting unnecessarily-tight tolerances.  Here’s a recommended approach:

  • Talk to your QA and manufacturing people.   They’ll be able to identify parts and dimensions that cause frequent rejections.   Where this situation exists, evaluate relaxing the tolerances.
  • Look for “use as is” dispositions on nonconforming parts (trust me on this…your manufacturing people will know where this is occurring).  If a “use as is” disposition is acceptable, it’s likely the tolerance on the nonconforming dimension can be relaxed.
  • Talk to your purchasing folks.   They can reach out to the supplier community and ask the same kinds of questions.   This is a particularly important area to explore, because in most manufacturing organizations approximately 70% of the cost of goods sold flows through the purchasing organization.  You may not know without asking how many parts your suppliers are rejecting; all you’ll see are the costs buried in what you have to pay for the parts.   The best way to ask the question is the most direct:   What are we doing that’s driving your costs?  The suppliers know, and they’re usually eager to answer the question.

All of the above is associated with cost reduction, but that’s not the only place where inappropriately-toleranced parts create problems.  In many cases, dimensioning and tolerancing practices can induce system-level failures.    That’s another fascinating area, and we’ll address it in a future blog entry.

Would you like to know more about cost reduction opportunities you can act on right now?  Consider our cost reduction training programs, or take a look at our most recent book, Cost Reduction and Optimization for Manufacturing and Industrial Companies!

KU Online Courses Scheduled for 2012-2013

Posted in Manufacturing Improvement on March 21, 2012 by manufacturingtraining

ManufacturingTraining and the University of Kansas have finalized the course schedule for our next series of six online Manufacturing Optimization courses:

  • Delivery Performance Improvement:  21 August 2012
  • Cost Estimation:  16 October 2012
  • Industrial Statistics:  8 January 2013
  • Quality Management:  5 March 2013
  • Root Cause Failure Analysis:  30 April 2013
  • Cost Reduction and Optimization:  25 June 2013

Each course is 3 weeks long and the University of Kansas will grant Continuing Education credit.  We’ll meet for online lectures twice each week, with interactive assignments and discussion board activities following the lectures.  We’ll be posting more information here and on the ManufacturingTraining.com website in the near future, so stay tuned for more information on this exciting new professional education opportunity!  In the meantime, if you want advance information on pre-enrolling, you can do so by shooting an email to info@ManufacturingTraining.com.

State of the Art?

Posted in Manufacturing Improvement on March 14, 2012 by manufacturingtraining

Back to the photo I showed a week or so ago…

An Apache Rotor Blade Bond Joint

The photo above shows a bonded section of an AH-64A Apache helicopter main rotor blade in the area where you see the blue Dykem. It’s where the blade manufacturer and the Army experienced numerous disbonds, and it’s the problem the blade manufacturer had to solve.

An AH-64A Apache at Fort Knox, Kentucky

Before delving into the failure analysis, let’s consider the Apache rotor blade’s design and its history. The Apache helicopter has what are arguably the most advanced rotor blades in the world. They can take a direct hit from a 23mm ZSU-23/4 high explosive warhead and remain intact. During the Vietnam war, a single rifle bullet striking a Huey blade would take out the helicopter and everyone on board. When the Army wrote the specifications for the Apache, they wanted a much more survivable and much less vulnerable blade.

Vietnam-Era Huey Helicopters 

The Apache helicopter prime contractor designed a composite blade with four redundant load paths running the entire rotor blade length. The blade’s advanced design uses titanium, special stainless steels, and honeycomb, but those four redundant load paths were the key to its survivability. If one section of the blade took a hit with a 23mm warhead detonation, the three remaining load paths held the blade together. That actually happened once during the first Persian Gulf war, and the Apache helicopter made it back to its base. It’s an awesome design, but it had a production weakness.

Apache Rotor Blade Sectional View Showing Four Spars 

Let’s also consider the nature of the Apache production approach. Three entities are important here: The US Army (the Apache customer), the prime contractor (who designed the helicopter and its blade), and the blade manufacturer. The blade manufacturer was a built-to-print manufacturing organization. They built the blade in accordance with the helicopter prime contractor’s technical data package.

The manufacturing process consisted of laying up the blade in a cleanroom environment using special fixturing, bagging the blade components in a sealed environment, pulling a vacuum on the bag, transporting the blade to an autoclave, and then autoclave curing.  The autoclave cure was rigidly controlled in accordance with the prime contractor’s specification.

During production startup, many of the blades were rejected after the autoclave cure. The bond joint (where the stainless steel longitudinal spars overlapped, as shown in our photo above) frequently disbonded.  Eager to get the blade into production, the blade manufacturer, the prime contractor, and the Army pushed ahead.  They believed that due to the “state of the art” nature of the Apache blade’s design, a less-than-100% yield was inherent to the process.  The disbond failures continued into production.  To cut to the chase, the blade manufacturer continued producing the blade for the next decade with an approximate 50% rejection rate.  To make matters worse, blades in service on Apache helicopters only had about an 800-hour service life (the specification called for a 2,000-hour service life).

By any measure, this was not a good situation.  The blade manufacturer had attempted to find the disbond root cause off and on for about 10 years, with essentially no success. While not happy, the Army continued to buy replacement blades, and they continued to send blades back to the prime contractor from the field for depot repairs.  The prime contractor sent the blades back to the blade manufacturer.  In retrospect, neither the prime contractor nor the blade manufacturer were financially motivated to fix the disbond problem.

After a change in ownership, the blade manufacturer realized the in-house blade disbond rework costs were significant. The new management was serious about finding and correcting the blade disbonds. Using fault-tree-analysis-based root cause analysis techniques, the company identified literally hundreds of potential failure causes. The failure analysis team found and corrected many problems in the production process, but none had induced the blade disbonds.  The failures continued. Surprisingly (or perhaps not surprisingly, considering the lively spares and repair business), the helicopter prime contractor did not seem particularly interested in correcting the problem.

After ruling out hundreds of hypothesized failure causes, one of the remaining suspect causes was the bondline width where the longitudinal spars were bonded together. That’s the distance marked on the macro photo with scribe marks on the blue Dykem (the photo I showed you earlier, and the one at the top of this blog entry).  During a meeting with the helicopter prime contractor, the blade manufacturer asked if the bondline width was critical. The prime contractor, evasive at first, finally admitted that this distance was indeed critical. The prime contractor further admitted that if the distance was allowed to go below 0.440 inch, a disbond was likely.

Armed with this information, the blade manufacturer immediately analyzed the prime contractor’s build-to-print rotor blade drawings.  To their surprise, tolerance analysis showed the blade’s design allowed the bondline width to go as low as 0.330 inch. The blade manufacturer inspected all failed blades in house, and found that every one of the failed blades was, in fact, below 0.440 inch.  It was an amazing discovery.

The blade manufacturer immediately asked the prime contractor to change the drawings such that the bondline width would never go below 0.440 inch. The prime contractor refused, most likely fearing a massive claim from the blade manufacturer for a technical data package deficiency spanning several years.  The prime contractor instead accused the blade manufacturer of a quality lapse, stating that this was what allowed the bondline width to go below the 0.440 inch dimension.

The blade manufacturer explained the results of their tolerance analysis again, and once again pointed out that the blade design permitted the disbond-inducing condition. When the prime contractor refused to concede the point (and again accused the blade manufacturer of a quality lapse), the blade manufacturer took a different tack.  As a repair facility, the blade manufacturer had blades in house for depot repairs from various points during the Apache program’s life (including the 12th ever blade built, which went back to the first year of production). All of these earlier failed blades had the same problem: They conformed to the technical data package, but their bondline width was below 0.440 inch.

The blade manufacturer, faced with an ongoing 50% rejection rate, decided to hold the blade’s components to much tighter tolerances than required by the prime contractor’s technical data package. By doing so, the blade manufacturer produced conforming blades with bondline widths above 0.440 inch. After implementing this change, the blade disbond rejection rate essentially went to zero.

So what’s the message here?  There are several:

  1. Don’t accept that you have to live with yields less than 100%. You can focus on finding and fixing a failure’s root cause if you are armed with the right tools. Don’t accept the “state of the art” argument as a reason for living with ongoing yield issues.
  2. Don’t think that simply because the product meets the design (i.e., there are no nonconformances) that everything is good. In many cases, the cause of a recurring failure is design related. Finding and addressing these deficiencies is often a key systems failure analysis outcome.
  3. If you are a build-to-print contractor, be wary.  The design agency may not always be completely open to revealing design deficiencies.
  4. It’s easy to become complacent and accept a less-than-100% yield as a necessary fact of life. In some cases, the yield is not just a little below 100%; it’s dramatically less than 100% (as occurred on the Apache rotor blade production program for many years).
  5. There are significant savings associated with finding and fixing recurring nonconformances. You can do it if you want to, and if you have the right tools.

You know, the wild thing about this failure and the Mast Mounted Sight failure mentioned a week or so ago is that the two companies making these different products were literally across the street from each other.  The Mast Mounted Sight was a true show stopper…it stopped production and it probably delayed the start of Operation Desert Storm.  The Apache blade didn’t stop production…it was just a nagging, long-term, expensive rework driver for the Army and the blade manufacturer.  Which one was more expensive?  Beats me, but if I had to guess, I’d guess that the ongoing (but non-show-stopping) nature of the Apache rotor blade failures carried a heftier price tag.

Do you have recurring in-process failures that you’d like to kill?  Give us a call at 909 204 9984…we can help you equip your people with the tools you need to address these cost and quality drivers!