Meaningful Manufacturing Metrics

Posted in Manufacturing Improvement on June 29, 2014 by manufacturingtraining

A question I frequently hear from clients is this:

     What metrics should I use for managing manufacturing?

The answer depends on the nature of your business.   Whatever you do, though, your metrics should meet these paramount requirements:

  • Your metrics should convey an honest sense of how the business is doing at a level that can be influenced by those who see the metric.
  • Your metrics should be posted where the performance is being measured.
  • The meaning of your metrics should be clear (simple is better).
  • Your metrics should be prepared by the folks doing the work.

What I usually see in client facilities are artfully-crafted Excel or PowerPoint plots that purport to show company performance.   It’s always at the company level, and the chartsmanship is always impressive.   Not the contents or the information conveyed, mind you, but the charts themselves are beautiful.   There must be an army of folks out there earning good livings churning out charts using everything MSOffice has to offer.   No kidding…the charts are awesome.  There are usually lots of them in a central location (it seems like there are always more than a dozen, sometimes many more than that).   Like I said, they’re beautiful…a true testament to the capabilities of Excel and PowerPoint.

Usually, I’m the only one examining the MSOffice artistry…I never see anyone else looking at the charts.   If you’re smiling while visualizing this image and my comments, consider this:   I often stop the next person who walks by (it doesn’t matter if it’s the CEO or a machine operator) and ask this question:   What do the charts mean?   If it’s a productivity chart, I ask how it’s calculated.   If it’s an on time delivery chart, I ask how they measure it.  I can pick any chart on the wall, and after an embarrassed silence, the response is always the same:   I’m not sure.

I’m going to suggest just three metrics that I know will make a difference in your organization’s profitability and on time delivery performance:

  • Shipments Against Plan
  • Percent of Work Orders Completed On Time
  • MRB Aging

Let’s consider each of these.

Shipments Against Plan

The first one, shipments against plan, is a monthly x-y plot that shows a cumulative shipping plan (in dollars) for the month, with another line showing actual shipments (again, in dollars).   Here’s what it looks like:

PlanVsActuals
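If you’d rather script this chart than rebuild it by hand each month, here’s a minimal sketch in Python using pandas and matplotlib.  The orders, dates, dollar values, and column names are hypothetical; the point is simply to show the cumulative plan line (driven by contract due dates) plotted against cumulative actual shipments.

```python
# Minimal sketch of a shipments-against-plan chart (hypothetical data and column names).
import pandas as pd
import matplotlib.pyplot as plt

# Each row is one deliverable: its contract due date, its dollar value,
# and (if already shipped) the actual ship date.
orders = pd.DataFrame({
    "due_date":  pd.to_datetime(["2014-06-03", "2014-06-10", "2014-06-17", "2014-06-24"]),
    "ship_date": pd.to_datetime(["2014-06-04", "2014-06-12", None, None]),
    "value":     [25000, 40000, 30000, 35000],
})

days = pd.date_range("2014-06-01", "2014-06-30")

# Cumulative plan: dollars due on or before each day of the month.
plan = [orders.loc[orders.due_date <= d, "value"].sum() for d in days]

# Cumulative actuals: dollars actually shipped on or before each day.
actual = [orders.loc[orders.ship_date.notna() & (orders.ship_date <= d), "value"].sum() for d in days]

plt.plot(days, plan, label="Plan (contract due dates)")
plt.plot(days, actual, label="Actual shipments")
plt.ylabel("Cumulative shipments ($)")
plt.title("Shipments Against Plan")
plt.legend()
plt.show()
```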

The beauty of the above metric lies in several areas:

  • If the product area manager prepares it (and I’ve always made that the case in any manufacturing organization I’ve ever managed), they know every day where the shipments stand with respect to the contract due dates.  They don’t have to wait until the end of the month to find out where they are.
  • If you base the dollars on the product values and their contract due dates, you get a true plan of what the shipments (both planned and actual) should look like.  I often hear manufacturers claim they have to plan by revenue rather than by the product due dates, but that’s a mistake from several perspectives (and it will be the subject of a future blog entry).  A bit of a prelude on that one:  If you start pulling in anything you can to make the monthly sales figure (i.e., shipping product early because it’s closer to being ready than what is actually due), you’re doing serious damage to next month’s shipping schedule.  Like I said, more on this topic later.
  • If you put this metric in the factory in the final assembly area (and especially if the product manufacturing manager is located in this area), everyone sees exactly where the company is.
  • It avoids the typical “hockey stick” shipping profile, where little goes out of the factory during the first three weeks of the month, and there’s a mad dash to ship everything during the last week of the month.

Percent Of Work Orders Completed On Time

This is another simple chart, and it’s one that should be prepared for and prominently displayed in every work center in your factory.  It looks like this:

PercentOnTime

The premise here is that something or someone assigns work orders to each work center, and that the assignment includes a required completion date.   In companies with an MRP or ERP system, it’s usually called the “dispatch report” or the “to do” list.  It almost goes without saying, but I’ll say it anyway:  If the company is to deliver its products on time, each work center must strive to complete its dispatch-report-assigned work orders on time.

The metric here is simple:  It just shows the percent of work orders the work center completes on schedule each day.    The work center supervisor should prepare it at the end of the day, and post it in a prominent location so the folks assigned to each work center know how they’re doing.   It used to take me no more than 5 minutes to do this.  It was the essence of what I was being paid to do (manage the work center).
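The arithmetic is trivial, which is part of the point.  Here’s a minimal sketch in Python; the dispatch-report records and dates are hypothetical.

```python
# Minimal sketch: percent of work orders a work center completed on schedule today.
# The dispatch-report records below are hypothetical.
from datetime import date

dispatch_report = [
    {"work_order": "WO-1001", "required": date(2014, 6, 27), "completed": date(2014, 6, 27)},
    {"work_order": "WO-1002", "required": date(2014, 6, 27), "completed": date(2014, 6, 28)},
    {"work_order": "WO-1003", "required": date(2014, 6, 27), "completed": None},  # still open
]

today = date(2014, 6, 27)

due_today = [wo for wo in dispatch_report if wo["required"] <= today]
on_time = [wo for wo in due_today
           if wo["completed"] is not None and wo["completed"] <= wo["required"]]

percent_on_time = 100.0 * len(on_time) / len(due_today) if due_today else 100.0
print(f"{percent_on_time:.0f}% of work orders completed on time")
```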

The beauty of this metric is that it is simple (everyone in the work center will understand it), it only takes a few minutes each day to prepare, and it naturally encourages the work center to improve performance.   Hitting that 100% on time in the work center is manageable and achievable.

Sometimes I hear folks tell me this:  We can’t do this because everything in the work center is late, so there’s no way we can hit 100%.  If that’s the case in any of your work centers, you need to replan the work.   That’s important for several reasons, the most significant of which is that the master production schedule should define who needs to do what and by when.  If your master production schedule doesn’t assign the work order completion dates and everything (or nearly everything) in the work center is late, the folks in the work center will decide which jobs they work.   That’s not a formula for success.

MRB Aging

The exhortations about 6 Sigma and other management fads du jour notwithstanding, anyone who’s ever worked in a manufacturing company knows that nonconformances occur.   Yes, we want robust processes and we’d like to have zero defects, but I’ve never been in a factory that doesn’t experience rejections (and I’ve been in a lot of factories).  What governs our success is how we respond to them.

When items are rejected, they enter a material review process that determines nonconformance disposition:  Should the nonconforming item be scrapped, reworked, repaired, or used as is?

The above is interesting and you could write a book about the nuances associated with managing nonconforming material (I know because I actually did write a book that addresses this topic).   In my experience, strong root cause corrective action is essential for the obvious reasons (please see our Root Cause Failure Analysis training program), and so is rapid nonconformance disposition for a less obvious reason I’ll get to in a second.   Root cause failure analysis means finding out why the nonconformance occurred and taking steps to preclude recurrence.

Nonconformance disposition means deciding what we do with the nonconforming item, and whatever we do, it’s important that we do it quickly.   Very quickly, in fact.  From a delivery performance perspective, here’s a little-known fact with a huge impact:   Stuff in MRB is invisible to MRP.   The MRP system thinks the rejected items in MRB are still available.   What that means to us is this:   When rejected items languish in MRB, they interfere with on time deliveries.   Items in MRB need to be dispositioned rapidly.   In the plants I’ve managed, I’ve put that limit at 1 day.   I’ve scrapped stuff that was hanging around too long even if it could be reworked.   My reasoning was that it was better to let MRP know the material was gone so we could get on with fabricating replacement material.   It drove the Materials folks nuts, but I only had to do it a couple of times before they became world-class proponents of dispositioning rejected material in less than a day.

This brings us to the third metric, and that’s a simple list of what’s in MRB, with a requirement that anything in there for more than 24 hours be highlighted in red.   You can set up an Excel spreadsheet with conditional formatting that compares the time something entered MRB to the current time and highlights it automatically if it goes over 1 day.   I posted that list outside the MRB bond area so that anyone walking by (which always included me at least once daily) could immediately see if things were growing whiskers in there.   It worked well.
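If you’d like the same red-flag behavior outside of Excel, here’s a minimal sketch in Python.  The item names and timestamps are hypothetical; the 1-day limit is the one described above.

```python
# Minimal sketch of an MRB aging list: flag anything in MRB for more than 24 hours.
# Item names and timestamps are hypothetical.
from datetime import datetime, timedelta

mrb_items = [
    {"item": "Bracket, P/N 12345", "entered_mrb": datetime(2014, 6, 27, 9, 0)},
    {"item": "Housing, P/N 67890", "entered_mrb": datetime(2014, 6, 28, 14, 30)},
]

now = datetime(2014, 6, 28, 16, 0)
limit = timedelta(days=1)

for entry in mrb_items:
    age = now - entry["entered_mrb"]
    status = "OVERDUE - disposition now" if age > limit else "ok"
    print(f'{entry["item"]}: in MRB {age.total_seconds() / 3600:.1f} hours ({status})')
```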


If your company is not delivering on schedule, the above metrics will quickly highlight where it hurts and where the improvement opportunities lie.   They put control of the factory in the hands of the folks who can make a difference in how your plant performs.   There’s much more to getting on schedule and staying there, of course, but the above is a good start.

If you’d like to learn more about on time delivery performance, please pick up a copy of Manufacturing Delivery Performance Improvement, available from Amazon.

Manufacturing-Delivery-Performance-Improvement

If you have any questions or suggestions, please give us a call at 909 204 9984; we’d love to hear from you.

 

 

Cool free stuff!

Posted in Creativity, Manufacturing Improvement, Uncategorized on April 2, 2014 by manufacturingtraining

In many of our courses we teach people about the many free references and other information available on the Internet for use in reliability predictions, FMEA preparation, product design, cost estimation, and other areas in which we teach and consult.   We’re including a partial list of these free resources on the ManufacturingTraining blog for your easy reference.   There will be more of our favorites here on the blog, so check back often (or better yet, hit the RSS button to subscribe).

Electronic Equipment Reliability Data.   MIL-HDBK-217F has been the “go to” source for electrical and electronic equipment reliability data for decades (I first learned about it when preparing reliability predictions for Honeywell’s military targeting systems in the 1970s).   It’s a comprehensive failure rate source, and perhaps just as significantly, it includes environmental modifiers to tailor a prediction to your system’s operating environment.   MIL-HDBK-217 also includes directions for performing an electronic equipment reliability prediction.   You can download a free copy of MIL-HDBK-217F here.

217

Galvanic Corrosion Prevention.   Corrosion is an expensive problem, and its annual cost has been estimated at $270 billion in the US alone.   That’s a whopping $1,000 for every man, woman, and child in the United States!   One of the principal contributors is galvanic corrosion, which can occur if the wrong metals are in intimate contact.   If you’re concerned about potential reactions between metals in your designs, MIL-STD-889B is the US standard for defining what’s acceptable and what’s not.   You can download a free copy of MIL-STD-889B here.

889

Procedures for Performing an FMEA.   Failure Modes and Effects Analysis is a superior tool for alerting the design team to potential failure modes during the development process.   We teach an FMEA course that receives high marks from all who have taken it, and one of the topics we address is how FMEA was first developed by the US Department of Defense just after World War II for use in new program development.   MIL-STD-1629 has been superseded by commercial FMEA standards, but it is still the defining document for performing FMEAs, and you can still download a free copy here.

1629

System Safety Procedures.   There is a family of system safety analyses similar in concept to Failure Modes and Effects Analysis but focused exclusively on safety issues.  These include Preliminary Hazard Analyses, Subsystem Hazard Analyses, System Hazard Analyses, Common Mode Analyses, and Operating Hazard Analyses.   MIL-STD-882D addresses all of these and more.   You can download a free copy of MIL-STD-882D here.

882D

Gantt Chart Excel Software.   H.L. Gantt, an industrial engineer, developed the Gantt chart scheduling approach that bears his name during World War I to keep track of large projects.   He hit a home run with this one.   It’s the “go to” approach used throughout the world, and it makes it very easy to rapidly determine if a program is on schedule.     I don’t much care for Microsoft Project, as its Gantt charts tend to be tough to manage and nearly impossible to portray in a Word or PowerPoint file.   I’ve found Excel to be much easier to use, and to import into a Word document or PowerPoint presentation.   You can download a free Excel template for Gantt charts here.

GanttExcel

That’s it for now.   Keep an eye on this blog, as we’ll be adding more free stuff in future posts.

 

Book of the Month!

Posted in Uncategorized on November 22, 2013 by manufacturingtraining

Unleashing Engineering Creativity was recently named book of the month by the editorial board at Industrial Engineer magazine!   Woohoo!

BOTM-IE-650

Unleashing Engineering Creativity focuses on creativity techniques directly applicable to engineering challenges.   It’s a great read, and you can order your copy by clicking on the link above!

 

Applying Taguchi to Load Development

Posted in Manufacturing Improvement on August 4, 2013 by manufacturingtraining

This blog entry describes the application of the Taguchi design of experiments technique to .45 ACP load development in a Smith and Wesson Model 25 revolver.

IMG_0692-450

Taguchi testing is an ANOVA-based approach that lets us evaluate the impact of several variables simultaneously while minimizing sample size.  It is a powerful technique because it identifies which factors are statistically significant and which are not.   We are interested in both from the perspective of their influence on an output parameter of concern.

Both categories of factors are good things to know.  If we know which factors are significant, we can control them to achieve a desired output.   If we know which factors are not significant, it means they require less control (thereby offering cost reduction opportunities).

The output parameter of concern in this experiment is accuracy.   When performing a Taguchi test, the output parameter must be quantifiable, and this experiment provides this by measuring group size.   The input factors under evaluation include propellant type, propellant charge, primer type, bullet weight, brass type, bullet seating depth, and bullet crimp.  These factors were arranged in a standard Taguchi L8 orthogonal array as shown below (along with the results):

Taguchi-1

As the above table shows, three sets of data were collected.  We tested each load configuration three times (Groups A, B, and C) and we measured the group size for each 3-shot group.

After accomplishing the above, we prepared the standard Taguchi ANOVA evaluation to assess which of the above input factors most influenced accuracy:

Taguchi-2

The above results suggest that crimp (or lack thereof) has the greatest effect on accuracy.   The results indicate that rounds with no crimp are more accurate than rounds with the bullet crimped.
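If you’d like to see the mechanics behind a table like this, here’s a minimal sketch of the level-average (main effects) portion of the evaluation for a standard L8 array.  The factor-to-column assignment and the group-size numbers are hypothetical, not the data from the test above.

```python
# Minimal sketch of a Taguchi L8 main-effects calculation.
# The column assignments and response values below are hypothetical.
import numpy as np

# Standard L8 orthogonal array: 8 runs x 7 two-level factors (levels coded 1 and 2).
L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])

factors = ["propellant type", "propellant charge", "primer type", "bullet weight",
           "brass type", "seating depth", "crimp"]

# Mean group size (inches) for each of the 8 runs -- hypothetical numbers.
response = np.array([0.9, 1.1, 0.8, 1.2, 1.0, 0.7, 1.3, 0.9])

# Main effect of each factor: average response at level 2 minus average response at level 1.
for col, name in enumerate(factors):
    mean_1 = response[L8[:, col] == 1].mean()
    mean_2 = response[L8[:, col] == 2].mean()
    print(f"{name:17s}  level 1: {mean_1:.3f}  level 2: {mean_2:.3f}  effect: {mean_2 - mean_1:+.3f}")
```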

We can’t simply stop here, though.  We have to assess if the results are statistically significant.   Doing so requires performing an ANOVA on the crimp versus no crimp results.  Using Excel’s data analysis feature (the f-test for two samples) on the crimp-vs-no-crimp results shows the following:

Taguchi-3

Since the calculated f-ratio (3.817) does not exceed the critical f-ratio (5.391), we cannot conclude that the findings are statistically significant at the 90% confidence level.  If we allow a lower confidence level (80%), the results are statistically significant, but we usually would like at least a 90% confidence level for such conclusions.
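Here’s a minimal sketch of that check in Python, using SciPy’s F distribution in place of Excel’s data analysis tool.  The group-size values are hypothetical, but if (as it appears) there were four runs per condition and a 90% confidence level, the critical F works out to the 5.391 quoted above.

```python
# Minimal sketch of the two-sample F-test described above (a variance-ratio test,
# like Excel's "F-Test Two-Sample for Variances").  Group sizes are hypothetical.
from statistics import variance
from scipy import stats

crimp    = [1.1, 1.3, 0.9, 1.2]   # group size (inches) for the crimped runs -- hypothetical
no_crimp = [0.8, 0.9, 0.7, 1.0]   # group size (inches) for the uncrimped runs -- hypothetical

# F ratio: larger sample variance over the smaller sample variance.
var_crimp, var_no_crimp = variance(crimp), variance(no_crimp)
f_ratio = max(var_crimp, var_no_crimp) / min(var_crimp, var_no_crimp)

# Critical F at the 90% confidence level (alpha = 0.10), with n-1 degrees of freedom per sample.
df1 = df2 = len(crimp) - 1
f_crit = stats.f.ppf(1 - 0.10, df1, df2)   # approximately 5.391 for (3, 3) degrees of freedom

print(f"F = {f_ratio:.3f}, F critical (90%) = {f_crit:.3f}")
print("significant" if f_ratio > f_crit else "not statistically significant at 90% confidence")
```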

So what does all the above mean?   Here are our conclusions from this experiment:

  • This particular revolver shoots any of the loads tested extremely well.  Many of the groups (all fired at a range of 50 feet) were well under an inch.
  • Shooter error (i.e., inaccuracies resulting from the shooter’s unsteadiness) overpowers any of the factors evaluated in this experiment.

Although the test shows that the results are not statistically significant, this is good information to know.  What it means is that any of the test loads can be used with good accuracy (as stated above, this revolver is accurate with any of the loads tested).  It suggests (but does not confirm to a 90% confidence level) that absence of a bullet crimp will result in greater accuracy.

QMCover

The parallels to design and process challenges are obvious.   We can use the Taguchi technique to identify which factors are critical so that we can control them to achieve desired product or process performance requirements.   As significantly, Taguchi testing also shows which factors are not critical.  Knowing this offers cost reduction opportunities because we can relax tolerances, controls, and other considerations in these areas without influencing product or process performance.

If you’d like to learn more about Taguchi testing and how it can be applied to your products or processes, please consider purchasing Quality Management for the Technology Sector, a book that includes a detailed discussion of this fascinating technology.

And if you’d like a more in depth exposure to these technologies, please contact us for a workshop tailored to your needs.

Statistical Tolerance Analysis

Posted in Creativity, Manufacturing Improvement on June 17, 2013 by manufacturingtraining

Dimensional tolerances specify allowed variability around nominal dimensions.   We assign tolerances to assure component interchangeability while meeting performance and producibility requirements.  In general, as tolerances become smaller manufacturing costs become greater.  The challenge becomes finding ways to increase tolerances without sacrificing product performance and overall assembly conformance.  Statistical tolerance analysis provides a proven approach for relaxing tolerances and reducing cost.

Before we dive into how to use statistical tolerance analysis, let’s first consider how we usually assign tolerances.  Tolerances should be based on manufacturing process capabilities and on requirements in the areas of component interchangeability, assembly dimensions, and product performance.  In many cases, though, tolerances are based on an organization’s past tolerance practices, standardized tolerance assignment tables, or misguided attempts to improve quality by specifying needlessly-stringent tolerances.  These latter approaches are not good:  they often induce fit and performance issues, and organizations that use them leave money on the table.

There are two approaches to tolerance assignment – worst case tolerance analysis and statistical tolerance analysis.

In the worst case approach, we analyze tolerances assuming components will be at their worst case conditions.  This seemingly innocuous assumption has several deleterious effects.  It requires larger than necessary assembly tolerances if we simply stack the worst case tolerances.   On the other hand, if we start with the required assembly tolerance and use it to determine component tolerances, the worst case tolerance analysis approach forces us to make the component tolerances very small. Here’s why this happens:   The rule for assembly tolerance determination using the worst case approach is:

T_assy = ΣT_i

where

T_assy = assembly tolerance

T_i = individual component tolerances

The worst case tolerance analysis and assignment approach assumes that the components will be at their worst case dimensions; i.e., each component will be at the extreme edge of its tolerance limits.  The good news is that this is not a realistic assumption.  It is overly conservative.

Here’s more good news:  Component dimensions will most likely be normally distributed between the component’s upper and lower tolerance bounds, and the probability of actually being at the tolerance limits is low.   The likelihood of all of the components in an assembly being at their upper and lower limits is even lower.  The most likely case is that individual component dimensions will hover around their nominal values.  This reasonable assumption underlies the statistical tolerance analysis approach.

We can use statistical tolerance analysis to our advantage in three ways:

  • If we start with component tolerances, we can assign a tighter assembly tolerance.
  • If we start with the assembly tolerance, we can increase component tolerances.
  • We can use combinations of the above two approaches to provide tighter assembly tolerances than we would use with the worst case tolerance analysis approach and to selectively relax component tolerances.

Statistical tolerance analysis uses a root sum square approach to develop assembly tolerances based on component tolerances.   In the worst case tolerance analysis approach discussed above, we simply added all of the component tolerances to determine the assembly tolerance.  In the statistical tolerance analysis approach, we find the assembly tolerance based on the following equation:

T_assy = (ΣT_i²)^(1/2)

Using the above formula is straightforward.  We simply square each component tolerance, take the sum of these squares, and then find the square root of the summed squares to determine our assembly tolerance.

Sometimes it is difficult to understand why the root sum square approach is appropriate.   We can think of it along the same lines as the Pythagorean theorem, in which the length of the hypotenuse of a right triangle is equal to the square root of the sum of the squares of the other two sides.  Or we can think of it as distance from an aim point.  If we have an inch of dispersion in one direction and an inch of dispersion in the perpendicular direction, the total dispersion is 1.414 inches, as we see below:

STA-2

To continue our discussion on statistical tolerance analysis, consider this simple assembly with three parts, each with a tolerance of ±0.002 inch:

STA-1

The worst case assembly tolerance for the above design is the sum of all of the component tolerances, or ±0.006 inch.

Using the statistical tolerance analysis approach yields an assembly tolerance based on the root sum square of the component tolerances.  It is (0.002² + 0.002² + 0.002²)^(1/2), or 0.0035 inch.  Note that the statistically-derived tolerance is 42% smaller than the worst case tolerance.   That’s a very significant decrease from the 0.006 inch worst case derived tolerance.
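Here’s a minimal sketch of both calculations in Python; it reproduces the numbers above and the relaxed-tolerance case discussed next.

```python
# Minimal sketch: worst case vs. root sum square (statistical) assembly tolerance.
from math import sqrt

def worst_case_tolerance(component_tolerances):
    """Worst case: the assembly tolerance is the simple sum of the component tolerances."""
    return sum(component_tolerances)

def rss_tolerance(component_tolerances):
    """Statistical: the assembly tolerance is the root sum square of the component tolerances."""
    return sqrt(sum(t ** 2 for t in component_tolerances))

# The three-component example above, each part toleranced at +/-0.002 inch.
tolerances = [0.002, 0.002, 0.002]
print(f"worst case: +/-{worst_case_tolerance(tolerances):.4f} inch")   # 0.0060
print(f"RSS:        +/-{rss_tolerance(tolerances):.4f} inch")          # 0.0035

# The relaxed tolerances discussed next (0.003, 0.003, and 0.004 inch)
# still come in under the 0.006 inch worst case assembly tolerance.
print(f"relaxed RSS: +/-{rss_tolerance([0.003, 0.003, 0.004]):.4f} inch")  # 0.0058
```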

Based on the above, we can assign a tighter assembly tolerance while keeping the existing component tolerances.  Or, we can stick with the worst case assembly tolerance (assuming this is an acceptable assembly tolerance) and relax the component tolerances.   In fact, this is why we usually use the statistical tolerance analysis approach – for a given assembly tolerance, it allows us to increase the component tolerances (thereby lowering manufacturing costs).

Let’s continue with the above example to see how we can do this.   Suppose we increase the tolerance of each component by 50% so that the component tolerances go from 0.002 inch to 0.003 inch.   Calculating the statistically-derived tolerance in this case results in an assembly tolerance of 0.0052 inch, which is still below the 0.006 inch worst case assembly tolerance.   This is very significant:  We increased the component tolerances by 50% and still came in with an assembly tolerance less than the worst case assembly tolerance.  We can even double one of the component tolerances to 0.004 inch while increasing the other two by 50% and still stay within the worst case assembly tolerance.  In this case, the statistically-derived assembly tolerance would be (0.003² + 0.003² + 0.004²)^(1/2), or 0.0058 inch.  It’s this ability to use statistical tolerance analysis to increase component tolerances that is the real money maker here.

The only disadvantage to the statistical tolerance analysis approach is that there is a small chance we will violate the assembly tolerance.   An implicit assumption is that all of the components are produced using capable processes (i.e., each process’s ±3σ spread lies within the tolerance limits for the part it produces).  This really isn’t much of an assumption (whether you are using statistical tolerance analysis or worst case tolerance analysis, your processes have to be capable).  With a statistical tolerance analysis approach, we can predict that 99.73% (not 100%) of all assemblies will meet the required assembly dimension.  This relatively small predicted rejection rate (just 0.27%) is usually acceptable.  In practice, when the assembly dimension is not met we can usually address it by simply selecting different components to bring the assembly into conformance.
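If you’d like to convince yourself of that 99.73% figure, here’s a minimal Monte Carlo sketch.  It assumes each component dimension is normally distributed with its tolerance limits at ±3σ, using the same three ±0.002 inch components from the example above.

```python
# Minimal Monte Carlo sketch of the statistical tolerance assumption.
# Assumes each component dimension is normally distributed with its
# tolerance limits at +/-3 sigma (capable processes).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
tolerances = [0.002, 0.002, 0.002]                    # component tolerances (inch)
rss_limit = np.sqrt(sum(t ** 2 for t in tolerances))  # 0.0035 inch assembly tolerance

# Total deviation of each simulated assembly from its nominal dimension.
deviation = sum(rng.normal(0.0, t / 3.0, n) for t in tolerances)

within = np.mean(np.abs(deviation) <= rss_limit)
print(f"{within:.2%} of simulated assemblies meet the RSS assembly tolerance")
# Expect roughly 99.7%, consistent with the 99.73% prediction above.
```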

Drawing Tolerances and the Manufacturing Impact

Posted in Manufacturing Improvement on May 29, 2013 by manufacturingtraining

Emergency Egress Sear

Dimensional tolerances specify allowed variability around nominal dimensions.  In general, as tolerances become smaller manufacturing costs become greater.   This isn’t always the case, but it is generally true (we’ll cover exceptions in a future blog entry).

The approach most organizations use for assigning tolerances often leaves improvement opportunities on the table in the areas of fit, performance, and cost reduction.   It makes sense to consider tolerance modifications (and in particular, tolerance relaxations) where we can do so for all of the above reasons and more.  The photo on the right, for example, shows a product that was poorly toleranced and ultimately caused the failure of an aircraft emergency egress system.   We’ll tell you more about it in a subsequent blog entry.

If you’re wondering if any of the above might be applicable to your design and manufacturing organization, we’d like to suggest the following questions:

  • How do we assign tolerances?
  • Do we or our suppliers have any recurring rejections we suspect are induced by needlessly-stringent tolerances?
  • Are there any areas where we or our suppliers are taking extreme measures to hold tight tolerances?
  • Have we ever experienced failures with otherwise conforming equipment?
  • Do we require drawing changes to relax the tolerance whenever we disposition nonconforming parts “use as is?”

Future Blog Entries

We’ll have a series of articles in the next several weeks addressing the pitfalls in how most organizations assign tolerances, how we can approach relaxing tolerances, how tighter tolerances can sometimes actually lower cost, the need for appropriately-targeted tolerance analysis, and how statistical process control implementation can allow increasing tolerances.

Keep an eye on the ManufacturingTraining blog for important and informative updates in each of these areas!

Creativity

Posted in Uncategorized on April 26, 2013 by manufacturingtraining

We’ve been doing a lot of work in the engineering creativity area lately, and we’ve been published repeatedly in Design News and Product Design magazines.   When you have a chance, take a peek at these articles…

http://www.pddnet.com/blogs/2013/04/unleashing-engineering-creativity-concept-fans

http://www.pddnet.com/articles/2013/03/unleashing-engineering-creativity

http://www.pddnet.com/articles/2013/04/unleashing-engineering-creativity-nine-screens

http://www.pddnet.com/blogs/2013/03/unleashing-engineering-creativity-kano-model

http://www.designnews.com/author.asp?section_id=1365&doc_id=262284&page_number=2

http://www.designnews.com/author.asp?section_id=1365&doc_id=260565

It’s all interesting material, and it’s all related to finding innovative solutions to product and process creativity challenges.

Enjoy!

 
