Cool free stuff!

Posted in Creativity, Manufacturing Improvement, Uncategorized on April 2, 2014 by manufacturingtraining

In many of our courses we teach people about the free references and other information available on the Internet for use in reliability predictions, FMEA preparation, product design, cost estimation, and other areas in which we teach and consult.  We're including a partial list of these free resources on the ManufacturingTraining blog for your easy reference.  There will be more of our favorites here on the blog, so check back often (or better yet, hit the RSS button to subscribe).

Electronic Equipment Reliability Data.   MIL-HDBK-217F has been the “go to” source for electrical and electronic equipment reliability data for decades (I first learned about it when preparing reliability predictions for Honeywell’s military targeting systems in the 1970s).   It’s a comprehensive failure rate source, and perhaps just as significantly, it includes environmental modifiers to tailor a prediction to your system’s operating environment.   MIL-HDBK-217 also includes directions for performing an electronic equipment reliability prediction.   You can download a free copy of MIL-HDBK-217F here.
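If you'd like a feel for how such a prediction fits together, here's a minimal Python sketch of a parts-count style calculation. The failure rates, quality factors, and quantities below are made-up placeholders, not values from the handbook; look up the real numbers for your parts, quality levels, and operating environment.

```python
# Sketch of a MIL-HDBK-217F parts-count style reliability prediction.
# All failure rates and pi factors below are illustrative placeholders,
# NOT values from the handbook -- look up real values for your parts.

# part type: (generic failure rate per 10^6 hours, quality factor, quantity)
parts = {
    "microcircuit": (0.075, 1.0, 10),
    "resistor":     (0.002, 3.0, 120),
    "capacitor":    (0.004, 3.0, 60),
    "connector":    (0.010, 2.0, 8),
}

# Equipment failure rate: lambda_equip = sum(N_i * lambda_g_i * pi_Q_i)
lambda_equip = sum(n * lg * pq for lg, pq, n in parts.values())
mtbf_hours = 1.0e6 / lambda_equip  # rates above are per 10^6 hours

print(f"Predicted failure rate: {lambda_equip:.3f} failures / 10^6 h")
print(f"Predicted MTBF: {mtbf_hours:,.0f} hours")
```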


Galvanic Corrosion Prevention.  Corrosion is an expensive problem, and its annual cost has been estimated at $270 billion in the US alone.  That's a whopping $1,000 for every man, woman, and child in the United States!  One of the principal contributors to corrosion is galvanic corrosion, which can occur if the wrong metals are in intimate contact.  If you're concerned about potential reactions between metals in your designs, MIL-STD-889B is the US standard for defining what's acceptable and what's not.  You can download a free copy of MIL-STD-889B here.
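To illustrate how this kind of compatibility check works, here's a small Python sketch based on approximate anodic index values. Both the indices and the voltage limits below are commonly quoted rules of thumb, not the standard's authoritative tables; consult MIL-STD-889B for the real data.

```python
# Quick galvanic-compatibility screen using approximate anodic index values.
# The indices and the 0.25 V / 0.50 V limits are rules of thumb for
# illustration only, NOT the authoritative MIL-STD-889B tables.

ANODIC_INDEX_V = {        # approximate values, volts
    "gold": 0.00,
    "silver": 0.15,
    "copper": 0.35,
    "steel (plain)": 0.85,
    "aluminum": 0.90,
    "zinc": 1.25,
    "magnesium": 1.75,
}

def galvanic_ok(metal_a: str, metal_b: str, harsh_environment: bool) -> bool:
    """Return True if the couple's potential difference is within the limit."""
    limit = 0.25 if harsh_environment else 0.50
    delta = abs(ANODIC_INDEX_V[metal_a] - ANODIC_INDEX_V[metal_b])
    return delta <= limit

print(galvanic_ok("copper", "aluminum", harsh_environment=True))  # False
print(galvanic_ok("silver", "copper", harsh_environment=True))    # True
```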


Procedures for Performing an FMEA.  Failure Modes and Effects Analysis is a superior tool for alerting the design team to potential failure modes during the development process.  We teach an FMEA course that receives high marks from all who have taken it, and one of the topics we address is how FMEA was first developed by the US Department of Defense just after World War II for use in new program development.  MIL-STD-1629 has been superseded by commercial FMEA standards, but it is still the defining document for performing FMEAs.  You can download a free copy here.
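To give a feel for the bookkeeping, here's a small Python sketch of the risk priority number (RPN) ranking used in the commercial FMEA standards (MIL-STD-1629 itself ranks failure modes by criticality, but the idea is similar). All entries below are made up for illustration.

```python
# Sketch of commercial-style FMEA ranking: RPN = severity x occurrence
# x detection, each scored 1-10. The failure modes below are made up.

failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("Seal leak",          7, 4, 3),
    ("Connector fretting", 5, 6, 6),
    ("Solder joint crack", 8, 3, 7),
]

# Rank highest-risk failure modes first.
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for mode, s, o, d in ranked:
    print(f"{mode:20s} RPN = {s * o * d}")
```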


System Safety Procedures.  There is a family of system safety analyses similar in concept to Failure Modes and Effects Analysis but focused exclusively on safety issues.  These include Preliminary Hazard Analyses, Subsystem Hazard Analyses, System Hazard Analyses, Common Mode Analyses, and Operating Hazard Analyses.  MIL-STD-882D addresses all of these and more.  You can download a free copy of MIL-STD-882D here.
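As a taste of how the standard's risk assessment works, here's a Python sketch of a hazard risk matrix in the spirit of MIL-STD-882. The severity and probability names follow the standard's conventions, but the risk levels assigned in this particular matrix are illustrative; use the tailored matrix from your own program or contract.

```python
# Sketch of a MIL-STD-882-style hazard risk assessment matrix.
# The risk levels assigned in RISK below are illustrative, NOT the
# standard's exact table -- tailor to your program.

SEVERITY = ["Catastrophic", "Critical", "Marginal", "Negligible"]            # I-IV
PROBABILITY = ["Frequent", "Probable", "Occasional", "Remote", "Improbable"]  # A-E

RISK = [  # rows: severity I..IV, columns: probability A..E
    ["High",    "High",    "High",    "Serious", "Medium"],
    ["High",    "High",    "Serious", "Medium",  "Low"],
    ["Serious", "Serious", "Medium",  "Low",     "Low"],
    ["Medium",  "Medium",  "Low",     "Low",     "Low"],
]

def assess(severity: str, probability: str) -> str:
    return RISK[SEVERITY.index(severity)][PROBABILITY.index(probability)]

print(assess("Catastrophic", "Remote"))    # Serious
print(assess("Marginal", "Occasional"))    # Medium
```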


Gantt Chart Excel Software.  H.L. Gantt, an industrial engineer, developed the Gantt chart scheduling approach that bears his name during World War I to keep track of large projects.  He hit a home run with this one.  It's the "go to" approach used throughout the world, and it makes it easy to determine at a glance whether a program is on schedule.  I don't much care for Microsoft Project, as its Gantt charts tend to be tough to manage and nearly impossible to portray in a Word or PowerPoint file.  I've found Excel to be much easier to use and to import into a Word document or PowerPoint presentation.  You can download a free Excel template for Gantt charts here.
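If you'd rather script your Gantt charts than build them in a spreadsheet, here's a minimal Python/matplotlib sketch in the same spirit as the Excel template; the tasks and dates are made up.

```python
# Minimal Gantt chart sketch; the tasks and schedule below are made up.
import matplotlib.pyplot as plt

tasks = [            # (task name, start week, duration in weeks)
    ("Requirements", 0, 3),
    ("Design",       2, 5),
    ("Fabrication",  6, 6),
    ("Test",        11, 3),
]

fig, ax = plt.subplots(figsize=(8, 3))
for row, (name, start, duration) in enumerate(tasks):
    ax.broken_barh([(start, duration)], (row - 0.3, 0.6))  # one bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([name for name, _, _ in tasks])
ax.invert_yaxis()                 # first task at the top
ax.set_xlabel("Week")
ax.set_title("Program schedule")
plt.tight_layout()
plt.show()
```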


That’s it for now.   Keep an eye on this blog, as we’ll be adding more free stuff in future posts.

 

Book of the Month!

Posted in Uncategorized on November 22, 2013 by manufacturingtraining

Unleashing Engineering Creativity was recently named book of the month by the editorial board at Industrial Engineer magazine!   Woohoo!

[Image: Industrial Engineer magazine Book of the Month announcement]

Unleashing Engineering Creativity focuses on creativity techniques directly applicable to engineering challenges.   It’s a great read, and you can order your copy by clicking on the link above!

 

Applying Taguchi to Load Development

Posted in Manufacturing Improvement on August 4, 2013 by manufacturingtraining

This blog entry describes the application of the Taguchi design of experiments technique to .45 ACP load development in a Smith and Wesson Model 25 revolver.

[Photo: the Smith and Wesson Model 25 test revolver]

Taguchi testing is an ANOVA-based design of experiments approach that evaluates the impact of several variables simultaneously while minimizing sample size.  It is a powerful technique because it identifies which factors are statistically significant and which are not, from the perspective of their influence on an output parameter of concern.

Both categories of factors are good things to know.  If we know which factors are significant, we can control them to achieve a desired output.  If we know which factors are not significant, they require less control (thereby offering cost reduction opportunities).

The output parameter of concern in this experiment is accuracy.  When performing a Taguchi test, the output parameter must be quantifiable; in this experiment, we quantified accuracy by measuring group size.  The input factors under evaluation include propellant type, propellant charge, primer type, bullet weight, brass type, bullet seating depth, and bullet crimp.  These factors were arranged in a standard Taguchi L8 orthogonal array as shown below (along with the results):

[Table: L8 orthogonal array showing factor assignments and measured group sizes]

As the above table shows, three sets of data were collected.  We tested each load configuration three times (Groups A, B, and C) and we measured the group size for each 3-shot group.
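For readers who like to script this bookkeeping, here's a minimal Python sketch that lays out the standard L8 array and computes each factor's main effect (the difference in mean response between its two levels). The group sizes below are made-up stand-ins, not our measured data.

```python
# Sketch of Taguchi L8 main-effects bookkeeping. The response values
# below are made-up stand-ins for the measured group sizes.
import numpy as np

FACTORS = ["propellant", "charge", "primer", "bullet wt",
           "brass", "seating depth", "crimp"]

# Standard Taguchi L8 (2^7) orthogonal array, levels coded 1 and 2.
L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])

# Mean group size (inches) per run, averaged over Groups A, B, and C.
response = np.array([0.9, 1.1, 0.8, 1.3, 1.0, 0.7, 1.2, 0.9])

# Main effect = mean response at level 2 minus mean response at level 1.
for col, factor in enumerate(FACTORS):
    effect = (response[L8[:, col] == 2].mean()
              - response[L8[:, col] == 1].mean())
    print(f"{factor:14s} effect = {effect:+.3f} in")
```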

After accomplishing the above, we prepared the standard Taguchi ANOVA evaluation to assess which of the above input factors most influenced accuracy:

[Table: Taguchi ANOVA evaluation of factor effects on group size]

The above results suggest that crimp (or lack thereof) has the greatest effect on accuracy.   The results indicate that rounds with no crimp are more accurate than rounds with the bullet crimped.

We can't simply stop here, though.  We have to assess whether the results are statistically significant.  Doing so requires performing an ANOVA on the crimp versus no-crimp results.  Using Excel's data analysis feature (the f-test for two samples) on these results shows the following:

[Table: Excel f-test output for the crimp versus no-crimp group sizes]

Since the calculated f-ratio (3.817) does not exceed the critical f-ratio (5.391), we cannot conclude that the findings are statistically significant at the 90% confidence level.  If we allow a lower confidence level (80%), the results are statistically significant, but we usually would like at least a 90% confidence level for such conclusions.
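Here's the same style of significance check sketched in Python rather than Excel, using a one-way ANOVA on the two samples; again, the group sizes shown are made-up stand-ins for the measured data.

```python
# Sketch of the crimp vs. no-crimp significance check. The group sizes
# below are made-up stand-ins, not the measured data.
from scipy import stats

crimp    = [1.3, 1.1, 1.2, 0.9, 1.4, 1.0]   # group sizes, inches
no_crimp = [0.8, 0.9, 0.7, 1.1, 0.8, 1.0]

# One-way ANOVA on the two samples gives the calculated f-ratio.
f_ratio, p_value = stats.f_oneway(crimp, no_crimp)

# Critical f-ratio at the 90% confidence level.
alpha = 0.10
df_between = 1
df_within = len(crimp) + len(no_crimp) - 2
f_crit = stats.f.ppf(1 - alpha, df_between, df_within)

significant = f_ratio > f_crit
print(f"f = {f_ratio:.3f}, f_crit(90%) = {f_crit:.3f}, significant: {significant}")
```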

So what does all the above mean?   Here are our conclusions from this experiment:

  • This particular revolver shoots any of the loads tested extremely well.  Many of the groups (all fired at a range of 50 feet) were well under an inch.
  • Shooter error (i.e., inaccuracies resulting from the shooter’s unsteadiness) overpowers any of the factors evaluated in this experiment.

Although the test shows that the results are not statistically significant, this is good information to know.  What it means is that any of the test loads can be used with good accuracy (as stated above, this revolver is accurate with any of the loads tested).  It suggests (but does not confirm to a 90% confidence level) that absence of a bullet crimp will result in greater accuracy.

The parallels to design and process challenges are obvious.  We can use the Taguchi technique to identify which factors are critical so that we can control them to achieve desired product or process performance requirements.  As significantly, Taguchi testing also shows which factors are not critical.  Knowing this offers cost reduction opportunities because we can relax tolerances, controls, and other considerations in these areas without influencing product or process performance.

If you'd like to learn more about Taguchi testing and how it can be applied to your products or processes, please consider purchasing Quality Management for the Technology Sector, a book that includes a detailed discussion of this fascinating technique.

And if you'd like a more in-depth exposure to these techniques, please contact us for a workshop tailored to your needs.

Statistical Tolerance Analysis

Posted in Creativity, Manufacturing Improvement on June 17, 2013 by manufacturingtraining

Dimensional tolerances specify allowed variability around nominal dimensions.  We assign tolerances to assure component interchangeability while meeting performance and producibility requirements.  In general, as tolerances become smaller, manufacturing costs become greater.  The challenge becomes finding ways to increase tolerances without sacrificing product performance and overall assembly conformance.  Statistical tolerance analysis provides a proven approach for relaxing tolerances and reducing cost.

Before we dive into how to use statistical tolerance analysis, let's first consider how we usually assign tolerances.  Tolerances should be based on manufacturing process capabilities and requirements in the areas of component interchangeability, assembly dimensions, and product performance.  In many cases, though, tolerances are based on an organization's past tolerance practices, standardized tolerance assignment tables, or misguided attempts to improve quality by specifying needlessly-stringent tolerances.  These latter approaches are not good:  they often induce fit and performance issues, and organizations that use them often leave money on the table.

There are two approaches to tolerance assignment – worst case tolerance analysis and statistical tolerance analysis.

In the worst case approach, we analyze tolerances assuming components will be at their worst case conditions.  This seemingly innocuous assumption has several deleterious effects.  It requires larger than necessary assembly tolerances if we simply stack the worst case tolerances.   On the other hand, if we start with the required assembly tolerance and use it to determine component tolerances, the worst case tolerance analysis approach forces us to make the component tolerances very small. Here’s why this happens:   The rule for assembly tolerance determination using the worst case approach is:

T_assy = ΣT_i

where

T_assy = assembly tolerance

T_i = individual component tolerances

The worst case tolerance analysis and assignment approach assumes that the components will be at their worst case dimensions; i.e., each component will be at the extreme edge of its tolerance limits.  The good news is that this is not a realistic assumption.  It is overly conservative.

Here’s more good news:  Component dimensions will most likely be normally distributed between the component’s upper and lower tolerance bounds, and the probability of actually being at the tolerance limits is low.   The likelihood of all of the components in an assembly being at their upper and lower limits is even lower.  The most likely case is that individual component dimensions will hover around their nominal values.  This reasonable assumption underlies the statistical tolerance analysis approach.

We can use statistical tolerance analysis to our advantage in three ways:

  • If we start with component tolerances, we can assign a tighter assembly tolerance.
  • If we start with the assembly tolerance, we can increase component tolerances.
  • We can use combinations of the above two approaches to provide tighter assembly tolerances than we would use with the worst case tolerance analysis approach and to selectively relax component tolerances.

Statistical tolerance analysis uses a root sum square approach to develop assembly tolerances based on component tolerances.   In the worst case tolerance analysis approach discussed above, we simply added all of the component tolerances to determine the assembly tolerance.  In the statistical tolerance analysis approach, we find the assembly tolerance based on the following equation:

T_assy = (ΣT_i²)^(1/2)

Using the above formula is straightforward.  We simply square each component tolerance, take the sum of these squares, and then find the square root of the summed squares to determine our assembly tolerance.

Sometimes it is difficult to understand why the root sum square approach is appropriate.  We can think of this along the same lines as the Pythagorean theorem, in which the distance along the diagonal of a right triangle is equal to the square root of the sum of the squares of the triangle's sides.  Or we can think of it as distance from an aim point.  If we have an inch of vertical dispersion and an inch of horizontal dispersion, the total dispersion is √(1² + 1²), or 1.414 inches, as we see below:

[Figure: total dispersion of 1.414 inches as the root sum square of 1 inch horizontal and 1 inch vertical dispersion]

To continue our discussion on statistical tolerance analysis, consider this simple assembly with three parts, each with a tolerance of ±0.002 inch:

[Figure: three-part assembly with each component toleranced at ±0.002 inch]

The worst case assembly tolerance for the above design is the sum of all of the component tolerances, or ±0.006 inch.

Using the statistical tolerance analysis approach yields an assembly tolerance based on the root sum square of the component tolerances.  It is (0.002² + 0.002² + 0.002²)^(1/2), or 0.0035 inch.  Note that the statistically derived tolerance is 42% smaller than the 0.006 inch worst case tolerance.  That's a very significant decrease.

Based on the above, we can assign a tighter assembly tolerance while keeping the existing component tolerances.  Or, we can stick with the worst case assembly tolerance (assuming this is an acceptable assembly tolerance) and relax the component tolerances.  In fact, this is why we usually use the statistical tolerance analysis approach – for a given assembly tolerance, it allows us to increase the component tolerances (thereby lowering manufacturing costs).

Let's continue with the above example to see how we can do this.  Suppose we increase the tolerance of each component by 50%, so that the component tolerances go from 0.002 inch to 0.003 inch.  Calculating the statistically derived tolerance in this case results in an assembly tolerance of 0.0052 inch, which is still below the 0.006 inch worst case assembly tolerance.  This is very significant:  we increased the component tolerances 50% and still came in with an assembly tolerance less than the worst case assembly tolerance.  We can even double one of the component tolerances to 0.004 inch while increasing the other two by 50% and still lie within the worst case assembly tolerance.  In this case, the statistically derived assembly tolerance would be (0.003² + 0.003² + 0.004²)^(1/2), or 0.0058 inch.  It's this ability to use statistical tolerance analysis to increase component tolerances that is the real money maker here.
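Here's a minimal Python sketch that reproduces the numbers above:

```python
# Worst-case vs. root-sum-square (RSS) assembly tolerance for the
# three-part example above (each part toleranced at +/-0.002 inch).
import math

def worst_case(tols):
    """T_assy = sum of the component tolerances."""
    return sum(tols)

def rss(tols):
    """T_assy = square root of the sum of squared component tolerances."""
    return math.sqrt(sum(t * t for t in tols))

tols = [0.002, 0.002, 0.002]
print(f"worst case: {worst_case(tols):.4f} in")   # 0.0060
print(f"RSS:        {rss(tols):.4f} in")          # 0.0035

# The relaxed-tolerance cases discussed above:
print(f"RSS, all relaxed 50%:  {rss([0.003, 0.003, 0.003]):.4f} in")  # 0.0052
print(f"RSS, one part doubled: {rss([0.003, 0.003, 0.004]):.4f} in")  # 0.0058
```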

The only disadvantage to the statistical tolerance analysis approach is that there is a small chance we will violate the assembly tolerance.  An implicit assumption is that all of the components are produced using capable processes (i.e., each process is capable such that the ±3σ spread of every component dimension lies within its tolerance limits).  This really isn't much of an assumption (whether you are using statistical tolerance analysis or worst case tolerance analysis, your processes have to be capable).  With a statistical tolerance analysis approach, we can predict that 99.73% (not 100%) of all assemblies will meet the required assembly dimension.  This relatively small predicted rejection rate (just 0.27%) is usually acceptable.  In practice, when the assembly dimension is not met, we can usually address it by simply selecting different components to bring the assembly into conformance.
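A quick Monte Carlo simulation makes the 99.73% figure concrete. The sketch below assumes each component dimension is normally distributed with its ±3σ spread equal to its tolerance (the capability assumption above):

```python
# Monte Carlo check of the 99.73% claim: draw component deviations from
# normal distributions whose +/-3-sigma spread equals each tolerance,
# stack three parts, and count assemblies inside the RSS tolerance.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
tols = [0.002, 0.002, 0.002]               # +/- tolerances, inches
t_rss = np.sqrt(sum(t**2 for t in tols))   # 0.0035 in

# Deviation of each assembly from nominal = sum of component deviations.
deviations = sum(rng.normal(0.0, t / 3.0, n) for t in tols)

inside = np.mean(np.abs(deviations) <= t_rss)
print(f"assemblies within +/-{t_rss:.4f} in: {inside:.2%}")  # ~99.73%
```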

Drawing Tolerances and the Manufacturing Impact

Posted in Manufacturing Improvement on May 29, 2013 by manufacturingtraining

[Photo: emergency egress sear]

Dimensional tolerances specify allowed variability around nominal dimensions.  In general, as tolerances become smaller, manufacturing costs become greater.  This isn't always the case, but it is generally true (we'll cover exceptions in a future blog entry).

The approach used by most organizations for assigning tolerances often offers improvement opportunities in the areas of fit, performance, and cost.  It makes sense to consider tolerance modifications (and in particular, tolerance relaxations) wherever they serve those goals.  The photo on the right, for example, shows a poorly toleranced product that ultimately caused the failure of an aircraft emergency egress system.  We'll tell you more about it in a subsequent blog entry.

If you’re wondering if any of the above might be applicable to your design and manufacturing organization, we’d like to suggest the following questions:

  • How do we assign tolerances?
  • Do we or our suppliers have any recurring rejections we suspect are induced by needlessly-stringent tolerances?
  • Are there any areas where we or our suppliers are taking extreme measures to hold tight tolerances?
  • Have we ever experienced failures with otherwise conforming equipment?
  • Do we require drawing changes to relax the tolerance whenever we disposition nonconforming parts “use as is?”

Future Blog Entries

We’ll have a series of articles in the next several weeks addressing the pitfalls in how most organizations assign tolerances, how we can approach relaxing tolerances, how tighter tolerances can sometimes actually lower cost, the need for appropriately-targeted tolerance analysis, and how statistical process control implementation can allow increasing tolerances.

Keep an eye on the ManufacturingTraining blog for important and informative updates in each of these areas!

Creativity

Posted in Uncategorized on April 26, 2013 by manufacturingtraining

We’ve been doing a lot of work in the engineering creativity area lately, and we’ve been published repeatedly in Design News and Product Design magazines.   When you have a chance, take a peek at these articles…

http://www.pddnet.com/blogs/2013/04/unleashing-engineering-creativity-concept-fans

http://www.pddnet.com/articles/2013/03/unleashing-engineering-creativity

http://www.pddnet.com/articles/2013/04/unleashing-engineering-creativity-nine-screens

http://www.pddnet.com/blogs/2013/03/unleashing-engineering-creativity-kano-model

http://www.designnews.com/author.asp?section_id=1365&doc_id=262284&page_number=2

http://www.designnews.com/author.asp?section_id=1365&doc_id=260565

It’s all interesting material, and it’s all related to finding innovative solutions to product and process creativity challenges.

Enjoy!

 

Unleashing Engineering Creativity

Posted in Creativity with tags , , , on January 12, 2013 by manufacturingtraining

That’s the title of our newest book (available here from Amazon.com).  We also offer focused creativity training available exclusively through www.Eogogics.com/create.

[Image: Unleashing Engineering Creativity book cover]

In this latest book we explore the best techniques for stimulating creative thinking, creating new products, improving existing products, and solving design challenges.  Surprisingly, even those of us who are paid to be creative often need help.  Most of us lose much of our natural creativity by the time we finish high school, but we can regain it through the techniques included in Unleashing Engineering Creativity.  This is exciting and fun material, and Unleashing Engineering Creativity presents it in an interesting and engaging manner.

Many organizations and engineers rely on brainstorming as their primary creative and inventive tool, but this simplistic approach often fails to stimulate creativity in a meaningful way.  Unleashing Engineering Creativity goes far beyond brainstorming.  This book explores powerful new creativity stimulation approaches and provides recommendations for overcoming self-imposed obstacles.   The title says it all.  If you want to unleash your engineering creativity, this book will help you and your organization attain significant creativity improvements.
