March 17, 2006

Crystal Ball and Design for Six Sigma (DfSS)

Abstract
The fundamental objective of Design for Six Sigma (DFSS) is to design products and processes that meet your customers’ needs cost-effectively, without production or integration problems. This objective can be achieved by following a simple set of good design practices: understand your customers’ needs, select the design concept that is most likely to succeed, mathematically predict the cost and performance of your design, and then make design improvements before committing to capital purchases and supplier contracts.
There can of course be pitfalls in applying each of these practices, but the most common weaknesses we have seen in DFSS implementations are in the area of mathematical prediction. In this paper I will show how modeling and simulation can be used to overcome these weaknesses and reap the full benefits of DFSS.
Foundation: Configuring DFSS for Success
Implementations of DFSS can take many forms. While all share the common thread of the good design practices above, most implementations lock them into a sequence of phases like DMADV (Define, Measure, Analyse, Design, Validate) or IDOV (Identify, Design, Optimise, Validate). Company-specific acronyms are also common. More often than not, these sequences exist separately from the company’s established development processes, so the extra overhead of “aligning” the DFSS acronym with the development process becomes necessary. A solution to this complexity is to recognise that any development process is a series of tasks, and that the DFSS good design practices fit within every task.
Implementing DFSS then becomes completely independent from the specific development process, and instead can be deployed as an enabler to the process itself. This type of implementation is shown in Figure 1.
[Figure 1: The design tasks of a typical development process (left) paired with the DFSS good design practices (right)]
The design tasks of a typical development process are shown down the left side of Figure 1, with the tasks grouped under the headings of “Concept Exploration”, “Conceptual Design”, and so on.
These specific headings are somewhat arbitrary and will change from one implementation to another, but the activities within them are largely universal. In Concept Exploration and Conceptual Design, many different design concepts are generated and evaluated. Each “concept” is typically a high-level layout of the different functions or subsystems of the design, with selected components or technologies for each. These concepts exist primarily on paper (or on computer). In Detail Design and Engineering Model the selected best concept(s) are fleshed out in more detail, again primarily on paper. Physical prototypes or mock-ups will be built to test and verify the calculations behind key features of the design, but the expense of prototypes is kept as small as possible.
Finally in Initial Production and Final Production the completed design is implemented and released to your customers, perhaps in large quantities.
A design task typically culminates in a design decision – the selection of a technology or component or material, the determination of the best dimensions or parameter values, and so on. One by one, these decisions push the design forward and flesh out its details. Ideally these decisions are made in the best interests of your customers and stakeholders, both internal and external. Following the good design practices on the right side of Figure 1 helps accomplish this goal. The “Voice of the Customer” is data collected directly from your customers through interviews, focus groups, surveys, warranty and complaint data, enhancement requests, etc. From this unfiltered data the true customer needs are determined, and these are used to identify the “Critical Requirements” for the design – quantifiable targets and characteristics that can be tested and measured on the design itself. The requirements should be stated generically enough to encourage the creation of several different potential solutions for each element of the design. The “Design Concept” is then the combination of all of the selected best solutions.
Once the design concept is assembled, a determination can then be made as to whether it is affected by variation. If it will be made in large quantities, will all units perform the same way every time? Even for just one unit, will it be used in precisely the same manner and in the same environmental conditions? Will its performance remain constant over the life of the design? Typically the answer to at least one of these questions is “no”. Physical components and materials vary from one unit to the next and degrade over time. Process parameters shift and drift over time, and environmental conditions can seldom be controlled very precisely.
If variation must be considered, it is not sufficient to predict the design’s performance deterministically. In other words, having one prototype that works under controlled conditions does not prove that the design will perform well under other conditions or over time. Instead a statistical analysis is used to assess the performance of the design across the complete range of variation. From this analysis an estimate of the probability of the design performing acceptably can be determined. There are two ways in which this analysis can be performed: build many samples and test and measure their performance, or predict the design’s performance mathematically. For the obvious reasons of time and expense, mathematical prediction is often the only viable option. Therefore it becomes a crucial step in DFSS to create mathematical models of the design.
Types of Mathematical Models for DFSS:
A mathematical model of a design can take many forms. It can be an equation from a textbook (physics, engineering, finance, accounting, etc.), a computer simulation, a prototype whose performance can be measured, or a set of historical data. Regardless of its type, the model can be represented as a “black box”, as shown in Figure 2.
[Figure 2: A mathematical model represented as a black box, Y = f(X)]
The output of the black box, “Y”, is a quantifiable parameter that maps directly to a critical requirement from your customer. Examples of such output parameters are weight, efficiency, cycle time, cost, and so on. (A model may have more than one “Y”, but for simplicity only one is shown here.) The inputs to the black box are parameters “Xi” that characterise the given design. Examples of these input parameters are material properties, process settings, dimensions, component values, etc. When the inputs are specified, one can “turn the crank” and generate a value for the output Y. The output is deterministic: it is completely determined by the model’s input parameter values. Notice that there is no distinction between “control factors” and “noise factors” here; all are classified as X’s. The mathematics of the black box is shown as “Y = f(X)”, but the math may not actually be an explicit equation. For example, different prototypes can be built with different parameter values Xi, and their outputs Y can be measured instead of calculated.
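A minimal sketch of such a black box in Python may make the idea concrete (the formula and parameter names below are invented placeholders, not a model from this paper):

    def y_of_x(x1, x2, x3):
        # Deterministic "black box": for a given set of inputs Xi it always
        # returns the same output Y. In practice f could be a textbook
        # equation, a call to a simulation, or a measurement of a prototype.
        return x1 * x2 / (1.0 + x3)

    # "Turning the crank": specify the inputs, read off the output.
    y = y_of_x(2.0, 5.0, 0.25)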
Statistical analyses such as Sensitivity Analysis or Monte Carlo analysis are performed on the black box by changing the input parameter values and observing the resulting changes in the output parameter. When a sufficient number of output values have been collected, a probability distribution can be constructed for the output parameter, and this distribution will tell us the likelihood of the design satisfying the customer requirement. More details of these analyses are given in the next section, but for now it is enough to note that typically a large number of output values (on the order of hundreds or thousands) will be required. It therefore becomes important that the output values can be generated quickly and cost-effectively.
However, not all models can be computed in a cost-efficient manner. Prototype designs can be very expensive to produce and measure. Finite-element simulations of a design can take hours or days to generate a single output Y value. In these cases statistical modeling techniques can be applied to create fast, accurate approximations of the original model. A decision tree to guide users through this statistical modeling process is shown in Figure 3. All of the different categories of models are listed down the left side of the figure – equations, data, simulations, prototypes, and the actual system. These represent the actual models gathered or generated to predict each of the design’s critical requirements. Observe that for all cases it is possible to generate fast, accurate approximations if necessary.
In Figure 3 there are three paths shown. For the first path, existing equations (from textbooks, from expert judgement, etc.) typically will compute very quickly with today’s computer tools.
[Figure 3: Decision tree for generating fast, accurate approximations from each category of model]
As long as their accuracy is sufficient, they can be used for DFSS as-is. (If their accuracy is not sufficient, that may initiate a separate activity of improving them, or perhaps of choosing a different type of model to predict the critical requirement.) The second path is for models that exist as data sets. These data sets are typically a set of measurements taken over time or over different production units where, for each unit, all of the input X’s and the output Y’s are measured. The standard format for this data is shown on the left of Figure 4. By applying regression analysis a fitted equation can be generated from the data; a simple example equation is shown on the right of Figure 4. The third path covers models that are slow or expensive to compute, such as finite-element simulations and physical prototypes; as described above, statistical modeling techniques can be used to build fast, accurate approximations of these as well.
[Figure 4: A data set in the standard format (left) and a simple equation fitted to it by regression (right)]
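A minimal sketch of this second path in Python, fitting a linear equation to a small data set in the standard format using NumPy least squares; the numbers are invented purely for illustration:

    import numpy as np

    # One row per measured unit: inputs X1, X2 and the measured output Y.
    data = np.array([[1.0, 2.0, 5.1],
                     [1.5, 2.2, 6.0],
                     [2.0, 1.8, 6.4],
                     [2.5, 2.5, 7.9]])
    X, y = data[:, :2], data[:, 2]

    # Least-squares fit of the equation Y = b0 + b1*X1 + b2*X2.
    A = np.column_stack([np.ones(len(X)), X])
    (b0, b1, b2), *_ = np.linalg.lstsq(A, y, rcond=None)
    # The fitted equation can now stand in for the slower original model.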
Benefits of Modeling and Simulation:
If a model can be created to predict your design’s performance with respect to a critical requirement, and if this model can be computed relatively quickly, then powerful statistical analyses become available that allow you to reap the full benefits of DFSS. You can predict the probability of the design meeting the requirement given environmental variation, manufacturing variation, and usage variation. If this probability is not sufficiently large, then you can determine the maximum allowable variation on the model’s inputs to achieve the desired output probability. And if the input variation cannot be controlled, you can explore new input parameter values that may improve your design’s statistical performance with respect to multiple requirements simultaneously. We will call these techniques “Statistical Analysis”, “Statistical Allocation”, and “Statistical Optimisation”, respectively.
Statistical Analysis
With a typical model, a single value is specified for each input parameter, and a single value is computed for each output parameter. In real-world applications, however, each input parameter will vary over time or from one unit to the next. It is imperative while designing to capture all of these sources of variation and to ensure that the output parameter behaves as needed across all combinations of input parameter values. This problem is represented in Figure 5 as a black-box model with a probability distribution specified for each input parameter. As these probability distributions filter through the mathematics of the model, they generate a probability distribution for each output parameter. If the mathematics is nonlinear, then the shape of the output distribution may bear no resemblance to any of the input distributions.
[Figure 5: Input probability distributions filtering through a black-box model to produce an output distribution bounded by the LSL and USL]
When the output parameter maps to a customer requirement, typically there are numerical limits defined that represent acceptable or unacceptable values. For an output parameter like weight or cost or cycle time, often there will be an upper limit that when exceeded would lead to customer dissatisfaction. For an output parameter like efficiency or speed or reliability, often there will be a lower limit that if not achieved would lead to customer dissatisfaction. There are also parameters like resonant frequencies or delivery times that may have both upper and lower limits. These limits are shown as the Upper Specification Limit (USL) and Lower Specification Limit (LSL) in Figure 5. The areas of the output probability distribution that fall outside these limits represent the probability of non-compliance (PNC) of the requirement. PNC can be a very useful metric in DFSS for tracking and improving the design’s performance in terms of customer satisfaction.
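Stated as an equation, the probability of non-compliance for a two-sided requirement is simply the probability mass outside the limits:

    PNC = P(Y < LSL) + P(Y > USL)

For a one-sided requirement, the term for the missing limit drops out; in a Monte Carlo analysis, PNC is estimated as the fraction of sampled output values that fall outside the limits.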
The central challenge in this type of statistical analysis is generating the probability distribution for the output parameter. Many different techniques exist, but two of the most prevalent are Sensitivity Analysis and Monte Carlo Analysis. Sensitivity Analysis approximates the output distribution by taking a low-order Taylor series expansion of the mathematical model and combining it with the low-order moments of the input distributions to compute the low-order moments of the output distribution. From these moments the location, spread, and sometimes the shape of the output distribution can be determined. Monte Carlo Analysis approximates the output distribution by randomly generating single values for each of the input parameters, plugging these values into the model, and computing a value for the output parameter. This process is repeated hundreds or thousands of times, generating a large sample of output values. By then applying sample statistics, a substantial amount of information can be derived about the output distribution – its location, spread, and shape. Monte Carlo Analysis is the gold standard against which all other techniques are judged, and if computational expense is not a problem, it will always be the preferred method. (We have found Crystal Ball by Decisioneering, in either the Standard or Professional version, to be a very useful Excel-based Monte Carlo package.) Sensitivity Analysis is often just as accurate as Monte Carlo Analysis, and it typically requires fewer model calculations to generate results, so it is often used when the model is slow or expensive to compute.
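A minimal Monte Carlo sketch in Python, reusing the hypothetical black box from earlier (the input distributions and specification limits are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000  # number of Monte Carlo trials

    # Randomly generate a single value for each input parameter, per trial.
    x1 = rng.normal(2.0, 0.1, n)    # e.g. a dimension: mean 2.0, sd 0.1
    x2 = rng.normal(5.0, 0.3, n)    # e.g. a material property
    x3 = rng.uniform(0.2, 0.3, n)   # e.g. an environmental condition

    # Plug each trial's inputs into the (hypothetical) model Y = f(X).
    y = x1 * x2 / (1.0 + x3)

    # Sample statistics give the output distribution's location and spread;
    # the fraction of trials outside the limits estimates PNC.
    lsl, usl = 6.5, 9.5
    pnc = np.mean((y < lsl) | (y > usl))
    print(y.mean(), y.std(ddof=1), pnc)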
Statistical Allocation
Once a statistical analysis is performed, the resulting PNC value is often unacceptably high. In these cases, one way to improve the PNC is to apply statistical allocation. Allocation techniques use Sensitivity Analysis or Monte Carlo analysis to work the problem backward and identify the input parameter standard deviations (roughly, a measure of the spread of their distributions) that will reduce the PNC to a desired level. An illustration of statistical allocation is shown in Figure 6. If the allocation is based on Sensitivity Analysis techniques, an infinite number of closed-form solutions (vectors of input standard deviations) can be generated. If Monte Carlo analysis is employed, then allocation becomes a brute-force iterative process, as sketched below.
[Figure 6: An illustration of statistical allocation]
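A minimal sketch of such a brute-force iteration in Python, again using the hypothetical model and limits from the Monte Carlo example above; the 10 percent tightening step and the target PNC are arbitrary choices for illustration:

    import numpy as np

    rng = np.random.default_rng(7)
    lsl, usl, target_pnc = 6.5, 9.5, 0.001
    sigmas = np.array([0.1, 0.3])  # starting sds for inputs x1 and x2

    def pnc_for(sigmas, n=20_000):
        # Estimate PNC by Monte Carlo for a given vector of input sds.
        x1 = rng.normal(2.0, sigmas[0], n)
        x2 = rng.normal(5.0, sigmas[1], n)
        x3 = rng.uniform(0.2, 0.3, n)   # environmental input, held fixed
        y = x1 * x2 / (1.0 + x3)
        return np.mean((y < lsl) | (y > usl))

    # Work the problem backward: tighten all input variation by 10 percent
    # per pass until the estimated PNC reaches the desired level.
    while pnc_for(sigmas) > target_pnc:
        sigmas *= 0.9
    print("allocated input standard deviations:", sigmas)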
Statistical Optimisation
Improving an output parameter’s PNC by reducing its input parameters’ variation is seldom ideal. Often it is preferable to reduce an output’s PNC while keeping the input variation constant, thus making the output robust to input parameter variation. This can be accomplished by searching for new mean values for each input parameter – in essence shifting the distributions higher or lower but keeping the width of the distributions the same. This can easily be done for multiple output parameters simultaneously by employing multi-objective statistical optimisation techniques. A graphical representation of multi-objective, statistical optimisation is shown in Figure 7.
To apply optimisation techniques, the first step is to build a formulation of the problem that captures all of the input parameters and output parameters. Each input parameter’s mean value can be specified as fixed or as searchable, and in both cases variation can be specified. Searchable input parameters can be defined in three ways: as continuous, integer, or discrete. A continuous parameter can take on any value between a specified minimum and a specified maximum. Integer parameters can take on only integer values between a minimum and a maximum, and discrete parameters can take on only values specified in a user-defined list.
Optimisation software tools automatically try different values for the input parameters in an attempt to improve the design’s output parameters. “Improvement” is defined by constraints and goals in the formulation. Constraints typically take an output’s mean, standard deviation, or PNC and specify a threshold value that it must achieve. Goals typically take a similar value and define a target value that it should ideally achieve. By definition, constraints are a higher priority than goals. The logic behind the search sequence of input parameter mean values is contained in the software tool’s algorithm, typically either gradient-based or heuristic. The output of a statistical optimisation problem is a vector of input parameter mean values and/or standard deviations that brings the critical requirements’ means, standard deviations, and PNC values as close as possible to their customer-specified targets.
[Figure 7: A graphical representation of multi-objective statistical optimisation]
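A minimal sketch of a statistical search in Python, continuing the same hypothetical model; a simple random search stands in for the gradient-based or heuristic algorithms of a real optimisation tool, and the bounds and goal are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    lsl, usl = 6.5, 9.5

    def pnc_for(mu1, mu2, n=20_000):
        # Variation (the sds) is held constant; only the means are searched.
        x1 = rng.normal(mu1, 0.1, n)
        x2 = rng.normal(mu2, 0.3, n)
        x3 = rng.uniform(0.2, 0.3, n)
        y = x1 * x2 / (1.0 + x3)
        return np.mean((y < lsl) | (y > usl))

    # Goal: minimise PNC over continuous searchable means within bounds.
    best_pnc, best_means = np.inf, None
    for _ in range(200):
        mu1 = rng.uniform(1.5, 2.5)   # searchable, continuous
        mu2 = rng.uniform(4.0, 6.0)   # searchable, continuous
        p = pnc_for(mu1, mu2)
        if p < best_pnc:
            best_pnc, best_means = p, (mu1, mu2)
    print("best means:", best_means, "estimated PNC:", best_pnc)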
Statistical Optimisation Example:
The typical examples we see in DFSS applications contain several nonlinear models that map into multiple, conflicting objectives across the dimensions of design performance, cost and reliability. Therefore it is our standard recommendation to employ optimisation packages and algorithms that can perform a global, nonlinear, multi-objective, statistical search. One such example that works well for engineering audiences is the design of a low-pass filter. This is a component selection problem where the circuit configuration is given and the mathematical models are well-understood. A schematic of the filter to be designed is shown in Figure 8.
[Figure 8: Schematic of the low-pass filter to be designed]
In this design problem, resistor values must be chosen for R1 and R2, capacitor values must be chosen for C1 and C2, and an op amp must also be selected. For each resistor and capacitor a nominal value must be specified along with a “percent tolerance”: when components are purchased in bulk, the supplier will warrant that all individual values fall within plus-or-minus some percentage of the nominal value. If we assume that all values are uniformly distributed within this tolerance band, then an equation can be derived for the component’s standard deviation. In addition, we have the ability to select components with different published failure rates; components with higher failure rates are cheaper but less reliable. Because we wish to purchase standard components from published catalogs, we do not have the luxury of specifying custom values for each component. Instead we must select our components from standard published lists. A summary of all of this design parameter information is shown in Table 1.
[Table 1: Design parameter information for the filter components]
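For a component uniformly distributed within plus-or-minus a fraction t of its nominal value, the half-width of the distribution is t × nominal, and a uniform distribution’s standard deviation is its half-width divided by the square root of 3:

    sigma = (t × nominal) / sqrt(3)

A minimal Monte Carlo sketch of the resulting cutoff-frequency variation in Python, assuming the Figure 8 circuit is a unity-gain Sallen-Key low-pass stage (so that fc = 1 / (2π sqrt(R1 R2 C1 C2))) and using invented nominal values, tolerances, and limits:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000

    def uniform_component(nominal, tol, size):
        # Sample component values uniformly within +/- tol of nominal.
        return rng.uniform(nominal * (1 - tol), nominal * (1 + tol), size)

    # Hypothetical catalog picks: 10 kohm / 5% resistors, 10 nF / 10% caps.
    r1 = uniform_component(10e3, 0.05, n)
    r2 = uniform_component(10e3, 0.05, n)
    c1 = uniform_component(10e-9, 0.10, n)
    c2 = uniform_component(10e-9, 0.10, n)

    # Cutoff frequency of the assumed Sallen-Key low-pass topology.
    fc = 1.0 / (2 * np.pi * np.sqrt(r1 * r2 * c1 * c2))

    # PNC against hypothetical limits of the 1.59 kHz nominal, +/- 10%.
    lsl, usl = 1.59e3 * 0.9, 1.59e3 * 1.1
    print("estimated PNC:", np.mean((fc < lsl) | (fc > usl)))

In a full study, the failure-rate and cost data summarised in Table 1 would enter the formulation as additional, conflicting objectives for the multi-objective search.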