Raising the Bar: Equipping Systems Engineers to Excel with DOE


1 Raising the Bar: Equipping Systems Engineers to Excel with DOE
[Title slide graphic: Cause-Effect (CNX) diagram – Plan / Ponder / Process / Produce, with causes grouped as Manpower, Materials, Methods, Machines, Measurements, Milieu (Environment) feeding the response/effect]
Presented to: INCOSE Luncheon, September 2009
Greg Hutto, Wing Ops Analyst, 46th Test Wing

2 Bottom Line Up Front
Test design is not an art … it is a science
Talented scientists fill the T&E enterprise; however, knowledge of test design (alpha, beta, sigma, delta, p, and n) is limited
Our decisions are too important to be left to professional opinion alone … they should rest on mathematical fact
53d Wg, AFOTEC, AFFTC, and 46 TW/AAC experience: teaching DOE as a sound test strategy is not enough; leadership from senior executives (SPO and Test) is key
Purpose: DoD adopts experimental design as the default approach to test, wherever it makes sense
Exceptions include demos, lack of trained testers, no control

3 Background -- Greg Hutto
B.S. US Naval Academy, Engineering - Operations Analysis; M.S. Stanford University, Operations Research
USAF Officer -- TAWC Green Flag, AFOTEC Lead Analyst
Consultant -- Booz Allen & Hamilton, Sverdrup Technology
Mathematics Chief Scientist -- Sverdrup Technology
Wing OA and DOE Champion -- 53rd Wing, now 46 Test Wing
USAF Reserves -- Special Assistant for Test Methods (AFFTC/CT) and Master Instructor in DOE for USAF TPS
Practitioner, Design of Experiments -- 18+ years
Selected T&E project experience: Green Flag EW Exercises '79; F-16C IOT&E '83; AMRAAM, JTIDS '84; NEXRAD, CSOC, Enforcer '85; Peacekeeper '86; B-1B, SRAM '87; MILSTAR '88; MSOW, CCM '89; Joint CCD T&E '90; SCUD Hunting '91; AGM-65 IIR '93; MK-82 Ballistics '94; Contact Lens Mfr '95; Penetrator Design '96; 30mm Ammo '97; 60 ESM/ECM projects '98-'00; B-1B SA OUE, Maverick IR+, F-15E Suite 4E+, 100's more '01-'06

4 Systems Engineering Experience
Modular Standoff Weapon (MSOW) – multinational NATO effort for next-gen weapons; died a deserved death …
Next Generation AGM-130 yields a surprising result – no next-gen AGM-130; instead JDAM, JSOW, JASSM
Clear Airfield Mines & Buried Unexploded Ordnance – requirements of >98% P(clearance) and <5 m UXO location error; engineering solutions included rollers, seismic tomography, explosive foams
Bare Base Study led to a 50% reduction in weight and cost and improved sustainability through COTS solutions
$20,000, 60K BTU/hr (5-ton), crash-survivable, miniaturized air conditioner replaced by one window A/C at ~$600
Google specs: Frigidaire FAM18EQ2A window-mounted heavy-duty room air conditioner, 18,000/17,800 BTU cool, 16,000 BTU heat, approx. 1,110 sq. ft. cool area, 9.7 EER, 11" max wall thickness – $640.60

5 Overview
Four Challenges – The 80% JPADS
3 DOE Fables for Systems Engineers
Policy & Deployment
Summary

6 Systems Engineering Challenges
Life-cycle phases: Concept, Requirements, Product & Process Design, Production, Operations, Decommission / End-use
Tools across the life cycle: AoA / feasibility studies; Quality Function Deployment (QFD); robust product design; process flow analysis; designed experiments (improve performance, reduce variance); historical / empirical data analysis; SPC – defect sources; supply chain management; Failure Modes & Effects Analysis (FMEA); lean manufacturing; Statistical Process Control (acceptance test); simulation; reliability, serviceability, maintainability, availability

7 Systems Engineering Simulations of Reality
At each stage of development, we conduct experiments
Ultimately – how will this device function in service (combat)?
Simulations of combat differ in fidelity and cost
Differing goals (screen, optimize, characterize, reduce variance, robust design, trouble-shoot)
Same problems – distinguish truth from fiction: What matters? What doesn't?

8 Industry Statistical Methods: General Electric
Fortune 500: ranked #5 in revenues
Global Most Admired Company (Fortune 2005); America's Most Admired #2; World's Most Respected #1 (7 years running); Forbes 2000 list #2
2004 revenues: $152B
Products and services: aircraft engines, appliances, financial services, aircraft leasing, equity, credit services, global exchange services, NBC, industrial systems, lighting, medical systems, mortgage insurance, plastics

9 GE’s (Re)volution In Improved Product/Process
Began in the late 1980s facing foreign competition
1998: Six Sigma Quality becomes one of three company-wide initiatives
'98: Invest $0.5B in training; reap $1.5B in benefits!
"Six Sigma is embedding quality thinking - process thinking - across every level and in every operation of our Company around the globe"1
"Six Sigma is now the way we work – in everything we do and in every product we design"1
1 Jack Welch - General Electric website at ge.com

10 What are Statistically Designed Experiments?
Purposeful, systematic changes in the inputs in order to observe corresponding changes in the outputs
Results in a mathematical model that predicts system responses for specified factor settings

11 Why DOE? Scientific Answers to Four Fundamental Test Challenges
Four Challenges:
How many? Depth of Test – effect of test size on uncertainty
Which points? Breadth of Test – searching the vast employment battlespace
How execute? Order of Test – insurance against "unknown-unknowns"
What conclusions? Test Analysis – drawing objective, supported conclusions
DOE effectively addresses all these challenges!

12 Today’s Example – Precision Air Drop System
The dilemma for airdropping supplies has always been a stark one. High-altitude airdrops often go badly astray and become useless or even counter-productive. Low-level paradrops face significant dangers from enemy fire, and reduce delivery range. Can this dilemma be broken? A new advanced concept technology demonstration shows promise, and is being pursued by U.S. Joint Forces Command (USJFCOM), the U.S. Army Soldier Systems Center at Natick, the U.S. Air Force Air Mobility Command (USAF AMC), the U.S. Army Project Manager Force Sustainment and Support, and industry. The idea? Use the same GPS guidance that enables precision strikes from JDAM bombs, coupled with software that acts as a flight control system for parachutes. JPADS (the Joint Precision Air-Drop System) has been combat-tested successfully in Iraq and Afghanistan, and appears to be moving beyond the test stage in the USA … and elsewhere.
Capability: assured SOF re-supply of material
Requirements: probability of arrival, unit cost $XXXX, damage to payload, payload accuracy, time on target, reliability, …
Just when you think of a good class example – they are already building it! 46 TS / 46 TW testing JPADS

13 A beer and a blemish …
1906 – W.S. Gosset, a Guinness chemist
Draw a yeast culture sample: how much yeast is in this culture?
Guess too little – incomplete fermentation; too much – bitter beer
He wanted to get it right
1998 – Mike Kelly, an engineer at a contact lens company
Draw a sample from a 15K lot: how many defective lenses?
Guess too little – mad customers; too much – destroy good product
He wanted to get it right

14 The central test challenge …
In all our testing, we reach into the bowl (reality) and draw a sample of JPADS performance
Consider an "80% JPADS": suppose a required 80% P(Arrival) – is the concept version acceptable?
We don't know in advance which bowl God hands us …
the one where the system works, or
the one where the system doesn't
The central challenge of test – what's in the bowl?

15 Start -- Blank Sheet of Paper
Let’s draw a sample of _n_ drops How many is enough to get it right? 3 – because that’s how much $/time we have 8 – because I’m an 8-guy 10 – because I’m challenged by fractions 30 – because something good happens at 30! Let’s start with 10 and see … => Switch to Excel File – JPADS Pancake.xls

16 A false positive – declaring JPADS degraded (when it's not) – alpha
Suppose we fail JPADS when it has 4 or more misses (6 or fewer hits) in 10 drops
In this bowl, JPADS performance is acceptable, yet we'll be wrong (on average) about 10% of the time
We can tighten the criterion (fail on 7 or fewer hits), but at the cost of failing to field more good systems
We can loosen the criterion (fail on 5 or fewer hits), but at the cost of missing real degradations
Let's see how often we miss such degradations …
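The ~10% figure can be checked directly from the binomial distribution. A minimal sketch, assuming the slide's 10-drop test, a true P(arrival) of 0.80, and the fail-on-4-or-more-misses rule:

```python
# Sketch: false-positive (alpha) risk for the 10-drop example above.
# Assumes a true P(arrival) of 0.80 and the rule "fail JPADS on 4 or more
# misses (6 or fewer hits) out of n = 10 drops".
from scipy.stats import binom

n, p_good = 10, 0.80              # drops flown, true per-drop arrival probability
alpha = binom.cdf(6, n, p_good)   # P(6 or fewer hits | system is actually good)
print(f"alpha = {alpha:.3f}")     # ~0.12, i.e. wrong roughly 10% of the time
```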

17 A false negative – we field JPADS (when it's degraded) – beta
In this bowl, JPADS P(A) has decreased 10% – it is degraded
Using the failure criterion from the previous slide, JPADS is fielded whenever it scores 7 or more hits, and the degradation goes undetected
With n = 10 drops, we're wrong about 65% of the time when JPADS has degraded
We can, again, tighten or loosen our criterion, but only at the cost of increasing the other error
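The 65% figure follows the same way. A minimal sketch, assuming the degraded P(arrival) is 0.70 and the field-on-7-or-more-hits rule carried over from the previous slide:

```python
# Sketch: false-negative (beta) risk when P(arrival) has degraded by 10%,
# i.e. from 0.80 down to 0.70, under the same 10-drop decision rule
# (field the system only if it scores 7 or more hits).
from scipy.stats import binom

n, p_degraded = 10, 0.70
beta = binom.sf(6, n, p_degraded)   # P(7 or more hits | system is degraded)
print(f"beta = {beta:.3f}")         # ~0.65, wrong about 65% of the time
```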

18 We seek to balance our chance of errors
Combining, we can trade one error for the other (alpha for beta)
We can also increase sample size to decrease both risks in testing
These statements are not opinion – they are mathematical fact and an inescapable challenge in testing
There are two other ways out … factorial designs and real-valued MOPs
Enough to get it right means: confidence in stating results, and power to find small differences

19 A Drum Roll, Please … For alpha = beta = 10% and delta = a 10% degradation in P(A)
Scoring each drop only as hit/miss demands a large number of drops; but if we measure miss distance instead, the same confidence and power need only N = 8
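One way to see why the measurement matters is a standard power calculation. A hedged sketch using statsmodels power calculators; the effect sizes and the one-sided setup are assumptions for illustration, not the briefing's exact inputs:

```python
# Sketch: required sample size at alpha = beta = 0.10 for a binary hit/miss
# MOP versus a continuous miss-distance MOP. Effect sizes (0.80 -> 0.70 for
# the proportion, a 1-sigma shift for miss distance) are illustrative.
from statsmodels.stats.power import NormalIndPower, TTestPower
from statsmodels.stats.proportion import proportion_effectsize

alpha, power = 0.10, 0.90

# Binary MOP: detect a drop in P(arrival) from 0.80 to 0.70 (one-sided)
h = proportion_effectsize(0.80, 0.70)
n_binary = NormalIndPower().solve_power(effect_size=h, alpha=alpha,
                                        power=power, alternative="larger")

# Continuous MOP: detect a one-standard-deviation shift in mean miss distance
n_continuous = TTestPower().solve_power(effect_size=1.0, alpha=alpha,
                                        power=power, alternative="larger")

print(f"hit/miss: ~{n_binary:.0f} trials per group; miss distance: ~{n_continuous:.0f} trials")
```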

20 Recap – First Challenge
Challenge 1: effect of sample size on errors – Depth of Test
So – it matters how many we do, and it matters what we measure
Now for the 2nd challenge – Breadth of Test: selecting points to search the employment battlespace

21 Challenge 2: Breadth -- How Do Designed Experiments Solve This?
Designed experiment (n.): purposeful control of the inputs (factors) in such a way as to deduce their relationships (if any) with the outputs (responses).
Inputs (conditions): JPADS concept (A, B, C, …), target sensor (TP, radar), payload type, platform (C-130, C-17)
Outputs (MOPs) for the test JPADS payload arrival: hits/misses, RMS trajectory deviation, P(payload damage), miss distance (m)
Statistician G.E.P. Box said: "All math models are false … but some are useful." "All experiments are designed … most, poorly."
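A crossed test matrix over factors like these is simply the Cartesian product of their levels. A minimal sketch; the factor names follow the slide, while the levels are illustrative stand-ins:

```python
# Sketch: a crossed (full-factorial) test matrix is the Cartesian product of
# the factor levels. Factor names follow the slide; the levels are stand-ins.
from itertools import product

factors = {
    "tgt_sensor": ["TP", "Radar"],
    "payload":    ["type 1", "type 2", "type 3"],
    "platform":   ["C-130", "C-17"],
}

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs), "crossed conditions")   # 2 x 3 x 2 = 12
print(runs[0])                           # first run's factor settings
```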

22 Battlespace Conditions for JPADS Case
Systems Engineering Question: Does JPADS perform at required capability level across the planned battlespace? 12 Dimensions - Obviously a large test envelope … how to search it?

23 Populating the Space - Traditional
[Figure: three Mach-altitude plots contrasting traditional point placement – one-factor-at-a-time (OFAT), selected cases, and changing variables together]

24 Populating the Space - DOE
[Figure: Mach-altitude plots of DOE point placement – factorial, response surface (with a single replicated point), and optimal designs]

25 More Variables - DOE Factorials
[Figure: factorial designs growing from 2-D (Mach x altitude) to 3-D (adding range) to 4-D (adding weapon type A vs. type B)]

26 Even More Variables
[Figure: +/- design-matrix layout extending the factorial to factors A through F]

27 Efficiencies in Test - Fractions
[Figure: +/- design-matrix layout illustrating a fractional factorial in several factors]

28 We have a wide menu of design choices with DOE
Full factorials, fractional factorials, response surface designs, optimal designs, space-filling designs
[Screenshot: JMP software DOE menu]

29 Problem context guides choice of designs
[Figure: design-selection chart – number of factors and constraints/complexity of the surface on one axis, number of levels needed per factor on the other, mapping regions to classical factorials, fractional factorials, response surface method designs, optimal designs, and space-filling designs]

30 Challenge 2: Choose Points to Search the Relevant Battlespace
With the same number of assets we can run 4 replicates of 1 variable, 2 replicates of 2 variables, 1 replicate of 3 variables, or a half-replicate of 4 variables
Factorial (crossed) designs let us learn more from the same number of assets
We can also use factorials to reduce assets while maintaining confidence and power, or we can combine the two
All four designs share the same power and confidence (a half-fraction sketch follows below)
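As a concrete example of the "1/2 rep, 4 vars" idea, a 2^(4-1) fraction can be built from a full 2^3 design plus a generator column. A minimal numpy sketch, assuming the textbook generator D = ABC:

```python
# Sketch: a 2^(4-1) half-fraction, i.e. the "1/2 rep, 4 vars" design, built
# from a full 2^3 in A, B, C plus the generator D = ABC (-1/+1 coding).
import numpy as np
from itertools import product

base = np.array(list(product([-1, 1], repeat=3)))   # full 2^3 in A, B, C
D = base[:, 0] * base[:, 1] * base[:, 2]            # generator: D = A*B*C
design = np.column_stack([base, D])                 # 8 runs covering 4 factors

print(design)
print(design.T @ design)   # off-diagonals are 0: the columns stay orthogonal
```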

31 Challenge 3: What Order? Guarding against “Unknown-Unknowns”
[Figure: task performance vs. run sequence, unrandomized vs. randomized, as learning makes the later (hard) runs look easier]
Randomizing runs protects against unknown background changes within an experimental period (due to Fisher)

32 Blocks Protect Against Day-to-Day Variation
[Figure: detection vs. run sequence for large and small targets as visibility drifts, with and without blocking]
Blocking designs protect against unknown background changes between experimental periods (also due to Fisher)
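A minimal sketch combining both ideas, randomizing run order within day-blocks; the factor levels and block labels are invented for illustration:

```python
# Sketch: randomizing run order within day-blocks, combining the two ideas
# above. Factor levels and block labels are invented for illustration.
import random

conditions = [(tgt, vis) for tgt in ("large", "small") for vis in ("low", "high")]
random.seed(1)

schedule = []
for block in ("Day 1", "Day 2"):        # each block sees every condition once
    order = conditions[:]
    random.shuffle(order)               # randomize order *within* the block
    schedule.extend((block, *run) for run in order)

for row in schedule:
    print(row)
```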

33 Challenge 4: What Conclusions - Traditional “Analysis”
Traditional reporting: a table of cases or scenario settings and findings, plus a graphical case summary
[Table: example case log with columns Sortie, Alt, Mach, MDS, Range, Tgt Aspect, OBA, Tgt Velocity, Target Type, Result (Hit/Miss)]
Errors (false positive/negative) and linking cause and effect (why?) go unaddressed
Case findings are usually reported as successes, or anomalies are investigated and explained – very little in the way of analysis
For the rare times analysis is performed, it is usually Excel-based graphs, with the analyst taking license to make the desired point based on leadership pressure or a sense from the test
Notice we can say most anything about such a plot (the domain is intentionally generic – it could be time, separate systems, or separate test subsets) by zooming in or out along the vertical axis and by chopping off sections of the data

34 How Factorial Matrices Work -- a peek at the linear algebra under the hood
We set the X settings and observe y; we solve for b such that the error e in y = Xb + e is minimized
For a simple 2-level, 2-factor design, the model terms are the grand mean, the effects of A and B alone, and the AB term that measures the variables working together

35 How Factorial Matrices Work II
To solve for the unknown b's, X must be invertible
Factorial design matrices are generally orthogonal, and therefore invertible by design (see the sketch below)
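A minimal numeric sketch of these two slides: build the orthogonal 2-level, 2-factor design matrix, check X'X, and solve for b. The response values are made up purely for illustration:

```python
# Sketch: the 2-level, 2-factor design matrix X (intercept, A, B, AB),
# made-up responses y, and the least-squares solution for b.
import numpy as np

A = np.array([-1,  1, -1,  1])
B = np.array([-1, -1,  1,  1])
X = np.column_stack([np.ones(4), A, B, A * B])   # orthogonal by construction

print(X.T @ X)                                    # diagonal, hence invertible

y = np.array([8.0, 12.0, 9.0, 17.0])             # observed responses (invented)
b = np.linalg.solve(X.T @ X, X.T @ y)            # intercept (grand mean) and A, B, AB terms
print(b)
```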

36 With DOE, we fit an empirical (or physics-based) model
Models of the form above (among others), with polynomial terms and interactions, are very simple to fit
Models are fit with ANOVA or multiple regression software
Models are easily interpretable in terms of the physics (magnitude and direction of effects)
Models are very suitable for "predict-confirm" challenges in regions of unexpected or very nonlinear behavior
Run-to-run noise can be explicitly captured and examined for structure
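A hedged sketch of fitting such a model with ordinary regression software (statsmodels here); the data are simulated and the factor names simply echo the CV-22 example that follows:

```python
# Sketch: fitting an empirical model with an interaction using ordinary
# regression software (statsmodels). Data are simulated; the factor names
# simply echo the CV-22 example that follows.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "airspeed":  np.tile([-1, 1], 8),        # coded -1/+1 levels
    "turn_rate": np.repeat([-1, 1], 8),
})
df["scp_dev"] = (3 + 1.5 * df.airspeed + 2.0 * df.turn_rate
                 + 1.0 * df.airspeed * df.turn_rate
                 + rng.normal(0, 0.5, 16))     # run-to-run noise

model = smf.ols("scp_dev ~ airspeed * turn_rate", data=df).fit()
print(model.summary().tables[1])               # effects, interaction, uncertainty
```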

37 Analysis using DOE: CV-22 TF Flight
PROCESS: TF/TA radar performance
INPUTS (factors): gross weight, radar measurement noise, airspeed, nacelle, set clearance plane, turn rate, crossing angle, ride mode, terrain type
OUTPUTS (responses): pilot rating, set clearance plane (SCP) deviation

38 Analysis Using DOE
[Table: run matrix with columns Gross Weight, SCP, Turn Rate, Ride (Medium/Hard), Airspeed, and the responses SCP Deviation and Pilot Ratings]

39 Interpreting Deviation from SCP: speed matters when turning
Evidence of an interaction – describe this one – complete with uncertainty bounds
Note that we quantify our degree of uncertainty: the whiskers represent 95% confidence intervals around our performance estimates
An objective analysis – not opinion

40 Pilot Ratings Response Surface Plot
[Figure: response surface of Pilot Ratings over Airspeed (160-230) and Turn Rate (0-4), with SCP and Ride held at Medium]
Another display showing a response surface that indicates an interaction
Any combination within the input space can be predicted using this surface, which is a pictorial representation of an empirical statistical model
The surface does not portray the uncertainty associated with prediction – next slide

41 Radar Performance Results SCP Deviation Estimating Equation
Prediction model in coded units (low = -1, high = +1):
Deviation from SCP = b0 + b1*SCP + b2*Turn Rate + b3*Ride + b4*Airspeed + b5*Turn*Ride + b6*Turn*Airspeed
The statistical model enables us to easily characterize: which inputs matter and which don't (notice gross weight is not present), what relationship exists between the inputs and outputs (linear, interaction), and a rank order of importance by the magnitude of the coefficients
The model can be used to predict any combination of the inputs – next slide

42 Performance Predictions
[Table: factor settings (SCP, Turn Rate, Ride, Airspeed) with low/high levels, and the resulting predictions with 95% prediction-interval bounds for Deviation from SCP and Pilot Ratings]
For a combination of the inputs not run during test, we can predict performance for multiple responses
The estimates are accompanied by uncertainty bounds – the SCP deviation is bounded from 5 to 8 (see the sketch below)
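A sketch of producing that kind of prediction with a 95% prediction interval from a fitted regression model; all data and settings are simulated for illustration, not the CV-22 results:

```python
# Sketch: a point prediction with a 95% prediction interval at an untested
# factor combination, from a fitted regression model. Data and settings are
# simulated for illustration, not the CV-22 results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"airspeed": np.tile([-1, 1], 8), "turn_rate": np.repeat([-1, 1], 8)})
df["scp_dev"] = (3 + 1.5 * df.airspeed + 2.0 * df.turn_rate
                 + 1.0 * df.airspeed * df.turn_rate + rng.normal(0, 0.5, 16))
model = smf.ols("scp_dev ~ airspeed * turn_rate", data=df).fit()

new_point = pd.DataFrame({"airspeed": [0.5], "turn_rate": [-0.25]})   # not a tested corner
pred = model.get_prediction(new_point).summary_frame(alpha=0.05)
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]])                 # prediction and 95% PI
```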

43 Design of Experiments Test Process is Well-Defined
Planning (factors desirable and nuisance, desired factors and responses); design points; test matrix; results and analysis; model build; discovery, prediction

44 Caveat – we need good science!
We understand operations, aero, mechanics, materials, physics, electro-magnetics … To our good science, DOE introduces the Science of Test Bonus: Match faces to names – Ohm, Oppenheimer, Einstein, Maxwell, Pascal, Fisher, Kelvin

45 It applies to our tests: DOE in 50+ operations over 20 years
IR sensor predictions; ballistics 6-DOF initial conditions; wind tunnel fuze characteristics; Camouflaged Target JT&E ($30M); AC /105mm gunfire CEP evals; AMRAAM HWIL test facility validation; 60+ ECM development and RWR tests; GWEF Maverick sensor upgrades; 30mm ammo over-age LAT testing; contact lens plastic injection molding; 30mm gun DU/HEI accuracy (A-10C); GWEF ManPad hit-point prediction; AIM-9X simulation validation; Link 16 and VHF/UHF/HF comm tests; TF radar flight control system gain optimization; new FCS software to cut C-17 PIO; AIM-9X+JHMCS tactics development; MAU 169/209 LGB fly-off and eval; characterizing Seek Eagle ejector racks; SFW altimeter false alarm trouble-shoot; TMD safety lanyard flight envelope; penetrator and reactive frag design; F-15C/F-15E Suite 4 and Suite 5 OFPs; PLAID performance characterization; JDAM, LGB weapons accuracy testing; best autonomous seeker algorithm; SAM validation versus flight test; ECM development ground mounts (10's); AGM-130 Improved Data Link HF test; TPS A-G WiFi characterization; MC/EC-130 flare decoy characterization; SAM simulation validation vs. live-fly; targeting pod TLE estimates; chem CCA process characterization; medical oxygen concentration T&E; multi-MDS Link 16 and Rover video test

46 Three DOE Stories for T&E
[Figure: Cause-Effect (CNX) diagram repeated from the title slide]
Requirements: SDB II build-up SDD shot design
Acquisition: F-15E Suite 4E+ OFP qualification
Test: combining digital-SIL-live simulations
We've selected these from 1000's to show T&E transformation

47 Testing to Diverse Requirements: SDB II Shot Design
Test objective: the SPO requests help – are 46 shots the right N? Power analysis – what can we learn? Consider an integrated test with AFOTEC. What are the variables? We do not know yet … how can we plan? What "management reserve"? 46 shots are too few to check binary values.
DOE approach: partition the performance questions (Pacq/reliability + laser + coordinates + "normal mode"); consider the total test program (HWIL + captive + live); build 3 custom, "right-size" designs to meet objectives/risks; a 32-shot factorial screens 4-8 variables down to a 0.5 std dev shift from the KPP; integrate 20 AFOTEC shots for "management reserve"
Results: binary Pacq would require N = 200+; demo laser + coords, N = 4 each; prove normal mode, N = 32

48 Acquisition: F-15E Strike Eagle Suite 4E+ (circa 2001-02)
Test objectives: qualify the new OFP suite for Strikes with new radar modes, smart weapons, Link 16, etc.; the test must address dumb weapons, smart weapons, comm, sensors, nav, air-to-air, CAS, interdiction, strike, ferry, refueling … Suite 3 testing required 600+ sorties
DOE approach: build multiple designs spanning EW and survivability; BVR and WVR air-to-air engagements; smart weapons captive and live; dumb weapons regression; sensor performance (SAR and TP)
Results: the vast majority of capabilities passed; wrung out sensors and weapons deliveries; dramatic reductions in the usual trials while spanning many more test points; moderate success teaming with Boeing on design points (maturing)
Source: F-15E Secure SATCOM Test, Ms. Cynthia Zessin, Gregory Hutto, 2007, F-15 OFP CTF, 53d Wing / 46 Test Wing, Eglin AFB, Florida

49 Strike Weapons Delivery: a Scenario Design Improved with DOE

50 Case: Integration of Sim-HWIL-Captive-Live Fire Events
Test objective: most test programs face this – AIM-9X, AMRAAM, JSF, SDB II, etc.; multiple simulations of reality with increasing credibility but increasing cost; multiple test conditions to screen for those most vital to performance; how do we strap these simulations together with prediction and validation?
Hierarchy: 1000's of digital mod/sim runs (15-20 factors) predict into 100's of HWIL or captive runs (8-12 factors), which predict into 10's of live shots (3-5 factors); each level validates the predictions of the one before, and credibility and cost rise at each step
DOE approach: in digital sims, screen variables with fractional factorials and predict performance; in HWIL, confirm the digital prediction (validate the model), further screen 8-12 factors, and predict; in live fly, confirm the prediction (validate) and test the 3-5 most vital variables; prediction discrepancies offer a chance to improve the sims
Results: approach successfully used in the 53d Wing EW Group – SIL labs at Eglin/PRIMES > HWIL on MSTE ground mounts > live fly (MSTE/NTTR) for jammers and receivers; trimmed typical live-fly sortie counts; AIM-9X, AMRAAM, ATIRCM: 90% sim reduction

51 A Strategy to be the Best … Using Design of Experiments
Inform leadership of statistical thinking for test; adopt the most powerful test strategy (DOE); train and mentor the total team (combination of AFIT, Center, and university courses); revise AF acquisition policy and procedures; share these test improvements
Targets: HQ AFMC, ASC & ESC, service DT&E
Adopting DOE: 53d Wing & AFOTEC, 18 FTS (AFSOC), RAF AWC, DoD OTAs (DOT&E, AFOTEC, ATEC, OPTEVFOR & MCOTEA), AFFTC & TPS, 46 TW & AAC, AEDC

52 53d Wing Policy Model: Test Deeply & Broadly with Power & Confidence
From 53d Wing Test Manager’s Handbook*: “While this [list of test strategies] is not an all- inclusive list, these are well suited to operational testing. The test design policy in the 53d Wing supplement to AFI mandates that we achieve confidence and power across a broad range of combat conditions. After a thorough examination of alternatives, the DOE methodology using factorial designs should be used whenever possible to meet the intent of this policy.” * Original Wing Commander Policy April 2002

53 March 2009: OTA Commanders Endorse DOE for both OT & DT
“Experimental design further provides a valuable tool to identify and mitigate risk in all test activities. It offers a framework from which test agencies may make well-informed decisions on resource allocation and scope of testing required for an adequate test. A DOE-based test approach will not necessarily reduce the scope of resources for adequate testing. Successful use of DOE will require a cadre of personnel within each OTA organization with the professional knowledge and expertise in applying these methodologies to military test activities. Utilizing the discipline of DOE in all phases of program testing from initial developmental efforts through initial and follow-on operational test endeavors affords the opportunity for rigorous systematic improvement in test processes.”

54 Nov 2008: AAC Endorses DOE for RDT&E Systems Engineering
AAC Standard Systems Engineering Processes and Practices

55 July 2009: 46 TW Adopts DOE as default method of test

56 We Train the Total Test Team … but first, our Leaders!
Leadership series: DOE Orientation (1 hour); DOE for Leaders (half day); Introduction to Designed Experiments (PMs, 2 days)
OA/TE practitioner series: 10 sessions, 1 week each; reading, lecture, seatwork
Basic Statistics Review (1 week): random variables and distributions, descriptive and inferential statistics, thorough treatment of the t test
Applied DOE I and II (1 week each): advanced undergraduate treatment; graduates know both how and why
Journeymen testers

57 Ongoing Monday Continuation Training
Weekly seminars online, topics wide ranging: new methods, new applications, problem decomposition, analysis challenges, reviewing the basics, case studies, advanced techniques
DoD web conferencing with a Q&A session following
Mondays, CR 22 Bldg 1, or via DCO at your desk

58 DOE Initiative Status
Gain AAC Commander endorsement and policy announcement
Train and align leadership to support the initiative
Commence training technical testers
Launch multiple quick-win pilot projects to show applicability
Communicate the change at every opportunity
Gain AAC leadership endorsement and align client SPOs
Influence hardware contractors with demonstrations and suitable contract language
Keep pushing the wheel to build momentum
Confront nay-sayers and murmurers
Institutionalize with promotions, policies, Wing structures and practices
Roll out to the USAF at large and our Army & Navy brethren

59 But Why Should Systems Engineers Adopt DOE?
It's the scientific, structured, objective way to build better ops tests
DOE is faster, less expensive, and more informative than alternative methods
It uniquely answers the deep-and-broad challenge: confidence and power across a broad battlespace
Our less-experienced testers can reliably succeed – a better chance of getting it right!
Why not? "If DOE is so good, why haven't I heard of it before?" "Aren't these ideas new and unproven?"
"To call in the statistician after the experiment is ... asking him to perform a postmortem examination: he may be able to say what the experiment died of." – DOE founder Sir Ronald A. Fisher, address to the Indian Statistical Congress, 1938

60 DOE: The Science of Test
What’s Your Method of Test? DOE: The Science of Test Questions?

61 Backups – More Cases

62 Where we intend to go with Design of Experiments
Design of Experiments can aid acquisition in many areas: planning and AoA execution; M&S efforts (AoA, flyout predictions, etc.); model verification (live-fire, DT, OT loop) with a more rigorous model-improvement loop; manufacturing process improvements; reliability improvements; failure analysis and attribution of cause and effect; performance monitoring of maintenance/logistics data; operational effectiveness monitoring; damage assessment feedback into weapons effectiveness models
5-year plan – making DOE the default option for shaping analysis and test programs: like any engineering specialty, DOE takes time to educate and mature in use; over the next 3-5 years, make DOE the way we do test (policy)
Implies training and mentoring our current workforce to the appropriate level, and hiring/educating B.S. and M.S. graduates with DOE skills into the engineering workforce mix
Timeline 2008-2011: initial awareness; hire/grad-ed industrial/OR types; begin technical training; mentor practitioners; adjust PDs, OIs, training plans; mature DOE in all programs

63 Case 1 Problem Binary data – tendency to score hit/miss or kill/no kill Bad juju – poor statistical power, challenge to normal theory Binary leads to bimodal distributions

64 Normal theory fit is inadequate
Typically get surfaces removed from the responses – poor fit, poor predictions
Also get predicted responses outside 0-1: poor credibility with clients

65 Solution is GLM – with binary response (error)
We call this the logit link function
The logit model restricts predictions to the interval (0, 1)
Commonly used in medicine, biology, and the social sciences (a fitting sketch follows below)
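A minimal sketch of a logit fit to a binary hit/miss response; the data are simulated and the factor names (visibility, aspect) just mirror the tables on the next slide:

```python
# Sketch: a logit fit to a simulated binary hit/miss response. Factor names
# mirror the tables on the next slide; coefficients and data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "visibility": rng.uniform(1, 5, 200),
    "aspect":     rng.uniform(0, 360, 200),
})
log_odds = -2 + 1.2 * df.visibility - 0.005 * (df.aspect - 180)
df["hit"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

fit = smf.logit("hit ~ visibility + aspect", data=df).fit()
print(fit.params)               # coefficients on the log-odds scale
print(fit.predict(df.head()))   # predicted probabilities stay inside (0, 1)
```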

66 Logit Solution Model Works Well
With 10 replicates, a good model; with only 1 replicate (90% of the data discarded), the same model!
[Table, 10-replicate fit – Term / Estimate / L-R ChiSquare / Prob>ChiSq / Lower CL / Upper CL:
Visibility 1.72, 43.40, <.0001, 1.08, 2.57
Aspect -0.02, 27.22, -0.03, -0.01
Intercept 3.34, 7.96, 0.00, 0.94, 6.34
(Aspect-270)*(Aspect-270) 1.59, 0.21
Visibility*(Aspect-270) 0.01, 0.98, 0.32, 0.02
(TGT_Range-2125)*(Aspect-270) 0.61, 0.43
Visibility*(TGT_Range-2125) 0.60, 0.44
Visibility*(TGT_Range-2125)*(Aspect-270) 0.13, 0.72
TGT_Range 0.04, 0.84
Visibility*Visibility -0.04, 0.93, -0.85, 0.74]
[Table, 1-replicate fit – same columns:
Visibility 2.44, 18.13, <.0001, 0.98, 5.34
Aspect -0.03, 12.76, 0.00, -0.07, -0.01
Intercept 8.23, 5.53, 0.02, 1.05, 21.28
(Aspect-270)*(Aspect-270) 3.82, 0.05
Visibility*(TGT_Range-2125)*(Aspect-270) 2.48, 0.12
Visibility*(Aspect-270) TGT_Range 1.48, 0.22
(TGT_Range-2125)*(Aspect-270) 0.84, 0.36
Visibility*(TGT_Range-2125) 0.52, 0.47
Visibility*Visibility 0.24, 0.62, -1.15, 1.85]

67 Problem – Blast Arena Instrumentation Characterization
Do the blast measuring devices read the same at various ranges/angles? Fractional Factorial Graphic of design geometry in upper right

68 Solution – unequal measurement variance - methods
Three interval methods agree: normal-theory confidence intervals, rank-transform confidence intervals, and bootstrap confidence intervals
Closer to the weapon, measurement variability is large
Note the confirmation with three methods

69 Problem – Nonlinear problem with restricted resources
Machine gun test – theory says error will increase with the square of range Picture is not typical, but I love Google

70 Solution – efficient nonlinear (quadratic) design - CCD
Invented by Box in the 1950s
2D and 3D views: a factorial augmented with center points and axial points, which lets us fit quadratics (a construction sketch follows below)
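A minimal sketch of constructing a two-factor CCD by hand, factorial corners plus axial and center points; the rotatable axial distance sqrt(2) and the three center points are conventional choices assumed here:

```python
# Sketch: a two-factor central composite design built by hand - the 2^2
# factorial corners augmented with axial and center points so quadratic
# curvature can be fit. The rotatable axial distance sqrt(2) is assumed.
import numpy as np
from itertools import product

corners = np.array(list(product([-1, 1], repeat=2)))      # 2^2 factorial
a = np.sqrt(2)                                            # rotatable alpha for k = 2
axial = np.array([[a, 0], [-a, 0], [0, a], [0, -a]])
center = np.zeros((3, 2))                                 # replicated center points

ccd = np.vstack([corners, axial, center])
print(ccd)          # 4 + 4 + 3 = 11 runs in coded units
```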

71 Power is pretty good with 4-factor, 30 run design
Power at the 5% alpha level to detect signal-to-noise ratios of 0.5, 1, and 2 standard deviations (basis std. dev. = 1.0):
Linear effects (StdErr 0.2357): 16.9% / 51.0% / 97.7%
Interactions (StdErr 0.2500): 15.5% / 46.5% / 96.2%
Quadratic effects (StdErr 0.6213): 11.7% / 32.6% / 85.3%

72 Prediction Error Flat over Design Volume
This chart (recent literature innovation) measures prediction error – mostly used as a comparative tool to trade off designs – Design Expert from StatEase.com

73 Logit fit to binary problem
[Figure: logit fit to the binary problem]
Summary: remarkable savings are possible, especially in M&S and HWIL; questions are scientifically answerable (and debatable); we have run into essentially no problems where we could not apply the principles; the point is that DOE is widely applicable, yields excellent solutions, and often leads to savings – and we know why
To our good science – add the Science of Test: DOE

74 Case: Secure SATCOM for F-15E Strike Eagle
Test Objective: To achieve secure A-G comms in Iraq and Afghanistan, install ARC-210 in fighters Characterize P(comms) across geometry, range, freq, radios, bands, modes Other ARC-210 fighter installs show problems – caution needed here! Results: For higher-power radio – all good For lower power radio, range problems Despite urgent timeframe and canceled missions, enough proof to field DOE Approach: Use Embedded Face-Centered CCD design (created for X-31 project, last slide) Gives 5-levels of geometric variables across radios & modes Speak 5 randomly-generated words and score number correct Each Xmit/Rcv a test event – 4 missions planned Source: F-15E Secure SATCOM Test, Ms. Cynthia Zessin, Gregory Hutto, 2007 F-15 OFP CTF 53d Wing / 46 Test Wing, Eglin AFB, Florida

75 Case: GWEF Large Aircraft IR Hit Point Prediction
Test objective: IR man-portable SAMs pose a threat to large aircraft in the current AOR; the Dept. of Homeland Security desired hit-point prediction for a range of threats to assess vulnerabilities; the solution was an HWIL study at the GWEF (ongoing) – IR missile vs. C-5 damage
DOE approach: aspect (7 levels, degrees), elevation (lo/mid/hi, 3 levels), profiles (takeoff, landing, 2 levels), altitudes (800, 1200, 2 levels); including threat, 588 cases; with the usual replication, nearly 10,000 runs – DOE controls replication to the minimum needed
Results: revealed unexpected hit-point behavior; the process is highly interactive (a rare 4-way interaction) and quite nonlinear with 3rd-order curves; reduced the runs required 80% over the past, with a possible reduction of another order of magnitude

76 Case: Reduce F-16 Ventral Fin Fatigue from Targeting Pod
Test objective: reduce F-16 ventral fin fatigue from targeting pod carriage by efficiently searching the flight envelope
DOE approach: many alternate designs exist for this 5-dimensional space (alpha, beta, Mach, altitude, jets on/off); an embedded face-centered CCD was chosen, with 162 test points
Results: a new design type invented circa 2005, capable of efficient flight-envelope search; suitable for loads, flutter, integration, acoustic, and vibration testing – the full range of flight test; experimental designs can increase knowledge while dramatically decreasing required runs; a full DOE toolbox enables more flexible testing

77 Case: Ejector Rack Forces for MAU-12 Jettison with SDB
Test objective: AF Seek Eagle must characterize the forces to jettison the MAU-12 rack with 0-4 GBU-39 remaining; the forces feed a simulation for safe separation; desire a robust test across multiple fleet conditions; stores certification depends on the findings
DOE approach: multiple factors – rack S/N, temperature, orifice, SDB load-out, cart lot, etc.; AFSEO used an innovative bootstrap technique to choose 15 racks to characterize rack-to-rack variation; final designs are in work, but promise an 80% reduction from the last characterization in FY99
Results: data-mined the '99 force data with regression; modeled the effects of temperature, rack, cg, etc.; the cg effect was insignificant; store weight, orifice, cart lot, and temperature were all significant

78 Case: AIM-9X Blk II Simulation Pk Predictions
Test objective: validate simulation-based acquisition predictions with live shots to characterize performance against the KPP; nearly 20 variables to consider (target, countermeasures, background, velocity, season, range, angles …); effort: 520 cases x 10 reps x 1.5 hr/rep = 7,800 CPU-hours = 0.89 CPU-years; Block II is more complex still!
DOE approach: data-mined the actual "520 set" to identify the critical parameters affecting Pk – many conditions have no detectable effect; shot grid – greatly reduce grid granularity; replicates – 1-3 reps per shot maximum
Result: DOE reduces the simulation CPU workload by 50-90% while improving live-shot predictions
Next steps: the ball is in Raytheon's court to learn/apply DOE
[Figure: notional 520 grid vs. notional DOE grid]

79 Case: AIM-9X Blk II Simulation Pk Predictions Update
Test objective: validate simulation-based Pk; the "520 set" is 520 cases x 10 reps x 1.5 hr/rep = 7,800 CPU-hours = 0.89 CPU-years; Block II contemplates a "1380 set": 1380*1*10 = 13,800 hours = 1.6 CPU-years
DOE approach: we've shown that many conditions have no effect, 1-3 reps suffice, and the shot grid is 10x too fine; Raytheon's response – hire a PhD OR-type to lead the DOE effort; projection – not 13,800 hours but 1,000 hours max, saving 93% of resources; DOE meeting the week of 2 June
Next steps: Raytheon responds to learn/apply DOE
[Figure: notional 520 grid vs. notional DOE grid]

80 Case: CFD for NASA CEV Test Objective:
Select geometries to minimize total drag in ascent to orbit for NASA's new Crew Exploration Vehicle (CEV); experts identified 7 geometric factors to explore, including nose shape; down-selected parameters were further refined in following wind-tunnel experiments
DOE approach: two designs, with 5 and 7 factors to vary; covered elliptic and conic noses to understand factor contributions; both designs were first-order polynomials with the ability to detect nonlinearities; the designs also included additional confirmation points to confirm the empirical math model in the test envelope
Results: the original CFD study envisioned 1,556 runs; DOE optimized the parameters in 84 runs – 95% fewer – and identified the key interaction driving drag
Source: A Parametric Geometry CFD Study Utilizing DOE, Ray D. Rhew, Peter A. Parker, NASA Langley Research Center, AIAA

81 Case: Notional DOE for Robust Engineering Instrumentation Design
Test Objective: Design data link recording/telemetry design elements with battery backup Choose battery technology and antenna robust to noise factors Environmental factors include Link-16 Load (msg per minute) and temperature Optimize cost, cubes, memory life, power out DOE Approach: Set up factorial design to vary multiple design and noise parameters simultaneously DOE can sort out which design choices and environmental factors matter – how to build a robust engineering design In this case (Montgomery base data) one battery technology outshone others over temperature – Link 16 messages do not matter

82 Case: Wind Tunnel X-31 AOA
Test objective: model nonlinear aero forces in moderate and high AoA and yaw conditions for the X-31; characterize the effects of three control surfaces (elevon, canard, rudder); develop classes of experimental designs suitable for efficiently modeling 3rd-order behavior as well as interactions
DOE approach: developed six alternate response surface designs; compared and contrasted the designs' matrix properties, including orthogonality, predictive variance, power, and confidence; ran the designs, built math models, made predictions, and confirmed them
Results: usual wind-tunnel trials encompass many more points – this test used 104; revealed the process is nonlinear (3rd order), complex, and interactive; predictions accurate to < 1%
[Figure: coefficient of lift (CL) vs. AoA (alpha)]
Source: A High Performance Aircraft Wind Tunnel Test using Response Surface Methodologies, Drew Landman, NASA Langley Research Center; James Simpson, Florida State University, AIAA

83 Case: SFW Fuze Harvesting LAT
Test problem/objective: harvest GFE fuzes from excess cluster/chem weapons; Textron "reconditions" the fuzes and installs them in SFW; a sequential 3+3 LAT pass/reject lot was proposed – is this an acceptable LAT? Recall: confidence = P(accept good lot) = 1-alpha; power = P(reject bad lot) = 1-beta
DOE/SPC approach: remember, binary variables are weak, and the test destroys the fuze; establishing LAT performance of confidence = 75% and power = 50% raises the question: what n is required? Answer: destroy 20+ of 25 fuzes
Results: shift focus from pass/fail scoring to fuze function time; with a 20 kft delivery, delta >> 250 ms, so the required n is now only 4 or 5 fuzes

84 Case: DOE for Troubleshooting & False Alarms (& software)
Test problem/objective: the 782 TS gets malfunctioning photos during weapons tests – what causes it? Possibilities include camera, laptop, config file, and operator, 4 levels each; 256 combinations – how to test? Remember, binary variables are weak, but here delta is also huge: 0 or 1, not 0.8 vs. 0.9
DOE/SPC approach: we can design several ways – a 4^4 design is all 256 combinations (not impossible); a "most likely" screen with a 2^4 design (16 combinations); or an optimal or space-filling design
Results: TE Marshall solved it with the "most likely" screen – the culprit was a laptop x config-file interaction; other applications: MWS, altimeter false alarms, software
[Figure: JMP Latin hypercube designs (n=8, n=10) and JMP sphere-packing designs (n=8, n=10)]

85 Case: DOE Characterizing Onboard Squib Ignition on Test Track
Test Problem/Objective: Track wants to test ability to initiate several ignitors / squibs both on- and -off board sleds What is the firing delay – average & variance? Min of three squibs/ignitors On and off board sled, voltages, number, and delays are variables Standard approach shown on next slide DOE/SPC Approach: We can design several ways: Choose to split design on and off board Can choose nested or confounded design Objective search reveals two: Characterize firing delay for signal Characterize ignition delay from capacitor Build two designs to characterize with dummy/actual loads Dummy resistors for signal delay Actual (scarce) ordnance for ignition delay Results: Still TBD – late summer execution Want to characterize also gauge reliability using harvested data mining Improved power across test space while examining more combinations Sniff out two objectives with process – two test matrices Power improved from measuring each firing event

86 Comparison of Designs As commonly found – more combos (4x), improved power (4x), new ideas and objectives

87 Case: Arena 500 Lb Blast Characterization vs. Humans
Test problem/objective: 780 TS live-fire testers want to characterize the blast energy waveform outside and inside a Toyota HiLux; map peak pressure as a function of range, aspect, weapon orientation, gauge, vehicle orientation, and glass; how repeatable are the measurements, bomb to bomb and gauge to gauge?
DOE/SPC approach: blast test responses are peak pressure, rise time, fall time, and slopes (for JMEM models); directly test the blast symmetry that is usually assumed; gain critical information on gauge repeatability; the test is a "split plot," since each detonation restricts randomization; consequence – less information on bomb-to-bomb variation (but that is of lesser interest)
Results: brilliant design from Capt Stults, Cameron, with assists from Higdon/Schroeder/Kitto; a great collaborative, creative test design

88 Case: Computer Code Testing
Test problem/objective: OK – DOE works for CEP, TLE, nav error, penetrators, blast testing, SFW fuze function … but what about testing computer codes? Mission planning involves load-outs, initial fuel, route length, refueling, headwinds, runway length, density altitude, threats, waypoints, ACO, package, targets, deliveries/LARs … how do we ascertain the package works? Examples: cyber defense mission planning, CWDS, UAI, JAWS/JMEM, CFD, IOW, campaign models, TBMCS, NetWar, F-15E Suite 5E OFP
DOE approach: we can design several ways – code verification (not much DOE contribution); physics or SW code – "space filling" designs; SW performance (errors, time, success/fail, degree) – classical DOE designs
Results: an evolving area – NPS, AFIT, Bell Labs; several space-filling design algorithms; aircraft OFP testing is a proof case (weapons, comm, sensors work fine); more to do – mission planning, C4I, IW, etc.; proposal – try some designs and get help from active units and universities (AFIT, Univ. of Florida, Va Tech, Army RDECOM, ASU, Ga Tech, Bell Labs …)

89 Some Space-Filling 3D designs
10-point 10x10x4 sphere-packing design; 50-point 10x10x4 Latin hypercube design; 30-point 10x10x4 uniform design; 27-point 3x3x3 classical DOE factorial design
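A sketch of generating a space-filling Latin hypercube over a 10x10x4 region like the designs pictured, using scipy's QMC module; the point count and bounds follow the slide, and the seed is arbitrary:

```python
# Sketch: a space-filling Latin hypercube over a 10 x 10 x 4 region like the
# designs pictured above, via scipy's QMC module. Point count and bounds
# follow the slide; the seed is arbitrary.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=7)
unit_points = sampler.random(n=30)                         # 30 points in [0, 1)^3
points = qmc.scale(unit_points, [0, 0, 0], [10, 10, 4])    # stretch to 10 x 10 x 4
print(points[:5])
```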

90 CV-22 IDWS
Program description: IDWS – Interim Defensive Weapon System; objective – evaluate baseline weapons-fire accuracy for the CV-22; flight test Jan 09 (status 31 Dec 08); For Official Use Only
Issues/concerns: accelerated testing; primary performance issues are vibration, performance, and flying qualities; full battlespace tested except climbs/dives and the 400-round burst (gun heating effect), which were excluded
DOE approach: rescoped the current phase of testing; completed process flow diagrams, the test matrix, and the run matrix; initial proposal = 108 runs; factorial with center points = 92 runs; FCD with center points = 40 runs; FCD with 2 center points = 36 runs; FCD in 2 blocks = 30 runs

91 MC-130H BASS
Program description: MC-130H Bleed Air System Study; test objective – collect bleed air system performance data on the MC-130H Combat Talon II aircraft; flight test Jan 09 (status 31 Dec 08)
Issues/concerns: inconsistent test objective – overall purpose; LMAS proposed the test matrix
DOE approach: completed a process description and decomposition, which led to fewer, better-defined test factors; "augmented" the LMAS test matrix of 114 required cases; split-split-plot design of 209 runs; increases power and confidence, decreases confounding

92 SABIR/ATACS
Program description: SABIR – Special Airborne Mission Installation & Response; ATACS – Advanced Tactical Airborne C4ISR System; test objectives – maintain aircraft pressurization, determine vibration, functional test at max loads, determine aerodynamic loading, performance and flying qualities, EMC/EMI; flight test spring 2009 (status 31 Dec 08)
Issues/concerns: scope provided by LMAS Palmdale; acceptable confidence, power, accuracy, and precision TBD
DOE approach: government-proposed matrix ~720 runs; DOE-proposed approach = embedded FCD, a factor-of-5 reduction

93 AFSAT
Program description: AFSAT – Air Force Subscale Aerial Target; objective – collect IR signature data on the AFSAT for future augmentation and threat representation; flight test Jan-Feb 09 (status 1 Jan 09); For Official Use Only
Issues/concerns: limited funding; full orthogonal design limited by test geometry; limited time of flight at test conditions; safety concerns – flying in front of an unmanned vehicle
DOE approach: used a minimal factorial design; the final design is a full factorial in some factors augmented with an auxiliary design; minimized the risk of missing key data points and protected against background variation with a series of baseline cases

94 LASI HPA
Program description: Large Aircraft Survivability Initiative (LASI) Hit-Point Analysis (HPA); high-fidelity models of large transport aircraft; supports susceptibility-reduction and vulnerability-reduction studies (status 31 Dec 08)
Issues/concerns: bi-modal data distribution; near-discontinuities in responses; high-order interactions and effects
DOE approach: harvested historic data sets for knowledge; identified intermediate responses with richer information; identified run savings for CM testing

95 Anubis MAV Fleeting Targets QRF
Micro-munitions air vehicle proof of concept and lethality assessment
Objective: successful target engagement; verify lethality
Original method of test: factor effects confounded; imbalanced test conditions; susceptible to background noise; no measure of wind effects
DOE fixed the original problems, used 8 instead of 10 prototypes, and inserted a pause after the first test to make midcourse corrections

