
1 Raising the Bar: Equipping Systems Engineers to Excel with DOE
[Cause-Effect (CNX) diagram: Plan, Ponder, Process, Produce; cause categories – Manpower, Materials, Methods, Machines, Measurements, Milieu (Environment) – feeding the response/effect]
Presented to: INCOSE Luncheon, September 2009
Greg Hutto, Wing Ops Analyst, 46th Test Wing

2 Bottom Line Up Front
Test design is not an art … it is a science. The T&E Enterprise has talented scientists; however, knowledge of test design – alpha, beta, sigma, delta, p, and n – is limited. Our decisions are too important to be left to professional opinion alone; they should be based on mathematical fact (53d Wg, AFOTEC, AFFTC, and 46 TW/AAC experience). Teaching DOE as a sound test strategy is not enough – leadership from senior executives (SPO & Test) is key. Purpose: DoD adopts experimental design as the default approach to test, wherever it makes sense. Exceptions include demos, lack of trained testers, and no control.

3 Background – Greg Hutto
B.S. US Naval Academy, Engineering – Operations Analysis; M.S. Stanford University, Operations Research
USAF Officer – TAWC Green Flag, AFOTEC Lead Analyst
Consultant – Booz Allen & Hamilton, Sverdrup Technology
Mathematics Chief Scientist – Sverdrup Technology
Wing OA and DOE Champion – 53rd Wing, now 46 Test Wing
USAF Reserves – Special Assistant for Test Methods (AFFTC/CT) and Master Instructor in DOE for USAF TPS
Practitioner, Design of Experiments
Selected T&E Project Experience – 18+ Years: Green Flag EW Exercises ('79), F-16C IOT&E ('83), AMRAAM, JTIDS ('84), NEXRAD, CSOC, Enforcer ('85), Peacekeeper ('86), B-1B, SRAM ('87), MILSTAR ('88), MSOW, CCM ('89), Joint CCD T&E ('90), SCUD Hunting ('91), AGM-65 IIR ('93), MK-82 Ballistics ('94), Contact Lens Mfr ('95), Penetrator Design ('96), 30mm Ammo, ESM/ECM projects, B-1B SA OUE, Maverick IR+, F-15E Suite 4E+, 100s more

4 Systems Engineering Experience
Modular Standoff Weapon (MSOW) – multinational NATO effort for next-generation weapons; died a deserved death
Next-generation AGM-130 yields a surprising result – no next-gen AGM-130; instead JDAM, JSOW, JASSM
Clear Airfield Mines & Buried Unexploded Ordnance requirements – >98% P(clearance), <5 m UXO location error; engineering solutions – rollers, seismic tomography, explosive foams
Bare Base Study led to a 50% reduction in weight and cost and improved sustainability through COTS solutions: a $20,000, 60K BTU/hr (5-ton) crash-survivable, miniaturized air conditioner replaced by one window A/C at $600
Google specs: Frigidaire FAM18EQ2A window-mounted heavy-duty room air conditioner, 18,000/17,800 BTU cool, 16,000 BTU heat, approximately 1,110 sq ft cool area, 9.7 EER, 11" max wall thickness – $640.60

5 Overview: Four Challenges – the 80% JPADS; 3 DOE Fables for Systems Engineers; Policy & Deployment; Summary

6 Systems Engineering Challenges
Lifecycle phases: Concept; Product & Process Design; Production; Operations; Decommission/End-use
Supporting methods: Requirements – Quality Function Deployment (QFD); AoA – Feasibility Studies; Process Flow Analysis; Designed Experiments – improve performance, reduce variance; SPC – defect sources; Supply Chain Management; Lean Manufacturing; Statistical Process Control (acceptance test); Serviceability, Availability, Maintainability, Reliability; Failure Modes & Effects Analysis (FMEA); Robust Product Design; Simulation; Historical/Empirical Data Analysis

7 Systems Engineering Simulations of Reality. At each stage of development, we conduct experiments. Ultimately – how will this device function in service (combat)? Simulations of combat differ in fidelity and cost, and have differing goals (screen, optimize, characterize, reduce variance, robust design, trouble-shoot). Same problems – distinguish truth from fiction: What matters? What doesn't?

8 Industry Statistical Methods: General Electric. Fortune 500: ranked #5 in revenues; Global Most Admired Company (Fortune 2005); America's Most Admired #2; World's Most Respected #1 (7 years running); on the Forbes 2000 list. Revenues: $152 B. Products and services: aircraft engines, appliances, financial services, aircraft leasing, equity, credit services, global exchange services, NBC, industrial systems, lighting, medical systems, mortgage insurance, plastics

9 GE's (Re)volution in Improved Product/Process. Began in the late 1980s facing foreign competition. In 1998, Six Sigma Quality becomes one of three company-wide initiatives; in '98 GE invests $0.5B in training and reaps $1.5B in benefits! "Six Sigma is embedding quality thinking - process thinking - across every level and in every operation of our Company around the globe."1 "Six Sigma is now the way we work – in everything we do and in every product we design."1 [1 Jack Welch, General Electric website]

10 What are Statistically Designed Experiments? Purposeful, systematic changes in the inputs in order to observe corresponding changes in the outputs Results in a mathematical model that predicts system responses for specified factor settings

11 Why DOE? Scientific Answers to Four Fundamental Test Challenges Four Challenges 1. How many? Depth of Test – effect of test size on uncertainty 2. Which Points? Breadth of Testing – searching the vast employment battlespace 3. How Execute? Order of Testing – insurance against unknown-unknowns 4. What Conclusions? Test Analysis – drawing objective, supported conclusions DOE effectively addresses all these challenges!

12 Today's Example – Precision Air Drop System. Just when you think of a good class example – they are already building it! 46 TS / 46 TW testing JPADS. The dilemma for airdropping supplies has always been a stark one. High-altitude airdrops often go badly astray and become useless or even counter-productive. Low-level paradrops face significant dangers from enemy fire, and reduce delivery range. Can this dilemma be broken? A new advanced concept technology demonstration shows promise, and is being pursued by U.S. Joint Forces Command (USJFCOM), the U.S. Army Soldier Systems Center at Natick, the U.S. Air Force Air Mobility Command (USAF AMC), the U.S. Army Project Manager Force Sustainment and Support, and industry. The idea? Use the same GPS guidance that enables precision strikes from JDAM bombs, coupled with software that acts as a flight control system for parachutes. JPADS (the Joint Precision Air-Drop System) has been combat-tested successfully in Iraq and Afghanistan, and appears to be moving beyond the test stage in the USA … and elsewhere. Requirements: Probability of Arrival, Unit Cost $XXXX, Damage to payload, Payload, Accuracy, Time on target, Reliability … Capability: Assured SOF re-supply of materiel

13 A beer and a blemish … 1906 – W.S. Gosset, a Guinness chemist, draws a yeast culture sample. How much yeast is in this culture? Guess too little – incomplete fermentation; too much – bitter beer. He wanted to get it right. 1998 – Mike Kelly, an engineer at a contact lens company, draws a sample from a 15K lot. How many defective lenses? Guess too little – mad customers; too much – destroy good product. He wanted to get it right.

14 The central test challenge … In all our testing, we reach into the bowl (reality) and draw a sample of JPADS performance. Consider an 80% JPADS: suppose a required 80% P(Arrival). Is the Concept version acceptable? We don't know in advance which bowl God hands us … the one where the system works, or the one where the system doesn't. The central challenge of test – what's in the bowl?

15 Start – Blank Sheet of Paper. Let's draw a sample of _n_ drops. How many is enough to get it right? 3 – because that's how much $/time we have; 8 – because I'm an 8-guy; 10 – because I'm challenged by fractions; 30 – because something good happens at 30! Let's start with 10 and see … => Switch to Excel file – JPADS Pancake.xls

16 A false positive – declaring JPADS degraded (when it's not). In this bowl, JPADS performance is acceptable. Suppose we fail JPADS when it has 4 or more misses. We'll be wrong (on average) about 10% of the time. We can tighten the criterion (fail on 7) at the cost of failing to field more good systems, or loosen it (fail on 5) at the cost of missing real degradations. Let's see how often we miss such degradations … [Figure: bowl of JPADS outcomes – wrong ~10% of the time]
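The ~10% figure is a binomial tail probability. A minimal sketch, assuming P(arrival) = 0.8 and the slide's fail criterion of 4 or more misses in n = 10 drops (scipy assumed available):

```python
from scipy.stats import binom

n, p_arrival = 10, 0.80          # drops per test, true (acceptable) arrival probability
fail_criterion = 4               # declare JPADS degraded on 4 or more misses

# Misses follow Binomial(n, 1 - p_arrival); alpha = P(misses >= 4 | system is good)
alpha = 1 - binom.cdf(fail_criterion - 1, n, 1 - p_arrival)
print(f"False-positive (producer's) risk: {alpha:.1%}")   # about 12%, the slide's "~10%"
```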

17 A false negative – we field JPADS (when it's degraded). Use the failure criterion from the previous slide: if JPADS scores enough hits to pass (fewer than 4 misses), we field it and fail to detect the degradation. If JPADS has degraded, with n = 10 drops we're wrong about 65% of the time. We can, again, tighten or loosen our criterion, but at the cost of increasing the other error. In this bowl, JPADS P(A) has decreased 10% – it is degraded. [Figure: bowl of degraded JPADS outcomes – wrong ~65% of the time]
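The 65% figure follows from the same binomial logic applied to the degraded bowl. A sketch, assuming the degraded P(arrival) is 0.70 (the 10-point drop described above) and the same pass rule of 7 or more hits in 10:

```python
from scipy.stats import binom

n = 10
p_degraded = 0.70        # arrival probability after the 10-point degradation
pass_criterion = 7       # field JPADS on 7 or more hits (i.e., 3 or fewer misses)

# beta = P(we field the system | it is actually degraded)
beta = 1 - binom.cdf(pass_criterion - 1, n, p_degraded)
print(f"False-negative (consumer's) risk: {beta:.1%}")   # about 65%, as cited
```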

18 We seek to balance our chance of errors. Combining, we can trade one error for the other (α for β). We can also increase sample size to decrease our risks in testing. These statements are not opinion – they are mathematical fact, and an inescapable challenge in testing. There are two other ways out … factorial designs and real-valued MOPs. Enough to get it right: Confidence in stating results; Power to find small differences. [Figure: the two bowls – wrong 10% of the time vs. wrong 65% of the time – JPADS P(A)]

19 A Drum Roll, Please … For α = β = 10% and a 10% degradation in P(A), N = 120! But if we measure miss distance, for the same confidence and power, N = 8.
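A rough normal-approximation check of those two sample sizes: a one-sided test of P(arrival) dropping from 0.8 to 0.7 for the binary case, and – as an illustrative assumption, since the slide does not give the miss-distance spread – a shift of about one standard deviation in mean miss distance for the continuous case:

```python
from scipy.stats import norm

z_alpha = norm.ppf(0.90)   # alpha = 10%
z_beta  = norm.ppf(0.90)   # beta  = 10%

# Binary MOP: detect P(arrival) falling from 0.80 to 0.70
p0, p1 = 0.80, 0.70
n_binary = ((z_alpha * (p0 * (1 - p0))**0.5 + z_beta * (p1 * (1 - p1))**0.5) / (p0 - p1))**2
print(f"Pass/fail response: n ~ {n_binary:.0f}")          # ~121, the slide's N=120

# Continuous MOP: detect a shift of about one standard deviation in miss distance
shift_in_sigmas = 1.0                                      # illustrative assumption
n_continuous = ((z_alpha + z_beta) / shift_in_sigmas)**2
print(f"Miss-distance response: n ~ {n_continuous:.0f}")  # ~7, in the neighborhood of the slide's N=8
```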

20 Recap – First Challenge. Challenge 1: effect of sample size on errors – Depth of Test. So – it matters how many we do, and it matters what we measure. Now for the 2nd challenge – Breadth of Testing: selecting points to search the employment battlespace.

21 Challenge 2: Breadth – How Do Designed Experiments Solve This? Designed Experiment (n.): purposeful control of the inputs (factors) in such a way as to deduce their relationships (if any) with the outputs (responses). Test: JPADS payload arrival. Inputs (conditions): JPADS Concept A, B, C …; Tgt Sensor (TP, Radar); Payload Type; Platform (C-130, C-17). Outputs (MOPs): hits/misses; RMS trajectory deviation; P(payload damage); miss distance (m). Statistician G.E.P. Box said: "All math models are false … but some are useful. All experiments are designed … most, poorly."

22 Battlespace Conditions for JPADS Case. Systems Engineering question: Does JPADS perform at the required capability level across the planned battlespace? 12 dimensions – obviously a large test envelope … how to search it?

23 Populating the Space – Traditional: OFAT & Cases (change variables together) [Figure: three Mach vs. Altitude plots showing traditional point placement]

24 Populating the Space – DOE [Figure: three Mach vs. Altitude plots – Factorial, Response Surface, and Optimal designs; single points and replicates]

25 More Variables – DOE [Figure: factorials growing from 2-D (Mach × Altitude) to 3-D (Mach × Altitude × Range) to 4-D (adding Weapon type A vs. type B)]

26 Even More Variables [Figure: cube plot of factors A, B, C crossed with levels (–/+) of D, E, F]

27 Efficiencies in Test – Fractions [Figure: the same A–F cube plot with only a fraction of the corners run]
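A small sketch of how such factorial and fractional-factorial point sets are built – here a 2^3 full factorial and a 2^(4-1) half fraction that aliases a fourth factor onto the three-way interaction (standard textbook construction, not specific to any slide):

```python
from itertools import product

# 2^3 full factorial: every combination of low (-1) and high (+1) for A, B, C
full_factorial = list(product([-1, 1], repeat=3))      # 8 runs

# 2^(4-1) half fraction: set D = A*B*C (the design generator), so 4 factors in 8 runs
half_fraction = [(a, b, c, a * b * c) for a, b, c in full_factorial]

for run in half_fraction:
    print(dict(zip("ABCD", run)))
```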

28 We have a wide menu of design choices with DOE: Full Factorials, Fractional Factorials, Response Surface, Optimal Designs, Space Filling [JMP software DOE menu]

29 Problem context guides choice of designs [Chart: axes are # levels per factor needed, number of factors, and constraints/complexity of the surface; design families plotted: Classical Factorials, Fractional Factorial Designs, Response Surface Method Designs, Optimal Designs, Space-Filling Designs]

30 Challenge 2: Choose Points to Search the Relevant Battlespace. Factorial (crossed) designs let us learn more from the same number of assets. We can also use factorials to reduce assets while maintaining confidence and power, or we can combine the two. [Figure: 4 reps × 1 variable, 2 reps × 2 variables, 1 rep × 3 variables, ½ rep × 4 variables – all four designs share the same power and confidence]

31 Challenge 3: What Order? Guarding against Unknown-Unknowns. Randomizing runs protects against unknown background changes within an experimental period (due to Fisher). [Figure: task performance vs. run sequence – a learning effect across easy and hard runs, unrandomized vs. randomized]
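Randomizing the run order is a one-line operation once the design exists. A sketch with a hypothetical eight-run list (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(seed=1)          # seed only so the order is reproducible
runs = [f"run_{i}" for i in range(1, 9)]     # hypothetical 8-run design

randomized_order = rng.permutation(runs)     # guards against learning/drift within the period
print(list(randomized_order))
```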

32 Blocks Protect Against Day-to-Day Variation. Blocking protects against unknown background changes between experimental periods (also due to Fisher). [Figure: detection vs. run sequence across Day 1 and Day 2, large and small targets, and visibility – detection without blocks vs. with blocks]
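Blocking can be sketched the same way: split the runs across days and randomize separately within each day, so day-to-day shifts fall into the block term rather than into the factor effects. A simplified illustration with the same hypothetical run list (a real blocked design would also balance factor levels within each block):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
runs = [f"run_{i}" for i in range(1, 9)]                 # hypothetical 8-run design

day1, day2 = runs[:4], runs[4:]                          # two blocks of four runs
schedule = {"Day 1": list(rng.permutation(day1)),        # randomize within each block
            "Day 2": list(rng.permutation(day2))}
print(schedule)
```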

33 Challenge 4: What Conclusions – Traditional Analysis. Table of cases or scenario settings and findings; graphical case summary; errors (false positive/negative); linking cause and effect: why? [Example table – columns: Sortie, Alt, Mach, MDS, Range, Tgt Aspect, OBA, Tgt Velocity, Target Type, Result; sample rows: 1, 10K, 0.7, F-…, truck, Hit; 1, 10K, 0.9, F-…, bldg, Hit; 2, 20K, 1.1, F-…, tank, Miss]

34 How Factorial Matrices Work – a peek at the linear algebra under the hood. We set the X settings and observe Y; solve for b such that the error (e) is minimized: Y = Xb + e. For a simple 2-level, 2-factor design, y = b0 + bA·A + bB·B + bAB·(A·B), where b0 is the grand mean, A and B are the effects of the variables alone, and AB measures the variables working together.
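A concrete look at that algebra for the simple 2-level, 2-factor design: build the model matrix X with columns for the mean, A, B, and the A·B interaction, then solve for b by least squares. The y values below are made-up numbers purely for illustration:

```python
import numpy as np

# Four runs of a 2x2 factorial in coded units
A = np.array([-1,  1, -1,  1])
B = np.array([-1, -1,  1,  1])
X = np.column_stack([np.ones(4), A, B, A * B])    # columns: mean, A, B, AB

y = np.array([10.0, 14.0, 11.0, 19.0])            # hypothetical responses

b, *_ = np.linalg.lstsq(X, y, rcond=None)         # minimizes the error e = y - Xb
print(dict(zip(["mean", "A", "B", "AB"], b.round(3))))
```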

35 How Factorial Matrices Work II. To solve for the unknown b's, X must be invertible. Factorial design matrices are generally orthogonal, and therefore invertible, by design.
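Continuing the sketch above, orthogonality is easy to verify: for the coded factorial matrix, X-transpose times X is diagonal (here 4 times the identity), so it is trivially invertible and each effect is estimated independently:

```python
import numpy as np

A = np.array([-1,  1, -1,  1])
B = np.array([-1, -1,  1,  1])
X = np.column_stack([np.ones(4), A, B, A * B])

print(X.T @ X)   # 4 * identity: the columns are mutually orthogonal, so X is invertible
```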

36 With DOE, we fit an empirical (or physics-based) model Very simple to fit models of the form above (among others) with polynomial terms and interactions Models fit with ANOVA or multiple regression software Models are easily interpretable in terms of the physics (magnitude and direction of effects) Models very suitable for predict-confirm challenges in regions of unexpected or very nonlinear behavior Run to run noise can be explicitly captured and examined for structure
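Fitting such a model with off-the-shelf regression software is equally short. A sketch using statsmodels' formula interface on a hypothetical replicated two-factor data set; the formula term A * B expands to A, B, and the A:B interaction, and the fit reports coefficient confidence intervals and the run-to-run noise:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical replicated 2x2 factorial in coded units
df = pd.DataFrame({
    "A": [-1, 1, -1, 1, -1, 1, -1, 1],
    "B": [-1, -1, 1, 1, -1, -1, 1, 1],
    "y": [10.2, 14.1, 11.3, 18.8, 9.7, 13.6, 10.9, 19.3],
})

fit = smf.ols("y ~ A * B", data=df).fit()   # main effects plus the A:B interaction
print(fit.params)                           # effect magnitudes and directions
print(fit.conf_int(alpha=0.05))             # 95% confidence intervals
print(f"Residual (run-to-run) std dev: {fit.resid.std(ddof=4):.2f}")
```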

37 Analysis using DOE: CV-22 TF Flight. Process: TF/TA radar performance. Inputs (factors): gross weight, radar measurement noise, airspeed, nacelle, set clearance plane, turn rate, crossing angle, ride mode, terrain type. Outputs (responses): pilot rating, set clearance plane deviation.

38 Analysis Using DOE [Run matrix – columns: Gross Weight, SCP, Turn Rate, Ride, Airspeed, SCP Deviation, Pilot Ratings; the Ride column alternates Medium and Hard across the 20 runs; numeric values omitted]

39 Interpreting Deviation from SCP: speed matters when turning Note that we quantify our degree of uncertainty – the whiskers represent 95% confidence intervals around our performance estimates. An objective analysis – not opinion

40 Pilot Ratings Response Surface Plot

41 Radar Performance Results – SCP Deviation Estimating Equation. Prediction model in coded units (Low = -1, High = +1): Deviation from SCP = b0 + b1·SCP + b2·Turn Rate + b3·Ride + b4·Airspeed + b5·Turn·Ride + b6·Turn·Airspeed (numeric coefficients omitted)

42 Performance Predictions [Table 1 – columns: Name, Setting, Low Level, High Level; rows: SCP, Turn Rate, Ride (setting Hard; levels Medium/Hard), Airspeed. Table 2 – columns: Prediction, 95% PI low, 95% PI high; rows: Deviation from SCP, Pilot Ratings. Numeric values omitted]

43 The Design of Experiments test process is well-defined: Planning (desirable and nuisance factors, desired factors and responses) → Test Matrix (design points) → Results and Analysis (model build) → Discovery, Prediction

44 Caveat – we need good science! We understand operations, aero, mechanics, materials, physics, electromagnetics … To our good science, DOE introduces the Science of Test. Bonus: match faces to names – Ohm, Oppenheimer, Einstein, Maxwell, Pascal, Fisher, Kelvin

45 It applies to our tests: DOE in 50+ operations over 20 years. IR sensor predictions; ballistics 6-DOF initial conditions; wind tunnel fuze characteristics; Camouflaged Target JT&E ($30M); AC-130/105mm gunfire CEP evals; AMRAAM HWIL test facility validation; 60+ ECM development and RWR tests; GWEF Maverick sensor upgrades; 30mm ammo over-age LAT testing; contact lens plastic injection molding; 30mm gun DU/HEI accuracy (A-10C); GWEF ManPad hit-point prediction; AIM-9X simulation validation; Link 16 and VHF/UHF/HF comm tests; TF radar flight control system gain optimization; new FCS software to cut C-17 PIO; AIM-9X + JHMCS tactics development; MAU-169/209 LGB fly-off and eval; characterizing Seek Eagle ejector racks; SFW altimeter false alarm trouble-shooting; TMD safety lanyard flight envelope; penetrator & reactive frag design; F-15C/F-15E Suite 4 + Suite 5 OFPs; PLAID performance characterization; JDAM, LGB weapons accuracy testing; best autonomous seeker algorithm; SAM validation versus flight test; ECM development ground mounts (10s); AGM-130 Improved Data Link HF test; TPS A-G WiFi characterization; MC/EC-130 flare decoy characterization; SAM simulation validation vs. live-fly; targeting pod TLE estimates; Chem CCA process characterization; medical oxygen concentration T&E; multi-MDS Link 16 and Rover video test

46 Three DOE Stories for T&E. Requirements: SDB II build-up SDD shot design. Acquisition: F-15E Suite 4E+ OFP qualification. Test: combining digital-SIL-live simulations. We've selected these from 1000s to show T&E transformation. [Cause-Effect (CNX) diagram: Plan, Ponder, Process, Produce; Manpower, Materials, Methods, Machines, Measurements, Milieu (Environment)]

47 Testing to Diverse Requirements: SDB II Shot Design. Test objective: SPO requests help – are 46 shots the right N? Power analysis – what can we learn? Consider integrated test with AFOTEC. What are the variables? We do not know yet … how can we plan? What management reserve? DOE approach: partition the performance questions – Pacq/reliability + laser + coords + normal mode; consider the total test program – HWIL + captive + live; build 3x custom, right-size designs to meet objectives/risks. Results: binary P(acq) requires N = 200+; demo laser + coords, N = 4 each; prove normal mode, N = 32; a 32-shot factorial screens 4-8 variables to a 0.5 std dev shift from the KPP; 46 shots are too few to check binary values. [Figure: power vs. goal performance shift] Integrate 20 AFOTEC shots for management reserve.

48 Acquisition: F-15E Strike Eagle Suite 4E+ (circa ). Test objectives: qualify a new OFP suite for the Strike Eagle with new radar modes, smart weapons, Link 16, etc.; the test must address dumb weapons, smart weapons, comm, sensors, nav, air-to-air, CAS, interdiction, strike, ferry, refueling …; the Suite 3 test required 600+ sorties. DOE approach: build multiple designs spanning EW and survivability; BVR and WVR air-to-air engagements; smart weapons captive and live; dumb weapons regression; sensor performance (SAR and TP). Results: the vast majority of capabilities passed; wrung out sensors and weapons deliveries; dramatic reductions in the usual trials while spanning many more test points; moderate success teaming with Boeing on design points (maturing). Source: F-15E Secure SATCOM Test, Ms. Cynthia Zessin, Gregory Hutto, 2007, F-15 OFP CTF, 53d Wing / 46 Test Wing, Eglin AFB, Florida

49 Strike Weapons Delivery – a Scenario Design Improved with DOE

50 Case: Integration of Sim-HWIL-Captive-Live Fire Events. Test objective: most test programs face this – AIM-9X, AMRAAM, JSF, SDB II, etc.; multiple simulations of reality with increasing credibility but increasing cost; multiple test conditions to screen for those most vital to performance. How to strap together these simulations with prediction and validation? DOE approach: in digital sims, screen variables with fractional factorials and predict performance; in HWIL, confirm the digital prediction (validate the model), further screen 8-12 factors, and predict; in live fly, confirm the prediction (validate) and test the 3-5 most vital variables. Prediction discrepancies offer a chance to improve the sims. Results: approach successfully used in 53d Wing EW Group – SIL labs at Eglin/PRIMES → HWIL on MSTE ground mounts → live fly (MSTE/NTTR) for jammers and receivers. Trimmed live-fly sorties from … to … (typical) today. AIM-9X, AMRAAM, ATIRCM: 90% sim reduction. [Figure: 1000s of digital mod/sim runs → 100s of HWIL or captive runs → 10s of live shots; predict then validate at each step; factors screened narrow from many to 8-12 to 3-5; cost ($) rises as credibility rises]

51 A Strategy to be the Best … Using Design of Experiments. Inform leadership of statistical thinking for test; adopt the most powerful test strategy (DOE); train and mentor the total team (combination of AFIT, Center, and university courses); revise AF acquisition policy and procedures; share these test improvements. Adopting DOE: 53d Wing & AFOTEC; 18 FTS (AFSOC); RAF AWC; DoD OTAs: DOT&E, AFOTEC, ATEC, OPTEVFOR & MCOTEA; AFFTC & TPS; 46 TW & AAC; AEDC; Targets; HQ AFMC; ASC & ESC; Service DT&E

52 53d Wing Policy Model: Test Deeply & Broadly with Power & Confidence. From the 53d Wing Test Manager's Handbook*: "While this [list of test strategies] is not an all-inclusive list, these are well suited to operational testing. The test design policy in the 53d Wing supplement to AFI mandates that we achieve confidence and power across a broad range of combat conditions. After a thorough examination of alternatives, the DOE methodology using factorial designs should be used whenever possible to meet the intent of this policy." *Original Wing Commander policy, April 2002

53 March 2009: OTA Commanders Endorse DOE for both OT & DT. Experimental design further provides a valuable tool to identify and mitigate risk in all test activities. It offers a framework from which test agencies may make well-informed decisions on resource allocation and scope of testing required for an adequate test. A DOE-based test approach will not necessarily reduce the scope of resources for adequate testing. Successful use of DOE will require a cadre of personnel within each OTA organization with the professional knowledge and expertise in applying these methodologies to military test activities. Utilizing the discipline of DOE in all phases of program testing from initial developmental efforts through initial and follow-on operational test endeavors affords the opportunity for rigorous systematic improvement in test processes.

54 Nov 2008: AAC Endorses DOE for RDT&E Systems Engineering [AAC Standard Systems Engineering Processes and Practices]

55 July 2009: 46 TW Adopts DOE as default method of test

56 We Train the Total Test Team … but first, our Leaders! OA/TE Practitioner Series: 10 sessions, 1 week each; reading, lecture, seatwork. Basic Statistics Review (1 week): random variables and distributions; descriptive and inferential statistics; thorough treatment of the t test. Applied DOE I and II (1 week each): advanced undergraduate treatment; graduates know both how and why. Journeymen testers. Leadership Series: DOE Orientation (1 hour); DOE for Leaders (half day); Introduction to Designed Experiments (PMs – 2 days)

57 Ongoing Monday Continuation Training. Weekly seminars online; topics wide-ranging: new methods, new applications, problem decomposition, analysis challenges, reviewing the basics, case studies, advanced techniques. DoD web conferencing with a Q&A session following. Mondays, CR 22 Bldg 1, or via DCO at your desk

58 DOE Initiative Status. Gain AAC Commander endorsement and policy announcement; train and align leadership to support the initiative; commence training technical testers; launch multiple quick-win pilot projects to show applicability; communicate the change at every opportunity; gain AAC leadership endorsement and align client SPOs; influence hardware contractors with demonstrations and suitable contract language; keep pushing the wheel to build momentum; confront nay-sayers and murmurers; institutionalize with promotions, policies, Wing structures and practices; roll out to the USAF at large and our Army & Navy brethren

59 But Why Should Systems Engineers Adopt DOE? Why: It's the scientific, structured, objective way to build better ops tests. DOE is faster, less expensive, and more informative than alternative methods. It uniquely answers the deep-and-broad challenge: confidence and power across a broad battlespace. Our less-experienced testers can reliably succeed – a better chance of getting it right! Why not: If DOE is so good, why haven't I heard of it before? Aren't these ideas new and unproven? "To call in the statistician after the experiment is... asking him to perform a postmortem examination: he may be able to say what the experiment died of." – Address to Indian Statistical Congress, DOE founder Sir Ronald A. Fisher

60 DOE: The Science of Test. Questions? What's Your Method of Test?

61 Backups – More Cases

62 Where we intend to go with Design of Experiments. Design of Experiments can aid acquisition in many areas: M&S efforts (AoA, flyout predictions, etc.); model verification (live-fire, DT, OT loop); manufacturing process improvements; reliability improvements; operational effectiveness monitoring. Make DOE the default for shaping analysis and test programs. Like any engineering specialty, DOE takes time to educate and mature in use. Over the next 3-5 years, make DOE the way we do test – policy. [Roadmap: initial awareness → hire/grad-educate industrial/OR types → begin technical training → mentor practitioners → mature DOE in all programs → adjust PDs, OIs, training plans]

63 Case 1 Problem Binary data – tendency to score hit/miss or kill/no kill Bad juju – poor statistical power, challenge to normal theory Binary leads to bimodal distributions

64 Normal theory fit is inadequate Typically get surfaces removed from responses – poor fit, poor predictions Also get predicted responses outside 0-1: poor credibility with clients

65 Solution is GLM – with a binary response (error). We call this the logit link function: ln(p/(1-p)) = b0 + b1x1 + … The logit model restricts predictions to (0, 1). Commonly used in medicine, biology, and the social sciences.
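A minimal sketch of fitting such a logit-link GLM to hit/miss data with statsmodels; the factor names and values below are hypothetical, but any binary response column works the same way:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical hit/miss trials over two factors
df = pd.DataFrame({
    "Visibility": [1, 1, 1, 3, 3, 3, 5, 5, 5, 3],
    "Range_kft":  [2, 4, 6, 2, 4, 6, 2, 4, 6, 4],
    "hit":        [1, 0, 0, 1, 1, 0, 1, 1, 1, 0],
})

# Logit link: log(p / (1 - p)) = b0 + b1*Visibility + b2*Range_kft
fit = smf.glm("hit ~ Visibility + Range_kft", data=df,
              family=sm.families.Binomial()).fit()
print(fit.params)
print(fit.predict(df).round(2))   # predicted probabilities stay inside (0, 1)
```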

66 Logit Solution Model Works Well. With 10 replicates – a good model; with only 1 replicate (90% of the data discarded) – the same model! [Two tables of logit model terms – Estimate, L-R ChiSquare, Prob>ChiSq, Lower CL, Upper CL – for Visibility, Aspect, TGT_Range, their interactions, and quadratic terms; numeric values omitted]

67 Problem – Blast Arena Instrumentation Characterization Do the blast measuring devices read the same at various ranges/angles? Fractional Factorial Graphic of design geometry in upper right

68 Solution – unequal measurement variance - methods Closer to weapon, measurement variability is large Note confirmation with three methods Normal theory confidence intervals Rank Transform confidence intervals Bootstrap confidence intervals

69 Problem – Nonlinear problem with restricted resources Machine gun test – theory says error will increase with the square of range Picture is not typical, but I love Google

70 Solution – efficient nonlinear (quadratic) design - CCD Invented by Box in 1950s 2D and 3D looks – factorial augmented with center points and axial points – fit quadratics
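The CCD point set itself is easy to write down: the 2-level factorial corners, plus axial (star) points at ±α on each axis, plus center points. A sketch for two factors in coded units, using α = √2 as one common rotatable choice (not taken from the slide):

```python
from itertools import product
import math

alpha = math.sqrt(2)                       # rotatable axial distance for 2 factors

corners = list(product([-1.0, 1.0], repeat=2))                 # 2^2 factorial corners
axials  = [(alpha, 0), (-alpha, 0), (0, alpha), (0, -alpha)]   # star points on each axis
centers = [(0.0, 0.0)] * 3                                     # replicated center runs

ccd = corners + axials + centers           # 11 runs, enough to fit a full quadratic
for x1, x2 in ccd:
    print(f"x1 = {x1:+.3f}, x2 = {x2:+.3f}")
```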

71 Power is pretty good with a 4-factor, 30-run design. Power at the 5% level to detect signal/noise ratios (basis std. dev. = 1.0):
Term              | 0.5 Std. Dev. | 1 Std. Dev. | 2 Std. Dev.
Linear Effects    |       –       |    51.0%    |    97.7%
Interactions      |       –       |    46.5%    |    96.2%
Quadratic Effects |       –       |    32.6%    |    85.3%

72 Prediction Error Flat over Design Volume. This chart (a recent innovation in the literature) measures prediction error – mostly used as a comparative tool to trade off designs. [Design-Expert from Stat-Ease]

73 Summary. Remarkable savings are possible, especially in M&S and HWIL. Questions are scientifically answerable (and debatable). We have run into 0.00 problems where we cannot apply the principles. The point is that DOE is widely applicable, yields excellent solutions, and often leads to savings – and we know why, and we know N. To our good science – add the Science of Test: DOE. [Figure: logit fit to a binary problem]

74 Case: Secure SATCOM for F-15E Strike Eagle. Test objective: to achieve secure A-G comms in Iraq and Afghanistan, install the ARC-210 in fighters; characterize P(comms) across geometry, range, frequency, radios, bands, modes. Other ARC-210 fighter installs show problems – caution needed here! DOE approach: use an embedded face-centered CCD design (created for the X-31 project, last slide); gives 5 levels of the geometric variables across radios and modes; speak 5 randomly generated words and score the number correct; each transmit/receive is a test event – 4 missions planned. Results: for the higher-power radio – all good; for the lower-power radio, range problems. Despite the urgent timeframe and canceled missions, enough proof to field. Source: F-15E Secure SATCOM Test, Ms. Cynthia Zessin, Gregory Hutto, 2007, F-15 OFP CTF, 53d Wing / 46 Test Wing, Eglin AFB, Florida

75 Case: GWEF Large Aircraft IR Hit Point Prediction. Test objective: IR man-portable SAMs pose a threat to large aircraft in the current AOR; the Dept. of Homeland Security desired hit-point prediction for a range of threats to assess vulnerabilities; the solution was a HWIL study at the GWEF (ongoing). DOE approach: aspect (degrees) – 7 each; elevation – low, mid, high, 3 each; profiles – takeoff, landing, 2 each; altitudes – 800, 1200, 2 each; including threat – 588 cases; with the usual reps, nearly 10,000 runs; DOE controls replication to the minimum needed. Results: revealed unexpected hit-point behavior; process highly interactive (a rare 4-way interaction); process quite nonlinear with 3rd-order curves; reduced the runs required by 80% over the past; possible reduction of another order of magnitude. [Figure: IR missile, C-5 damage]

76 Case: Reduce F-16 Ventral Fin Fatigue from Targeting Pod. Test objective: blah. DOE approach: many alternate designs for this 5-dimensional space ( , , Mach, alt, jets on/off). Results: a new design invented circa 2005 capable of efficient flight-envelope search; suitable for loads, flutter, integration, acoustic, vibration – the full range of flight test. Experimental designs can increase knowledge while dramatically decreasing required runs; a full DOE toolbox enables more flexible testing. [Figure: ventral fin; expert chose 162 test points vs. face-centered CCD and embedded F-CCD]

77 Case: Ejector Rack Forces for MAU-12 Jettison with SDB. Test objective: AF Seek Eagle must characterize the forces to jettison the rack with from 0 to 4 GBU-39s remaining; forces feed the simulation for safe separation; desire a robust test across multiple fleet conditions; stores certification depends on the findings. DOE approach: multiple factors – rack S/N, temperature, orifice, SDB load-out, cart lot, etc.; AFSEO used an innovative bootstrap technique to choose 15 racks to characterize rack-to-rack variation; final designs in work, but promise an 80% reduction from the last characterization in FY99. Results: data-mined the '99 force data with regression; modeled the effects of temperature, rack, cg, etc.; cg effect insignificant; store weight, orifice, cart lot, and temperature all significant

78 Case: AIM-9X Blk II Simulation Pk Predictions. Test objective: validate simulation-based acquisition predictions with live shots to characterize performance against the KPP; nearly 20 variables to consider – target, countermeasures, background, velocity, season, range, angles … Effort: 520 cases × 10 reps × 1.5 hr/rep = 7,800 CPU-hours = 0.89 CPU-years; Block II is more complex still! DOE approach: data-mined the actual 520 set to ID the critical parameters affecting Pk – many conditions have no detectable effect; shot grid – greatly reduce the grid granularity; replicates – 1-3 reps per shot maximum. Result: DOE reduces the simulation CPU workload by 50-90% while improving live-shot predictions. Next steps: ball in Raytheon's court to learn/apply DOE. [Figures: notional 520 grid vs. notional DOE grid]

79 Case: AIM-9X Blk II Simulation Pk Predictions – Update. Test objective: validate simulation-based Pk. The 520 set: 520 × 10 reps × 1.5 hr/rep = 7,800 CPU-hours = 0.89 CPU-years. Block II contemplates a 1,380-case set; at 1 hour per replicate of each case, 1,380 × 10 = 13,800 hours = 1.6 CPU-years. DOE approach: we've shown that many conditions have no effect, 1-3 reps are the maximum needed, and the shot grid is 10x too fine. Raytheon response – hire a PhD OR-type to lead the DOE effort. Projection – not 13,800 hours, but 1,000 hours max, saving 93% of resources. DOE meeting the week of 2 June. Next steps: Raytheon responds to learn/apply DOE. [Figures: notional 520 grid vs. notional DOE grid]

80 Case: CFD for NASA CEV. Test objective: select geometries to minimize total drag in ascent to orbit for NASA's new Crew Exploration Vehicle (CEV); experts identified 7 geometric factors to explore, including nose shape; down-selected parameters were further refined in subsequent wind tunnel experiments. DOE approach: two designs – with 5 and 7 factors to vary; covered elliptic and conic noses to understand factor contributions; both designs were first-order polynomials with the ability to detect nonlinearities; designs also included additional confirmation points to confirm the empirical math model in the test envelope. Results: the original CFD study envisioned 1,556 runs; DOE optimized the parameters in 84 runs – a 95% reduction! ID'd the key interaction driving drag. Source: A Parametric Geometry CFD Study Utilizing DOE, Ray D. Rhew, Peter A. Parker, NASA Langley Research Center, AIAA

81 Case: Notional DOE for Robust Engineering Instrumentation Design Test Objective: Design data link recording/telemetry design elements with battery backup Choose battery technology and antenna robust to noise factors Environmental factors include Link-16 Load (msg per minute) and temperature Optimize cost, cubes, memory life, power out DOE Approach: Set up factorial design to vary multiple design and noise parameters simultaneously DOE can sort out which design choices and environmental factors matter – how to build a robust engineering design In this case (Montgomery base data) one battery technology outshone others over temperature – Link 16 messages do not matter

82 Case: Wind Tunnel X-31 AoA. Test objective: model nonlinear aero forces in moderate and high AoA and yaw conditions for the X-31; characterize the effects of three control surfaces – elevon, canard, rudder; develop classes of experimental designs suitable for efficiently modeling 3rd-order behavior as well as interactions. DOE approach: developed six alternate response surface designs; compared and contrasted the designs' matrix properties, including orthogonality, predictive variance, power, and confidence; ran the designs, built math models, made predictions, and confirmed. Results: usual wind tunnel trials encompass far more points – this was 104; revealed the process is nonlinear (3rd order), complex, and interactive; predictions accurate to < 1%. Source: A High Performance Aircraft Wind Tunnel Test using Response Surface Methodologies, Drew Landman, NASA Langley Research Center; James Simpson, Florida State University, AIAA. [Figure: coefficient of lift (CL) vs. AoA]

83 Case: SFW Fuze Harvesting LAT. Test problem/objective: harvest GFE fuzes from excess cluster/chem weapons; Textron reconditions the fuzes and installs them in SFW; a sequential 3+3 LAT was proposed to pass/reject each lot – is this an acceptable LAT? Recall: confidence = P(accept good lot); power = P(reject bad lot). DOE/SPC approach: remember – binary variables are weak, and the test destroys the fuze; establish the LAT performance: confidence = 75%, power = 50%. Q: What n is required then? A: destroy 20+ of 25 fuzes. Results: shift focus from pass/fail to fuze function time; with 20 kft delivery, function time >> 250 ms; the required n is now only 4 or 5 fuzes

84 Case: DOE for Troubleshooting & False Alarms (& software). Test problem/objective: 782 TS has malf photos during weapons tests – what causes it? Possibilities include camera, laptop, config file, and operator – 4 levels each, 256 combinations – how to test? Remember – binary variables are weak; but the effect is also huge: 0 or 1, not 0.8 to 0.9. DOE/SPC approach: we can design several ways – a 4^4 design is 256 combos (not impossible); more likely, screen with a 2^4 design (16 combos); or use an optimal or space-filling design. Results: TE Marshall solves it with a most-likely screen; solution: a laptop × config file interaction. Others: MWS and altimeter false alarms, software. [Figures: JMP Latin Hypercube designs, n=8 and n=10; JMP Sphere-Packing designs, n=8 and n=10]

85 Case: DOE Characterizing Onboard Squib Ignition on Test Track. Test problem/objective: the track wants to test the ability to initiate several ignitors/squibs both on- and off-board sleds; what is the firing delay – average and variance? A minimum of three squibs/ignitors; on- and off-board the sled, voltages, number, and delays are variables; the standard approach is shown on the next slide. DOE/SPC approach: we can design several ways – choose to split the design on- and off-board; can choose a nested or confounded design. An objective search reveals two objectives: characterize the firing delay for the signal, and characterize the ignition delay from the capacitor. Build two designs to characterize with dummy/actual loads – dummy resistors for signal delay, actual (scarce) ordnance for ignition delay. Results: still TBD – late-summer execution; also want to characterize gauge reliability using harvested data; improved power across the test space while examining more combinations; sniffed out two objectives with the process – two test matrices; power improved by measuring each firing event

86 Comparison of Designs As commonly found – more combos (4x), improved power (4x), new ideas and objectives

87 Case: Arena 500 lb Blast Characterization vs. Humans. Test problem/objective: 780 TS live-fire testers want to characterize the blast energy waveform outside and inside a Toyota HiLux; map Pk as a function of range, aspect, weapon orientation, gauge, vehicle orientation, glass; how repeatable are the measurements bomb-to-bomb and gauge-to-gauge? DOE/SPC approach: blast test responses – peak pressure, rise time, fall time, slopes (for JMEM models); directly test the blast symmetry that is usually assumed; critical information on gauge repeatability; the test is known as a split plot, as each detonation restricts randomization; consequence: less information on bomb-to-bomb variation (but of lesser interest). Results: brilliant design from Capt Stults, Cameron, with assists from Higdon/Schroeder/Kitto; great collaborative, creative test design

88 Case: Computer Code Testing. Test problem/objective: OK – DOE works for CEP, TLE, nav error, penetrators, blast testing, SFW fuze function … but what about testing computer codes? Mission planning: load-outs, initial fuel, route length, refueling, headwinds, runway length, density altitude, threats, waypoints, ACO, package, targets, deliveries/LARS … how do we ascertain the package works? DOE approach: we can design several ways – code verification (not too much contribution); physics or SW code (space-filling designs); SW performance – errors, time, success/fail, degree (classical DOE designs). Results: an evolving area – NPS, AFIT, Bell Labs; several space-filling design algorithms; aircraft OFP testing is a proof case: weapons, comm, sensors work fine; more to do: mission planning, C4I, IW, etc. Proposal – try some designs and get some help from active units and universities (AFIT, Univ of Fl, Va Tech, Army RDECOM, ASU, Ga Tech, Bell Labs …). [Table: software classes – Mission Planning, CFD, CWDS, TBMCS, JAWS/JMEM, campaign models, F-15E Suite 5E OFP, UAI, IOW, NetWar, cyber defense SW, whitebox/blackbox – mapped to code verification, performance, and physics, and to space-filling vs. classical designs]

89 Some Space-Filling 3D Designs: a 10-point 10x10x4 sphere-packing design; a 50-point 10x10x4 Latin hypercube design; a 30-point 10x10x4 uniform design; a 27-point 3x3x3 classical DOE factorial design
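Space-filling point sets like these can be generated directly. A sketch using scipy's quasi-Monte Carlo module to draw a 10-point Latin hypercube over a 10 x 10 x 4 box (the dimensions named above; scipy 1.7 or later assumed):

```python
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)       # 3 factors
unit_points = sampler.random(n=10)              # 10 points in the unit cube

# Scale the unit-cube sample to a 10 x 10 x 4 design region
points = qmc.scale(unit_points, l_bounds=[0, 0, 0], u_bounds=[10, 10, 4])
print(points.round(2))
```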

90 CV-22 IDWS (Interim Defensive Weapon System). Program description: evaluate baseline weapons fire accuracy for the CV-22; flight test Jan 09. DOE approach: rescoped the current phase of testing; completed process flow diagrams; completed the test matrix; developed the run matrix – initial proposal = 108 runs, factorial with center points = 92 runs, FCD with center points = 40 runs, FCD with 2 center points = 36 runs, FCD in 2 blocks = 30 runs. Issues/concerns: accelerated testing; primary performance issues: vibration, performance, and flying qualities; full battlespace tested, except climbs/dives excluded and 400-round burst (gun heating effect) excluded. 31 Dec 08. For Official Use Only

91 MC-130H BASS (Bleed Air System Study). Program description: collect bleed air system performance data on the MC-130H Combat Talon II aircraft; flight test Jan 09. DOE approach: completed the process description and decomposition, leading to fewer, better-defined test factors; augmented the LMAS test matrix of 114 required cases; split-split-plot design of 209 runs – increases power and confidence, decreases confounding. Issues/concerns: LMAS-proposed test matrix; inconsistent test objective – overall purpose. 31 Dec 08

92 SABIR/ATACS. SABIR: Special Airborne Mission Installation & Response; ATACS: Advanced Tactical Airborne C4ISR System. Program description – test objectives: maintain aircraft pressurization; determine vibration; functional test at max loads; determine aerodynamic loading; performance and flying qualities; EMC/EMI; scope provided by LMAS Palmdale. DOE approach: government-proposed matrix ~720 runs; DOE-proposed approach = embedded FCD – a factor of 5 reduction. Issues/concerns: acceptable confidence, power, accuracy, precision TBD; flight test spring 2009. 31 Dec 08

93 AFSAT (Air Force Subscale Aerial Target). Program description: collect IR signature data on the AFSAT for future augmentation and threat representation; flight test Jan-Feb 09. DOE approach: used a minimal factorial design; the final design is a full factorial in some factors augmented with an auxiliary design; minimized the risk of missing key data points and protected against background variation with a series of baseline cases. Issues/concerns: limited funding; full orthogonal design limited by test geometry; limited time of flight at test conditions; safety concerns – flying in front of an unmanned vehicle. 1 Jan 09. For Official Use Only

94 LASI HPA – Large Aircraft Survivability Initiative (LASI) Hit-Point Analysis (HPA). Program description: high-fidelity models of large transport aircraft; supports susceptibility reduction studies; supports vulnerability reduction studies. DOE approach: harvested historic data sets for knowledge; identified intermediate responses with richer information; identified run savings for CM testing. Issues/concerns: bi-modal data distribution; near-discontinuities in responses; high-order interactions and effects. 31 Dec 08

95 Anubis MAV Fleeting Targets QRF. Micro-munitions Air Vehicle proof of concept and lethality assessment – objective: successful target engagement; verify lethality. Original method of test: factor effects confounded; imbalanced test conditions; susceptible to background noise; no measure of wind effects. DOE was used to fix the original problems, use 8 instead of 10 prototypes, and insert a pause after the first test to make midcourse corrections.
