Presentation on theme: "FAA V&V Conference 20 Oct 2011 Jeff Olinger Technical Director"— Presentation transcript:
Slide 1: AFOTEC Case Study: F-22 Air Combat Simulation (ACS) Verification and Validation (V&V)
FAA V&V Conference, 20 Oct 2011
Jeff Olinger, Technical Director
Slide 3: AFOTEC Mission
AFOTEC tests and evaluates new war-fighting capabilities in operationally realistic environments, influencing and informing national resource decisions.
- Not all Air Force programs: ACAT I, ACAT II, and programs on DOT&E oversight
- Realistic battlespace: as realistic as possible and practical, using live, virtual, and constructive simulation as required
- Fact-based (or knowledge-based), decision-quality information for decision makers
- Effectiveness: degree of mission accomplishment when used by representative personnel
- Suitability: degree to which a system can be placed in field use considering availability, compatibility, transportability, interoperability, reliability, maintainability, safety, and training
- Mission capability: additionally, AFOTEC determines the overall mission capability of the system. This includes an evaluation of capabilities provided, missions supported, operational impacts, and decisions supported: the overall degree of mission accomplishment considering operational cost, aspects of the mission, and operational conditions (Fully Mission Capable, Mission Capable, Partially Mission Capable, Not Mission Capable, and Not Determined).
Slide 4: Scope & Responsibilities
- Independent operational test agency for the Air Force; reports directly to CSAF
- Operationally tests all Acquisition Category (ACAT) I, II, and oversight programs
- Required by USC Title 10; guidance and direction from the DoD 5000 series
- ACAT I: > $365M RDT&E or > $2.19B procurement
- ACAT II: > $140M RDT&E or > $660M procurement

Notes: AFOTEC answers only to CSAF; the AFOTEC commander reports directly to CSAF. AFOTEC is part of the acquisition community, but independent by law. ACAT I programs are major defense acquisition programs (MDAPs) requiring eventual expenditure of more than $365 million for RDT&E, or more than $2.19 billion for procurement. ACAT II programs are major systems requiring eventual expenditure of more than $140 million for RDT&E, or more than $660 million for procurement. Placement on the OSD DOT&E oversight list considers: acquisition category level; potential for becoming an acquisition program (such as an Advanced Concept Technology Demonstration project or pre-MDAP); stage of development or production; whether the program is subject to DAES reporting; congressional and DoD interest; programmatic risk (cost, schedule, performance); past history of the developmental command with other programs; relationship with other systems as part of a system of systems; and technical complexity of the system. The NSS process was rescinded in March 2009 by USD(AT&L); such programs now follow the DoD 5000 series with waivers as approved. Space updates to DoD have been submitted. AFOTEC is organizationally independent, not a separate "black hat" organization; it works with users but with no influence.
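The ACAT dollar figures above amount to a simple classification rule. The following is an illustrative sketch only: the function and constant names are invented, and the figures are the then-year thresholds quoted in the briefing (in millions).

```python
# Then-year ACAT thresholds from the slide, in $M. Names are hypothetical.
ACAT_I_RDTE_M = 365       # ACAT I: RDT&E > $365M ...
ACAT_I_PROC_M = 2190      # ... or procurement > $2.19B
ACAT_II_RDTE_M = 140      # ACAT II: RDT&E > $140M ...
ACAT_II_PROC_M = 660      # ... or procurement > $660M

def acat_category(rdte_m: float, procurement_m: float) -> str:
    """Classify a program by the slide's ACAT dollar thresholds."""
    if rdte_m > ACAT_I_RDTE_M or procurement_m > ACAT_I_PROC_M:
        return "ACAT I"
    if rdte_m > ACAT_II_RDTE_M or procurement_m > ACAT_II_PROC_M:
        return "ACAT II"
    return "below ACAT II thresholds"

print(acat_category(400, 0))    # ACAT I (RDT&E exceeds $365M)
print(acat_category(150, 500))  # ACAT II
```

Either criterion (RDT&E or procurement) alone is sufficient to reach a category, which is why the checks are joined with `or`.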
Slide 5: Transparent Planning and Operations: Roles & Missions
- Operational testers test the "as is" system; developmental testers test the "to be" system
- Government developmental test exists to insert user inputs early in the design and development cycle
- Contractual agreements require trained government DT professionals as partners with industry; otherwise, unconstrained cost growth and delays in fielding can occur
Slide 6: Roles and Responsibilities
DOT&E:
- Issues DoD OT&E policy and procedures
- Approves operational test plans for test adequacy
- Reviews and analyzes results of OT&E for MDAPs
- Provides independent assessments to SecDef, USD(AT&L), and Congress
AFOTEC:
- Initial Operational Test & Evaluation (IOT&E), Qualification Operational Test & Evaluation (QOT&E), Follow-on Operational Test & Evaluation (FOT&E), Operational Utility Evaluation (OUE), Operational Assessment (OA)
MAJCOMs:
- Force Development Evaluation, Tactics Development & Evaluation, Weapons System Evaluation Program, OUE, OA

Notes: The Director, Operational Test & Evaluation (DOT&E) is the principal staff assistant and senior advisor to the Secretary of Defense on operational test and evaluation (OT&E) in the Department of Defense. DOT&E is responsible for issuing DoD OT&E policy and procedures; reviewing and analyzing the results of OT&E conducted for each major DoD acquisition program; providing independent assessments to SecDef, the Under Secretary of Defense for Acquisition, Technology and Logistics (USD(AT&L)), and Congress; making budgetary and financial recommendations to the SecDef regarding OT&E; and providing oversight to ensure OT&E for major DoD acquisition programs is adequate to confirm the operational effectiveness and suitability of the defense system in combat use.

According to chapter 2, "evaluations" collect, analyze, and report data against stated criteria with a high degree of analytical rigor and are used to support full-rate production (FRP) or fielding decisions. "Assessments" usually collect and analyze data with less analytical rigor, need not report against stated criteria, and cannot be the sole source of T&E data for FRP or fielding decisions.

Initial Operational Test and Evaluation (IOT&E) is conducted only by the Air Force Operational Test and Evaluation Center (AFOTEC). AFOTEC determines the operational effectiveness and suitability of the items under test using production or production-representative articles with stabilized performance and operationally representative personnel. Additionally, AFOTEC resolves the mission capability of the system. Tests are conducted under operational conditions, including combat mission scenarios that are as operationally realistic as possible. IOT&E determines whether operational requirements and critical operational issues (COIs) have been satisfied and assesses system impacts to peacetime and combat operations. A dedicated phase of IOT&E is required for new ACAT I and II programs according to Title 10 §2399. Dedicated IOT&E is also required for all OSD OT&E oversight programs according to DODI. The determination of the appropriate types of operational testing for subsequent modifications and upgrades, as well as applicability to other types of programs, is addressed according to paragraph 4.6.

Qualification Operational Test and Evaluation (QOT&E) is a tailored type of IOT&E conducted only by AFOTEC. It is used to evaluate military-unique portions and applications of COTS, NDI, and GFE for military use in an operational environment when little or no government-funded R&D takes place. PMs cannot disregard T&E of COTS, NDI, and GFE simply because these items came from pre-established sources. QOT&E supports the same kinds of decisions as IOT&E. See paragraph 5.15 for more information on COTS, NDI, and GFE.

Follow-on Operational Test and Evaluation (FOT&E) is, by definition, the continuation of OT&E after IOT&E or QOT&E and is conducted only by AFOTEC. It answers specific questions about unresolved COIs and test issues; verifies the resolution of deficiencies or shortfalls determined to have substantial or severe impacts on mission operations; or completes T&E of those areas not finished during OT&E. AFOTEC OT&E reports will document known requirements for FOT&E. More than one FOT&E may be required.

Multi-Service Operational Test and Evaluation (MOT&E) can be IOT&E, QOT&E, or FOT&E when two or more military Services are involved. It can also be a multi-Service FDE if a MAJCOM is the lead test organization. See the Memorandum of Agreement (MOA) on Multi-Service Operational Test and Evaluation (MOT&E) and Operational Suitability and Definitions, and paragraphs 4.7, 4.8, and 7.9. If MAJCOMs are involved in multi-Service testing without AFOTEC, they should use this MOA as a guide.
Slide 7: V&V Foundation
Different V&V requirements for different applications:
- Technical documentation
- Data sources
- Software
- Test capabilities
Common elements:
- Purpose and process description
- Documentation standards
- Employ agent-based activities to support accreditation
- Criteria to assess risk of use

Notes: By way of introduction to Air Force V&V, we need to recognize that different applications have different V&V requirements. What is done for technical documentation may not be as robust as software V&V, and software V&V resulting from application of the systems engineering "V-model" may be completely different from V&V processes for test capabilities, even though the test capability is software intensive, as was the case for the F-22 ACS. There are some common elements, however. The "why" and "how" of V&V is important for decision-making purposes. Policy and standards exist in the Defense Department to document V&V activities; typically, the standards are tailored for the specific V&V application. Most if not all V&V applications employ agent-based activities to accomplish the effort, with the agents accountable for producing the V&V plan(s), executing the V&V processes, and producing the V&V report(s). Lastly, all V&V efforts establish criteria to assess the risk of using the application for a specific purpose; these criteria reflect the amount of risk acceptable to the decision maker.
Slide 8: Air Force VV&A Process (flattened process diagram, extracted from AFI Fig 1; recoverable flow below)
- Requirement review: a Technical Review Working Group (TRWG), comprising the accreditation agent, V&V agent(s), model manager, domain/subject matter experts, and other experts (as required), reviews the V&V plan produced by the V&V agent(s) and identifies any additional V&V requirements
- Implementation: the V&V agent(s) implement the plan, covering model development, upgrades, and any V&V backlog; the TRWG reviews the V&V final report
- Accreditation: an accreditation report is produced, and the accreditor accredits the M&S
Slide 9: ACS Fundamental Requirements
- Installed system performance: production-representative system
- Software-in-the-loop: actual software complexity and limitations; realistic loading on sensor allocation/timelines
- Man-in-the-loop: man-machine interface critical to overall evaluation; thinking/reactive adversary for realistic operations
- Seamless mission flow and realistic relationship between operational tasks
Slide 10: ACS Use in F-22 IOT&E
- Specifically created to support the operational effectiveness evaluation during F-22 Engineering & Manufacturing Development (EMD)
- Simulation requirements for IOT&E developed early in the EMD program
- Extensive usage during IOT&E preparation and execution: pilot training and test development exercises, culminating in "Test-the-Test" missions
- Formal verification and validation (V&V); accreditation by AFOTEC
- 5 weeks / 152 trials during the formal evaluation
Slide 11: ACS Description
The ACS is a high-fidelity, man-in-the-loop (MITL) simulation that AFOTEC uses to evaluate the effectiveness of the F-22 in combat scenarios that cannot be simulated on the open-air range. The ACS simulates many systems, each of which consists of several technical subsystems. Although the ACS is a simulation, important elements that are key portions of aerial engagements are real: live operators man foreground blue and red aircraft, a blue Airborne Warning and Control System (AWACS) controller position, and key posts in the red IADS. The F-22 OFP is loaded and operated as it is in the aircraft. All systems interact with each other (weapon attacks) through emissions and transmissions produced within the engagement environment (infrared [IR], radio frequency [RF], and visual).

The centerpiece of the ACS consists of manned F-22 cockpits surrounded by backlit geodesic panels that simulate the pilot's full field of view. In addition, the ACS has manned interactive control stations (MICSs). MICSs use large, high-definition graphics terminals to provide switched forward/rear out-the-window views and depictions of aircraft controls and displays. Each MICS also provides hands-on-throttle-and-stick (HOTAS) controls to its pilots. There are stations designed for live red GCI and air defense operations center controllers and a live blue AWACS controller.

In addition to the manned aircraft and controller stations, the ACS can generate background aircraft taking part in peripheral engagements with the F-22s.
Slide 12: ACS Capabilities
The ACS can simulate different threat air-superiority fighters, blue fighters, other airborne platforms, and ground-based players. These other players include ground-attack and airborne aircraft with radars, a variety of blue and red air-to-air missiles, and an IADS laydown including early warning (EW) and ground control intercept (GCI) radars, surface-to-air missile (SAM) sites, and SAMs.
Slide 13: V&V Requirements
- Cost: contractor proprietary
- Schedule: 3 years; focus on reuse and re-hosting of existing software/models
- Performance: the 10% rule, applied at the functional, operating, and fidelity levels

Notes: AFOTEC was involved early in the ACS requirements definition and development process to ensure the ACS represented the installed performance seen on the F-22 during flight testing. Regrettably, the cost of the ACS V&V effort cannot be disclosed, since the capability is not government owned, but the effort took about 3 years to accomplish. This short timeframe led to a focus on reusing and re-hosting existing software and models, especially for non-F-22 components of the ACS, such as the DIADS, the real-time SAM (RTSAM), and the other blue and red aircraft models.

V&V performance requirements were developed for individual ACS components and for the ACS as a whole. This strategy allowed an easier examination of the technical performance of those components before they were combined into a complicated tactical engagement involving multiple aircraft and ground radars, operations centers, and SAMs. Once it was demonstrated that the individual pieces of the ACS were properly modeled, AFOTEC was more confident in the accuracy of the complete simulation.

At the functional performance level, V&V requirements were based on measures of performance such as radar-mode detection range and angle-tracking accuracy. At the operating or mission level of performance, measures of effectiveness such as the number of valid missiles launched per sortie, types of missiles launched, and launch ranges were used. Because of the sheer number of measures calculated during ACS runs, and because of the complicated interactions between ACS components, using sensitivity analyses to determine error tolerances was impractical. Instead, AFOTEC used a three-step approach to determine fidelity acceptance:
1. Screen the measures produced by the ACS to identify cases that differ from reference data by 10 percent or more (the "10% rule").
2. For the identified cases, look for a reasonable explanation of the difference; for example, the ACS test case may not have been conducted at the same flight conditions as the open-air test, producing different results.
3. If no reasonable explanation could be found and the discrepancy could not be resolved, either accept less stringent fidelity or initiate corrective action to improve the agreement between test measurements and ACS performance.
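The first two steps of that fidelity screen can be sketched as follows. This is a minimal illustration under stated assumptions: the `screen_measures` helper, the measure names, and the data are all invented; only the 10 percent threshold comes from the briefing.

```python
def screen_measures(acs, reference, explanations=None, tolerance=0.10):
    """Return measures whose ACS value differs from reference by >= tolerance.

    acs, reference: dicts mapping measure name -> observed value.
    explanations: dict mapping measure name -> a known reason for a mismatch
    (e.g., differing flight conditions); None means the case is unresolved.
    """
    explanations = explanations or {}
    flagged = []
    for name, ref in reference.items():
        if ref == 0:
            continue  # relative error undefined; would need an absolute check
        rel_err = abs(acs[name] - ref) / abs(ref)
        if rel_err >= tolerance:  # the "10% rule" screen
            flagged.append({
                "measure": name,
                "relative_error": round(rel_err, 3),
                "explanation": explanations.get(name),
            })
    return flagged

# Hypothetical measures: one within tolerance, one outside it
acs = {"detection_range_nm": 42.0, "track_accuracy_deg": 0.30}
ref = {"detection_range_nm": 50.0, "track_accuracy_deg": 0.31}
notes = {"detection_range_nm": "ACS case not flown at open-air flight conditions"}
for case in screen_measures(acs, ref, notes):
    print(case)  # only detection_range_nm is flagged (16% relative error)
```

Cases that come back with `explanation` set to `None` would then feed step 3: accept less stringent fidelity or initiate corrective action.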
Slide 14: ACS Models
- Requirements identification key to VV&A success; includes models, modifications, and interfaces
- Threat Model Analysis Program (TMAP)
- Fidelity requirements focused on System Under Test (SUT) capabilities
- Create models / modify ACS: verify stand-alone models and modifications; integrate new/modified models into the ACS; verify integrated performance
- As model suppliers, IPCs* must work closely with ACS: support fidelity of interaction with other ACS models; real-time requirements
* IPC: Intelligence Production Center (NASIC, MSIC, ONI)
Slide 15: V&V Plan
- Accredit the ACS for a specific purpose: evaluate aspects of F-22 performance not obtainable on the open-air range (OAR); the ACS provides a "test for score" environment
- V&V each ACS element and the ACS as a whole:
  - Aircraft (F-22)
  - Digital Integrated Air Defense System (DIADS)
  - Blue and red airborne players/threats
  - Real-time surface-to-air missile (RTSAM)
  - Background (player density)
  - Environment (ECM, clutter, weather)
  - Endgame
- Separate plans; a single report approved by the Commander

Notes: At AFOTEC, test capabilities are accredited for a specific purpose. In the case of the ACS, the purpose was twofold. First, to use the capability for mission rehearsal by executing scenarios/events that required launching and detection of air-to-air and surface-to-air missiles, expenditure/activation of countermeasures, and endgame processing. These events by themselves represent a challenge to testing on the open-air range, because pilots will not actually launch missiles or activate and expend missile countermeasures, and targets will not really explode and disappear from sensor displays. Second, to evaluate the F-22's performance in air superiority, which centers on lethality and survivability, that is, on interactions among the various players. The V&V agents developed separate plans for each element of the ACS and for the ACS as a whole, while producing a single report approved by the AFOTEC Commander.
Slide 16: V&V Plan (continued)
- Used the Simulation, Test, and Evaluation Process (STEP)
- Goal: give the Commander sufficient information to make a sound decision about using the ACS for F-22 OT&E

Notes: AFOTEC followed the Simulation, Test, and Evaluation Process (STEP) advocated by the Department of Defense (DoD) as the most desirable method for accrediting simulations used to develop and verify weapon systems. For many of the ACS players and components, we had the opportunity to compare ACS performance to actual friendly and threat equipment performance through both open-air range (OAR) and hardware-in-the-loop (HITL) testing.
Slide 17: V&V Execution
Verification:
- Prime contractor and major subcontractor efforts
- Government labs
- Intelligence centers
Validation:
- Conceptual model validation
- Results validation
- Correlation-to-testing validation
- Face validation
- Validation Working Groups (VWGs)

Notes: The prime contractor and major subcontractors who developed or re-hosted the models wrote software development plans following standard software practices as part of their verification process. In most cases, they had independent teams accomplishing the acceptance testing. Conceptual model validation involved internal examinations of M&S assumptions, architecture, and algorithms in the context of the intended use. Results validation involved comparisons of ACS results with historical/analytical results from other M&S. Correlation-to-testing validation was AFOTEC's preferred technique for the F-22 component because bench or flight-test data was available for comparison; however, it had the most risk because it depended on the F-22 test schedule and the capabilities of many organizations to produce data. For face validation, AFOTEC took advantage of the fact that the ACS is a MITL simulation: the pilots and controllers who operate the real systems were able to assess how well the simulation matches them. The work was performed by three VWGs aligned with the ACS simulation framework, the F-22 systems, and the threat and friendly systems product centers. The VWGs executed the V&V plans for the eight ACS components previously discussed.
Slide 18: Validation & Accreditation
- Validate performance: Validation Working Groups (VWGs); stakeholders supported by appropriate subject matter experts
- Correlation to the Open Air Range (OAR) is the capstone event: Test-the-Test (TTT)
- Correct defects; re-verify/re-validate as required
- Accreditation by AFOTEC: the V&V process supports accreditation with continuous documentation
Slide 19: Test-the-Test (TTT) Process
- TTT is the final step in validation
  - OAR modeled with anticipated test conditions
  - TMAP followed for exploited or surrogate threats on the OAR
  - Executed with the same procedures, safety limits, etc., as the OAR
  - Participants same as the OAR: OT&E and Aggressor pilots, etc.
- Results compared following OAR testing
  - Quantitative metrics: mission-level and supporting MOEs
  - Face validation based on participant comments
- Deltas are investigated and explained
  - Differences in test conditions or procedures identified
  - Changes to models assessed and implemented as required
- TTT re-fly: conducted if necessary; correlation of virtual and OAR results must satisfy the accreditation authority
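The comparison-and-investigation steps above might be sketched as below: compute per-MOE deltas between virtual and open-air results, then separate deltas already explained by known condition differences from those still needing investigation. All names, data, and the placement of a 10 percent tolerance here are illustrative assumptions, not the documented TTT criteria.

```python
def ttt_delta_report(virtual, open_air, known_condition_diffs, tolerance=0.10):
    """Compare mission-level MOEs between virtual (ACS) and open-air trials.

    Returns (explained, unexplained) lists of (moe, relative_delta) pairs;
    unexplained deltas are candidates for model changes and a possible re-fly.
    """
    explained, unexplained = [], []
    for moe, oar_value in open_air.items():
        if oar_value == 0:
            continue  # relative delta undefined for a zero reference
        delta = abs(virtual[moe] - oar_value) / abs(oar_value)
        if delta < tolerance:
            continue  # within tolerance; no investigation needed
        record = (moe, round(delta, 3))
        if moe in known_condition_diffs:
            explained.append(record)    # a test-condition difference accounts for it
        else:
            unexplained.append(record)  # investigate; assess model changes
    return explained, unexplained

# Hypothetical mission-level MOEs for one TTT comparison
virtual = {"valid_launches_per_sortie": 2.4, "mean_launch_range_nm": 32.0}
open_air = {"valid_launches_per_sortie": 2.0, "mean_launch_range_nm": 27.0}
known = {"valid_launches_per_sortie": "simulated sorties carried a full missile load"}
explained, unexplained = ttt_delta_report(virtual, open_air, known)
```

In this sketch the launch-rate delta is attributed to a documented condition difference, while the launch-range delta would remain open until the accreditation authority is satisfied, whether through explanation, model change, or a TTT re-fly.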
Slide 20: V&V Results
- Functional verification by contractor teams
- ACS validated by AFOTEC: 98 component experiments; simulated sorties flown over 21 weeks
- Limitations (all minor): four for F-22 system models; four for blue airborne models; two for the Digital Integrated Air Defense System; one each for a red airborne model, background players, endgame, and environment
- ACS accredited for use in F-22 OT&E

Notes: The ACS verification effort entailed 5 contractor teams and independent agents. AFOTEC validation was completed for each component and for the air combat system as a whole by: comparing component simulation performance with the installed performance of those same systems demonstrated during open-air testing (98 component experiments); and comparing integrated performance measures observed during open-air trials with trials conducted in the ACS. Only 14 minor limitations, where ACS components did not achieve the 10% rule, were noted. Consequently, the ACS was accredited for use in F-22 OT&E.
Slide 21: Lessons Learned
Attributes for VV&A success:
- Models created with fidelity focused on replicating the installed system performance required for mission-level evaluation
- Know the conditions/variables that drive sensor performance
- Identify data sources that support validation; DT scope and methodology are the backbone of validation
- Know SUT capabilities and the user CONEMP
- VV&A must occur at the subsystem, system, and system-of-systems levels
Slide 22: Summary
- Complex V&V effort: player interactions; man-machine interfaces
- Required to accomplish the F-22 evaluation: represent a realistic operational environment; mitigate test range limitations
- Keys to success: early influence; leverage existing guidance and practices; teamwork; document the work

Notes: In summary, the ACS is a complicated simulation that allows the integration of live and digital pilots into a dense player and signal environment. Hundreds of background players (airborne and ground-based) contribute to the operational realism of the ACS for F-22 effectiveness evaluations. The ACS V&V agents defined a robust V&V strategy based on planned and opportunistic reuse of existing code. The ACS was V&V'ed at both the functional level and the mission level before it was finally accredited. Our keys to success: be involved early, leverage existing guidance and practices, work as a team, and (most importantly) document the work so others can take advantage of reusing the capability. Questions?