DOMAIN SPECIFIC LANGUAGE AND MANAGEMENT ENVIRONMENT FOR EXTENSIBLE CPS. Martin Lehofer, Siemens Corporate Technology, Princeton, NJ, USA; Subhav Pradhan, Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, USA.


DOMAIN SPECIFIC LANGUAGE AND MANAGEMENT ENVIRONMENT FOR EXTENSIBLE CPS. Martin Lehofer, Siemens Corporate Technology, Princeton, NJ, USA; Subhav Pradhan, Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, USA. This work was supported in part by a research grant from Siemens Corporate Technology. CPSWeek 2016, Vienna, Austria.

Outline
– Introduction: CPS to Extensible CPS
– CHARIOT Overview
– CHARIOT-ML (Modeling Language)
– Example: Smart Parking System
– CHARIOT-ML Demonstration
– CHARIOT Runtime: Autonomous Resilience
– CHARIOT Runtime Demonstration
– Redundancy Patterns Supported by CHARIOT
– Redundancy Patterns Demonstration
– Conclusion

Cyber-Physical Systems
Cyber-physical systems interact with, and expand the capabilities of, the physical world through computation, communication, and control. CPS combine these fields to:
– Connect machines embedded with sensors to other machines, operators, and end users
– Enable command & control of mechanical devices
– Extract data from devices and humans, and make sense of it
– Deliver the right info to the right people at the right time
– Derive value in terms of improved utility & cost savings
(Radhakisan Baheti and Helen Gill, National Science Foundation (NSF), 2006)

Evolution towards Extensible CPS
– Mid 1990s, Distributed Systems: focus mainly on functionality evolution; functionalities provided by applications composed of distributed objects.
– Mid 2000s, Grid Computing: focus on functionality as well as resource extensibility; extensible schemas and object models; the grid system uses an extensible object model, with class/metaclass objects for functionality and host objects for resources.
– Late 2000s, Cloud Computing: extensibility is straightforward in the case of one Cloud Service Provider (CSP); extensibility across multiple CSPs is much more complex and requires a federation mechanism.
– Present, Ubiquitous Computing: mobile computing to IoT and IIoT; large scale, dynamic, with a high degree of heterogeneity; interoperability and extensibility are crucial; complex edge devices connect the physical world with the cyber world (e.g., smart sensors).
The future of ubiquitous systems is cyber-physical: ultra-large-scale, multi-domain CPS.

Extensible CPS
A collection of loosely connected cyber-physical (sub)systems that "virtualize" their resources to provide an open platform capable of hosting multiple cyber-physical applications simultaneously.
Why are these systems extensible?
– Dynamic resources and applications can be added or removed at any time
– The functionalities provided by a platform change (evolve) over time depending on the hosted applications
How does this differ from CPS?
– Applications are dynamic, so their lifecycle cannot be tightly coupled with the underlying system
– Resources are shared by dynamic applications, resulting in varying resource demand and usage
Challenges: application management and autonomous resilience
[Figure: building automation example: HVAC, lighting, safety, energy, and sun-shades within building(s); local smart grid, national power grid, and energy market.]

CHARIOT: Cyber-pHysical Application aRchiTecture with Objective-based reconfiguraTion
[Figure: at design time, a system architect builds the system model, and an interpreter plus design-time analysis tools produce generated artifacts. At runtime, the management infrastructure (monitoring infrastructure, resilience infrastructure, and a distributed database) closes an autonomous resilience loop around the managed system.]

Goal-based System Description
– A system's goal requires one or more objectives to be satisfied.
– An objective requires the existence of one or more functionalities, where functionalities are provided by components.
– A component provides exactly one functionality; multiple components can provide the same functionality.
– This yields a dynamic representation of a system that doesn't require systems to be described as a collection of concrete components: it doesn't matter which component provides a certain functionality, as long as it is provided.
– The live system comprises one or more instances of different components.
– Redundancy patterns can be applied to functionalities for resilience: voter, consensus, and simple cluster patterns.
[Figure: system model as a tree: goal; objectives 1..n; functionalities 1..n; component types (apps) providing them.]
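The goal/objective/functionality/component relationships above can be captured in a small data model. The following is an illustrative sketch in Python, not CHARIOT's actual implementation; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Component:
    """A deployable application component; provides exactly one functionality."""
    name: str
    provides: str  # name of the single functionality this component provides

@dataclass
class Functionality:
    """Required by an objective; may carry a redundancy pattern for resilience."""
    name: str
    redundancy_pattern: Optional[str] = None  # "voter", "consensus", or "cluster"

@dataclass
class Objective:
    name: str
    requires: List[Functionality] = field(default_factory=list)

@dataclass
class Goal:
    """A system goal is satisfied when all of its objectives are satisfied."""
    name: str
    objectives: List[Objective] = field(default_factory=list)

def candidates(functionality: Functionality, components: List[Component]) -> List[Component]:
    """Any component providing the functionality can satisfy it; the model
    never names a concrete component."""
    return [c for c in components if c.provides == functionality.name]
```

Because a functionality can be satisfied by any of its candidate components, the description stays goal-based rather than component-based.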

Run-time Self-reconfiguration Logic
The configuration space is generated from models of the system. Starting from an initial configuration point, a fault disables part of the space; reconfiguration moves the system to a new configuration point and the function is recovered. A subsequent fault triggers the same process again.
A great deal of information about a system is locked up in the models. When the requirements or environment experience an unforeseen change, we can use the models at runtime to recover.
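The idea of a configuration space that shrinks under faults can be sketched as follows. This is a minimal illustration with hypothetical names; CHARIOT's actual configuration points are computed by its solver, not by list filtering.

```python
# A configuration point maps component instances to nodes. A node failure
# invalidates every point that uses the failed node; reconfiguration then
# picks a surviving point.

def valid_points(config_space, failed_nodes):
    """Keep only configuration points that place nothing on a failed node."""
    return [p for p in config_space
            if not any(node in failed_nodes for node in p.values())]

def reconfigure(config_space, current, failed_nodes):
    """Return the current point if still valid, else some valid alternative."""
    remaining = valid_points(config_space, failed_nodes)
    if current in remaining:
        return current
    return remaining[0] if remaining else None  # None: no recovery possible
```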

CHARIOT-ML Textual Modeling Language (ML) for extensible CPS Supports different first class modeling concepts Facilitates goal-based system description Role assignment App Developer System Architect Goal Objective 1 Functionality 1 Component A (App) Objective n Functionality n Component B (App) System Model … Modeling Concepts

Example: Smart Parking System
[Figure: four edge nodes run the applications; backend nodes run the Resilience Engine.]
– Occupancy Sensor: deploy one on each edge node; cannot be migrated; sends status information.
– Parking Manager Server: deploy on a single edge node; can be migrated; stores and maintains the status of each space.
– Client: runs on a client machine (phone/tablet/car); sends a parking request and receives a parking response.
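The message flow above (sensors send status, the manager stores it, clients request a space) can be sketched minimally. This is a hypothetical illustration of the parking-manager role, not the actual component code.

```python
class ParkingManager:
    """Stores and maintains the occupancy status of each parking space."""

    def __init__(self):
        self.status = {}  # space id -> True if occupied

    def on_status(self, space_id, occupied):
        """Called when an occupancy sensor sends status information."""
        self.status[space_id] = occupied

    def on_parking_request(self):
        """Answer a client's parking request with a free space, if any."""
        free = [s for s, occupied in self.status.items() if not occupied]
        return free[0] if free else None
```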

Example: Smart Parking System (goal decomposition)
– System goal: Smart Parking System
– Objectives: Occupancy Checking (per node, local), Client Interaction
– Functionalities: Occupancy Sensor, Parking Manager Server, Client
– Component types: OccupancySensorComp, ParkingManagerServerComp, ClientComp

Smart Parking System Modeling Demonstration

CHARIOT Autonomous Resilience Loop
– Deployment management: distributed Application Managers (AMs) responsible for starting and stopping local application processes; each node has an AM.
– Monitoring infrastructure: distributed monitors responsible for monitoring and detecting failures; the current implementation of CHARIOT uses distributed Node Monitors (NMs) to detect node failures.
– Resilience infrastructure: a Resilience Engine (RE), a solver that uses the Z3 SMT solver to compute configuration points at runtime; multiple REs can be deployed to tolerate a single point of failure.
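Failure detection by distributed Node Monitors can be illustrated with a simple heartbeat scheme. This is a hedged sketch under an assumed semantics (a node is presumed failed when its last heartbeat is older than a timeout); it is not CHARIOT's implementation.

```python
import time

class NodeMonitor:
    """Presumes a node failed when its heartbeat is older than `timeout` seconds."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_seen = {}  # node id -> timestamp of last heartbeat

    def heartbeat(self, node_id, now=None):
        """Record a heartbeat from a node (timestamp injectable for testing)."""
        self.last_seen[node_id] = time.time() if now is None else now

    def failed_nodes(self, now=None):
        """Return the set of nodes whose heartbeat has expired."""
        now = time.time() if now is None else now
        return {n for n, t in self.last_seen.items() if now - t > self.timeout}
```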

CHARIOT Autonomous Resilience Loop
[Figure: compute nodes each run an Application Manager, a Node Monitor, and application components; solver nodes run replicated Resilience Engines, one acting as leader; a distributed database stores the configuration space and the current configuration point. Flow: the configuration space is generated and stored, the initial configuration point is computed, and AMs take the resulting deployment actions; on failure, the NM updates the database, the leader RE queries the database and the solver, and AMs take the resulting reconfiguration actions.]

CHARIOT Autonomous Resilience Loop (failure-handling sequence among NM, Database, RE, and AM)
1. The NM detects a node failure.
2. The NM updates the configuration space in the database with the detected failure.
3. The RE is asked to compute a new configuration point.
4. The RE gets the latest configuration space and configuration point.
5. The RE computes a new configuration point and the corresponding deployment actions.
6. The RE stores the deployment actions.
7. The AMs receive a deployment-actions notification.
8. Each AM determines and takes local actions, then updates the configuration space.

Configuration Point Computation (CPC) Algorithm
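CHARIOT's Resilience Engine computes configuration points with the Z3 SMT solver, as noted in the resilience-loop slide. To illustrate the underlying constraint problem, here is a brute-force stand-in in plain Python: assign every component to a live node without exceeding a per-node capacity. The capacity constraint and all names are simplifying assumptions, not the actual CPC algorithm.

```python
from itertools import product

def compute_configuration_point(components, nodes, capacity, failed=frozenset()):
    """Brute-force stand-in for the SMT-based CPC: assign each component
    to a live node without exceeding any node's component capacity.
    Returns a dict mapping component -> node, or None if no assignment exists."""
    live = [n for n in nodes if n not in failed]
    for assignment in product(live, repeat=len(components)):
        load = {n: 0 for n in live}
        for n in assignment:
            load[n] += 1
        if all(load[n] <= capacity[n] for n in live):
            return dict(zip(components, assignment))
    return None
```

An SMT solver explores the same space symbolically instead of by enumeration, which is what makes runtime computation feasible at scale.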

CHARIOT Layered Architecture

Smart Parking System Autonomous Resilience Demonstration

Redundancy Patterns
Redundancy patterns are important for various reasons in the context of CPS:
–Redundancy to avoid anomalies: useful for scenarios with multiple sensors measuring the same property
–Redundancy as a fault-tolerance mechanism: useful for hard real-time systems that cannot tolerate reconfiguration overhead
–Redundancy for large-scale, multi-instance deployment
CHARIOT supports three well-known redundancy patterns:
–Voter pattern
–Consensus pattern
–Cluster pattern

Redundancy Patterns
In CHARIOT, redundancy patterns are associated with functionalities.
[Figure: a functionality F1 realized as (a) a voter pattern with 3 instances (F1_1, F1_2, F1_3) feeding a voter, (b) a consensus pattern with 4 instances (F1_1 through F1_4) forming a consensus ring, and (c) a simple cluster with 2 instances (F1_1, F1_2).]
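The voter pattern's decision rule can be sketched as a majority vote over the outputs of the redundant instances. This is an illustrative sketch, not CHARIOT's voter implementation.

```python
from collections import Counter

def vote(replica_outputs):
    """Voter pattern sketch: accept the strict-majority output of the
    redundant functionality instances; raise if no strict majority exists."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count * 2 <= len(replica_outputs):
        raise ValueError("no majority among replicas")
    return value
```

With 3 instances, the voter masks one faulty output, which is why the pattern suits sensors measuring the same property.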

Redundancy Patterns Demonstration

Summary of Our Approach for Achieving Resilience
Fail-stop model, where failures are detected and handled as close to the source as possible; if failure handling is not possible, the failure propagates to the layer above.
–Component layer will comprise component monitors (future): detect and handle component failures/anomalies; switching component modes is one mechanism for handling a component failure (assuming components have modes).
–Node layer comprises node monitors: detect and handle process failures, node failures, and propagated component failures; re-spawning a process can be a mechanism to handle a process failure (future).
–Redundancy layer comprises redundant application entities: if a node failure affects some application components, their redundant replicas can preserve resilience as long as they are not deployed on the same node.
–Reconfiguration layer comprises the resilience engine: if nothing else can be done, the resilience engine tries to reconfigure the system to mitigate the failure; if reconfiguration doesn't work, the system needs to shut down.
[Figure: failure propagation upward through the component, node, redundancy, and reconfiguration layers.]
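The layered failure propagation described above behaves like a chain of handlers: each layer either handles a failure or passes it to the layer above. A minimal sketch with hypothetical failure kinds (not the actual CHARIOT implementation):

```python
def component_layer(failure):
    # e.g., switch the component's mode; here it handles only anomalies
    return failure["kind"] == "component-anomaly"

def node_layer(failure):
    # e.g., re-spawn a crashed process
    return failure["kind"] == "process-crash"

def redundancy_layer(failure):
    # a replica on another node can mask a node failure
    return failure["kind"] == "node-failure" and failure.get("has_replica", False)

def reconfiguration_layer(failure):
    # last resort: the resilience engine recomputes the deployment
    return True  # assume reconfiguration is attempted for anything left

LAYERS = [component_layer, node_layer, redundancy_layer, reconfiguration_layer]

def handle(failure):
    """Return the name of the lowest layer that handles the failure."""
    for layer in LAYERS:
        if layer(failure):
            return layer.__name__
    return None
```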

Example: Smart Parking System II
[Sequence diagram with participants Occupancy Sensor, Parking Manager, Navigation Server, Localization Server, DecaWave, and Client.]
–Loop: occupancy sensors send status info; the parking manager stores and maintains the status of each space.
–The client sends a parking request; the parking manager finds a parking space and sends a parking response.
–Localization: localization is started; DecaWave performs localization (in a loop) and sends the location; the localization server sends the localized info; the navigation server sends the target heading direction and a parking response to the client.