Exploiting decision theory for supporting therapy selection in computerized clinical guidelines
Stefania Montani, Paolo Terenziani, Alessio Bottrighi
DI, Università del Piemonte Orientale “Amedeo Avogadro”, Alessandria, Italy

Decision support
• Computer-based treatment of GL to support (therapeutic) decisions
• Therapy selection may be critical
  – Clinically equivalent alternatives
• Several decisions in sequence may be prompted (dynamic decision problem)
  – Simulation and collection of global decision parameters (utility, costs, duration); see the sketch below
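To make the "global decision parameters" concrete, the following is a minimal, purely illustrative Python sketch: it accumulates utility, cost and duration while simulating one sequence of therapeutic choices. The data model (Alternative, Decision, simulate_path) and all numeric values are hypothetical placeholders, not taken from GLARE or from any real guideline.

```python
from dataclasses import dataclass

# Purely illustrative data model: names and values are hypothetical,
# not taken from GLARE or from any real clinical guideline.
@dataclass
class Alternative:
    name: str
    utility: float   # expected clinical benefit of this alternative
    cost: float      # monetary cost
    duration: float  # expected duration (e.g. in days)

@dataclass
class Decision:
    label: str
    alternatives: list[Alternative]

def simulate_path(decisions: list[Decision], choices: list[str]) -> dict:
    """Accumulate global parameters along one simulated sequence of choices."""
    totals = {"utility": 0.0, "cost": 0.0, "duration": 0.0}
    for decision, choice in zip(decisions, choices):
        alt = next(a for a in decision.alternatives if a.name == choice)
        totals["utility"] += alt.utility
        totals["cost"] += alt.cost
        totals["duration"] += alt.duration
    return totals

# Example usage with placeholder values:
path = [
    Decision("first-line therapy",
             [Alternative("therapy A", 0.8, 100.0, 30.0),
              Alternative("therapy B", 0.7, 60.0, 20.0)]),
    Decision("follow-up",
             [Alternative("continue", 0.5, 40.0, 60.0),
              Alternative("switch", 0.4, 80.0, 45.0)]),
]
print(simulate_path(path, ["therapy A", "continue"]))
```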

Mapping decision theory concepts to clinical GL (1)
• Decision theory: a natural candidate for decision support
• Mapping of concepts between the two areas
  – Each decision is based on a data collection completed at decision time
  – Does not depend on the previous history: Markov assumption
  – Discrete-time process
  – Completely observable
  – Markov Decision Process (MDP); see the sketch below
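A minimal sketch of such a mapping, assuming (hypothetically) that guideline decision points become MDP states, therapeutic alternatives become actions, and patient evolution defines transition probabilities with utilities as rewards. State and action names and all numbers below are illustrative placeholders, not part of the original work.

```python
# Hypothetical mini-MDP: states, per-state actions, transitions P and rewards R.
# Everything below is an illustrative placeholder, not a real guideline fragment.

states = ["evaluation", "under_therapy", "recovered", "relapse"]

# Actions available in each state (terminal states have none).
actions = {
    "evaluation":    ["therapy_A", "therapy_B"],
    "under_therapy": ["continue", "switch"],
    "recovered":     [],
    "relapse":       [],
}

# P[s][a] = list of (next_state, probability); probabilities sum to 1 per (s, a).
P = {
    "evaluation": {
        "therapy_A": [("under_therapy", 1.0)],
        "therapy_B": [("under_therapy", 1.0)],
    },
    "under_therapy": {
        "continue": [("recovered", 0.7), ("relapse", 0.3)],
        "switch":   [("recovered", 0.6), ("relapse", 0.4)],
    },
}

# R[s][a] = immediate utility (benefit minus cost/side effects) of taking a in s.
R = {
    "evaluation":    {"therapy_A": -1.0, "therapy_B": -1.5},
    "under_therapy": {"continue": 2.0, "switch": 1.0},
}
```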

Exploiting DT algorithms in the GL management domain
• Implementation issues
  – Classical algorithms
  – Integrations and specific choices to deal with non-trivial GL control-flow constructs
• Calculation of the optimal policy
• Calculation of the expected utility along a given GL path (both calculations are sketched below)
• Sharable across different GL management systems
• Implementation in GLARE: a system for acquiring, representing and executing clinical GL
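As a rough illustration of the two computations above, the sketch below applies textbook value iteration to the hypothetical states/actions/P/R structures from the previous sketch, and evaluates the discounted utility of one fixed path. This is a generic algorithmic sketch, not GLARE's actual implementation, and it ignores the non-trivial GL control-flow constructs mentioned above.

```python
def value_iteration(states, actions, P, R, gamma=0.95, eps=1e-6):
    """Classical value iteration: returns the optimal values V and a greedy policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions.get(s):        # terminal state: value stays 0
                continue
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in actions[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            policy = {
                s: max(actions[s],
                       key=lambda a: R[s][a]
                       + gamma * sum(p * V[s2] for s2, p in P[s][a]))
                for s in states if actions.get(s)
            }
            return V, policy

def path_utility(path, R, gamma=0.95):
    """Discounted utility of a fixed sequence of (state, action) pairs."""
    return sum(gamma ** t * R[s][a] for t, (s, a) in enumerate(path))

# Example usage with the structures from the previous sketch:
# V, policy = value_iteration(states, actions, P, R)
# u = path_utility([("evaluation", "therapy_A"), ("under_therapy", "continue")], R)
```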