
TEXTAL Progress
Basic modeling of side-chain and backbone coordinates seems to be working well:
– even for experimental MAD maps, 2.5-3A
– using pattern recognition with a feature-extracted database
– assuming C-alpha coordinates are correct
Use sequence alignment to match fragments after prediction and correct identities

CAPRA Progress
Picks C-alphas using a neural network, connects them into chains
Re-implemented based on the new tracing routine
Does a good job with 2Fo-Fc maps; secondary structure is apparent, RMS < 0.8A
Has a harder time with low-quality maps
Secondary-structure recognition from trace geometry

Prelim. Design for Xtal Agent
Decision-making in structure solution: which program to use? Parameters? Iterations?
PHASES, SOLVE, SHARP, DM, TNT, CNS, TEXTAL, WARP…
Local decision-making (input parameters, when to stop iterating) for one program at a time
Try a statistical approach (Terwilliger)

Global Decision-Making
When to back-track? What to make of information gained by exploring one path?
Example: select an initial, conservative mask for solvent flattening; if it doesn’t lead to a good model, go back and re-flatten
When to “throw out” data (e.g. low FOM)? Use NCS or not?
Alternative paths compete

AI Search Problem
Choice-points form branches in a tree, with the initial data collection at the root
Try to find a path (a sequence of computational actions) that produces a solved structure
Question: when to continue down one path versus re-start from a previous branch-point?
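
The branch-versus-restart question above can be framed as best-first search over choice-points. A minimal sketch, where `expand`, `score`, and `is_solved` are hypothetical stand-ins for running a program, evaluating the resulting model, and testing for a solved structure (not TEXTAL’s actual routines):

```python
import heapq

def best_first_search(root, expand, score, is_solved, max_expansions=100):
    """Explore paths from the root, always resuming the most promising
    open branch-point, so earlier branches can be revisited later."""
    frontier = [(-score(root), 0, root)]  # max-heap via negated scores
    counter = 1                           # tie-breaker for equal scores
    while frontier and max_expansions > 0:
        _neg_score, _, state = heapq.heappop(frontier)
        if is_solved(state):
            return state
        for child in expand(state):
            heapq.heappush(frontier, (-score(child), counter, child))
            counter += 1
        max_expansions -= 1
    return None  # no solved structure found within the budget
```

Because the frontier keeps every unexplored branch-point, “going back” to a previous branch is just popping a state that was pushed earlier.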

Sequential Decision Procedures
A branch of decision theory
Focus on the utility of information gained in earlier steps to make better choices later
Attempt to optimize “long-term payoff”
Define a target utility function that measures model goodness, e.g. a combination of Rfree, completeness, consistency...
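
One way such a target utility could be written down as code. The weights and the choice of components (Rfree, completeness, consistency) are illustrative assumptions, not an actual TEXTAL scoring function:

```python
def model_utility(r_free, completeness, consistency,
                  weights=(0.5, 0.3, 0.2)):
    """Combine model-quality indicators into one score in [0, 1].
    Lower Rfree is better, so it enters as (1 - r_free).
    Weights are hypothetical and would need tuning."""
    w_r, w_comp, w_cons = weights
    return w_r * (1.0 - r_free) + w_comp * completeness + w_cons * consistency
```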

Parameter Estimation
Need quantitative estimates of the probabilistic effects of running a program on the quality of the model
Fit equations from synthetic experiments:
– Prob[Rfree(S′) = x | FOM(S) = y], where S′ is the result of running the program on S
– Prob[Rfree*(flatten(S, 50%)) = x | Rfree*(flatten(S, 40%)) = y]
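
A sketch of how such a conditional distribution could be estimated from synthetic runs: fit a linear trend Rfree′ ≈ a·FOM + b by least squares and model the residual spread as Gaussian, giving a density for Prob[Rfree(S′) = x | FOM(S) = y]. Both the functional form and any data fed to it are assumptions for illustration:

```python
import math

def fit_conditional(fom, rfree):
    """Least-squares fit of rfree ~ a*fom + b with Gaussian residuals.
    Returns the fitted (a, b, sigma) and a conditional density function."""
    n = len(fom)
    mx = sum(fom) / n
    my = sum(rfree) / n
    sxx = sum((x - mx) ** 2 for x in fom)
    sxy = sum((x - mx) * (y - my) for x, y in zip(fom, rfree))
    a = sxy / sxx
    b = my - a * mx
    residuals = [y - (a * x + b) for x, y in zip(fom, rfree)]
    sigma = math.sqrt(sum(r * r for r in residuals) / n)

    def density(x, y):
        """Gaussian density of Rfree' = x given FOM = y."""
        mu = a * y + b
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    return a, b, sigma, density
```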

Utility and Risk
The utility of an action A is the integral of the value of each outcome, weighted by its probability: U(S,A) = ∫ v(S′) Prob(S′|S) dS′
Can be used to compare different actions and states, provided v is “final model quality”
Risk aversion: modify the values in the integral to prefer the possibility of higher reward over average loss, for handling uncertainty

Computational Cost
Why not just run all programs with all parameters? We want to minimize CPU time.
At any given moment, pick the action that produces the state with the highest expected utility minus the estimated cost of its runtime:
– gain: G(A,S) = U(A,S) − f(T(A,S))
– where T(A,S) is the estimated time to run A on S
– and f(·) maps computational effort onto the model-quality scale
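
The gain rule can be sketched as an argmax over candidate actions. Here f is taken to be a simple linear penalty per CPU-hour; both that choice and the example numbers are illustrative assumptions:

```python
def pick_action(candidates, cost_weight=0.05):
    """candidates: list of (name, expected_utility, est_cpu_hours).
    Returns the candidate maximizing G(A,S) = U(A,S) - f(T(A,S)),
    with f(t) = cost_weight * t as a hypothetical cost model."""
    def gain(candidate):
        _, utility, hours = candidate
        return utility - cost_weight * hours
    return max(candidates, key=gain)

# A cheap density-modification run can beat a slightly better but much
# slower alternative once runtime is charged for (numbers invented):
best = pick_action([("DM", 0.60, 1.0), ("WARP", 0.70, 4.0)])
```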