A Markov Chain Model of Baseball


A Markov Chain Model of Baseball
Used as a project for an undergraduate Stochastic Modeling course

Eric Kuennen
Department of Mathematics, University of Wisconsin Oshkosh
kuennene@uwosh.edu

Presented at the Joint Mathematics Meetings, Washington, D.C., January 6, 2009

Markov Chain Model for Baseball

View an inning of baseball as a stochastic process with 25 possible states. There are 8 arrangements of runners on the bases (bases empty; runner on 1st; runner on 2nd; runner on 3rd; runners on 1st and 2nd; runners on 1st and 3rd; runners on 2nd and 3rd; bases loaded) and three possibilities for the number of outs (0, 1, or 2), for a total of 24 non-absorbing states. The 25th state (3 outs) is an absorbing state that ends the inning.

[State grid: each state is labeled (runners, outs), from 0,0 through 123,2; e.g. 12,1 means runners on 1st and 2nd with 1 out.]
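
The state space above can be enumerated directly. A short Python sketch (the encoding of runner configurations as sets of occupied bases is my own choice, not the slide's):

```python
from itertools import combinations

# The 8 runner configurations are exactly the subsets of the bases {1, 2, 3}.
runner_configs = [frozenset(c) for k in range(4)
                  for c in combinations((1, 2, 3), k)]

# Pair each configuration with 0, 1, or 2 outs: 24 non-absorbing states.
states = [(runners, outs) for outs in range(3) for runners in runner_configs]

# The absorbing 3-out state brings the total to 25.
all_states = states + ["3 outs"]

print(len(runner_configs), len(states), len(all_states))  # prints: 8 24 25
```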

Transition Probabilities

A Markov chain is a stochastic process in which the next state depends only on the present state; in other words, future states are independent of past states given the present one. Let Pij denote the probability that the next state is j, given that the current state is i, and form the transition matrix T = [Pij].

w = probability of a walk
s = probability of a single
d = probability of a double
t = probability of a triple
h = probability of a home run
out = probability of an out
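
One row of T is easy to write down: transitions out of the bases-empty states. The sketch below is a hypothetical helper of my own, not the course's code; the advancement rules for occupied-base states (the d/2 and s/2 split terms in the model) are where the real modeling work lives and are not covered here.

```python
# Transitions out of (bases empty, k outs), as a {next_state: probability} map.
# States are (frozenset of occupied bases, outs); "3 outs" is absorbing.
def empty_bases_row(k, w, s, d, t, h, out):
    """Return the transition probabilities from (bases empty, k outs)."""
    out_state = (frozenset(), k + 1) if k < 2 else "3 outs"
    return {
        (frozenset({1}), k): w + s,  # walk or single: batter to 1st
        (frozenset({2}), k): d,      # double: batter to 2nd
        (frozenset({3}), k): t,      # triple: batter to 3rd
        (frozenset(), k): h,         # home run: bases empty again, 1 run scores
        out_state: out,              # out: same bases, one more out
    }

row = empty_bases_row(0, w=0.094, s=0.157, d=0.049, t=0.005, h=0.029, out=0.661)
print(sum(row.values()))  # close to 1 (the 2005 estimates are rounded)
```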

[Transition diagram for the 0-out states (not fully reproduced): arrows among states 0,0 through 123,0 carry probabilities such as h, s + w, d, t, d/2, w + s/2, s/2, 3s/4, w, s/4, s, and w + s/4, the split terms modeling how runners already on base advance.]

Transition Matrix

Run Matrix

[Diagram not reproduced: a matrix recording, for each transition between states, how many runs (1, 2, 3, or 4) score on that play.]

Methods of Analysis: Theoretical

Calculations with Maple:
- Expected run values for each state
- Steady-state probability vector
- Expected value of a given play, in a given state or in general

Expected Run Values

Let v_i be the expected number of runs scored starting in state i. Students use Maple's linear algebra package to solve for the vector v.
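
Writing r_i for the expected runs scored on one transition out of state i, the vector v satisfies v = r + Qv, i.e. (I - Q)v = r, where Q is the transition matrix restricted to the non-absorbing states. The course solves this in Maple; here is the same linear-algebra step on an invented toy 3-state absorbing chain (the numbers are illustrative only, not the baseball matrix):

```python
import numpy as np

# Toy chain: Q holds transition probabilities among the 3 non-absorbing
# states (remaining mass in each row goes to the absorbing state), and
# r[i] is the expected number of runs scored on one step out of state i.
Q = np.array([[0.0, 0.5, 0.2],
              [0.0, 0.0, 0.6],
              [0.0, 0.0, 0.0]])
r = np.array([0.3, 0.1, 0.0])

# Solve (I - Q) v = r for the expected total runs from each state.
v = np.linalg.solve(np.eye(3) - Q, r)
print(v)  # v = [0.35, 0.1, 0.0]
```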

Expected Run Values

From 2005 MLB data: w = .094, s = .157, d = .049, t = .005, h = .029, out = .661

Runners        0 outs   1 out   2 outs
empty           0.56     0.31    0.12
1st             0.97     0.58    0.25
2nd             1.13     0.73    0.36
3rd             1.17     0.77    0.38
1st & 2nd       1.57     1.03    0.50
1st & 3rd       1.61     1.07    0.53
2nd & 3rd       1.72             0.61
bases loaded    2.31             0.84

Sacrifice Bunting

Is it ever advantageous to sacrifice bunt? A successful sacrifice with a runner on 1st and no outs trades the 0.97 expected runs of that state for the 0.73 expected runs of the runner-on-2nd, one-out state.
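
The comparison can be made concrete with the values copied from the 2005 run table above (a tiny sketch; the state values are the slide's, the code is mine):

```python
# A successful sacrifice bunt trades (runner on 1st, 0 outs)
# for (runner on 2nd, 1 out). Values from the 2005 run table.
before = 0.97   # v(runner on 1st, 0 outs)
after = 0.73    # v(runner on 2nd, 1 out)
delta = after - before
print(f"change in expected runs from a successful bunt: {delta:+.2f}")  # -0.24
```

So, measured in expected runs alone, even a successful bunt costs about a quarter of a run.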

Stealing Bases

How successful does a base-stealer need to be, on average, for it to be worthwhile to attempt to steal second base with a runner on first and no outs? A successful steal moves the chain from 0.97 expected runs (runner on 1st, 0 outs) to 1.13 (runner on 2nd, 0 outs); a caught stealing drops it to 0.31 (bases empty, 1 out).
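
The break-even success rate p solves p·v(2nd, 0 outs) + (1 - p)·v(empty, 1 out) = v(1st, 0 outs), with the values taken from the 2005 run table above:

```python
# Break-even success rate for stealing second with a runner on 1st, 0 outs.
v_success = 1.13   # v(runner on 2nd, 0 outs)
v_caught = 0.31    # v(bases empty, 1 out)
v_stay = 0.97      # v(runner on 1st, 0 outs)

p_break_even = (v_stay - v_caught) / (v_success - v_caught)
print(f"break-even success rate: {p_break_even:.1%}")  # 80.5%
```

So a runner needs to succeed roughly four times out of five before the attempt pays off in expected runs.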

Methods of Analysis: Experimental

Simulations with Minitab:
- Students write a Minitab macro that uses a random number generator to simulate the step-by-step evolution of the Markov chain
- Large-scale simulations are used to estimate expected run values and to perform situational strategy analyses
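
The Minitab macro itself is not reproduced in this transcript; the sketch below is a Python stand-in for the same idea, under my own simplifying assumptions about base-running (on a hit, every runner advances as many bases as the batter; a walk only pushes forced runners):

```python
import random

# Event probabilities from the 2005 MLB estimates used earlier.
P = {"walk": 0.094, "single": 0.157, "double": 0.049,
     "triple": 0.005, "homer": 0.029, "out": 0.661}
BASES = {"single": 1, "double": 2, "triple": 3, "homer": 4}

def simulate_inning(rng):
    """Simulate one inning; return the number of runs scored."""
    runners, outs, runs = set(), 0, 0   # runners = set of occupied bases
    while outs < 3:
        event = rng.choices(list(P), weights=list(P.values()))[0]
        if event == "out":
            outs += 1
        elif event == "walk":
            first_open = min(b for b in (1, 2, 3, 4) if b not in runners)
            if first_open == 4:
                runs += 1               # bases loaded: a run walks in
            else:
                runners.add(first_open) # net effect of forcing runners up
        else:
            n = BASES[event]
            moved = {r + n for r in runners}
            runs += len([r for r in moved if r >= 4])
            runners = {r for r in moved if r < 4}
            if n == 4:
                runs += 1               # the batter scores too
            else:
                runners.add(n)
    return runs

rng = random.Random(2009)
n_trials = 20_000
mean_runs = sum(simulate_inning(rng) for _ in range(n_trials)) / n_trials
print(f"estimated expected runs per inning: {mean_runs:.2f}")
```

Averaging over many simulated innings estimates the expected run value of the starting state, and starting the simulation from other states supports the situational analyses above.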

Two Simulated Innings

First Inning:
1. Single
2. Out
3. Double

Second Inning:
8. Single
9. Homerun
10. Out
11. Out
12. Single
13. Out

Sacrificing with the Game on the Line

In the ninth inning, your team needs one run to win or tie, and the first batter reaches first. Should you bunt?

Swinging away (runner on 1st, 0 outs):
Mean number of runs scored: 0.909
Probability of scoring at least one run: 0.390

After a sacrifice bunt attempt:
Mean number of runs scored: 0.665
Probability of scoring at least one run: 0.406

Bunting lowers the expected number of runs but raises the chance of scoring at least once, which is all that matters in this situation.

Reference

Sokol, J.S. (2004). "An Intuitive Markov Chain Lesson from Baseball." INFORMS Transactions on Education, 5, pp. 47-55.

Please contact me for:
- Sample Maple worksheet
- Sample Minitab macro
- Project assignment handout

kuennene@uwosh.edu