1 1 Petri Nets Marco Sgroi (sgroi@eecs.berkeley.edu) EE249 - Fall 2002

2 2 Outline Petri nets –Introduction –Examples –Properties –Analysis techniques

3 3 Petri Nets (PNs) Model introduced by C.A. Petri in 1962 –Ph.D. Thesis: “Communication with Automata” Applications: distributed computing, manufacturing, control, communication networks, transportation… PNs describe explicitly and graphically: –sequencing/causality –conflict/non-deterministic choice –concurrency Asynchronous model (partial ordering) Main drawback: no hierarchy

4 4 Petri Net Graph Bipartite weighted directed graph: –Places: circles –Transitions: bars or boxes –Arcs: arrows labeled with weights Tokens: black dots t1 p1 p2 t2 p4 t3 p3 2 3

5 5 Petri Net A PN (N, M0) is a Petri Net Graph N and an initial marking M0. – Places represent distributed state by holding tokens; the marking (state) M is an n-vector (m1, m2, m3, …), where mi is the non-negative number of tokens in place pi; the initial marking M0 is the initial state. – Transitions represent actions/events: an enabled transition has enough tokens in its predecessor places; firing a transition modifies the marking. Places/transitions: conditions/events.

6 6 Transition firing rule A marking is changed according to the following rules: –A transition is enabled if there are enough tokens in each input place –An enabled transition may or may not fire –The firing of a transition modifies marking by consuming tokens from the input places and producing tokens in the output places 2 2 3 2 2 3
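
A minimal sketch (not from the slides) of this firing rule in Python; the net, place, and transition names are illustrative:

```python
# Firing rule sketch: pre[t]/post[t] give the arc weights from input places
# to t and from t to output places; a marking maps places to token counts.
pre  = {"t1": {"p1": 1}, "t2": {"p2": 2}}
post = {"t1": {"p2": 1, "p3": 1}, "t2": {"p4": 3}}

def enabled(t, marking):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in pre[t].items())

def fire(t, marking):
    """Firing consumes tokens from input places and produces tokens in output places."""
    assert enabled(t, marking)
    m = dict(marking)
    for p, w in pre[t].items():
        m[p] -= w
    for p, w in post[t].items():
        m[p] = m.get(p, 0) + w
    return m

m0 = {"p1": 1, "p2": 0, "p3": 0, "p4": 0}
m1 = fire("t1", m0)       # {'p1': 0, 'p2': 1, 'p3': 1, 'p4': 0}
print(enabled("t2", m1))  # False: t2 needs 2 tokens in p2, only 1 is there
```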

7 7 Concurrency, causality, choice t1 t2 t3t4 t5 t6

8 8 Concurrency, causality, choice Concurrency t1 t2 t3t4 t5 t6

9 9 Concurrency, causality, choice Causality, sequencing t1 t2 t3t4 t5 t6

10 10 Concurrency, causality, choice Choice, conflict t1 t2 t3t4 t5 t6

11 11 Concurrency, causality, choice Choice, conflict t1 t2 t3t4 t5 t6

12 12 Communication Protocol P1 Send msg Receive Ack Send Ack Receive msg P2

18 18 Producer-Consumer Problem Produce Consume Buffer

32 32 Producer-Consumer with priority Consumer B can consume only if buffer A is empty Inhibitor arcs A B

33 33 PN properties Behavioral: depend on the initial marking (most interesting) –Reachability –Boundedness –Schedulability –Liveness –Conservation Structural: do not depend on the initial marking (often too restrictive) –Consistency –Structural boundedness

34 34 Reachability Marking M is reachable from marking M0 if there exists a sequence of firings σ = M0 t1 M1 t2 M2 … tn M that transforms M0 into M. The reachability problem is decidable. Example (net with places p1..p4 and transitions t1, t2, t3): M0 = (1,0,1,0); firing t3 gives M1 = (1,0,0,1); firing t2 gives M = (1,1,0,0), so M = (1,1,0,0) is reachable from M0.

35 35 Liveness Liveness: from any marking, any transition can eventually become fireable. – Liveness implies deadlock freedom, not vice versa. Example: not live.

37 37 Liveness Liveness: from any marking, any transition can eventually become fireable. – Liveness implies deadlock freedom, not vice versa. Example: deadlock-free.

39 39 Boundedness Boundedness: the number of tokens in any place cannot grow indefinitely. – A 1-bounded net is also called safe. – Application: places represent buffers and registers (check that there is no overflow). Example: unbounded.

44 44 Conservation Conservation: the total number of tokens in the net is constant Not conservative

46 46 Conservation Conservation: the total number of tokens in the net is constant Conservative 2 2

47 47 Analysis techniques Structural analysis techniques –Incidence matrix –T- and S- Invariants State Space Analysis techniques –Coverability Tree –Reachability Graph

48 48 Incidence Matrix Necessary condition for marking M to be reachable from the initial marking M0: there exists a firing vector v such that M = M0 + A v. For the example net (places p1, p2, p3; transitions t1, t2, t3):

          t1  t2  t3
  A =  p1 [ -1   0   0 ]
       p2 [  1   1  -1 ]
       p3 [  0  -1   1 ]

49 49 State equations E.g. reachability of M = |0 0 1|T from M0 = |1 0 0|T:

  | 0 |   | 1 |   | -1  0  0 |   | 1 |
  | 0 | = | 0 | + |  1  1 -1 | . | 0 |        v1 = |1 0 1|T
  | 1 |   | 0 |   |  0 -1  1 |   | 1 |

but also v2 = |1 1 2|T, or any vk = |1 k k+1|T.
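
A small sketch of this necessary-condition check (not part of the original slides), reusing the matrix A and the markings from this example:

```python
# State-equation check: M reachable from M0 only if M = M0 + A v for some
# non-negative integer firing vector v (rows p1..p3, columns t1..t3).
A = [[-1,  0,  0],
     [ 1,  1, -1],
     [ 0, -1,  1]]
M0 = [1, 0, 0]
M  = [0, 0, 1]

def satisfies_state_equation(M0, M, A, v):
    return all(M0[i] + sum(A[i][j] * v[j] for j in range(len(v))) == M[i]
               for i in range(len(M0)))

print(satisfies_state_equation(M0, M, A, [1, 0, 1]))  # True (v1)
print(satisfies_state_equation(M0, M, A, [1, 1, 2]))  # True (v2); any vk = (1, k, k+1) works
```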

50 50 Necessary Condition only The state equation gives only a necessary condition: for this net the firing vector (1, 2, 2) over t1, t2, t3 satisfies M = M0 + A v, yet the net deadlocks, so no firing sequence with these firing counts can actually be executed.

51 51 State equations and invariants Solutions of A x = 0 (i.e. M = M0 + A x with M = M0) are T-invariants: – sequences of transitions that (if fireable) bring the net back to the original marking – periodic schedule in SDF – e.g. x = |0 1 1|T for the matrix A above

52 52 Application of T-invariants Scheduling – Cyclic schedules: need to return to the initial state. Example (dataflow graph with actors i, *k2, *k1, +, o): schedule i *k2 *k1 + o; T-invariant: (1, 1, 1, 1, 1).

53 53 State equations and invariants Solutions of y A = 0 are S-invariants: – sets of places whose weighted total token count does not change after the firing of any transition (y M = y M’) – e.g. y = |1 1 1|T, since

           p1  p2  p3
  AT =  t1 [ -1   1   0 ]
        t2 [  0   1  -1 ]
        t3 [  0  -1   1 ]
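
As a small consistency check (not on the slides), the invariants quoted here can be verified against the same incidence matrix:

```python
# Verify the T-invariant x = (0,1,1) and the S-invariant y = (1,1,1)
# against A (rows p1..p3, columns t1..t3).
A = [[-1,  0,  0],
     [ 1,  1, -1],
     [ 0, -1,  1]]

def is_t_invariant(A, x):
    """A x = 0: firing each transition x[j] times restores the marking."""
    return all(sum(A[i][j] * x[j] for j in range(len(x))) == 0 for i in range(len(A)))

def is_s_invariant(A, y):
    """y A = 0: the y-weighted token count is preserved by every firing."""
    return all(sum(y[i] * A[i][j] for i in range(len(y))) == 0 for j in range(len(A[0])))

print(is_t_invariant(A, [0, 1, 1]))  # True: firing t2 then t3 returns to the original marking
print(is_s_invariant(A, [1, 1, 1]))  # True: the total token count never changes in this net
```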

54 54 Application of S-invariants Structural Boundedness: bounded for any finite initial marking M0. Existence of a positive S-invariant is a sufficient condition for structural boundedness: – the initial marking is finite – the weighted token count does not change

55 55 Summary of algebraic methods Extremely efficient (polynomial in the size of the net) Generally provide only necessary or sufficient information Excellent for ruling out some deadlocks or otherwise dangerous conditions Can be used to infer structural boundedness

56 56 Coverability Tree Build a (finite) tree representation of the markings (Karp-Miller algorithm). Label the initial marking M0 as the root of the tree and tag it as new. While new markings exist do: – select a new marking M – if M is identical to a marking on the path from the root to M, then tag M as old and go to another new marking – if no transitions are enabled at M, tag M as dead-end – while there exist enabled transitions at M do: obtain the marking M’ that results from firing t at M; if on the path from the root to M there exists a marking M’’ such that M’(p) >= M’’(p) for each place p and M’ is different from M’’, then replace M’(p) by ω for each p such that M’(p) > M’’(p); introduce M’ as a node, draw an arc with label t from M to M’, and tag M’ as new.
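
A hedged Python sketch of this construction (not from the slides): markings are tuples over places, and float('inf') stands in for the ω symbol. The example net is the one used on the next slides.

```python
OMEGA = float("inf")   # stands in for the omega symbol

def ancestors(nodes, idx):
    """Markings on the path from the root to node idx (root first)."""
    path = []
    while idx is not None:
        path.append(nodes[idx][0])
        idx = nodes[idx][2]
    return list(reversed(path))

def coverability_tree(m0, transitions):
    """transitions: {name: (pre, post)} as token vectors; returns (marking, label, parent) nodes."""
    nodes = [(m0, None, None)]
    new = [0]
    while new:
        idx = new.pop()
        m = nodes[idx][0]
        path = ancestors(nodes, idx)
        if m in path[:-1]:
            continue                       # 'old': identical marking already on the path
        for t, (pre, post) in transitions.items():
            if all(m[p] >= pre[p] for p in range(len(m))):            # t enabled at m
                m2 = tuple(m[p] - pre[p] + post[p] for p in range(len(m)))
                for m3 in path:            # generalize to omega if m2 covers a marking on the path
                    if m2 != m3 and all(a >= b for a, b in zip(m2, m3)):
                        m2 = tuple(OMEGA if a > b else a for a, b in zip(m2, m3))
                nodes.append((m2, t, idx))
                new.append(len(nodes) - 1)
    return nodes

# Net of the next slides (places p1..p4): t1: p1 -> p2, t2: p3 -> p2, t3: p2 -> p3 + p4
transitions = {"t1": ((1, 0, 0, 0), (0, 1, 0, 0)),
               "t2": ((0, 0, 1, 0), (0, 1, 0, 0)),
               "t3": ((0, 1, 0, 0), (0, 0, 1, 1))}
for marking, label, _ in coverability_tree((1, 0, 0, 0), transitions):
    print(label, marking)   # p4 becomes inf (omega): the place is unbounded
```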

57 57 Coverability Tree Boundedness is decidable with the coverability tree. Example (places p1 p2 p3 p4; transitions t1, t2, t3): 1000 –t1→ 0100 –t3→ 0011 –t2→ 0101; since 0101 covers the marking 0100 on the path from the root, p4 is replaced by ω and the node becomes 010ω.

62 62 Coverability Tree Is (1) reachable from (0)? Example: a single place p1, transition t1 produces 2 tokens in p1, transition t2 consumes 2 tokens from p1. The coverability tree is (0) –t1→ (ω). The ω node stands for (0), (1), (2), …, but the reachable markings are only (0), (2), (0), (2), …: the coverability tree cannot solve the reachability problem.

65 65 Reachability graph For bounded nets the Coverability Tree is called Reachability Tree, since it contains all possible reachable markings. Example (places p1 p2 p3; transitions t1, t2, t3): 100 –t1→ 010 –t3→ 001 –t2→ 100.

69 69 Subclasses of Petri nets Reachability analysis is too expensive State equations give only partial information Some properties are preserved by reduction rules e.g. for liveness and safeness Even reduction rules only work in some cases Must restrict class in order to prove stronger results

70 70 PNs Summary PN Graph: places (buffers), transitions (actions), tokens (data) Firing rule: a transition is enabled if there are enough tokens in each input place Properties – Structural (consistency, structural boundedness…) – Behavioral (reachability, boundedness, liveness…) Analysis techniques – Structural (give only necessary or sufficient conditions): state equations, invariants – Behavioral: coverability tree, reachability graph Subclasses: Marked Graphs, State Machines, Free-Choice PNs

71 71 Subclasses of Petri nets: MGs Marked Graph: every place has at most 1 predecessor and 1 successor transition Models only causality and concurrency (no conflict) NO YES

72 72 Subclasses of Petri nets: SMs State machine: every transition has at most 1 predecessor and 1 successor place Models only causality and conflict –(no concurrency, no synchronization of parallel activities) YES NO

73 73 Free-Choice Petri Nets (FCPN) Free-Choice (FC), Extended Free-Choice, Confusion (not Free-Choice). Free-Choice: the outcome of a choice depends on the value of a token (abstracted non-deterministically) rather than on its arrival time; every transition after a choice has exactly 1 predecessor place.

74 74 Free-Choice nets Introduced by Hack (‘72) Extensively studied by Best (‘86) and Desel and Esparza (‘95) Can express concurrency, causality and choice without confusion Very strong structural theory –necessary and sufficient conditions for liveness and safeness, based on decomposition –exploits duality between MG and SM

75 75 MG (& SM) decomposition An Allocation is a control function that chooses which transition fires among several conflicting ones (A: P → T). Eliminate the subnet that would be inactive if we were to use the allocation... Reduction Algorithm – Delete all unallocated transitions – Delete all places that have all input transitions already deleted – Delete all transitions that have at least one input place already deleted Obtain a Reduction (one for each allocation) that is a conflict-free subnet (see the sketch below).
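
A hedged sketch of the three deletion steps above; the net representation (pre/post sets per transition) and the toy example are illustrative, not from the slides:

```python
def reduction(places, transitions, pre, post, allocation):
    """pre[t]/post[t]: sets of input/output places of t; allocation: conflicting place -> chosen transition."""
    # 1. delete all unallocated transitions (non-chosen successors of each conflicting place)
    dead_t = {t for p, chosen in allocation.items()
              for t in transitions if p in pre[t] and t != chosen}
    dead_p = set()
    changed = True
    while changed:
        changed = False
        # 2. delete places whose input transitions have all been deleted
        for p in places - dead_p:
            producers = {t for t in transitions if p in post[t]}
            if producers and producers <= dead_t:
                dead_p.add(p); changed = True
        # 3. delete transitions that have lost at least one input place
        for t in transitions - dead_t:
            if pre[t] & dead_p:
                dead_t.add(t); changed = True
    return places - dead_p, transitions - dead_t

# Toy net: p2 is a choice place with successors t2 and t3; allocating t2 removes t3.
pre  = {"t1": {"p1"}, "t2": {"p2"}, "t3": {"p2"}}
post = {"t1": {"p2"}, "t2": {"p3"}, "t3": {"p3"}}
print(reduction({"p1", "p2", "p3"}, {"t1", "t2", "t3"}, pre, post, {"p2": "t2"}))
# all places survive, transitions {'t1', 't2'}: the remaining subnet is conflict free
```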

76 76 Choose one successor for each conflicting place : MG reduction and cover

81 81 MG reductions The set of all reductions yields a cover of MG components (T-invariants)

83 83 Hack’s theorem (‘72) Let N be a Free-Choice PN. N has a live and safe initial marking (is well-formed) if and only if: – every MG reduction is strongly connected and not empty, and the set of all MG reductions covers the net – every SM reduction is strongly connected and not empty, and the set of all SM reductions covers the net

84 84 Hack’s theorem Example of non-live (but safe) FCN

99 99 Hack’s theorem Example of non-live (but safe) FCN Deadlock

100 100 Summary of LSFC nets Largest class for which structural theory really helps Structural component analysis may be expensive (exponential number of MG and SM components in the worst case) But… –number of MG components is generally small –FC restriction simplifies characterization of behavior

101 101 Petri Net extensions Add interpretation to tokens and transitions – Colored nets (tokens have value) Add time – Time/timed Petri Nets (deterministic delay): a duration or delay attached to a place or a transition – Stochastic PNs (probabilistic delay) – Generalized Stochastic PNs (timed and immediate transitions) Add hierarchy – Place Charts Nets

102 102 Summary of Petri Nets Graphical formalism Distributed state (including buffering) Concurrency, sequencing and choice made explicit Structural and behavioral properties Analysis techniques based on –linear algebra –structural analysis (necessary and sufficient only for FC)

103 103 References – T. Murata, “Petri Nets: Properties, Analysis and Applications”, Proceedings of the IEEE, 77(4), 1989. – Petri Nets World: http://www.daimi.au.dk/PetriNets/ – J. Cortadella, M. Kishinevsky, L. Lavagno, A. Yakovlev, “Synthesizing Petri nets from state-based models”, 1995 IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers.

104 104 Outline Part 3: Models of Computation – FSMs – Discrete Event Systems – CFSMs – Petri Nets – Data Flow Models – The Tagged Signal Model

105 105 Data-flow networks Kahn Process Networks Dataflow Networks –actors, tokens and firings Static Data-flow –static scheduling –code generation –buffer sizing Other Data-flow models –Boolean Data-flow –Dynamic Data-flow

106 106 Data-flow networks Powerful formalism for data-dominated system specification Partially-ordered model (no over-specification) Deterministic execution independent of scheduling Used for –simulation –scheduling –memory allocation –code generation for Digital Signal Processors (HW and SW)

107 107 A bit of history Karp computation graphs (‘66): seminal work Kahn process networks (‘74): formal model Dennis Data-flow networks (‘75): programming language for the MIT DF machine Several recent implementations – graphical: Ptolemy (UCB), Khoros (U. New Mexico), Grape (U. Leuven), SPW (Cadence), COSSAP (Synopsys) – textual: Silage (UCB, Mentor), Lucid, Haskell

108 108 Kahn Process networks Network of sequential processes running concurrently and communicating through single-sender single-receiver unbounded FIFOs. Process: mapping from input sequences to output sequences (streams). Blocking read: a process attempting to read from an empty channel stalls until the buffer has enough tokens. Determinacy: the order in which processes are fired does not affect the final result. Difficult to schedule.
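
A minimal sketch (not from the slides) of a two-process Kahn network in Python: queue.Queue provides the single-sender single-receiver FIFO with a blocking read, and the unbounded FIFO is approximated by an unlimited queue size.

```python
import queue
import threading

ch = queue.Queue()                # FIFO channel between the two processes

def producer():
    for n in range(5):
        ch.put(n * n)             # write never blocks (unbounded FIFO)

def consumer():
    for _ in range(5):
        x = ch.get()              # blocking read: stalls until a token is available
        print("consumed", x)

threads = [threading.Thread(target=consumer), threading.Thread(target=producer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```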

109 109 Determinacy Process: “continuous mapping” of input sequence to output sequences Continuity: process uses prefix of input sequences to produce prefix of output sequences. Adding more tokens does not change the tokens already produced The state of each process depends on token values rather than their arrival time Unbounded FIFO: the speed of the two processes does not affect the sequence of data values F x1,x2,x3… y1,y2,y3…

110 110 Scheduling Multiple choices for ordering process execution Dynamic scheduling Context switching overhead Data-driven scheduling –run processes as soon as data is available –boundedness Demand-driven scheduling –run a process when its output is needed as input by another process Bounded scheduling [Parks96] –define bounds on buffers, if program deadlocks extend capacity A C B ABC…ABC…

111 111 Data-flow networks A Data-flow network is a collection of actors which are connected and communicate over unbounded FIFO queues Actors firing follows firing rules –Firing rule: number of required tokens on inputs –Function: number of consumed and produced tokens Breaking processes of KPNs down into smaller units of computation makes implementation easier (scheduling) Tokens carry values –integer, float, audio samples, image of pixels Network state: number of tokens in FIFOs +

112 112 Intuitive semantics At each time, one actor is fired When firing, actors consume input tokens and produce output tokens Actors can be fired only if there are enough tokens in the input queues

113 113 Filter example Example: FIR filter – single input sequence i(n) – single output sequence o(n) – o(n) = c1 i(n) + c2 i(n-1) Dataflow graph: the input actor i feeds the multipliers *c1 and *c2 (the edge into *c2 initially carries the delayed sample i(-1)); their outputs feed the adder +, which produces the output o. Firing the actors one token at a time computes one output sample per input sample.
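
A small sketch (not from the slides) of the same 2-tap FIR as a dataflow actor: it consumes one token on i and produces one token on o per firing, and the previous input sample plays the role of the initial token i(-1). Coefficient values are illustrative.

```python
def fir_actor(c1, c2, i_stream, i_initial=0.0):
    """Fire once per input token: o(n) = c1*i(n) + c2*i(n-1)."""
    prev = i_initial                 # token initially sitting on the delay edge, i(-1)
    for x in i_stream:
        yield c1 * x + c2 * prev
        prev = x                     # the current sample becomes the delayed token

print(list(fir_actor(0.5, 0.5, [1.0, 2.0, 3.0])))   # [0.5, 1.5, 2.5]
```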

124 124 Examples of Data Flow actors SDF: Synchronous (or, better, Static) Data Flow fixed number of input and output tokens per invocation BDF: Boolean Data Flow control token determines consumed and produced tokens + 1 1 1 FFT 1024 101 select switch TF FT TTTF

125 125 Static scheduling of DF Number of tokens produced and consumed in each firing is fixed SDF networks can be statically scheduled at compile- time –no overhead due to sequencing of concurrency –static buffer sizing Different schedules yield different –code size –buffer size –pipeline utilization

126 126 Static scheduling of SDF Based only on process graph (ignores functionality) Objective: find schedule that is valid, i.e.: –admissible (only fires actors when fireable) –periodic (cyclic schedule) (brings network back to initial state firing each actor at least once) Optimize cost function over admissible schedules

127 127 Balance equations Number of produced tokens must equal number of consumed tokens on every edge. Repetitions (or firing) vector vS of schedule S: number of firings of each actor in S. For each edge from actor A (producing np tokens per firing) to actor B (consuming nc tokens per firing): vS(A) np = vS(B) nc must be satisfied.

128 128 Balance equations BC A 3 1 1 1 2 2 1 1 Balance for each edge: –3 v S (A) - v S (B) = 0 –v S (B) - v S (C) = 0 –2 v S (A) - v S (C) = 0

129 129 Balance equations M vS = 0 iff S is periodic. Full rank (as in this case) ⇒ no non-zero solution ⇒ no periodic schedule (too many tokens accumulate on A->B or B->C).

  M = |  3  -1   0 |
      |  0   1  -1 |
      |  2   0  -1 |

130 130 Balance equations Non-full rank ⇒ infinitely many solutions exist (linear space of dimension 1). Any multiple of q = |1 2 2|T satisfies the balance equations. ABCBC and ABBCC are minimal valid schedules; ABABBCBCCC is a non-minimal valid schedule.

  M = |  2  -1   0 |
      |  0   1  -1 |
      |  2   0  -1 |

131 131 Static SDF scheduling Main SDF scheduling theorem (Lee ‘86): –A connected SDF graph with n actors has a periodic schedule iff its topology matrix M has rank n-1 –If M has rank n-1 then there exists a unique smallest integer solution q to M q = 0 Rank must be at least n-1 because we need at least n-1 edges (connected-ness), providing each a linearly independent row Admissibility is not guaranteed, and depends on initial tokens on cycles
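
For a connected graph the smallest q can be computed by propagating firing rates edge by edge; a hedged sketch (not from the slides), using the three-actor example from the previous slide:

```python
# Compute the smallest integer repetition vector of a connected SDF graph.
from fractions import Fraction
from math import lcm   # Python 3.9+

edges = [("A", "B", 2, 1),   # (producer, consumer, tokens produced, tokens consumed)
         ("B", "C", 1, 1),
         ("A", "C", 2, 1)]

def repetition_vector(edges):
    adj = {}
    for a, b, prod, cons in edges:
        adj.setdefault(a, []).append((b, Fraction(prod, cons)))
        adj.setdefault(b, []).append((a, Fraction(cons, prod)))
    rate = {edges[0][0]: Fraction(1)}
    stack = [edges[0][0]]
    while stack:                               # propagate rates over the connected graph
        a = stack.pop()
        for b, ratio in adj[a]:
            r = rate[a] * ratio                # balance: rate[a]*prod = rate[b]*cons
            if b not in rate:
                rate[b] = r
                stack.append(b)
            elif rate[b] != r:
                return None                    # inconsistent rates: topology matrix has full rank
    scale = lcm(*(r.denominator for r in rate.values()))
    return {a: int(r * scale) for a, r in rate.items()}

print(repetition_vector(edges))   # {'A': 1, 'B': 2, 'C': 2}, i.e. q = |1 2 2|T
```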

132 132 Admissibility of schedules No admissible schedule: BACBA, then deadlock… Adding one token on A->C makes BACBACBA valid Making a periodic schedule admissible is always possible, but changes specification... BC A 1 2 1 3 2 3

133 133 From repetition vector to schedule Repeatedly schedule fireable actors up to number of times in repetition vector q = |1 2 2| T Can find either ABCBC or ABBCC If deadlock before original state, no valid schedule exists (Lee ‘86) BC A 2 1 1 1 2 2 1 1
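
The construction above can be sketched as a simulation (an assumption of this note, not code from the slides): keep firing any enabled actor that still has firings left in q, and report failure if the net gets stuck first.

```python
def build_schedule(edges, q, tokens=None):
    """edges: (src, dst, produced, consumed); q: repetition vector; tokens: initial tokens per edge index."""
    tokens = dict(tokens or {})
    remaining = dict(q)
    schedule, progress = [], True
    while any(remaining.values()) and progress:
        progress = False
        for actor in q:
            if remaining[actor] == 0:
                continue
            inputs = [(i, e) for i, e in enumerate(edges) if e[1] == actor]
            if all(tokens.get(i, 0) >= e[3] for i, e in inputs):   # enough tokens on every input edge
                for i, e in inputs:
                    tokens[i] = tokens.get(i, 0) - e[3]
                for i, e in enumerate(edges):
                    if e[0] == actor:
                        tokens[i] = tokens.get(i, 0) + e[2]
                remaining[actor] -= 1
                schedule.append(actor)
                progress = True
    return schedule if not any(remaining.values()) else None      # None: deadlock, no valid schedule

edges = [("A", "B", 2, 1), ("B", "C", 1, 1), ("A", "C", 2, 1)]
print(build_schedule(edges, {"A": 1, "B": 2, "C": 2}))   # ['A', 'B', 'C', 'B', 'C'], i.e. ABCBC
```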

134 134 From schedule to implementation Static scheduling used for: –behavioral simulation of DF (extremely efficient) –code generation for DSP –HW synthesis (Cathedral by IMEC, Lager by UCB, …) Code generation by code stitching (chaining custom code for each actor) Issues in code generation –execution speed (pipelining, vectorization) –code size minimization –data memory size minimization (allocation to FIFOs)

135 135 Code size minimization Assumptions (based on DSP architecture): –subroutine calls expensive –fixed iteration loops are cheap (“zero-overhead loops”) Absolute optimum: single appearance schedule e.g. ABCBC -> A (2BC), ABBCC -> A (2B) (2C) may or may not exist for an SDF graph… buffer minimization relative to single appearance schedules (Bhattacharyya ‘94, Lauwereins ‘96, Murthy ‘97)

136 136 Buffer size minimization Assumption: no buffer sharing. Example (actors A and B feeding C, C feeding D): q = |100 100 10 1|T. Valid SAS: (100 A) (100 B) (10 C) D requires 210 units of buffer area. Better (factored) SAS: (10 (10 A) (10 B) C) D requires 30 units of buffer area, but requires 21 loop initiations per period (instead of 3).
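
These buffer counts can be reproduced by simulating the looped schedules and tracking the peak token count on every edge. A hedged sketch follows; the edge rates (A and B each feed C, which feeds D, with C and D consuming 10 tokens per firing) are inferred from the repetition vector, not stated on the slide.

```python
edges = {("A", "C"): (1, 10),   # (tokens produced per source firing, consumed per sink firing)
         ("B", "C"): (1, 10),
         ("C", "D"): (1, 10)}

def run(schedule, tokens, peak):
    """schedule: actor name, (count, body) loop, or list of such items."""
    if isinstance(schedule, str):                       # fire one actor
        for (src, dst), (p, c) in edges.items():
            if dst == schedule:
                tokens[(src, dst)] -= c
            if src == schedule:
                tokens[(src, dst)] += p
                peak[(src, dst)] = max(peak[(src, dst)], tokens[(src, dst)])
    elif isinstance(schedule, tuple):                   # (count, body): a schedule loop
        for _ in range(schedule[0]):
            run(schedule[1], tokens, peak)
    else:                                               # a sequence of schedule items
        for item in schedule:
            run(item, tokens, peak)

def buffer_area(schedule):
    tokens = {e: 0 for e in edges}
    peak = {e: 0 for e in edges}
    run(schedule, tokens, peak)
    return sum(peak.values())

print(buffer_area([(100, "A"), (100, "B"), (10, "C"), "D"]))    # 210
print(buffer_area([(10, [(10, "A"), (10, "B"), "C"]), "D"]))    # 30
```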

137 137 Dynamic scheduling of DF SDF is limited in modeling power –no run-time choice –cannot implement Gaussian elimination with pivoting More general DF is too powerful –non-Static DF is Turing-complete (Buck ‘93) –bounded-memory scheduling is not always possible General case: thread-based dynamic scheduling (Parks ‘96: may not terminate, but never fails if feasible)

138 138 Summary of DF networks Advantages: –Easy to use (graphical languages) –Powerful algorithms for verification (fast behavioral simulation) synthesis (scheduling and allocation) –Explicit concurrency Disadvantages: –Efficient synthesis only for restricted models (no input or output choice) –Cannot describe reactive control (blocking read)

139 139 Quasi-Static Scheduling of Embedded Software Using Free-Choice Petri Nets Marco Sgroi, Alberto Sangiovanni-Vincentelli (University of California at Berkeley); Luciano Lavagno, Yosinori Watanabe (Cadence Berkeley Labs)

140 140 Outline Motivation Scheduling Free-Choice Petri Nets Algorithm

141 141 Embedded Software Synthesis System specification: set of concurrent functional blocks (DF actors, CFSMs, CSP, …) Software implementation: (smaller) set of concurrent software tasks Two sub-problems: –Generate code for each task (software synthesis) –Schedule tasks dynamically (dynamic scheduling) Goal: minimize real-time scheduling overhead

142 142 Petri Nets Model

143 143 Petri Nets Model Schedule: t 12, t 13, t 16... a = 5 c = a + b t12 t13 t16

144 144 Petri Nets Model Shared Processor + RTOS Task 1 Task 2 Task 3

145 145 Classes of Scheduling Static: schedule completely determined at compile time Dynamic: schedule determined at run-time Quasi-Static: most of the schedule computed at compile time, some scheduling decisions made at run-time (but only when necessary)

146 146 Embedded Systems Specifications
  Specification                                          Scheduling
  Data Processing (+, -, *, ...)                         Static
  Data-dependent Control (if..then..else, while..do)     Quasi-Static
  Real-time Control (preemption, suspension)             Dynamic

147 147 An example i k2 +o k1 Static Data Flow network Example: 2nd order IIR filter o(n) = k 2 i(n) + k 1 o(n-1)

148 148 Data Processing i *k2 + o *k1 Schedule: i, *k2, *k1, +, o IIR 2nd order filter o(n)=k 1 o(n-1) + k 2 i(n) Schedule: i, *k1, *k2, +, o

149 149 Data computation (Multirate) Fast Fourier Transform: i FFT o, 256 samples per FFT firing. Schedule: i i … i (256 times) FFT o o … o. Sample rate conversion (multirate Data Flow network / Petri Net with actors A, B, C, D, E, F): schedule (147 A) (147 B) (98 C) (28 D) (32 E) (160 F).

150 150 Data-dependent Control io >0 *2 /2 Schedule: i, if (i>0) then{ /2} else{ *2}, o Petri Nets provide a unified model for mixed control and data processing specifications Free-Choice (Equal Conflict) Nets: the outcome of a choice depends on the value of a token (abstracted non-deterministically) rather than on its arrival time

151 151 Existing approaches Lee - Messerschmitt ‘86 –Static Data Flow: cannot specify data-dependent control Buck - Lee ‘94 –Boolean Data Flow: scheduling problem is undecidable Thoen - Goossens - De Man ‘96 –Event graph: no schedulability check, no minimization of number of tasks Lin ‘97 –Safe Petri Net: no schedulability check, no multi-rate Thiele - Teich ‘99 –Bounded Petri Net: partial schedulability check, complex (reachability-based) algorithm

152 152 Scheduling FC Petri Nets Petri Nets provide a unified model for mixed control and dataflow specification Most properties are decidable A lot of theory available Abstract Dataflow networks by representing if-then-else structures as non-deterministic choices Non-deterministic actors (choice and merge) make the network non-determinate according to Kahn’s definition Free-Choice: the outcome of a choice depends on the value of a token (abstracted non-deterministically) rather than on its arrival time

153 153 Bounded scheduling (Marked Graphs) A finite complete cycle is a finite sequence of transition firings that returns the net to its initial state ⇒ bounded memory, infinite execution. To find a finite complete cycle, solve f(σ) D = 0, where D is the incidence matrix. Example net (t1, t2, t3 with arc weights 2):

  D = |  1   0 |
      | -2   1 |
      |  0  -2 |

T-invariant f(σ) = (4, 2, 1). For the second net (different arc weights), f(σ) D = 0 has no non-trivial solution ⇒ no schedule.

154 154 Bounded scheduling (Marked Graphs) Existence of a T-invariant is only a necessary condition. Verify that the net does not deadlock by simulating the minimal T-invariant [Lee87]. First net: T-invariant f(σ) = (4, 2, 1); simulating it yields a valid complete cycle (four firings of t1, two of t2, one of t3). Second net: T-invariant f(σ) = (3, 2, 1), but simulation reaches a deadlock: not enough initial tokens.

155 155 Free-Choice Petri Nets (FCPN) Marked Graph (MG) Free-Choice Confusion (not-Free-Choice) Free-Choice: choice depends on token value rather than arrival time easy to analyze (using structural methods)

156 156 t1t1 t2t2 t3t3 t5t5 t6t6 Bounded scheduling (Free-Choice Petri Nets) t1t1 t2t2 t3t3 t4t4 t5t5 t6t6 t7t7 t1t1 t2t2 t3t3 t5t5 t6t6 Can the “adversary” ever force token overflow?

157 157 t1t1 t2t2 t4t4 t7t7 Bounded scheduling (Free-Choice Petri Nets) t1t1 t2t2 t3t3 t4t4 t5t5 t6t6 t7t7 t1t1 t2t2 t4t4 t7t7 Can the “adversary” ever force token overflow?

158 158 Bounded scheduling (Free-Choice Petri Nets) t1t1 t2t2 t3t3 t4t4 t5t5 t7t7 t6t6 Can the “adversary” ever force token overflow?

161 161 Schedulability (FCPN) Quasi-Static Scheduling: – at compile time, find one schedule for every conditional branch – at run-time, choose one of these schedules according to the actual value of the data. Σ = {(t1 t2 t4), (t1 t3 t5)}

162 162 Bounded scheduling (Free-Choice Petri Nets) A valid schedule Σ is a set of finite firing sequences that return the net to its initial state and contains one firing sequence for every combination of outcomes of the free choices. Schedulable: Σ = {(t1 t2 t4), (t1 t3 t5)}

163 163 How to check schedulability Basic intuition: every resolution of data- dependent choices must be schedulable Algorithm: –Decompose (by applying the Reduction Algorithm) the given Equal Conflict Net (FCPN with weights) into as many Conflict-Free components as the number of possible resolutions of the non-deterministic choices. –Check if every component is statically schedulable –Derive a valid schedule, i.e. a set of finite complete cycles one for each conflict-free component

164 164 Allocatability (Hack, Teruel) An Allocation is a control function that chooses which transition fires among several conflicting ones (A: P → T). A Reduction is the Conflict-Free Net generated from one Allocation by applying the Reduction Algorithm. A FCPN is allocatable if every Reduction generated from an allocation is consistent. Theorem: A FCPN is schedulable iff – it is allocatable and – every Reduction is schedulable (following Lee)

165 165 Reduction Algorithm Example: applying the Reduction Algorithm with T-allocation A1 = {t1, t2, t4, t5, t6, t7} (the unallocated transition t3 is deleted first).

166 166 How to find a valid schedule Conflict Relation Sets: {t2, t3}, {t7, t8}. T-allocations: A1 = {t1, t2, t4, t5, t6, t7, t9, t10}, A2 = {t1, t3, t4, t5, t6, t7, t9, t10}, A3 = {t1, t2, t4, t5, t6, t8, t9, t10}, A4 = {t1, t3, t4, t5, t6, t8, t9, t10}

167 167 Valid schedule Σ = {(t1 t2 t4 t6 t7 t9 t5), (t1 t3 t5 t6 t7 t9 t5), (t1 t2 t4 t6 t8 t10), (t1 t3 t5 t6 t8 t10)}

168 168 C code implementation Σ = {(t1 t2 t1 t2 t4 t6 t7 t5), (t1 t3 t5 t6 t7 t5)}

Task 1: {
  t1;
  if (p1) {                       /* outcome of the free choice */
    t2;
    count(p2)++;
    if (count(p2) == 2) {
      t4;
      count(p2) = count(p2) - 2;
    }
  } else {
    t3;
    t5;
  }
}

Task 2: { t6; t7; t5; }

169 169 Application example: ATM Switch Input cells: accept? Output cells: emit? Internal buffer Clock (periodic) Incoming cells (non-periodic) Outgoing cells No static schedule due to: –Inputs with independent rates (need Real-Time dynamic scheduling) –Data-dependent control (can use Quasi-Static Scheduling)

170 170 Petri Nets Model

171 171 Functional decomposition 4 Tasks Accept/discard cell Output time selector Output cell enablerClock divider

172 172 Decomposition with min # of tasks 2 Tasks Input cell processing Output cell processing

173 173 Real-time scheduling of independent tasks + RTOS Shared Processor Task 1 Task 2

174 174 ATM: experimental results
  SW implementation    QSS (2 tasks)    Functional partitioning (4+1 tasks)
  Number of tasks      2                5
  Lines of C code      1664             2187
  Clock cycles         197526           249726

175 175 Conclusion Advantages of Quasi-Static Scheduling: – QSS minimizes run-time overhead with respect to Dynamic Scheduling by automatically partitioning the system functions into a minimum number of concurrent tasks – The underlying model is FCPN: schedulability can be checked before code generation Related work: – Larger PN classes – Code optimizations

