
1
LEARNING ALGORITHMS for SERVOMECHANISM TIME-SUBOPTIMAL CONTROL
1 - Time Optimal Control - Switching Function (SwF)
2 - Sliding Mode Control (SMC), Adaptive Sliding Mode Control
3 - Learning Control (LC) based on SMC - approximation of the SwF
4 - LC based on Neural Nets - quasi-real-time computation
5 - LC based on Identification - real-time computation of the SwF
6 - Real-Time Simulation
M. Alexík, University of Žilina, Slovak Republic

2
Laboratory Model of Servomechanism: (6+1) x 0.6 kg cart with variable load; time and position display; DC drive with gear; hand control; communication with PC over RS-232; spring load; Atmel microcontroller (µP).
GOAL: Derivation of a time-optimal control algorithm for a servomechanism with variable load. "Time Optimal (feedback) Control" - "Sliding Mode Control" - estimation of the switching function (a switching curve, or its approximation by a line or polynomial). For a variable, unknown load of the servomechanism, time-suboptimal control requires a learning algorithm that searches for the switching function (curve or line).
Problem: nonlinearities - variable friction; two springs - insensitivity (dead zone) in the output variable.
M. Alexík, KEGA, 06-08, Žilina, Sept. 2008

3
Physical Model of Servomechanism - real-time simulation
State-space model: dx/dt = A x + b u(t), with control error e(t) = w(t) - y(t) and x1(t) = e(t).
Transfer function: S(s) = Km / (s (Tem s + 1)), with Km = 1/b, Tem = m/b, where m = weights (changeable) and b = coefficient of friction (changeable); hence Km and Tem are also changeable. Weights sit on the circular mechanism and on the cart (colour-coded: brown, green, purple, blue).
Controller output shown at 20x reduced scale: Umax = 5 [V], Umin = -5 [V]. D/A converter: pulse modulation of the action variable u(k), u(k)max = 5 [V], u(k)min = -5 [V].
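The model above can be exercised with a minimal discrete-time sketch: forward-Euler integration of S(s) = Km/(s(Tem s + 1)) in state form (dx1/dt = x2, dx2/dt = (Km u - x2)/Tem). The values of Km, Tem and the step size below are illustrative assumptions; only Umax = 5 V comes from the slide.

```python
def simulate(Km=1.0, Tem=0.5, u=5.0, T0=0.005, steps=2000):
    """Forward-Euler simulation of S(s) = Km / (s*(Tem*s + 1))."""
    x1, x2 = 0.0, 0.0              # position, speed
    for _ in range(steps):
        x1 += T0 * x2
        x2 += T0 * (Km * u - x2) / Tem
    return x1, x2

x1, x2 = simulate()
# with a constant input the speed settles at Km*u = 5.0
```

For a constant input the speed converges to Km·u, which is why both Km and Tem can later be read off a single step response.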

4
Time-Optimal Responses - digital simulation
Hysteresis (insensitivity / dead zone) on the controller output. Why do we need hysteresis in the controller output? The controller output has to be free of oscillation (zero) in steady state. But then there is a small control error in steady state, which depends on the controller output, the sampling interval and the plant dynamics. Under good conditions the transient state is also without oscillation. Hysteresis in these simulation examples: deS = (-0.05, 0.05).
Analog model + real-time hardware-in-the-loop simulation. Sampling intervals: 5, 10, 20 [ms]. Problem with interrupts: DOS, Linux, W98, XP.
Position measurement: 1 m = 2600 impulses, so 1 impulse is about 0.38 mm. Speed measurement: 0.1 m/s = 260 imp/s = 1.3 imp / 5 ms.
(Plots: L [m] versus t [s]; controller output at 20x reduced scale.)

5
Time-Optimal Responses - real-time simulation: the speed-measurement problem
Sampling interval 5 [ms], no filter, no noise; sampling interval 20 [ms]. Add a special noise signal to the measured position to eliminate the speed quantization error, then filter. Alternatively, use a state observer so that position and speed are obtained by state reconstruction (see later).

6
Optimal responses and trajectories
Phase plane: x1(t) = e(t) [rad] (position error) versus x2 = e'(t) [rad/s], where e'(t) = d/dt e(t). Cp = e(t) / e'(t) is the optimal slope of the switching line; at the switching point, Cp3 = x1,3(Cp) / x2,3(Cp) = tg(α3,p).
Cases: 1 - nominal Jm (T1, K1); 2 - J = 5·Jm (T2, K2); 3 - J = 10·Jm (T3, K3).
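For the double-integrator limit of the plant (S(s) = Km/s^2) the time-optimal switching curve, and hence the local slope Cp = x1/x2 used on this slide, has a closed form; for Km/(s(Tem s + 1)) the curve must be computed numerically. A sketch under that double-integrator assumption:

```python
def switching_curve(x2, Km=1.0, Umax=5.0):
    """Position error x1 at which to switch, given speed error x2
    (double-integrator case: braking distance x2**2 / (2*Km*Umax))."""
    return -x2 * abs(x2) / (2.0 * Km * Umax)

def slope_Cp(x2, Km=1.0, Umax=5.0):
    """Local slope Cp = x1/x2 of the switching curve (cf. Cp = tg(alpha))."""
    return switching_curve(x2, Km, Umax) / x2
```

Note that |Cp| grows with |x2|, which is why a single straight switching line is only suboptimal and its best slope depends on the set point.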

7
Optimal trajectories and the switching curve
One switching curve (switching function), but several switching lines (depending on the set point): switching line for w = 100 [rad/s] - Cp3; switching line for w = 300 [rad/s] - Cp1; Cp1 < Cp3.
Plant: S(s) = Km / (s (Tem s + 1)), here with Tem = 0.108 [s].

8
Switching curve as a function: it can be computed only for known Km, Tem.
Relay control law, with deS the hysteresis of the state-variable measurement:
u[x(t)] = 0 for -deS ≤ V[x(t)] ≤ deS,
u[x(t)] = Umax for V[x(t)] > deS,
u[x(t)] = Umin for V[x(t)] < -deS.
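The three-level relay above is straightforward to state in code. The sign convention (Umax on the positive side of the band) is an assumption, since the comparison operators were lost in the original slide:

```python
def relay(V, Umax=5.0, Umin=-5.0, deS=0.05):
    """Three-level relay: zero output inside the hysteresis band deS
    (dead zone), saturated output outside it."""
    if -deS <= V <= deS:
        return 0.0
    return Umax if V > deS else Umin
```

The dead zone is what keeps the controller output at exactly zero in steady state, at the price of the small residual control error discussed on slide 4.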

9
Sliding Mode Control (SMC)
Structure: controller (relay u+ / u-) driven by the switching function s(x) -> plant -> state x = (x1 [m], x2 [m/s]). In sliding mode the trajectory "slides" along the sliding line between the regions u > 0 and u < 0. Condition of SMC: s·(ds/dt) < 0 (Lyapunov function V = s²/2). Relay control: u = u+ for s(x) > 0, u = u- for s(x) < 0. Cx is the instantaneous slope of the trajectory point.
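A minimal sliding-mode sketch on a double-integrator plant (x1'' = u), using the switching function s(x) = -x2 - C·x1 from slide 17. The plant, step size and C value are assumptions for illustration, not the laboratory model:

```python
def smc_run(C=1.0, Umax=5.0, T0=1e-3, steps=5000):
    """Relay SMC of a double integrator; returns the final state."""
    x1, x2 = 1.0, 0.0                  # initial position error, speed error
    for _ in range(steps):
        s = -x2 - C * x1               # switching function
        u = Umax if s > 0 else -Umax   # relay control
        x1 += T0 * x2
        x2 += T0 * u
    return x1, x2

x1, x2 = smc_run()
```

After the reaching phase the state chatters along s = 0, so x2 ≈ -C·x1 and the error decays roughly as exp(-C·t); both states end up near the origin.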

10
Adaptive SMC - adaptive adjusting of the switching-line slope
1 - t-suboptimal control with SL (switching line); 2 - t-suboptimal adaptive control; 3 - t-optimal control (SL for t-optimal control). Ci is the initial slope of the switching line; ΔC the slope increment; the (x1, t / x1, y) plane is divided into the regions u > 0 and u < 0.

11
Adaptive Algorithm based on Sliding Mode
Phase plane (position error Xe1 [rad] versus Xe2 [rad/s]): 1 - time-optimal trajectory; 2 - adaptive trajectory; 3 - sliding-mode trajectory.
Time responses (X1 [rad] versus time [s]): 1 - time-optimal response; 2 - adaptive sliding-mode response; 3 - conventional sliding-mode response; 4 - actuating variable for response 2 (times 10).

12
Adaptive adjustment of the switching-line slope
Phase plane: position error Xe1 [rad] versus Xe2 [rad/s] (Es - angular speed error); trajectories 1, 2, 3: time-optimal, adaptive, sliding mode; slopes Cp0 (initial), Cp, Copt. The instantaneous trajectory slope is Ct = Xe1(1) / Xe2(1). Adaptation rule: if the current switching-line slope C exceeds the instantaneous trajectory slope Ct, then change Cp.

13
Optimal trajectories of all second-order systems: the slope of the switching line along the optimal trajectory has to decrease.
Examples (phase-plane trajectories S1-S4 in the (x1, x2) plane):
S1(s) = 1 / s²
S2(s) = (0.5 s + 1) / (2 s²)
S3(s) = 1 / (s (2 s + 1))
S4(s) = 1 / ((0.7 s + 1)(1.7 s + 1))

14
Automatic generation of suboptimal responses and trajectories

15
Generation of suboptimal trajectories (points 1-4 give the slope of the suboptimal switching line).
Learning = looking for points of the suboptimal switching line + a look-up table (memory) for them + classification (identification) of the load (parameters of the transfer function, i.e. of the controlled process).
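The "learning = looking for points" idea can be sketched as repeated simulated runs over candidate switching-line slopes, keeping the best settling time per load in a look-up table. The double-integrator plant, the candidate grid, and the settling criterion below are all assumptions for illustration:

```python
def run(C, m=1.0, Umax=5.0, T0=1e-3, steps=8000):
    """Relay control of x1'' = u/m with switching line s = -x2 - C*x1.
    Returns the settling time: first instant with |x1| < 0.02 and |x2| < 0.1."""
    x1, x2 = 1.0, 0.0
    for k in range(steps):
        if abs(x1) < 0.02 and abs(x2) < 0.1:
            return k * T0
        u = Umax if (-x2 - C * x1) > 0 else -Umax
        x1 += T0 * x2
        x2 += T0 * u / m
    return steps * T0                   # did not settle within the horizon

lookup = {}                             # memory: load -> best slope C
for load in (1.0, 2.0):                 # the load changes the effective mass
    lookup[load] = min((0.5, 1.0, 2.0, 4.0), key=lambda C: run(C, m=load))
```

After this off-line search, classification of the load reduces on-line control to a table lookup, which is exactly the role of the memory block on the following slides.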

16
Learning Controller based on SMC - basic problems
Structure: SMC control algorithm with switching function s(x) (u+ / u-) -> plant; learning algorithm + memory (c_sus) + classification (identification of the load number). After the learning process, recognition of the "load number" yields Km, Tem.
Possibilities of learning (historical evolution):
1 - SL - switching line: fractional changing of the SL slope and polynomial interpolation; learned: slope of the SL and polynomial parameters.
2 - LSC - linear switching curve: adaptation of the LSC profile (online and offline); learned: LSC points.
3 - NS - neural network: simulation of finishing trajectories on a neuro-model; learned: NN weights. (1-3: off-line learning.)
4 - SCL - switching curved line: continuous identification of the process parameters Km, Tem and computation of the SCL; learned: structure of the switching function. (On-line learning.)
Classification options: 1 - Hopfield net; 2 - fuzzy clustering; 3 - ART net (1-3 classify off line); 4 - parameter identification (on line).
Classification problem: nonlinearities in Km, Tem change the instantaneous values of these parameters and therefore also the step response for the same load number.

17
Learning algorithm based on the switching line (SL) - real-time simulation experiments
Switching function: s(x) = -x2 - C·x1. Memory: "look-up" tables holding, per c_sys entry, Cmin, Cmax, step, e(0), and the coefficients a0, a1, a2 of the polynomial approximation of the switching function. Structure: classification -> memory -> learning algorithm -> SMC (u+ / u-) -> plant.
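The memory block can be sketched as a plain dictionary: per load class, polynomial coefficients (a0, a1, a2) approximating the switching curve x2 = a0 + a1·x1 + a2·x1². The coefficient values below are illustrative assumptions, not identified data:

```python
# look-up table: load class -> (a0, a1, a2) of the switching-curve polynomial
MEMORY = {
    1: (0.0, -1.0, 0.0),    # load class 1: plain switching line, slope C = 1
    2: (0.0, -0.7, 0.05),   # load class 2: slightly curved switching line
}

def switching_value(load, x1, x2):
    """Evaluate s(x): positive on one side of the curve, negative on the other."""
    a0, a1, a2 = MEMORY[load]
    return (a0 + a1 * x1 + a2 * x1 * x1) - x2
```

The relay then applies u+ for s > 0 and u- for s < 0, so swapping load classes changes only a table entry, not the control code.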

18
Classification - Hopfield NET
Inputs x1 ... xn, patterns S1 ... Sm, weights wij; stochastic asynchronous dynamics; scale adaptation. Pattern coding: the transient response y(t) is sampled and coded into intervals α1 ... α7 (with codes 4α1, 4α2, ..., 4α7).

19
Classification - Hopfield net (N = 255)
Classification results for 12 test responses: 1. S1, 2. S2, 3. S4, 4. S5, 5. S1, 6. S4, 7. S5, 8. S2, 9. S2, 10. S4, 11. S3, 12. S3 (input versus output pattern). Evolution of the net energy with the number of iterations.
Advantage: quality. Disadvantages: speed, limited number of patterns, pattern numbering.

20
Fuzzy classification
Plant input data (x) and output (y): for each system S1 ... Sn, a sample vector [x1, x2, ..., x25] of the transient response y(t) and a class label. FIS (Sugeno) with inputs x1 ... x25. Data clustering determines the number of rules and membership functions; the parameters in the rule consequents of the fuzzy classifier are then estimated.

21
Fuzzy classification - results: 1. S1, 2. S3, 3. S3, 4. S3, 5. S2, 6. S3, 7. S2, 8. S3, 9. S4, 10. S3, 11. S4, 12. S2.
Advantage: quality. Disadvantages: too many parameters, necessity to keep the data patterns.

22
Classification - ART network
Inputs x1 ... xn, outputs y1 ... ym, weights wij and tij; phases: initialisation, recognition, comparison, searching, adaptation. The transient response y(t) is coded (4α1, 4α2, ..., 4α7) and mapped to control signals 1 and 2.
Advantages: quality, speed.

23
Learning switching curve (LSC) - definition
The SC for t-optimal control is linearized: the LSC is a piecewise-linear approximation through points x(m), x(n), ... in the (x1, x2) plane, with a chosen step (LSC step = 1, LSC step = 2); a method r sets the LSC points.

24
Settings of the LSC profile (1st learning step)
On-line: LSC points set according to the adaptation. Off-line: LSC points set according to the trajectory profile.
Left plot (x1, t / x2, y): 1 - trajectory; 2 - LSC; 3 - SC for t-optimal control; 4 - system output. Right plot: 1 - LSC according to adaptation; 2 - LSC according to trajectory; 3 - SC for t-optimal control; 4 - system output; ΔC - slope increment.

25
Control on the LSC for different set points
Plots (x1, t / x2, y): 1 - LSC in single steps; 2 - SC for t-optimal control; 3 - system output. Variants: according to adaptation, according to the trajectory profile, and both together.

26
Learning algorithm based on the LSC - real-time simulation experiments
Structure: classification -> memory (look-up table of LSC profiles, c_sus, slope C) -> learning algorithm -> SMC with switching function sLPK(x) (u+ / u-) -> system.

27
Learning algorithm based on neural networks (NN) - basic description
Two neural networks, NS1 and NS2. First step: from measured values of the input (Umax, Umin) and the output y(k), train NS1. NS1 can then generate t-optimal phase trajectories, which are used to train NS2. Second step: t-optimal control with NS2 as the switching function. It is possible to find t-suboptimal control from only ONE loop response (with a switching line). This t-suboptimal control is valid for all set points (but only for one combination of loads).
NS1: 2 layers (6 and 1 neurons) with linear activation functions; a model of the servo-system output in inverted time; n = transfer-function order (2, 3).
NS2: 3 layers, a model of the switching function. Input layer: 6 neurons with tangent-sigmoid activation. Hidden layer: 6 neurons with linear activation. Output layer: 1 neuron with linear activation.
For a 2nd-order transfer function, approximately 300 simulation points are needed as the substitution of the switching function.
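The key trick behind NS1 is the plant model "in inverted time": integrating backwards from the origin under u = ±Umax traces exactly the two branches of the switching curve. Here is that trick sketched with direct reverse-time simulation of a double integrator (x1'' = u) standing in for the neural model; the plant and step size are assumptions:

```python
def curve_branch(u, T0=1e-3, steps=1000):
    """Reverse-time trajectory of x1'' = u starting at the origin;
    the visited points are samples of one switching-curve branch."""
    x1, x2, pts = 0.0, 0.0, []
    for _ in range(steps):
        x1 -= T0 * x2          # reverse-time integration
        x2 -= T0 * u
        pts.append((x1, x2))
    return pts

branch = curve_branch(u=-5.0)  # branch entered with u = Umin
# each point approximately obeys x1 = -x2**2 / (2*Umax) with Umax = 5
```

NS2 is then simply a smooth fitted approximation of such sampled points, so that the relay can evaluate the switching function at any state.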

28
Learning algorithm based on NN - steps of computation
Plots of x2(t), y(t) versus x1(t), t: output (1st step), output (2nd step), phase trajectory (1st step), phase trajectory (2nd step), switching function (1st step), switching function (2nd step).
1st step: real-time response. 2nd step: (a) off-line computation of the switching function - 5 [s] under DOS, 3 [s] under Windows; on-line computation in progress; (b) real-time suboptimal time response.

29
Learning algorithm based on NN - real-time simulation experiment
Structure: classification -> memory (NS2 weights WNS2, c_sus) -> learning algorithm with model NN1 and a simulation block driven by the NN model -> SMC with switching function sNS2(x) (u+ / u-) -> system. NN model in inverted time; NN switching function.

30
Learning algorithm based on NN - steps of computation
(Plots as on slide 28: outputs, phase trajectories and switching functions for the 1st and 2nd steps, in the x2(t), y(t) versus x1(t), t plane.)

31
Learning algorithm based on NN - simulation experiments
Load 1+0: settling time tR = 2.83 [s] (3.31); IAE = 1.53 [Vs] (1.63).
Load 1+2: tR = 3.68 [s] (3.99); IAE = 1.76 [Vs] (1.81).
Load 1+4: tR = 4.17 [s] (4.74); IAE = 1.89 [Vs] (1.93).
Load 1+6: tR = 4.54 [s] (4.88); IAE = 1.98 [Vs] (2.01).

32
Optimal Trajectories for a 3rd-Order Controlled System Computed by Neural Networks
S(s) = 4 / (s (0.7 s + 1)(s + 2)).
State space (x1, x2, x3): initial switching plane; switching plane according to NN2; points of the phase trajectories from simulation; final state; model phase trajectories for u = Umax and u = Umin. Response y(t): first control according to the switching plane, second control according to neural net NN2.

33
Classification with Identification - 3 possibilities
Plant: S(s) = Km / (s (Tem s + 1)); discrete form S(z) = (b1 z⁻¹ + b2 z⁻²) / (1 + a1 z⁻¹ + a2 z⁻²). Step response of the transfer function: h(t) = Km t + Km Tem exp(-t/Tem) - Km Tem.
Analytical derivation of the parameters Km and Tem is possible with static optimisation or continuous identification:
1. Static optimization from h(t): h(t) Km⁻¹ - t = Tem (exp(-t/Tem) - 1).
2. Continuous identification: parameters of the discrete transfer function from identification (ai, bi), recalculated to the parameters of the continuous transfer function Km, Tem. Advantage: direct calculation of the parameters of the switching function. Disadvantage: real-time cost of the RLS algorithm.
3. Iterative computation of Km, Tem from two speed samples:
Km = [x2(t/2)]² / {Umax [2 x2(t/2) - x2(t)]}, Tem = -t / ln[1 - x2(t)/(Km Umax)],
or from the discrete parameters: Tem = T0 / ln(1/a2), Km = b1 / [T0 + Tem (a2 - 1)].
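The two-sample formulas can be checked against a synthetic step response, for which the speed obeys x2(t) = Km·U·(1 - exp(-t/Tem)). One caveat: the slide's Tem formula omits Umax, which is included here because that is what makes the pair algebraically consistent with the Km formula:

```python
import math

def identify(x2_half, x2_full, t, Umax):
    """Recover Km and Tem from speed samples x2(t/2) and x2(t) of a step response."""
    Km = x2_half**2 / (Umax * (2.0 * x2_half - x2_full))
    Tem = -t / math.log(1.0 - x2_full / (Km * Umax))
    return Km, Tem

# synthetic data generated with Km = 2, Tem = 0.5, U = 5, sampled at t = 1
U, t = 5.0, 1.0
x2_half = 2.0 * U * (1.0 - math.exp(-0.5 / 0.5))
x2_full = 2.0 * U * (1.0 - math.exp(-1.0 / 0.5))
Km, Tem = identify(x2_half, x2_full, t, U)   # recovers Km = 2.0, Tem = 0.5
```

With q = exp(-t/(2·Tem)), one finds x2(t/2)² / (2·x2(t/2) - x2(t)) = Km·U·(1-q)²/((1-q)²) = Km·U, which is why the formulas are exact in the noise-free case.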

34
Classification with Identification - the speed-measurement problem
Speed from position differencing: x2(t) = [x1(k) - x1(k-1)] / T0, where T0 is the sampling interval. Speed measurement: 0.1 m/s = 260 imp/s = 1.3 imp / 5 ms.
Plots (x1 [mm], u(k) [V] versus time [s]): 1 - control trajectory; 2 - u(k), control output, 5 [V]; 3 - controlled variable (position [mm]); 4 - set point, w = 400 [mm] in one plot, w = 0.6 [m] in the other. Settling time = 3.75 [s].
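The quantization problem is easy to make concrete: 2600 impulses per metre sampled every 5 ms means a single impulse per interval already corresponds to roughly 0.077 m/s, so differentiated position is very coarse at low speed. A sketch using the slide's numbers:

```python
IMP_PER_M = 2600                         # position resolution: impulses per metre
T0 = 0.005                               # sampling interval [s]

# speed change represented by one single impulse per sampling interval [m/s]
speed_quantum = (1.0 / IMP_PER_M) / T0

def speed_from_position(x1_now_imp, x1_prev_imp):
    """Backward difference of the impulse counter: x2 = (x1(k) - x1(k-1)) / T0."""
    return (x1_now_imp - x1_prev_imp) / IMP_PER_M / T0
```

This is why the slides propose either dithering the position signal before filtering or replacing the difference quotient with a state observer.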

35
Classification with Identification

36
Classification with Identification - state estimator
Km = [x2(t/2)]² / {Umax [2 x2(t/2) - x2(t)]}, Tem = -t / ln[1 - x2(t)/(Km Umax)].
Estimator structure: input u(k) and set point w; plant output y(t); discrete model with state matrix F, input vector b, output vector c, estimator gain h, and delay z⁻¹; state estimate x(k) = [x1(k), x2(k)]; the output error ε(k) corrects the estimate, and d[e(t)]/dt is obtained from the reconstructed state.
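A minimal state-estimator sketch in the same spirit: a Luenberger observer for position and speed on the discrete double-integrator model x(k+1) = F x(k) + g u(k), y = x1. The deadbeat gains (2 and 1/T0) are an assumed illustrative choice, not the gains used on the laboratory model:

```python
T0 = 0.005                               # sampling interval [s]

def plant_step(x, u):
    """True plant: discrete double integrator (position, speed)."""
    x1, x2 = x
    return (x1 + T0 * x2 + 0.5 * T0 * T0 * u, x2 + T0 * u)

def observer_step(xh, u, y):
    """Predictor observer: model step plus output-error correction."""
    e = y - xh[0]                        # position innovation
    x1, x2 = plant_step(xh, u)
    return (x1 + 2.0 * e, x2 + e / T0)   # deadbeat correction gains

x = (0.0, 1.0)                           # true state: 1 m/s, unknown to observer
xh = (0.0, 0.0)                          # observer starts from zero
for _ in range(10):
    y = x[0]                             # only position is measured
    xh = observer_step(xh, 0.0, y)
    x = plant_step(x, 0.0)
```

With these gains the error matrix is nilpotent, so the speed estimate becomes exact after two samples without ever differentiating the quantized position directly.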

37
Learning algorithm - Identification + state estimator: real-time hardware-in-the-loop simulation

38
Learning algorithm - Identification + state estimator: real-time hardware-in-the-loop simulation. Loads: 1+2, 1+4, 1+6.

39
Learning algorithm - Identification + state estimator: real-time hardware-in-the-loop simulation

40
Comparison of learning algorithms - loop response quality (real-time hardware-in-the-loop simulation)
Curves: set point; switching function learned with NN; switching function from identification; trajectory (neuro); state trajectory (estimator); controller output (neuro); scales 0.5·x1 and 0.4·u(k), y(t) versus x1, t [s].
Table: settling time [s] and integral of the absolute value of the error [ms] per set point [m], for the two algorithms: switching function from identification + estimation, and neuro.

41
Conclusion and outlook
2 - Nowadays the paradigm of optimal and adaptive control theory is culminating. Problems such as MIMO control, multi-level and large-scale dynamic systems with discrete events, and intelligent control still need to be solved. That demands turning the adaptive-control chapter into part of the classical theory. Moreover, we need to classify single-loop adaptive systems among the classic ones and focus on multi-level algorithms and hierarchical systems. Then we will be able to formulate a new paradigm of large-scale systems control and intelligent control.
3 - Realization of t-optimal control based on sliding mode and neural nets (real-time computation of NS1 and NS2), and likewise real-time identification with a state estimator, has to use parallel computing. Such a control algorithm can then be classified as "intelligent control".
