IDENTIFICATION OF ELECTROLYTIC CELL PARAMETERS USING A SELF-TUNING PREDICTOR

by

FREDERICK LLOYD COHEN

B.S.E.E., Tufts University, 1974

Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science at the Massachusetts Institute of Technology

May 1978

Signature of Author: [Signature redacted], Department of Electrical Engineering, May 26, 1978
Certified by: [Signature redacted], Thesis Supervisor
Accepted by: [Signature redacted], Departmental Committee on Graduate Students


IDENTIFICATION OF ELECTROLYTIC CELL PARAMETERS USING A SELF-TUNING PREDICTOR

by

FREDERICK LLOYD COHEN

Submitted to the Department of Electrical Engineering and Computer Science on May 26, 1978 in partial fulfillment of the requirements for the Degree of Master of Science.

ABSTRACT

Electrode measurements of bioelectric signals are corrupted by noise, offset, and parameter variations. The confidence an observer has that the measurements accurately reflect the bioelectric potential determines the extent to which analysis may be conducted. A self-tuning predictor is proposed to remove electrode-induced noise and offset from the observation signal, taking into account effects of parameter variations in the biological medium and at the electrode junctions.

Thesis Supervisor: Timothy L. Johnson
Title: Associate Professor of Electrical Engineering

ACKNOWLEDGEMENT

I take this opportunity to express my gratitude to my advisor, Professor Tim Johnson; working with him has been a rewarding and educational experience. Particular thanks goes to Wolf Kohn. Though we worked together for a short time, he was most helpful in introducing me to MIT's Adage computer facility. Lou Dadok was extremely helpful in preparing the electrolytic cell and obtaining the impedance vs. frequency plots. The work done in this thesis has been partially supported by funds from the National Science Foundation under Grant NSF ENG 77-05200.
DEDICATION

To my mother and step-father, Lillian and Joseph Goff

TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENT
DEDICATION
CHAPTER 1  Introduction to the Problem
  1.1 Introduction
  1.2 Summary of Thesis Results
  1.3 Organization of the Thesis
CHAPTER 2  Modelling
  2.1 Introduction
  2.2 The Model
  2.3 The Optimal Predictor
  2.4 The Self-Tuning Predictor
CHAPTER 3  Simulation Studies
  3.1 Introduction
  3.2 Implementation
  3.3 Analysis of Simulation Results
CHAPTER 4  Electrode Studies
  4.1 Introduction
  4.2 Implementation
  4.3 Discussion of Results
CHAPTER 5  Summary and Conclusions
  5.1 Summary and Conclusions
  5.2 Recommendations for Future Research
APPENDIX
FIGURES
TABLES
REFERENCES

CHAPTER 1

INTRODUCTION TO THE PROBLEM

1.1 INTRODUCTION

Galvani demonstrated, through his third experiment on contraction, the existence of a bioelectric potential [1]. Since this discovery during the last decade of the eighteenth century, the use of electrodes to measure bioelectric events has progressed significantly. Until recently, careful attention to electrode selection and application has been the means of minimizing electrode artifacts [2]. As experimental situations become more demanding, requiring greater refinement of the measured signal, this empirical method, basic band-pass filtering methods, and fixed-parameter linear filters are ineffective in selectively removing electrode effects. The adaptive predictor, introduced by Wittenmark and Astrom [4],[8], takes advantage of modern statistical filtering theory to directly estimate parameters of a minimum mean square error predictor from input-output data. The tuned parameters are then utilized for the prediction of future observations. The optimal predictor parameters are not independent of the plant parameters: we investigate the possibility of deriving the plant parameters from the estimated predictor parameters.

The usual experiment involves placement of metallic electrodes in contact with an electrolyte (electrode paste or biological material such as the skin) in order to make measurements of bioelectric potential, either in response to some underlying bioelectric event or to a stimulating current pulse. The potential actually measured at the terminals of the electrodes is a result of the stimulating event and electrode-induced distortions. Geddes points out [2] that even when procedural precautions are taken to minimize the electrochemical potential difference between electrodes, there often exists a residual potential difference which may be unstable and randomly varying. The origin of this residual voltage is hypothesized as slight differences in electrode metal or surface contamination of the electrodes. There are other sources of disturbance; for example, mechanical vibrations at the electrode-electrolyte interface generate electrical artifacts within the frequency spectrum of the bioelectric source, so that simple filtering cannot be employed without loss of the desired signal component. Environmental changes such as temperature, concentration, and pressure may result in parameter drift that would be difficult to remove with a fixed-parameter filter. The self-tuning predictor learns from experience and is capable of tracking these types of parameter fluctuations.
1.2 SUMMARY OF RESULTS

The self-tuning predictor has been applied to a simulated plant, generating input-output data digitally, and to data obtained from experiments with an electrolytic cell. The self-tuning predictor is compared to various non-adaptive predictors, including the optimal predictors. The results of the simulation studies, presented in Tables I through IV, lead us to the conclusion that the self-tuning predictor is very good at reducing the prediction error to a minimum. However, there exist parameter sets, beside the calculated optimal sets, that are comparable in their prediction capabilities. The experimental results utilizing the cell, presented in Tables V and VI, indicate that the estimated parameters of the predictor converge to values that, upon implementation in a non-adaptive predictor, have prediction capabilities comparable to the self-tuning predictor.

In conclusion, the self-tuning predictor is an appropriate means of filtering random noise from observation signals, with the added feature of convergence to parameter sets that are nearly optimal. It is also shown that the self-tuning predictor is capable of following trend variations in the plant parameters.

1.3 ORGANIZATION OF THE THESIS

This thesis deals with the applicability of a particular filtering scheme to the problem of estimating the true plant output given noisy observations, and the identification of the underlying plant parameters. In the following chapter on modelling, a continuous-time state space model is developed from an equivalent circuit model of the electrode-subject system. The continuous-time model of the plant is then discretized and expressed in auto-regressive form. The optimal predictor assuming known and constant plant parameters is derived to obtain the minimum square error. A type of extended Kalman filter for the predictor gains is then implemented for the case when the plant parameters are not a priori known or include some time variation.

The following two chapters describe the implementation of the self-tuning predictor. A simulation study was conducted to help monitor proper coding of the algorithm, along with presenting the opportunity to study the filter response in a controlled environment. The fourth chapter concerns the electrolytic cell study. A cell similar to the one studied by Johnson and Salzsieder [3] was used. The electrodes in this application performed both the stimulation and recording functions. Time series data was processed to determine estimates of the predictor parameters. The estimates are compared to those derived manually from impedance measurements on the cell. The last chapter concludes the thesis with a summary and suggestions for further research.

CHAPTER 2

MODELLING

2.1 INTRODUCTION

In order to apply modern filtering theory it is necessary to compose the problem in an appropriate manner. This chapter is intended to form a bridge between the empirical knowledge that exists concerning application of electrodes and the structured form of the theory. We begin with a linear circuit representation of the electrodes and their environment, generalize for variation in the environment, and then develop a discrete model whose output may be considered an observation of the underlying system parameters.

2.2 THE MODEL

To facilitate conceptualization of the relationship between the bioelectric source, medium, and the electrodes, refer to Figure 1a.
Electrodes are placed on the surface of a subject in order to make observations of either the underlying source signal or the response to stimulation from the electrodes themselves. A slight modification of Geddes' [2] approximate equivalent circuit for the electrode arrangement of Figure 1a is shown in Figure 1b. Johnson and Salzsieder [3] have investigated the effects of variation of certain silver/silver chloride cell parameters on cell impedance. Johnson [9] has suggested the generalization of the circuit model of Figure 1b by incorporating functional dependences of the model elements on the cell parameters. Let p be the set of time-varying parameters:

p_1 = temperature of the medium
p_2 = bulk solution concentration of ion(s) participating in the electrode reaction
p_3 = bulk solution concentration of non-participating ions
p = [p_1, p_2, p_3]

The following functional dependencies may be derived using experimental data obtained by methods presented by Geddes [1]:

R_1 = R_1(p_1, p_2, p_3)
v_{e_i} = v_{e_i}(p_1, p_2, p_3, v_s),  i = 1, 2
R_2 = R_2(p_1, p_2, v_s)
C = C(p_1, p_2, v_s)
R_3 = constant

Analysis of the circuit of Figure 1b results in the following set of equations:

\dot{x}_1(t) = \alpha x_1(t) + \beta (v_s(t) + v_e(t))     (1a)
y(t) = \gamma x_1(t) + \epsilon (v_s(t) + v_e(t))     (1b)
v_e(t) = v_{e_1}(t) + v_{e_2}(t)     (1c)

where

\alpha = -(2(R_1 + R_2) + R_3) / (C R_2 (2R_1 + R_3))
\beta = 1 / (C (2R_1 + R_3))
\gamma = -2R_3 / (2R_1 + R_3)
\epsilon = R_3 / (2R_1 + R_3)

The total junction potential, v_e(t), may be decomposed as:

v_e(t) = \delta(p) v_s(t) + v_e^0(p)

The first term on the right-hand side represents fluctuations in the half-cell potential due to the driving source voltage. The second term shows the implicit dependence of the half-cell potential on the parameters p. Substituting for v_e(t) in equations 1a,b:

\dot{x}_1(t) = \alpha x_1(t) + \beta (1 + \delta(p)) v_s(t) + \beta v_e^0(p)     (2a)
y(t) = \gamma x_1(t) + \epsilon (1 + \delta(p)) v_s(t) + \epsilon v_e^0(p)     (2b)

In general, p is dependent on time, but substantial simplification results when we assume parameter variation is significantly slower than the response time of the biological system. We then make the approximation p(t) = p_0 and v_e^0(p) = v_e^0(p_0) for all t. Recognizing that \frac{d}{dt} v_e^0(p_0) = 0, we can augment the state equation 2a, setting x_2 = v_e^0(p_0) and \dot{x}_2 = 0:

\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} =
\begin{bmatrix} \alpha & \beta \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} +
\begin{bmatrix} \beta (1 + \delta(p_0)) \\ 0 \end{bmatrix} v_s(t)     (3a)

and the observation equation is:

y(t) = \begin{bmatrix} \gamma & \epsilon \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \epsilon (1 + \delta(p_0)) v_s(t)     (3b)

The system equations are now in the matrix form:

\dot{x}(t) = A x(t) + B v_s(t)     (4a)
y(t) = C x(t) + D v_s(t)     (4b)

where y(t) and v_s(t) are scalars, x(t) is a (2x1) vector, A is a (2x2) matrix, B is a (2x1) matrix, C is a (1x2) matrix, and D is a (1x1) matrix.

The auto-regressive representation of the system may be derived by first obtaining the sampled-data equivalent of equations 4a,b:

x(t_{i+1}) = \Phi x(t_i) + \bar{B} u(t_i)     (5a)
y(t_i) = \bar{C} x(t_i) + \bar{D} u(t_i)     (5b)

where

\Phi = \exp(A\Delta),  \Delta = t_{i+1} - t_i     (6a)
\bar{B} = \int_{t_i}^{t_{i+1}} \exp[A(t_{i+1} - \tau)] B \, d\tau     (6b)
\bar{C} = C     (6c)
\bar{D} = D     (6d)
u(t_i) = v_s(t = t_i)  for  t_i \le t < t_{i+1}

u(t_i) is the piecewise-constant sampled-data equivalent of the continuous input v_s(t). The following notational convention has been assumed: t denotes continuous time, and t_i denotes the discrete instant of time representing the i-th sampling instant; for a constant sampling interval \Delta, t_i = i\Delta.

The sampled-data transfer function matrix is G(z) = Y(z)/U(z). G(z) may be written as G'(z^{-1}) = Y(z)/U(z), a ratio of polynomials in z^{-1}.
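As a concrete illustration of the discretization in equations (5)-(6), the sketch below computes \Phi and \bar{B} numerically for the augmented model (3a,b) using a single matrix exponential (the standard augmented-matrix construction, which is equivalent to the integral in 6b). The circuit values R_1, R_2, R_3, C and the gain \delta(p_0) are hypothetical placeholders chosen only to make the example run; they are not measured cell parameters from this thesis.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical equivalent-circuit values (placeholders, not measured cell data)
R1, R2, R3, Cap = 1.0e3, 5.0e3, 1.0e4, 1.0e-6   # ohms, ohms, ohms, farads
delta_p = 0.05                                   # assumed value of delta(p0)

# Constants of equations (1a)-(1b)
alpha = -(2.0 * (R1 + R2) + R3) / (Cap * R2 * (2.0 * R1 + R3))
beta  = 1.0 / (Cap * (2.0 * R1 + R3))
gamma = -2.0 * R3 / (2.0 * R1 + R3)
eps   = R3 / (2.0 * R1 + R3)

# Augmented continuous-time model (3a)-(3b): x = [x1, x2]^T with x2 = v_e^0(p0)
A = np.array([[alpha, beta],
              [0.0,   0.0]])
B = np.array([[beta * (1.0 + delta_p)],
              [0.0]])
C = np.array([[gamma, eps]])
D = np.array([[eps * (1.0 + delta_p)]])

def discretize(A, B, dt):
    """Zero-order-hold equivalent of (5)-(6):
    Phi = exp(A*dt), Bbar = integral_0^dt exp(A*tau) B dtau,
    obtained from one exponential of the augmented matrix [[A, B], [0, 0]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm(M * dt)
    return Md[:n, :n], Md[:n, n:]

dt = 0.00117                    # sampling interval used later in the simulations
Phi, Bbar = discretize(A, B, dt)
print("Phi =\n", Phi)
print("Bbar =\n", Bbar)
```

From \Phi and \bar{B} the coefficients a_i and b_i of the autoregressive form developed next follow from the sampled-data transfer function \bar{C}(zI - \Phi)^{-1}\bar{B} + \bar{D}.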
Taking z^{-1} to be the backward shift (in time) operator [5], the system equations may be expressed in the autoregressive form:

y(t_i) = -a_1 y(t_{i-1}) - a_2 y(t_{i-2}) - \cdots - a_n y(t_{i-n}) + b_1 u(t_{i-1}) + b_2 u(t_{i-2}) + \cdots + b_m u(t_{i-m})

The coefficients in this equation inherit the dependence on p, which is assumed to vary slowly with time. We will incorporate effects of nonlinearities and uncertainty in the source u(t_i) as a single disturbance term v(t_i) acting on the output y(t_i) [8]. v(t_i) is taken to be stationary, white, and zero mean, with covariance E[v(t_i) v(t_j)] = V \delta(t_i - t_j). For convenience in later equation manipulation the autoregressive equation is represented in operator form:

A(z^{-1}) y(t_i) = B(z^{-1}) u(t_i) + C(z^{-1}) v(t_i)     (7)

where

A(z^{-1}) = 1 + a_1 z^{-1} + \cdots + a_n z^{-n}
B(z^{-1}) = b_1 z^{-1} + b_2 z^{-2} + \cdots + b_m z^{-m}
C(z^{-1}) = 1 + c_1 z^{-1} + \cdots + c_n z^{-n}
z^{-1} = the backward shift operator

We will concentrate on the case C(z^{-1}) = 1.

2.3 THE OPTIMAL PREDICTOR

Consider the stochastic process introduced in the last section:

A(z^{-1}) y(t_i) = B(z^{-1}) u(t_i) + C(z^{-1}) v(t_i)     (8)

where {y(t_i), i = 0,1,2,...} is the output sequence, {u(t_i), i = 0,1,2,...} is the input sequence, and {v(t_i), i = 0,1,2,...} is the noise sequence. For the moment A(z^{-1}), B(z^{-1}), and C(z^{-1}) are assumed to be known and constant.

The k-step-ahead prediction of the output signal, given observations through time t_i, is denoted \hat{y}(t_{i+k}|t_i). The prediction error is defined as:

\epsilon(t_{i+k}) = y(t_{i+k}) - \hat{y}(t_{i+k}|t_i).     (9)

The loss function is taken to be:

L = E[\epsilon^2(t_{i+k})].     (10)

We now wish to find the predictor which minimizes the loss function. Astrom [8] has solved the prediction problem using the identity:

C(z^{-1}) = A(z^{-1}) F(z^{-1}) + z^{-k} G(z^{-1})     (11)

where

F(z^{-1}) = 1 + f_1 z^{-1} + \cdots + f_{k-1} z^{-k+1}
G(z^{-1}) = g_0 + g_1 z^{-1} + \cdots + g_{n-1} z^{-n+1}

The process (8) may be written as:

y(t_i) = [B(z^{-1})/A(z^{-1})] u(t_i) + [C(z^{-1})/A(z^{-1})] v(t_i)     (12)

The prediction error equation (9) becomes:

\epsilon(t_{i+k}) = z^k [B(z^{-1})/A(z^{-1})] u(t_i) + z^k [C(z^{-1})/A(z^{-1})] v(t_i) - \hat{y}(t_{i+k}|t_i)

It can be shown that

\hat{y}(t_{i+k}|t_i) = \frac{G(z^{-1})}{C(z^{-1})} y(t_i) + \left( z^k - \frac{G(z^{-1})}{C(z^{-1})} \right) \frac{B(z^{-1})}{A(z^{-1})} u(t_i)     (13)

reduces the prediction error (9) to

\epsilon(t_{i+k}) = \frac{C(z^{-1}) z^k - G(z^{-1})}{A(z^{-1})} v(t_i)

Applying the identity (11) in the form

z^k C(z^{-1}) = z^k A(z^{-1}) F(z^{-1}) + G(z^{-1})

(9) becomes

\epsilon(t_{i+k}) = z^k F(z^{-1}) v(t_i).

Thus the predictor (13) makes the error a moving average of v. Since there is no "energy storage," this gives a minimum variance.

The k-step-ahead prediction (13) is a linear combination of past predictions, present and past observations, and future, present, and past inputs. The inclusion of future input values may present a problem depending on the experimental situation. If the electrodes are used for both stimulating and recording, it is likely that the experimenter knows the input series a priori or can implement the identification off line with measured values of the input signal. However, where the electrodes are employed for recording only and the underlying source signal is not known, estimation of the input series must be conducted. This thesis focuses on the experimental configuration using one pair of electrodes for stimulation and recording.
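The identity (11) can be solved by polynomial long division of C(z^{-1}) by A(z^{-1}): F(z^{-1}) is the quotient after k steps and z^{-k}G(z^{-1}) is the remainder. The sketch below is a minimal implementation of this step; the function name and the coefficient-list representation (ascending powers of z^{-1}) are choices of this illustration, not notation from the thesis. The numerical values at the bottom are the simulation plant of Chapter 3, used only as a check.

```python
import numpy as np

def predictor_polynomials(a, c, k):
    """Solve C(z^-1) = A(z^-1) F(z^-1) + z^-k G(z^-1)   (identity 11).

    a : [1, a1, ..., an]  coefficients of A(z^-1)
    c : [1, c1, ..., cn]  coefficients of C(z^-1) (all zeros after the 1 when C = 1)
    k : prediction horizon
    Returns (f, g) with f = [1, f1, ..., f_{k-1}] and g = [g0, ..., g_{n-1}].
    """
    a = np.asarray(a, dtype=float)
    c = np.asarray(c, dtype=float)
    n = len(a) - 1
    rem = np.zeros(k + n)          # remainder starts as C, padded to degree k + n - 1
    rem[:len(c)] = c
    f = np.zeros(k)
    for j in range(k):             # long division, one power of z^-1 per step
        f[j] = rem[j]
        rem[j:j + n + 1] -= f[j] * a
    g = rem[k:k + n]               # what is left is z^-k G(z^-1)
    return f, g

# Plant used in the simulation study: A = 1 + .25 z^-1 + .5 z^-2, C = 1, k = 1
f, g = predictor_polynomials([1.0, 0.25, 0.5], [1.0, 0.0, 0.0], k=1)
print("F:", f)   # [1.]
print("G:", g)   # [-0.25, -0.5]
# With B(z^-1) = z^-1, predictor (16) reads:
#   yhat(t_{i+1}|t_i) = -.25 y(t_i) - .5 y(t_{i-1}) + u(t_i)
```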
2.4 THE SELF-TUNING PREDICTOR

The coefficients in (8) are taken to be time varying and imperfectly known as a means of approximating the effect that variation of p has on the equivalent model elements. Considering that the relation between the coefficients and p is very complicated, we make the assumption that the coefficients are themselves a random process generated by:

\phi(t_{i+1}) = \phi(t_i) + \gamma(t_i)

where

\phi(\cdot) = [a_1 \; a_2 \; \cdots \; a_n \; b_1 \; b_2 \; \cdots \; b_m \; c_1 \; \cdots \; c_n]^T
\gamma(\cdot) = [\gamma_1 \; \gamma_2 \; \cdots \; \gamma_{2n+m}]^T

and {\gamma(t_i)} is a zero-mean white noise sequence.

Since A(z^{-1}), B(z^{-1}), and C(z^{-1}) are now time varying, the performance of the non-adaptive filter presented in the previous section will be suboptimal. The filter can be made adaptive by recognizing (8) as an observation on the process generating \phi(\cdot). Kalman filtering can be applied to obtain minimum-variance estimates of \phi(t_i) given observations through time t_i. The new estimates of A(\cdot), B(\cdot), and C(\cdot) are then used to derive the optimal predictor. This procedure requires three steps: parameter estimation, calculation of the optimal predictor parameters via the factorization (11), and solution of the predictor equation (13). The latter steps are time consuming and thus not conducive to on-line implementation. It has been suggested in the literature [4,9,10] that the form of the predictor (13) may be adopted, allowing the parameters to be identified directly, as follows. For C(z^{-1}) = 1, the optimal predictor takes one of the following two forms:

(i) Multiplication of both sides of equation (13) by the term A(z^{-1}):

A(z^{-1}) \hat{y}(t_{i+k}|t_i) = G(z^{-1}) A(z^{-1}) y(t_i) + (z^k - G(z^{-1})) B(z^{-1}) u(t_i);     (15)

(ii) Recognizing that the identity (11) may be expressed in the form (z^k - G(z^{-1})) = z^k A(z^{-1}) F(z^{-1}), the predictor may be written as:

\hat{y}(t_{i+k}|t_i) = G(z^{-1}) y(t_i) + z^k F(z^{-1}) B(z^{-1}) u(t_i)     (16)

In the case C(z^{-1}) \ne 1, (15) will be the optimal predictor. For either form of the predictor in the previous case, the predictor equation may be written explicitly as:

\hat{y}(t_{i+k}|t_i) = -q_1 \hat{y}(t_{i+k-1}|t_{i-1}) - \cdots - q_n \hat{y}(t_{i+k-n}|t_{i-n}) + q_{n+1} y(t_i) + \cdots + q_{3n} y(t_{i-2n+1}) + q_{3n+1} u(t_{i+k-1}) + \cdots + q_{4n+m+k-1} u(t_{i-m-n+1}).     (17)

This equation may be factored into the following form:

\hat{y}(t_{i+k}|t_i) = H(t_i) Q     (18)

where

Q = [q_1 \; q_2 \; \cdots \; q_{4n+m+k-1}]^T

and

H(t_i) = [-\hat{y}(t_{i+k-1}|t_{i-1}) \; \cdots \; -\hat{y}(t_{i+k-n}|t_{i-n}) \;\; y(t_i) \; \cdots \; y(t_{i-2n+1}) \;\; u(t_{i+k-1}) \; \cdots \; u(t_{i-m-n+1})].

Each element of the vector Q is derived from the plant parameters by applying (11) and (13). As stated earlier, if the plant parameters are known and constant, the predictor parameters derived in this manner will represent the optimal predictor; in the case where the plant parameters are unknown or time varying, the predictor derived from our initial estimates of the plant parameters will be suboptimal.

The time variation of the plant parameters, and thus of the predictor parameters, is due to the dependence of the equivalent model elements on p. The assumption of slow time variation of p in relation to the response time of the subject, biological preparation, and electrodes allows the assumption that the unknown coefficients Q are generated by the random process:

Q(t_i) = Q(t_{i-1}) + w(t_i),   Q(0) = Q_0     (19)

where w(t_i) is a zero-mean, white sequence with correlation matrix E[w(t_j) w^T(t_m)] = W \delta(t_j - t_m).

We interpret (19) as the state equation of a linear dynamical system with the unknown predictor parameters as state variables. We view the observation given by (18) as an observation of the parameters Q, where the observation matrix H(\cdot) is time varying, consisting of past predictions and measured input and output values.
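To make the indexing in (17) and (18) concrete, the sketch below assembles the regression vector H(t_i) for given orders n, m and horizon k. The function name, the "most recent value first" ordering of the history lists, and the use of Python are implementation choices of this illustration, not part of the original algorithm.

```python
import numpy as np

def regressor(yhat_hist, y_hist, u_hist, n, m, k):
    """Assemble H(t_i) of equation (18).

    yhat_hist : [yhat(t_{i+k-1}|t_{i-1}), ..., yhat(t_{i+k-n}|t_{i-n})]   (n values)
    y_hist    : [y(t_i), ..., y(t_{i-2n+1})]                              (2n values)
    u_hist    : [u(t_{i+k-1}), ..., u(t_{i-m-n+1})]                       (n+m+k-1 values;
                for k > 1 the leading entries are future inputs, assumed known
                a priori in the stimulation-and-recording configuration)
    """
    assert len(yhat_hist) == n and len(y_hist) == 2 * n
    assert len(u_hist) == n + m + k - 1
    return np.concatenate([-np.asarray(yhat_hist),
                           np.asarray(y_hist),
                           np.asarray(u_hist)])

# Dimensions matching the simulated plant of Chapter 3: n = 2, m = 1, k = 1
n, m, k = 2, 1, 1
n_q = 4 * n + m + k - 1                     # length of Q; here 9 parameters
Q = np.zeros(n_q)                           # parameter vector of (18)

H = regressor([0.0, 0.0], [0.0] * 4, [0.0] * 3, n, m, k)
y_pred = float(H @ Q)                       # yhat(t_{i+k}|t_i) = H(t_i) Q
```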
In order to apply the well-known method of Kalman filtering it is necessary to make general statistical assumptions. We assume the initial estimate of Q, the state vector, is a Gaussian random variable:

Q(0) \sim N(Q_0, P(0|-1))

We have already stated the assumptions concerning the state driving noise and the observation noise. The Kalman filter equations may be given as:

\hat{Q}(t_i) = \hat{Q}(t_{i-1}) + K(t_i) \, (y(t_i) - \hat{y}(t_i|t_{i-k}))     (20a)
K(t_i) = P(t_i|t_{i-1}) H^T(t_{i-k}) \, [V + H(t_{i-k}) P(t_i|t_{i-1}) H^T(t_{i-k})]^{-1}     (20b)
P(t_i|t_i) = P(t_i|t_{i-1}) + W - K(t_i) \, [V + H(t_{i-k}) P(t_i|t_{i-1}) H^T(t_{i-k})] \, K^T(t_i)     (20c)
\hat{y}(t_i|t_{i-k}) = H(t_{i-k}) \hat{Q}(t_{i-k})

A block diagram for this filter is presented in Figure 3. The state vector \hat{Q}(t_i) is interpreted as the parameter estimate incorporating observations through time t_i. The prediction \hat{y}(t_i|t_{i-k}) is the predicted plant output using the observations and parameter update through time t_{i-k}.

CHAPTER 3

SIMULATION STUDY

3.1 INTRODUCTION

The simulation phase of this research was beneficial in that it allowed controlled examination of the filter characteristics. The tests were designed to permit explicit determination of the capabilities of the self-tuning predictor when initial conditions, noise level, and design parameters were varied. The plant structure was also variable. Two cases are included here: constant plant parameters, and time-varying parameters with a trend.

3.2 IMPLEMENTATION

The simulation study was implemented as follows. A plant of the form (7) was selected. The choice was based on considerations of stability and computational simplicity. The input time series was previously determined and stored on disk. The noise sequence was generated digitally to allow for variability in the starting "seed" value of the pseudorandom noise generator. After certain initial conditions were established, the observations were calculated recursively by application of the auto-regressive equation (7). With each new observation the self-tuning predictor parameters were updated by application of the filter (20). The simulation ends when either the input series is exhausted or a specified number of iterations has occurred.

We now elaborate upon each of the above steps. Two plant structures were used for simulation. They differed in that one involved constant plant parameters, and the other a time-series trend superimposed upon a nominal value. In general, the plant was of the following form:

y(t_i) = -a_1 y(t_{i-1}) - a_2 y(t_{i-2}) + b_1 u(t_{i-1}) + v(t_i)     (21)

The first set of tests involved constant plant parameters. In particular they were:

a_1 = .25,  a_2 = .5,  b_1 = 1.0

The operators are identified to be

A(z^{-1}) = 1 + .25 z^{-1} + .5 z^{-2}
B(z^{-1}) = z^{-1}
C(z^{-1}) = 1

The stability of this plant is determined by ascertaining the location in the complex plane of the roots of the characteristic equation. The characteristic equation is:

s^2 + .25 s + .5 = 0

Applying the quadratic formula, the roots, a complex conjugate pair, are obtained:

s_1 = -.125 + .696j,  s_2 = -.125 - .696j,  |s_{1,2}| = .707

Since |s_i| < 1 for i = 1, 2, the plant is stable [5].

The second set of tests involved trend variation in the plant parameters. In this case the initial a_j's were taken to be the same as above. The a_j's would then vary in time according to the following:

a_j(t_{i+1}) = a_j(t_i) + \delta

where \delta was chosen to be a constant, .0001. The "poles" of this time-varying system move with time. Initially they are the same as in the case with constant plant parameters. At the end of these tests a_1 = .458 and a_2 = .708, and the new poles are

s_{1,2} = -.23 \pm .81j,  |s_{1,2}| = .842

The system is still stable.
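A minimal sketch of the complete recursion is given below for the one-step-ahead case (k = 1, n = 2, m = 1) applied to the constant-parameter plant (21); the stimulus is the sum-of-sinusoids input described in the next paragraphs. The design values W, P(0|-1), and V are illustrative stand-ins patterned after the note to Table II, and the code follows equations (18)-(20) directly rather than reproducing any particular listing from the original Appendix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plant (21) with the constant parameters of the first test set
a1, a2, b1 = 0.25, 0.5, 1.0
noise_var = 0.008                     # one of the observation-noise levels in the tables
dt, N = 0.00117, 2080

# Sum-of-sinusoids stimulus (see the input description below)
t = dt * np.arange(1, N + 1)
u = (0.6 * np.sin(100 * np.pi * t)
     - 0.5 * np.sin(150 * np.pi * t)
     + 0.2 * np.sin(196 * np.pi * t))

n, m, k = 2, 1, 1
n_q = 4 * n + m + k - 1               # 9 predictor parameters

# Design parameters (illustrative stand-ins patterned after the note to Table II)
W = 0.0 * np.eye(n_q)                 # covariance of the parameter random walk (19)
P = 0.05 * np.eye(n_q)                # P(0|-1)
V = 0.0009                            # observation-noise variance assumed by the filter
Q = np.zeros(n_q)                     # initial parameter estimate Q(0)

def past(x, j):
    """x[j] if it exists, else 0 (the system is assumed initially at rest)."""
    return x[j] if j >= 0 else 0.0

y = np.zeros(N)
yhat = np.zeros(N)                    # yhat[i] = prediction of y[i] made at time i-1
for i in range(N):
    # simulated observation, equation (21)
    y[i] = (-a1 * past(y, i - 1) - a2 * past(y, i - 2)
            + b1 * past(u, i - 1) + rng.normal(0.0, np.sqrt(noise_var)))

    # regression vector H(t_{i-1}) of (18) for k = 1, n = 2, m = 1
    H = np.array([-past(yhat, i - 1), -past(yhat, i - 2),
                  past(y, i - 1), past(y, i - 2), past(y, i - 3), past(y, i - 4),
                  past(u, i - 1), past(u, i - 2), past(u, i - 3)])
    yhat[i] = H @ Q                   # yhat(t_i | t_{i-1}) = H(t_{i-1}) Q(t_{i-1})

    # parameter update, equations (20a)-(20c)
    S = V + H @ P @ H
    K = P @ H / S
    Q = Q + K * (y[i] - yhat[i])
    P = P + W - np.outer(K, K) * S

print("accumulated absolute prediction error:", np.abs(y - yhat).sum())
print("final parameter estimate Q:", np.round(Q, 3))
```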
The input series was taken as a sum of sinusoids. Mehra [12] presents an algorithm for calculating optimal design inputs. We have chosen the input series as

u(t_i) = \sum_{j=1}^{NF} v_j \sin(2\pi f_j t_i)

where

t_i = i\Delta,  i = 1, 2, ...
\Delta = sampling period
v_j = magnitude of the j-th sinusoid
f_j = frequency of the j-th sinusoid,  j = 1, 2, ..., NF
NF = number of frequencies superimposed.

Although different input groups were employed in the course of the research, one group was selected to impart some standardization to the results presented here. The input series was generated by recursive application of the following:

u(t_i) = .6 \sin(100\pi i\Delta) - .5 \sin(150\pi i\Delta) + .2 \sin(196\pi i\Delta)

with a sampling period of \Delta = .00117 seconds.

A random number generator was designed exclusively for this project. Because of some difficulty in obtaining a sequence with a variance equal to the one desired, a standard design variance was chosen. For the random number sequence generated, a mean and variance were calculated. Correction for bias was unnecessary since the mean was zero to 5 decimal places. To obtain sequences with various magnitudes and variances a scale factor was introduced. Let r be a random number sequence with mean \mu and variance \sigma^2. The multiplication of r by a constant \alpha results in a new random variable r' = \alpha r. It is well known that the mean of r' is \alpha\mu and the variance of r' is \alpha^2 \sigma^2. The distribution, mean, and variance for the standard noise sequence are presented in Figure 4. The system was assumed to be initially at rest; specifically, u(t_i) = 0 for t_i \le 0.
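The scaling argument above amounts to two lines of code. The sketch below draws a zero-mean "standard" sequence and rescales it to a target variance; it uses numpy's generator rather than the custom generator described in the text, and the target value 8 x 10^-7 is simply one of the observation-noise variances appearing in the simulation tables.

```python
import numpy as np

rng = np.random.default_rng(1)

# "standard" design sequence: zero mean, unit variance
r = rng.standard_normal(2080)

# rescale to a desired variance: var(alpha * r) = alpha**2 * var(r)
target_var = 8e-7
alpha = np.sqrt(target_var / r.var())
r_scaled = alpha * r

print(r_scaled.mean(), r_scaled.var())   # approximately 0 and 8e-7
```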
FIGURES

Figure 10  Electrolytic Cell Implementation Flow Chart
Figure 11  Log Impedance vs. Log Frequency: (a) .01 Normal Solution, (b) .1 Normal Solution
Figure 12  Identification Block Diagram
Figure 13  Relation of the predictor parameters in parameter space. Q1 and Q2 are optimal parameter sets assuming constant plant parameters; Q3 falls within the region of near-optimal performance, with no convergence to the optimal sets; Q4 begins with an initial estimate outside the region of near-optimal performance, and the estimates of the parameters converge to an optimal set.

TABLES

Table I  Non-Adaptive Predictor, Simulated Plant/Constant Parameters

Test | Observation Noise Variance | Non-Adaptive Predictor Parameters | Accumulated Absolute Error | Accumulated Square Error
1  | 0        | [0 0 -.25 -.5 0 0 1 0 0]                     | 0.00000 | 0.00000
2  | 0        | [.25 .5 -.25 -.563 -.25 -.25 1 .25 .5]       | 0.00052 | 0.00000
3  | 0        | [.125 .25 -.25 -.532 -.125 -.125 1 .125 .25] | 0.13353 | 0.00001
4  | 0        | [1 1 1 1 1 1 1 1 1]                          | 1228.7  | 1090.4
5  | 8 x 10^-7 | [0 0 -.25 -.5 0 0 1 0 0]                     | 1.55089 | 0.00171
7  | 8 x 10^-7 | [.125 .25 -.25 -.532 -.125 -.125 1 .125 .25] | 1.55441 | 0.00172
8  | 8 x 10^-7 | [1 1 1 1 1 1 1 1 1]                          | 1228.8  | 1090.7
9  | .008     | [0 0 -.25 -.5 0 0 1 0 0]                     | 155.1   | 17.13
10 | .008     | [.25 .5 -.25 -.563 -.25 -.25 1 .25 .5]       | 155.1   | 17.13
11 | .008     | [.125 .25 -.25 -.532 -.125 -.125 1 .125 .25] | 155.1   | 17.133
12 | .008     | [1 1 1 1 1 1 1 1 1]                          | 1510.6  | 1678.4

Note: accumulated noise (absolute, square): tests 1-4: 0, 0; tests 5-8: 1.551, 0.00171; tests 9-12: 155.1, 17.13.

Table II  Self-Tuning Predictor, Simulated Plant/Constant Parameters

Test | Observation Noise Variance | Accumulated Absolute Error | Accumulated Square Error
13   | 0.0       | 0.0     | 0.0
14   | 0.0       | .00095  | 0.0
15   | 0.0       | .00125  | 0.0
16   | 0.0       | 1.50167 | .23674
17*  | 0.0       | 17.02   | 4.5
18   | 8 x 10^-7 | 1.57371 | .00176
19   | 8 x 10^-7 | 1.57073 | .00177
20   | 8 x 10^-7 | 1.5698  | .00176
21   | 8 x 10^-7 | 3.08    | .2377
22   | .008      | 156.47  | 17.42
23   | .008      | 155.79  | 17.29
24   | .008      | 156.28  | 17.39
25   | .008      | 163.39  | 24.83
26** | .008      | 155.92  | 17.29

Notes: 1. Tests run with the following design parameters: W = [0.0]I, P(0/-1) = .5I, V = .0009; * V = .09; ** P(0/-1) = .05I. 2. For accumulated noise loss see note, Table I.
Table III  Non-Adaptive Predictor, Simulated Plant/Time-Series Trend

Test | Observation Noise Variance | Non-Adaptive Predictor Parameters | Accumulated Absolute Error | Accumulated Square Error
40 | 0.0       | [0 0 -.25 -.5 0 0 1 0 0]                    | 110.67 | 11.44
41 | 0.0       | [.25 .5 -.25 -.563 -.25 -.25 1 .25 .5]      | 110.67 | 11.44
42 | 0.0       | [.125 .25 -.25 -.562 -.125 -.125 1 .125 .5] | 111.64 | 11.50
48 | 8 x 10^-7 | [0 0 -.25 -.5 0 0 1 0 0]                    | 110.72 | 11.44
43 | .008      | [0 0 -.25 -.5 0 0 1 0 0]                    | 198.73 | 29.43

Table IV  Self-Tuning Predictor, Simulated Plant/Time-Series Trend

Test | Observation Noise Variance | Initial Estimate of Predictor Parameters | Final Estimate of Parameters | Accumulated Absolute Error | Accumulated Square Error
46*  | 0.0       | [0 0 -.25 -.5 0 0 1 0 0] | [.0 .204 2.59 -.838 .601 -.467 .917 -145 .83] | 9.295   | .06503
45** | 0.0       | [0 0 -.25 -.5 0 0 1 0 0] | [.07 .07 -.32 -.57 .05 .02 1.0 -.07 -.03]     | .275    | .00005
     | 8 x 10^-7 | [0 0 -.25 -.5 0 0 1 0 0] | [.02 .01 -.49 -.67 -.03 -.03 1.0 -.04 -.03]   | 2.07756 | .00335

Note: for design parameters see Table II.

Table V  Electrode Studies, .01 Normal Solution

Self-Tuning Predictors:
Test | Initial Parameter Estimate | Final Parameter Estimate | Accumulated Absolute Error | Accumulated Square Error
1 | [0 1 0 500 -500]           | [0 .62 0 -239 435]                        | .09285 | 4.16 x 10^-5
2 | [0 0 1 0 0 0 500 -500 0]   | [.5 .13 .84 .16 .25 .11 -293 463 .33]     | 11.29  | 3.8 x 10^-5
3 | [0 0 0 0 0]                | [0 .625 .001 -242 438]                    | .09072 | 4.17 x 10^-5
4 | [0 0 0 0 0 0 0 0 0]        | [.55 .14 .84 .19 .25 .11 -301 448 .37]    | .07856 | 3.7 x 10^-5

Non-Adaptive Predictors:
Test | Predictor Parameters | Accumulated Absolute Error | Accumulated Square Error
5 | [0 1 0 500 -500]                       | .39419 | 14.6 x 10^-5
6 | [0 .6 0 -240 440]                      | .14984 | 4.19 x 10^-5
7 | [.55 .13 .84 .17 .25 .11 -300 450 .35] | .09623 | 4.63 x 10^-5
8 | [0 0 1 0 0 0 500 -500 0]               | .39419 | 14.6 x 10^-5

Table VI  Electrode Studies, .1 Normal Solution

Self-Tuning Predictors:
Test | Initial Parameter Estimate | Final Parameter Estimate | Accumulated Absolute Error | Accumulated Square Error
1 | [0 0 0 0 0]                           | [0 .24 .02 -180 239]                                 | 1.99    | 1.24 x 10^-3
2 | [0 0 0 0 0 0 0 0 0 0 0]               | [.19 .03 -.01 -.12 .27 0 -243 125 1.5 165 32.5]      | 855.5   | 1.06 x 10^0
3 | [0 0 1.8 -.8 0 0 495 -877 400 0 0]    | [.19 .03 -.01 -.12 .27 -.003 -243 126 1.13 166 32.8] | 2.30573 | 1.1 x 10^-3

Non-Adaptive Predictors:
Test | Predictor Parameters | Accumulated Absolute Error | Accumulated Square Error
4 | [0 0 1.8 -.8 0 0 495 -877 400 0 0]                   | 21.2    | 19.9 x 10^-3
5 | [.19 .03 -.01 -.12 .27 -.003 -243 120 1.13 115 32.8] | 2.55128 | 1.04 x 10^-3
6 | [0 .238 .02 -180 239.2]                              | 2.236   | 1.12 x 10^-3

REFERENCES

1. Geddes, L.A., Electrodes and the Measurement of Bioelectric Events, Wiley-Interscience, New York, 1972.
2. Geddes, L.A. and Hoff, H.E., "The Discovery of Bioelectricity and Current Electricity," IEEE Spectrum, Dec. 1971.
3. Johnson, T.L. and Salzsieder, B., "Physical Properties of Silver-Silver Chloride Electrodes in Saline Solution," ESL-TM-714.
4. Wittenmark, B., "A Self-Tuning Predictor," IEEE Trans. Automatic Control, Dec. 1974.
5. Bishop, A.B., Introduction to Discrete Linear Controls, Academic Press, New York, 1975.
6. Schweppe, F.C., Uncertain Dynamic Systems, Prentice-Hall, 1973.
7. Kohn, W. and Johnson, T.L., "Parameter Identification of Ag|AgCl Electrodes in NaCl Electrolyte," ESL-P-710.
8. Astrom, K.J., Introduction to Stochastic Control Theory, Academic Press, New York, 1970.
9. Johnson, T.L., "Inverse Digital Filtering of Electrode-Produced Distortions in Electrodiagnostic Signals," ESL-P-698.
10. Astrom, K.J., Borisson, U., Ljung, L., and Wittenmark, B., "Theory and Applications of Self-Tuning Regulators," Automatica, Sept. 1977.
11. Chen, Chi-Tsong, Introduction to Linear System Theory, Holt, Rinehart and Winston, Inc., New York, 1970.
12. Mehra, R.K., System Identification: Advances and Case Studies, Academic Press, New York, 1976.
13. DeRusso, P.M., State Variables for Engineers, Wiley and Sons, New York, 1967.
14. Plonsey, R. and Fleming, D.G., Bioelectric Phenomena, McGraw-Hill, New York, 1969.
15. Astrom, K.J. and Wittenmark, B., "On Self-Tuning Regulators," Automatica, Vol. 9, pp. 185-199, 1973.
16. Wieslander, J. and Wittenmark, B., "An Approach to Adaptive Control Using Real Time Identification," Automatica, Vol. 7, pp. 211-217, 1971.
17. Astrom, K.J. and Eykhoff, P., "System Identification - A Survey," Automatica, Vol. 7, pp. 123-162, 1971.
18. Aoki, M. and Staley, R.M., "On Input Signal Synthesis in Parameter Identification," Automatica, Vol. 6, 1970.
19. Ljung, L., "On the Consistency of Prediction Error Identification Methods," in System Identification: Advances and Case Studies, Academic Press, New York, 1976.