Video Lectures - Lecture 4
Please note that lecture 4 is unavailable in a higher quality version.
These video lectures of Professor Gilbert Strang teaching 18.06 were recorded in Fall 1999 and do not correspond precisely to the current edition of the textbook. However, this book is still the best reference for more information on the topics covered in each lecture.
Strang, Gilbert. Introduction to Linear Algebra. 4th ed. Wellesley, MA: Wellesley-Cambridge Press, February 2009. ISBN: 9780980232714.
Instructor/speaker: Prof. Gilbert Strang
Transcript - Lecture 4
Are we ready? Okay, ready for me to start.
Ready for the taping to start in a minute.
He's going to raise his hand and signal when I'm on.
Just a minute, though, let them settle.
Okay, guys. Okay, give me the signal, then, when you want me to start.
Okay, this is linear algebra, lecture four.
And, the first thing I have to do is something that was on the list for last time, but here it is now.
What's the inverse of a product? If I multiply two matrices together and I know their inverses, how do I get the inverse of A times B? So I know what inverses mean for a single matrix A and for a matrix B. What matrix do I multiply by to get the identity if I have A B? Okay, that'll be simple but so basic. Then I'm going to use that to -- I will have a product of matrices, and the product that we'll meet will be these elimination matrices, and the net result of today's lecture is the big formula for elimination, so the net result of today's lecture is this great way to look at Gaussian elimination. We know that we get from A to U by elimination. We know the steps -- but now we get the right way to look at it, A equals L U.
So that's the high point for today.
Okay. Can I take the easy part, the first step first? So, suppose A is invertible -- and of course it's going to be a big question, when is the matrix invertible? But let's say A is invertible and B is invertible, then what matrix gives me the inverse of A B? So that's the direct question.
What's the inverse of A B? Do I multiply those separate inverses? Yes.
I multiply the two matrices A inverse and B inverse, but what order do I multiply? In reverse order.
And you see why. So the right thing to put here is B inverse A inverse. That's the inverse I'm after.
We can just check that A B times that matrix gives the identity. Okay.
So why -- once again, it's this fact that I can move parentheses around. I can just erase them all and do the multiplications any way I want to.
So what's the right multiplication to do first? B times B inverse. This product here, B times B inverse, is the identity. Then A times the identity is just A, and then finally A times A inverse gives the identity. So forgive the dumb example in the book. Why do you do the inverse things in reverse order? It's just like -- you take off your shoes, you take off your socks, then the good way to invert that process is socks back on first, then shoes. Sorry, okay.
I'm sorry that's on the tape. And, of course, on the other side we should really just check -- on the other side I have B inverse, A inverse.
That does multiply A B, and this time it's these guys that give the identity, squeeze down, they give the identity, we're in shape.
Okay. So there's the inverse.
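A quick numerical check of this reverse-order rule, sketched in NumPy with a couple of arbitrary invertible matrices (the shift by a multiple of the identity is just there to make them safely invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3)) + 3 * np.eye(3)   # arbitrary invertible matrices
B = rng.random((3, 3)) + 3 * np.eye(3)

left = np.linalg.inv(A @ B)
right = np.linalg.inv(B) @ np.linalg.inv(A)   # inverses in reverse order

print(np.allclose(left, right))                 # True
print(np.allclose((A @ B) @ right, np.eye(3)))  # (A B)(B^-1 A^-1) = I
```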
Good. While we're at it, let me do a transpose, because the next lecture has got a lot to -- involves transposes.
So how do I -- if I transpose a matrix, I'm talking about square, invertible matrices right now.
If I transpose one, what's its inverse? Well, the nice formula is -- let's see.
Let me start from A times A inverse equals the identity.
And let me transpose both sides.
That will bring a transpose into the picture.
So if I transpose the identity matrix, what do I have? The identity, right? If I exchange rows and columns, the identity is a symmetric matrix. It doesn't know the difference.
If I transpose these guys, that product, then again it turns out that I have to reverse the order.
I can transpose them separately, but when I multiply, those transposes come in the opposite order.
So it's A inverse transpose times A transpose giving the identity. So that's -- this equation is -- just comes directly from that one.
But this equation tells me what I wanted to know, namely what is the inverse of this guy A transpose? What's the inverse of that -- if I transpose a matrix, what's the inverse of the result? And this equation tells me that here it is.
This is the inverse of A transpose.
Inverse of A transpose. Of A transpose.
So I'll put a big circle around that, because that's the answer, that's the best answer we could hope for.
That if you want to know the inverse of A transpose and you know the inverse of A, then you just transpose that.
So in a -- to put it another way, transposing and inverting you can do in either order for a single matrix.
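The same kind of check for the transpose, again a NumPy sketch with an arbitrary invertible matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 3)) + 3 * np.eye(3)   # an arbitrary invertible matrix

# Transposing and inverting commute: (A^T)^-1 equals (A^-1)^T.
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))   # True
```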
Okay. So these are like basic facts that we can now use, all right -- so now I put it to use. I put it to use by thinking -- we're really completing the subject of elimination.
Actually, -- the thing about elimination is it's the right way to understand what the matrix has got.
This A equal L U is the most basic factorization of a matrix.
I always worry that you will think this course is all elimination. It's just row operations.
And please don't. We'll be beyond that, but it's the right algebra to do first.
Okay. So, now I'm coming near the end of it, but I want to get it in a decent form.
So my decent form is matrix form. I have a matrix A, let's suppose it's a good matrix, I can do elimination, no row exchanges -- So no row exchanges for now. Pivots all fine, nothing zero in the pivot position. I get to the very end, which is U.
So I get from A to U. And I want to know what's the connection? How is A related to U? And this is going to tell me that there's a matrix L that connects them. Okay. Can I do it for a two by two first? Okay. Two by two, elimination.
Okay, so I'll do it under here. Okay.
So let my matrix A be -- We'll keep it simple, say two and an eight, so we know that the first pivot is a two, and the multiplier's going to be a four and then let me put a one here and what number do I not want to put there? Four.
I don't want a four there, because in that case, the second pivot would not -- we wouldn't have a second pivot. The matrix would be singular, general screw-up. Okay.
So let me put some other number here like seven.
Okay. Okay.
Now I want to operate on that with my elementary matrix.
So what's the elementary matrix? Strictly speaking, it's E21, because it's the guy that's going to produce a zero in that position.
And it's going to produce U in one shot, because it's just a two by two matrix. So two one and I'm going to take four of those away from those, produce that zero and leave a three there. And that's U.
And what's the matrix that did it? Quick review, then.
What's the elimination elementary matrix E21 -- it's one zero, thanks. And -- negative four one, right.
Good. Okay.
So that -- you see the difference between this and what I'm shooting for. I'm shooting for A on one side and the other matrices on the other side of the equation.
Okay. So I can do that right away.
Now here's going to be my A equals L U.
And you won't have any trouble telling me what -- so A is still two one eight seven. L is what you're going to tell me and U is still two one zero three.
Okay. So what's L in this case? Well, first -- so how is L related to this E guy? It's the inverse, because I want to multiply through by the inverse of this, which will put the identity here, and the inverse will show up there and I'll call it L.
So what is the inverse of this? Remember those elimination matrices are easy to invert. The inverse matrix for this one is one zero, four one -- we just flip the sign of that multiplier.
Okay. Do you want -- if we did the numbers right, we must -- this should be correct. Okay.
And of course it is. That says the first row's right, four times the first row plus the second row is eight seven. Good. Okay. That's simple, two by two. But it already shows the form that we're headed for. It shows -- so what's the L stand for? Why the letter L? If U stood for upper triangular, then of course L stands for lower triangular. And actually, it has ones on the diagonal, where this thing has the pivots on the diagonal. Oh, sometimes we may want to separate out the pivots, so can I just mention that sometimes we could also write this as -- we could have this one zero four one -- I'll just show you how I would divide out this matrix of pivots -- two three.
There's a diagonal matrix. And I just -- whatever is left is here. Now what's left? If I divide this first row by two to pull out the two, then I have a one and a one half.
And if I divide the second row by three to pull out the three, then I have a one. So if this is L U, this is maybe called L D U, with the pivots in that diagonal matrix D. And now it's a little more balanced, because we have ones on the diagonal here and here, and the diagonal matrix in the middle. So both of those...
Matlab would produce either one.
I'll basically stay with L U. Okay.
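Here is the two by two example carried out numerically -- a NumPy sketch using the same A, E21, L, U, and pivot matrix D as on the board:

```python
import numpy as np

A = np.array([[2., 1.],
              [8., 7.]])

E21 = np.array([[ 1., 0.],     # subtract 4 x row 1 from row 2
                [-4., 1.]])

U = E21 @ A                    # [[2, 1], [0, 3]]
L = np.linalg.inv(E21)         # [[1, 0], [4, 1]]: just flip the sign of the multiplier

print(np.allclose(L @ U, A))   # True: A = L U

# Pulling the pivots out into a diagonal matrix D gives A = L D U'
D = np.diag(np.diag(U))        # [[2, 0], [0, 3]]
Uprime = np.linalg.inv(D) @ U  # [[1, 0.5], [0, 1]]: ones on the diagonal
print(np.allclose(L @ D @ Uprime, A))   # True
```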
Now I have to think about bigger than two by two.
But right now, this was just an easy exercise. And, to tell the truth, this one was a minus sign and this one was a plus sign.
I mean, that's the only difference.
But, with three by three, there's a more significant difference. Let me show you how that works.
Let me move up to a three by three, let's say some matrix A, okay? Let's imagine it's three by three. I won't write numbers down for now. So what's the first elimination step that I do, the first matrix I multiply it by, what letter will I use for that? It'll be E two one, because it's -- the first step will be to get a zero in that two one position.
And then the next step will be to get a zero in the three one position. And the final step will be to get a zero in the three two position.
That's what elimination is, and it produced U.
And again, no row exchanges. I'm taking the nice case, now, the typical case, too -- when I don't have to do any row exchange, all I do is these elimination steps. Okay.
Now, suppose I want that stuff over on the right-hand side, as I really do. That's, like, my point here. I can multiply these together to get a matrix E, but I want it over on the right. I want its inverse over there.
So what's the right expression now? If I write A and U, what goes there? Okay. So I've got the inverse of this, I've got three matrices in a row now.
And it's their inverses that are going to show up, because each one is easy to invert.
Question is, what about the whole bunch? How easy is it to invert the whole bunch? So, that's what we know how to do.
We know how to invert, we should take the separate inverses, but they go in the opposite order.
So what goes here? E three two inverse, right, because I'll multiply from the left by E three two inverse, then I'll pop it up next to U. And then will come E three one inverse. And then this'll be the only guy left standing, and that's gone when I do an E two one inverse.
So there is L. That's L U.
L is a product of inverses. Now you can still ask why I prefer these inverses, and let me explain why.
Let me explain why is this product nicer than this one? This product turns out to be better than this one. Let me take a typical case here.
Let me take a typical case. So let me -- I have to do three by three for you to see the improvement. Two by two, it was just one E, no problem.
But let me go up to this case. Suppose my matrices E21 -- suppose E21 has a minus two in there.
Suppose that -- and now suppose -- oh, I'll even suppose E31 is the identity. I'm going to make the point with just a couple of these. Okay.
Now this guy will have -- do something -- now let's suppose minus five one. Okay.
There's typical. That's a typical case in which we didn't need an E31. Maybe we already had a zero in that three one position. Okay.
Let me see -- is that going to be enough to, show my point? Let me do that multiplication. Okay. So if I do that multiplication it's like good practice to multiply these matrices.
Tell me what's above the diagonal when I do this multiplication? All zeroes.
When I do this multiplication, I'm going to get ones on the diagonal and zeroes above. Because -- what does that say? That says that I'm subtracting rows from lower rows.
So nothing is moving upwards as it did last time in Gauss-Jordan. Okay.
Now -- so really, what I have to do is check this minus two one zero, now this is -- what's that number? This is the number that I really have in mind. That number is ten.
And this one is -- what goes here? Row three against column two, it looks like the minus five.
It's that ten. How did that ten get in there? I don't like that ten. I mean -- of course, I don't want to erase it, because it's right.
But I don't want it there. It's because -- the ten got in there because I subtracted two of row one from row two, and then I subtracted five of that new row two from row three. So doing it in that order, how did row one affect row three? Well, it did, because two of it got removed from row two and then five of those got removed from row three. So altogether ten of row one got thrown into row three.
Now my point is in the reverse direction -- so now I can do it -- below it I'll do the inverses.
Okay. And, of course, opposite order. Reverse order.
Reverse order. Okay.
So now this is going to -- this is the E that goes on the left side. Left of A.
Now I'm going to do the inverses in the opposite order, so what's the -- So the opposite order means I put this inverse first. And what is its inverse? What's the inverse of E21? Same thing with a plus sign, right? For the individual matrices, instead of taking away two I add back two of row one to row two, so no problem. And now, in reverse order, I want to invert that. Just right? I'm doing just this, this.
So now the inverse is again the same thing, but add in the five.
And now I'll do that multiplication and I'll get a happy result. I hope.
Did I do it right so far? Yes, okay.
Let me do the multiplication. I believe this comes out.
So row one of the answer is one zero zero. Oh, I know that all this is going to be left, right? Then I have two one zero.
So I get two one zero there, right? And what's the third row? What's the third row in this product? Just read it out to me, the third row? And this is my matrix L.
And it's the one that goes on the left of U.
It goes into -- what do I mean here? Maybe rather than saying left of A, left of U, let me write down again what I mean.
E A is U, whereas A is L U. Okay.
Let me make the point now in words.
The order that the matrices come for L is the right order.
The two and the five don't sort of interfere to produce this ten; in the right order, the multipliers just sit in the matrix L. That's the point -- that if I want to know L, I have no work to do.
I just keep a record of what those multipliers were, and that gives me L. So I'll draw the -- let me say it. So this is the A = L U.
So if no row exchanges, the multipliers -- those numbers that we multiplied rows by and subtracted when we did an elimination step -- the multipliers go directly into L. Okay. So L is -- this is the way to look at elimination.
You go through the elimination steps, and actually if you do it right, you can throw away A as you create L U.
If you think about it, those steps of elimination -- when you've finished with row two of A, you've created a new row two of U, which you have to save, and you've created the multipliers that you used, which you have to save, and then you can forget A.
Because it's all there in L and U.
So that's -- this moment is maybe the new insight in elimination that comes from matrix -- doing it in matrix form. So it was -- the product of Es is -- we can't see what that product of Es is. The matrix E is not a particularly attractive one.
What's great is when we put them on the other side -- their inverses in the opposite order, there the L comes out just right. Okay.
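A NumPy sketch of exactly this comparison, with the minus two and minus five multipliers from the example (E31 is the identity, so it is left out): the product of the Es picks up the ten, while the product of the inverses in reverse order keeps only the two and the five.

```python
import numpy as np

E21 = np.array([[ 1., 0., 0.],
                [-2., 1., 0.],   # subtract 2 x row 1 from row 2
                [ 0., 0., 1.]])
E32 = np.array([[ 1., 0., 0.],
                [ 0., 1., 0.],
                [ 0.,-5., 1.]])  # subtract 5 x row 2 from row 3

E = E32 @ E21                    # acts on A from the left: E A = U
print(E)
# [[ 1.  0.  0.]
#  [-2.  1.  0.]
#  [10. -5.  1.]]   <-- the unwanted ten shows up

L = np.linalg.inv(E21) @ np.linalg.inv(E32)   # inverses in reverse order
print(L)
# [[1. 0. 0.]
#  [2. 1. 0.]
#  [0. 5. 1.]]      <-- just the multipliers 2 and 5, no ten
```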
Now -- oh gosh, so today's a sort of, like, practical day. Can we think together how expensive is elimination? How many operations do we do? So this is now a kind of new topic which I didn't list as -- on the program, but here it came.
How many operations on an n by n matrix A.
I mean, it's a very practical question.
Can we solve systems of order a thousand in a second or a minute or a week? Can we solve systems of order a million in a second or an hour or a week? I mean, what's the -- if it's n by n, we often want to take n bigger. I mean, we've put in more information. We make the whole thing more accurate with the bigger matrix. But it's more expensive, too, and the question is how much more expensive? If I have matrices of order a hundred.
Let's say a hundred by a hundred.
Let me take n to be a hundred. Say n equal a hundred.
How many steps are we doing? How many operations are we actually doing that we -- And let's suppose there aren't any zeroes, because of course if a matrix has got a lot of zeroes in good places, we don't have to do those operations, and, it'll be much faster.
But -- so just think for a moment about the first step.
So here's our matrix A, hundred by a hundred.
And the first step will be -- that column has got zeroes down here. So it's down to 99 by 99, right? That's really like the first stage of elimination, to get from this hundred by hundred non-zero matrix to this stage where the first pivot is sitting up here and the first row's okay, the first column is okay. So, eventually -- how many steps did that take? You see, I'm trying to get an idea. Is the answer proportional to n? Is the total number of steps in elimination, the total number, is it proportional to n -- in which case if I double n from a hundred to two hundred, does it take me twice as long? Does it square, so it would take me four times as long? Does it cube, so it would take me eight times as long? Or is it n factorial, so it would take me a hundred times as long? I think, you know, from a practical point of view, we have to have some idea of the cost here. So these are the questions that I'm -- let me ask those questions again.
Is it proportional -- does it go like n, like n squared, like n cubed -- or some higher power of n? Like n factorial, where every step up multiplies by a hundred and then by a hundred and one and then by a hundred and two -- which is it? Okay, so the only way I know to answer that is to think through what we actually had to do. Okay.
So what was the cost here? Well, let's see.
What do I mean by an operation? I guess I mean, well an addition or -- yeah. No big deal.
I guess I mean an addition or a subtraction or a multiplication or a division. Okay.
And actually, what operation am I doing all the time? When I multiply row one by a multiplier l and I subtract from row six, what's happening there individually? What's going on? If I multiply -- I do a multiplication by l and then a subtraction.
So I guess operation -- Can I count that for the moment as, like, one operation? Or you may want to count them separately. The typical operation is multiply plus a subtract. So if I count those together, my answer's going to come out half as many as if -- I mean, if I count them separately, I'd have a certain number of multiplies, certain number of subtracts.
That's really what I want to do. Okay.
How many have I got here? So, I think -- let's see.
It's about -- well, how many, roughly? How many operations to get from here to here? Well, maybe one way to look at it is all these numbers had to get changed. The first row didn't get changed, but all the other rows got changed at this step.
So this step -- well, I guess maybe -- shall I say it cost about a hundred squared. I mean, if I had changed the first row too, then it would have been exactly a hundred squared, because that's how many numbers are here. A hundred squared numbers is the total count of the entries, and all but this insignificant first row got changed. So I would say about a hundred squared. Okay.
Now, what about the next step? So now the first row is fine.
The second row is fine. And these zeroes are all fine, so I'm changing the rest -- what's up with the second step? And then you're with me.
Roughly, what's the cost? If this first step cost a hundred squared, about, operations then this one, which is really working on this guy to produce this, costs about what? How many operations to fix? About ninety-nine squared, or ninety-nine times ninety-eight. But less, right? Less, because our problem's getting smaller.
About ninety-nine squared. And then I go down and down and the next one will be ninety-eight squared, the next ninety-seven squared and finally I'm down around one squared or -- where it's just like the little numbers. The big numbers are here.
So the number of operations is about n squared plus -- that was n, right? n was a hundred -- n squared for the first step, then n minus one squared, then n minus two squared, finally down to three squared and two squared and even one squared.
No way I should have written that -- squeezed that in.
Let me try it, so the count is n squared plus n minus one squared plus -- all the way down to one squared. That's a pretty decent count. Admittedly, we didn't catch every single tiny operation, but we got the right leading term here. And what do those add up to? Okay, so now we're coming to the punch of this question, this operation count. So these are the operations on the left side, on the matrix A, to finally get to U.
And anybody -- so which of these quantities is the right ballpark for that count? If I add a hundred squared to ninety-nine squared to ninety-eight squared, ninety-seven squared, all the way down to two squared and then one squared, what have I got, about? It's just one of these -- let's identify it first. Is it n? Certainly not. Is it n factorial? No. If it was n factorial, we would -- with determinants, it is n factorial.
I'll put in a bad mark against determinants, because that -- okay, so what is it? It's n -- well, this is the answer. It's this order -- n cubed. It's like I have n terms, right? I've got n terms in this sum.
And the biggest one is n squared.
So the worst it could be would be n cubed, but it's not quite that bad -- it's about one third of n cubed. That's the magic operation count. Somehow that one third takes account of the fact that the numbers are getting smaller.
If they weren't getting smaller, we would have n terms times n squared, and it would be exactly n cubed. But our numbers are getting smaller -- actually, do you remember where that one third comes from? I'll even allow a mention of calculus.
So calculus can be mentioned, integration can be mentioned now in the next minute and not again for weeks.
It's not that I don't like 18.01, but 18.06 is better.
Okay. So -- so what's -- what's the calculus formula that this looks like? It looks like -- if we were in calculus, instead of summing stuff, we would integrate. So I would integrate x squared and I would get one third x cubed.
So if that was like an integral from one to n of x squared dx, then the answer would be one third n cubed -- and it's correct for the sum also, because that's, like, the whole point of calculus. The whole point of calculus is -- oh, I don't want to tell you the whole -- I mean, you know the whole point of calculus.
Calculus is like sums except it's continuous.
Okay. And algebra is discrete.
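For the record, the exact sum and the integral it resembles:

\[ 1^2 + 2^2 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{6} \approx \int_0^n x^2\,dx = \frac{n^3}{3}. \]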
Okay. So the answer is one third n cubed. Now I'll just -- let me say one more thing about operations. What about the right-hand side? This was what it cost on the left side.
This is on A, because this is A that we're working with. But what's the cost on the extra column vector b that's hanging around here? So b costs a lot less, obviously, because it's just one column. We carry it through elimination and then actually we do back substitution.
Let me just tell you the answer there.
It's n squared. So the cost for every right-hand side is n squared. So let me -- I'll just fit that in here -- the cost of b turns out to be n squared. So you see, if we have, as we often have, a matrix A and several right-hand sides, then we pay the price on A, the higher price on A, to get it split up into L and U, to do elimination on A, but then we can process every right-hand side at low cost. Okay.
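A rough Python sketch of this count: run elimination on a random hundred by hundred matrix (made diagonally dominant so no row exchanges are needed), tally one multiply-subtract pair per entry touched, and compare with one third of n cubed for A and n squared for the right-hand side. The function name and the exact bookkeeping here are just for illustration.

```python
import numpy as np

def elimination_cost(n, seed=0):
    """Eliminate a random n-by-n system, counting multiply-subtract pairs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant: no row exchanges
    b = rng.random(n)
    ops_A = ops_b = 0

    # Forward elimination on A, carrying b along.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # the multiplier
            A[i, k:] -= m * A[k, k:]     # one multiply-subtract per entry touched
            ops_A += n - k
            b[i] -= m * b[k]
            ops_b += 1

    # Back substitution, also charged to the right-hand side.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        ops_b += n - i
    return ops_A, ops_b

ops_A, ops_b = elimination_cost(100)
print(ops_A, round(100**3 / 3))   # about 333,300 versus 333,333
print(ops_b, 100**2)              # exactly 10,000 with this bookkeeping
```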
So the -- We really have discussed the most fundamental algorithm for a system of equations.
Okay. So, I'm ready to allow row exchanges. I'm ready to allow -- now what happens to this whole -- today's lecture if there are row exchanges? When would there be row exchanges? There are row -- we need to do row exchanges if a zero shows up in the pivot position.
So moving then into the final section of this chapter, which is about transposes -- well, we've already seen some transposes, and -- the title of this section is, "Transposes and Permutations." Okay.
So can I say, now, where does a permutation come in? Let me talk a little about permutations. So that'll be up here, permutations. So these are the matrices that I need to do row exchanges. And I may have to do two row exchanges. Can you invent a matrix where I would have to do two row exchanges and then would come out fine? Yeah let's just, for the heck of it -- so I'll put it here.
Let me do three by threes. Actually, why don't I just plain list all the three by three permutation matrices.
There's a nice little group of them.
What are all the matrices that exchange no rows at all? Well, I'll include the identity.
So that's a permutation matrix that doesn't do anything.
Now what's the permutation matrix that exchanges -- what is P12? The permutation matrix that exchanges rows one and two would be -- 0 1 0 -- 1 0 0, right. I just exchanged those rows of the identity and I've got it. Okay.
Actually, I'll -- yes. Let me clutter this up.
Okay. Give me a complete list of all the row exchange matrices. So what are they? They're all the ways I can take the identity matrix and rearrange its rows. How many will there be? How many three by three permutation matrices? Shall we keep going and get the answer? So tell me some more. Which one are you going to do now? I'm going to switch rows one and -- one and three, okay. One and three, leaving two alone. Okay.
Now what else? Switch -- what would be the next easy one -- is switch two and three, good. So I'll leave one zero zero alone and I'll switch -- I'll move number three up and number two down.
Okay. Those are the ones that just exchange single -- a pair of rows.
This guy, this guy and this guy each exchange a pair of rows, but now there are more possibilities. What's left? So tell -- there is another one here. What's that? It's going to move -- it's going to change all rows, right? Where shall we put them? So -- give me a first row. STUDENT: Zero one zero? STRANG: Zero one zero. Okay, now a second row -- say zero zero one and the third guy one zero zero.
So that is like a cycle. Row two moves up to row one, row three moves up to row two, and row one moves down to row three. And there's one more, which is -- let's see. What's left? I'm lost. STUDENT: Is it zero zero one? STRANG: Okay. STUDENT: One zero zero.
STRANG: One zero zero, okay.
Zero one zero. Okay.
Great. Six.
Six of them. Six P.
And they're sort of nice, because what happens if I multiply two of them together? If I multiply two of these matrices together, what can you tell me about the answer? It's on the list. If I do some row exchanges and then I do some more row exchanges, then all together I've done row exchanges. So if I multiply, I stay on the list. And if I invert, then I'm just doing row exchanges to get back again.
So the inverses are all there. It's a little family of matrices that -- they've got their own -- if I multiply, I'm still inside this group. If I invert I'm inside this group -- actually, group is the right name for this subject. It's a group of six matrices, and what about the inverses? What's the inverse of this guy, for example? What's the inverse -- if I exchange rows one and two, what's the inverse matrix? Just tell me fast. The inverse of that matrix is -- if I exchange rows one and two, then what I should do to get back to where I started is the same thing.
So this thing is its own inverse.
That's probably its own inverse.
This is probably not -- actually, I think these are inverses of each other. Oh, yeah, actually -- the inverse is the transpose. There's a curious fact about permutation matrices, that the inverses are the transposes. And final moment -- how many are there if I -- how many four by four permutations? So let me take four by four -- how many Ps? Well, okay.
Make a good guess. Twenty four, right. Twenty four Ps.
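A short Python sketch that lists all the permutation matrices of a given size (the helper name is just for illustration), counts them, and checks that each inverse is the transpose and that products stay on the list:

```python
import numpy as np
from itertools import permutations

def permutation_matrices(n):
    """Every way to rearrange the rows of the n-by-n identity."""
    return [np.eye(n)[list(p)] for p in permutations(range(n))]

Ps = permutation_matrices(3)
print(len(Ps))                          # 6
print(len(permutation_matrices(4)))     # 24

# Each inverse is the transpose, and products of two stay on the list.
print(all(np.allclose(P @ P.T, np.eye(3)) for P in Ps))
print(all(any(np.allclose(P @ Q, R) for R in Ps) for P in Ps for Q in Ps))
```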
Okay. So, we've got these permutation matrices, and in the next lecture, we'll use them.