WEBVTT
00:00:03.080 --> 00:00:06.160
There’s two things here, the main topic and the meta topic.
00:00:06.600 --> 00:00:10.640
So the main topic is gonna be this really neat algorithm for solving two-dimensional equations.
00:00:11.000 --> 00:00:16.720
Things that have two unknown real numbers, or those involving a single unknown that’s a complex number.
00:00:17.360 --> 00:00:26.240
So for example, if you wanted to find the complex roots of a polynomial, or maybe some of those million dollar zeros of the Riemann zeta function, this algorithm would do it for you.
00:00:26.920 --> 00:00:29.880
And this method is super pretty, since a lot of colors are involved.
00:00:30.480 --> 00:00:40.080
And more importantly, the core underlying idea applies to all sorts of math well beyond this algorithm for solving equations, including a bit of topology which I’ll talk about afterwards.
00:00:40.720 --> 00:00:47.320
But what really makes this worth 20 minutes or so of your time is that it illustrates a lesson much more generally useful throughout math.
00:00:47.880 --> 00:00:51.720
Which is, try to define constructs that compose nicely with each other.
00:00:52.400 --> 00:00:54.360
You’ll see what I mean by that as the story progresses.
00:00:55.440 --> 00:01:03.720
To motivate the case with functions that have 2D inputs and 2D outputs, let’s start off simpler, with functions that just take in a real number and spit out a real number.
00:01:05.080 --> 00:01:14.480
If you wanna know when a function, 𝑓 of 𝑥, equals some other function, 𝑔 of 𝑥, you might think of this as searching for when the graphs of these functions intersect, right?
00:01:15.040 --> 00:01:18.480
I mean that gives you an input where both functions have the same output.
00:01:19.280 --> 00:01:25.240
To take a very simple example, imagine 𝑓 of 𝑥 is 𝑥 squared and 𝑔 of 𝑥 is the constant function two.
00:01:25.960 --> 00:01:28.480
In other words, you want to find the square root of two.
00:01:29.440 --> 00:01:36.760
Even if you know almost nothing about finding square roots, you can probably see that one squared is less than two and two squared is bigger than two.
00:01:37.320 --> 00:01:41.880
So you realize, “Ah, there’s gonna be some solution in between those two values.”
00:01:42.600 --> 00:01:47.000
And then, if you wanted to narrow it down further, maybe you try squaring the halfway point, 1.5.
00:01:47.760 --> 00:01:50.280
And this comes out to be 2.25, a little bit too high.
00:01:50.600 --> 00:01:55.440
So you would focus on the region between one and 1.5, and so on.
00:01:55.440 --> 00:01:57.160
You can probably see how this would keep going.
00:01:57.560 --> 00:02:05.360
You keep computing the value at the midpoint and then chopping your search space in half.
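As a side note for the curious, this midpoint-halving procedure is the classic bisection method. Here's a minimal Python sketch of it for the square-root-of-two example (the code and names are my own illustration, not something shown in the video):

```python
def bisect_sqrt2(lo=1.0, hi=2.0, tol=1e-12):
    """Approximate sqrt(2) by repeatedly halving an interval on which
    x*x - 2 changes sign (negative at lo, positive at hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * mid > 2:   # midpoint squared is too high: keep the lower half
            hi = mid
        else:               # too low (or exact): keep the upper half
            lo = mid
    return (lo + hi) / 2
```

Each pass halves the search space, so after about forty passes the interval is already narrower than 10⁻¹².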
00:02:05.360 --> 00:02:15.480
Now another way to think about this, which is gonna make it easier for us once we get up to higher dimensions, is to instead focus on the equivalent question for when the difference between these two functions is zero.
00:02:16.240 --> 00:02:21.080
In those terms, we found a region of inputs where that difference was negative on one end.
00:02:21.480 --> 00:02:23.000
And it was positive on another end.
00:02:23.880 --> 00:02:25.080
And then we split it into two.
00:02:25.480 --> 00:02:30.600
And the half that we narrowed our attention to was the one where the outermost points had varying signs.
00:02:31.360 --> 00:02:36.360
And like this, we were able to keep going forever, taking each region with varying signs on the border.
00:02:36.760 --> 00:02:39.000
Finding a smaller such region among its halves.
00:02:39.520 --> 00:02:44.080
Knowing that ultimately, we have to be narrowing in on a point which is gonna be exactly zero.
00:02:45.920 --> 00:02:51.440
So in short, solving equations can always be framed as finding when a certain function is equal to zero.
00:02:52.080 --> 00:02:53.800
And to do that, we have this heuristic.
00:02:54.120 --> 00:02:59.760
If 𝑓 is positive at one point and negative at another point, you can find some place in between where it’s zero.
00:03:00.360 --> 00:03:03.000
At least if everything changes smoothly with no sudden jumps.
00:03:03.800 --> 00:03:14.200
Now the amazing thing that I wanna show you is that you can extend this kind of thinking to two-dimensional equations, equations between functions whose inputs and outputs are both two-dimensional.
00:03:14.880 --> 00:03:16.760
For example, complex numbers are 2D.
00:03:17.000 --> 00:03:21.200
And this tool that we’re developing is perfect for finding solutions to complex equations.
00:03:22.320 --> 00:03:28.280
Now, since we’re gonna be talking about these 2D functions so much, let’s take a brief sidestep and consider how to illustrate them.
00:03:28.920 --> 00:03:33.800
I mean, graphing a function with a 2D input and a 2D output would require four dimensions.
00:03:34.160 --> 00:03:37.880
And that’s not really gonna work so well in our 3D world or on our 2D screens.
00:03:38.360 --> 00:03:40.200
But, we still have a couple good options.
00:03:40.920 --> 00:03:45.200
One is to just look at both the input space and the output space side by side.
00:03:45.920 --> 00:03:50.320
Each point in the input space moves to a particular point in the output space.
00:03:50.920 --> 00:03:56.240
And I can show how moving around that input point corresponds to certain movements in the output space.
00:03:56.960 --> 00:04:04.760
All of the functions we consider will be continuous, in the sense that small little changes to the input only correspond to small little changes in the output.
00:04:05.160 --> 00:04:06.320
There’s no sudden jumps.
00:04:07.160 --> 00:04:16.440
Now another option we have is to imagine the arrow from the origin of the output space to that output point and to attach a miniature version of that arrow to the input point.
00:04:17.280 --> 00:04:25.560
This can give us a sense at a glance for where a given input point goes or where many different input points go by drawing the full vector field.
00:04:27.360 --> 00:04:30.480
And unfortunately, when you do this at a lot of points, it can get pretty cluttered.
00:04:30.480 --> 00:04:32.920
So here, let me make all of the arrows the same size.
00:04:33.280 --> 00:04:36.880
And what this means is we can get a sense just of the direction of each output point.
00:04:37.800 --> 00:04:46.280
But perhaps the prettiest way to illustrate two-dimensional functions, and the one we’ll be using most in this video, is to associate each point in that output space with a color.
00:04:47.240 --> 00:04:54.560
Here, we’ve used hues — that is, where the color falls along a rainbow or a color wheel — to correspond to the direction away from the origin.
00:04:55.240 --> 00:04:59.120
And we’re using darkness or brightness to correspond to the distance from the origin.
00:05:00.040 --> 00:05:04.840
For example, focusing just on this ray of outputs, all of these points are red.
00:05:05.440 --> 00:05:09.560
But the ones closer to the origin are a little darker and the ones further away are a little brighter.
00:05:10.280 --> 00:05:14.000
And focusing just on this ray of outputs, all of the points are green.
00:05:14.600 --> 00:05:19.440
And again, closer to the origin means darker; farther away means lighter, and so on.
00:05:19.720 --> 00:05:24.720
All we’re doing here is assigning a specific color to each direction, all changing continuously.
00:05:25.760 --> 00:05:28.560
You might notice the darkness and brightness differences here are pretty subtle.
00:05:28.880 --> 00:05:34.680
But for this video, all we care about is the direction of outputs not the magnitudes, the hues not the brightness.
00:05:35.240 --> 00:05:43.120
The one important thing about brightness for you to notice is that near the origin, which has no particular direction, all of the colors fade to black.
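If you're curious how such a coloring might be computed, here's one possible scheme in Python (my own guess at the idea, not necessarily the exact formula used on screen): hue comes from the output's direction, brightness from its distance to the origin.

```python
import cmath
import colorsys

def output_color(w):
    """Map a complex output w to an (r, g, b) color: the hue tracks the
    direction of w, and the brightness grows with the distance from the
    origin, so w = 0 comes out pure black."""
    hue = (cmath.phase(w) / (2 * cmath.pi)) % 1.0   # direction -> position on the color wheel
    brightness = abs(w) / (abs(w) + 1)              # 0 at the origin, approaching 1 far away
    return colorsys.hsv_to_rgb(hue, 1.0, brightness)
```

Outputs pointing in the same direction share a hue; only their brightness differs.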
00:05:44.360 --> 00:05:48.320
So if we’re thinking about functions, now that we’ve decided on a color for each output,
00:05:48.760 --> 00:05:57.040
we can visualize 2D functions by coloring each point in the input space based on the color of the point where it lands in the output space.
00:05:57.880 --> 00:06:03.880
I like to imagine many different points from that input space hopping over to their corresponding outputs in the output space.
00:06:04.200 --> 00:06:06.920
Then getting painted based on the color of the point where they land.
00:06:07.360 --> 00:06:10.280
And then hopping back to where they came from in the input space.
00:06:11.640 --> 00:06:18.360
Doing this for every point in the input space, you can get a sense just by looking at that input space for roughly where the function takes each point.
00:06:19.120 --> 00:06:28.080
For example, this stripe of pink points on the left tells us that all of those points get mapped somewhere in the pink direction, that lower left of the output space.
00:06:29.800 --> 00:06:34.840
Also, those three points which are black with lots of colors around them are the ones that go to zero.
00:06:39.680 --> 00:06:48.800
Alright, so just like the 1D case, solving equations of two-dimensional functions can always be reframed by asking when a certain function is equal to zero.
00:06:49.640 --> 00:06:51.120
So that’s our challenge right now.
00:06:51.600 --> 00:06:56.800
Create an algorithm that finds which input points of a given 2D function go to zero.
00:07:00.800 --> 00:07:07.800
Now, you might point out that if you’re looking at a color map like this, by seeing those black dots, you already know where the zeros of the function are.
00:07:08.560 --> 00:07:09.920
So, does that count?
00:07:10.920 --> 00:07:17.480
Well, keep in mind that to create a diagram like this, we had the computer compute the function at all of the pixels on the plane.
00:07:18.160 --> 00:07:27.720
But our goal is to find a more efficient algorithm that only requires computing the function at as few points as possible, only having a limited view of the colors, so to speak.
00:07:29.520 --> 00:07:37.600
And also, from a more theoretical standpoint, it’d be nice to have a general construct that tells us the conditions for whether or not a zero exists inside a given region.
00:07:38.960 --> 00:07:47.840
Now remember, in one dimension, the main insight was that if a continuous function is positive at one point and negative at another, then somewhere in between, it must be zero.
00:07:48.840 --> 00:07:50.880
So how do we extend that into two dimensions?
00:07:50.880 --> 00:07:53.680
We need some sort of analog of talking about signs.
00:07:54.640 --> 00:07:58.280
Well, one way to think about what signs even are is directions.
00:07:58.760 --> 00:08:01.240
Positive means you’re pointing to the right along the number line.
00:08:01.760 --> 00:08:03.520
And negative means you’re pointing to the left.
00:08:04.160 --> 00:08:06.160
Two-dimensional quantities also have direction.
00:08:06.520 --> 00:08:08.400
But for them, the options are much wider.
00:08:08.760 --> 00:08:11.840
They can point anywhere along a whole circle of possibilities.
00:08:12.680 --> 00:08:20.840
So, in the same way that in one dimension we asked whether a given function is positive or negative on the boundary of a range, which is just two points,
00:08:21.440 --> 00:08:30.800
For 2D functions, we’re gonna be looking at the boundary of a region, which is a loop, and ask about the direction of the function’s output along that boundary.
00:08:33.720 --> 00:08:41.080
For example, we see that along this loop around this zero, the output goes through every possible direction, all of the colors of the rainbow.
00:08:41.360 --> 00:08:44.920
Red, yellow, green, blue, and back to red, and everything in between along the way.
00:08:45.720 --> 00:08:51.040
But along this loop over here, with no zeros inside of it, the output doesn’t go through every color.
00:08:51.400 --> 00:08:54.560
It goes through some of the orangish ones but never, say, green or blue.
00:08:55.320 --> 00:08:56.000
And this is promising.
00:08:56.440 --> 00:08:59.640
It looks a lot like how things worked in one dimension.
00:09:00.040 --> 00:09:04.440
Maybe, in the same way that a 1D function taking both possible signs on the boundary of a 1D region
00:09:04.720 --> 00:09:06.200
meant there was a zero somewhere inside,
00:09:06.880 --> 00:09:19.760
we might hypothesize that if a 2D function hits outputs of all possible directions, all possible colors, along the boundary of a 2D region, then somewhere inside that region it must go to zero.
00:09:20.880 --> 00:09:22.120
So, that’s our guess.
00:09:22.120 --> 00:09:26.080
And take a moment to think about whether this should be true, and if so, why.
00:09:27.600 --> 00:09:36.200
If we start thinking about a tiny loop around some input point, we know that since everything is continuous, our function takes it to some tiny loop near the corresponding output.
00:09:37.160 --> 00:09:40.400
But look, for most tiny loops, the output barely varies in color.
00:09:41.080 --> 00:09:49.320
If you pick any output point other than zero and draw a sufficiently tight loop near it, the loop’s colors are all gonna be about the same color as that point.
00:09:49.880 --> 00:09:52.120
A tight loop over here is all bluish.
00:09:52.600 --> 00:09:55.080
A tight loop over here is gonna be all yellowish.
00:09:55.520 --> 00:09:57.600
You certainly aren’t gonna get every color of the rainbow.
00:09:58.400 --> 00:10:06.080
The only point you can tighten loops around while still getting all of the colors is the colorless origin, zero itself.
00:10:07.040 --> 00:10:16.360
So it is indeed the case that if you have loops going through every color of the rainbow, tightening and tightening, narrowing in on a point, then that point must in fact be a zero.
00:10:17.280 --> 00:10:21.680
And so, let’s set up a 2D equation solver just like our one-dimensional equation solver.
00:10:22.280 --> 00:10:26.680
When we find a large region whose border goes through every color, split it into two.
00:10:27.120 --> 00:10:29.760
And then look at the colors on the boundary of each half.
00:10:30.560 --> 00:10:34.600
In the example shown here, the border on the left half doesn’t actually go through all colors.
00:10:34.880 --> 00:10:38.520
There are no points that map to the orangish-yellowish directions, for example.
00:10:39.000 --> 00:10:42.560
So I’ll grey out this area as a way of saying we don’t wanna search it any further.
00:10:43.360 --> 00:10:50.520
Now the right half does go through all of the colors, spends a lot of time in the green direction then passes through yellow, orange, red, as well as blue, violet, pink.
00:10:51.560 --> 00:10:57.200
Now remember, what that means is that points of this boundary get mapped to outputs of all possible directions.
00:10:57.720 --> 00:11:01.720
So we’ll explore it further, subdividing again and checking the boundary for each region.
00:11:02.800 --> 00:11:06.320
And the boundary of the top right is all green, so we’ll stop searching there.
00:11:06.960 --> 00:11:09.480
But the bottom is colorful enough to deserve a subdivision.
00:11:10.280 --> 00:11:11.880
And just continue like this.
00:11:12.320 --> 00:11:19.000
Check which subregion has a boundary covering all possible colors, meaning points of that boundary get mapped to all possible directions.
00:11:19.560 --> 00:11:22.360
And keep chopping those regions in half like we did for the one-dimensional case.
00:11:23.000 --> 00:11:27.400
Eventually leading us to a zero of the fun — oh, actually hang on a second.
00:11:28.680 --> 00:11:30.200
What happened here?
00:11:30.200 --> 00:11:34.120
Neither of those last subdivisions on the bottom right passed through all the colors.
00:11:34.520 --> 00:11:37.560
So our algorithm stopped cause it didn’t wanna search through either of those.
00:11:38.080 --> 00:11:39.880
But it also didn’t find a zero.
00:11:41.240 --> 00:11:43.080
Okay, clearly something’s wrong here.
00:11:43.560 --> 00:11:44.200
And that’s okay!
00:11:44.400 --> 00:11:46.400
Being wrong is a regular part of doing math.
00:11:47.120 --> 00:11:48.920
If we look back, we had this hypothesis.
00:11:48.920 --> 00:11:50.760
And it led us to this proposed algorithm.
00:11:51.280 --> 00:11:52.920
So, we were mistaken somewhere.
00:11:53.560 --> 00:11:55.920
And being good at math is not about being right the first time.
00:11:56.280 --> 00:12:01.240
It’s about having the resilience to carefully look back and understand the mistakes and understand how to fix them.
00:12:02.480 --> 00:12:06.160
Now the problem here is that we had a region whose border went through every color.
00:12:06.640 --> 00:12:10.320
But when we split it in the middle, neither subregion’s border went through every color.
00:12:10.800 --> 00:12:13.000
We had no options for where to keep searching next.
00:12:13.360 --> 00:12:14.520
And that broke the zero finder.
00:12:15.360 --> 00:12:17.440
Now in one dimension, this sort of thing never happened.
00:12:17.960 --> 00:12:28.200
Any time you had an interval whose endpoints had different signs, if you split it up, you know that you’re guaranteed to get some subinterval whose endpoints also have different signs.
00:12:29.000 --> 00:12:38.280
Or put another way, any time you have two intervals whose endpoints don’t change signs, if you combine them, you’ll get a bigger interval whose endpoints also don’t change sign.
00:12:39.080 --> 00:12:48.960
But in two dimensions, it’s possible to find two regions whose borders don’t go through every color, yet which combine into a bigger region whose border does go through every color.
00:12:49.840 --> 00:12:53.480
And in just this way, our proposed zero-finding algorithm broke.
00:12:54.360 --> 00:13:01.880
In fact, if you think about it, you can find a big loop that goes through every possible color without there being a zero inside of it.
00:13:03.040 --> 00:13:12.040
Now that’s not to say that we were wrong in our claims about tiny loops, when we said that a forever-narrowing loop going through every color has to be narrowing in on a zero.
00:13:12.920 --> 00:13:21.920
But what made a mess of things for us is that this “does my border go through every color or not” property doesn’t combine in a nice predictable way when you combine regions.
00:13:22.960 --> 00:13:23.400
But don’t worry!
00:13:23.880 --> 00:13:30.360
It turns out, we can modify this slightly to a more sophisticated property that does combine to give us what we want.
00:13:38.480 --> 00:13:43.120
The idea is that instead of simply asking whether each color shows up at some point along the loop,
00:13:43.680 --> 00:13:47.680
let’s keep more careful track of how those colors change as we walk around that loop.
00:13:48.560 --> 00:13:49.800
Let me show you what I mean with an example.
00:13:50.400 --> 00:13:53.080
I’ll keep a little color wheel up here in the corner to help us keep track.
00:13:54.040 --> 00:14:04.800
When the colors along a path of inputs move through the rainbow in a specific direction — from red to yellow, yellow to green, green to blue, or blue to red — the output is swinging clockwise.
00:14:05.640 --> 00:14:15.880
But on the other hand, if the colors move the other way through the rainbow — from blue to green, green to yellow, yellow to red, or red to blue — the output is swinging counterclockwise.
00:14:17.120 --> 00:14:23.560
So walking along this short path here, the colors wind a fifth of the way clockwise through the color wheel.
00:14:24.240 --> 00:14:30.200
And walking along this path here, the colors wind another fifth of the way clockwise through the color wheel.
00:14:31.200 --> 00:14:38.640
And of course, that means that if you go through both paths, one after another, the colors wind a total of two-fifths of a full turn clockwise.
00:14:39.360 --> 00:14:43.200
The total amount of winding just adds up, and this is gonna be key.
00:14:43.200 --> 00:14:46.160
This is the kind of straightforward combining that will be useful to us.
00:14:47.000 --> 00:14:57.880
Now when I say total amount of winding, I want you to imagine an old-fashioned odometer that ticks forward as the arrow spins clockwise but backwards as the arrow spins counterclockwise.
00:14:58.640 --> 00:15:02.280
So counterclockwise winding counts as negative clockwise winding.
00:15:03.000 --> 00:15:04.320
The outputs may turn a lot.
00:15:04.720 --> 00:15:08.040
But if some of that turning is in the opposite direction, it cancels out.
00:15:08.920 --> 00:15:16.200
For example, if you move forward along this path and then move backwards along that same path, the total amount of winding ends up just being zero.
00:15:16.680 --> 00:15:20.800
The backwards movement literally rewinds through the previously seen colors.
00:15:21.240 --> 00:15:24.920
Reversing all the previous winding and returning the odometer back to where it started.
00:15:26.280 --> 00:15:29.920
For our purposes, we’ll care most about looking at the winding along loops.
00:15:30.480 --> 00:15:33.720
For example, let’s say we walk around this entire loop clockwise.
00:15:34.240 --> 00:15:38.760
The outputs that we come across wind around a total of three full clockwise turns.
00:15:39.440 --> 00:15:46.160
The colors swung through the rainbow, ROYGBIV, in order, from red to red again and then again and again.
00:15:47.080 --> 00:15:52.640
In the jargon mathematicians use, we say that along this loop, the total winding number is three.
00:15:53.880 --> 00:15:56.040
Now for other loops, it could be any other whole number.
00:15:56.520 --> 00:16:01.720
Maybe a larger one if the output swings around many times as the input walks around a single loop.
00:16:02.400 --> 00:16:05.960
Or could be a smaller number if the output only swings around once or twice.
00:16:06.960 --> 00:16:09.320
Or that winding number could even be a negative integer.
00:16:09.760 --> 00:16:14.120
If the output was swinging counterclockwise as we walk clockwise around the loop.
00:16:15.080 --> 00:16:18.760
But along any loop, this total amount of winding has to be a whole number.
00:16:20.880 --> 00:16:24.600
I mean, by the time you get back to where you started, you’ll have the same output that you started with.
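This bookkeeping can be sketched concretely (my own illustration; the function name is made up): given nonzero complex outputs sampled along a closed loop, add up the small signed angle changes between consecutive samples, and the total, divided by a full turn, always lands on a whole number.

```python
import cmath

def winding_number(outputs):
    """Net number of turns a closed loop of nonzero complex outputs makes
    around the origin. Counterclockwise counts as positive here (the video's
    clockwise odometer just flips the sign). Assumes consecutive samples are
    close enough that each true angle step is under half a turn."""
    closed = list(outputs) + [outputs[0]]          # come back to the starting output
    total = sum(cmath.phase(b / a)                 # signed angle from a to b, in (-pi, pi]
                for a, b in zip(closed, closed[1:]))
    return round(total / (2 * cmath.pi))           # total winding, a whole number
```

For example, outputs sampled from a loop that circles the origin three times give a winding number of 3, and walking the same samples in reverse rewinds it to -3.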
00:16:26.200 --> 00:16:34.400
Incidentally, if a path actually contains a point where the output is precisely zero, then technically you can’t define a winding number along that path.
00:16:34.720 --> 00:16:36.560
Since the output has no particular direction.
00:16:37.400 --> 00:16:41.080
Now this isn’t gonna be a problem for us because our whole goal is to find zeros.
00:16:41.480 --> 00:16:43.840
So if this ever comes up, we just lucked out early.
00:16:44.800 --> 00:16:50.600
Alright, so the main thing to notice about these winding numbers is that they add up nicely when you combine paths into bigger paths.
00:16:54.960 --> 00:17:02.440
But what we really want is for the winding numbers along the borders of regions to add up nicely when we combine regions to make bigger regions.
00:17:03.080 --> 00:17:04.520
So do we have that property?
00:17:08.160 --> 00:17:08.920
Well, take a look!
00:17:09.560 --> 00:17:16.960
The winding number as we go clockwise around this region on the left is the sum of the winding numbers from these four paths.
00:17:17.720 --> 00:17:24.360
And the winding as we go clockwise around this region on the right is the sum of the winding numbers from these four paths.
00:17:25.240 --> 00:17:32.440
And when we combine those two regions into a bigger one, most of those paths become part of the clockwise border of the bigger region.
00:17:33.640 --> 00:17:35.040
And as for those two paths that don’t?
00:17:35.640 --> 00:17:37.120
Well, they cancel out perfectly.
00:17:37.480 --> 00:17:41.760
One of them is just the reverse, the rewinding, of the other one like we saw before.
00:17:42.560 --> 00:17:47.600
So the winding numbers along borders of regions add up in just the way that we want them to.
00:17:48.600 --> 00:17:54.760
Also, side note, this reasoning about oriented borders adding up nicely like this comes up a lot in mathematics.
00:17:54.760 --> 00:17:56.600
And it often goes under the name Stokes’s Theorem.
00:17:57.200 --> 00:18:00.920
Those of you who’ve studied multivariable calculus might recognize it from that context.
00:18:02.800 --> 00:18:07.480
So now, finally, with winding numbers in hand, we can get back to our equation-solving goals.
00:18:08.000 --> 00:18:15.880
The problem with the region we saw earlier is that even though its border passed through all possible colors, the winding number was actually zero.
00:18:16.560 --> 00:18:19.680
The outputs wound around halfway through yellow to red.
00:18:20.120 --> 00:18:23.360
And then started going counterclockwise back the other direction.
00:18:23.720 --> 00:18:26.480
And continued going through blue and hitting red from the other way.
00:18:27.400 --> 00:18:30.840
All in such a way that the total winding netted out to be zero.
00:18:31.920 --> 00:18:37.760
But if you find a loop which not only hits every color but has the stronger condition of a nonzero winding number,
00:18:38.280 --> 00:18:45.080
then if you were to split it in half, you’re guaranteed that at least one of those halves has a nonzero winding number as well.
00:18:45.600 --> 00:18:47.760
Since things add up nicely in the way we want them to.
00:18:48.520 --> 00:18:53.400
So in this way, you can keep going, narrowing in further and further onto one point.
00:18:54.200 --> 00:18:59.240
And as you narrow in onto a point, you’ll be doing so with tiny loops that have nonzero winding numbers.
00:18:59.480 --> 00:19:01.440
Which implies they go through all possible colors.
00:19:02.000 --> 00:19:06.360
And therefore, like I said before, the point they’re narrowing in on must be a zero.
00:19:07.400 --> 00:19:08.240
And that’s it!
00:19:08.680 --> 00:19:11.240
We have now created a two-dimensional equation solver.
00:19:11.560 --> 00:19:13.880
And this time, I promise, there are no bugs.
00:19:14.440 --> 00:19:17.320
Winding numbers are precisely the tool we needed to make this work.
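To make the whole procedure concrete, here's a minimal Python sketch of such a solver (the structure and all names are my own; I've used rectangles split along their longer side, and a counterclockwise winding convention rather than the video's clockwise one, which only flips signs):

```python
import cmath

def boundary(x0, x1, y0, y1, n):
    """Sample n points per edge, counterclockwise around the rectangle."""
    pts = []
    for k in range(n):
        t = k / n
        pts.append(complex(x0 + t * (x1 - x0), y0))   # bottom edge, left to right
    for k in range(n):
        t = k / n
        pts.append(complex(x1, y0 + t * (y1 - y0)))   # right edge, bottom to top
    for k in range(n):
        t = k / n
        pts.append(complex(x1 - t * (x1 - x0), y1))   # top edge, right to left
    for k in range(n):
        t = k / n
        pts.append(complex(x0, y1 - t * (y1 - y0)))   # left edge, top to bottom
    pts.append(pts[0])                                # close the loop
    return pts

def winding_number(f, rect, n=200):
    """Net turns of f's output as the input walks once around the rectangle.
    Assumes no sampled point lands exactly on a zero of f."""
    vals = [f(p) for p in boundary(*rect, n)]
    total = sum(cmath.phase(b / a) for a, b in zip(vals, vals[1:]))
    return round(total / (2 * cmath.pi))

def find_zero(f, rect, tol=1e-6):
    """Shrink a rectangle with nonzero winding number down onto a zero of f."""
    x0, x1, y0, y1 = rect
    while max(x1 - x0, y1 - y0) > tol:
        if x1 - x0 >= y1 - y0:                        # split the longer side in half
            xm = (x0 + x1) / 2
            halves = [(x0, xm, y0, y1), (xm, x1, y0, y1)]
        else:
            ym = (y0 + y1) / 2
            halves = [(x0, x1, y0, ym), (x0, x1, ym, y1)]
        for half in halves:                           # winding numbers add, so at least
            if winding_number(f, half) != 0:          # one half must still be nonzero
                x0, x1, y0, y1 = half
                break
        else:
            raise RuntimeError("a zero may lie on a split line; try a nudged region")
    return complex((x0 + x1) / 2, (y0 + y1) / 2)
```

For instance, `find_zero(lambda z: z**3 - 1, (-2.1, 2.3, -2.2, 1.9))` homes in on one of the three cube roots of one; the slightly asymmetric region bounds are chosen so that no split line passes exactly through a zero.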
00:19:18.120 --> 00:19:26.880
We can now solve equations that look like where does 𝑓 of 𝑥 equal 𝑔 of 𝑥 in two dimensions just by considering how the difference between 𝑓 and 𝑔 winds around.
00:19:27.560 --> 00:19:31.920
Whenever we have a loop whose winding number isn’t zero, we can run this algorithm on it.
00:19:32.200 --> 00:19:34.680
And we’re guaranteed to find a solution somewhere within it.
00:19:35.640 --> 00:19:39.800
And what’s more, just like in one dimension, this algorithm is incredibly efficient.
00:19:40.240 --> 00:19:43.560
We keep cutting the size of our region in half each round.
00:19:43.880 --> 00:19:45.840
Thus, quickly narrowing in on the zeros.
00:19:46.320 --> 00:19:54.440
And all the while, we only have to check the value of the function along points of these loops, rather than checking it on the many, many points of the interior.
00:19:55.160 --> 00:20:03.720
So in some sense, the overall work done is proportional only to the search space’s perimeter not the full area, which is amazing!
00:20:04.880 --> 00:20:09.800
Now once you understand what’s going on, it is weirdly mesmerizing to just watch this in action.
00:20:10.480 --> 00:20:12.960
Giving it some function and letting it search for zeros.
00:20:13.720 --> 00:20:16.320
Like I said before, complex numbers, they’re two-dimensional.
00:20:16.800 --> 00:20:19.560
So let’s apply it to some equation with complex numbers.
00:20:20.320 --> 00:20:27.960
For example, here’s the algorithm finding the zeros of the function 𝑥 to the fifth minus 𝑥 minus one over the complex plane.
00:20:28.880 --> 00:20:34.440
It started by considering a very large region around the origin, which ended up having a winding number of five.
00:20:35.320 --> 00:20:42.120
Each time you find a loop with a nonzero winding number, you split it in half and figure out the winding number of the two smaller loops.
00:20:42.680 --> 00:20:46.160
At least one of them is guaranteed to have a nonzero winding number.
00:20:46.640 --> 00:20:50.080
And when you see this, you know that there is a zero somewhere inside that smaller loop.
00:20:50.520 --> 00:20:53.400
So, you keep going in the same way, searching the smaller space.
00:20:54.240 --> 00:20:59.800
We also stop exploring a region if the path that we’re computing along happens to stumble once across a zero.
00:21:00.360 --> 00:21:02.760
Which actually happened once for this example on the right half here.
00:21:03.280 --> 00:21:06.560
Those rare occurrences interfere with our ability to compute winding numbers.
00:21:06.800 --> 00:21:08.080
But hey, we got a zero!
00:21:09.240 --> 00:21:12.880
And as for loops whose winding number is zero, you just don’t explore those further.
00:21:13.160 --> 00:21:15.000
Maybe they have a solution inside; maybe they don’t.
00:21:15.200 --> 00:21:16.280
We have no guarantees.
00:21:18.480 --> 00:21:23.920
And letting our equation solver continue in the same way, it eventually converges to lots of zeros for this polynomial.
00:21:25.960 --> 00:21:30.960
By the way, it is no coincidence that the total winding number in this example happened to be five.
00:21:31.600 --> 00:21:42.200
With complex numbers, the operation 𝑥 to the 𝑛 directly corresponds to walking around the output’s origin 𝑛 times as you walk around the input’s origin once.
00:21:44.960 --> 00:21:51.920
So for this polynomial, with large enough inputs, every term other than the leading term becomes insignificant in comparison.
00:21:52.640 --> 00:21:59.760
So any complex polynomial whose leading term is 𝑥 to the 𝑛 has a winding number of 𝑛 around a large-enough loop.
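You can check this numerically. Here's a small Python sketch (my own, with a made-up function name) that measures the winding number of a function's outputs around a circle of inputs; for x⁵ − x − 1 on a big circle, the x⁵ term dominates:

```python
import cmath

def circle_winding(f, radius, samples=2000):
    """Winding number of f's outputs as the input walks once
    counterclockwise around a circle of the given radius centered at 0."""
    pts = [radius * cmath.exp(2j * cmath.pi * k / samples) for k in range(samples)]
    vals = [f(p) for p in pts] + [f(pts[0])]          # close the loop
    total = sum(cmath.phase(b / a) for a, b in zip(vals, vals[1:]))
    return round(total / (2 * cmath.pi))
```

With radius 10, the winding number of x⁵ − x − 1 comes out to 5, matching the degree of the polynomial.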
00:22:00.640 --> 00:22:06.800
And in that way, our winding number technology actually guarantees that every complex polynomial has a zero.
00:22:07.560 --> 00:22:12.040
This is such an important fact that mathematicians call it the fundamental theorem of algebra.
00:22:13.800 --> 00:22:18.480
Having an algorithm for finding numerical solutions to equations like this is extremely practical.
00:22:18.960 --> 00:22:25.000
But the fundamental theorem of algebra is a good example of how these winding numbers are also quite useful on a theoretical level.
00:22:25.640 --> 00:22:30.840
Guaranteeing the existence of a solution to a broad class of equations under suitable conditions.
00:22:31.400 --> 00:22:33.800
Which is much more the kinda thing mathematicians like thinking about.