Friday, February 26, 2010

New Post

I'd had this update sitting on my hard drive, incomplete, for a few weeks, because being sick and finals and whatnot kept me out of the blogosphere. Now I'm on break, so I finally got around to finishing it. "Enjoy."

I've talked about topologies before, so if you are interested in reading my ramblings but don't know what a topology is, you can find it in some old post. Or you can just use Wikipedia like a normal person.

Anyway, many useful topologies are defined by a function called a metric, which just measures distance between two points. A metric d is defined as having three properties:

1) d(x,y) >= 0, and d(x,y) = 0 iff x = y (if we relax the "only if" part and allow distinct points to sit at distance 0, we get a pseudo-metric)

2) d(x,y) = d(y,x)

3) d(x,z) <= d(x,y) + d(y,z)

You can see that distance in the normal sense meets all of these conditions. In fact, property 3) is called the triangle inequality and you have to use it all the time in analysis.
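If a sanity check helps, here's a throwaway Python snippet (the names and setup are all mine) that spot-checks the three properties for the usual distance in the plane on a pile of random points. It isn't a proof of anything, of course; for the Euclidean distance the triangle inequality is the one that actually takes work, usually via Cauchy-Schwarz.

import math
import random

def d(p, q):
    # the usual Euclidean distance between two points of the plane
    return math.hypot(p[0] - q[0], p[1] - q[1])

random.seed(0)
for _ in range(1000):
    x, y, z = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(3)]
    assert d(x, y) >= 0                          # property 1: distances are never negative
    assert d(x, x) == 0                          # and d(x,y) = 0 when x = y
    assert d(x, y) == d(y, x)                    # property 2: symmetry
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-9   # property 3: triangle inequality (with float slack)
print("all checks passed")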

The way that a metric induces a topology is pretty straightforward. You just define an "open ball" as the set of all points less than some distance from a point, which you can call the center of the ball. Picturing this in Euclidean 3-space, known to non-math nerds as just 3 dimensions in the usual sense, open balls are spheres of some radius, not including any points on the surface. From here, you just say that a set is open if every point in the set has an open ball around it which is entirely contained in the set. In Euclidean space, this again translates into anything that is missing its boundary.

I'm not going to define boundary for you, although it is a rigorously defined thing in general topology, but it should be clear in Euclidean space what that means. For example, in Euclidean 1-space, henceforth known as the real line, the boundary of the interval [a,b) is {a,b}. So, this set is not open, since it contains some of its boundary. More rigorously, open balls in 1-space are just open intervals, so if you try to put an open ball around a, it will contain points to the left of a, which can't be in the original interval. Then that set can't be open. It's not closed, either, but I won't get into that.
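If you want to poke at that last claim computationally, here's a quick Python sketch (the helper names are just mine) that tries a shrinking list of radii and sees whether any open ball around a given point fits inside [0,1). Trying finitely many radii obviously isn't a proof, but it shows the picture: interior points succeed, the left endpoint never does.

def ball_inside_half_open(x, r, a, b):
    # is the open ball (x - r, x + r) contained in the half-open interval [a, b)?
    # for intervals this boils down to comparing endpoints
    return a <= x - r and x + r <= b

def has_ball_inside(x, a, b):
    # try a shrinking list of radii; an interior point succeeds quickly, but no
    # radius works for the left endpoint a, since (a - r, a + r) always pokes
    # out to the left of the interval
    return any(ball_inside_half_open(x, 10.0 ** (-k), a, b) for k in range(1, 8))

print(has_ball_inside(0.5, 0.0, 1.0))   # True: 0.5 is an interior point of [0, 1)
print(has_ball_inside(0.0, 0.0, 1.0))   # False: so [0, 1) is not open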

You'll note that the open sets we just defined do indeed give us a topology:

1) The empty set has no points, so it vacuously meets the condition to be open. Obviously, any open ball around a point will be contained in the entire space, so the entire space is also open.

2) If a point is in a union of open sets, it's in at least one of them, so it has an open ball contained in that set, which, being a subset of one of the sets, is a subset of the union. So a union of open sets is open.

3) If you intersect two open sets and choose a point in their intersection, then there is an open ball centered at that point corresponding to each of the two sets. Just choose the minimum of those two radii and you've got yourself the open ball you wanted: it sits inside both sets, hence inside the intersection.
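The same idea in throwaway Python, restricted to open intervals on the line so that an "open ball" is just a symmetric open interval (all the names here are mine):

def ball_inside_open(x, r, lo, hi):
    # is the open ball (x - r, x + r) contained in the open interval (lo, hi)?
    return lo <= x - r and x + r <= hi

# two open sets (just open intervals here) and a point of their intersection
U = (0.0, 3.0)
V = (2.0, 5.0)
x = 2.5

r_U = min(x - U[0], U[1] - x)   # a radius that keeps the ball inside U
r_V = min(x - V[0], V[1] - x)   # a radius that keeps the ball inside V
r = min(r_U, r_V)               # the smaller radius works for both at once

print(ball_inside_open(x, r, *U) and ball_inside_open(x, r, *V))   # True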

Anyway, I was just dragging my feet on undergrad stuff until now. It's time to step it up to measure theory, but not really. I'm just going to glide over all the annoying parts of trying to set up integration the Lebesgue way because it doesn't matter for what I want to get to.

Intuitively, how "big" is the interval [0,1]? It should have length 1-0 = 1, right? How about the interval (a,b)? If you said b-a, you know how to generalize, but you aren't Stieltjes (which just means you are normal). Anyway, how about the interval (a,b]? It should still be b-a, right? All we added was one point, and a single point should be infinitely small in a certain geometric sense.

Now, how about a union of intervals, like, say, [0,1] U [2,3]? It should just be 2, to my mind, since it's just two (disjoint) intervals of length 1. And how about [a,b] U [c,d], assuming c > b, so that those intervals are disjoint? If you said b-a+d-c, congrats. So I think we're clear on how to "measure" intervals, and I'll let you work out for yourself how to do it if you want to union a countable number of intervals.
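If it helps, in code this is nothing more than adding up lengths. A tiny Python sketch, with made-up names, for the finite disjoint case:

def length(interval):
    # the length of an interval with endpoints a <= b; whether the endpoints
    # are included makes no difference to the answer
    a, b = interval
    return b - a

def measure_of_disjoint_union(intervals):
    # total length of finitely many pairwise disjoint intervals
    return sum(length(i) for i in intervals)

print(measure_of_disjoint_union([(0, 1), (2, 3)]))     # 2
print(measure_of_disjoint_union([(-1, 4), (6, 6.5)]))  # 5.5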

But how do you measure something that's weird looking? For example, how big is the set of integers? Well, it should work out to be 0, since, if you think about it, they don't really take up any space on the line. They're just like inch marks on an infinitely long ruler. To cut to the chase, the way that Lebesgue thought to do this was to cover sets with collections of intervals (which we know how to measure) and call the measure of the set the infimum of the total lengths of all the coverings we can put around it. The infimum of an ordered set is just its greatest lower bound, for my intro analysis students out there. So, revisiting the integers question above with this new definition of measure makes it obvious, because we can certainly cover the integers with a bunch of very small intervals, arbitrarily small in total length, in fact. That's a sort of baby analysis problem for you, so I won't bother working out all the details without TeX handy.
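Here's a rough numerical version of that baby problem in Python (my own construction, not the only one): give each integer k an interval of length E/2^(|k|+2) centered at k, and notice that the total length of the cover stays below E no matter how many integers you include.

def total_cover_length(E, N):
    # cover each integer k in [-N, N] with an interval of length E / 2**(abs(k) + 2)
    # centered at k; return the sum of the lengths of all the covering intervals
    return sum(E / 2 ** (abs(k) + 2) for k in range(-N, N + 1))

for N in (10, 100, 1000):
    print(total_cover_length(1.0, N))   # creeps up toward 0.75, never reaching E = 1

Since the total length of a cover can be pushed below any E > 0, the greatest lower bound, i.e. the measure of the integers, has to be 0.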

Alright, so I jumped around a bit, going from metrics to measures, which seem from the names like they should be the same thing, but aren't. Now I'm going to tell you how to induce a metric from a measure, which as you recall, we used to induce a topology. Stick with me through all this terminology.

Some of the properties of a measure make it very attractive as a candidate to induce a topology not on, say, the real line itself, but rather on the set of its subsets, called its power set, since a measure measures sets, not points. How would we do that? Intuitively, sets are close together if they overlap quite a bit and far apart if they don't. Less intuitively, but still pretty clear, what we are really concerned with here is the parts of the sets that DON'T overlap. For example, [0,2] and [1,3] have as their intersection [1,2], but so do [-100,2] and [1,100], yet this second pair of sets seems much farther apart than the first. So what we want to measure is the symmetric difference of two sets, (A\B)U(B\A).
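To put some numbers on that, here's a rough Python sketch of this candidate "distance," restricted to single intervals so the symmetric difference is easy to compute (the function names are mine, and I'm using the identity m(A Δ B) = m(A) + m(B) - 2m(A ∩ B)):

def interval_measure(a, b):
    # length of the interval from a to b (zero if the interval is empty)
    return max(0.0, b - a)

def dist(A, B):
    # candidate "distance" between two intervals A and B: the measure of their
    # symmetric difference, which works out to m(A) + m(B) - 2*m(A intersect B)
    (a1, b1), (a2, b2) = A, B
    overlap = interval_measure(max(a1, a2), min(b1, b2))
    return interval_measure(a1, b1) + interval_measure(a2, b2) - 2 * overlap

print(dist((0, 2), (1, 3)))        # 2.0   -- substantial overlap relative to their size
print(dist((-100, 2), (1, 100)))   # 199.0 -- same intersection, but "farther apart"
print(dist((0, 2), (0, 2)))        # 0.0   -- every set is at distance 0 from itself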

This seems to work out nicely, as (A\A)U(A\A) is empty, so it has measure 0. Furthermore, using the symmetric difference makes our would-be metric symmetric. You can check the triangle inequality for yourself, but you'll note that there is a little problem with our definition.

What is the "distance" between [0,1]U{2} and [0,1]? These are not the same set (not the same "point," if we're thinking metrically), but their symmetric difference is {2}, which has measure zero, so according to our "metric" these two points (sets, if we're thinking measure-theoretically) are at distance 0, as if they were the same. That's a peculiarity.

So what do we do? What mathematicians always do in this kind of situation. Mod out.

What I mean is, these sets aren't equal, but the "metric" tells us that they are, so let's just say that they are and work from there. More precisely, let's define an equivalence relation R by saying that two sets are equivalent if their symmetric difference has measure 0. Now we have a new space: the set of equivalence classes mod R of [measurable]* sets of real numbers. Using our "metric" based on the symmetric difference of two sets now gives an actual metric. So the question is, what are open sets in our induced topology? What are closed sets? Compact? Etc., etc.

Have fun with that for a while, I'm on break.

*I say measurable because it turns out that not all sets are measurable (in the Lebesgue sense). What kinds of sets aren't measurable? I don't know, but I can tell you what kinds of sets are: Borel sets. These are the kinds of sets that you get by performing countably many set operations (unions, intersections, complements) on intervals. So, you are going to have to work hard to find a set that isn't measurable, but they are out there. In fact, because there are non-measurable sets in Euclidean 3-space, you can take apart the surface of a sphere and rotate the parts around without stretching them or anything and reassemble them into two spheres of the same size as the original sphere. I know this makes no sense, but it's called the Banach-Tarski paradox and it's one of the coolest results out there.

Friday, February 12, 2010

Converge Slow, Homie

Since a string of digits recently asked me to mention something about math on here, I'll bring up a problem I am working on that should be easier than it is.

Hopefully, we are all familiar with convergence in the numerical sense, but if not, I'll try to hand-wave at it so that even an eighth grader can understand it. I've been told a good teacher can explain anything so that an eighth grader can understand it. It seems like an arbitrary line to me, but maybe a good enough one. Perhaps that is when people start displaying abstract thinking ability.

So, we can start with a sequence. A sequence is just a special kind of function, and for our purposes, we'll stick to sequences of real numbers. As to what a real number is, it's just about any kind of number you can think of that doesn't involve i somewhere. So whole numbers, 0, fractions, even irrational stuff, like 2^(1/2) or pi.

That said, a sequence is just a function defined on the natural numbers, so something like

1, 2, 4, 8, 16, ... you can see how this sequence "goes to infinity," in that it just keeps getting bigger (I am purposefully being vague about this concept). On the other hand, the sequence s(n) = 1/n, that is

1, 1/2, 1/3, 1/4, ... doesn't keep getting bigger; it keeps getting smaller. However, it doesn't "go to negative infinity." In fact, it demonstrates the central idea of calculus, which is convergence. In particular, it is said to converge to 0, or we say that the limit of s(n) as n approaches infinity is 0. What do I mean by that? I mean that we can think of this sequence as approximating 0, as if I didn't know what 0 was, but I was guessing at it, and each time I guessed, my guess got closer. The sequence is said to converge to a number if it approximates that number to within any error. More formally,

A sequence s(n) is said to converge to a number L if and only if for all E > 0, there exists an index N such that if n > N, then |s(n) - L| < E.

If you think about it, it just says that the sequence gets as close to L as we would like and stays at least that close; that we can approximate L arbitrarily well by going far enough out in s(n).

A proof of convergence usually goes like this: Given E > 0 (we actually use epsilon, usually)

|s(n) - L |

(bunch of algebra with inequalities)

< E for n such and such

That is, we usually fix E and find an expression for N in terms of E that suffices. In the simple case above, you just take N = 1/E (or the next whole number up, if you insist that N be an index) and you're good.
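As a sanity check rather than a proof, here's that recipe for s(n) = 1/n in a few lines of Python (find_N is just my name for the "expression for N in terms of E"):

import math

def find_N(E):
    # for s(n) = 1/n converging to L = 0: any N with N >= 1/E works, since
    # n > N >= 1/E forces |1/n - 0| = 1/n < E
    return math.ceil(1 / E)

for E in (0.1, 0.01, 0.001):
    N = find_N(E)
    ok = all(abs(1 / n - 0) < E for n in range(N + 1, N + 101))
    print(E, N, ok)   # spot-checks a hundred indices past N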

Sometimes it isn't so easy, and that is what I'm dealing with at the moment. The sequence I'm looking at is

S(n) = 1, 1/2, (1/2)(3/4), (1/2)(3/4)(5/6), ...

And so on. The denominators are products of the even numbers and the numerators are products of the odd numbers, so each numerator is always smaller than its denominator. Each factor is less than 1, so each term is smaller than the last term, but that's not enough to show that it converges to 0. One idea might be a comparison test.

That is, if I can show that for any positive integer k, there's a positive integer N such that S(N) < 1/k, then, since the terms are decreasing, everything past the Nth term stays below 1/k as well, which is exactly what convergence to 0 asks for. [It is easy enough to show that a sequence of positive numbers cannot converge to a negative number, so the new sequence is "squeezed" between 0 and these eventually-tiny bounds.]

Just working out some terms of the sequence explicitly, I've found that the limit must be less than .15, and I'm convinced that it is actually 0; that I can somehow show it eventually drops below 1/k for every k if we look far enough along the sequence. The problem is that this convergence is very slow. You'll note that each subsequent factor is bigger than the last; in fact, the factors converge to 1. However, they are still less than one, so they make each term decrease, just by less and less. It is rather annoying and is making it hard for me to find the right expression or technique.
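Just to put numbers to the slowness, here's a quick Python loop for the partial products (purely exploratory; it doesn't prove anything):

def S(n):
    # the nth partial product (1/2)(3/4)(5/6)...((2n-1)/(2n))
    prod = 1.0
    for k in range(1, n + 1):
        prod *= (2 * k - 1) / (2 * k)
    return prod

for n in (10, 100, 10000, 1000000):
    print(n, S(n))
# prints roughly 0.176, 0.0563, 0.00564, 0.000564:
# it takes about 100 times as many factors to knock off each extra factor of 10

The values look suspiciously like a constant divided by the square root of n, which would explain why grinding out terms by hand is such a slog.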

Anyway, it is just part of a somewhat bigger problem related to a theorem of Tauber.

Thursday, February 11, 2010

Snow Day 2: Bad Sequel Joke

I always see things with the number 2 appended to them, and then ":Electric Boogaloo" as a joke. I can't think of any specific times, but it seems like this is a hip joke to make, and every time it comes up I know it is going to happen and that annoys me. I suppose it is awesome to make fun of terrible movies from the 80's, but aren't we past that by this point? Can't we move on to making fun of terrible movies from the 90's?

Regardless, it is yet another snow day here in the city of brotherly love, which is hilarious to me because who cancels college classes, especially two days in a row? But, whatever, it has extended my weekend to Wednesday through Monday, I think. I have to check whether Monday is a holiday or not, but I never have specific work on Fridays and that pleases me in the greatest. I think I can manage a 2-day workweek. Alright, peace out!

Wednesday, February 10, 2010

Snow Day

Philly got hit pretty hard with a snowstorm, so today's classes were preemptively canceled, which is nice because it gives me more time to type up my homework and to generally laze about. As I'm not really doing anything, I don't really have that much to add at the moment. Just this:

The wind, it was howlin', and the snow was outrageous
We chopped through the night, and we chopped through the dawn
When he died, I was hopin' that it wasn't contagious
But I made up my mind and I had to go on

Know what that's from?