Aufgaben:Problem 10

From Ferienserie MMP2

Note

I copied Aaron's TeX file into the wiki and tried to fix whatever broke in the transfer. If you find typos or wrongly formatted stuff, please take those 5 secs to correct it. Thanks.

Since there is a lot to do in this exercise, and Aaron's solution looks great, I did not copy Craven's solution.

Links

Here is a really messy solution to this problem.

If you would like to edit the TeX file, you can find it here.

Media:9+10.pdf <--- Craven's Solution

Solution

This is the solution for exercise 10 of the Ferienserie. I didn't use any references besides the lecture notes, because the problem is very calculation-heavy.

Before we begin with the proof, we should recall what all the objects used in the exercise actually are. You will get lost in the calculation if you forget what everything means, so I'll repeat it here and you don't have to look it up. I hate solutions that try to be as brief as possible, which ultimately generates a lot of work just to understand what's going on.

\( U \) is a domain, that is, a connected open subset of \(\mathbb{R}^n\). \(C^\infty(U)\) is the set of all smooth, real-valued functions on \(U\). A smooth vector field on \(U\) is a map from \(C^\infty(U)\) to itself that is linear and satisfies the Leibniz rule. \(\mathfrak{X}\) is the space of all smooth vector fields on \(U\). Finally, a p-form is a \(C^\infty(U)\)-multilinear, alternating map from \(\mathfrak{X}^p\) to \(C^\infty(U)\).
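The problem statement itself isn't copied into the wiki, so before diving in, here is the formula for \(d\omega\) that all the calculations in b) and c) manipulate. I'm inferring it from the signs used below; it should be the standard coordinate-free formula: for a p-form \(\omega\) and vector fields \(X_1,\dots,X_{p+1}\),

\[d\omega(X_1,\dots,X_{p+1}) = \sum \limits_{i=1}^{p+1} (-1)^{i-1} X_i\left(\omega(X_1,\dots,\widehat{X_i},\dots,X_{p+1})\right) + \sum \limits_{1 \le i < j \le p+1} (-1)^{i+j}\, \omega([X_i,X_j],X_1,\dots,\widehat{X_i},\dots,\widehat{X_j},\dots,X_{p+1})\]

where a hat means that argument is omitted, and \([X,Y]\) is the commutator of vector fields, \([X,Y](f) = X(Y(f)) - Y(X(f))\).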

Still alright? Let's start then!

a

Here, we should probably use a result from the lecture notes (General Notes, page 77). There, we have the proposition, with proof, that for any vector field \(V\) and any \(f \in C^\infty(U)\),


\[V(f)(z) = \sum \limits_{j=1}^n V(x^j)(z) \cdot \frac{\partial}{\partial x^j }f(z)\]


Now let us evaluate our multilinear map at an arbitrary tuple of vector fields \((V_1,\dots,V_p) \in \mathfrak{X}^p\). We'll then decompose every vector field in the argument into a sum, as seen above. I will omit both arguments from now on, the \(z\) and the \(f\).


\[\omega(V_1,\dots,V_p) = \omega\left(\sum \limits_{j_1=1}^n V_1(x^{j_1}) \frac{\partial}{\partial x^{j_1}}, \dots, \sum \limits_{j_p=1}^n V_p(x^{j_p}) \frac{\partial}{\partial x^{j_p}}\right)\]


\[ = \sum \limits_{j_1, \dots , j_p = 1}^n \omega\left(V_1(x^{j_1}) \frac{\partial}{\partial x^{j_1}} , \dots , V_p(x^{j_p}) \frac{\partial}{\partial x^{j_p}}\right)\]


Note that \(V_k(x^{j_k}) \frac{\partial}{\partial x^{j_k}}\) is a vector field. Here the \(V_k(x^{j_k})\) are smooth functions, as the vector field \(V_k\) has already been evaluated at the smooth function \(x^{j_k}\); the partial differential operators are the actual vector fields. We can pull the smooth functions out, by the \(C^\infty(U)\)-multilinearity of the p-form. We then get the expression

\[ \sum \limits_{j_1, \dots , j_p = 1}^n V_1(x^{j_1}) \dots V_p(x^{j_p})\, \omega \left(\frac{\partial}{\partial x^{j_1}} , \dots ,\frac{\partial}{\partial x^{j_p}}\right)\]


Since \(p > n\), by the pigeonhole principle (if there are \(n\) pigeonholes and \(p>n\) pigeons, then at least one pigeonhole contains more than one pigeon) at least two of the indices \(j_1,\dots,j_p\) coincide, so at least two partial differential operators in the argument of \(\omega\) are the same. By the alternating property of \(\omega\), each summand is then zero. So we have a sum of zeros, which is zero again. We're done!
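As a quick sanity check of the argument (my own toy example, not from the exercise): take \(n = 2\) and \(p = 3\). Then

\[\omega(V_1,V_2,V_3) = \sum \limits_{j_1,j_2,j_3=1}^2 V_1(x^{j_1})\, V_2(x^{j_2})\, V_3(x^{j_3})\, \omega\left(\frac{\partial}{\partial x^{j_1}},\frac{\partial}{\partial x^{j_2}},\frac{\partial}{\partial x^{j_3}}\right)\]

and every one of the eight index triples \((j_1,j_2,j_3) \in \{1,2\}^3\) repeats an index, so every \(\omega\)-factor vanishes; for instance \(\omega(\frac{\partial}{\partial x^1},\frac{\partial}{\partial x^2},\frac{\partial}{\partial x^1}) = -\omega(\frac{\partial}{\partial x^1},\frac{\partial}{\partial x^2},\frac{\partial}{\partial x^1}) = 0\) by swapping the first and third slots.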

b

Now that we know what we're dealing with, we can start doing some painful calculations. And the next two parts are exactly that: painful calculations.

Let's start with the additivity: Let \(X_k = X_a + X_b\).

Proposition: \(d\omega(\dots,X_a+X_b,\dots) = d\omega(\dots, X_a, \dots) + d\omega(\dots, X_b, \dots)\)

The proof follows directly from the additivity of the vector fields and the commutator, and from the multilinearity of \(\omega\). Since everything is additive, we just have to pull the sum out of everything. First sum: assume that the \(X_k\) is in the tail (i.e. inside the \(\omega\)).


\[ X_i(\omega(\dots,X_a+X_b,\dots))=X_i(\omega(\dots,X_a,\dots)+\omega(\dots,X_b,\dots)) \] \[ = X_i(\omega(\dots,X_a,\dots)) + X_i(\omega(\dots,X_b,\dots))\]


First sum, assume the \(X_k\) is in front, that is, \(i=k\):


\[(X_a+X_b)(\omega(\dots)) = X_a(\omega(\dots))+X_b(\omega(\dots))\]


Second sum, assume the \(X_k\) is in the tail, that is, not in the commutator. Then


\[\omega(\dots,X_a+X_b,\dots) = \omega(\dots,X_a,\dots)+\omega(\dots,X_b,\dots)\]


Now assume that it is in the commutator. You might have to (trivially) show the commutator's additivity:


\[[A,B+C](f) = A((B+C)(f)) - (B+C)(A(f)) = A(B(f)) + A(C(f)) - B(A(f)) - C(A(f))\] \[ = [A,B](f) + [A,C](f)\]


This follows from the additivity of vector fields. The other side follows from the antisymmetry of the commutator. We then just slap the argument in the commutator:


\[ \omega([X_i,X_a + X_b],\dots) = \omega([X_i,X_a]+[X_i,X_b],\dots) = \omega([X_i,X_a], \dots) + \omega([X_i,X_b], \dots)\]

And analogously for the other side of the commutator. So we have the additivity now.

Next up is the linearity in \(C^\infty(U)\). Consider \(X_k = g X_a\), where \(X_a\) is a vector field and \(g\) is a smooth function.

Warning: you cannot simply pull a smooth function out of a vector field's argument. The vector field satisfies the Leibniz rule, so if \(V\) is a vector field and \(f, g\) are smooth functions, then in general \(V(gf) \neq gV(f)\); instead \(V(gf) = V(g)f + gV(f)\). This will give us some trouble in our proof. We will get some junk terms from both sums, which will luckily kill each other off.
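To see the junk term appear in the simplest possible setting (again my own toy example): take \(U = \mathbb{R}\), \(V = \frac{d}{dx}\) and \(f(x) = g(x) = x\). Then

\[V(gf) = \frac{d}{dx}x^2 = 2x, \qquad g\,V(f) = x \cdot 1 = x,\]

and the difference is exactly the junk term \(V(g)f = 1 \cdot x = x\).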

The first case is the easiest one: consider the first sum, \(k\)-th term. There, nothing bad happens; the only thing we recover is the desired term, by just inserting the definition:


\[(-1)^{k-1} gX_a(\omega(\dots))\]


So nothing to do here. From now on, it only gets worse. Now consider the \(i \neq k\) terms of the first sum.


\[(-1)^{i-1} X_i(\omega(\dots,gX_a,\dots)) = (-1)^{i-1} X_i(g\,\omega(\dots,X_a,\dots))\]


We're allowed to pull the \(g\) out of the \(\omega\), because \(\omega\) is \(C^\infty(U)\)-multilinear. (Check the definition if you don't know what I'm talking about, or just ask me.) For the \(X_i\), we have to use the Leibniz rule, and we will get a junk term.

\[= (-1)^{i-1} gX_i(\omega(\dots,X_a,\dots)) + (-1)^{i-1} X_i(g)\omega(\dots,X_a,\dots)\]


The left term is the desirable term. The right one is the junk term. It will be killed off by the junk terms of the second sum.

Speaking of which, let's consider the second sum now. Assume first that the \(X_k\) is not in the commutator. Then we have a simple case:


\[(-1)^{i+j} \omega([X_i,X_j],\dots,gX_a,\dots) = (-1)^{i+j} g\omega([X_i,X_j],\dots,X_a,\dots)\]


So no junk flying about here. Before we consider the second case, we want to check what happens if we stuff \(gX_a\) in the commutator.

\[[X_i,gX_a](f) = X_i(g X_a(f)) - g X_a(X_i(f)) = X_i(g)X_a(f) + gX_i(X_a(f)) - gX_a(X_i(f))\] \[ = \left(g[X_i,X_a] + X_i(g)X_a\right)(f)\]


Also, by the antisymmetry, we get


\[[gX_a,X_i] = g[X_a,X_i] - X_i(g)X_a\]


Now let us actually stuff it in the commutator. Consider all terms where \(i\) or \(j\) in the sum equals \(k\). There is a part of this sum where the other index in the commutator is smaller than \(k\), and a part where it is bigger than \(k\). We must consider both parts. Let's start with the part where the other index, call it \(l\), satisfies \(l < k\). Then we have terms of the form


\[(-1)^{l+k}\omega([X_l,gX_a],\dots) = (-1)^{l+k}\omega(g[X_l,X_a] + X_l(g)X_a,\dots)\]

Now split the whole thing up and get:


\[(-1)^{l+k}g\,\omega([X_l,X_a],\dots) + (-1)^{l+k}X_l(g)\, \omega(X_a,\dots)\]


An important note here is that we can pull \(X_l(g)\) out of \(\omega\), because the vector field has already been evaluated and is thus no longer a vector field, but a smooth function. The first term is the desired term for the multilinearity. The second term is the junk term, but it already looks similar to the junk term of the first sum. We shall now "push" the \(X_a\) over to its proper place, that is, the argument slot between \(X_{k-1}\) and \(X_{k+1}\), by using permutations. Normally, this would need \(k-1\) pushes, but because there is an item missing (the \(X_l\), since \(l < k\)), it is only \(k-2\) pushes. By doing those permutations, we get an extra factor of \((-1)^{k-2}\). This factor then kills off the other power of \(-1\) in the junk term, so we get:


\[(-1)^{l+k} (-1)^{k-2}\, X_l(g)\, \omega(X_1,\dots,X_{l-1},X_{l+1},\dots,X_{k-1},X_a,X_{k+1},\dots)\] \[= (-1)^l\, X_l(g)\, \omega(X_1,\dots,X_{l-1},X_{l+1},\dots,X_{k-1},X_a,X_{k+1},\dots)\]

This is exactly the form of the junk term of the first sum (with \(i = l\)), but with power \((-1)^{l}\) instead of \((-1)^{l-1}\) in front. So the two terms have opposite signs and kill each other off. Amazing!
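If you want to check the sign bookkeeping in a minimal case (my own example): let \(\omega\) be a one-form, so \(d\omega\) takes two arguments, and set \(X_2 = gX_a\), i.e. \(k=2\) and \(l=1\). The first sum produces the junk \(+X_1(g)\,\omega(X_a)\) from expanding \(X_1(g\,\omega(X_a))\), while the second sum gives

\[-\omega([X_1,gX_a]) = -g\,\omega([X_1,X_a]) - X_1(g)\,\omega(X_a),\]

whose junk \(-X_1(g)\,\omega(X_a)\) cancels it, exactly as claimed.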

With the other terms, we get exactly the same thing, but with some extra sign shenanigans. Let me do it for you:

\[(-1)^{k+j} \omega([gX_a,X_j],\dots, X_{k-1},X_{k+1},\dots,X_{j-1},X_{j+1},\dots)\] \[= (-1)^{k+j} \omega(g[X_a,X_j] - X_j(g)X_a,\dots, X_{k-1},X_{k+1},\dots,X_{j-1},X_{j+1},\dots)\] \[= (-1)^{k+j} g\,\omega([X_a,X_j],\dots, X_{k-1},X_{k+1},\dots,X_{j-1},X_{j+1},\dots) \] \[- (-1)^{k+j} X_j(g)\,\omega(X_a,\dots, X_{k-1},X_{k+1},\dots,X_{j-1},X_{j+1},\dots)\]

The top term is the desirable one. With the one on the bottom, we shift the \(X_a\) to the right again, until it is in the correct position. By the permutations, we get an extra factor of \((-1)^{k-1}\), as this time the missing \(X_j\) is to the right of position \(k\). (Try it if you're confused.) The term then becomes

\[- (-1)^{k+j} (-1)^{k-1} X_j(g)\, \omega(\dots,X_{k-1},X_{a},X_{k+1},\dots,X_{j-1},X_{j+1},\dots)\]

We have the correct power of \(-1\) again here: \(-(-1)^{k+j}(-1)^{k-1} = (-1)^{j}\), so this term kills the other junk term from the first sum (which carries \((-1)^{j-1}\)), and we have proven the linearity in \(C^\infty(U)\).
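The mirrored minimal check (my own example again): with a one-form \(\omega\) and \(X_1 = gX_a\), i.e. \(k=1\) and \(j=2\), the first sum produces the junk \(-X_2(g)\,\omega(X_a)\) from expanding \(-X_2(g\,\omega(X_a))\), while the second sum gives

\[-\omega([gX_a,X_2]) = -g\,\omega([X_a,X_2]) + X_2(g)\,\omega(X_a),\]

whose junk \(+X_2(g)\,\omega(X_a)\) cancels it.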

But we're still not done yet! We also have to prove the total antisymmetry. To do this, let us assume we flip two arbitrary arguments \(X_k\) and \(X_l\).

We can consider both sums separately again here. Let us consider the first sum. The first case is easy: assume that both \(X_k\) and \(X_l\) are in the argument of \(\omega\). Then the antisymmetry follows from the antisymmetry of \(\omega\).


\[X_i(\omega(\dots,X_{k-1},X_{l},X_{k+1},\dots,X_{l-1},X_{k},X_{l+1},\dots))\] \[ = X_i( - \omega(\dots,X_{k-1},X_{k},X_{k+1},\dots,X_{l-1},X_{l},X_{l+1},\dots)) = -X_i(\omega(\dots))\]

In the second case, we flip the argument in front with one in the tail. Here everything gets a bit messier: this time, we also have to consider the power of minus one in front.

\[(-1)^{k-1} X_l(\omega(X_1,\dots,X_{k-1},X_{k+1},\dots,X_{l-1},X_{k},X_{l+1},\dots))\]

We'll push the \(X_{k}\) over into the correct position again. We get an additional factor of \((-1)^{l-k-1}\). This also works if \(l\) is to the left of \(k\), as pushing left or right does not really make a difference. Our term then becomes:

\[(-1)^{k-1} (-1)^{l-k-1} X_l(\omega(X_1,\dots,X_{k-1},X_{k},X_{k+1},\dots,X_{l-1},X_{l+1},\dots))\]

This is exactly \(-1\) times the original term where \(X_l\) was in front: indeed \((-1)^{k-1}(-1)^{l-k-1} = -(-1)^{l-1}\). So the two terms flip. This was the antisymmetry of the first sum.

Now let us move to the second sum. Here we have three cases to look at: both arguments in the tail, both arguments in the commutator, and one argument in the commutator and one in the tail. The first two cases follow trivially from the antisymmetry of the commutator and the antisymmetry of \(\omega\). In the third case, we push arguments around again. You should be used to it by now:

\[ (-1)^{i+k} \omega([X_l,X_i],\dots,X_{k-1},X_{k+1},\dots,X_{l-1},X_{k},X_{l+1},\dots)\]

It is actually not quite that easy: depending on whether \(i\) lies between \(k\) and \(l\) or not, we get a power of \((-1)^{k-l-1}\) or \((-1)^{k-l}\). But this problem is fixed by the forced ordering of the commutator. Assume wlog \(k<l\). If \(i\) is not between \(k\) and \(l\), all is well already:

\[ (-1)^{i+l+1} \omega([X_l,X_i],\dots,X_{k-1},X_k,X_{k+1},\dots,X_{l-1},X_{l+1},\dots)\]

But if \(i\) is between \(k\) and \(l\), it follows that \(k<i<l\), and thus we have to flip the commutator to get the correct ordering demanded by the sum.

\[ (-1)^{i+l} \omega([X_l,X_i],\dots,X_{k-1},X_k,X_{k+1},\dots,X_{l-1},X_{l+1},\dots)\] \[= (-1)^{i+l+1} \omega([X_i,X_l],\dots,X_{k-1},X_k,X_{k+1},\dots,X_{l-1},X_{l+1},\dots)\]

So basically the \(k\) and \(l\) terms get flipped again here. In the end, we have the antisymmetry.
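For a one-form, the whole antisymmetry argument collapses into one line (a sanity check of my own, not part of the exercise):

\[d\omega(X_2,X_1) = X_2(\omega(X_1)) - X_1(\omega(X_2)) - \omega([X_2,X_1]) = -\left(X_1(\omega(X_2)) - X_2(\omega(X_1)) - \omega([X_1,X_2])\right) = -d\omega(X_1,X_2),\]

using \([X_2,X_1] = -[X_1,X_2]\).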

This should be the idea of the proof for b). I don't really know how to write it in a compact form yet, but hopefully everybody can understand it like this. I may set it in a more compact form later. If you have questions, ask me: aamm@student.ethz.ch

Bravo!

c

More pain ahead! Start with \(p=0\).

By definition, a 0-form is a smooth function, and \(df(X) = X(f)\). So that's rather easy for now. By definition,

\[ddf(X_1,X_2) = X_1(df(X_2)) - X_2(df(X_1)) - df([X_1,X_2])\] \[= X_1(X_2(f)) - X_2(X_1(f)) - [X_1,X_2](f) = 0\]
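If you want to see this once with concrete fields (my own example): on \(\mathbb{R}^2\), take \(X_1 = \frac{\partial}{\partial x}\) and \(X_2 = x\frac{\partial}{\partial y}\), so that \([X_1,X_2] = \frac{\partial}{\partial y}\). Then

\[ddf(X_1,X_2) = \left(\frac{\partial f}{\partial y} + x\frac{\partial^2 f}{\partial x\, \partial y}\right) - x\frac{\partial^2 f}{\partial y\, \partial x} - \frac{\partial f}{\partial y} = 0\]

by the symmetry of second derivatives.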

So we survived that case. Now let's set \(p=1\). We have some one-form \(\omega\).

\[d\omega(X_1,X_2) = X_1(\omega(X_2)) - X_2(\omega(X_1)) - \omega([X_1,X_2])\] \[= X_1(\omega(X_2)) - X_2(\omega(X_1)) - \omega(X_1(X_2)) + \omega(X_2(X_1))\]

Here \(X_1(X_2)\) is shorthand for the composition \(X_1 \circ X_2\). Splitting \(\omega([X_1,X_2])\) like this is purely formal bookkeeping (a composition on its own is not a vector field), but it is harmless, since everything recombines into honest commutators in the end.


\[dd\omega(V_1,V_2,V_3) = V_1(d\omega(V_2,V_3)) - V_2(d\omega(V_1,V_3)) + V_3(d\omega(V_1,V_2))\] \[ - d\omega([V_1,V_2],V_3) + d\omega([V_1,V_3],V_2) - d\omega([V_2,V_3],V_1)\]

Now expand the whole thing. Each line will be a term:

\[V_1(V_2(\omega(V_3))) - V_1(V_3(\omega(V_2))) - V_1(\omega(V_2(V_3))) + V_1(\omega(V_3(V_2)))\] \[- V_2(V_1(\omega(V_3))) + V_2(V_3(\omega(V_1))) + V_2(\omega(V_1(V_3))) - V_2(\omega(V_3(V_1)))\] \[V_3(V_1(\omega(V_2))) - V_3(V_2(\omega(V_1))) - V_3(\omega(V_1(V_2))) + V_3(\omega(V_2(V_1)))\] \[- d\omega(V_1(V_2),V_3) + d\omega(V_2(V_1),V_3)\] \[+ d\omega(V_1(V_3),V_2) - d\omega(V_3(V_1),V_2)\] \[- d\omega(V_2(V_3),V_1) + d\omega(V_3(V_2),V_1)\]

Now I'll expand the bottom three lines, that is, the six \(d\omega\) terms. Each term becomes one line again:


\[-V_1(V_2(\omega(V_3))) + V_3(\omega(V_1(V_2))) + \omega(V_1(V_2(V_3))) - \omega(V_3(V_1(V_2)))\] \[V_2(V_1(\omega(V_3))) - V_3(\omega(V_2(V_1))) - \omega(V_2(V_1(V_3))) + \omega(V_3(V_2(V_1)))\] \[V_1(V_3(\omega(V_2))) - V_2(\omega(V_1(V_3))) - \omega(V_1(V_3(V_2))) + \omega(V_2(V_1(V_3)))\] \[-V_3(V_1(\omega(V_2))) + V_2(\omega(V_3(V_1))) + \omega(V_3(V_1(V_2))) - \omega(V_2(V_3(V_1)))\] \[-V_2(V_3(\omega(V_1))) + V_1(\omega(V_2(V_3))) + \omega(V_2(V_3(V_1))) - \omega(V_1(V_2(V_3)))\] \[V_3(V_2(\omega(V_1))) - V_1(\omega(V_3(V_2))) - \omega(V_3(V_2(V_1))) + \omega(V_1(V_3(V_2)))\]


In total, we have:

\[V_1(V_2(\omega(V_3))) - V_1(V_3(\omega(V_2))) - V_1(\omega(V_2(V_3))) + V_1(\omega(V_3(V_2)))\] \[- V_2(V_1(\omega(V_3))) + V_2(V_3(\omega(V_1))) + V_2(\omega(V_1(V_3))) - V_2(\omega(V_3(V_1)))\] \[+ V_3(V_1(\omega(V_2))) - V_3(V_2(\omega(V_1))) - V_3(\omega(V_1(V_2))) + V_3(\omega(V_2(V_1)))\] \[-V_1(V_2(\omega(V_3))) + V_3(\omega(V_1(V_2))) + \omega(V_1(V_2(V_3))) - \omega(V_3(V_1(V_2)))\] \[+V_2(V_1(\omega(V_3))) - V_3(\omega(V_2(V_1))) - \omega(V_2(V_1(V_3))) + \omega(V_3(V_2(V_1)))\] \[+V_1(V_3(\omega(V_2))) - V_2(\omega(V_1(V_3))) - \omega(V_1(V_3(V_2))) + \omega(V_2(V_1(V_3)))\] \[-V_3(V_1(\omega(V_2))) + V_2(\omega(V_3(V_1))) + \omega(V_3(V_1(V_2))) - \omega(V_2(V_3(V_1)))\] \[-V_2(V_3(\omega(V_1))) + V_1(\omega(V_2(V_3))) + \omega(V_2(V_3(V_1))) - \omega(V_1(V_2(V_3)))\] \[+V_3(V_2(\omega(V_1))) - V_1(\omega(V_3(V_2))) - \omega(V_3(V_2(V_1))) + \omega(V_1(V_3(V_2)))\]

In case you had the patience to go through all that: this is zero. Congratulations, you did it!
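Why is it zero? Each of the 36 terms above appears exactly twice, with opposite signs. For example,

\[V_1(V_2(\omega(V_3))) \;\text{(line 1)} \quad \text{vs.} \quad -V_1(V_2(\omega(V_3))) \;\text{(line 4)},\] \[V_2(\omega(V_1(V_3))) \;\text{(line 2)} \quad \text{vs.} \quad -V_2(\omega(V_1(V_3))) \;\text{(line 6)},\] \[\omega(V_1(V_2(V_3))) \;\text{(line 4)} \quad \text{vs.} \quad -\omega(V_1(V_2(V_3))) \;\text{(line 8)}.\]

Pairing all 18 couples off the same way gives \(dd\omega = 0\).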