==Note==

I copied Aaron's TeX file into the wiki and tried to adjust the broken stuff. If you find typos or wrongly formatted things, please take those 5 secs to correct them. Thanks.

Since there is a lot to do in this exercise, and Aaron's solution looks great, I did not copy Craven's solution.

==Links==

Here is a really messy solution to this problem.

If you would like to edit the TeX file, you can find it here.

[[Media:9+10.pdf]] <--- Craven's Solution

==Task==

Let \(U \subset \mathbb{R}^n\) be a domain. A \(p\)-form \(\omega\), \(p \geq 0\), is a smooth tensor field of type \((0, p)\) that is totally antisymmetric, i.e. \(\omega\) is a multilinear map \(\omega : \underbrace{\mathfrak{X}(U) \times \dots \times \mathfrak{X}(U)}_{p\ \text{times}} \to C^{\infty}(U)\) such that for all permutations \(\pi \in S_p\) of \(\{1, \dots, p\}\) and all fields \(X_1, \dots, X_p \in \mathfrak{X}(U)\), we have

$$ \omega(X_{\pi(1)}, \dots, X_{\pi(p)}) = \mathrm{sgn}(\pi)\, \omega(X_1, \dots, X_p)$$

By convention, a \(0\)-form is a smooth function on \(U\).

a) Show that, if \(\omega\) is a \(p\)-form with \(p > n\), then \(\omega \equiv 0\).

For \(\omega\) a \(p\)-form, we define \(d\omega\) as

\begin{align}
d\omega(X_1, \dots, X_{p+1}) &= \sum\limits_{i=1}^{p+1}{(-1)^{i-1}X_i(\omega(X_1, \dots, X_{i-1}, X_{i+1}, \dots, X_{p+1}))} \\
&+ \sum\limits_{i,j=1,\, i<j}^{p+1}{(-1)^{i+j}\omega([X_i, X_j], X_1, \dots, X_{i-1}, X_{i+1}, \dots, X_{j-1}, X_{j+1}, \dots, X_{p+1})}
\end{align}

for \(X_1, \dots, X_{p+1} \in \mathfrak{X}(U)\). In particular, \(df(X) = X(f)\) for all \(0\)-forms \(f\) and all vector fields \(X \in \mathfrak{X}(U)\). Here, \([X, Y]\), with \(X, Y \in \mathfrak{X}(U)\), denotes the unique vector field \(Z \in \mathfrak{X}(U)\) such that \(Z(f) = X(Y(f)) - Y(X(f))\) for all smooth functions \(f\).

b) Show that \(d\omega\) is a \((p + 1)\)-form.

c) For \(p = 0, 1\) show that the \((p + 2)\)-form \(d(d\omega)\) vanishes identically.

==Solution==

This is the solution for exercise 10 of the Ferienserie. I didn't use any references besides the lecture notes, because the problem is very calculation-heavy.

Before we begin with the proof, we should recall what all the stuff used in the exercise actually is. You will get lost in the calculation if you forget what everything means. So I'll repeat it here, so you don't have to look it up. I hate solutions which try to be as brief as possible, which ultimately generates a lot of work just to understand what's going on.

\(U\) is a domain. A domain is a connected open subset of \(\mathbb{R}^n\). \(C^\infty(U)\) is the set of all smooth, real-valued functions on \(U\). A smooth vector field on \(U\) is a map from \(C^\infty(U)\) to itself that is linear over addition and satisfies Leibniz's rule for multiplication. \(\mathfrak{X}(U)\) is the space of all smooth vector fields on \(U\). Finally, a \(p\)-form is a totally antisymmetric multilinear map from \(\mathfrak{X}(U)^p\) to \(C^\infty(U)\).
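To have a concrete picture in mind (my own toy example, not from the exercise): on \(U = \mathbb{R}\), the derivative \(V = \frac{d}{dx}\) is a smooth vector field, since it maps smooth functions to smooth functions, is additive, and satisfies the Leibniz rule:

\[V(fg) = (fg)' = f'g + fg' = V(f)g + fV(g)\]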

Still alright? Let's start then!

===a===

Here, we should probably use a result from the lecture notes, on [https://people.math.ethz.ch/~gruppe5/group5/lectures/mmp/fs15/Files/General_notes(updated).pdf General Notes], page 77. There, we have the proposition, with proof, that


\[V(f)(z) = \sum \limits_{j=1}^n V(x^j)(z) \cdot \frac{\partial f}{\partial x^j }(z)\]
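For instance (my own illustration, taking \(n = 2\) with coordinates \(x^1, x^2\)): for \(V = x^1 \frac{\partial}{\partial x^2}\) we have \(V(x^1) = 0\) and \(V(x^2) = x^1\), so the proposition gives back

\[V(f) = 0 \cdot \frac{\partial f}{\partial x^1} + x^1 \cdot \frac{\partial f}{\partial x^2} = x^1 \frac{\partial f}{\partial x^2}\]

as it should.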


Now let us evaluate the multilinear map at an arbitrary tuple \((V_1, \dots, V_p) \in \mathfrak{X}(U)^p\). We'll then decompose every vector field in the argument into the sum above. I will omit both arguments from now on, the \(z\) and the \(f\).


\[\omega(V_1,\dots,V_p) = \omega(\sum \limits_{j_1=1}^n V_1(x^{j_1}) \frac{\partial}{\partial x^{j_1}}, \dots, \sum \limits_{j_p=1}^n V_p(x^{j_p}) \frac{\partial}{\partial x^{j_p}})\]


\[ = \sum \limits_{j_1, \dots , j_p = 1}^n \omega(V_1(x^{j_1}) \frac{\partial}{\partial x^{j_1}} , \dots , V_p(x^{j_p}) \frac{\partial}{\partial x^{j_p}})\]


Note that \(V_k(x^{j_k}) \frac{\partial}{\partial x^{j_k}}\) is a vector field. On the left side, the \(V_k(x^{j_k})\) are smooth functions, as the vector field \(V_k\) has already been evaluated at the smooth function \(x^{j_k}\). The right side, that is, the partial differential operators, are the actual vector fields. We can pull the smooth functions out, by the \(C^\infty\)-multilinearity of the \(p\)-form. We'll then get the expression

\[ \sum \limits_{j_1, \dots , j_p = 1}^n V_1(x^{j_1}) \dots V_p(x^{j_p}) \omega (\frac{\partial}{\partial x^{j_1}} , \dots ,\frac{\partial}{\partial x^{j_p}})\]


By the pigeonhole principle (if there are \(n\) pigeonholes and \(p>n\) pigeons, then at least one pigeonhole contains more than one pigeon), at least two partial differential operators in the argument of \(\omega\) are now the same. By the alternating property of \(\omega\), each such term is then zero. So we have a sum of zeros, which is zero again. We're done!
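To spell out that last step as a formula: if two of the arguments are the same field \(\frac{\partial}{\partial x^j}\), the transposition swapping them leaves the argument list unchanged but flips the sign, so

\[\omega(\dots, \tfrac{\partial}{\partial x^{j}}, \dots, \tfrac{\partial}{\partial x^{j}}, \dots) = -\omega(\dots, \tfrac{\partial}{\partial x^{j}}, \dots, \tfrac{\partial}{\partial x^{j}}, \dots) = 0\]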

===b===

Now that we know what we're dealing with, we can start doing some painful calculations. And the next two exercises are exactly that: painful calculations.

Let's start with the additivity: Let \(X_k = X_a + X_b\).

'''Additivity''' \(d\omega(\dots,X_a+X_b,\dots) = d\omega(\dots, X_a, \dots) + d\omega(\dots, X_b, \dots)\)

The proof follows directly from the additivity of the vector fields and the commutator, and the multilinearity of \(\omega\). Since everything is additive, we just have to pull the sum out of everything. First sum: assume that the \(X_k\) is in the tail (that is, inside the \(\omega\)).


\[ X_i(\omega(\dots,X_a+X_b,\dots)) = X_i(\omega(\dots,X_a,\dots)+\omega(\dots,X_b,\dots)) \]
\[ = X_i(\omega(\dots,X_a,\dots)) + X_i(\omega(\dots,X_b,\dots))\]


First sum, assume the \(X_k\) is in front, that is, \(i=k\):


\[(X_a+X_b)(\omega(\dots)) = X_a(\omega(\dots))+X_b(\omega(\dots))\]


Second sum, assume the \(X_k\) is in the tail, that is, not in the commutator. Then


\[\omega(\dots,X_a+X_b,\dots) = \omega(\dots,X_a,\dots)+\omega(\dots,X_b,\dots)\]


Now assume that it is in the commutator. You might have to (trivially) show the commutator's additivity:


\[[A,B+C](f) = A((B+C)(f)) - (B+C)(A(f))\]
\[ = A(B(f)) + A(C(f)) - B(A(f)) - C(A(f)) = [A,B](f) + [A,C](f)\]


This follows from the additivity of vector fields. The other side follows from the antisymmetry of the commutator. We then just slap the argument into the commutator:


\[ \omega([X_i,X_a + X_b],\dots) = \omega([X_i,X_a]+[X_i,X_b],\dots) = \omega([X_i,X_a], \dots) + \omega([X_i,X_b], \dots)\]

The same works analogously for the other side of the commutator. So we have the additivity now.
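By the way, the antisymmetry of the commutator used above is itself a one-line check from the definition:

\[[A,B](f) = A(B(f)) - B(A(f)) = -\big(B(A(f)) - A(B(f))\big) = -[B,A](f)\]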

'''Homogeneity'''

Next up is the linearity in \(C^\infty(U)\). Consider \(X_k = g X_a\), where \(X_a\) is a vector field and \(g\) is a smooth function.

'''Warning:''' you cannot pull the smooth function out of a vector field. The vector field satisfies the Leibniz rule, so if \(V\) is a vector field, then \(V(gX) \neq gV(X)\); rather, \(V(gX) = V(g)X + gV(X)\). This will give us some trouble in our proof. We will get some junk terms from both sums, which will luckily kill each other off.

The first case is the easier one: Consider the first sum, k-th term. There, nothing bad happens, the only thing we recover is the desired term by just inserting the definition:


\[(-1)^{k-1} gX_a(\omega(\dots))\]


So nothing to do here. From now on, it only gets worse. Now consider the \(i \neq k\) terms of the first sum.


\[(-1)^{i-1} X_i(\omega(\dots,gX_a,\dots)) = (-1)^{i-1} X_i(g\omega(\dots,X_a,\dots))\]


So we're allowed to pull the \(g\) out of the \(\omega\), because \(\omega\) is \(C^\infty\)-multilinear. (Check the definition if you don't know what I'm talking about, or just ask me.) For the \(X_i\), we have to use the Leibniz rule, and we will get a junk term.

\[= (-1)^{i-1} gX_i(\omega(\dots,X_a,\dots)) + (-1)^{i-1} X_i(g)\omega(\dots,X_a,\dots)\]


The left term is the desirable term. The right one is the junk term. It will be killed off by the junk terms of the second sum.

Speaking of which, let's consider the second sum now. Assume first that the \(X_k\) is not in the commutator. Then we have a simple case:


\[(-1)^{i+j} \omega([X_i,X_j],\dots,gX_a,\dots) = (-1)^{i+j} g\omega([X_i,X_j],\dots,X_a,\dots)\]


So no junk flying about here. Before we consider the second case, we wanna check out what happens if we stuff \(gX_a\) in the commutator.

\[[X_i,gX_a](f) = X_i(g X_a(f)) - g X_a(X_i(f)) = X_i(g)X_a(f) + g X_i(X_a(f)) - g X_a(X_i(f))\]
\[ = \big(g[X_i,X_a] + X_i(g)X_a\big)(f)\]


Also, by the antisymmetry, we get


\[[gX_a,X_i] = g[X_a,X_i] - X_i(g)X_a\]
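As a quick sanity check (my own example, with \(U = \mathbb{R}\), \(X_i = \frac{d}{dx}\), \(X_a = \frac{d}{dx}\) and \(g = x\)): since \([\frac{d}{dx}, \frac{d}{dx}] = 0\), the formula predicts \([\frac{d}{dx}, x\frac{d}{dx}] = \frac{d}{dx}\), and indeed

\[\big[\tfrac{d}{dx}, x\tfrac{d}{dx}\big](f) = \tfrac{d}{dx}\big(x f'\big) - x \tfrac{d}{dx}\big(f'\big) = f' + x f'' - x f'' = f'\]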


Now let us actually stuff it into the commutator. Consider all terms where \(i\) or \(j\) in the sum equals \(k\). There is a part of this sum where the other index in the commutator is smaller than \(k\), and a part where it is bigger than \(k\). We must consider both parts. Let's start with the part where the running variable \(l\) satisfies \(l < k\). Then we have terms of the form


\[(-1)^{l+k}\omega([X_l,gX_a],\dots) = (-1)^{l+k}\omega(g[X_l,X_a] + X_l(g)X_a,\dots)\]

Now split the whole thing up and get:


\[(-1)^{l+k}g\omega([X_l,X_a],\dots) + (-1)^{l+k}X_l(g)\,\omega(X_a,\dots)\]


An important note here is that we can pull \(X_l(g)\) out of \(\omega\), because the vector field has already been evaluated and is thus no longer a vector field, but a smooth function. The left term is the desired term for the multilinearity. The right term is the junk term, but it already looks similar to the junk term of the first sum. We shall now push the \(X_a\) over to its proper place, that is, the argument slot between \(X_{k-1}\) and \(X_{k+1}\), by using permutations. Normally, this would need \(k-1\) pushes, but because there is an item missing (the \(X_l\), since \(l < k\)), it is only \(k-2\) pushes. By doing those permutations, we get an extra factor of \((-1)^{k-2}\). This factor then kills off the other power of \(-1\) in the junk term, so we get:


\[(-1)^{l+k} (-1)^{k-2} \omega(X_1,\dots,X_{l-1},X_{l+1},\dots,X_{k-1},X_a,X_{k+1},\dots)\]
\[= (-1)^l \omega(X_1,\dots,X_{l-1},X_{l+1},\dots,X_{k-1},X_a,X_{k+1},\dots)\]

This is exactly the form of the junk term of the first sum, but with power \((-1)^{l}\) instead of \((-1)^{l-1}\) in front. So the two terms have different signs and kill each other off. Amazing!

With the other terms, we get exactly the same thing, but with some extra sign shenanigans. Let me do it for you:

\[(-1)^{k+j} \omega([gX_a,X_j],\dots, X_{k-1},X_{k+1},\dots,X_{j-1},X_{j+1},\dots)\]
\[= (-1)^{k+j} \omega(g[X_a,X_j] - X_j(g)X_a,\dots, X_{k-1},X_{k+1},\dots,X_{j-1},X_{j+1},\dots)\]
\[= (-1)^{k+j} g\omega([X_a,X_j],\dots, X_{k-1},X_{k+1},\dots,X_{j-1},X_{j+1},\dots) \]
\[- (-1)^{k+j} X_j(g)\omega(X_a,\dots, X_{k-1},X_{k+1},\dots,X_{j-1},X_{j+1},\dots)\]

The top term is the desirable one. With the one on the bottom, we shift the \(X_a\) to the right again, until it is in the correct position. By the permutations, we get an extra factor of \((-1)^{k-1}\), as this time the \(j\) is to the right of the \(k\). (Try it if you're confused.) The term then becomes

\[- (-1)^{k+j} (-1)^{k+1} X_j(g) \omega(\dots,X_{k-1},X_{a},X_{k+1},\dots,X_{j-1},X_{j+1},\dots)\]

We have the correct amount of powers of \(-1\) again here, \((-1)^{j}\), so this term kills the other junk term from the first sum, and we have proven the linearity in \(C^\infty(U)\).
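To summarize the bookkeeping (just collecting what we computed above): for every \(i \neq k\), the junk term of the first sum and the pushed-around junk term of the second sum pair up as

\[(-1)^{i-1} X_i(g)\,\omega(\dots,X_a,\dots) + (-1)^{i} X_i(g)\,\omega(\dots,X_a,\dots) = 0\]

so altogether \(d\omega(\dots, gX_a, \dots) = g\, d\omega(\dots, X_a, \dots)\).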

'''Antisymmetry'''

It is clear that you can write any permutation \(\pi\) as a composition of transpositions \((k,l)\), but you can also write any transposition \((k,l)\) as a composition of neighbour transpositions \((k,k+1)\). Namely, for \(k<l\) in an ordered list:

$$ (k,l) = (k,k+1)\circ(k+1,k+2)\circ\dots\circ(l-2,l-1)\circ(l-1,l)\circ(l-2,l-1)\circ\dots\circ(k+1,k+2)\circ(k,k+1)$$

(You can check that those are \(2(l-k)-1\) neighbour transpositions, giving an overall \(\mathrm{sgn}\) of \(-1\).)
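A tiny example (my own, for \(k=1\), \(l=3\)): \((1,3) = (1,2)\circ(2,3)\circ(1,2)\), which is \(2(3-1)-1 = 3\) neighbour transpositions. Applying the right-hand side from right to left:

$$1 \mapsto 2 \mapsto 3 \mapsto 3, \qquad 2 \mapsto 1 \mapsto 1 \mapsto 2, \qquad 3 \mapsto 3 \mapsto 2 \mapsto 1$$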

We will thus be showing antisymmetry for an arbitrary neighbour transposition \((k,k+1)\).

Start with the first sum:

$$ \sum_{i=1}^{p+1} (-1)^{i-1} X_i(\omega(X_1,\dots,X_{i-1},X_{i+1},\dots,X_{p+1}))$$

Now apply the transposition \((k,k+1)\):

$$ \Rightarrow \sum_{i=1}^{k-1} (-1)^{i-1} X_i(\omega(X_1,\dots,X_{i-1},X_{i+1},\dots, X_{k+1}, X_k,\dots))$$
$$ + (-1)^{k-1} X_{k+1}(\omega(X_1,\dots,X_{k-1},X_k, X_{k+2},\dots)) $$
$$ + (-1)^{k} X_{k}(\omega(X_1,\dots,X_{k-1},X_{k+1}, X_{k+2},\dots)) $$
$$ + \sum_{i=k+2}^{p+1} (-1)^{i-1} X_i(\omega(X_1,\dots, X_{k+1}, X_k,\dots, X_{i-1},X_{i+1},\dots))$$

In the two sums, we use the antisymmetry of \(\omega\) to permute \(X_k, X_{k+1}\) back to their rightful places, and because \(X_i\) is linear, we can pull the minus out of the sums. In the two special cases we don't need to permute; there we get a minus from \((-1)^{k-1} = -(-1)^{(k+1)-1}\), and likewise for the other case. Now everything is of the original form, and we get the old sum back with an additional minus from every term. Thus we have proven antisymmetry for the first sum. Now the second sum:

$$\sum_{j=2}^{p+1} \sum_{i=1}^{j-1} (-1)^{i+j} \omega([X_i,X_j],X_1,\dots ,X_{i-1},X_{i+1},\dots ,X_{j-1}, X_{j+1},\dots)$$

We again apply \((k,k+1)\). In the following, we will think of the "subordinate" \(i\)-sums as elements of the "superordinate" \(j\)-sum.

For \(j<k\), we permute \(X_k, X_{k+1}\) back to get the original form and pick up a minus. Now the \(k\)-th element of the \(j\)-sum, \(j=k\): we get an \(i\)-sum over \(i<k\):

$$ (-1)^{i+k} \omega([X_i,X_{k+1}],\dots, X_{i-1},X_{i+1},\dots, X_k,X_{k+2},\dots)$$
$$ = - (-1)^{i+(k+1)} \omega([X_i,X_{k+1}],\dots, X_{i-1},X_{i+1},\dots, X_k,X_{k+2},\dots)$$

Now the \((k+1)\)-th \(j\)-sum element, \(j=k+1\): we get an \(i\)-sum over \(i<k\), plus an additional term with \(j=k+1\), \(i=k\), which we treat below. For the first part:

$$ (-1)^{i+(k+1)} \omega([X_i,X_k],\dots, X_{i-1},X_{i+1},\dots, X_{k-1},X_{k+1},\dots)$$
$$ = - (-1)^{i+k} \omega([X_i,X_k],\dots, X_{i-1},X_{i+1},\dots, X_{k-1},X_{k+1},\dots)$$

You can already see that by interchanging the \(k\)-th and the \((k+1)\)-th \(j\)-sum elements, you get the original form with a minus in front.

But we forgot one element of the \(i\)-sum, the one with \(j=k+1\) and \(i=k\):

$$ (-1)^{k+(k+1)} \omega([X_{k+1},X_k],\dots, X_{k-1},X_{k+2},\dots)$$

By the antisymmetry of the commutator, we are already done for this case:

$$ =- (-1)^{k+(k+1)} \omega([X_k,X_{k+1}],\dots, X_{k-1},X_{k+2},\dots)$$

Lastly, we have the \(j\)-sum elements where \(j>k+1\). For \(i\neq k, k+1\) we do the usual permuting and pick up a minus. For \(i = k\):

$$ (-1)^{k+j} \omega([X_{k+1},X_j],\dots, X_k,X_{k+2},\dots, X_{j-1},X_{j+1},\dots)$$
$$ = - (-1)^{(k+1)+j} \omega([X_{k+1},X_j],\dots, X_k,X_{k+2},\dots, X_{j-1},X_{j+1},\dots)$$

and for \(i = k+1\):

$$ (-1)^{(k+1)+j} \omega([X_k,X_j],\dots, X_{k-1},X_{k+1},\dots, X_{j-1},X_{j+1},\dots)$$
$$ = - (-1)^{k+j} \omega([X_k,X_j],\dots, X_{k-1},X_{k+1},\dots, X_{j-1},X_{j+1},\dots)$$

We interchange these two cases in all the \(j\)-sum elements \((j>k+1)\), getting the original form and a minus. Altogether we get back exactly \(-d\omega\), which concludes the proof of antisymmetry. Bravo!
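As a quick sanity check (for \(p=1\), where everything is small enough to see directly): the definition gives

$$d\omega(X_2,X_1) = X_2(\omega(X_1)) - X_1(\omega(X_2)) - \omega([X_2,X_1]) = -d\omega(X_1,X_2)$$

using \([X_2,X_1] = -[X_1,X_2]\), which is exactly the antisymmetry we just proved.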

===c===

More pain ahead! Start with \(p=0\).

By definition, a \(0\)-form is a smooth function, and \(df(X) = X(f)\). So that's rather easy for now. By definition,

\[ddf(X_1,X_2) = X_1(df(X_2)) - X_2(df(X_1)) - df([X_1,X_2])\]
\[= X_1(X_2(f)) - X_2(X_1(f)) - [X_1,X_2](f) = 0\]
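As a side remark (not needed for the proof): in coordinates this is just Schwarz's theorem. The coordinate fields satisfy \([\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}] = 0\), so the computation above reduces to

\[ddf\Big(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\Big) = \frac{\partial^2 f}{\partial x^i \partial x^j} - \frac{\partial^2 f}{\partial x^j \partial x^i} = 0\]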

So we survived that case. Now let's set \(p=1\). We have some one-form \(\omega\).

\[d\omega(X_1,X_2) = X_1(\omega(X_2)) - X_2(\omega(X_1)) - \omega([X_1,X_2])\]


\[dd\omega(V_1,V_2,V_3) = V_1(d\omega(V_2,V_3)) - V_2(d\omega(V_1,V_3)) + V_3(d\omega(V_1,V_2))\]
\[ - d\omega([V_1,V_2],V_3) + d\omega([V_1,V_3],V_2) - d\omega([V_2,V_3],V_1)\]

Now expand the whole thing. Each line will be a term:

$$V_1(V_2(\omega(V_3))) - V_1(V_3(\omega(V_2))) - V_1(\omega([V_2,V_3]))$$
$$-V_2(V_1(\omega(V_3))) + V_2(V_3(\omega(V_1))) + V_2(\omega([V_1,V_3]))$$
$$+V_3(V_1(\omega(V_2))) - V_3(V_2(\omega(V_1))) - V_3(\omega([V_1,V_2]))$$
$$-[V_1,V_2](\omega(V_3)) + V_3(\omega([V_1,V_2])) + \omega(\big[[V_1,V_2],V_3\big])$$
$$+[V_1,V_3](\omega(V_2)) - V_2(\omega([V_1,V_3])) - \omega(\big[[V_1,V_3],V_2\big])$$
$$-[V_2,V_3](\omega(V_1)) + V_1(\omega([V_2,V_3])) + \omega(\big[[V_2,V_3],V_1\big])$$

The terms on the top right cancel with the terms in the bottom middle. The remaining terms in the top three lines can be combined into commutators:

$$[V_1,V_2](\omega(V_3)) - [V_1,V_3](\omega(V_2)) + [V_2,V_3](\omega(V_1))$$
$$-[V_1,V_2](\omega(V_3)) + \omega(\big[[V_1,V_2],V_3\big])$$
$$+[V_1,V_3](\omega(V_2)) - \omega(\big[[V_1,V_3],V_2\big])$$
$$-[V_2,V_3](\omega(V_1)) + \omega(\big[[V_2,V_3],V_1\big])$$

The \(\omega\) terms can be put together using additivity (after switching the sign of the second one into the commutator, \(-\big[[V_1,V_3],V_2\big] = \big[[V_3,V_1],V_2\big]\)); everything else cancels:

$$ \omega\Big(\big[[V_1,V_2],V_3\big] + \big[[V_3,V_1],V_2\big] + \big[[V_2,V_3],V_1\big]\Big) = \omega(0) = 0$$

by the Jacobi identity and the linearity of \(\omega\).
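For completeness (it's quick, even if it was covered in the lecture): the Jacobi identity itself follows by applying everything to a test function \(f\) and expanding,

$$\big[[V_1,V_2],V_3\big](f) = V_1(V_2(V_3(f))) - V_2(V_1(V_3(f))) - V_3(V_1(V_2(f))) + V_3(V_2(V_1(f)))$$

Summing the three cyclic permutations of this, each of the six third-order terms appears once with a plus and once with a minus, so the total is zero.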