Aufgaben:Problem 4

From Ferienserie MMP2
 

Revision as of 09:26, 22 June 2015

Note

Check out the document Lie-Gruppen, Bsp. 3.2 for part a) here: [1]. Just be careful, he writes \(\mathfrak{sl} (2, \mathbf{R}) = \{ A \in \mathbf{R}^{d \times d} : \mathrm{Tr}(A) = 1 \} \) while the trace should be zero. Just a typo.


Here is the solution by Grégoire: Media:Ex._4.pdf

Task

(a) Compute the Lie algebra \(\mathfrak{sl}(2,\mathbb{R})\) of \(SL(2,\mathbb{R})\).

(b) show that $$H = \left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right),\ E_+= \left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right),\ E_- = \left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right)$$

form a basis of \(\mathfrak{g} = \mathfrak{sl} (2, \mathbf{R}) \) as a vector space over \(\mathbb{R}\) and show that they satisfy the relations:

$$[H,E_+] = 2E_+$$ $$[H,E_-] = -2E_-$$ $$[E_+,E_-] = H$$

Here \([,]\) denotes the matrix commutator. Conclude that \([\mathfrak{g}, \mathfrak{g}] = \mathfrak{g}\). Here, the set \([\mathfrak{g}, \mathfrak{g}] \) is defined as the span of all commutators between elements of \(\mathfrak{g}\).

Solution

(a)

$$SL(2,\mathbb{R}) = \{A \in Mat(2,\mathbb{R}) |\ det(A) = 1\}$$ $$\mathfrak{sl}(2,\mathbb{R}) = \{\dot{\gamma}(0)\ |\ \gamma: (-\epsilon, \epsilon)\rightarrow SL(2,\mathbb{R}),\ \gamma(0) = \mathbb{I}\}$$

let \(\gamma\) be a curve in \(SL(2,\mathbb{R})\) such that \(\gamma(0) = \mathbb{I}\), with entries \(a, b, c, d\) depending on \(t\):

$$ \gamma(t) = \left( \begin{matrix} a & b\\ c & d\end{matrix}\right)$$

$$ \Rightarrow det(\gamma(t)) = (ad -bc) = 1$$

\( \forall t \in (-\epsilon, \epsilon) \). We take the derivative to find the conditions on the elements of \(\mathfrak{sl}(2,\mathbb{R}) \):

$$ \frac{d}{dt} det(\gamma(t)) = 0$$

$$\dot{a}d + a\dot{d} - \dot{b}c - b\dot{c} = 0$$

Consider the following:

(Note that \(\gamma(t)\) is always invertible because \(det(\gamma(t))= 1 \neq 0\))

$$tr(\gamma(t)^{-1}\dot{\gamma}(t))$$

$$ = tr \left( \frac{1}{ad-bc} \left( \begin{matrix} d & -b\\-c & a\end{matrix}\right) \left( \begin{matrix} \dot{a} & \dot{b}\\ \dot{c} & \dot{d}\end{matrix}\right)\right)$$

$$ = \frac{1}{ad-bc} tr\left( \begin{matrix} d \dot{a} - b \dot{c} & \dots\\ \dots & -c \dot{b} +a \dot{d}\end{matrix}\right) = \frac{1}{ad-bc}(d\dot{a}- b\dot{c} -c\dot{b}+ a\dot{d})$$

since \(ad - bc = det(\gamma(t)) = 1\), this is exactly the expression above, and we conclude:

$$ \frac{d}{dt} det(\gamma(t)) =0 \Leftrightarrow tr(\gamma(t)^{-1}\dot{\gamma}(t))=0 $$

we are interested in the point \(t = 0\):

$$tr(\gamma(0)^{-1}\dot{\gamma}(0)) = tr(\mathbb{I}^{-1}\dot{\gamma}(0)) = tr(\dot{\gamma}(0)) \overset{!}{=} 0$$

$$\Rightarrow \mathfrak{sl}(2,\mathbb{R}) \subset \{A \in Mat(2,\mathbb{R}) | tr(A) = 0\}$$
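As a sanity check, the equivalence \(\frac{d}{dt} det(\gamma(t)) = 0 \Leftrightarrow tr(\gamma(t)^{-1}\dot{\gamma}(t)) = 0\) can be tested numerically. The sketch below (not part of the proof; the rotation curve is just a hypothetical example of a curve with \(det = 1\)) compares both sides by finite differences:

```python
import math

def mat_mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv(X):
    d = det(X)
    return [[X[1][1] / d, -X[0][1] / d],
            [-X[1][0] / d, X[0][0] / d]]

def trace(X):
    return X[0][0] + X[1][1]

# a sample curve in SL(2,R): rotations, det(gamma(t)) = 1 for every t
def gamma(t):
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t), math.cos(t)]]

def diff(f, t, h=1e-6):
    # central finite difference, applied entrywise
    g1, g0 = f(t + h), f(t - h)
    return [[(g1[i][j] - g0[i][j]) / (2 * h) for j in range(2)]
            for i in range(2)]

t = 0.7
lhs = (det(gamma(t + 1e-6)) - det(gamma(t - 1e-6))) / 2e-6  # d/dt det(gamma)
rhs = trace(mat_mul(inv(gamma(t)), diff(gamma, t)))         # tr(gamma^-1 gamma')
print(abs(lhs) < 1e-6, abs(rhs) < 1e-6)
```

Both sides vanish (up to finite-difference error), as the derivation predicts.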

Now for \(\supset\): let \(A\in Mat(2,\mathbb{R})\) such that \(tr(A) = 0\). We define a curve by \( \gamma(t) =e^{tA}\). We check that \( \dot{\gamma}(0) = A\) and \(\gamma(0) = e^{0A} = \mathbb{I}\). It only remains to show that the curve actually takes values in \(SL(2,\mathbb{R})\), which then gives \(A = \dot{\gamma}(0) \in \mathfrak{sl}(2,\mathbb{R})\); to do this we calculate the determinant:

$$ det(\gamma(t)) = det(e^{tA}) = e^{tr(tA)} = e^{t\cdot tr(A)} =e^0 = 1$$

$$\Rightarrow \mathfrak{sl}(2,\mathbb{R}) = \{A \in Mat(2,\mathbb{R}) | tr(A) = 0\}$$
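The identity \(det(e^{tA}) = e^{t\cdot tr(A)}\) can also be checked numerically. The following sketch approximates \(e^{tA}\) by a truncated power series for one hypothetical traceless matrix \(A\); it is an illustration, not part of the proof:

```python
def mat_mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    # truncated power series: exp(A) ~ sum_{k < terms} A^k / k!
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity, the k = 0 term
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

A = [[2.0, 1.0], [3.0, -2.0]]  # tr(A) = 0 (hypothetical example)
for t in (0.3, 0.7, 1.0):
    tA = [[t * A[i][j] for j in range(2)] for i in range(2)]
    E = mat_exp(tA)
    d = E[0][0] * E[1][1] - E[0][1] * E[1][0]
    print(round(d, 6))  # 1.0 in every case
```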


Proposal for a slightly shorter solution:

To show: \(T_e\mathrm{SL}(n,\mathbb{R})\cong\{ A\in \mathbb{R}^{n\times n}\ |\ \mathrm{tr}A =0\} \)

proof: Let \(A\in \mathbb{R}^{n\times n}\) be such that \(\mathrm{tr}A=0\), and let \(\gamma:\mathbb{R}\rightarrow\mathbb{R}^{n\times n}\) be the curve defined by \(\gamma(t)=\mathrm{exp}(tA)\). This curve has the following properties:

  • \(\gamma(0)=I_n\)
  • \(\dot{\gamma}(0)=A\)
  • \(\mathrm{det}(\gamma(t))=\mathrm{det}(\mathrm{exp}(tA))=\mathrm{exp}(\mathrm{tr}(tA))=1\)


Therefore we know that \( \gamma(t) \) is a curve in \(\mathrm{SL}(n,\mathbb{R})\) and \(A\in T_e\mathrm{SL}(n,\mathbb{R})\).

This shows that \(\{ A\in \mathbb{R}^{n\times n}\ |\ \mathrm{tr}A =0\} \) is a linear subspace contained in \(T_e\mathrm{SL}(n,\mathbb{R})\). Since a linear subspace contained in another subspace of the same finite dimension must be equal to it, it suffices to show that \(T_e\mathrm{SL}(n,\mathbb{R})\) and \(\{ A\in \mathbb{R}^{n\times n}\ |\ \mathrm{tr}A =0\} \) have the same dimension:

$$\mathrm{dim}(T_e\mathrm{SL}(n,\mathbb{R})) = \mathrm{dim}(\mathrm{SL}(n,\mathbb{R}))=n^2-1$$ $$\mathrm{dim}(\{ A\in \mathbb{R}^{n\times n}\ |\ \mathrm{tr}A =0\}) =n^2-1$$

q.e.d
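The dimension \(n^2-1\) of the traceless matrices can be made explicit by exhibiting a basis: the off-diagonal elementary matrices \(E_{ij}\) (\(i\neq j\)) together with the diagonal matrices \(E_{ii}-E_{nn}\) (\(i<n\)). A small sketch (the function name is hypothetical):

```python
def traceless_basis(n):
    # off-diagonal elementary matrices E_ij with i != j: n*(n-1) of them
    basis = []
    for i in range(n):
        for j in range(n):
            if i != j:
                M = [[0] * n for _ in range(n)]
                M[i][j] = 1
                basis.append(M)
    # diagonal matrices E_ii - E_nn (0-indexed: i < n-1): n-1 of them
    for i in range(n - 1):
        M = [[0] * n for _ in range(n)]
        M[i][i], M[n - 1][n - 1] = 1, -1
        basis.append(M)
    return basis

for n in (2, 3, 4):
    b = traceless_basis(n)
    traceless = all(sum(M[i][i] for i in range(n)) == 0 for M in b)
    print(len(b) == n * n - 1, traceless)  # True True for each n
```

For \(n = 2\) this recovers exactly the three matrices \(H, E_+, E_-\) from part (b).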

(b)

let \(A \in \mathfrak{g}\), i.e. \(tr(A) = 0\):

$$\Rightarrow A = \left(\begin{matrix} a & b\\ c & -a \end{matrix}\right) = \left(\begin{matrix} a & 0\\ 0 & -a \end{matrix}\right) + \left(\begin{matrix} 0 & b\\ 0 & 0 \end{matrix}\right) + \left(\begin{matrix} 0 & 0\\ c & 0 \end{matrix}\right)$$

$$ = aH + bE_+ + cE_-$$

left to show: \(H, E_+, E_-\) are linearly independent. let \(a,b,c\in \mathbb{R}\) and:

$$ aH + bE_+ + cE_- = 0$$

$$\left(\begin{matrix} a & 0\\ 0 & -a \end{matrix}\right) + \left(\begin{matrix} 0 & b\\ 0 & 0 \end{matrix}\right) + \left(\begin{matrix} 0 & 0\\ c & 0 \end{matrix}\right) = 0$$

$$\left(\begin{matrix} a & b\\ c & -a \end{matrix}\right) = \left(\begin{matrix} 0 & 0\\ 0 & 0 \end{matrix}\right) \Rightarrow a=b=c = 0$$
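The decomposition and its uniqueness amount to reading off the matrix entries; a tiny sketch (helper names are hypothetical):

```python
def decompose(A):
    # coefficients of a traceless A = [[a, b], [c, -a]] w.r.t. H, E+, E-
    return A[0][0], A[0][1], A[1][0]

def reconstruct(a, b, c):
    # a*H + b*E+ + c*E-
    return [[a, b], [c, -a]]

A = [[3, 5], [-2, -3]]  # an arbitrary traceless example
a, b, c = decompose(A)
print(reconstruct(a, b, c) == A)  # True: the coefficients are unique
```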

Now we calculate the commutators:

$$[H,E_+] = \left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right)\left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right) - \left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right)\left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right) = \left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right) - \left(\begin{matrix} 0 & -1\\ 0 & 0 \end{matrix}\right) = 2E_+$$

$$[H,E_-] = \left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right)\left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right) - \left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right)\left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right) = \left(\begin{matrix} 0 & 0\\ -1 & 0 \end{matrix}\right) - \left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right) = -2E_-$$

$$[E_+,E_-] = \left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right)\left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right) - \left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right)\left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right) = \left(\begin{matrix} 1 & 0\\ 0 & 0 \end{matrix}\right) - \left(\begin{matrix} 0 & 0\\ 0 & 1 \end{matrix}\right) = H$$

We conclude that \([\mathfrak{g}, \mathfrak{g}] \) is again equal to \(\mathfrak{g}\): by linearity and antisymmetry of the commutator (and the fact that \([A,A] = 0\)), every element of \([\mathfrak{g}, \mathfrak{g}] \) decomposes into a linear combination of the three commutators above, and thus into a linear combination of \(H, E_+, E_-\). Conversely, the relations show that \(H, E_+, E_-\) themselves lie in \([\mathfrak{g}, \mathfrak{g}] \), so they form a basis of \([\mathfrak{g}, \mathfrak{g}] \) (linear independence already established). Because \([\mathfrak{g}, \mathfrak{g}] \) and \(\mathfrak{g}\) share a basis they are equal.
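The three commutator relations can also be verified mechanically. A minimal sketch in Python (integer arithmetic, so the comparisons are exact):

```python
def mat_mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(X, Y):
    # [X, Y] = XY - YX
    XY, YX = mat_mul(X, Y), mat_mul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(2)] for i in range(2)]

def scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

H  = [[1, 0], [0, -1]]
Ep = [[0, 1], [0, 0]]   # E+
Em = [[0, 0], [1, 0]]   # E-

print(commutator(H, Ep) == scale(2, Ep))   # [H, E+] = 2 E+  -> True
print(commutator(H, Em) == scale(-2, Em))  # [H, E-] = -2 E- -> True
print(commutator(Ep, Em) == H)             # [E+, E-] = H    -> True
```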