Aufgaben:Problem 4

From Ferienserie MMP2
==Note==
  
Check out the document Lie-Gruppen, Bsp. 3.2 for part a) here: [https://www.dropbox.com/sh/su8ja1eynb449nr/AABIiXtiB6SNdwjjWYtYN8Tua?dl=0]. Just be careful: he writes \(\mathfrak{sl} (2, \mathbf{R}) = \{ A \in \mathbf{R}^{d \times d} : \mathrm{Tr}(A) = 1 \} \), but the trace should be zero. Just a typo.
  
  
Here is the solution by Grégoire: [[Media:Ex._4.pdf]]
==Tasks==
  
(a) Compute the Lie algebra \(\mathfrak{sl}(2,\mathbb{R})\) of \(SL(2,\mathbb{R})\).
  
(b) Show that

$$H = \left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right),\ E_+ = \left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right),\ E_- = \left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right)$$
  
form a basis of \(\mathfrak{g} = \mathfrak{sl} (2, \mathbb{R}) \) as a vector space over \(\mathbb{R}\), and show that they satisfy the relations:
  
$$[H,E_+] = 2E_+$$
$$[H,E_-] = -2E_-$$
$$[E_+,E_-] = H$$
  
Here \([\cdot,\cdot]\) denotes the matrix commutator. Conclude that \([\mathfrak{g}, \mathfrak{g}] = \mathfrak{g}\), where the set \([\mathfrak{g}, \mathfrak{g}] \) is defined as the span of all commutators between elements of \(\mathfrak{g}\).
  
==Solution==
  
===(a)===
$$SL(2,\mathbb{R}) = \{A \in \mathrm{Mat}(2,\mathbb{R}) |\ \mathrm{det}(A) = 1\}$$
$$\mathfrak{sl}(2,\mathbb{R}) = \{\dot{\gamma}(0) \mid \gamma : (-\epsilon, \epsilon)\rightarrow SL(2,\mathbb{R}),\ \gamma(0) = \mathbb{I}\}$$
  
Let \(\gamma\) be a curve in \(SL(2,\mathbb{R})\) such that \(\gamma(0) = \mathbb{I}\), and write its entries as functions of \(t\):

$$ \gamma(t) = \left( \begin{matrix} a & b\\ c & d\end{matrix}\right)$$
  
$$ \Rightarrow \forall t \in (-\epsilon, \epsilon): \: \mathrm{det}(\gamma(t)) = ad - bc = 1.$$
  
We take the derivative to find the conditions for the elements of \(\mathfrak{sl}(2,\mathbb{R}) \):
  
$$ \frac{d}{dt} \mathrm{det}(\gamma(t)) = 0$$
  
$$\dot{a}d + a\dot{d} - \dot{b}c - b\dot{c} = 0$$
  
Consider the following expression, which is well defined because \(\gamma(t)\) is always invertible (\(\mathrm{det}(\gamma(t)) = 1 \neq 0\)):

$$\mathrm{tr}(\gamma(t)^{-1}\dot{\gamma}(t))$$
  
$$ = \mathrm{tr} \left( \frac{1}{ad-bc} \left( \begin{matrix} d & -b\\-c & a\end{matrix}\right) \left( \begin{matrix} \dot{a} & \dot{b}\\ \dot{c} & \dot{d}\end{matrix}\right)\right)$$
  
$$ = \frac{1}{ad-bc} \mathrm{tr}\left( \begin{matrix} d \dot{a} - b \dot{c} & \dots\\ \dots & -c \dot{b} +a \dot{d}\end{matrix}\right) = \frac{1}{ad-bc}(d\dot{a}- b\dot{c} -c\dot{b}+ a\dot{d})$$
  
Since \(ad - bc = 1\), this numerator is exactly \(\frac{d}{dt} \mathrm{det}(\gamma(t))\) as computed above, so we conclude:
  
$$ \frac{d}{dt} \mathrm{det}(\gamma(t)) =0 \Leftrightarrow \mathrm{tr}(\gamma(t)^{-1}\dot{\gamma}(t))=0 $$
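This equivalence can be verified numerically for a concrete curve. The sketch below (assuming numpy; the hyperbolic curve is just one convenient example of a curve in \(SL(2,\mathbb{R})\), not part of the original solution) approximates \(\dot{\gamma}\) by a central difference and checks that the trace vanishes wherever the determinant is constantly 1:

```python
import numpy as np

def gamma(t):
    # A concrete curve in SL(2,R): det = cosh(t)^2 - sinh(t)^2 = 1 for all t.
    return np.array([[np.cosh(t), np.sinh(t)],
                     [np.sinh(t), np.cosh(t)]])

def gamma_dot(t, h=1e-6):
    # Central-difference approximation of the derivative of the curve.
    return (gamma(t + h) - gamma(t - h)) / (2 * h)

for t in (-0.5, 0.0, 1.3):
    assert abs(np.linalg.det(gamma(t)) - 1.0) < 1e-9
    assert abs(np.trace(np.linalg.inv(gamma(t)) @ gamma_dot(t))) < 1e-5
```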
  
We are interested in the point \(t = 0\):
  
$$\mathrm{tr}(\gamma(0)^{-1}\dot{\gamma}(0)) = \mathrm{tr}(\mathbb{I}^{-1}\dot{\gamma}(0)) = \mathrm{tr}(\dot{\gamma}(0)) \overset{!}{=} 0$$
  
$$\Rightarrow \mathfrak{sl}(2,\mathbb{R}) \subset \{A \in \mathrm{Mat}(2,\mathbb{R}) | \mathrm{tr}(A) = 0\}$$
  
Now for \(\supset\): let \(A\in \mathrm{Mat}(2,\mathbb{R})\) with \(\mathrm{tr}(A) = 0\). Define the curve \(\gamma: (-\epsilon,\epsilon) \rightarrow SL(2,\mathbb{R})\) by \( \gamma(t) = e^{tA}\). Then \(\gamma(0) = e^{0A} = \mathbb{I}\) and \( \dot{\gamma}(0) = A\), so \(A \in \mathfrak{sl}(2,\mathbb{R})\), provided the curve actually maps into \(SL(2,\mathbb{R})\). To verify this, we calculate the determinant:
  
$$ \mathrm{det}(\gamma(t)) = \mathrm{det}(e^{tA}) = e^{\mathrm{tr}(tA)} = e^{t \cdot \mathrm{tr}(A)} =e^0 = 1$$
  
$$\Rightarrow \mathfrak{sl}(2,\mathbb{R}) = \{A \in \mathrm{Mat}(2,\mathbb{R}) | \mathrm{tr}(A) = 0\}$$
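This characterization is easy to spot-check numerically. The sketch below (not part of the original solution; it assumes numpy, and the truncated power series is an ad-hoc stand-in for a library matrix exponential such as scipy.linalg.expm) verifies that \(e^{tA}\) has determinant 1 for a sample traceless \(A\):

```python
import numpy as np

def expm(M, terms=40):
    # Truncated power series for the matrix exponential; adequate for small M.
    # (A stand-in for a proper routine such as scipy.linalg.expm.)
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

A = np.array([[0.3, -1.2],
              [0.8, -0.3]])  # an arbitrary traceless matrix
for t in (0.0, 0.5, -1.0):
    assert abs(np.linalg.det(expm(t * A)) - 1.0) < 1e-8
```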
  
===(b)===
  
Let \(A \in \mathfrak{g}\), i.e. \(\mathrm{tr}(A) = 0\).
  
$$\Rightarrow A = \left(\begin{matrix} a & b\\ c & -a \end{matrix}\right) = \left(\begin{matrix} a & 0\\ 0 & -a \end{matrix}\right) + \left(\begin{matrix} 0 & b\\ 0 & 0 \end{matrix}\right) + \left(\begin{matrix} 0 & 0\\ c & 0 \end{matrix}\right)$$
  
$$ = aH + bE_+ + cE_-$$
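The decomposition can be confirmed mechanically; a minimal numpy sketch (the coefficient values are arbitrary, chosen only for illustration):

```python
import numpy as np

H = np.array([[1.0, 0.0], [0.0, -1.0]])
E_plus = np.array([[0.0, 1.0], [0.0, 0.0]])
E_minus = np.array([[0.0, 0.0], [1.0, 0.0]])

a, b, c = 2.0, -1.5, 0.25            # arbitrary coefficients
A = np.array([[a, b], [c, -a]])      # a generic traceless matrix
assert np.array_equal(A, a * H + b * E_plus + c * E_minus)
assert np.trace(A) == 0.0
```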
  
It remains to show that \(H, E_+, E_-\) are linearly independent. Let \(a,b,c\in \mathbb{R}\) with:
  
$$ aH + bE_+ + cE_- = 0$$
  
$$\left(\begin{matrix} a & 0\\ 0 & -a \end{matrix}\right) + \left(\begin{matrix} 0 & b\\ 0 & 0 \end{matrix}\right) + \left(\begin{matrix} 0 & 0\\ c & 0 \end{matrix}\right) = 0$$
  
$$\left(\begin{matrix} a & b\\ c & -a \end{matrix}\right) = \left(\begin{matrix} 0 & 0\\ 0 & 0 \end{matrix}\right) \Rightarrow a=b=c = 0$$
  
Now we calculate the commutators:
  
$$[H,E_+] = \left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right)\left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right) - \left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right)\left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right) = \left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right)  - \left(\begin{matrix} 0 & -1\\ 0 & 0 \end{matrix}\right) = 2E_+$$
  
$$[H,E_-] = \left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right)\left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right) - \left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right)\left(\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right) = \left(\begin{matrix} 0 & 0\\ -1 & 0 \end{matrix}\right) - \left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right) = -2E_-$$
  
$$[E_+,E_-] = \left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right)\left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right) - \left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right)\left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right) = \left(\begin{matrix} 1 & 0\\ 0 & 0 \end{matrix}\right)  - \left(\begin{matrix} 0 & 0\\ 0 & 1 \end{matrix}\right) = H$$
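The three bracket relations can likewise be checked with a few lines of numpy (a sketch, not part of the original solution):

```python
import numpy as np

H = np.array([[1, 0], [0, -1]])
E_plus = np.array([[0, 1], [0, 0]])
E_minus = np.array([[0, 0], [1, 0]])

def bracket(X, Y):
    # Matrix commutator [X, Y] = XY - YX.
    return X @ Y - Y @ X

assert np.array_equal(bracket(H, E_plus), 2 * E_plus)
assert np.array_equal(bracket(H, E_minus), -2 * E_minus)
assert np.array_equal(bracket(E_plus, E_minus), H)
```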
  
We conclude that \([\mathfrak{g}, \mathfrak{g}] \) is again equal to \(\mathfrak{g}\): by the bilinearity and antisymmetry of the commutator (and the fact that \([A,A] = 0\)), every element of \([\mathfrak{g}, \mathfrak{g}] \) decomposes into a linear combination of the three commutators above, and thus into a linear combination of \(H, E_+, E_-\). Hence \(H, E_+, E_-\) span \([\mathfrak{g}, \mathfrak{g}] \), and since their linear independence is already established, they form a basis of \([\mathfrak{g}, \mathfrak{g}] \). Because \([\mathfrak{g}, \mathfrak{g}] \) and \(\mathfrak{g}\) share a basis, they are equal.
 

Latest revision as of 13:52, 26 July 2015
