Talk:Aufgaben:Problem 5

  
 
Does anyone like tensor notation and want to tell me whether this is formally correct?

--[[User:Nik|Nik]] ([[User talk:Nik|talk]]) 14:41, 28 July 2015 (CEST)
 
===Ilmanen's solution in Einstein notation===
 
 
Let \(A \in Z(\mathrm{Mat}_d(\mathbb{C}))\).
 
 
Let \(E_{ij}\) be the \(d \times d\) matrix with \((E_{ij})^k{}_l = \delta^k{}_i \, \delta^j{}_l \).
 
 
Now, consider
 
$$(E_{ij} A)^k{}_l = (E_{ij})^k{}_m A^m{}_l = \delta^k{}_i \delta^j{}_m A^m{}_l = \delta^k{}_i A^j{}_l$$
 
but since \(A\) commutes with all complex \(d \times d\) matrices, this is the same as
 
$$(A E_{ij})^k{}_l = A^k{}_m (E_{ij})^m{}_l = A^k{}_m \delta^m{}_i \delta^j{}_l = \delta^j{}_l A^k{}_i$$
 
 
Thus, we have that
 
$$\delta^k{}_i A^j{}_l = \delta^j{}_l A^k{}_i$$
 
As this holds for ''any'' \(1 \leq i,j,k,l \leq d\), we may set \(k = i\) (no summation) to get \(A^j{}_l = \delta^j{}_l \, A^i{}_i\); taking \(j \neq l\) shows that the off-diagonal entries vanish, while taking \(j = l\) shows that all diagonal entries agree:
 
$$\forall i \neq j: A^i{}_j = 0 \ \text{and} \ A^i{}_i = A^j{}_j$$
 
which requires that \(A\) takes the form
 
$$A = \lambda \mathbb{I}_d$$
 
for some \(\lambda \in \mathbb{C}\).
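
For concreteness, here is the \(d = 2\) case written out (just an illustration of the computation above): with \(A^k{}_l\) the entries of \(A\),

$$E_{12} A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} A^1{}_1 & A^1{}_2 \\ A^2{}_1 & A^2{}_2 \end{pmatrix} = \begin{pmatrix} A^2{}_1 & A^2{}_2 \\ 0 & 0 \end{pmatrix}, \qquad A E_{12} = \begin{pmatrix} 0 & A^1{}_1 \\ 0 & A^2{}_1 \end{pmatrix},$$

so \(E_{12} A = A E_{12}\) forces \(A^2{}_1 = 0\) and \(A^2{}_2 = A^1{}_1\); the analogous computation with \(E_{21}\) forces \(A^1{}_2 = 0\), leaving \(A = A^1{}_1 \, \mathbb{I}_2\).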
 
  
 
----

Why do you write this with up and down indices at all? Wouldn't it be normal to just use lower indices, as these are just matrix multiplications we are dealing with, or am I missing something?

Carl (talk) 16:06, 28 July 2015 (CEST)

----

Quote from Wikipedia: "Einstein notation can be applied in slightly different ways. Typically, each index occurs once in an upper (superscript) and once in a lower (subscript) position in a term; however, the convention can be applied more generally to any repeated indices within a term. When dealing with covariant and contravariant vectors, where the position of an index also indicates the type of vector, the first case usually applies; a covariant vector can only be contracted with a contravariant vector, corresponding to summation of the products of coefficients. On the other hand, when there is a fixed coordinate basis (or when not considering coordinate vectors), one may choose to use only subscripts;"

Oh, and: \((E_{ij})_{kl} = \delta_{ik} \delta_{jl} \) or \((E_i{}^j)^k{}_l = \delta_i{}^k \delta^j{}_l \). I'm not sure about this, but lower indices have to stay low, and upper indices have to stay up.

[[User:Djanine|Djanine]] ([[User talk:Djanine|talk]]) 16:39, 28 July 2015 (CEST)

----

You're both probably right - that's why I wanted to know whether anyone is confident with this notation. [https://en.wikipedia.org/wiki/Einstein_notation#Common_operations_in_this_notation Wikipedia] uses up and down indices for matrix multiplication too, but since we aren't dealing with co-/contravariant vectors, subscripts should probably suffice as well. <br>
I just felt that the alternative solution, as it's written in the wiki, was overly brief / a bit hand-wavy.

Better this way?

--[[User:Nik|Nik]] ([[User talk:Nik|talk]]) 16:42, 28 July 2015 (CEST)

===Ilmanen's solution in Einstein notation===

Let \(A \in Z(\mathrm{Mat}_d(\mathbb{C}))\).

Let \(E_{ij}\) be the \(d \times d\) matrix with \((E_{ij})_{kl} = \delta_{ik} \delta_{jl} \).

Now, consider

$$(E_{ij} A)_{kl} = (E_{ij})_{km} A_{ml} = \delta_{ki} \delta_{jm} A_{ml} = \delta_{ki} A_{jl}$$

but since \(A\) commutes with all complex \(d \times d\) matrices, this is the same as

$$(A E_{ij})_{kl} = A_{km} (E_{ij})_{ml} = A_{km} \delta_{mi} \delta_{jl} = \delta_{jl} A_{ki}$$

Thus, we have that

$$\delta_{ki} A_{jl} = \delta_{jl} A_{ki}$$

As this holds for ''any'' \(1 \leq i,j,k,l \leq d\), we find:

$$\forall i \neq j: A_{ij} = 0 \ \text{and} \ A_{ii} = A_{jj}$$

which requires that \(A\) takes the form

$$A = \lambda \mathbb{I}_d$$

for some \(\lambda \in \mathbb{C}\).
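
As a quick numerical sanity check, here is a minimal NumPy sketch (separate from the argument above; it picks \(d = 4\) arbitrarily and assumes the column-stacking \(\mathrm{vec}\) convention, under which \(E_{ij} A - A E_{ij} = 0\) becomes \((I \otimes E_{ij} - E_{ij}^{\top} \otimes I)\,\mathrm{vec}(A) = 0\)). It stacks all \(d^2\) commutator constraints into one matrix and confirms that the null space is one-dimensional and spanned by the identity:

<pre>
import numpy as np

d = 4  # dimension of the matrix algebra; any d >= 1 works

def E(i, j):
    """Standard basis matrix E_ij: a single 1 in row i, column j."""
    M = np.zeros((d, d))
    M[i, j] = 1.0
    return M

I = np.eye(d)

# For column-stacking vec (NumPy's order='F'):
#   vec(E_ij A - A E_ij) = (I kron E_ij - E_ij^T kron I) vec(A).
# Real arithmetic suffices here, since every constraint matrix has real entries.
rows = [np.kron(I, E(i, j)) - np.kron(E(i, j).T, I)
        for i in range(d) for j in range(d)]
C = np.vstack(rows)  # (d^2 * d^2) x d^2 constraint matrix

# Dimension of the null space = number of numerically zero singular values.
s = np.linalg.svd(C, compute_uv=False)
print("null space dimension:", int(np.sum(s < 1e-10)))  # expect 1

# The null space should be spanned by vec(I_d): un-vectorise the right
# singular vector belonging to the smallest singular value.
_, _, Vh = np.linalg.svd(C)
A = Vh[-1].reshape(d, d, order="F")
print("multiple of the identity:", np.allclose(A, A[0, 0] * np.eye(d)))  # expect True
</pre>

This prints a null space dimension of \(1\) and confirms the spanning vector is a multiple of \(\mathbb{I}_d\), matching the conclusion above.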
