Talk:Aufgaben:Problem 5

From Ferienserie MMP2
Revision as of 07:56, 29 July 2015 by Nik (Talk | contribs)


Does anyone like tensor notation and wants to tell me whether this is formally correct?

--Nik (talk) 14:41, 28 July 2015 (CEST)


Why do you write this with up/down indices at all? Wouldn't it be more natural to just use lower indices, since these are just matrix multiplications we are dealing with, or am I missing something?

Carl (talk) 16:06, 28 July 2015 (CEST)


Quote from Wikipedia: "Einstein notation can be applied in slightly different ways. Typically, each index occurs once in an upper (superscript) and once in a lower (subscript) position in a term; however, the convention can be applied more generally to any repeated indices within a term. When dealing with covariant and contravariant vectors, where the position of an index also indicates the type of vector, the first case usually applies; a covariant vector can only be contracted with a contravariant vector, corresponding to summation of the products of coefficients. On the other hand, when there is a fixed coordinate basis (or when not considering coordinate vectors), one may choose to use only subscripts;"

Oh and: \((E_{ij})_{kl} = \delta_{ik} \delta_{jl} \) or \((E_i{}^j)^k{}_l = \delta_i{}^k \delta^j{}_l \). I'm not sure about this, but lower indices have to stay low, and upper indices have to stay up.

Djanine (talk) 16:39, 28 July 2015 (CEST)


You're both probably right - that's why I wanted to know if anyone is confident in this notation stuff. Wikipedia has up and down indices for matrix multiplication, too, but since we aren't dealing with co-/contravariant vectors, subscripts should probably suffice as well.
I just felt that the alternative solution as it's written in the wiki was overly brief / a bit hand-waving.

Better this way?

--Nik (talk) 16:42, 28 July 2015 (CEST)


Yes, it was probably correct before, but this seems more natural. I find the second-to-last step a bit hard to follow. I would add: set \(k = i\); then \(A_{jl} = \delta_{jl} A_{ii}\), so for \(l = j\) we get \(A_{ii} = A_{jj}\), and for \(l \neq j\) we get \(A_{jl} = 0\). As this holds for all \(i, j\)...

Maybe one could define \((E_{ij})\) at the beginning with the condition \(i \neq j\); then the last step would look a bit nicer.

Carl (talk) 17:42, 28 July 2015 (CEST)

One more technicality: the proof is missing the other inclusion \(\{\lambda \mathbb{I} : \lambda \in \mathbb{C}\} \subset Z(\mathrm{Mat}_d(\mathbb{C}))\).

Carl (talk) 07:00, 29 July 2015 (CEST)


You're right, that way it looks considerably neater.

Ilmanen's solution in Einstein notation

Let \(A \in Z(\mathrm{Mat}_d(\mathbb{C}))\).

Let \(E_{ij}\) be the \(d \times d\)-matrix with \((E_{ij})_{kl} = \delta_{ik} \delta_{jl} \).

Now let \(1 \leq i, j \leq d\) with \(i \neq j\) be fixed and consider $$(E_{ij} A)_{kl} = (E_{ij})_{km} A_{ml} = \delta_{ki} \delta_{jm} A_{ml} = \delta_{ki} A_{jl}.$$ Since \(A\) commutes with all complex \(d \times d\)-matrices, this is the same as $$(A E_{ij})_{kl} = A_{km} (E_{ij})_{ml} = A_{km} \delta_{mi} \delta_{jl} = \delta_{jl} A_{ki}.$$
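As a quick sanity check (not part of the original solution; the matrix size \(d = 4\) and the random \(A\) are arbitrary), the index identity \((E_{ij} A)_{kl} = \delta_{ki} A_{jl}\) can be verified numerically: multiplying \(E_{ij}\) from the left should produce a matrix whose only nonzero row is row \(i\), equal to row \(j\) of \(A\).

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((d, d))

def E(i, j, d):
    """Matrix unit: (E_ij)_{kl} = delta_{ik} delta_{jl}."""
    M = np.zeros((d, d))
    M[i, j] = 1.0
    return M

i, j = 1, 2
left = E(i, j, d) @ A

# delta_{ki} A_{jl}: only row i is nonzero, and it equals row j of A.
expected = np.zeros((d, d))
expected[i, :] = A[j, :]
assert np.allclose(left, expected)
```

The analogous check for right multiplication (\((A E_{ij})_{kl} = \delta_{jl} A_{ki}\): only column \(j\) nonzero, equal to column \(i\) of \(A\)) works the same way.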

Thus, we have that $$\delta_{ki} A_{jl} = \delta_{jl} A_{ki}.$$ For \(k = i, l = j\), we find $$A_{ii} = A_{jj},$$ and for \(k = l = i\) we see that $$A_{ji} = 0.$$ As this holds for any pair \((i, j)\) with \(i \neq j\), \(A\) must be of the form $$A = \lambda \mathbb{I}_d$$ for some \(\lambda \in \mathbb{C}\). And conversely, \(\lambda \mathbb{I}_d \in Z(\mathrm{Mat}_d(\mathbb{C}))\) for all \(\lambda \in \mathbb{C}\), since scalar matrices commute with everything.
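Both directions of the statement can also be illustrated numerically (a hypothetical check, with arbitrary choices of \(d = 3\), the test matrices, and \(\lambda = 2.5\)): a scalar matrix commutes with a random matrix, while a non-scalar matrix already fails to commute with a single matrix unit \(E_{ij}\).

```python
import numpy as np

d = 3
rng = np.random.default_rng(1)

# Inclusion {lambda * I} subset Z(Mat_d(C)):
# a scalar matrix commutes with any matrix B.
lam = 2.5
S = lam * np.eye(d)
B = rng.standard_normal((d, d))
assert np.allclose(S @ B, B @ S)

# Converse illustration: a non-scalar matrix does not commute
# with some matrix unit, here E_{01}.
A = np.diag([1.0, 2.0, 3.0])      # non-scalar diagonal matrix
E01 = np.zeros((d, d))
E01[0, 1] = 1.0
assert not np.allclose(E01 @ A, A @ E01)
```

Concretely, \(E_{01} A\) has row 1 of \(A\) in its row 0, while \(A E_{01}\) has column 0 of \(A\) in its column 1, matching the two index formulas above.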