Discussion:
Vector notation?
Stefan Ram
2024-07-28 09:27:30 UTC
(The quotation below is given in pure ASCII, but at the end of this
post you will also find a rendition with some Unicode being used.)

I have read the following derivation in a chapter on SR.

|(0) We define:
|X := p_"mu" p^"mu",
|
|(1) from this, by Eq. 2.36 we get:
|= p_"mu" "eta"^"mu""nu" p_"mu",
|
|(2) from this, using matrix notation we get:
|                        (  1  0  0  0 ) ( p_0 )
|= ( p_0 p_1 p_2 p_3 )   (  0 -1  0  0 ) ( p_1 )
|                        (  0  0 -1  0 ) ( p_2 )
|                        (  0  0  0 -1 ) ( p_3 ),
|
|(3) from this, we get:
|= p_0 p_0 - p_1 p_1 - p_2 p_2 - p_3 p_3,
|
|(4) using p_1 p_1 + p_2 p_2 + p_3 p_3 =: p^"3-vector" * p^"3-vector":
|= p_0 p_0 - p^"3-vector" * p^"3-vector".

. Now, I used to believe that a vector with an upper index is
a contravariant vector written as a column and a vector with
a lower index is covariant and written as a row. I'm not sure
about this. Maybe I dreamed it or just made it up. But it would
be a nice convention, wouldn't it?

Anyway, I have a question about the transition from (1) to (2):

In (1), the initial and the final "p" both have a /lower/ index "mu".
In (2), the initial p is written as a row vector, while the final p
now is written as a column vector.

When, in (1), both "p" are written exactly the same way, by what
reason then is the first "p" in (2) written as a /row/ vector and
the second "p" a /column/ vector?

Here's the same thing with a bit of Unicode mixed in:

|(0) We define:
|X ≔ p_μ p^μ
|
|(1) from this, by Eq. 2.36 we get:
|= p_μ η^μν p_ν
|
|(2) from this, using matrix notation we get:
|                        (  1  0  0  0 ) ( p₀ )
|= ( p₀ p₁ p₂ p₃ )       (  0 -1  0  0 ) ( p₁ )
|                        (  0  0 -1  0 ) ( p₂ )
|                        (  0  0  0 -1 ) ( p₃ )
|
|(3) from this, we get:
|= p₀ p₀ - p₁ p₁ - p₂ p₂ - p₃ p₃
|
|(4) using p₁ p₁ + p₂ p₂ + p₃ p₃ ≕ p⃗ * p⃗:
|= p₀ p₀ - p⃗ * p⃗

. TIA!
Ross Finlayson
2024-07-28 15:45:09 UTC
Post by Stefan Ram
[...]
When, in (1), both "p" are written exactly the same way, by what
reason then is the first "p" in (2) written as a /row/ vector and
the second "p" a /column/ vector?
[...]
It looks as though it follows the usual sorts of rank-lowering
and linearisation, building out a poor man's algebraic
varieties with the transpose as orthogonal, while in
the development the actual terminus was long ago neglected.
J. J. Lodder
2024-07-28 19:36:05 UTC
Post by Stefan Ram
(The quotation below is given in pure ASCII, but at the end of this
post you will also find a rendition with some Unicode being used.)
If you want to discuss formulae in detail,
it is better to learn some (La)TeX.

Jan
Mikko
2024-07-29 09:35:09 UTC
Post by Stefan Ram
[...]
When, in (1), both "p" are written exactly the same way, by what
reason then is the first "p" in (2) written as a /row/ vector and
the second "p" a /column/ vector?
[...]
As "eta" or the dot product is an essential element of the structure
of the world as described by SR the distinction between contravariant
and covariant vectors (or vectors and covectors) is not necessary.

Instead of p_μ p^ν or p_μ η^μν p_ν you can simply write p_μ p_ν. If
you want to use a non-orthonormal basis you need to uses a different
matrix for "eta" in p_μ p_ν but the principle is same. Usually there
is no need to use a non-orthonormal basis in SR but in GR using one
may simplify other things.
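
For instance, with an orthonormal basis the whole computation needs
only one array of components and the fixed "eta" (a toy sketch, the
component values are invented for illustration):

  import numpy as np

  p = np.array([5.0, 3.0, 0.0, 4.0])      # invented components of p
  eta = np.diag([1.0, -1.0, -1.0, -1.0])  # fixed "eta", signature (+ - - -)

  # No separate covariant and contravariant copies of p are needed;
  # "eta" is simply inserted between two copies of the same array.
  print(p @ eta @ p)                      # 25 - 9 - 0 - 16 = 0.0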
--
Mikko
Stefan Ram
2024-08-01 11:13:59 UTC
Post by Stefan Ram
When, in (1), both "p" are written exactly the same way, by what
reason then is the first "p" in (2) written as a /row/ vector and
the second "p" a /column/ vector?
In the meantime, I found the answer to my question while reading a
text by Viktor T. Toth.

Many textbooks say

                ( -1  0  0  0 )
  eta_{mu nu} = (  0  1  0  0 )
                (  0  0  1  0 )
                (  0  0  0  1 ),

but when you multiply this by a column (contravariant) vector,
you get another column (contravariant) vector instead of
a row, while the "v_mu" in

eta_{mu nu} v^nu = v_mu

seems to indicate that you will get a row (covariant) vector!

As Viktor T. Toth observed in 2005, a square matrix (i.e., a row
of columns) only really makes sense for eta^mu_nu (which is just
the identity matrix). He then clear-sightedly explains that a
matrix with /two/ covariant indices needs to be written not as
a /row of columns/ but as a /row of rows/:

eta_{mu nu} = [( -1 0 0 0 )( 0 1 0 0 )( 0 0 1 0 )( 0 0 0 1 )]

. Now, if one multiplies /this/ with a column (contravariant)
vector, one gets a row (covariant) vector (tweaking the rules for
matrix multiplication a bit: the product of the row ( -1 0 0 0 )
with the first row of the column vector, which is a single value,
is taken as a scalar multiplication, and so on)!
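
Here is a toy sketch of that multiplication (my own code, not
Toth's; the vector components are made up). The metric is stored as
a row whose entries are themselves rows, and multiplying it by a
column means scalar-multiplying each entry-row by the matching
component and adding the resulting rows:

  eta = [[-1, 0, 0, 0],   # entry mu is the row ( eta_{mu 0} ... eta_{mu 3} )
         [ 0, 1, 0, 0],
         [ 0, 0, 1, 0],
         [ 0, 0, 0, 1]]
  v_up = [2, 3, 5, 7]     # made-up contravariant components (a column vector)

  # scalar-multiply each entry-row by the matching component of the column ...
  weighted = [[v_up[mu] * x for x in eta[mu]] for mu in range(4)]
  # ... and add the rows: the result is again a row, i.e. the covariant v_mu
  v_down = [sum(entries) for entries in zip(*weighted)]
  print(v_down)           # [-2, 3, 5, 7]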
Mikko
2024-08-02 08:57:12 UTC
Post by Stefan Ram
[...]
As Viktor T. Toth observed in 2005, a square matrix (i.e., a row
of columns) only really makes sense for eta^mu_nu (which is just
the identity matrix). He then clear-sightedly explains that a
matrix with /two/ covariant indices needs to be written not as
a /row of columns/ but as a /row of rows/.
[...]
Matrices do not match very well with the needs of physics. Many physical
quantities require more general hypermatrices. But then one must be
very careful that the multiplications are done correctly. Using abstract
indices is clearer. Just note that if an index is used twice in the lower
position, the inverse "eta" must be used. For SR the upper index position
is not really necessary.
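
As an illustration of that bookkeeping (a sketch only, with invented
components), numpy's einsum comes close to abstract index notation;
with both indices of p in the lower position, the inverse "eta" is
inserted:

  import numpy as np

  p_low = np.array([5.0, -3.0, 0.0, -4.0])  # invented covariant components p_mu
  eta = np.diag([1.0, -1.0, -1.0, -1.0])
  eta_inv = np.linalg.inv(eta)              # the inverse "eta"

  X = np.einsum('m,mn,n->', p_low, eta_inv, p_low)  # p_mu eta^{mu nu} p_nu
  print(X)                                  # 25 - 9 - 0 - 16 = 0.0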
--
Mikko
Stefan Ram
2024-08-02 10:54:50 UTC
Post by Mikko
Matrices do not match very well with the needs of physics. Many physical
quantities require more general hypermatrices. But then one must be
very careful that the multiplicatons are done correctly.
In the meantime, I have written about this for the case of a ( 0, 2 )
tensor, i.e., a bilinear form (such as "eta"). It turns out that for
this case, a simple single rule for matrix multiplication suffices.

To give it the right context, my following text starts with a small
introduction into the linear algebra of vectors and forms and
arrives at the actual matrix multiplication only near the end:

(If one is not into bilinear algebra, one may stop reading now!)

It's about the fact that the matrix representation of a ( 0, 2 )-
tensor should actually be a row of rows, not a row of columns,
as you often see in certain texts. A row of columns, on the
other hand, would be suitable for a ( 1, 1 )-tensor. I got this
from a text by Viktor T. Toth. All errors here are my own though.

But since I want to start with the basics, this matrix
representation will only be dealt with towards the end of
this text, where impatient readers could of course jump to.

In this text, I limit myself to real vector spaces R, R^1,
R^2, etc. For a vector space R^n, let the set of indices
be I := { i | 0 <= i < n }.

Forms

The structure-preserving mappings f into the field R are precisely
the linear mappings from the vector space to R.

I call such a linear mapping f from the vector space to R a /form/
or a /covector/.

Let f_i be n forms. If the tuple ( f_i( v ) )_{i in I} of a vector
v is equal to the tuple ( f_i( w ) )_{i in I} of a vector w if and
only if v=w, I call the tuple ( f_i )_{i in I} a /basis/ of the
vector space. The numbers v^i := f_i( v ) are the /(contravariant)
coordinates/ of the vector v in the basis ( f_i )_{i in I}.

I call the vector e_i, for which f_j( e_i ) is 1 for i=j and 0
for i<>j, the i-th /basis vector/ of the basis ( f_i )_{i in I}.

If f is a form, then the numbers f_i := f( e_i ) are the
/(covariant) coordinates/ of the form f.

Matrices

We write the covariant coordinates f_i of a form f in a basis B
as a "horizontal" 1xn-matrix M( B, f ):

( f_0, f_1, ..., f_(n-1) ).

We write the contravariant coordinates v^i of a vector v in a
basis B as a "vertical" nx1-matrix M( B, v ):

( v^0 )
( v^1 )
( . . . )
( v^( n-1 )).

The application f( v ) of a form to a vector then results from
the matrix multiplication M( B, f ) X M( B, v ).

Rule For The Matrix Multiplication X
.-----------------------------------------------------------------.
| The /multiplication X/ of a 1xn-matrix with an nx1-matrix is |
| a sum with n summands, where the summand i is the product of |
| the column i of the first matrix with the row i of the second |
| matrix. |
'-----------------------------------------------------------------'
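
As a sketch of this rule in code (my own example, the components
are invented):

  f = [4, -1, 2]   # M( B, f ): covariant coordinates of a form, a 1x3 row
  v = [1, 2, 3]    # M( B, v ): contravariant coordinates of a vector, a 3x1 column

  # multiplication X: the sum over i of (column i of f) times (row i of v)
  f_of_v = sum(fi * vi for fi, vi in zip(f, v))
  print(f_of_v)    # 4*1 + (-1)*2 + 2*3 = 8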

( 0, 2 )-Tensors

We also call the forms (covectors) "( 0, 1 )-tensors" to express
that they make a scalar out of 0 covectors and one vector linearly.

Accordingly, a /( 0, 2 )-tensor/ is a bilinear mapping (bilinear
form) that makes a scalar out of 0 covectors and /two/ vectors.

Matrix representation of ( 0, 2 )-tensors

According to Viktor T. Toth, for us, the matrix representation
of a ( 0, 2 )-tensor f is a horizontal 1xn-matrix M( B, f ) whose
individual components are themselves horizontal 1xn-matrices of
scalars. The scalar at position j of component i of M( B, f )
is f( e_i, e_j ).

(PS: Here I am not sure about the correct order "f( e_i, e_j )"
or "f( e_j, e_i )", but this is a technical detail.)

Let's now look at the case n=3 and see how we calculate the
application of such a tensor f to two vectors v and w with
the matrix representations!
                                                           ( v^0 )     ( w^0 )
( (f_00,f_01,f_02) (f_10,f_11,f_12) (f_20,f_21,f_22) ) X  ( v^1 )  X  ( w^1 )
                                                           ( v^2 )     ( w^2 )

We start with the first product:
                                                           ( v^0 )
( (f_00,f_01,f_02) (f_10,f_11,f_12) (f_20,f_21,f_22) ) X  ( v^1 )
                                                           ( v^2 ).

According to our rule for the matrix multiplication X, this is the
sum

v^0*(f_00,f_01,f_02)+v^1*(f_10,f_11,f_12)+v^2*(f_20,f_21,f_22)=

(v^0*f_00,v^0*f_01,v^0*f_02)+
(v^1*f_10,v^1*f_11,v^1*f_12)+
(v^2*f_20,v^2*f_21,v^2*f_22)=

(v^0*f_00+v^1*f_10+v^2*f_20,
v^0*f_01+v^1*f_11+v^2*f_21,
v^0*f_02+v^1*f_12+v^2*f_22).

This is again a "horizontal" 1xn-matrix (written vertically
here because it does not fit on one line), which can be
multiplied by the vertical nx1-matrix for w according to
our rules for matrix multiplication X:

(v^0*f_00+v^1*f_10+v^2*f_20,     ( w^0 )
 v^0*f_01+v^1*f_11+v^2*f_21,  X  ( w^1 )
 v^0*f_02+v^1*f_12+v^2*f_22)     ( w^2 ).

According to our rule for the matrix multiplication X, this
results in the number

w^0*(v^0*f_00+v^1*f_10+v^2*f_20)+
w^1*(v^0*f_01+v^1*f_11+v^2*f_21)+
w^2*(v^0*f_02+v^1*f_12+v^2*f_22).

So, the multiplication of the given matrix representation
of a ( 0, 2 )-tensor with the matrix representations of
two vectors correctly results in a /number/ using the single
uniform rule for the matrix multiplication X.
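
The whole calculation can be mimicked in a few lines of toy code
(my own sketch; the components of f, v, and w are invented). The
( 0, 2 )-tensor is stored as a row of rows, and the single rule X
is applied twice:

  def X(a, b):
      # Uniform rule: sum over i of a[i] * b[i]; an entry a[i] may be a
      # plain number or itself a row, in which case scalar multiplication
      # of that row is used.
      total = None
      for ai, bi in zip(a, b):
          term = [x * bi for x in ai] if isinstance(ai, list) else ai * bi
          if total is None:
              total = term
          elif isinstance(term, list):
              total = [t + s for t, s in zip(total, term)]
          else:
              total = total + term
      return total

  f = [[1, 2, 0],   # entry i is the row ( f_i0, f_i1, f_i2 )
       [2, 5, 1],
       [0, 1, 3]]
  v = [1, 0, 2]     # contravariant components of v
  w = [0, 1, 1]     # contravariant components of w

  row = X(f, v)     # (v^0*f_00+v^1*f_10+v^2*f_20, ...), a 1x3 row
  print(row)        # [1, 4, 6]
  print(X(row, w))  # 0*1 + 1*4 + 1*6 = 10, a single number f( v, w )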

In the literature (especially on special relativity),
the "Minkowski metric", which is a (0,2)-tensor, is written as
a row of /columns/. The application to two vectors would then be:

( f_00, f_01, f_02 ) ( v^0 ) ( w^0 )
( f_10, f_11, f_12 ) ( v^1 ) ( w^1 )
( f_20, f_21, f_22 ) ( v^2 ) ( w^2 ) =

( f_00 * v^0 + f_01 * v^1 + f_02 * v^2 ) ( w^0 )
( f_10 * v^0 + f_11 * v^1 + f_12 * v^2 ) ( w^1 )
( f_20 * v^0 + f_21 * v^1 + f_22 * v^2 ) ( w^2 )

Now the product of /two column vectors/ appears, which is
not defined as a matrix multiplication! (Matrix multiplication
is not the same as the dot product of two vectors.)
JanPB
2024-08-07 09:47:13 UTC
Post by Stefan Ram
[...]
It's about the fact that the matrix representation of a ( 0, 2 )-
tensor should actually be a row of rows, not a row of columns,
as you often see in certain texts. A row of columns, on the
other hand, would be suitable for a ( 1, 1 )-tensor.
[...]
Now the product of /two column vectors/ appears, which is
not defined as a matrix multiplication! (Matrix multiplication
is not the same as the dot product of two vectors.)
A nice summary. In the matrix language we are either in R^n or at
most in a general vector space V with a fixed basis. This allows
the standard (typically left implicit) identification of V with V*
(the dual space of V). This unstated identification can be
confusing.

But any bilinear map corresponds to an element of V* x V* (where
"x" is the tensor product), and then the above standard
identification yields an element T of V x V* which acts on a* in
V* and b in V. And when T, a*, b are written as matrices [T],
[a*], [b], then:

T(a*, b) = [a]-transpose.[T].[b]

...since covectors like a* are written as transposed (row) vectors.
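
A quick numerical sanity check of that formula (a toy sketch with
invented matrices; the column [a] is the one identified with the
covector a* as described above):

  import numpy as np

  T = np.array([[1.0, 2.0, 0.0],        # invented matrix [T] of the bilinear map
                [2.0, 5.0, 1.0],
                [0.0, 1.0, 3.0]])
  a = np.array([[1.0], [0.0], [2.0]])   # column [a] identified with the covector a*
  b = np.array([[0.0], [1.0], [1.0]])   # column [b] for the vector b

  # T(a*, b) = [a]-transpose . [T] . [b]
  print((a.T @ T @ b)[0, 0])            # 10.0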

--
Jan
Loading...