
Linear Algebra

We've reviewed vector spaces, but the operations available to us are a little too basic to be useful yet. In this article, we'll introduce the concept of linear algebra.

In linear algebra, we'll be using the vectors introduced in the last article and adding some extra pieces called linear operators. Linear operators will be represented by capital letters like A, B, C, etc. Linear operators will act on vectors to produce new vectors. We'll notate it like this:

A \mathbf{v} = \mathbf{w}

In English this says: when the linear operator A acts on the vector \mathbf{v}, it produces the vector \mathbf{w}. Linear operators can also act across different vector spaces, i.e. in the example above, it might be the case that \mathbf{v} and \mathbf{w} are from different vector spaces.

We define the vector spaces a linear operator acts on as:

A: V \rightarrow W

This says: the linear operator A takes in vectors from the vector space V and produces vectors in the vector space W.

It's worth at this point mentioning a word that might be on the mind of some readers: Matrices. If you've studied matrices before, you will notice that linear operators appear to be very similar. They both involve operations on vectors. Matrices are one possible representation of linear operators. But, as we've said before, we're more interested in studying abstract objects and their properties rather than specific representations. So, we'll be focusing on the abstract properties of linear operators rather than concrete representations.
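
That said, a concrete representation can still be handy for building intuition. Here's a minimal sketch (using NumPy purely as an illustration; the matrix and vector below are arbitrary choices, not part of the abstract treatment) of a linear operator represented by a 2×3 matrix, i.e. an operator A: V \rightarrow W where V is 3-dimensional and W is 2-dimensional:

```python
import numpy as np

# A 2x3 matrix is one concrete representation of a linear operator
# A : V -> W, where V is 3-dimensional and W is 2-dimensional.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])

v = np.array([1.0, 2.0, 3.0])  # a vector in V (3 components)
w = A @ v                      # the resulting vector in W (2 components)

print(w)        # [7. 9.]
print(w.shape)  # (2,)
```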

Linear operators follow only two very simple rules:

Additivity

A(\mathbf{v} + \mathbf{w}) = A\mathbf{v} + A\mathbf{w}

Homogeneity

A(c\mathbf{v}) = cA\mathbf{v}
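
To see both rules in action numerically, here's a small illustrative check (the matrix stands in for an abstract operator, and the particular numbers are arbitrary):

```python
import numpy as np

# An arbitrary operator (represented as a matrix), two arbitrary vectors and a scalar.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v = np.array([1.0, -2.0])
w = np.array([4.0, 0.5])
c = 2.5

# Additivity: A(v + w) = Av + Aw
assert np.allclose(A @ (v + w), A @ v + A @ w)

# Homogeneity: A(cv) = c(Av)
assert np.allclose(A @ (c * v), c * (A @ v))

print("Both rules hold for this example.")
```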

From these simple rules, we can derive a very important property of linear operators. Remember that in the previous article we introduced the idea of a basis. One of the properties a basis must have is that it is spanning, i.e. any vector in the vector space can be written as a linear combination of the basis vectors.

So, say we have a vector \mathbf{v} in a vector space V. If we have a basis \mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n in this vector space, then we can write \mathbf{v} as \mathbf{v} = c_1\mathbf{e}_1 + c_2\mathbf{e}_2 + \ldots + c_n\mathbf{e}_n for some set of scalars c_1, c_2, \ldots, c_n.

Now look at what happens when we apply a linear operator A: V \rightarrow W to \mathbf{v}:

A\mathbf{v} = A(c_1\mathbf{e}_1 + c_2\mathbf{e}_2 + \ldots + c_n\mathbf{e}_n) \\[3ex] = A(c_1\mathbf{e}_1) + A(c_2\mathbf{e}_2) + \ldots + A(c_n\mathbf{e}_n) \\[3ex] = c_1A\mathbf{e}_1 + c_2A\mathbf{e}_2 + \ldots + c_nA\mathbf{e}_n

This shows us something very interesting: to calculate how A acts on any vector \mathbf{v}, we only need to know how it acts on the basis vectors of the vector space. This is a very powerful property as it means, for example, in a 2D vector space, we only need to know how A acts on two basis vectors instead of every possible vector in the space.

Let's look, for example, at the identity operator I acting on a 2D vector space. The identity operator is essentially the "do nothing" operator as it takes every vector to itself. We define I on our two basis vectors as:

I\mathbf{e}_1 = \mathbf{e}_1 \\[3ex] I\mathbf{e}_2 = \mathbf{e}_2

We can therefore verify that I behaves as we expect:

\mathbf{v} = c_1\mathbf{e}_1 + c_2\mathbf{e}_2 \\[3ex] I \mathbf{v} = I(c_1\mathbf{e}_1 + c_2\mathbf{e}_2) \\[3ex] = c_1I\mathbf{e}_1 + c_2I\mathbf{e}_2 \\[3ex] = c_1\mathbf{e}_1 + c_2\mathbf{e}_2 \\[3ex] = \mathbf{v}

So it works!
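
We can also run this check numerically. The sketch below is illustrative only (stacking the images of the basis vectors as columns is specific to the matrix representation): it builds an operator from nothing more than its action on the two basis vectors, then applies it to an arbitrary vector. Choosing I\mathbf{e}_1 = \mathbf{e}_1 and I\mathbf{e}_2 = \mathbf{e}_2 recovers the identity:

```python
import numpy as np

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

def operator_from_basis_action(Ae1, Ae2):
    """Build the matrix whose columns are the images of e1 and e2.

    Knowing A e1 and A e2 is enough to determine A on every vector,
    since Av = c1*(A e1) + c2*(A e2) whenever v = c1*e1 + c2*e2.
    """
    return np.column_stack([Ae1, Ae2])

# The identity operator sends each basis vector to itself.
I = operator_from_basis_action(e1, e2)

v = 3.0 * e1 - 2.0 * e2
assert np.allclose(I @ v, v)  # Iv = v, as derived above
print(I @ v)                  # [ 3. -2.]
```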

Inner-products

The next concept we'll need to introduce is an inner-product. An inner product is an operation that takes two vectors and produces a scalar. We'll denote it with a dot (\cdot) like this:

\mathbf{v} \cdot \mathbf{w} = a

Note that we've reused the \cdot symbol that we use to multiply scalars together: a \cdot b. This is just convention and usually won't be confusing in context as we denote scalars and vectors differently.

Here are the rules that an inner-product must follow:

Linearity in the second argument

\mathbf{u} \cdot (a\mathbf{v} + b\mathbf{w}) = a(\mathbf{u} \cdot \mathbf{v}) + b(\mathbf{u} \cdot \mathbf{w})

This shows us that the inner product distributes over addition and scalar multiplication in its second argument.

Conjugate symmetry

\mathbf{v} \cdot \mathbf{w} = (\mathbf{w} \cdot \mathbf{v})^*

Here ^* denotes the complex conjugate. Of course, this is only relevant for vector spaces over the field of complex numbers. If we were working with real numbers we'd simply have \mathbf{v} \cdot \mathbf{w} = \mathbf{w} \cdot \mathbf{v}.

We specify this complex conjugate explicitly here since it will be useful later in the course.

Positive definiteness

\mathbf{v} \cdot \mathbf{v} \geq 0 \ \ \text{ and } \ \ \mathbf{v} \cdot \mathbf{v} = 0 \leftrightarrow \mathbf{v} = \mathbf{0}

This means that the inner-product of a vector with itself is always greater than or equal to 0 and is only equal to 0 if the vector is \mathbf{0}.

Remember that 0 is the additive identity of the field and \mathbf{0} is the additive identity of the vector space.
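
For complex vectors, NumPy's np.vdot computes an inner product with exactly these properties: it conjugates its first argument, so it is linear in the second argument, matching the convention above. Here's a quick illustrative check of the three rules with arbitrarily chosen complex vectors:

```python
import numpy as np

u = np.array([1 + 2j, 0 - 1j])
v = np.array([3 - 1j, 2 + 2j])
w = np.array([0 + 1j, 1 + 0j])
a, b = 2 - 1j, 0.5 + 3j

ip = np.vdot  # ip(u, v) = sum(conj(u) * v): conjugates the first argument

# Linearity in the second argument
assert np.isclose(ip(u, a * v + b * w), a * ip(u, v) + b * ip(u, w))

# Conjugate symmetry
assert np.isclose(ip(v, w), np.conj(ip(w, v)))

# Positive definiteness: real, non-negative, and zero only for the zero vector
assert ip(v, v).real > 0 and np.isclose(ip(v, v).imag, 0)
zero = np.zeros(2, dtype=complex)
assert np.isclose(ip(zero, zero), 0)

print("All three rules hold for these examples.")
```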


Some of you might be aware of something called the dot product, which is a specific inner-product on Euclidean vector spaces. But, as we've said before, we're more interested in the abstract properties of inner-products than in any specific representation here.

With this inner-product, we will now define two new concepts: orthogonality and normality.

We say two vectors \mathbf{v} and \mathbf{w} are orthogonal if \mathbf{v} \cdot \mathbf{w} = 0.

We say a single vector \mathbf{v} is normal if \mathbf{v} \cdot \mathbf{v} = 1. This is sometimes called a unit vector.

If a set of vectors are mutually orthogonal and each is normal, we describe them as orthonormal.
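
As a quick illustrative check, here are two vectors in the 2D Euclidean plane (deliberately different from the ones in Exercise 2 below) that are orthogonal to each other and each normal, using the ordinary dot product as the inner product:

```python
import numpy as np

v = np.array([1.0, 1.0]) / np.sqrt(2)
w = np.array([1.0, -1.0]) / np.sqrt(2)

print(np.dot(v, w))  # 0.0 -> v and w are orthogonal
print(np.dot(v, v))  # 1.0 (up to floating point) -> v is normal
print(np.dot(w, w))  # 1.0 (up to floating point) -> w is normal
```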

Something we will be particularly interested in is an orthonormal basis. That is, a set of basis vectors that is not only linearly independent and spanning, but also orthonormal:

\{\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n\} \subset V \\[3ex] \mathbf{e}_i \cdot \mathbf{e}_j = \begin{cases} 1 \ \ \text{ if } \ i = j \\ 0 \ \ \text{ if } \ i \neq j \end{cases}

Why is this useful? Well, consider taking the inner product of two vectors \mathbf{v} and \mathbf{w}. We can represent these vectors in terms of the orthonormal basis vectors: \mathbf{v} = v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + \ldots + v_n\mathbf{e}_n and \mathbf{w} = w_1\mathbf{e}_1 + w_2\mathbf{e}_2 + \ldots + w_n\mathbf{e}_n. To keep the algebra simple we'll take the scalars to be real here; for complex scalars, the coefficients coming from the first vector pick up complex conjugates (this is essentially Exercise 1 below). The inner product of these two vectors is then:

\mathbf{v} \cdot \mathbf{w} = (v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + \ldots + v_n\mathbf{e}_n) \cdot (w_1\mathbf{e}_1 + w_2\mathbf{e}_2 + \ldots + w_n\mathbf{e}_n) \\[3ex] = v_1w_1\mathbf{e}_1 \cdot \mathbf{e}_1 + v_2w_1 \mathbf{e}_2 \cdot \mathbf{e}_1 + \ldots + v_nw_1\mathbf{e}_n \cdot \mathbf{e}_1 + \\[3ex] \ldots + v_1w_n\mathbf{e}_1 \cdot \mathbf{e}_n + v_2w_n \mathbf{e}_2 \cdot \mathbf{e}_n + \ldots + v_nw_n\mathbf{e}_n \cdot \mathbf{e}_n

But from our definition earlier, we know that the inner product of any two basis vectors \mathbf{e}_i and \mathbf{e}_j is 0 unless i = j. So we can simplify:

= v_1w_1\mathbf{e}_1 \cdot \mathbf{e}_1 + v_2w_2\mathbf{e}_2 \cdot \mathbf{e}_2 + \ldots + v_nw_n\mathbf{e}_n \cdot \mathbf{e}_n

And since our basis vectors are normal, the inner product of each with itself is 1:

= v_1w_1 + v_2w_2 + \ldots + v_nw_n

So by representing our vectors in an orthonormal basis we found a nice way to calculate the inner product (over the complex numbers, the same reasoning together with Exercise 1 gives \mathbf{v} \cdot \mathbf{w} = v_1^*w_1 + v_2^*w_2 + \ldots + v_n^*w_n). This also shows that no matter which orthonormal basis we choose, we always get the same result, which tells us that the inner product is independent of the basis we chose.
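
Here's an illustrative numerical version of this calculation over the reals (using the standard Euclidean dot product): we expand the same two vectors in two different orthonormal bases and check that the component formula gives the same answer in both. The components are found by taking the inner product with each basis vector, which works precisely because the basis is orthonormal:

```python
import numpy as np

v = np.array([2.0, -1.0])
w = np.array([0.5, 3.0])

# Two different orthonormal bases of the plane:
# the standard basis, and the standard basis rotated by 45 degrees.
standard = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rotated = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([-1.0, 1.0]) / np.sqrt(2)]

def inner_product_via_components(v, w, basis):
    # v_i = e_i . v and w_i = e_i . w are the components in this basis;
    # then v . w = v_1*w_1 + ... + v_n*w_n, as derived above.
    vc = [np.dot(e, v) for e in basis]
    wc = [np.dot(e, w) for e in basis]
    return sum(vi * wi for vi, wi in zip(vc, wc))

print(inner_product_via_components(v, w, standard))  # -2.0
print(inner_product_via_components(v, w, rotated))   # -2.0 (same value, different basis)
print(np.dot(v, w))                                  # -2.0
```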

Exercises

Exercise 1

Show that the inner product is conjugate linear in the first argument. That is:

(a\mathbf{u} + b\mathbf{w}) \cdot \mathbf{v} = a^*(\mathbf{u} \cdot \mathbf{v}) + b^*(\mathbf{w} \cdot \mathbf{v})

Exercise 2

In the space of 2D Euclidean vectors, prove or disprove that the following vectors are:

  • Normal
  • Orthogonal
  • Linearly Independent
  • Spanning
\mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \mathbf{v}_2 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}

By saying "In the space of 2D Euclidian vectors" here, you may assume the following holds:

\begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix} = 1 \\[3ex] \begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix} = 0 \\[3ex] \begin{bmatrix} 0 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix} = 1
> Next Article (Bra-Ket Notation)