
Vector Spaces

The next stop on our tour of abstract mathematics is vector spaces.

You may have heard of or used "vectors" before and, if so, your intuition will definitely be helpful here. However, just as before, we're going to be looking at things in a more abstract way.

A vector space is defined together with a field. Formally, a vector space over a field $F$ is a set $V$. We will call the elements of $F$ scalars and the elements of $V$ vectors. In mathematical notation, vectors will be represented as bold lowercase letters (e.g. $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$) and scalars as italic lowercase letters (e.g. $a, b, c \in F$).

We have two operations that we can perform on the elements of our vector space. The first is vector addition ($+$), which takes two vectors and returns another vector. The second is scalar multiplication, which takes a scalar and a vector and returns another vector. We won't use a specific symbol for scalar multiplication, but will write it as the scalar on the left-hand side of the vector: $a\mathbf{v}$.

Once more, we're going to have a bunch of rules for how these operations work. They're not particularly difficult and have many similarities to the previous rules we've seen for groups and fields.

Associativity

\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w}

Commutativity

\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}

Identity

\exists \mathbf{0}, \quad \mathbf{v} + \mathbf{0} = \mathbf{0} + \mathbf{v} = \mathbf{v}

Inverse

\exists {-\mathbf{v}}, \quad \mathbf{v} + (-\mathbf{v}) = (-\mathbf{v}) + \mathbf{v} = \mathbf{0}

Scalar Multiplication

a(b\mathbf{v}) = (a \cdot b)\mathbf{v}

This is the first rule that requires us to use something from the field $F$. It tells us that scalar multiplication uses the multiplication operation from the field.

Scalar Multiplication Identity

1\mathbf{v} = \mathbf{v}

This rule tells us that multiplying a vector by the multiplicative identity of the field leaves the vector unchanged. Note that we didn't have to write $\exists 1$ because the existence of this multiplicative identity already comes from the field.

Distributivity Over Vector Addition

a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}

Distributivity Over Scalar Addition

(a + b)\mathbf{v} = a\mathbf{v} + b\mathbf{v}

This rule shows us how the addition operation from the field $F$ interacts with the vector addition operation.


Great, so we've got some slightly different objects and operations that behave a bit differently from what we've seen before, but fundamentally, this is very similar to what we did with groups and fields.

Again, we can use our rules to prove some basic properties. For example, just as we did with fields, we can show that multiplying any scalar by the zero vector gives back the zero vector:

a(\mathbf{v} + \mathbf{0}) = a\mathbf{v} + a\mathbf{0} \\[3ex]
a\mathbf{v} = a\mathbf{v} + a\mathbf{0} \\[3ex]
-(a\mathbf{v}) + a\mathbf{v} = -(a\mathbf{v}) + a\mathbf{v} + a\mathbf{0} \\[3ex]
\mathbf{0} = \mathbf{0} + a\mathbf{0} \\[3ex]
\mathbf{0} = a\mathbf{0}

Euclidean Vectors

The most common example of a vector space is the set of Euclidean vectors that you're probably familiar with. In this case, the field is the real numbers $\mathbb{R}$ and the vectors are points in space represented by a column of real numbers. For example:

\begin{bmatrix} x \\ y \end{bmatrix}

We will define our vector addition and scalar multiplication as follows:

\begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} x + u \\ y + v \end{bmatrix} \\[3ex]
a\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a \cdot x \\ a \cdot y \end{bmatrix}

We can then identify the zero vector and the additive inverse:

\mathbf{0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\[3ex]
\begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} -x \\ -y \end{bmatrix} = \mathbf{0}

We can see that this satisfies the rules we defined above, since vector addition boils down to real-number addition, which is associative and commutative. The same is true for scalar multiplication, since that involves real-number multiplication.
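If you like to check such things programmatically, here is a minimal Python sketch (using plain tuples and no external libraries) that implements the vector addition and scalar multiplication defined above and spot-checks several of the axioms on example values. The function names are purely illustrative.

```python
# A minimal sketch of the 2D real vector space defined above.
# Vectors are plain (x, y) tuples; scalars are Python floats.

def vec_add(v, w):
    """Vector addition: add component-wise."""
    return (v[0] + w[0], v[1] + w[1])

def scalar_mul(a, v):
    """Scalar multiplication: scale each component."""
    return (a * v[0], a * v[1])

ZERO = (0.0, 0.0)

u, v, w = (1.0, 2.0), (3.0, -4.0), (-0.5, 5.0)
a, b = 2.0, -3.0

# Associativity and commutativity of vector addition
assert vec_add(u, vec_add(v, w)) == vec_add(vec_add(u, v), w)
assert vec_add(u, v) == vec_add(v, u)

# Identity and inverse
assert vec_add(v, ZERO) == v
assert vec_add(v, scalar_mul(-1.0, v)) == ZERO

# Compatibility with field multiplication, and both distributivity rules
assert scalar_mul(a, scalar_mul(b, v)) == scalar_mul(a * b, v)
assert scalar_mul(a, vec_add(u, v)) == vec_add(scalar_mul(a, u), scalar_mul(a, v))
assert scalar_mul(a + b, v) == vec_add(scalar_mul(a, v), scalar_mul(b, v))

print("All sampled vector space axioms hold.")
```

Of course, a handful of numeric checks is not a proof; it only illustrates how the rules play out on concrete vectors.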

Basis

Given our vector space $V$, we can select out any number of vectors and form a linear combination of them. A linear combination is a sum of scalar multiples of vectors. For example, given vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$, a linear combination of these vectors is:

a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \ldots + a_n\mathbf{v}_n
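For example, in the Euclidean vector space from earlier, one concrete linear combination (with arbitrarily chosen numbers) is:

2\begin{bmatrix} 1 \\ 2 \end{bmatrix} + 3\begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}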

Another concept we'll introduce here is linear independence. Two vectors $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent if neither can be written as a scalar multiple of the other. In other words, if $\mathbf{v}_1 = a\mathbf{v}_2$ for some value of $a$, then the vectors are not linearly independent; they are linearly dependent.

We will be especially interested in sets of linearly independent vectors. These are sets in which no vector can be written as a linear combination of the others. More formally:

\forall i, \quad \mathbf{v}_i \neq \sum_{j \neq i} a_j\mathbf{v}_j

for any given set of scalars $a_j$.

Once we have a set of linearly independent vectors, if they also span the entire vector space then we say that they form a basis. "Span" here means that any vector $\mathbf{v} \in V$ from the vector space can be written as a linear combination of the vectors from our set. For example, if our set of basis vectors is $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ then:

\mathbf{v} = a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \ldots + a_n\mathbf{v}_n

for some scalars $a_i$.

Quite surprisingly, with these two conditions on our basis (linear independence and spanning) we will always get the same number of basis vectors for any given vector space. This number is called the dimension of the vector space.
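To make linear independence something you can experiment with, here is a short sketch assuming NumPy is available (the helper name is illustrative). It relies on a standard linear-algebra fact we haven't covered yet: vectors stacked as the columns of a matrix are linearly independent exactly when the matrix's rank equals the number of vectors.

```python
# A small sketch: checking linear independence with NumPy.
import numpy as np

def linearly_independent(*vectors):
    # Stack the vectors as columns; independence <=> full column rank.
    matrix = np.column_stack(vectors)
    return np.linalg.matrix_rank(matrix) == len(vectors)

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

print(linearly_independent(e1, e2))      # True: these two vectors are independent
print(linearly_independent(e1, 3 * e1))  # False: one is a scalar multiple of the other
```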

In our example of Euclidean vectors from earlier, one set of basis vectors could be:

\mathbf{v_1} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \mathbf{v_2} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}

It is quite clear that these two vectors are linearly independent, since any scalar multiple of $\mathbf{v_1}$ will always leave the bottom number in the column as 0 and any scalar multiple of $\mathbf{v_2}$ will always leave the top number in the column as 0:

a\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} a \\ 0 \end{bmatrix} \\[3ex]
b\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix}

so we could never represent $\mathbf{v_1}$ in terms of $\mathbf{v_2}$.

It is also clear that these two vectors span the vector space since any vector can be written as a linear combination like this:

\begin{bmatrix} a \\ b \end{bmatrix} = a\mathbf{v_1} + b\mathbf{v_2}

Since we have two basis vectors, this tells us that the dimension of our vector space is 2. We also mentioned earlier that any valid basis will always have the same number of vectors. Take a second to convince yourself of this fact. Try adding an extra vector to our basis and see if it still satisfies the conditions of linear independence.
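If you want to try the extra-vector experiment numerically, the same rank idea from the earlier sketch (again assuming NumPy, with an illustrative extra vector) shows that appending any third vector to our two-dimensional basis breaks linear independence:

```python
# Sketch: any three vectors in a 2-dimensional space are linearly dependent.
import numpy as np

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
extra = np.array([2.0, -7.0])   # try any vector you like here

matrix = np.column_stack(basis + [extra])
rank = np.linalg.matrix_rank(matrix)

# The rank can never exceed the number of rows (2), so with three
# columns the vectors cannot all be linearly independent.
print(rank, "< 3, so the extended set is linearly dependent")
```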

Note that, although the number of vectors in our basis is always constant, the vectors themselves can be different. For example, here is an alternate basis we could have chosen:

\mathbf{v_1} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \mathbf{v_2} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}

We can see these vectors are linearly independent: the two entries of $\mathbf{v_2}$ have opposite signs while the two entries of $\mathbf{v_1}$ are equal, so no scalar multiple of one can ever equal the other.

We can also see that they are spanning:

\begin{bmatrix} a \\ b \end{bmatrix} = \frac{1}{2}(a+b)\mathbf{v_1} + \frac{1}{2}(a-b)\mathbf{v_2}

This is because:

\frac{1}{2}(a+b)\mathbf{v_1} + \frac{1}{2}(a-b)\mathbf{v_2} = \frac{1}{2}(a+b)\begin{bmatrix} 1 \\ 1 \end{bmatrix} + \frac{1}{2}(a-b)\begin{bmatrix} 1 \\ -1 \end{bmatrix} \\[3ex]
= \frac{1}{2}\Bigl( \begin{bmatrix} a+b \\ a+b \end{bmatrix} + \begin{bmatrix} a-b \\ b-a \end{bmatrix} \Bigr) \\[3ex]
= \frac{1}{2}\begin{bmatrix} a+b+a-b \\ a+b+b-a \end{bmatrix} \\[3ex]
= \frac{1}{2}\begin{bmatrix} 2a \\ 2b \end{bmatrix} \\[3ex]
= \begin{bmatrix} a \\ b \end{bmatrix}
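As a numerical cross-check (a sketch assuming NumPy, with illustrative numbers), we can recover these coefficients by solving the linear system whose columns are the basis vectors:

```python
# Sketch: finding the coordinates of a vector in the alternate basis.
import numpy as np

v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])
target = np.array([5.0, 3.0])    # the vector [a, b] with a = 5, b = 3

# Solve c1*v1 + c2*v2 = target for the coefficients (c1, c2).
coeffs = np.linalg.solve(np.column_stack([v1, v2]), target)
print(coeffs)                    # [4. 1.]
```

For $a = 5$, $b = 3$ the formula gives $\frac{1}{2}(a+b) = 4$ and $\frac{1}{2}(a-b) = 1$, matching the solver's output.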

This concludes our study of vector spaces for now. This is the first point in this chapter that we've touched on a concept that we will directly use in quantum computing. We'll be using vectors to represent the states of our quantum bits. But in order to perform interesting operations on these quantum bits, we'll need the language of linear algebra. More on that in the next article!

Exercises

Exercise 1

A matrix is similar to a column vector except that, instead of having a single column of numbers, it has multiple columns forming a grid. For example, in the case where we have 2 rows and 2 columns:

\begin{bmatrix} a & b \\ c & d \end{bmatrix}

Prove that this set of 2x2 matrices with real entries, together with real scalars, forms a vector space.

In order to do this you will first have to define what vector addition and scalar multiplication mean here, i.e. what it means to add two matrices together and to multiply a matrix by a scalar.

Once these definitions have been made, you will have to prove that your operations fulfill the rules stated above.


Exercise 2

For the 2x2 matrix vector space defined in the previous exercise, find its dimension.

Remember, the dimension of a vector space is the size of any set of linearly independent and spanning vectors.

In order to answer this question you must first propose this set of vectors, prove they are linearly independent, prove they are spanning, and then present the size of the set.


Exercise 3

How does the dimensionality of our matrix vector space change if we only allow symmetric matrices?

Symmetric matrices are matrices that are unchanged when reflected across the diagonal, i.e. the top-right number is the same as the bottom-left number:

\begin{bmatrix} a & b \\ b & c \end{bmatrix}