The next stop on our tour of abstract mathematics is vector spaces.
You may have heard of or used "vectors" before and, if so, your intuition will definitely be helpful here. However, just as before, we're going to be looking at things in a more abstract way.
A vector space is defined together with a field. We say formally that a vector space $V$ over a field $F$ is a set $V$. The elements of $F$ we will call scalars and the elements of $V$ we will call vectors. In mathematical notation, vectors will be represented as bold lowercase letters (e.g. $\mathbf{v}$) and scalars will be represented as italic lowercase letters (e.g. $a$).
We have two operations that we can perform on the elements of our vector space. The first is vector addition ($+$), which takes two vectors and returns another vector. The second is scalar multiplication, which takes a scalar and a vector and returns another vector. We won't use a specific symbol for scalar multiplication, but will write it as the scalar on the left-hand side of the vector: $a\mathbf{v}$.
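If it helps to think in code, the two operations can be phrased as an abstract interface. Here's a minimal Python sketch (the names `VectorSpace`, `vadd` and `smul` are just illustrative choices, not standard notation):

```python
from abc import ABC, abstractmethod

# A minimal sketch of the two vector space operations as an abstract
# interface. The names here are illustrative, not standard library calls.
class VectorSpace(ABC):
    @abstractmethod
    def vadd(self, u, v):
        """Vector addition: takes two vectors, returns another vector."""

    @abstractmethod
    def smul(self, a, v):
        """Scalar multiplication: takes a scalar and a vector,
        returns another vector."""
```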
Once more, we're going to have a bunch of rules for how these operations work. They're not particularly difficult and have many similarities to the previous rules we've seen for groups and fields.

First, vector addition must make $V$ an abelian group: it is associative and commutative, there is a zero vector $\mathbf{0}$ with $\mathbf{v} + \mathbf{0} = \mathbf{v}$, and every vector $\mathbf{v}$ has an additive inverse $-\mathbf{v}$.

$$a(b\mathbf{v}) = (ab)\mathbf{v}$$

This is the first rule that requires us to use something from the field $F$. It tells us that scalar multiplication uses the multiplication operation from the field.

$$1\mathbf{v} = \mathbf{v}$$

This rule tells us that multiplying a vector by the multiplicative identity of the field leaves the vector unchanged. Note that we didn't have to write $\exists 1 \in F$ because the existence of this multiplicative identity already comes from the field.

$$(a + b)\mathbf{v} = a\mathbf{v} + b\mathbf{v}$$

This rule shows us how the addition operation from the field behaves with the vector addition operation. Similarly, scalar multiplication distributes over vector addition: $a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}$.
Great, so we've got some slightly different objects and operations that behave a bit differently from what we've seen before, but fundamentally, this is all very similar to groups and fields.
Again, we can use our rules to prove some basic properties. For example, just like we did with fields, we can show that multiplying any vector by the scalar $0$ gives the zero vector:

$$0\mathbf{v} = (0 + 0)\mathbf{v} = 0\mathbf{v} + 0\mathbf{v} \implies 0\mathbf{v} = \mathbf{0}$$

where the last step follows from adding the additive inverse $-(0\mathbf{v})$ to both sides.
The most common example of a vector space is the set of Euclidean vectors that you're probably familiar with. In this case, the field is the real numbers $\mathbb{R}$ and the vectors are the points in space, represented by a column of real numbers. For example, in two dimensions:

$$\mathbf{v} = \begin{pmatrix} x \\ y \end{pmatrix}, \quad x, y \in \mathbb{R}$$
We will define our vector addition and scalar multiplication as follows:

$$\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 \\ y_1 + y_2 \end{pmatrix}, \quad a\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax \\ ay \end{pmatrix}$$
We can then identify the zero vector and the additive inverse:

$$\mathbf{0} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad -\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -x \\ -y \end{pmatrix}$$
We can see that this fits the rules we defined above, since vector addition boils down to real number addition, which is by definition associative and commutative. The same is true for scalar multiplication, since that involves real number multiplication.
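If you'd like to check the rules concretely, here's a small Python sketch (using NumPy as one convenient choice of numeric library) that spot-checks a few of them on example vectors:

```python
import numpy as np

# Represent 2D Euclidean vectors as NumPy arrays; vector addition and
# scalar multiplication are then the built-in elementwise operations.
u = np.array([1.0, 2.0])
v = np.array([3.0, -4.0])
a, b = 2.5, -1.5

# Commutativity of vector addition: u + v == v + u
assert np.allclose(u + v, v + u)

# Compatibility with field multiplication: a(bv) == (ab)v
assert np.allclose(a * (b * v), (a * b) * v)

# Distributivity of field addition: (a + b)v == av + bv
assert np.allclose((a + b) * v, a * v + b * v)

# Multiplicative identity: 1v == v
assert np.allclose(1.0 * v, v)

print("All checked rules hold for this example.")
```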
Given our vector space $V$, we can select out any number of vectors and form a linear combination of them. A linear combination is a sum of scalar multiples of vectors. For example, given vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$, a linear combination of these vectors is:

$$a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \dots + a_n\mathbf{v}_n$$
Another concept we'll introduce here is linear independence. Two vectors $\mathbf{u}$ and $\mathbf{v}$ are linearly independent if neither can be written as a scalar multiple of the other. In other words, if $\mathbf{u} = a\mathbf{v}$ for some value of $a$, then the vectors are not linearly independent, i.e. they are linearly dependent.
We will be especially interested in sets of linearly independent vectors. These are sets where no vector from the set can be written as a linear combination of the others. More formally, vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$ are linearly independent if:

$$a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \dots + a_n\mathbf{v}_n \neq \mathbf{0}$$

for any given set of scalars $a_1, a_2, \dots, a_n$ that are not all zero.
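In code, one convenient way to test linear independence of a set of column vectors is to stack them into a matrix and check its rank. Here's a small sketch, assuming NumPy; the helper name `linearly_independent` is our own:

```python
import numpy as np

def linearly_independent(vectors):
    """Return True if the given list of 1D arrays is linearly independent.

    Stacks the vectors as columns of a matrix; they are independent
    exactly when the matrix rank equals the number of vectors.
    """
    matrix = np.column_stack(vectors)
    return np.linalg.matrix_rank(matrix) == len(vectors)

print(linearly_independent([np.array([1, 0]), np.array([0, 1])]))  # True
print(linearly_independent([np.array([1, 2]), np.array([2, 4])]))  # False: (2,4) = 2*(1,2)
```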
Once we have a set of linearly independent vectors, if they span the entire vector space then we say that they form a basis. "Span" here means that any vector $\mathbf{v}$ from the vector space can be written as a linear combination of the vectors from our set. For example, if our set of basis vectors is $\{\mathbf{b}_1, \mathbf{b}_2, \dots, \mathbf{b}_n\}$ then:

$$\mathbf{v} = a_1\mathbf{b}_1 + a_2\mathbf{b}_2 + \dots + a_n\mathbf{b}_n$$

for some scalars $a_1, a_2, \dots, a_n$.
Quite surprisingly, with these two conditions on our basis (linear independence and spanning) we will always get the same number of basis vectors for any given vector space. This number is called the dimension of the vector space.
In our example of Euclidean vectors from earlier, one set of basis vectors could be:

$$\mathbf{b}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \mathbf{b}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
It is quite clear that these two vectors are linearly independent, since any scalar multiple of $\mathbf{b}_1$ will always leave the bottom number in the column as 0 and any scalar multiple of $\mathbf{b}_2$ will always leave the top number in the column as 0:

$$a\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} a \\ 0 \end{pmatrix}, \quad a\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ a \end{pmatrix}$$

so we could never represent $\mathbf{b}_1$ in terms of $\mathbf{b}_2$ (or vice versa).
It is also clear that these two vectors span the vector space, since any vector can be written as a linear combination like this:

$$\begin{pmatrix} x \\ y \end{pmatrix} = x\begin{pmatrix} 1 \\ 0 \end{pmatrix} + y\begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
Since we have two basis vectors, this tells us that the dimension of our vector space is 2. We also mentioned earlier that any valid basis will always have the same number of vectors. Take a second to convince yourself of this fact. Try adding an extra vector to our basis and see if it still satisfies the conditions of linear independence.
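Here's a quick numerical version of that experiment, reusing the rank-based check from the earlier sketch; any third vector we add to our two basis vectors breaks linear independence:

```python
import numpy as np

# Reusing the rank-based independence check from the earlier sketch.
def linearly_independent(vectors):
    matrix = np.column_stack(vectors)
    return np.linalg.matrix_rank(matrix) == len(vectors)

b1 = np.array([1, 0])
b2 = np.array([0, 1])
extra = np.array([3, 7])  # any 2D vector will do

print(linearly_independent([b1, b2]))         # True
print(linearly_independent([b1, b2, extra]))  # False: extra = 3*b1 + 7*b2
```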
Note that, although the number of vectors in our basis is always constant, the vectors themselves can be different. For example, here is an alternate basis we could have chosen:

$$\mathbf{b}'_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \quad \mathbf{b}'_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$

We can see these vectors are linearly independent, as the values in the column have a different sign in $\mathbf{b}'_1$ and the same sign in $\mathbf{b}'_2$, so no scalar multiple of one can ever equal the other.
We can also see that they are spanning:

$$\begin{pmatrix} x \\ y \end{pmatrix} = a_1\begin{pmatrix} 1 \\ -1 \end{pmatrix} + a_2\begin{pmatrix} 1 \\ 1 \end{pmatrix}$$

This is because:

$$a_1 = \frac{x - y}{2}, \qquad a_2 = \frac{x + y}{2}$$
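We can double-check this decomposition numerically by solving a small linear system, here with NumPy:

```python
import numpy as np

# Columns of B are the alternate basis vectors (1, -1) and (1, 1).
B = np.array([[1, 1],
              [-1, 1]])
v = np.array([5.0, 3.0])  # an arbitrary vector to decompose

# Solve B @ a = v for the coefficients a = (a1, a2).
a = np.linalg.solve(B, v)
print(a)  # [1. 4.]  i.e. a1 = (5-3)/2 = 1, a2 = (5+3)/2 = 4
assert np.allclose(B @ a, v)
```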
This concludes our study of vector spaces for now. This is the first point in this chapter that we've touched on a concept that we will directly use in quantum computing. We'll be using vectors to represent the states of our quantum bits. But in order to perform interesting operations on these quantum bits, we'll need the language of linear algebra. More on that in the next article!
A matrix is similar to a column vector except, instead of having a single column of numbers, it has multiple columns forming a grid. For example, in the case where we have 2 rows and 2 columns:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
Prove that this set of 2x2 matrices, where the values are real, coupled with real scalars, forms a vector space.

In order to do this you will first have to define what vector addition and scalar multiplication mean, i.e. what it means to add two matrices together and multiply a matrix by a scalar.
Once these definitions have been made, you will have to prove that your operations fulfill the rules stated above.
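As a starting point (this is a sketch of the natural entrywise definitions, not the full proof), with NumPy arrays as one possible matrix representation:

```python
import numpy as np

# Entrywise definitions: matrix addition adds corresponding entries,
# scalar multiplication scales every entry. With NumPy arrays these
# are the built-in + and * operators.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
N = np.array([[0.5, -1.0],
              [2.0, 0.0]])

print(M + N)    # matrix addition, entry by entry
print(3.0 * M)  # scalar multiplication, every entry scaled by 3

# A couple of the vector space rules, checked on this example:
assert np.allclose(M + N, N + M)              # commutativity
assert np.allclose(2.0 * (3.0 * M), 6.0 * M)  # a(bM) = (ab)M
```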
For the 2x2 matrix vector space defined in the previous exercise, find its dimension.
Remember, the dimension of a vector space is the size of any set of linearly independent and spanning vectors.

In order to answer this question you must first propose such a set of vectors, prove they are linearly independent, prove they are spanning, and then present the size of the set.
How does the dimensionality of our matrix vector space change if we only allow symmetric matrices?
Symmetric matrices are matrices that reflect across the diagonal, i.e. the top-right number is the same as the bottom-left number:

$$\begin{pmatrix} a & b \\ b & c \end{pmatrix}$$