You can conceptualize a vector geometrically as something having magnitude and direction, for instance a vector with x and y components such as

$$\begin{pmatrix} 3 \\ 5 \end{pmatrix},$$

or more generally

$$\begin{pmatrix} x \\ y \end{pmatrix}$$
Within quantum computing, we often work with state vectors: vectors that represent a specific point in space corresponding to a particular state of a quantum system, which can be visualized via a Bloch sphere:
This particular state vector corresponds to an even superposition of ∣0⟩ and ∣1⟩.
Vectors can rotate (around the center) to any point on the surface of the sphere, and each point represents a unique state of some quantum system.
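To make this concrete, here is a minimal numpy sketch (not part of the original text) that maps a normalized single-qubit state a∣0⟩ + b∣1⟩ to its Bloch sphere angles; the helper name `bloch_angles` is hypothetical:

```python
import numpy as np

def bloch_angles(state):
    """Map a normalized state a|0> + b|1> to Bloch sphere angles (theta, phi)."""
    a, b = state
    theta = 2 * np.arccos(np.abs(a))  # polar angle: 0 at |0>, pi at |1>
    phi = np.angle(b) - np.angle(a)   # relative phase: the azimuthal angle
    return theta, phi

# The even superposition (|0> + |1>) / sqrt(2)
plus = np.array([1, 1]) / np.sqrt(2)
print(bloch_angles(plus))  # (pi/2, 0.0): a point on the sphere's equator
```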
In order to define vectors completely, we must also define the vector space:
A vector space V over a field F is a set of vectors for which two conditions hold:
Vector addition of two vectors ∣a⟩, ∣b⟩ ∈ V will yield a third vector ∣a⟩ + ∣b⟩ = ∣c⟩, which is also in V.
Scalar multiplication between ∣a⟩∈V and some n∈F, written n∣a⟩, is also contained in V.
A field, in this context, can be thought of as some set of numbers with the property that if a, b ∈ F, then a and b can be added, subtracted, multiplied, and divided (excluding division by zero), with each result still contained in F.
To clarify this, we will work through an example demonstrating that the set $\mathbb{R}^2$, over the field $\mathbb{R}$, is a vector space, meaning:
$$\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 \\ y_1 + y_2 \end{pmatrix}$$

is also contained in $\mathbb{R}^2$.
This is true because the sum of two real numbers is also a real number, meaning both components of the new vector are real numbers, and so the vector must reside in $\mathbb{R}^2$.
Sidenote: the field for $\mathbb{R}^2$ is written as $\mathbb{R}$ because the scalars are single real numbers, even though the vectors themselves live in the plane $\mathbb{R}^2$.
We also state that:
$$n \lvert v \rangle = \begin{pmatrix} nx \\ ny \end{pmatrix} \in V, \quad \forall\, n \in \mathbb{R}$$
Scaling a vector.
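As a minimal illustrative sketch (assuming numpy, which the text does not mention), we can check both closure conditions for $\mathbb{R}^2$ numerically:

```python
import numpy as np

a = np.array([3.0, 5.0])   # |a> in R^2
b = np.array([1.0, -2.0])  # |b> in R^2
n = 4.0                    # a scalar from the field R

# Vector addition: the sum has real components, so it is still in R^2
print(a + b)    # [ 4.  3.]

# Scalar multiplication: n|a> = (nx, ny), also still in R^2
print(n * a)    # [12. 20.]
```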
Matrices and Matrix Operations
Another fundamental structure is that of the matrix, which can transform vectors into other vectors: ∣v⟩→∣v′⟩=M∣v⟩.
Matrices can be represented as arrays of numbers:
$$M = \begin{pmatrix} 1 & -2 & 3 \\ 1 & 5i & 0 \\ 1+i & 7 & -4 \end{pmatrix}$$
A matrix can be applied to a vector via matrix multiplication.
Generally, we take the first row of the first matrix and multiply each of its elements by the corresponding element in the first column of the second matrix. The sum of these products becomes the first element of the first row of the new matrix. To fill the rest of that row, we multiply the first row of the first matrix by each of the remaining columns of the second matrix in the same way.
Then, we take the second row of the first matrix and repeat the process for each column of the second matrix, continuing until we have used every row of the first matrix.
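To make the row-by-column procedure concrete, here is a short numpy sketch (illustrative only), reusing the matrix M from above:

```python
import numpy as np

# The matrix M from above, with complex entries
M = np.array([[1,      -2,  3],
              [1,      5j,  0],
              [1 + 1j,  7, -4]])

# A column vector is a matrix with a single column
v = np.array([1, 0, 2])

# Entry i of the product is the sum over k of M[i, k] * v[k],
# exactly the row-by-column procedure described above
print(M @ v)          # [ 7.+0.j  1.+0.j -7.+1.j]

# The same rule applies to matrix-matrix products
print(M @ np.eye(3))  # multiplying by the identity returns M
```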
In order to carry out quantum computations, we take a quantum state vector and manipulate it by applying a matrix to it. A vector is simply a matrix that has only one column, so the multiplication follows the procedure above. In practice, we manipulate qubits by applying sequences of quantum gates, each of which can be expressed as a matrix that alters the state of the qubits, such as the Pauli-X gate:
$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
The Pauli-X gate acts on single qubits, and is the quantum equivalent of the NOT gate in classical computation.
It flips the computational basis states, which are written as column vectors:

$$\lvert 0 \rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \lvert 1 \rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad \sigma_x \lvert 0 \rangle = \lvert 1 \rangle, \quad \sigma_x \lvert 1 \rangle = \lvert 0 \rangle$$
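A quick numpy check of this flip (a sketch, not from the original text):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])     # Pauli-X matrix

ket0 = np.array([1, 0])    # |0>
ket1 = np.array([0, 1])    # |1>

print(X @ ket0)  # [0 1], i.e. |1>
print(X @ ket1)  # [1 0], i.e. |0>
```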
Within quantum computation, we tend to encounter two types of matrices: Hermitian and unitary matrices. The former is more closely tied to quantum mechanics in general, but still has some relevance here, whereas the latter is the bread and butter of quantum computation.
A Hermitian matrix is a matrix that is equal to its conjugate transpose, denoted †. This means that if you flip the signs of the imaginary components of the matrix's entries and then reflect the entries over the top-left diagonal, the resulting matrix will be the one you started with: $M = M^\dagger$.
A unitary matrix, in turn, is a matrix whose conjugate transpose is its inverse: $U^\dagger U = U U^\dagger = I$. The basic idea is that evolving a quantum state via the application of a unitary matrix preserves the magnitude of the quantum state.
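As a sketch (assuming numpy), we can check both properties numerically; the Hadamard matrix used here happens to be both Hermitian and unitary:

```python
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)       # Hadamard matrix

dagger = H.conj().T                        # conjugate transpose (dagger)

print(np.allclose(H, dagger))              # True: Hermitian, H == H†
print(np.allclose(dagger @ H, np.eye(2)))  # True: unitary, H†H == I
```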
Spanning Sets, Linear Dependence and Bases
Here, we will take a look at how vector spaces are constructed. Consider some vector space V. We say that some set S of vectors spans a subset of the vector space (a subspace), expressed as $V_S \subset V$, if we can write any vector in the subspace as a linear combination of the vectors in S.
This strict subset notation indicates that there are elements of V that are not within $V_S$.
A linear combination of some set of vectors in a space over a field F is defined as a sum of those vectors, each scaled by an element of F, which again yields a vector:

$$\lvert v \rangle = f_1 \lvert v_1 \rangle + f_2 \lvert v_2 \rangle + \cdots + f_n \lvert v_n \rangle = \sum_i f_i \lvert v_i \rangle,$$

where each $f_i$ is some element of F. If we have a set of vectors spanning some space, then any other vector in that space can be described in terms of the spanning vectors.
A set of vectors $\lvert v_1 \rangle, \ldots, \lvert v_n \rangle$ is linearly dependent if there exist corresponding coefficients $b_i \in F$ such that:

$$b_1 \lvert v_1 \rangle + b_2 \lvert v_2 \rangle + \cdots + b_n \lvert v_n \rangle = \sum_i b_i \lvert v_i \rangle = 0,$$

where at least one of the $b_i$ coefficients is non-zero. To see why this is equivalent to one vector being expressible in terms of the others, suppose we have $\lvert v_1 \rangle, \ldots, \lvert v_n \rangle$ and coefficients $b_1, \ldots, b_n$ such that the linear combination is 0.
Since there is a vector with a non-zero coefficient, we choose that term, $b_a \lvert v_a \rangle$, and solve for $\lvert v_a \rangle$:

$$\lvert v_a \rangle = -\frac{1}{b_a} \sum_{i \neq a} b_i \lvert v_i \rangle$$
If $b_a$ is the only non-zero coefficient, then $\lvert v_a \rangle$ must be the null vector, which immediately makes the set linearly dependent. Otherwise, $\lvert v_a \rangle$ has been written as a linear combination of the vectors with non-zero coefficients, as above. To prove the converse, we assume that some vector $\lvert v_a \rangle$ in the set $\lvert v_1 \rangle, \ldots, \lvert v_n \rangle$ can be written as a linear combination of other vectors:
$$\lvert v_a \rangle = \sum_s b_s \lvert v_s \rangle,$$

where s is an index that runs over a subset of the set (the vectors with non-zero coefficients).
From this we get:

$$\lvert v_a \rangle - \sum_s b_s \lvert v_s \rangle = \lvert v_a \rangle - (b_1 \lvert v_{s_1} \rangle + \cdots + b_r \lvert v_{s_r} \rangle) = 0$$
Essentially, for all vectors that are not included in the subset, we set their coefficients, indexed by q, to 0, thus:

$$\lvert v_a \rangle - (b_1 \lvert v_{s_1} \rangle + \cdots + b_r \lvert v_{s_r} \rangle) + 0\,(\lvert v_{q_1} \rangle + \cdots + \lvert v_{q_t} \rangle) = 0,$$

which is a linear combination of all of the elements of the set, so the set is linearly dependent. For example, if we have vectors $v_1, v_2, v_3$ with respective components

$$\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, \quad \begin{pmatrix} 3 \\ 5 \\ 1 \end{pmatrix}, \quad \begin{pmatrix} 0 \\ 0 \\ 8 \end{pmatrix},$$

we can write the vector $\lvert a \rangle$ below as a combination of $v_1, v_2, v_3$ with the unused coefficients set to zero:

$$\begin{pmatrix} 3 \\ 6 \\ 9 \end{pmatrix} = (3)\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + (0)\begin{pmatrix} 3 \\ 5 \\ 1 \end{pmatrix} + (0)\begin{pmatrix} 0 \\ 0 \\ 8 \end{pmatrix}$$
We can consider a basic example of this with a set of two vectors in $\mathbb{R}^2$, $\lvert a \rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\lvert b \rangle = \begin{pmatrix} 2 \\ 0 \end{pmatrix}$. If we choose the field over our vector space to be $\mathbb{R}$, we can create a vanishing linear combination of these vectors, $2\lvert a \rangle - \lvert b \rangle = 0$, so the set is linearly dependent.
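We can verify this numerically; here is a hedged numpy sketch using the matrix rank (if the rank is smaller than the number of vectors, the set is linearly dependent):

```python
import numpy as np

a = np.array([1, 0])
b = np.array([2, 0])

# 2|a> - |b> = 0: the combination vanishes with non-zero coefficients
print(2 * a - b)  # [0 0]

# Equivalently, stacking the vectors as rows gives rank 1 < 2 vectors
print(np.linalg.matrix_rank(np.vstack([a, b])))  # 1 -> linearly dependent
```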
A basis is a set of vectors that is a linearly independent spanning set. In this context, the basis of some vector space is the minimal possible set that spans the entire space, and the cardinality of the basis set is called the dimension of the vector space.
Bases and spanning sets are important because they allow us to shrink vector spaces down and speak about them as combinations of a smaller number of vectors, from which we can draw conclusions that generalize to the entire space, because every vector in the space is just a linear combination of basis vectors (and some scalars).
We often encounter the basis states ∣0⟩ and ∣1⟩, which can be used to describe any other qubit state via linear combinations, for example $\frac{1}{\sqrt{2}}(\lvert 0 \rangle + \lvert 1 \rangle)$, representing an even superposition of the two possible states.
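A minimal sketch of this decomposition (assuming numpy; not part of the original text):

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# The even superposition, built as a linear combination of basis vectors
psi = (ket0 + ket1) / np.sqrt(2)
print(psi)                     # [0.70710678 0.70710678]

# Any qubit state is a|0> + b|1>; each coefficient is recovered by
# projecting the state onto the corresponding basis vector
print(ket0 @ psi, ket1 @ psi)  # 0.7071... 0.7071...
```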
Hilbert Spaces, Orthonormality, Inner Products
One of the premier mathematical structures of quantum mechanics is the Hilbert space. Here, we can think of a Hilbert space as the state space in which all vectors representing quantum states exist. The difference between these spaces and typical vector spaces is that a Hilbert space comes equipped with an inner product, an operation that can be performed on two vectors and that returns a scalar.
The scalar quantity returned from this operation represents the degree to which the first vector lies along the second vector. From this value, the probabilities of measurement in different quantum states can be calculated.
For two vectors ∣a⟩ and ∣b⟩ in a Hilbert space, the inner product is written ⟨a∣b⟩, where ⟨a∣ is the conjugate transpose of ∣a⟩, giving us:

$$\langle a \vert b \rangle = \begin{pmatrix} a_1^* & a_2^* & \cdots & a_n^* \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} = a_1^* b_1 + a_2^* b_2 + \cdots + a_n^* b_n,$$

where $*$ denotes the complex conjugate of each component.
One of the most important conditions on quantum state vectors in these spaces is that the inner product of a vector with itself is equal to one: ⟨ψ∣ψ⟩ = 1. This is the normalization condition expressed mathematically, which states that the length of the vector squared (each component's magnitude squared, then summed) is equal to one. The physical significance is that the length of a vector along a particular direction represents the probability amplitude of the quantum system with regard to measuring it in that particular state.
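A sketch of the inner product and the normalization condition in numpy (illustrative only):

```python
import numpy as np

a = np.array([1 + 1j, 2])
b = np.array([3, 1j])

# Inner product <a|b>: conjugate the first vector, then take the dot product
print(np.vdot(a, b))  # np.vdot conjugates its first argument: (3-1j)

# Normalization: <psi|psi> = 1 for a valid quantum state
psi = np.array([1, 1j]) / np.sqrt(2)
print(np.vdot(psi, psi).real)  # 1.0
```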
Returning to the Bloch sphere:
The surface of this sphere, together with the inner product between qubit states, is a valid Hilbert space. A final note on Hilbert spaces is that they relate to unitary matrices, which preserve the inner product, meaning you can transform a vector under a series of unitary matrices without ever breaking the normalization condition that the inner product of a vector with itself equal one:

$$\langle \psi \vert U^\dagger U \vert \psi \rangle = \langle \psi \vert \psi \rangle = 1$$
This means unitary evolution of quantum systems sends quantum states to other valid quantum states. For a single-qubit Hilbert space, represented by the Bloch sphere, unitary transformations correspond to rotations of the state vector to different points on the sphere, without altering its length.
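As a final sketch (assuming numpy), a rotation about the Bloch sphere's y-axis, a standard single-qubit unitary, leaves the norm of the state unchanged:

```python
import numpy as np

def ry(theta):
    """Rotation about the y-axis of the Bloch sphere by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s],
                     [s,  c]])

psi = np.array([1.0, 0.0])        # start in |0>
for theta in (0.3, 1.1, 2.5):
    psi = ry(theta) @ psi         # unitary evolution
    print(np.vdot(psi, psi).real) # 1.0 each time: the norm is preserved
```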