One way of testing whether a set of vectors is linearly independent is to arrange the vectors as the columns of a matrix and then work out its determinant. If the determinant of that matrix is $0$, then the set is a linearly dependent set, but if the determinant is something other than $0$, then the set is a linearly independent one.
Obviously, this only works if the matrix so formed is a square matrix, where the number of rows equals the number of columns. This is the same as saying … the number of components of the vectors in our set equals the total number of vectors in our set. While this might at first seem like a letdown, it isn't totally useless, especially when it comes to determining whether the set is a basis for a particular subspace. What do I mean by that? Well, I'd recommend you continue reading!
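To make the technique concrete, here is a minimal sketch in Python using NumPy (the example vectors and the tolerance for "close enough to zero" are my own illustrative choices, not part of the technique itself):

```python
import numpy as np

# Three vectors in R^3; v3 = v1 + v2, so the set is linearly dependent.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2

# Arrange the vectors as the columns of a (square) matrix.
M = np.column_stack([v1, v2, v3])

# A (numerically) zero determinant signals linear dependence.
det = np.linalg.det(M)
print(det)                                                  # ~0.0
print("independent" if abs(det) > 1e-9 else "dependent")    # dependent
```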
I have been using this “technique” for quite some time in Linear Algebra classes at school, but mostly without really understanding what is going on underneath. At first sight, this might seem like a “trick” of some sort, but as Qiaochu Yuan says, “In mathematics nothing is a trick if seen from a sufficiently high level”, so this is no trick.
Well, from here on, I’ll try to be as mathematically formal as I can.
Let $S$ be a linearly dependent set, with cardinality $n$, of $n$-vectors. Now construct a matrix $M$ whose columns are the vectors of $S$. Or, if $v_1, v_2, \ldots, v_n$ are the $n$-vector elements of $S$, then construct $M$ such that

$$M = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$$
Now let $D$ be the determinant of $M$. Then,

$$D = \det(M)$$
Now, because the set is a linearly dependent set, one of the vectors (without loss of generality, let's say it's the vector $v_n$) can be represented as a linear combination of the other vectors in the set. Or,

$$v_n = c_1 v_1 + c_2 v_2 + \cdots + c_{n-1} v_{n-1}$$

for some scalars $c_1, c_2, \ldots, c_{n-1}$.
Now, take the matrix $M$ and subtract $c_1 v_1 + c_2 v_2 + \cdots + c_{n-1} v_{n-1}$ from the $n^{\text{th}}$ column (or from $v_n$). This results in another matrix (say $M'$) whose last column is the $\mathbf{0}$ vector.
So, we now have that

$$M' = \begin{bmatrix} v_1 & v_2 & \cdots & v_{n-1} & \mathbf{0} \end{bmatrix}$$
It follows, from the fact that one of the columns of $M'$ is $\mathbf{0}$, that

$$\det(M') = 0$$
Because the determinant of a matrix remains unchanged if a scalar multiple of a particular column (or row) is added to (or subtracted from) another column (or row), it thus follows that

$$D = \det(M) = \det(M') = 0$$
Therefore, we have just shown that if the columns of a matrix are linearly dependent, the determinant of that matrix must be zero. The converse is also true and it shouldn't be that difficult to show.
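As a quick numerical sanity check of the argument (again just a sketch with NumPy, using made-up vectors and coefficients), subtracting the linear combination from the last column produces a zero column without changing the determinant:

```python
import numpy as np

# A dependent set: v3 = 2*v1 - v2, so c1 = 2 and c2 = -1.
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = 2 * v1 - v2

M = np.column_stack([v1, v2, v3])

# Subtract c1*v1 + c2*v2 from the last column, as in the proof above.
M_prime = M.copy()
M_prime[:, 2] -= 2 * v1 - v2

print(M_prime[:, 2])           # [0. 0. 0.] -- the last column is now the zero vector
print(np.linalg.det(M_prime))  # 0.0 -- a zero column forces a zero determinant
print(np.linalg.det(M))        # ~0.0 -- unchanged by the column operation
```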
Now that it has been shown (not so rigorously, but bearably solidly) why this works, I'd like to focus on something I mentioned earlier in the post, something about $M$ being a square matrix. After reading the proof, it should be very clear why $M$ should be a square matrix: our technique uses the idea of a determinant, which is only defined for a square matrix, and hence, if $M$ isn't a square matrix, then we cannot compute its determinant.
Now, when you think about what it actually means for $M$ to be a square matrix, you realize a certain fact about this technique that is, in my opinion, worthy of a mention. Because $M$ is a square matrix, the number of vectors in $S$ and the number of components in each vector have to be equal; otherwise, $M$ wouldn't be a square matrix. This means that the linear independence of the columns of $M$ implies that the vectors in $S$ span the vector space $\mathbb{R}^n$ (a set of $n$ linearly independent vectors in an $n$-dimensional space automatically spans it). So, if a set is linearly independent and it spans a particular subspace, the set is said to be a basis for that particular subspace, right?
So, what do we have now? Our little “theorem” now extends to
If a set with $n$ vectors (each with $n$ components) is a basis for the vector space $\mathbb{R}^n$, then the determinant of the matrix whose columns are the vectors of that basis set is a non-zero number. If the set isn't a basis for $\mathbb{R}^n$, then the determinant of that matrix is $0$.
The converse of this statement is true too, and it shouldn't be difficult to see why.
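Put as code, the basis test is exactly the same determinant check; here's a rough sketch (the helper name `is_basis`, the tolerance, and the example vectors are assumptions of mine for illustration, not a standard API):

```python
import numpy as np

def is_basis(vectors, tol=1e-9):
    """Determinant test: n vectors of length n form a basis of R^n iff det != 0."""
    M = np.column_stack(vectors)
    if M.shape[0] != M.shape[1]:
        raise ValueError("need exactly n vectors with n components each")
    return abs(np.linalg.det(M)) > tol

# A basis for R^2 ...
print(is_basis([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))   # True
# ... and a set that is not (the second vector is twice the first).
print(is_basis([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))   # False
```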