I'm using scikit-learn to perform PCA on this dataset. The scikit-learn documentation states that
Due to implementation subtleties of the Singular Value Decomposition (SVD), which is used in this implementation, running fit twice on the same matrix can lead to principal components with signs flipped (change in direction). For this reason, it is important to always use the same estimator object to transform data in a consistent fashion.
SVD decompositions are not guaranteed to be unique: only the singular values are guaranteed to be identical, and different implementations of svd() can produce different signs. Any of the eigenvectors can have its sign flipped, and the decomposition will still produce identical results when the data is transformed and then transformed back into the original space. Most algorithms in sklearn that use an SVD decomposition call the function sklearn.utils.extmath.svd_flip() to correct this and enforce an identical sign convention across algorithms. For historical reasons, PCA() never got this fix (though maybe it should...)
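To make the sign ambiguity concrete, here is a minimal sketch (assuming NumPy and a scikit-learn version where svd_flip() lives in sklearn.utils.extmath, and using random data purely for illustration): flipping the signs of both U and Vt leaves the reconstruction unchanged, and svd_flip() maps both sign choices onto the same convention.

```python
import numpy as np
from sklearn.utils.extmath import svd_flip

rng = np.random.RandomState(0)
X = rng.randn(20, 5)
X -= X.mean(axis=0)            # center the data, as PCA does

# Two equally valid SVDs that differ only in sign
U, S, Vt = np.linalg.svd(X, full_matrices=False)
U2, Vt2 = -U, -Vt              # flip the signs of both factors

# The reconstructions are identical, so the data cannot prefer one sign
print(np.allclose(U @ np.diag(S) @ Vt, U2 @ np.diag(S) @ Vt2))   # True

# svd_flip() enforces a single deterministic sign convention,
# so both decompositions end up with the same components
U_a, Vt_a = svd_flip(U.copy(), Vt.copy())
U_b, Vt_b = svd_flip(U2.copy(), Vt2.copy())
print(np.allclose(Vt_a, Vt_b))   # True
```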
In general, this is not something to worry about - just a limitation of the SVD algorithm as typically implemented.
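For instance, this sketch (again on random data, assuming a recent scikit-learn) shows that flipping the sign of a principal component flips the sign of the corresponding scores, but the round trip back into the original space is unaffected:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(50, 4)

pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)

# Flip the sign of the first component in both the scores and the loadings
Z_flipped = Z.copy()
Z_flipped[:, 0] *= -1
components_flipped = pca.components_.copy()
components_flipped[0] *= -1

# Both sign choices reconstruct exactly the same approximation of X
X_rec = Z @ pca.components_ + pca.mean_
X_rec_flipped = Z_flipped @ components_flipped + pca.mean_
print(np.allclose(X_rec, X_rec_flipped))   # True
```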
On an additional note, I find assigning importance to PC weights (and parameter weights in general) dangerous, because of exactly these kinds of issues. Numerical/implementation details should not influence your analysis results, but it is often hard to tell what is a result of the data and what is a result of the algorithms you use for exploration. I know this is a homework assignment, not a choice, but it is important to keep these things in mind!