I am working on a simple CNN classifier using Keras with the TensorFlow backend.
def cnnKeras(training_data, training_labels, test_data, test_labels, n_dim):
    ...  # function body truncated in the question
The loss function sparse_categorical_crossentropy interprets the final layer's output as a set of probabilities, one per class, and the labels as integer class indices. (The TensorFlow/Keras documentation goes into a bit more detail.) So x neurons in the output layer are compared against label values in the range 0 to x-1; having just one neuron in the output layer would be a 'unary' classifier, which doesn't make sense.
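The matching between integer labels and output neurons can be sketched in plain NumPy (a simplified sketch with made-up probabilities, not the actual Keras implementation): the loss picks out the predicted probability at the true class index and takes its negative log, which is why every label must index a real output neuron.

```python
import numpy as np

def sparse_categorical_crossentropy(y_true, y_pred):
    """y_true: integer class indices, shape (n,).
    y_pred: per-class probabilities, shape (n, num_classes)."""
    # For each sample, pick the probability the model assigned to the
    # true class, then take the negative log of it.
    picked = y_pred[np.arange(len(y_true)), y_true]
    return -np.log(picked)

# Three samples, four output neurons -- so labels must lie in 0..3.
probs = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.10, 0.80, 0.05, 0.05],
                  [0.25, 0.25, 0.25, 0.25]])
labels = np.array([0, 1, 3])

print(sparse_categorical_crossentropy(labels, probs).round(3))
# → [0.357 0.223 1.386]
```

A label of, say, 4 here would index past the last column, which is the array-level version of the shape mismatch Keras reports.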
If it's a classification task where the labels are integers from 0 to x-1, you can keep sparse_categorical_crossentropy, but you need to set the number of neurons in the output layer to the number of classes you have. Alternatively, you can one-hot encode the labels and use the categorical_crossentropy loss function instead.
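The one-hot alternative can be sketched in NumPy as well (again a sketch with made-up values; in Keras you would use keras.utils.to_categorical and the built-in loss). On the same targets, categorical crossentropy over one-hot labels gives the same per-sample losses as the sparse variant over integer labels, because the one-hot row zeroes out every term except the true class.

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # Row i is all zeros except a 1 at column labels[i],
    # mirroring what keras.utils.to_categorical produces.
    return np.eye(num_classes)[labels]

def categorical_crossentropy(y_true_one_hot, y_pred):
    # Sum over classes; only the true class's term survives,
    # since the one-hot encoding zeroes out all the others.
    return -np.sum(y_true_one_hot * np.log(y_pred), axis=1)

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.1, 0.8]])
int_labels = np.array([0, 2])        # what sparse_categorical_crossentropy expects
one_hot = to_one_hot(int_labels, 3)  # what categorical_crossentropy expects

print(categorical_crossentropy(one_hot, probs).round(3))
# → [0.357 0.223]
```

So the two options differ only in how the targets are encoded, not in the loss values they produce.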
If it's not a classification task and you want to predict arbitrary real-valued numbers, as in regression, then categorical crossentropy is not a suitable loss function at all; a distance-based loss such as mean squared error is the usual choice.
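For the regression case, a distance-based loss such as mean squared error is what you would typically reach for (in Keras, loss='mse'). A minimal sketch of what that loss computes, with made-up values:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    # Average squared distance between prediction and target --
    # well-defined for arbitrary real-valued outputs, unlike a
    # crossentropy over class probabilities.
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.5, -0.3, 2.0])   # arbitrary real targets
y_pred = np.array([1.0,  0.0, 2.5])   # model outputs, one neuron is fine here

print(round(mean_squared_error(y_true, y_pred), 4))
```

Note that for regression a single output neuron is perfectly sensible, unlike in the classification setup above.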