If I correctly understood the significance of the loss function to the model, it directs the model to be trained based on minimizing the loss value. So for example, if I wan
If I understood it correctly, your question is: why optimise "loss" when we can optimise "accuracy"?
Of course you can! (Whether it will be good for convergence is another issue.) You see, both the loss (MSE in your case) and accuracy are essentially ordinary functions, or to be precise equations, and you can choose any equation as your objective function.
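For instance, here is a minimal sketch of passing your own equation as the objective instead of a built-in one (the function name and the toy model are made up for illustration):

```python
import tensorflow as tf

# Any function of (y_true, y_pred) that returns a tensor can serve as the
# objective; Keras will simply minimise whatever you hand it.
def my_objective(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))  # this one happens to be MSE

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss=my_objective)
```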
Maybe this confusion arises from the use of strings like "mse" and, even more confusing, "acc".

Check this file to get a clearer picture of what happens when you write "mse".

"acc" is a little more confusing. When you write "acc", it can have multiple meanings for Keras. Based on the loss function you are using, Keras then decides the best "acc" function for you. Check this file to see what happens when you write "acc".
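As a rough illustration of that lookup (a sketch assuming a recent tf.keras; the exact file layout differs between versions):

```python
import tensorflow as tf

# The string "mse" is just a shortcut that Keras resolves to the
# mean-squared-error function for you.
mse_fn = tf.keras.losses.get("mse")
print(mse_fn)  # -> the mean_squared_error function

# "acc" has no single meaning: inside compile(), Keras looks at your loss
# and output shape and substitutes binary, categorical or
# sparse-categorical accuracy accordingly.
```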
Finally, answering your question: shouldn't the focus of the model during its training be to maximize acc (or minimize 1/acc) instead of minimizing MSE?
Well, to Keras, MSE and acc are nothing but functions. Keras optimises your model based on feedback from the function defined at model.compile(loss=...).

For the loss attribute, pass a function. If you do not want to do so, just write "mse" and Keras will pick the required function for you.
For the metrics attribute, pass a list of function(s). If you are lazy like me, simply ask Keras to do so by writing "acc", as in the sketch below.
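Putting the two attributes together, a minimal sketch (the toy model is made up for illustration; the two compile calls do the same thing in spirit, one explicit and one lazy):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# Explicit: pass the functions yourself.
model.compile(optimizer="adam",
              loss=tf.keras.losses.mean_squared_error,
              metrics=[tf.keras.metrics.mean_absolute_error])

# Lazy: pass strings and let Keras look the functions up for you
# ("acc" is resolved to a suitable accuracy function at compile time).
model.compile(optimizer="adam", loss="mse", metrics=["acc"])
```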
Which function/equation should you use as your objective function? That's for another day :)