I have done manual hyperparameter optimization for ML models before and always defaulted to tanh or ReLU as the hidden-layer activation function. Recently, I star