Question
Are there any examples that could prove or demonstrate that underfitting can also occur when classifying images with MobileNet?
I have tried transfer learning and feature extraction with MobileNet in ml5.js. Since it is already pretrained on over a million ImageNet images, even when I add and train on only 3 new images, I seem to get correct results.
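For reference, here is a rough sketch of my ml5.js setup (not my exact code; the option names and defaults are taken from the ml5 FeatureExtractor documentation, and the comments mark the knobs I suspect could be turned to provoke underfitting):

// Assumes the ml5 library is loaded via its <script> tag, as in the ml5 examples.
const video = document.getElementById('video'); // webcam <video> element

// Option names and default values are from the ml5 FeatureExtractor documentation.
const options = {
  numLabels: 3,
  epochs: 20,            // lower this (e.g. to 1) so the new head gets almost no training?
  learningRate: 0.0001,  // or shrink it (e.g. to 1e-7) so its weights barely move?
  hiddenUnits: 100       // or cut it to 1 so the head has almost no capacity?
};

const featureExtractor = ml5.featureExtractor('MobileNet', options, modelReady);
const classifier = featureExtractor.classification(video, videoReady);

function modelReady() {
  // classifier.addImage('labelA'); ... add the 3 training images here, then:
  // classifier.train(loss => console.log(loss));
}

function videoReady() {}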
I am looking for an example I can use to demonstrate to the user that underfitting is possible with MobileNet as well, for instance by changing a particular parameter while building the model, or something along those lines. I am open to any tech stack (TensorFlow.js / ml5.js / Keras).
For instance, this is from the documentation of the Keras R interface:
application_mobilenet(
  input_shape = NULL,
  alpha = 1,
  depth_multiplier = 1,
  dropout = 0.001,
  include_top = TRUE,
  weights = "imagenet",
  input_tensor = NULL,
  pooling = NULL,
  classes = 1000
)

mobilenet_preprocess_input(x)
mobilenet_decode_predictions(preds, top = 5)
mobilenet_load_model_hdf5(filepath)
So, is there a variable that the user could change in order to observe the difference, i.e. underfitting?
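From that signature, the alpha (width multiplier) and weights arguments look like candidates to me: with weights = NULL and a small alpha, the network starts from scratch with very little capacity. If I understand correctly, the same width multiplier is also exposed by the @tensorflow-models/mobilenet wrapper in TensorFlow.js, roughly like this (a hedged sketch; the 0.25 value is just illustrative):

import * as mobilenet from '@tensorflow-models/mobilenet';

async function loadThinMobileNet() {
  // alpha = 0.25 selects the thinnest published MobileNet v1 variant;
  // the default used by most examples is alpha = 1.0.
  return mobilenet.load({ version: 1, alpha: 0.25 });
}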
Additionally, here is a link to a codelab for image classification with MobileNet and TensorFlow.js. Basically, I want to do something similar, but also show the user that underfitting is possible here. Is there any way I could modify this code?
https://codelabs.developers.google.com/codelabs/tensorflowjs-teachablemachine-codelab#0
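For example, here is a hedged sketch of the kind of modification I have in mind (it assumes knnClassifier, the loaded MobileNet net, and webcamElement are set up exactly as in the codelab; the k value is deliberately extreme): if k is at least as large as the total number of stored examples, every example votes on every prediction, so the output stops depending on the webcam image at all.

// `knnClassifier`, `net` (the loaded MobileNet) and `webcamElement` are assumed
// to come from the codelab's existing <script> tags and setup code.
const classifier = knnClassifier.create();

// Add examples as in the codelab; the second argument to infer() asks for the
// embedding rather than the 1000 ImageNet class scores.
function addExample(classId) {
  const activation = net.infer(webcamElement, true);
  classifier.addExample(activation, classId);
}

// Predict with far more neighbours (k) than there are stored examples: every
// stored example then votes on every prediction, so the reported confidences
// reflect only how many examples each class has, not the current image.
async function predict() {
  const result = await classifier.predictClass(net.infer(webcamElement, true), 50 /* k, default 3 */);
  console.log(result.label, result.confidences);
}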
Source: https://stackoverflow.com/questions/64621656/can-we-show-underfitting-with-mobilenet