This is my code:
for it in range(EPOCH*24410//BATCH_SIZE):
    tr_pa, tr_sp = sess.run([tr_para, tr_spec])
    train_loss, _ = sess.run([loss, fw_op], feed_dict=
The telltale signature of overfitting is when your validation loss starts increasing, while your training loss continues decreasing, i.e.:
(Image adapted from the Wikipedia entry on overfitting)
Here are some other plots indicating overfitting (source).
See also the SO thread How to know if underfitting or overfitting is occurring?.
Clearly, your plot does not exhibit such behavior, hence you are not overfitting.
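To make the rule of thumb above concrete, here is a minimal, hypothetical sketch (plain Python, independent of the TensorFlow code in the question) that flags the overfitting signature: validation loss rising for several consecutive epochs while training loss keeps falling. The function name and the patience parameter are illustrative, not from any library.

```python
def detect_overfitting(train_losses, val_losses, patience=3):
    """Return the epoch index at which overfitting is confirmed, i.e.
    validation loss has risen while training loss has fallen for
    `patience` consecutive epochs; return None otherwise."""
    rising = 0
    for i in range(1, len(val_losses)):
        if val_losses[i] > val_losses[i - 1] and train_losses[i] < train_losses[i - 1]:
            rising += 1
            if rising >= patience:
                return i
        else:
            rising = 0  # streak broken, start counting again
    return None

# Healthy run: both curves fall together -> no overfitting flagged
print(detect_overfitting([1.0, 0.8, 0.6, 0.5], [1.1, 0.9, 0.7, 0.6]))  # None

# Overfitting: validation loss climbs while training loss keeps dropping
print(detect_overfitting([1.0, 0.7, 0.5, 0.3, 0.2],
                         [1.0, 0.8, 0.9, 1.0, 1.1]))  # 4
```

In your plot, the validation curve never turns upward this way, which is why the diagnosis "overfitting" does not apply.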
Your code looks OK, keeping in mind that you don't show what exactly goes on inside your session sess.