How to use textsum?

野性不改  2021-02-09 06:33

I've been following this link to use textsum. I've trained the model using the command provided. But I don't see any folder 'train' in the 'textsum/log_root/' directory. Sinc

1 Answer
  愿得一人  2021-02-09 06:48

    I honestly can't say why you would not see a train folder in the log_root directory if you passed all your parameters correctly, but there are a few things to check. First, make sure you wait long enough. When you kick off a training run with textsum, do you see any verbose logs reporting an error, such as a missing file list? If so, the path you are passing to one of the parameters is probably wrong. Paths are also resolved relative to the directory you run the command from, so make sure you are at the repository root where your WORKSPACE file lives.
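
    For reference, a training invocation typically looks like the sketch below, adapted from the textsum README. The data and vocab paths are placeholders for your own files, and all of them are resolved relative to the directory you run from:

        # Build once from the repository root (add --config=cuda for a GPU build).
        bazel build -c opt textsum/...

        # Train. data_path and vocab_path are placeholders for your own files.
        bazel-bin/textsum/seq2seq_attention \
          --mode=train \
          --article_key=article \
          --abstract_key=abstract \
          --data_path=data/training-* \
          --vocab_path=data/vocab \
          --log_root=textsum/log_root \
          --train_dir=textsum/log_root/train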

    Another thing: are you training on a CPU or a GPU? On a CPU it takes a long while before the model gets to the point where it can even write out any data. On a GPU this is much faster, but you still need to wait until the "average_loss" log lines start printing to your screen. Once you see those, there is a good chance your "train" folder will appear with data in it.
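
    One way to confirm that checkpoints and summaries are actually being written, without watching the console, is to point TensorBoard at the log directory (assuming the log_root path used above):

        tensorboard --logdir=textsum/log_root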

    As for the "real-time" test data, I am still looking into this myself, and now that I have my current data training in the model, I am going to start on that as well. The direction, as I understand it so far, is that once you have trained your model and have your pickle file or whatever it is, you can then "serve" it using the info here: https://tensorflow.github.io/serving/
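
    I have not done this end to end yet, so treat the following as a rough sketch only: TensorFlow Serving ships a standard model server that loads an exported model from a directory. The model name and export path here are hypothetical placeholders, not anything textsum produces for you:

        # Hypothetical: /tmp/textsum_export is a placeholder export directory.
        # Serving expects numeric version subdirectories under the base path (e.g. /tmp/textsum_export/1).
        tensorflow_model_server \
          --port=9000 \
          --model_name=textsum \
          --model_base_path=/tmp/textsum_export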

    At that point your model is trained, and you can query against it and feed in new responses, so over time your model gets smarter. Again, I have not proven this yet with an example, but it is the approach I am going to start on soon.

    With regards to "testing the model", you can pretty much follow the instructions provided on the textsum git: regenerate the vocab file, then train. After your average loss drops to a small enough fraction, run decode against the data, as sketched below. Then in the decode folder under log_root you will see the generated headlines and their associated reference files (what the actual headlines were).
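
    The decode step mirrors the training command; this sketch follows the README, with the test data path as a placeholder:

        bazel-bin/textsum/seq2seq_attention \
          --mode=decode \
          --article_key=article \
          --abstract_key=abstract \
          --data_path=data/test-* \
          --vocab_path=data/vocab \
          --log_root=textsum/log_root \
          --decode_dir=textsum/log_root/decode \
          --beam_size=8

    Hope this helps, and good luck!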
