precision and recall in fastText?


Precision is the ratio of the number of relevant results to the total number of results retrieved by the program. Assume a document search engine retrieves 100 docs, of which 90 are relevant to the query; the precision is then 90 / 100 (0.9). Since the precision is calculated over 100 results, this is P@100.

Recall is the ratio of the number of relevant results retrieved by the algorithm to the total number of relevant results. Continuing the example above, if the total number of relevant documents is 110, then the recall is 90 / 110 (≈0.82).

In a nutshell, recall evaluates how complete an information retrieval program is at fetching relevant results, and precision evaluates how accurate the retrieved results are.
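
A minimal sketch of the arithmetic above, with the counts from the search-engine example hard-coded as placeholder values:

    # Counts from the search-engine example above (placeholder values)
    retrieved = 100          # documents returned by the engine
    relevant_retrieved = 90  # of those, how many are actually relevant
    total_relevant = 110     # all relevant documents in the collection

    precision_at_100 = relevant_retrieved / retrieved  # 90 / 100 = 0.9 -> P@100
    recall = relevant_retrieved / total_relevant       # 90 / 110 ≈ 0.818

    print(precision_at_100, recall)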

For binary classification in fastText, see also https://github.com/facebookresearch/fastText/issues/93
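
fastText's supervised mode reports these same metrics (precision@k and recall@k) from its test command. A minimal sketch using the Python bindings, where "train.txt" and "valid.txt" are placeholder files in fastText's "__label__X some text" format:

    import fasttext

    # Train a supervised classifier; the file paths are placeholders.
    model = fasttext.train_supervised(input="train.txt")

    # test() returns (number of examples, precision@k, recall@k); k defaults to 1.
    n, p_at_1, r_at_1 = model.test("valid.txt")
    print(f"N={n}  P@1={p_at_1:.3f}  R@1={r_at_1:.3f}")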

Precision is the ratio of the number of correctly predicted labels to the number of labels predicted by the model.

Recall is the ratio of the number of correctly predicted labels to the number of actual labels in the validation dataset.

For example, actual labels for an input in the validation dataset: A, B, C, F, G

Predicted labels for the input from the model: A, B, C, D

Correctly predicted labels: A, B, C

Precision: 3 / 4 = 0.75

Recall: 3 / 5 = 0.6
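
The same worked example expressed with sets, using the label names from the lists above:

    # Labels from the worked example above
    actual = {"A", "B", "C", "F", "G"}      # labels in the validation dataset
    predicted = {"A", "B", "C", "D"}        # labels predicted by the model

    correct = actual & predicted            # {"A", "B", "C"}

    precision = len(correct) / len(predicted)  # 3 / 4 = 0.75
    recall = len(correct) / len(actual)        # 3 / 5 = 0.6

    print(precision, recall)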
