Neural Network based ranking of documents


I'm planning to implement a document ranker that uses neural networks. How can one rate a document by taking into consideration the ratings of similar articles? Any good Python libraries for doing this?

6 Answers
  • 2021-01-31 00:50

    The problem you are trying to solve is called "collaborative filtering".

    Neural Networks

    One state-of-the-art neural-network approach uses Deep Belief Networks and Restricted Boltzmann Machines. For a fast Python implementation for a GPU (CUDA) see here. Another option is PyBrain.
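
    Since RBMs come up throughout this answer, here is a minimal sketch of training one with a single step of contrastive divergence (CD-1). It is illustrative only: toy binary data, plain NumPy, and no bias terms, which real implementations include.

        import numpy as np

        rng = np.random.default_rng(0)
        V = rng.integers(0, 2, size=(10, 6)).astype(float)   # binary "rating" vectors
        W = 0.01 * rng.normal(size=(6, 4))                   # 6 visible, 4 hidden units
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(200):
            h0 = sigmoid(V @ W)                              # positive-phase hidden probs
            h_s = (rng.random(h0.shape) < h0).astype(float)  # sample hidden states
            v1 = sigmoid(h_s @ W.T)                          # reconstruct visibles
            h1 = sigmoid(v1 @ W)                             # negative-phase hidden probs
            W += 0.1 * (V.T @ h0 - v1.T @ h1) / len(V)       # CD-1 weight update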

    Academic papers on your specific problem:

    • This is probably the state of the art in neural networks for collaborative filtering (of movies):

      Salakhutdinov, R., Mnih, A., and Hinton, G. Restricted Boltzmann Machines for Collaborative Filtering. To appear in Proceedings of the 24th International Conference on Machine Learning, 2007. PDF

    • A Hopfield network implemented in Python:

      Huang, Z., Chen, H., and Zeng, D. Applying associative retrieval techniques to alleviate the sparsity problem in collaborative filtering. ACM Transactions on Information Systems (TOIS), 22(1), 116–142, 2004. PDF

    • A thesis on collaborative filtering with Restricted Boltzmann Machines (they say Python is not practical for the job):

      G. Louppe. Collaborative filtering: Scalable approaches using restricted Boltzmann machines. Master's thesis, Université de Liège, 2010. PDF

    Neural networks are not currently the state of the art in collaborative filtering, and they are not the simplest or most widespread solutions. Regarding your comment that you chose NNs because you have too little data: neural networks have no inherent advantage or disadvantage there, so you might want to consider simpler machine learning approaches.

    Other Machine Learning Techniques

    The best methods today mix k-Nearest Neighbors and Matrix Factorization.
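
    To give a flavor of the matrix-factorization half, here is a hedged sketch (made-up ratings, plain NumPy, not any particular library's API): learn low-rank user and item factors by stochastic gradient descent, then predict an unseen rating from their dot product.

        # Minimal matrix factorization via SGD (toy data; illustrative only --
        # real systems add biases, tuned regularization, and validation).
        import numpy as np

        rng = np.random.default_rng(0)
        R = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (2, 1): 1.0}  # (user, item) -> rating
        n_users, n_items, k = 3, 2, 2

        P = 0.1 * rng.random((n_users, k))   # user factors
        Q = 0.1 * rng.random((n_items, k))   # item factors

        for _ in range(500):
            for (u, i), r in R.items():
                err = r - P[u] @ Q[i]
                P[u] += 0.05 * (err * Q[i] - 0.02 * P[u])   # SGD step with L2 penalty
                Q[i] += 0.05 * (err * P[u] - 0.02 * Q[i])

        print(P[2] @ Q[0])   # predicted rating for the unseen (user 2, item 0) pair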

    If you are locked into Python, take a look at pysuggest (a Python wrapper for the SUGGEST recommendation engine) and PyRSVD (primarily aimed at applications in collaborative filtering, in particular the Netflix competition).

    If you are open to trying other open-source technologies, look at: open-source collaborative filtering frameworks and http://www.infoanarchy.org/en/Collaborative_Filtering.

  • 2021-01-31 00:53

    Packages

    If you're not committed to neural networks, I've had good luck with SVM, and k-means clustering might also be helpful. Both of these are provided by Milk. It also does Stepwise Discriminant Analysis for feature selection, which will definitely be useful to you if you're trying to find similar documents by topic.
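
    Since I can't vouch for Milk's exact calls from memory, here is the same SVM-plus-clustering idea sketched with scikit-learn instead (everything below is illustrative: toy documents, hypothetical relevance labels).

        # TF-IDF vectors -> k-means topic clusters + an SVM relevance classifier.
        # A sketch of the idea above, not Milk code.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        docs = ["neural nets rank text", "svm classifies documents",
                "k-means clusters articles", "ratings of similar articles"]
        labels = [1, 0, 0, 1]  # hypothetical relevance labels

        X = TfidfVectorizer().fit_transform(docs)

        # Group documents by topic; cluster ids give you "similar article" pools.
        clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

        # Learn a relevance boundary and score a document.
        clf = SVC(kernel="linear").fit(X, labels)
        print(clusters, clf.predict(X[:1]))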

    God help you if you choose this route, but the ROOT framework has a powerful machine learning package called TMVA that provides a large number of classification methods, including SVM, NN, and Boosted Decision Trees (also possibly a good option). I haven't used it, but PyROOT provides Python bindings to ROOT functionality. To be fair, when I first used ROOT I had no C++ knowledge and was in over my head conceptually too, so this might actually be amazing for you. ROOT has a HUGE number of data processing tools.

    (NB: I've also written a fairly accurate document language identifier using chi-squared feature selection and cosine matching. Obviously your problem is harder, but consider that you might not need very hefty tools for it.)
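
    For what the cosine-matching half of that looks like, here is a hedged sketch over character n-gram profiles (the original identifier isn't shown; all names and data below are made up for illustration).

        # Cosine similarity between character-trigram count profiles.
        from collections import Counter
        from math import sqrt

        def profile(text, n=3):
            """Character n-gram counts for a document."""
            return Counter(text[i:i + n] for i in range(len(text) - n + 1))

        def cosine(a, b):
            """Cosine similarity between two sparse count vectors."""
            dot = sum(a[g] * b[g] for g in a if g in b)
            norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        english = profile("the quick brown fox jumps over the lazy dog")
        query = profile("the dog jumps over the fox")
        print(cosine(query, english))  # higher = closer match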

    Storage vs Processing

    You mention in your question that:

    ...articles would be fetched based on their tags and passed through a neural network for ranking.

    Just as another NB, one thing you should know about machine learning is that processes like training and evaluating tend to take a while. You should probably consider ranking all documents for each tag only once (assuming you know all the tags) and storing the results. For machine learning generally, it's much better to use more storage than more processing.

    Now to your specific case. You don't say how many tags or documents you have, so let's assume 1000 tags and 100,000 documents, for roundness. If you store the results of your ranking for each doc on each tag, that gives you 100 million floats to store. That's a lot of data, and calculating them all will take a while, but retrieving them is very fast. If instead you recalculate the ranking for each document on demand, you have to do 1000 passes of it, one for each tag. Depending on the kind of operations you're doing and the size of your docs, that could take a few seconds to a few minutes. If the process is simple enough that you can wait for your code to do several of these evaluations on demand without getting bored, then go for it, but you should time this process before making any design decisions or writing code you won't want to use.
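
    The arithmetic is worth making explicit (a back-of-the-envelope sketch under the 1000-tag / 100,000-document assumption above):

        # Storage cost of precomputing every (document, tag) ranking score.
        n_tags, n_docs = 1000, 100_000
        scores = n_tags * n_docs            # 100 million floats
        gb = scores * 4 / 1e9               # at 4 bytes per 32-bit float
        print(f"{scores:,} scores ~= {gb:.1f} GB")   # ~0.4 GB, but O(1) lookup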

    Good luck!

  • 2021-01-31 01:02

    The FANN library also looks promising.

  • 2021-01-31 01:04

    If I understand correctly, your task is related to collaborative filtering. There are many possible approaches to this problem; I suggest you read the Wikipedia page for an overview of the main approaches you can choose from.

    For your project I can suggest looking at a Python-based intro to neural networks with a simple backprop NN implementation and a classification example. This is not "the" solution, but perhaps you can build your system out of that example without needing a bigger framework.
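
    To make "simple backprop NN" concrete, here is a minimal one-hidden-layer network in NumPy (toy random data and sigmoid activations; a sketch of the mechanics, not a document ranker).

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.random((8, 4))                                   # 8 docs, 4 features
        y = (X.sum(axis=1, keepdims=True) > 2.0).astype(float)   # toy relevance labels

        W1 = rng.normal(scale=0.5, size=(4, 5))   # input -> hidden weights
        W2 = rng.normal(scale=0.5, size=(5, 1))   # hidden -> output weights
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(2000):
            h = sigmoid(X @ W1)                      # forward pass
            out = sigmoid(h @ W2)
            g_out = (out - y) * out * (1 - out)      # backprop through output sigmoid
            g_h = g_out @ W2.T * h * (1 - h)         # ...and through the hidden layer
            W2 -= 0.5 * h.T @ g_out                  # gradient-descent updates
            W1 -= 0.5 * X.T @ g_h

        print(out.round(2).ravel())   # scores should approach the 0/1 labels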

  • 2021-01-31 01:07

    You might want to check out PyBrain.
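
    If you do, a toy example of its supervised-training API looks roughly like this (feature values and network sizes below are made up):

        # A small backprop network in PyBrain: 4 inputs -> 5 hidden -> 1 score.
        from pybrain.tools.shortcuts import buildNetwork
        from pybrain.datasets import SupervisedDataSet
        from pybrain.supervised.trainers import BackpropTrainer

        net = buildNetwork(4, 5, 1)
        ds = SupervisedDataSet(4, 1)
        ds.addSample((0.1, 0.9, 0.2, 0.8), (1.0,))   # hypothetical doc features -> rating
        ds.addSample((0.9, 0.1, 0.7, 0.2), (0.0,))

        trainer = BackpropTrainer(net, ds)
        for _ in range(100):
            trainer.train()                          # one training epoch per call
        print(net.activate((0.2, 0.8, 0.3, 0.7)))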

  • 2021-01-31 01:12

    I am not really sure that neural networks are the best way to solve this. I think a Euclidean distance score or Pearson correlation score, combined with item-based or user-based filtering, would be a good start.
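
    As a taste of that approach, here is a small user-based-filtering sketch with a Pearson similarity (toy ratings invented for illustration, in the spirit of Segaran's book):

        from math import sqrt

        ratings = {  # user -> {article: rating}
            "alice": {"a1": 4.0, "a2": 3.0, "a3": 5.0},
            "bob":   {"a1": 4.5, "a2": 3.5, "a4": 4.0},
            "carol": {"a1": 2.0, "a3": 1.0, "a4": 5.0},
        }

        def pearson(u, v):
            """Pearson correlation over the articles both users rated."""
            common = sorted(set(ratings[u]) & set(ratings[v]))
            n = len(common)
            if n < 2:
                return 0.0
            xs = [ratings[u][a] for a in common]
            ys = [ratings[v][a] for a in common]
            num = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n
            den = sqrt((sum(x * x for x in xs) - sum(xs) ** 2 / n)
                       * (sum(y * y for y in ys) - sum(ys) ** 2 / n))
            return num / den if den else 0.0

        # Users most similar to alice; their a4 ratings would drive her prediction.
        print({v: round(pearson("alice", v), 2) for v in ratings if v != "alice"})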

    An excellent book on the topic is Programming Collective Intelligence by Toby Segaran.
