Is it possible to develop a powerful web search engine using Erlang, Mnesia & Yaws?

Asked by 感情败类 on 2021-02-02 04:07

I am thinking of developing a web search engine using Erlang, Mnesia & Yaws. Is it possible to build a powerful and fast web search engine using this software stack? What w

4 Answers
  • 2021-02-02 04:30

    I would recommend CouchDB instead of Mnesia.

    • Mnesia doesn't have Map-Reduce, CouchDB does (correction - see comments)
    • Mnesia is statically typed, CouchDB is a document database (and pages are documents, i.e. a better fit to the information model in my opinion)
    • Mnesia is primarily intended to be a memory-resident database

    YAWS is pretty good. You should also consider MochiWeb.

    You won't go wrong with Erlang.

  • 2021-02-02 04:40

    As far as I know, Powerset's natural language processing search engine was developed using Erlang.

    Did you look at CouchDB (which is written in Erlang as well) as a possible tool to help you solve a few problems along the way?

  • 2021-02-02 04:40

    In the 'rdbms' contrib, there is an implementation of the Porter Stemming Algorithm. It was never integrated into 'rdbms', so it's basically just sitting out there. We have used it internally, and it worked quite well, at least for datasets that weren't huge (I haven't tested it on huge data volumes).

    The relevant modules are:

    rdbms_wsearch.erl
    rdbms_wsearch_idx.erl
    rdbms_wsearch_porter.erl
    

    Then there is, of course, the Disco Map-Reduce framework.

    Whether or not you can make the fastest engine out there, I couldn't say. Is there a market for a faster search engine? I've never had problems with the speed of e.g. Google. But a search facility that increased my chances of finding good answers to my questions would interest me.

  • 2021-02-02 04:41

    Erlang can make the most powerful web crawler today. Let me take you through my simple crawler.

    Step 1. I create a simple parallelism module, which I call mapreduce

    -module(mapreduce).
    -export([compute/2]).
    %%=====================================================================
    %% usage example
    %% Module = string
    %% Function = tokens
    %% List_of_arg_lists = [["file\r\nfile","\r\n"],["muzaaya_joshua","_"]]
    %% Ans = [["file","file"],["muzaaya","joshua"]]
    %% Job being done by two processes
    %% i.e no. of processes spawned = length(List_of_arg_lists)
    
    compute({Module, Function}, List_of_arg_lists) ->
        S = self(),
        Ref = erlang:make_ref(),
        PJob = fun(Arg_list) -> erlang:apply(Module, Function, Arg_list) end,
        Spawn_job = fun(Arg_list) ->
                        spawn(fun() -> execute(S, Ref, PJob, Arg_list) end)
                    end,
        lists:foreach(Spawn_job, List_of_arg_lists),
        gather(length(List_of_arg_lists), Ref, []).

    gather(0, _, L) -> L;
    gather(N, Ref, L) ->
        receive
            {Ref, {'EXIT', _}} -> gather(N - 1, Ref, L);
            {Ref, Result}      -> gather(N - 1, Ref, [Result | L])
        end.

    execute(Parent, Ref, Fun, Arg) ->
        Parent ! {Ref, (catch Fun(Arg))}.
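
As a quick sanity check, the example from the module's header comment can be reproduced in the shell (assuming the module above has been compiled):

```erlang
%% In the Erlang shell, after c(mapreduce).
%% Each argument list is handed to string:tokens/2 in its own
%% spawned process; the order of the two result lists depends on
%% which process replies to the gathering process first.
1> mapreduce:compute({string, tokens},
       [["file\r\nfile", "\r\n"], ["muzaaya_joshua", "_"]]).
```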

    Step 2. The HTTP Client

    One would normally use either the inets httpc module built into Erlang, or ibrowse. However, for memory management and speed (keeping the memory footprint as low as possible), a good Erlang programmer might choose to shell out to curl instead. Calling os:cmd/1 with a curl command line delivers the output directly to the calling Erlang function. Better still, have curl write its output to files, and give the application a separate process that reads and parses those files.

    The command line:

    curl "http://www.erlang.org" -o "/downloaded_sites/erlang/file1.html"

    In Erlang:

    os:cmd("curl \"http://www.erlang.org\" -o \"/downloaded_sites/erlang/file1.html\"").

    So you can spawn many such processes. Remember to escape both the URL and the output file path when building the command. A separate process watches the directory of downloaded pages; it reads and parses them, and after parsing it can delete them, save them in a different location, or, even better, archive them with the zip module:
    folder_check() ->
        spawn(fun() -> check_and_report() end),
        ok.

    -define(CHECK_INTERVAL, 5).

    check_and_report() ->
        %% Avoid filelib:list_dir/1 here: with very many
        %% files it builds the whole listing in memory.
        case os:cmd("ls | wc -l") of
            "0\n" -> ok;
            "0"   -> ok;
            _     -> ?MODULE:new_files_found()
        end,
        timer:sleep(timer:seconds(?CHECK_INTERVAL)),
        %% keep checking
        check_and_report().

    new_files_found() ->
        %% Inform our parser that there are files to pick up.
        %% Once it has parsed a file, it should delete it or
        %% save it somewhere else.
        gen_server:cast(?MODULE, files_detected).
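
Going back to the curl step: a hypothetical wrapper (not part of the original crawler) that builds the command line and double-quotes both arguments via ~p formatting:

```erlang
-module(fetcher).
-export([fetch/2]).

%% Hypothetical helper: build the curl command line, double-quoting
%% both the URL and the output path via ~p. This only sketches the
%% escaping idea; URLs containing ", $ or ` would still need proper
%% shell sanitising before being passed to os:cmd/1.
fetch(Url, OutFile) ->
    Cmd = io_lib:format("curl -s ~p -o ~p", [Url, OutFile]),
    os:cmd(lists:flatten(Cmd)).
```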
    

    Step 3. The HTML parser.
    It is best to use mochiweb's HTML parser together with XPath. These will help you parse the markup, pick out your favourite HTML tags, and extract their contents. In the examples below, I focused only on the keywords, description and title in the markup.
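
A minimal sketch of such a parser, assuming mochiweb is on the code path (mochiweb_html:parse/1 is mochiweb's real entry point; the module below and the exact shape of its result are illustrative, not the actual spider_bot code):

```erlang
-module(page_parser).
-export([parse/1]).

%% Parse raw HTML into mochiweb's {Tag, Attrs, Children} tree and
%% pull out the <title> text and all <meta name=... content=...> pairs.
parse(Html) ->
    Tree = mochiweb_html:parse(Html),
    {title(Tree), metas(Tree)}.

title(Tree) ->
    case find(<<"title">>, Tree) of
        {_, _, Children} ->
            iolist_to_binary([C || C <- Children, is_binary(C)]);
        undefined ->
            <<>>
    end.

metas(Tree) ->
    [{proplists:get_value(<<"name">>, Attrs),
      proplists:get_value(<<"content">>, Attrs)}
     || {<<"meta">>, Attrs, _} <- elements(Tree)].

%% Depth-first search for the first element with a given tag name.
find(Tag, {Tag, _, _} = Node) -> Node;
find(Tag, {_, _, Children})   -> find_in(Tag, Children);
find(_, _)                    -> undefined.

find_in(_, []) -> undefined;
find_in(Tag, [H | T]) ->
    case find(Tag, H) of
        undefined -> find_in(Tag, T);
        Node      -> Node
    end.

%% Flatten the tree into a list of element nodes
%% (text binaries and comment nodes are skipped).
elements({_, _, Children} = Node) ->
    [Node | lists:append([elements(C) || C <- Children, is_tuple(C)])];
elements(_) -> [].
```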


    Testing the module in the shell gives good results:

    2> spider_bot:parse_url("http://erlang.org").
    [[[],[],
      {"keywords",
       "erlang, functional, programming, fault-tolerant, distributed, multi-platform, portable, software, multi-core, smp, concurrency "},
      {"description","open-source erlang official website"}],
     {title,"erlang programming language, official website"}]
    

    3> spider_bot:parse_url("http://facebook.com").
    [[{"description",
       " facebook is a social utility that connects people with friends and others who work, study and live around them. people use facebook to keep up with friends, upload an unlimited number of photos, post links
     and videos, and learn more about the people they meet."},
      {"robots","noodp,noydir"},
        [],[],[],[]],
     {title,"incompatible browser | facebook"}]
    

    4> spider_bot:parse_url("http://python.org").
    [[{"description",
       "      home page for python, an interpreted, interactive, object-oriented, extensible\n      programming language. it provides an extraordinary combination of clarity and\n      versatility, and is free and
    comprehensively ported."},
      {"keywords",
       "python programming language object oriented web free source"},
      []],
     {title,"python programming language – official website"}]
    

    5> spider_bot:parse_url("http://www.house.gov/").
    [[[],[],[],
      {"description",
       "home page of the united states house of representatives"},
      {"description",
       "home page of the united states house of representatives"},
      [],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
      [],[],[]|...],
     {title,"united states house of representatives, 111th congress, 2nd session"}]
    


    You can now see that we can index the pages against their keywords, together with a good schedule of page revisits. Another challenge was how to make the crawler move around the entire web, from domain to domain, but that one is easy: parse each HTML file for its href tags. Make the HTML parser extract all href values, and then a few regular expressions here and there will pick out the links under a given domain.
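
A hypothetical sketch of that extraction step (not the actual spider_connect code), pulling href values out of raw HTML with re:run/3 and then filtering for a given domain:

```erlang
-module(link_extract).
-export([hrefs/1, under_domain/2]).

%% Pull every href="..." value out of raw HTML. A regex is a rough
%% substitute for walking the parsed tree, but it is cheap and works
%% on pages that a strict parser would reject.
hrefs(Html) ->
    case re:run(Html, "href=\"([^\"]+)\"",
                [global, {capture, all_but_first, list}]) of
        {match, Matches} -> [Link || [Link] <- Matches];
        nomatch          -> []
    end.

%% Keep only the links that mention the given domain.
under_domain(Domain, Links) ->
    [L || L <- Links, string:str(L, Domain) > 0].
```

For example, `link_extract:hrefs("<a href=\"http://erlang.org/doc.html\">docs</a>")` yields `["http://erlang.org/doc.html"]`.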

    Running the crawler

    7> spider_connect:conn2("http://erlang.org").        
    
            Links: ["http://www.erlang.org/index.html",
                    "http://www.erlang.org/rss.xml",
                    "http://erlang.org/index.html","http://erlang.org/about.html",
                    "http://erlang.org/download.html",
                    "http://erlang.org/links.html","http://erlang.org/faq.html",
                    "http://erlang.org/eep.html",
                    "http://erlang.org/starting.html",
                    "http://erlang.org/doc.html",
                    "http://erlang.org/examples.html",
                    "http://erlang.org/user.html",
                    "http://erlang.org/mirrors.html",
                    "http://www.pragprog.com/titles/jaerlang/programming-erlang",
                    "http://oreilly.com/catalog/9780596518189",
                    "http://erlang.org/download.html",
                    "http://www.erlang-factory.com/conference/ErlangUserConference2010/speakers",
                    "http://erlang.org/download/otp_src_R14B.readme",
                    "http://erlang.org/download.html",
                    "https://www.erlang-factory.com/conference/ErlangUserConference2010/register",
                    "http://www.erlang-factory.com/conference/ErlangUserConference2010/submit_talk",
                    "http://www.erlang.org/workshop/2010/",
                    "http://erlangcamp.com","http://manning.com/logan",
                    "http://erlangcamp.com","http://twitter.com/erlangcamp",
                    "http://www.erlang-factory.com/conference/London2010/speakers/joearmstrong/",
                    "http://www.erlang-factory.com/conference/London2010/speakers/RobertVirding/",
                    "http://www.erlang-factory.com/conference/London2010/speakers/MartinOdersky/",
                    "http://www.erlang-factory.com/",
                    "http://erlang.org/download/otp_src_R14A.readme",
                    "http://erlang.org/download.html",
                    "http://www.erlang-factory.com/conference/London2010",
                    "http://github.com/erlang/otp",
                    "http://erlang.org/download.html",
                    "http://erlang.org/doc/man/erl_nif.html",
                    "http://github.com/erlang/otp",
                    "http://erlang.org/download.html",
                    "http://www.erlang-factory.com/conference/ErlangUserConference2009",
                    "http://erlang.org/doc/efficiency_guide/drivers.html",
                    "http://erlang.org/download.html",
                    "http://erlang.org/workshop/2009/index.html",
                    "http://groups.google.com/group/erlang-programming",
                    "http://www.erlang.org/eeps/eep-0010.html",
                    "http://erlang.org/download/otp_src_R13B.readme",
                    "http://erlang.org/download.html",
                    "http://oreilly.com/catalog/9780596518189",
                    "http://www.erlang-factory.com",
                    "http://www.manning.com/logan",
                    "http://www.erlang.se/euc/08/index.html",
                    "http://erlang.org/download/otp_src_R12B-5.readme",
                    "http://erlang.org/download.html",
                    "http://erlang.org/workshop/2008/index.html",
                    "http://www.erlang-exchange.com",
                    "http://erlang.org/doc/highlights.html",
                    "http://www.erlang.se/euc/07/",
                    "http://www.erlang.se/workshop/2007/",
                    "http://erlang.org/eep.html",
                    "http://erlang.org/download/otp_src_R11B-5.readme",
                    "http://pragmaticprogrammer.com/titles/jaerlang/index.html",
                    "http://erlang.org/project/test_server",
                    "http://erlang.org/download-stats/",
                    "http://erlang.org/user.html#smtp_client-1.0",
                    "http://erlang.org/user.html#xmlrpc-1.13",
                    "http://erlang.org/EPLICENSE",
                    "http://erlang.org/project/megaco/",
                    "http://www.erlang-consulting.com/training_fs.html",
                    "http://erlang.org/old_news.html"]
    ok
    
    Storage: one of the most important concepts for a search engine. It is a big mistake to store search-engine data in an RDBMS like MySQL, Oracle or MS SQL: such systems are complex, and the applications that interface with them employ heuristic algorithms. This brings us to key-value stores, of which my two favourites are Couchbase Server and Riak. These are great distributed data stores. Another important parameter is caching, attained using, say, Memcached, which both of the storage systems mentioned above support. Storage systems for search engines ought to be schemaless DBMSs that focus on availability rather than consistency. Read more on search engines here: http://en.wikipedia.org/wiki/Web_search_engine
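
As an illustration of the key-value model, a toy in-memory inverted index using ETS (standing in for what Riak or Couchbase would do durably and at scale; all names here are illustrative):

```erlang
-module(kv_index).
-export([new/0, add/3, lookup/2]).

%% Toy inverted index: keyword -> URLs, kept in an ETS bag table.
%% Keywords are assumed to be strings (character lists). A production
%% search engine would keep this mapping in a distributed key-value
%% store such as Riak or Couchbase instead of local memory.
new() ->
    ets:new(kv_index, [bag, public]).

add(Tab, Url, Keywords) ->
    lists:foreach(
      fun(K) -> ets:insert(Tab, {string:to_lower(K), Url}) end,
      Keywords).

lookup(Tab, Keyword) ->
    [Url || {_, Url} <- ets:lookup(Tab, string:to_lower(Keyword))].
```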
