Using Jsoup, how can I fetch all the information that resides behind each link?

暗喜 2021-01-20 03:17
    package com.muthu;

    import java.io.IOException;
    import org.jsoup.Jsoup;
    import org.jsoup.helper.Validate;
    import org.jsoup.nodes.Document;

1 Answer
  •  无人及你
    2021-01-20 04:14

    If you connect to a URL, it will only parse the current page. But you can 1) connect to a URL, 2) parse the information you need, 3) select all further links, 4) connect to them, and 5) continue this as long as there are new links.

    Considerations:

    • You need a list (or a similar structure) in which you store the links you have already parsed
    • You have to decide whether you want only links from this site or external ones too (see the sketch after this list)
    • You have to skip pages like "about", "contact", etc.
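
    For the internal/external decision, here is a minimal sketch (my addition, not from the answer; isSameHost is a hypothetical helper) that keeps only links on the host you started from:

    import java.net.URI;
    import java.net.URISyntaxException;

    // Hypothetical helper: true only if nextUrl points to the same host as the
    // start URL, so external links can be filtered out before visiting them.
    public static boolean isSameHost(String startUrl, String nextUrl)
    {
        try
        {
            String startHost = new URI(startUrl).getHost();
            String nextHost = new URI(nextUrl).getHost();
            return startHost != null && startHost.equalsIgnoreCase(nextHost);
        }
        catch( URISyntaxException e )
        {
            return false; // Unparseable URLs are treated as external and skipped
        }
    }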

    Edit:
    (Note: you will have to make some changes and add error-handling code)

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;
    import org.jsoup.select.Elements;

    List<String> visitedUrls = new ArrayList<>(); // Stores all links you've already visited

    public void visitUrl(String url) throws IOException
    {
        url = url.toLowerCase(); // Now it's case-insensitive

        if( !visitedUrls.contains(url) ) // Do this only if not visited yet
        {
            visitedUrls.add(url); // Remember the URL *before* recursing, or you'll revisit it forever

            Document doc = Jsoup.connect(url).get(); // Connect to the URL and parse the document

            /* ... Select your data here ... */

            Elements nextLinks = doc.select("a[href]"); // Select the next links - add more restrictions!

            for( Element next : nextLinks ) // Iterate over all links
            {
                visitUrl(next.absUrl("href")); // Recursive call for each link found
            }
        }
    }
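
    A usage sketch, assuming the field and method above live in a class called Crawler (a hypothetical name), started from a seed URL:

    public static void main(String[] args) throws IOException
    {
        new Crawler().visitUrl("http://example.com/"); // Hypothetical seed URL
    }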
    

    You have to add more restrictions / checks where the next links are selected (maybe you want to skip or ignore some), plus some error handling.
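
    For instance, one possible way to harden the connect step (a sketch under my own assumptions, not the answer's code) is to set a timeout and skip pages that fail to load instead of letting the whole crawl die:

    Document doc;
    try
    {
        doc = Jsoup.connect(url)
                   .timeout(5000) // Give up after 5 seconds
                   .get();
    }
    catch( IOException e ) // Also covers HTTP status errors and unsupported content types
    {
        System.err.println("Skipping " + url + ": " + e.getMessage());
        return; // Skip this page and carry on with the rest of the crawl
    }

    You could also tighten the selector, e.g. doc.select("a[href^=http]") to follow only absolute http(s) links.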


    Edit 2:

    To skip ignored links you can use this:

    1. Create a Set / List / whatever, in which you store the keywords you want to ignore
    2. Fill it with those keywords
    3. Before you call the visitUrl() method with a new link, check whether the new URL contains any of the ignored keywords. If it contains at least one, it is skipped.

    I modified the example a bit to do so (but it's not tested yet!).

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;
    import org.jsoup.select.Elements;

    List<String> visitedUrls = new ArrayList<>(); // Stores all links you've already visited
    Set<String> ignore = new HashSet<>(); // Stores all keywords you want to ignore

    // ...


    /*
     * Add keywords to the ignore list. Each link that contains one of these
     * words will be skipped.
     *
     * Do this in e.g. the constructor, a static block or an init method.
     */
    ignore.add(".twitter.com");

    // ...


    public void visitUrl(String url) throws IOException
    {
        url = url.toLowerCase(); // Now it's case-insensitive

        if( !visitedUrls.contains(url) ) // Do this only if not visited yet
        {
            visitedUrls.add(url); // Remember the URL *before* recursing, or you'll revisit it forever

            Document doc = Jsoup.connect(url).get(); // Connect to the URL and parse the document

            /* ... Select your data here ... */

            Elements nextLinks = doc.select("a[href]"); // Select the next links - add more restrictions!

            for( Element next : nextLinks ) // Iterate over all links
            {
                boolean skip = false; // If false: parse the URL, if true: skip it
                final String href = next.absUrl("href"); // Resolve the 'href' attribute -> next link to parse

                for( String s : ignore ) // Iterate over all ignored keywords - maybe there's a better solution for this
                {
                    if( href.contains(s) ) // If the URL contains an ignored keyword it will be skipped
                    {
                        skip = true;
                        break;
                    }
                }

                if( !skip )
                    visitUrl(href); // Recursive call for each remaining link
            }
        }
    }
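
    One design note (my suggestion, not part of the answer): contains() on an ArrayList is a linear scan, so for larger crawls a HashSet makes the visited check O(1):

    Set<String> visitedUrls = new HashSet<>(); // contains()/add() in O(1) instead of a linear scan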
    

    Following the next link is done by this part:

    final String href = next.absUrl("href");
    /* ... */
    visitUrl(href);


    But you should probably add some more stop conditions at this point.
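
    For example, a depth limit is a simple extra stop condition (a sketch of my own, not from the answer; MAX_DEPTH is a hypothetical constant):

    private static final int MAX_DEPTH = 3; // Hypothetical limit on how deep the crawl may go

    public void visitUrl(String url, int depth) throws IOException
    {
        if( depth > MAX_DEPTH )
            return; // Stop condition: don't follow links any deeper

        // ... same body as above, but recurse with visitUrl(href, depth + 1)
    }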
