package com.muthu;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.jsoup.Jsoup;
import org.jsoup.helper.Validate;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
If you connect to a URL, it will only parse the current page. But you can 1.) connect to a URL, 2.) parse the information you need, 3.) select all further links, 4.) connect to them and 5.) continue this as long as there are new links.
One thing to consider: you have to store the links you've already visited, otherwise the crawler can run in circles.
Edit:
(Note: you have to add some changes / error-handling code yourself.)
List<String> visitedUrls = new ArrayList<>(); // Store all links you've already visited

public void visitUrl(String url) throws IOException
{
    url = url.toLowerCase(); // Now it's case-insensitive

    if( !visitedUrls.contains(url) ) // Do this only if not visited yet
    {
        visitedUrls.add(url); // Mark this url as visited

        Document doc = Jsoup.connect(url).get(); // Connect to url and parse document

        /* ... Select your data here ... */

        Elements nextLinks = doc.select("a[href]"); // Select next links - add more restrictions!

        for( Element next : nextLinks ) // Iterate over all links
        {
            visitUrl(next.absUrl("href")); // Recursive call for each next link
        }
    }
}
You have to add more restrictions / checks at the part where the next links are selected (maybe you want to skip / ignore some of them), and some error handling; a sketch of both follows.
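For example, error handling could look roughly like this. This is only a minimal, untested sketch: the allowedHost field, the 5-second timeout and the naive contains() host check are assumptions for illustration, not part of the original example.

String allowedHost = "example.com"; // Only follow links on this host (assumed)

public void visitUrl(String url)
{
    url = url.toLowerCase();

    if( visitedUrls.contains(url) ) // Already visited -> nothing to do
        return;

    visitedUrls.add(url); // Mark this url as visited

    try
    {
        Document doc = Jsoup.connect(url).timeout(5000).get(); // Give up after 5 seconds

        /* ... Select your data here ... */

        for( Element next : doc.select("a[href]") )
        {
            String href = next.absUrl("href");

            if( href.contains(allowedHost) ) // Naive restriction: same host only
                visitUrl(href);
        }
    }
    catch( IOException e ) // Covers HttpStatusException, timeouts, etc.
    {
        System.err.println("Could not fetch " + url + ": " + e.getMessage());
    }
}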
Edit 2:
To skip ignored links you can use this: before you call the visitUrl() method with the new link to parse, you check whether this new URL contains any of the ignored keywords. If it contains at least one, it will be skipped. I modified the example a bit to do so (but it's not tested yet!).
List<String> visitedUrls = new ArrayList<>(); // Store all links you've already visited
Set<String> ignore = new HashSet<>();         // Store all keywords you want to ignore

// ...

/*
 * Add keywords to the ignore list. Each link that contains one of these
 * words will be skipped.
 *
 * Do this in eg. the constructor, a static block or an init method.
 */
ignore.add(".twitter.com");

// ...

public void visitUrl(String url) throws IOException
{
    url = url.toLowerCase(); // Now it's case-insensitive

    if( !visitedUrls.contains(url) ) // Do this only if not visited yet
    {
        visitedUrls.add(url); // Mark this url as visited

        Document doc = Jsoup.connect(url).get(); // Connect to url and parse document

        /* ... Select your data here ... */

        Elements nextLinks = doc.select("a[href]"); // Select next links - add more restrictions!

        for( Element next : nextLinks ) // Iterate over all links
        {
            boolean skip = false;                    // If false: parse the url, if true: skip it
            final String href = next.absUrl("href"); // Select the 'href' attribute -> next link to parse

            for( String s : ignore ) // Iterate over all ignored keywords - maybe there's a better solution for this
            {
                if( href.contains(s) ) // If the url contains an ignored keyword it will be skipped
                {
                    skip = true;
                    break;
                }
            }

            if( !skip )
                visitUrl(href); // Recursive call for each remaining link
        }
    }
}
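Not tested either, but wiring it all together could look something like this; the class name MyCrawler and the seed URL are made up for illustration:

public class MyCrawler
{
    private final List<String> visitedUrls = new ArrayList<>();
    private final Set<String> ignore = new HashSet<>();

    public MyCrawler()
    {
        ignore.add(".twitter.com"); // Fill the ignore list once, up front
    }

    /* ... visitUrl(String) from above goes here ... */

    public static void main(String[] args) throws IOException
    {
        new MyCrawler().visitUrl("http://example.com/"); // Seed url (assumed)
    }
}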
Back in visitUrl(), parsing the next link is done by this:

final String href = next.absUrl("href");
/* ... */
visitUrl(href);
But possibly you should add some more stop-conditions to this part (e.g. a maximum crawl depth), as sketched below.
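For example, a depth limit could look like this, again as an untested sketch: the MAX_DEPTH constant and the extra depth parameter are assumptions. You'd start the crawl with visitUrl(seedUrl, 0).

private static final int MAX_DEPTH = 3; // Assumed limit - tune as needed

public void visitUrl(String url, int depth) throws IOException
{
    if( depth > MAX_DEPTH ) // Stop condition: don't crawl deeper than this
        return;

    url = url.toLowerCase();

    if( !visitedUrls.contains(url) ) // Do this only if not visited yet
    {
        visitedUrls.add(url);

        Document doc = Jsoup.connect(url).get();

        /* ... Select your data here ... */

        for( Element next : doc.select("a[href]") )
        {
            visitUrl(next.absUrl("href"), depth + 1); // Child links are one level deeper
        }
    }
}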