Android: how to get URLs in a loaded/loading page

终归单人心 2021-01-26 03:17

Hello everyone, I want to know if there is any way to find the URLs in a page that has been loaded (or is loading) in a WebView. For example:

webView.loadUrl("https://stackoverflow.com"); // this is the URL string

How can I get the URLs contained in that loaded page?

2 Answers
  • 2021-01-26 03:45

    First, set a WebViewClient on your WebView; in its onPageFinished you inject a bit of JavaScript that alerts the page's HTML:

    @Override
    public void onPageFinished(WebView view, String url) {
        // Push the page's <body> HTML into a JavaScript alert
        webview1.loadUrl("javascript:alert(document.getElementsByTagName('body')[0].innerHTML);");
    }
    

    Then set a second client, a WebChromeClient:

    webview1.setWebChromeClient(new MyWebChromeClient());
    

    In the WebChromeClient, after having declared a boolean field navigationToLink initialized to false, override onJsAlert like this:

    @Override
    public boolean onJsAlert(final WebView view, String url, final String transfert, JsResult result) {
        if (!navigationToLink) {
            // "transfert" carries the HTML passed to alert() above
            Document html = Jsoup.parse(transfert);
            Elements links = html.select("a[href]");
            for (Element link : links) {
                if (link.attr("href").contains("youtube.com")) {
                    view.loadUrl(link.attr("href"));
                    navigationToLink = true;
                }
            }
        }
        result.confirm(); // dismiss the alert dialog
        return true;
    }
    

    This can help for grabbing the links out of the loaded page.
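
    A minimal sketch of how the two clients might be wired together (assuming an Activity with a WebView field named webview1, a boolean field navigationToLink, and imports for android.webkit and org.jsoup in place):

    webview1.getSettings().setJavaScriptEnabled(true);

    // First client: when the page finishes loading, push its <body> HTML
    // into a JavaScript alert so the WebChromeClient can intercept it.
    webview1.setWebViewClient(new WebViewClient() {
        @Override
        public void onPageFinished(WebView view, String url) {
            view.loadUrl("javascript:alert(document.getElementsByTagName('body')[0].innerHTML);");
        }
    });

    // Second client: intercept the alert, parse the HTML with Jsoup,
    // and navigate to the first matching link.
    webview1.setWebChromeClient(new WebChromeClient() {
        @Override
        public boolean onJsAlert(WebView view, String url, String message, JsResult result) {
            if (!navigationToLink) {
                Document html = Jsoup.parse(message);
                for (Element link : html.select("a[href]")) {
                    if (link.attr("href").contains("youtube.com")) {
                        view.loadUrl(link.attr("href"));
                        navigationToLink = true;
                        break;
                    }
                }
            }
            result.confirm(); // consume the alert so no dialog pops up
            return true;
        }
    });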

  • 2021-01-26 04:04

    To get all the links/URLs from your WebView, you need an HTML parser to iterate over the page content. Then you can loop over the result list and check whether each link contains your YouTube channel URL, or whatever URL you are looking for.

    1) You can use jsoup; this is an example:

    File input = new File("/tmp/input.html");
    Document doc = Jsoup.parse(input, "UTF-8", "http://example.com/"); // the base URI resolves relative links

    Elements links = doc.select("a[href]");       // get all "a" elements with an "href"
    Elements pngs = doc.select("img[src$=.png]"); // get all "img" with src ending in .png

    Element masthead = doc.select("div.masthead").first(); // first <div class="masthead">
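
    When the HTML comes from the WebView rather than from a file, a sketch along the same lines (the String htmlFromWebView is assumed to hold the page source, e.g. grabbed with the alert trick from the other answer):

    // Parse the in-memory HTML; the base URI lets absUrl() resolve relative links.
    Document doc = Jsoup.parse(htmlFromWebView, "https://stackoverflow.com/");
    List<String> urls = new ArrayList<String>();
    for (Element link : doc.select("a[href]")) {
        urls.add(link.absUrl("href")); // absolute URL of each link on the page
    }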
    

    2) Or use the HTML Parser library:

    public static List<String> getLinksOnPage(final String url) {
        final List<String> result = new LinkedList<String>();

        try {
            final Parser htmlParser = new Parser(url);
            // Collect every link tag (<a>) found in the page
            final NodeList tagNodeList = htmlParser.extractAllNodesThatMatch(new NodeClassFilter(LinkTag.class));
            for (int j = 0; j < tagNodeList.size(); j++) {
                final LinkTag loopLink = (LinkTag) tagNodeList.elementAt(j);
                result.add(loopLink.getLink());
            }
        } catch (ParserException e) {
            e.printStackTrace(); // TODO handle error
        }

        return result;
    }
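
    A possible usage of that helper, filtering for the YouTube link mentioned above (the URL is just a placeholder):

    List<String> links = getLinksOnPage("https://stackoverflow.com/");
    for (String link : links) {
        if (link.contains("youtube.com")) {
            System.out.println(link); // this is the link you were looking for
        }
    }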
    

    3) Or create your own parser, something like:

    String HTMLPage; // the HTML of the page, obtained as a String beforehand
    Pattern linkPattern = Pattern.compile("(<a[^>]+>.+?</a>)", Pattern.CASE_INSENSITIVE | Pattern.DOTALL);
    Matcher pageMatcher = linkPattern.matcher(HTMLPage);
    ArrayList<String> links = new ArrayList<String>();
    while (pageMatcher.find()) {
        links.add(pageMatcher.group()); // the whole <a ...>...</a> element
    }
    

    The links ArrayList will then contain all the links in the page.

    PS: You can edit linkPattern to filter some links.
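
    For instance, a rough variant of the pattern that captures only the href value and keeps just the YouTube links (still a simple regex sketch, not a full HTML parser):

    Pattern hrefPattern = Pattern.compile("<a[^>]+href=[\"']([^\"'>]+)[\"']", Pattern.CASE_INSENSITIVE);
    Matcher hrefMatcher = hrefPattern.matcher(HTMLPage);
    ArrayList<String> youtubeLinks = new ArrayList<String>();
    while (hrefMatcher.find()) {
        String href = hrefMatcher.group(1); // group 1 = the href value only
        if (href.contains("youtube.com")) {
            youtubeLinks.add(href);
        }
    }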
