Getting all links of a webpage using Ruby

盖世英雄少女心 · asked 2021-02-08 06:36

I'm trying to retrieve every external link of a webpage using Ruby. I'm using String.scan with this regex:

/href="https?:[^"]*|href='https?:[^']*/
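A minimal sketch of how that scan might be wired up, assuming the page is fetched with open-uri and using example.com as a placeholder URL:

    require 'open-uri'

    # Placeholder URL; read the page body into a string.
    html = URI.open('http://example.com/').read

    # Each match still carries its leading href=" or href=' prefix,
    # so the results need trimming before use.
    links = html.scan(/href="https?:[^"]*|href='https?:[^']*/)
    puts links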


        
5 Answers
  •  暗喜 (OP)
     2021-02-08 07:30

    Mechanize uses Nokogiri under the hood but has built-in niceties for parsing HTML, including links:

    require 'mechanize'
    
    agent = Mechanize.new
    page = agent.get('http://example.com/')
    
    # keep only links whose href starts with http or https
    page.links_with(:href => /^https?/).each do |link|
      puts link.href
    end
    

    Using a parser is almost always better than using regular expressions for parsing HTML. This is a frequently asked question here on Stack Overflow, and this is the most famous answer. Why is that? Because constructing a robust regular expression that can handle real-world variations of HTML, some valid and some not, is very difficult, and ultimately more complicated than a simple parsing solution that will work for just about every page that will render in a browser.
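
    For comparison, here is a minimal sketch of the same extraction done directly with Nokogiri; the URL is a placeholder, and the CSS attribute selector keeps only absolute http(s) links:

    require 'nokogiri'
    require 'open-uri'

    # Placeholder URL; parse the fetched page into a DOM.
    doc = Nokogiri::HTML(URI.open('http://example.com/'))

    # Select anchors whose href starts with http or https.
    doc.css('a[href^="http"]').each do |a|
      puts a['href']
    end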
