I'm trying to retrieve every external link of a webpage using Ruby. I'm using String#scan
with this regex:
/href=\"https?:[^\"]*|href=\'https?:[^\
Mechanize uses Nokogiri under the hood but has built-in niceties for parsing HTML, including links:
require 'mechanize'

agent = Mechanize.new
page = agent.get('http://example.com/')

# links_with filters the page's links; here we keep only those whose
# href starts with http or https, skipping relative and fragment-only links.
page.links_with(:href => /^https?/).each do |link|
  puts link.href
end
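Since Mechanize wraps Nokogiri, you can also drop down to Nokogiri directly when you don't need a full browser-like agent. A minimal sketch, assuming the page is fetched with open-uri:

require 'nokogiri'
require 'open-uri'

doc = Nokogiri::HTML(URI.open('http://example.com/'))

# Grab every anchor's href and keep only absolute http/https URLs.
doc.css('a[href]').map { |a| a['href'] }.grep(/\Ahttps?:/).each do |href|
  puts href
end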
Using a parser is almost always better than using regular expressions for parsing HTML. This question comes up often here on Stack Overflow, with this being the most famous answer. Why? Because constructing a regular expression robust enough to handle real-world variations of HTML, some valid, some not, is very difficult, and it ends up more complicated than a simple parser-based solution that works for just about any page that will render in a browser.
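To make that concrete, here is a contrived illustration (the markup below is hypothetical, but every variation in it is something browsers render without complaint, and none of them match the regex above):

require 'nokogiri'

# Three anchors a browser will follow: uppercase HREF, whitespace
# around "=", and an unquoted attribute value.
html = <<~HTML
  <a HREF="https://example.com/a">one</a>
  <a href = "https://example.com/b">two</a>
  <a href=https://example.com/c>three</a>
HTML

puts html.scan(/href="https?:[^"]*|href='https?:[^']*/).inspect
# => []
puts Nokogiri::HTML(html).css('a[href]').map { |a| a['href'] }.inspect
# => ["https://example.com/a", "https://example.com/b", "https://example.com/c"]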