How to replace plain URLs with links?

生来不讨喜 2020-11-21 05:42

I am using the function below to match URLs inside a given text and replace them with HTML links. The regular expression is working great, but currently it only replaces the first URL in the text. How can I make the replacement apply to every match?
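
A minimal sketch of the kind of function described (the original snippet is not reproduced here; the function name and pattern are illustrative, based on the same expression used in the answers below, and deliberately lacks the global flag, which is why only the first URL gets replaced):

    // Illustrative only, not the asker's exact code. Without the "g" flag,
    // String.replace() stops after the first match.
    function replaceURLWithHTMLLinks(text) {
      var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/i;
      return text.replace(exp, "<a href='$1'>$1</a>");
    }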

24 Answers
  • 2020-11-21 05:45

    The e-mail detection in Travitron's answer above did not work for me, so I extended/replaced it with the following (C# code).

    // Requires: using System.Text.RegularExpressions;
    // Change e-mail addresses to mailto: links.
    const RegexOptions o = RegexOptions.Multiline | RegexOptions.IgnoreCase;
    const string pat3 = @"([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,6})";
    const string rep3 = @"<a href=""mailto:$1@$2.$3"">$1@$2.$3</a>";
    text = Regex.Replace(text, pat3, rep3, o);
    

    This allows for e-mail addresses like "firstname.secondname@one.two.three.co.uk".
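
    For reference, a JavaScript sketch of the same idea (not part of the answer above; simplified, with the pattern translated to a JavaScript regex literal):

    // Sketch: wrap e-mail addresses in mailto: links (simplified port of the C# above).
    function linkifyEmails(text) {
      var pat = /([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,6})/g;
      return text.replace(pat, '<a href="mailto:$1@$2.$3">$1@$2.$3</a>');
    }
    // linkifyEmails("write to firstname.secondname@one.two.three.co.uk")
    // wraps the whole address in a mailto: link.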

  • 2020-11-21 05:46

    I've written yet another JavaScript library; it might work better for you since it is very strict, producing the fewest possible false positives, while staying fast and small. I'm currently maintaining it actively, so please test it on the demo page and see how it works for you.

    link: https://github.com/alexcorvi/anchorme.js

  • 2020-11-21 05:50

    Try the function below:

    function anchorify(text){
      // First pass: link URLs that already carry a scheme (http, https, ftp, file).
      var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;
      var text1 = text.replace(exp, "<a href='$1'>$1</a>");
      // Second pass: link bare "www." addresses, skipping ones preceded by "/"
      // (those were already linked in the first pass).
      var exp2 = /(^|[^\/])(www\.[\S]+(\b|$))/gim;
      return text1.replace(exp2, '$1<a target="_blank" href="http://$2">$2</a>');
    }
    

    alert(anchorify("Hola amigo! https://www.sharda.ac.in/academics/"));

  • 2020-11-21 05:51

    Replacing URLs with links (Answer to the General Problem)

    The regular expression in the question misses a lot of edge cases. When detecting URLs, it's always better to use a specialized library that handles international domain names, new TLDs like .museum, parentheses and other punctuation within and at the end of the URL, and many other edge cases. See Jeff Atwood's blog post The Problem With URLs for an explanation of some of the other issues.

    The best summary of URL matching libraries is in Dan Dascalescu's answer (as of Feb 2014).


    "Make a regular expression replace more than one match" (Answer to the specific problem)

    Add a "g" to the end of the regular expression to enable global matching:

    /ig;
    

    But that only fixes the problem in the question where the regular expression was only replacing the first match. Do not use that code.
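
    For illustration of the flag's effect only (a generic example; as noted above, a proper library is still the better fix):

    // Without "g", String.replace() substitutes only the first match;
    // with "g", it substitutes every match in the string.
    var text = "See http://example.com and http://example.org";
    console.log(text.replace(/https?:\/\/\S+/i,  "<LINK>")); // See <LINK> and http://example.org
    console.log(text.replace(/https?:\/\/\S+/ig, "<LINK>")); // See <LINK> and <LINK>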

  • 2020-11-21 05:52

    Correct URL detection with support for international domains and astral characters is not a trivial thing. The linkify-it library builds its regex from many conditions, and the final size is about 6 kilobytes :) . It's more accurate than all of the libraries currently referenced in the accepted answer.

    See the linkify-it demo to check all the edge cases live and to test your own.

    If you need to linkify HTML source, you should parse it first and iterate over each text token separately.
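
    A small sketch of what that looks like with linkify-it in Node.js (this assumes the documented match() API, which returns match objects with index, lastIndex, url and text; escaping of the surrounding text is left out here, which is exactly why real HTML should be parsed first):

    var linkify = require('linkify-it')();

    function linkifyPlainText(text) {
      var matches = linkify.match(text);   // null when nothing is found
      if (!matches) return text;
      var out = '', last = 0;
      matches.forEach(function (m) {
        out += text.slice(last, m.index);
        out += '<a href="' + m.url + '">' + m.text + '</a>';
        last = m.lastIndex;
      });
      return out + text.slice(last);
    }

    console.log(linkifyPlainText('Check github.com and https://example.com/path.'));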

  • 2020-11-21 05:53

    First off, rolling your own regexp to parse URLs is a terrible idea. You must imagine this is a common enough problem that someone has written, debugged and tested a library for it, according to the RFCs. URIs are complex - check out the code for URL parsing in Node.js and the Wikipedia page on URI schemes.

    There are a ton of edge cases when it comes to parsing URLs: international domain names, actual (.museum) vs. nonexistent (.etc) TLDs, weird punctuation including parentheses, punctuation at the end of the URL, IPv6 hostnames, etc.

    I've looked at a ton of libraries, and there are a few worth using despite some downsides:

    • Soapbox's linkify has seen some serious effort put into it, and a major refactor in June 2015 removed the jQuery dependency. It still has issues with IDNs.
    • AnchorMe is a newcomer that claims to be faster and leaner. Some IDN issues as well.
    • Autolinker.js lists features very specifically (e.g. "Will properly handle HTML input. The utility will not change the href attribute inside anchor (<a>) tags"). I'll throw some tests at it when a demo becomes available.

    Libraries that I've disqualified quickly for this task:

    • Django's urlize didn't handle certain TLDs properly (here is the official list of valid TLDs). No demo.
    • autolink-js wouldn't detect "www.google.com" without http://, so it's not quite suitable for autolinking "casual URLs" (without a scheme/protocol) found in plain text.
    • Ben Alman's linkify hasn't been maintained since 2009.

    If you insist on a regular expression, the most comprehensive is the URL regexp from Component, though just by looking at it you can tell it will falsely detect some non-existent two-letter TLDs.
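
    To see why that happens, here is a deliberately simplified illustration (not the actual Component regexp): any rule that accepts "some letters, a dot, then a plausible number of letters" will match strings whose "TLD" does not exist.

    // Simplified pattern for illustration only; real URL regexps are far longer.
    var naive = /\b[a-z0-9-]+\.[a-z]{2,6}\b/i;
    console.log(naive.test('see notes.etc for details')); // true, but ".etc" is not a TLD
    console.log(naive.test('see example.museum'));        // true, and ".museum" is real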
