It's widely considered that the best reason to validate one's HTML is to ensure that all browsers will treat it consistently and predictably.
The HTML5 doctype is <!DOCTYPE html>. The reason why one would want to comply here is getting standards mode in the easiest way possible: it's something you can memorize, unlike the HTML 4.01 or XHTML 1.0 doctypes, which you need to look up and copy and paste each time. Of course, the reason why you'd want standards mode is fewer surprises on the CSS layer.
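To make that concrete, here is a rough sketch of a complete document that both validates and gets standards mode (the title and body text are placeholders, not anything the answer prescribes):

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>Minimal HTML5 document</title>
      </head>
      <body>
        <p>Hello, world.</p>
      </body>
    </html>

The doctype line on its own is what flips browsers out of quirks mode; the rest is roughly the smallest document a conformance checker will pass without complaint.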
HTML5 also makes obsolete some features that are considered a waste of time, such as longdesc, summary and profile. (Note that people disagree on whether these are, indeed, a waste of time, but as currently drafted, HTML5 makes them obsolete.) That is, if you have limited resources to improve accessibility, your limited resources are better spent on something other than longdesc and summary. If you have limited resources for semantic purity, your resources are better spent on something other than making sure you have the right incantation in profile.
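For illustration (the file names and profile URL below are invented), this is the sort of HTML 4.01-era markup that the HTML5 draft discussed here makes non-conforming:

    <head profile="http://example.com/profiles/core">
      <title>Quarterly report</title>
    </head>
    <body>
      <img src="chart.png" alt="Sales chart"
           longdesc="chart-description.html">
      <table summary="Sales figures broken down by region and quarter">
        <!-- table rows go here -->
      </table>
    </body>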
The <font> element is obsoleted, because making it conforming would make anti-<font> standardistas think that the HTML5 people have gone crazy, which could lead to bad PR. <applet> is obsoleted mainly as a matter of principle of not giving special markup to one particular plug-in. The classid attribute on <object> is obsoleted, because it's in practice ActiveX-specific. Other features are obsoleted simply because they no longer do anything useful, such as the name attribute on <a> and the language attribute on <script>.
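As a small sketch of what those obsolete features look like next to their conforming replacements (the class names and file names here are made up):

    <!-- Obsolete in HTML5: -->
    <font color="red">Out of stock</font>
    <a name="chapter-2">Chapter 2</a>
    <script language="JavaScript" src="app.js"></script>

    <!-- Conforming equivalents: -->
    <span class="out-of-stock">Out of stock</span>   <!-- style it with CSS instead of <font> -->
    <a id="chapter-2">Chapter 2</a>                   <!-- id replaces the name attribute -->
    <script src="app.js"></script>                    <!-- language is redundant and simply dropped -->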
(I develop the Validator.nu HTML5 validator, which is also the HTML5 validation engine used by the W3C validator.)
Validation has never really been about getting consistent results across browsers, even before HTML5 began. That's a myth propagated by those who don't understand what they're talking about, even if they think they do.
The real reason for validation is, and always has been, purely an issue of quality assurance. It's just a way of detecting errors so that they can be fixed. Even though the results for any given error may be, or may soon become, consistent among browsers, it's still possible that the result itself is not what was intended.
It's important for authors to be able to catch errors in their code because cleaner, error-free markup is easier to work with and maintain, especially when working in a team environment. While most individual errors may end up being benign and not cause any major problems, there are some that can give unexpected results. For example, incorrectly overlapping or unclosed elements can cause unexpected layout problems in some cases, and letting a validator tell you where the error is helps in rectifying the problem. But if the results are filled with dozens of otherwise benign errors, it can make detecting and fixing the real problem more difficult than it needs to be.
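As a hypothetical illustration of that kind of error, overlapping inline elements like these don't look obviously wrong in the source, but the parser's error recovery splits them up:

    <!-- What the author wrote: -->
    <p>Some <b>bold and <i>bold-italic</b> text meant to stay italic</i> here.</p>

    <!-- Roughly what the browser builds after error recovery: -->
    <p>Some <b>bold and <i>bold-italic</i></b><i> text meant to stay italic</i> here.</p>

The rendering may still look right, but a selector such as b i now matches only part of the text the author wrapped, which is exactly the kind of puzzle a validator message pointing at the misplaced </b> saves you from.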
Given this, does it make any sense to limit one's HTML 5 to that which will validate, and what practical benefit will we get from doing so?
Yes, of course. You forget that the future is not fixed. In particular, you implicitly assume that the HTML 5 spec will never change and never deprecate any features. That assumption only cements the status quo. It is definitely desirable to remove support for some features in the long term, to make it easier for new developments to take place (in particular where those developments might conflict with each other).
There may be no immediate benefit in producing valid HTML 5 (except that it still makes validation, and thus development, easier). But there may be a long-range benefit: if most websites improve in quality, moving on beyond the current technologies and standards becomes much easier.
This is, indeed, one of my quibbles with HTML5. There's no point defining a subset of streams as 'valid' if a browser must handle all streams in the same way anyway. The eons spent on the WHATWG list debating fallback mechanisms are a massive waste of everyone's time, especially when XML should already have solved all the parsing issues.
It would have been a useful idea to produce a best-practices document on parsing legacy invalid documents, but this has no place in a web standard; it's just another ingredient to further muddy the waters around HTML5, which can't decide whether it wants to codify existing behaviour (like HTML 3.2 did), redefine a cleaner platform (like HTML 3.0 tried) or add new extensions piecemeal.
Anyhow, the question may be misplaced, because there will never be a browser that "fully supports HTML5". There is just far, far too much of it: browser manufacturers could not implement absolutely everything down to the minutiae even if they wanted to, which at least Microsoft explicitly do not. Instead, obviously useful features will be cherry-picked from it by vendors and will meet wider acceptance.
HTML5 is not a coherent HTML specification, it's Hixie's sprawling, unreadable and unfinished recipe for every random thing he thinks a web browser should do. It will fail. And W3's alternative approach, XHTML2, has already failed. There is no coherent future direction for web standards. We have dropped the ball.
W3C HTML5 validator maintainer here. I recently wrote a short “Why Validate?” section for the “About” section of the HTML5 validator:
http://validator.w3.org/nu/about.html#why-validate
The source for the text of that section is here:
https://github.com/validator/validator/blob/master/site/nu-about.html#L160
And pull requests with suggested refinements/additions are welcome.
What I have there currently is this:
The core reason to run your HTML documents through a conformance checker is simple: To catch unintended mistakes—mistakes you might have otherwise missed—so that you can fix them.
Beyond that, some document-conformance requirements (validity rules) in the HTML spec are there to help you and the users of your documents avoid certain kinds of potential problems. To explain the rationale behind those requirements, the HTML spec contains these two sections:
- rationale for syntax-level errors
- rationale for restrictions on content models and on attribute values
To summarize what’s stated in those two sections:
- There are some markup cases defined as errors because they are potential problems for accessibility, usability, interoperability, security, or maintainability, or because they can result in poor performance or cause your scripts to fail in ways that are hard to troubleshoot.
- Along with those, some markup cases are defined as errors because they can cause you to run into potential problems in HTML parsing and error-handling behavior, so that, say, you'd end up with some unintuitive, unexpected result in the DOM (a small example of this follows below).
Validating your documents alerts you to those potential problems.
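To make that second bullet concrete, here is a classic hypothetical example: a div is not allowed inside a p, and the parser's error handling quietly ends the paragraph early, so the DOM does not match what the markup appears to say:

    <!-- What the author wrote: -->
    <p>Intro text
      <div>A callout box</div>
      closing remark</p>

    <!-- Roughly the DOM the parser builds: -->
    <p>Intro text</p>
    <div>A callout box</div>
    closing remark
    <p></p>

Styles or scripts aimed at that paragraph never see the callout or the trailing text; a validator reports the div-inside-p before it turns into a debugging session.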
It's a good question.
The primary purpose of validation (for me at least) is to help me catch errors in my markup, and to give me a good base on which to build when testing pages in different browsers; if the markup is valid, and the page is borked in IE6, it's an IE6 issue.
The fact that browsers should all still behave in a predictable manner even if your markup includes technically invalid HTML5, such as a table summary or an anchor accesskey, muddies the waters somewhat.
As a general rule of thumb, I'd always want my pages to validate, for the aforementioned reason. However, if (for example) an attribute was dropped from the HTML5 spec without an apparently suitable replacement being added, I might be inclined to continue using the deprecated or obsolete attribute, and accept the validation errors.
As ever, I think it's a case of knowing your craft.
If you know what you're doing, and have made a conscious decision to build a page that doesn't validate for sound reasons, it's not a problem. If you're just writing code that doesn't validate because you don't know any better, that's another matter entirely.
Stephen