"Boilerplate" text is repeated generic text of the kind that search engines understandably don't like. Obvious examples are copyright statements and "placeholder" text on in-development pages. But can you rank pages that contain purely this type of unhelpful text?
I liked the W3C's recent redesign, which I felt was clean and modern. But I was a little disappointed to see a number of pages with no real content, some linked fairly prominently, such as the authoring tools page. These pages contain identical text and subheadings, with filler like "This intro text is boilerplate for the beta release of w3.org". Not exactly best practice, as far as I can see:
Google, of course, advise that webmasters should "Minimize boilerplate repetition".
W3.org is a highly authoritative domain, which makes it a good case study of whether Google will detect that these are valueless pages and rank them accordingly. So how do the W3C get on?
Unfortunately, Google seems coy about showing too many pages for specific searches these days. Searching for [site:w3.org "This intro text is boilerplate"] only gives us 30 results, most of which only target a single word in their titles. However, a number of these have secured high rankings on Google for the likes of [meta formats] (1st place) and [vertical applications] (5th):
In many cases, w3.org has other pages targeting the same words that, to Google's credit, it ranks higher than the boilerplate pages.
So, to rank boilerplate text it looks like you need:
- Decent domain weight and internal links to the pages
- Extra credit for a unique title and headings
- No meta elements required