Lighthouse: [SEO Audits] Structured data is valid

Created on 25 Jan 2018  ·  36 Comments  ·  Source: GoogleChrome/lighthouse

Audit group: Content Best Practices
Description: JSON-LD structured data syntax is valid
Failure description: JSON-LD structured data syntax is invalid
Help text: Structured data contains rich metadata about a web page. The data may be used to provide more context to users in search results or social sharing. This audit is only checking for JSON-LD structured data syntax errors. For a checkup of Google search eligibility, try the Structured Data Testing Tool.

Success conditions:

  • the JSON can be parsed
  • the keys are not duplicated
  • @ keywords are whitelisted in the JSON-LD spec
  • @context URLs are valid
  • validation using jsonld.js
  • @types are whitelisted on schema.org
  • object properties are whitelisted on schema.org
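A rough sketch of how these conditions could chain together, in Node-style JavaScript. This is an illustration only, not the actual Lighthouse implementation, and the keyword list is abbreviated:

```js
// Rough, runnable outline of the success conditions above (not the actual
// Lighthouse code). Duplicate-key and schema.org checks are shown in more
// detail in later comments; here they are reduced to comments.
const jsonld = require('jsonld');

const JSONLD_KEYWORDS = new Set(['@context', '@id', '@type', '@value', '@graph']); // abbreviated

async function auditJsonLd(source) {
  let doc;
  try {
    doc = JSON.parse(source);                 // 1. the JSON can be parsed
  } catch (err) {
    return [`Parse error: ${err.message}`];
  }
  const errors = [];
  // 2. duplicate keys need the raw text, since JSON.parse keeps only the last value
  for (const key of Object.keys(doc)) {
    if (key.startsWith('@') && !JSONLD_KEYWORDS.has(key)) {
      errors.push(`Unknown keyword: ${key}`); // 3. @ keywords from the JSON-LD spec
    }
  }
  try {
    await jsonld.expand(doc);                 // 4.-5. @context URLs + jsonld.js validation
  } catch (err) {
    errors.push(err.message);
  }
  // 6.-7. @type and property names would then be checked against schema.org data
  return errors;
}
```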

Notes:

  • defers Google search eligibility and other more advanced validation logic to the Structured Data Testing Tool (SDTT)
  • implementation logic is built into Lighthouse and not open sourced separately
Labels: P2, seo

All 36 comments

@kdzwinel let's start by setting up a new repo on your personal GH account and getting the plumbing to work so that it is registered on npm and installed by LH. You can send me any code reviews outside of the LH repo.

TODO: Is there anything we can reuse from Chromium's "copyless paste" feature, which does JSON-LD parsing? https://github.com/chromium/chromium/search?utf8=%E2%9C%93&q=json-ld&type=

What's in scope for the first pass here? The three checkboxes above?

Also, do we know how much of the heavy lifting can be done by existing libraries?

Some of the lifting can be done with existing projects, eg https://github.com/zaach/jsonlint, assuming the license is compatible. @kdzwinel could you set up some benchmarks and see how good it is? Depending on it directly is probably fine for v1 but we'd also like to fold in other structured data schemas beyond JSON-LD, so it may be worth setting up a standalone SD validator repo anyway and getting the plumbing set up.

I do see quite a few json-ld projects already, so I was mostly curious on that side.
Seems like JSONlint would be good for providing better diagnostic data than JSON.parse does; that seems useful if we're finding a lot of invalid JSON.

The three checkboxes above are what you're scoping this MVP to?

Oh I actually linked to the wrong project. jsonlint does look interesting and may help. I meant to link to https://github.com/digitalbazaar/jsonld.js, which is JSON-LD specific.

As for the MVP, the first two (valid JSON, standard props) are required and the third is a stretch goal.

Paul, on the subject of an external repo, I was curious to hear your thoughts on internal vs external. My hope/expectation is for the structured data validation code to be reusable in other contexts and also maintained by the web community. I think an external repo would be more conducive to those things, eg how the axe a11y logic is external to LH. WDYT?

Paul, on the subject of an external repo, I was curious to hear your thoughts on internal vs external.

We have been talking a lot in the engineering meeting about solutions such as lerna, yarn workspaces, and doing what we are doing with lh-logger (same repo, different package), but I haven't asked: why don't we want a separate repo in the first place?

Breaking out the separate repo discussion into #4955

Small update.

jsonld.js can do some transforms on JSON-LD structures, but I don't see how we can use it for reporting errors. BTW this package has a non-standard license.

I've built a first prototype that:

  • checks if the provided code is valid JSON,
  • traverses the JSON object, making sure that only valid keywords are used, and
  • for some keywords, checks if the values are of the correct type.

And I found a first production error 😄 (already pinged Eric about it)
screen shot 2018-04-18 at 15 01 45


And a second one 😱 Who would have thought that simple JSON validation would be so useful here.

screen shot 2018-04-18 at 15 21 51

I've put together technical decisions and other notes for future reference.

JSON Validation

JSON.parse:
✅ +0kb
❌ doesn't explain what's wrong (in some cases it does tell you where the error occurs though, e.g. "SyntaxError: Unexpected token b in JSON at position 11")
❌ doesn't catch duplicated keys

https://github.com/zaach/jsonlint (popular, unmaintained, MIT):
⚠️ +12kb
✅ explains what's wrong
❌ doesn't catch duplicated keys (https://github.com/zaach/jsonlint/issues/13)

https://github.com/circlecell/jsonlint-mod (fork of the previous one):
⚠️ +12kb
✅ explains what's wrong
✅ catches duplicated keys

JSON-LD doesn't support duplicated keys, so jsonlint-mod was the best choice. It isn't perfect though: error messages are hard to customize and it calls console.error which litters console output. We may want to create our own fork in the long run.
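To illustrate the duplicate-key gap that drove this choice, a small sketch; the jsonlint-mod usage below follows its README and should be treated as an assumption:

```js
// JSON.parse silently keeps the last value for a duplicated key, so the
// duplicate is invisible once the text has been parsed:
const text = '{"@type": "Recipe", "name": "Pancakes", "name": "Waffles"}';
console.log(JSON.parse(text).name); // "Waffles" -- no error raised

// jsonlint-mod re-parses the raw text and throws on the duplicate
// (API per its README; the exact message is an assumption):
const jsonlint = require('jsonlint-mod');
try {
  jsonlint.parse(text);
} catch (err) {
  console.error(err.message); // reports the duplicated "name" key
}
```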

JSON-LD Validation

There are no JSON-LD linters/validators that I know of (ld-validate, which Paul found, doesn't check the JSON-LD grammar, but rather lets you create and validate object constraints), so for the MVP we only do a couple of checks (sketched after this list). We make sure that:

  • keys are not duplicated (checked by the JSON parser)
  • all keywords (keys starting with an @) come from the JSON-LD safelist
  • all @context URLs are valid
  • and quite a few additional checks performed by jsonld.js expansion algorithm
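A minimal sketch of the expansion check from the last bullet, assuming jsonld.js: expansion resolves the document against its @context, so malformed keywords or an unloadable context surface as thrown errors (fetching a remote context needs network access):

```js
const jsonld = require('jsonld');

// Expansion walks the document against its @context, so JSON-LD-level
// problems show up as thrown errors rather than silently passing through.
async function checkExpansion(doc) {
  try {
    await jsonld.expand(doc); // fetches 'https://schema.org' context over the network
    return [];
  } catch (err) {
    return [err.message];
  }
}

checkExpansion({'@context': 'https://schema.org', '@type': 'Recipe'})
  .then(errors => console.log(errors)); // [] when expansion succeeds
```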

Schema.org Validation

Using raw data from schema.org we are checking:

  • if referenced schema.org types actually exist (e.g. "@type": "InvalidType"),
  • and if there are no unrecognized properties on all schema.org objects (e.g. {"@type": "Recipe", "unknownProperty": "error"}).

ℹ️ Objects that are using some other vocabulary than schema.org (e.g. http://rdf.data-vocabulary.org/) are ignored.
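A sketch of the type/property checks described above; the tiny hard-coded sets below are stand-ins for data derived from the raw schema.org vocabulary:

```js
// Stand-in sets; the real checks use data pulled from schema.org.
const knownTypes = new Set(['Thing', 'Recipe', 'AggregateRating']);
const knownProperties = new Set(['name', 'aggregateRating', 'recipeIngredient']);

function checkSchemaOrgObject(obj) {
  const errors = [];
  const type = obj['@type'];
  if (typeof type === 'string' && !knownTypes.has(type)) {
    errors.push(`Unrecognized schema.org type: ${type}`);
  }
  for (const key of Object.keys(obj)) {
    if (!key.startsWith('@') && !knownProperties.has(key)) {
      errors.push(`Unrecognized schema.org property: ${key}`);
    }
  }
  return errors;
}

console.log(checkSchemaOrgObject({'@type': 'Recipe', 'unknownProperty': 'error'}));
// ["Unrecognized schema.org property: unknownProperty"]
```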

Google Recommendations Validation

Using data scraped from SDTT we are checking if all required and recommended fields are present on schema.org objects.

This is not a long-term solution; in the future we would like to use something like shex to validate not only the required/recommended fields, but also their values. We would also like to support recommendations coming from other companies (Bing/Pinterest/Apple/Yandex etc.).

WIP - if anyone is interested in testing it: https://github.com/kdzwinel/lighthouse/tree/json-ld

screen shot 2018-05-08 at 16 42 29

Hello,
Why not consume the Google Structured Data Testing Tool for this? Is there no API available?

@AymenLoukil SDTT is closed source and does not provide an API. We're also hoping to make this audit relevant for all search engines, as they all handle structured data a bit differently. It's kind of the wild west in terms of consistency 🤠

@rviscomi OMG how is that closed source 🚪? It belongs to Google, no? Why not reuse and enhance it?
Anyway, http://jsonapi.org/format/1.1/ may help in validating JSON / JSON-LD format.

Objects that are using some other vocabulary than schema.org (e.g. http://rdf.data-vocabulary.org/) are ignored.

I would also encourage accepting (or at least not actively complaining about) markup that links a Schema.org term to a non-Schema.org term. As an example, I've been looking at using schema.org/Dataset with a mainEntity property whose value is modeled as a W3C csvw:Table. While it is possible that such markup might be in error, it is also good to accept such possibilities as being potentially sensible.

@kdzwinel for now let's take out the recommendations based on SDTT scraping. That will help ensure that we don't lag behind recommendations from SDTT in case it changes.

@kdzwinel See https://github.com/GoogleChrome/lighthouse/issues/4359#issue-291723379 - I've updated the audit title and help text.

Is this functionality still under development?

@danbri the core functionality is complete and awaiting review. See #6750.

Beyond that, we're working on a new results renderer to more clearly show users the JSON-LD errors in the context of the code snippet. See #6901 and #6999.

(screenshot: new results renderer)

A few observations after running this on a few sites:

1. Case sensitivity

This is a failure because there's no aggregateRating type, just a Thing.aggregateRating property. The type should be AggregateRating. It might not be clear to the user though, because the schema.org URL exists.

Screenshot 2019-04-11 at 09 42 08

JSON-LD is case sensitive, but the Structured Data Testing Tool doesn't complain about it.

2. Empty elements

There are a few JSON-LD elements with no content or just {}. SDTT doesn't care, and it probably just means there's no data, so I don't think there's anything we need to do.

3. Parser errors

I've only seen one example of this, but if you try to escape a single quote in the JSON our error message isn't super helpful. (JSON only allows a few escape sequences.)

Screenshot 2019-04-11 at 10 21 35

You get the same message if you have line breaks in your strings or a trailing comma somewhere. Maybe prefixing the message with "Parse error: " would make it clearer what the problem is.
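For illustration, the escape-sequence case and the suggested prefix might look like this (the exact message text varies by JavaScript engine):

```js
// \' is not one of JSON's permitted escape sequences (only \" \\ \/ \b \f \n \r \t \uXXXX),
// so JSON.parse throws a fairly terse SyntaxError.
const source = '{"name": "O\\\'Brien"}'; // raw text contains: {"name": "O\'Brien"}
try {
  JSON.parse(source);
} catch (err) {
  console.error(`Parse error: ${err.message}`); // prefixed, as suggested above
}
```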

Hey folks, I poked around with the patch this morning to see how this is progressing (just a quick look so far -- looks good!)

One thing I found was that the audit didn't detect externalized structured data: e.g. <script type="application/ld+json" src=...> or <link rel="alternate" type="application/ld+json" href=...>

I'm not actually sure if this is meant to be supported. WDYT?

I suspect that this could be fixed by changing this block, which currently only handles scripts with inline content.

EDIT: apparently it's not allowed to use <script src=> for data, per the HTML spec [1].

[1] "When used to include data blocks, the data must be embedded inline, the format of the data must be given using the type attribute, and the contents of the script element must conform to the requirements defined for the format used. The src, charset, async, defer, crossorigin, and nonce attributes must not be specified."

Fwiw Google also does not currently consume structured data published that way.

@mmocny can you not just add them on the server side? Any <script> tag with a valid MIME media type is considered a "data block" in HTML. They can be added dynamically, but it's best if they arrive with the rest of the initial markup, for the sake of simple crawlers (et al.), as you note.
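For completeness, a dynamically added data block would look roughly like the sketch below (the payload is a made-up stand-in); as noted, crawlers that don't execute script will never see it, which is why shipping it in the initial markup is preferred:

```js
// Dynamically injecting a JSON-LD data block into the page.
const structuredData = {'@context': 'https://schema.org', '@type': 'Recipe', 'name': 'Pancakes'};
const script = document.createElement('script');
script.type = 'application/ld+json';
script.textContent = JSON.stringify(structuredData);
document.head.appendChild(script);
```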

@BigBlueHat In my specific use case I have (potentially) large structured data which is not relevant to 99% of client page loads. I was trying to optimize for page size by merely linking to, rather than forcefully including, all the SD (thus allowing the user agent to decide if/when it was useful to actually download the data).

I guess if I want to keep this behaviour while resorting to using script to dynamically inject it, I would now need to take control of _when_ to do it conditionally. And if I'm going to do that, I could indeed just as well conditionally server-side include it, as you suggest.

I'm happy to read about this initiative though I'm wondering whether microdata and RDFa have been considered as well?

Reason I ask is because even though JSON-LD is the 'preferred' syntax right now there's still tons of microdata markup out there (and RDFa in lesser quantity as well) that's actively maintained. It would be a real shame if these fall out of scope.

Operating over the parsed graph (a set of node-edge-node triples) from JSON-LD would establish a model well suited to allow dropping in RDFa and microdata parsers. This is what we do at Google and what other public tools do.

In regards to parsers/linters for validating syntax/vocabulary constraints, maybe @gkellogg knows of some you can use. He's also currently maintaining the Structured Data Linter.

Coincidentally, I was doing a bit of my own digging in this space recently.

First, a summary of what I think Lighthouse currently has (note: I'm not the author or otherwise involved, I'm just piecing this together):

  • Patches #6750 #8137 added support for JSON-LD parsing & linting (leveraging pre-existing libraries), and manually validating the structure of the JSON objects using published schema.org contexts.
  • Patch #8328 attempts to expose this work to the default Lighthouse UI

Some of the benefits of doing it this way:

  • It was comparatively simple, in that it required few external libraries (less binary cost)
  • Certain features, e.g. matching syntax errors to specific line numbers, may have been easier to implement

On the other hand, there may be correctness issues with doing it this way. I'm not at all sure the current implementation supports all the different ways you can publish JSON-LD (e.g. @reverse props, type embedding, @context aliasing...)

And so, even just for correctness reasons it _may already be desirable_ to validate at the graph level, instead of trying to validate the parsed JSON-LD object. We could check this by creating more complex test cases, or finding more complex real world examples.

If that is indeed already true, then the harder ask of validating at the graph level becomes more of a requirement. After that, I suspect adding more parsers/linters (and even other vocabs) would become near-trivial.

@jvandriel I only took a quick scan, but looks like @gkellogg's tool is "based on RDF.rb", and:

The Linter is implemented as an HTML Web Application, with most of the work performed via Ajax requests using a JSON content type

As such, I suspect it isn't applicable. We probably would want something in JS, or at least that could be compiled into WebAssembly (not sure if Lighthouse is ok with WebAssembly).

Found this, though.

And so, even just for correctness reasons it _may already be desirable_ to validate at the graph level, instead of trying to validate the parsed JSON-LD object. We could check this by creating more complex test cases, or finding more complex real world examples.

Depending on the kind of validation you have in mind, it might make sense to validate against JSON-LD's expanded form, which jsonld.js's expand() would give you from any valid JSON-LD document. Validators written against expanded form avoid having to deal with all the variations possible in the compact, more tree-like form.
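A small illustration of why expanded form helps (the document and context below are made up for the example, not taken from the patch):

```js
const jsonld = require('jsonld');

// A compact document using an aliased term; after expansion every property is
// a full IRI and every value is wrapped, so a validator sees one canonical shape.
const doc = {
  '@context': {'schema': 'http://schema.org/', 'headline': 'schema:name'},
  '@type': 'schema:Recipe',
  'headline': 'Pancakes'
};

jsonld.expand(doc).then(expanded => {
  console.log(JSON.stringify(expanded));
  // [{"@type":["http://schema.org/Recipe"],
  //   "http://schema.org/name":[{"@value":"Pancakes"}]}]
});
```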

...and indeed, the patch already does this. I should have checked. (Thanks for the note.)

@jvandriel I only took a quick scan, but looks like @gkellogg's tool is "based on RDF.rb", and:

The Linter is implemented as an HTML Web Application, with most of the work performed via Ajax requests using a JSON content type

As such, I suspect it isn't applicable. We probably would want something in JS, or at least that could be compiled into WebAssembly (not sure if Lighthouse is ok with WebAssembly).

The Linter is, indeed, a web application, but is based on the "rdf" CLI installed with the linkeddata Ruby gem. It's been run for years, with minimal updates, to help validate schema.org examples. It principally depends on semantic validation of the rendered RDF graph, which could probably be done using other languages (such as rdfjs) with appropriate code. The validation bits are in the rdf-reasoner gem. It's all public-domain software, so people are free to do with it what they please.

Of course, Ruby can be compiled into WebAssembly too.
