
[244] Add a bunch of meta-schema tests

Open · ssilverman opened this pull request 5 years ago • 26 comments

This adds tests using the meta-schema as a schema. This does not test things that cannot currently be guaranteed to be represented in a schema, for example, normalized URL requirements and optional "format" for "pattern".

Closes #244

ssilverman avatar Apr 29 '20 10:04 ssilverman
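For readers unfamiliar with the suite's layout, a test entry of the kind this PR adds might look as follows. This is an illustrative sketch, not a file from the PR: the descriptions and data values are invented, and the only structural point is that the meta-schema is referenced as the test's `schema` while candidate schemas appear as `data`:

```json
[
    {
        "description": "values for \"type\", validated against the meta-schema",
        "schema": {"$ref": "https://json-schema.org/draft/2019-09/schema"},
        "tests": [
            {
                "description": "a simple-type string is valid",
                "data": {"type": "string"},
                "valid": true
            },
            {
                "description": "a number is not a valid value for \"type\"",
                "data": {"type": 123},
                "valid": false
            }
        ]
    }
]
```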

I hope I didn't get too many true/false validity expectations wrong through typos from faulty cut and paste.

ssilverman avatar Apr 29 '20 11:04 ssilverman

Maybe I’ll split this up more in each file...

ssilverman avatar Apr 30 '20 22:04 ssilverman

Ok, I added a second commit that groups everything better and also orders them as they appear in the spec.

ssilverman avatar May 01 '20 02:05 ssilverman

This also now closes #301.

ssilverman avatar May 07 '20 03:05 ssilverman

> hope I didn't get too many false/true validity tests wrong

Hopefully you ran them against an implementation first? 🐙

karenetheridge avatar May 08 '20 01:05 karenetheridge

@karenetheridge I started testing them today now that my new validator is just about done. I wrote one so I could better understand the spec and hence the tests. I've already found one test error.

ssilverman avatar May 11 '20 12:05 ssilverman

I just added a way for the meta tests to be tested as schemas and not just as plain instances. There's currently an "isAllSchemas" flag for each test set and, for finer control, an "isSchema" flag for each test. I don't love "isAllSchemas"; perhaps "allAreSchemas" is better. I haven't decided which word is best.

ssilverman avatar May 18 '20 17:05 ssilverman

Just changed "isAllSchemas" to "isMetaSchema" and removed "isSchema".

ssilverman avatar May 19 '20 07:05 ssilverman
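As a sketch of what the resulting shape could be (the "isMetaSchema" flag name comes from the comment above; the surrounding fields follow the suite's usual format, and the values are invented for illustration):

```json
[
    {
        "description": "meta-schema tests run both as instances and as schemas",
        "isMetaSchema": true,
        "schema": {"$ref": "https://json-schema.org/draft/2019-09/schema"},
        "tests": [
            {
                "description": "\"minimum\" must be a number",
                "data": {"minimum": "5"},
                "valid": false
            }
        ]
    }
]
```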

I am puzzled by some of the changes here so it would be helpful to itemize what problems it is proposing to solve.

karenetheridge avatar May 19 '20 23:05 karenetheridge

It adds tests for the meta-schema keywords. I'm moving the "isMetaSchema" stuff and some of the non-validatable tests into a "Phase II". (I can see how the comments were puzzling.)

ssilverman avatar May 20 '20 00:05 ssilverman

I haven't reviewed every test in detail and I can't run them yet against my implementation as I don't yet support $refs to foreign documents, but I think these should be ok.

As an aside, however, they still aren't addressing the lack of test coverage that I was getting at a week or two ago on slack about detecting outright errors in schemas. Turning the schemas into data instances and using the draft2019-09 metaschema as the schema pushes all these tests into the normal validation path, rather than the can-the-implementation-detect-schema-errors path. To do that, just move the schemas into 'schema' for each test (and use anything at all as the data instance, as it isn't relevant).

karenetheridge avatar May 20 '20 18:05 karenetheridge
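The distinction between the two paths can be sketched in a few lines of Python. This is a toy model, not the suite's real runner: `meta_valid` stands in for a meta-schema validator and models only the rules for the "type" keyword.

```python
# Sketch of the two paths discussed above, for one slice of the
# meta-schema: the legal values of the "type" keyword.
SIMPLE_TYPES = {"array", "boolean", "integer", "null", "number", "object", "string"}

def meta_valid(instance):
    # Path 1: the schema is treated as *data* and checked against the
    # meta-schema -- the normal validation path.
    t = instance.get("type")
    if t is None:
        return True
    if isinstance(t, str):
        return t in SIMPLE_TYPES
    if isinstance(t, list):
        return all(isinstance(x, str) and x in SIMPLE_TYPES for x in t)
    return False

def validate(schema, data):
    # Path 2: the schema is handed to the implementation *as a schema*;
    # an implementation that detects schema errors must reject it here,
    # regardless of what the data instance is.
    if not meta_valid(schema):
        raise ValueError("invalid schema")
    return True  # the data instance is irrelevant to this sketch

print(meta_valid({"type": 123}))   # False: a meta test expects "valid": false

try:
    validate({"type": 123}, "anything")
except ValueError:
    print("schema error detected")
```

The same bad schema exercises different machinery in each path, which is why moving it between "data" and "schema" changes what the suite actually tests.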

Moving them into "schema" won't cause test failures unless all tests whose schemas have errors are marked as FAIL. This is not currently mandated, and as we discussed in the Slack channel, won't necessarily work unless mandated some other way, either in the README or in the test-schema.

Summary: just putting invalid schemas into "schema" won't work as things currently stand. Updating that part to test bad schemas is not the point of this PR; it will be part of an upcoming PR.

ssilverman avatar May 21 '20 20:05 ssilverman

> This is not currently mandated,

What does this mean?

> won't necessarily work

What does this mean? How are you defining "work"? The tests won't produce a valid result, so the results must be "valid": false.

If the tests in this PR are to be simply testing bad data (which happen to be schemas) against a schema (which happens to be the draftX metaschema), then please rename the tests to "meta-foo.json", rather than "meta/foo.json" -- the use of subdirectories has so far been reserved for optional tests. Or, failing that, update the test suite documentation to explain the directory structure and what each subdirectory is for.

karenetheridge avatar May 21 '20 20:05 karenetheridge

I don’t agree that non-optional tests can’t be in subdirectories, because it’s easy enough to walk a tree in the test runner. However, I do agree that it should be documented. Thanks for that; I will update the README.

Also, having a bunch of files prefixed with “meta-“ is a good indicator that a “meta/“ subdirectory is needed. That’s just my personal experience and opinions talking, though. @Julian can decide, but in the meantime, I’ll update the README.

Update: it’s the same reason there’s an “optional/“ directory instead of prefixing a bunch of files with “optional-“.

ssilverman avatar May 21 '20 21:05 ssilverman

I've removed the meta/ directory and put all meta-tests into their respective keywords.

ssilverman avatar May 23 '20 02:05 ssilverman

Force-pushed to track changes in master.

ssilverman avatar May 31 '20 04:05 ssilverman

I'll only do "merge" if that's @Julian's preference. Otherwise I'll continue to do rebasing. And please don't try to mandate process again, unless you run it by @Julian.

ssilverman avatar Jun 01 '20 23:06 ssilverman

I'll add: You may have missed that I've been rebasing against master regularly, above. No need to suggest that. And besides, GitHub won't even let you merge anything if there are conflicts. I'm keeping on top of those.

ssilverman avatar Jun 01 '20 23:06 ssilverman

Rebasing against master during a review causes some history to be lost (and unfortunately, also marks review comments as "resolved"), which is I think what @handrews was mentioning the other day.

If you were going to rebase anyway before merging, then that's great, and my comment was redundant! :)

karenetheridge avatar Jun 01 '20 23:06 karenetheridge

No history is lost. All the history in a PR is accessible, even the ones that may appear lost after a force-push. Before I merge in a PR, my practice is to reduce to a minimum set of applicable commits. This keeps the commit history clean. This is also why my personal preference is to have the submitter do the merge (assuming trust) because there's a last chance to squash relevant commits into one or more, as opposed to squashing everything into one commit.

Edit: Maybe there's a difference between different levels of GitHub, or maybe what I'm thinking is still accessible isn't really what you're referring to. I don't know, but I'm open to hearing suggestions to make reviewers' lives easier, including not force-pushing until the end. I was doing it as I go because in the past, PRs had been merged in by the reviewer, not giving the submitter time to squash multiple commits appropriately (to potentially more than one instead of only one).

ssilverman avatar Jun 01 '20 23:06 ssilverman

Ok, since this has gone through a few iterations and a bunch of waiting time, I'll use this comment as a summary of what's happening in this PR. I just updated the description to be:

This adds tests using the meta-schema as a schema. This does not test things that cannot currently be guaranteed to be represented in a schema, for example, normalized URL requirements and optional "format" for "pattern".

This PR adds tests that can be tested currently, and tests that cannot be tested currently will be pushed to a future PR pending some more discussion on how we can add those tests.

ssilverman avatar Jun 01 '20 23:06 ssilverman

I am obviously still behind, but just clarifying the "policy" -- as usual this is just de facto (it's what I do on all other repos I manage/maintain, as well as $WORK, etc. -- it may not be what @Relequestual / @awwright / Henry do on other repos, which does not mean I disagree! I just didn't coordinate with them, so we can reconcile if need be or write things down if need be, whatever helps, but laying it out here):

  • the only hard-and-fast policy is "do not squash when you merge to master if there are multiple units of work in your branch" -- i.e. always create a merge commit, and don't mash work together if it was done separately, so merging to master should create 2 or more commits.

The soft policy is I personally don't generally squash things if each commit does one thing. But that much is up to your personal taste if it happens on a feature branch -- if you want to squash fixme commits that's fine with me (though as I say, generally I'll only do that when the initial commit is "defective" for what it's meant for, not if additional self contained pieces are added to it, but fine if others do what they usually do).

The medium-to-hard tangential policy is also "keep commits small and self contained" -- 200 lines or so is the max. I generally will not personally review things that are much larger than that -- once or twice I'll give a free pass :) but after that I just won't do it and will ask to break it into smaller self contained chunks that are each independently 200 lines or less. In this repo, I'll also make an exception if it's copy-pasting 200 lines across draft versions.

And yes, I generally do "submitter commits / merges" after approval (which is as simple as any person with commit bit saying "you are good" or hitting green buttons or whatever they feel like doing).

And yeah again :) all the above is meant just to facilitate, so we can of course discuss or reconcile or write this down if there's other preferences or if I've contradicted what we do in the spec repo (which I don't contribute to as much as I should, so I don't know the rules there!).

Julian avatar Jun 02 '20 15:06 Julian

We mandate very little. @karenetheridge is impressive for even attempting to review this. I wouldn't. I MIGHT consider reviewing it if there were many commits, but there's just too much here for someone to sensibly review.

I generally find if it's going to take more than 20-30 minutes to review, you'll have to really convince me it's worth my time.

That being said, @Julian has ownership over this repo. Pretty much what he says goes because he puts in monumental effort, and we've got to support that as much as possible.

Relequestual avatar Jun 02 '20 22:06 Relequestual

This was too much for me to review manually, but I did run the tests through Hyperjump JSV and they all passed, so it looks good as far as I can tell.

jdesrosiers avatar Jun 05 '20 00:06 jdesrosiers

I’m working on simplifying this PR into smaller chunks, starting with some README updates.

ssilverman avatar Jun 05 '20 00:06 ssilverman

@ssilverman just in case you happen to be around -- I'm going to start pulling additional chunks out of this PR to merge. Thanks again for sending it and sorry it took so long.

Julian avatar Jul 08 '22 07:07 Julian