This repository has been archived by the owner on Apr 8, 2024. It is now read-only.

Test serialization format improvements #61

Open
ehuss opened this issue Jun 28, 2019 · 2 comments
Labels
tests Issues with the test framework

Comments

@ehuss
Contributor

ehuss commented Jun 28, 2019

The current snap tests, which dump a debug output of the nodes, have a variety of issues:

  • I find it difficult to review and compare the input and the output. With the files in separate directories in particular, it is cumbersome to look at them both. Ideally, the input and output would live in the same file. Consider how to review this PR: https://github.com/rust-lang-nursery/wg-grammar/pull/38/files.

  • It does not include the tokens in the leaves, making it difficult to see what was matched.

  • It seems to have some non-deterministic ordering (possibly with just the error case), making the tests fail.

  • It is rather verbose, making it difficult to see the structure.

  • Span information is not included, making it difficult to see where things correspond.

  • Error tests are particularly difficult (the output just says "ParseError" with a long list of possible tokens). Perhaps errors could be annotated similarly to compiletest, where the exact location of the error is marked in a comment below it, using ^ to point at the error (a caret-annotation sketch follows this list).

  • Lexer errors don't work at all. I think it might sometimes be useful to have tests for input that does not lex properly (like a.1foo).

  • I think there might be value in choosing a commonly available serialization format (or at least something more formal than a debug dump). This might make it easier to reuse the test suite with other parsers (creating a sort of conformance test suite). Other parsers could have an adapter that converts their tree structure into something compatible with the lyg structure, and then compare the results with the expected test output (a tree-rendering sketch follows this list).

    There are various other routes this could take, such as providing a Rust library with a common tree structure that other parsers could target, with the library handling comparison of the test output.

  • I think it might be a good idea to restrict tests to certain high-level fragments of the language, to make it easier to test with other parsers. I would propose sticking with macro_rules fragments as the lowest level each test should use (that is, items, statements, expressions, paths, types, etc.).
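
A rough sketch of the caret-annotation idea from the error-tests bullet above. The helper and the error message below are hypothetical, not part of the existing harness; they only show how a harness could render (and later compare) a compiletest-style marker line under the offending span.

```rust
/// Hypothetical helper: render a compiletest-style caret line marking the
/// byte range `start..end` of `source`.
fn caret_line(source: &str, start: usize, end: usize) -> String {
    // Column offsets are computed in characters; tabs are not handled here.
    let pad = source[..start].chars().count();
    let width = source[start..end].chars().count().max(1);
    format!("{}{}", " ".repeat(pad), "^".repeat(width))
}

fn main() {
    // `a.1foo` does not lex as a field access; mark the bad token.
    let input = "let x = a.1foo;";
    println!("{}", input);
    println!("{} error: expected one of `.`, `;`, ...", caret_line(input, 10, 14));
}
```

Stored in the same file as the expected output, this would let a reviewer see the input, the error location, and the message together.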

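And a second rough sketch, this time of what a shared, deterministic tree serialization could look like (per the conformance-suite bullet above). The `Node` type and the rendering are hypothetical, not existing lyg APIs; the point is only that leaves keep the token text they matched, every node keeps its span, and children render in source order so the output is stable.

```rust
/// Hypothetical common tree that an adapter could convert a parser's output into.
enum Node {
    /// Leaf: rule name plus the exact token text it matched.
    Token { rule: &'static str, text: String, span: (usize, usize) },
    /// Interior node: rule name plus children in source order.
    Rule { rule: &'static str, span: (usize, usize), children: Vec<Node> },
}

/// Render the tree as indented, deterministic text that includes spans and
/// token text, suitable for storing next to the input in the same test file.
fn render(node: &Node, indent: usize, out: &mut String) {
    let pad = "  ".repeat(indent);
    match node {
        Node::Token { rule, text, span } => {
            out.push_str(&format!("{}{} {}..{} {:?}\n", pad, rule, span.0, span.1, text));
        }
        Node::Rule { rule, span, children } => {
            out.push_str(&format!("{}{} {}..{}\n", pad, rule, span.0, span.1));
            for child in children {
                render(child, indent + 1, out);
            }
        }
    }
}

fn main() {
    // `1 + 2` as a toy expression tree.
    let tree = Node::Rule {
        rule: "Expr::Add",
        span: (0, 5),
        children: vec![
            Node::Token { rule: "LITERAL", text: "1".into(), span: (0, 1) },
            Node::Token { rule: "PUNCT", text: "+".into(), span: (2, 3) },
            Node::Token { rule: "LITERAL", text: "2".into(), span: (4, 5) },
        ],
    };
    let mut out = String::new();
    render(&tree, 0, &mut out);
    print!("{}", out);
}
```

An adapter for another parser would only need to convert its own tree into this structure before rendering, which keeps the actual comparison purely textual.
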
ehuss added the tests (Issues with the test framework) label on Jun 28, 2019
@CAD97
Contributor

CAD97 commented Jun 28, 2019

> It seems to have some non-deterministic ordering (possibly with just the error case), making the tests fail.

Any nondeterminism is a bug in gll that needs to be fixed.

> Span information is not included, making it difficult to see where things correspond.

That's on someone (probably me) to enable proc_macro_semver_exempt on the test runner and remove the output mangling (which currently strips the useless Span..Span labels).

> Lexer errors don't work at all.

Currently we delegate to proc_macro for the lexing stage. Until that changes (we probably want a formal lexer at some point), it's probably not our purview to be testing the lexer.

> I think it might be a good idea to restrict tests to certain high-level fragments of the language, to make it easier to test with other parsers. I would propose sticking with macro_rules fragments as the lowest level each test should use (that is, items, statements, expressions, paths, types, etc.).

This makes sense as long as we can still test any weird edge cases we find. Keep in mind that we're still going to be providing more information than a yes/no answer, though.

@eddyb
Member

eddyb commented Jul 1, 2019

One thing we could do to improve error output is to integrate with something like annotate-snippets-rs.
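
For example, a minimal sketch using annotate-snippets, assuming the struct-literal API of its 0.9 line (field names differ between releases, and the file name, span, and messages below are made up for illustration):

```rust
use annotate_snippets::{
    display_list::{DisplayList, FormatOptions},
    snippet::{Annotation, AnnotationType, Slice, Snippet, SourceAnnotation},
};

fn main() {
    let snippet = Snippet {
        title: Some(Annotation {
            id: None,
            label: Some("expected one of `.`, `;`, ..."),
            annotation_type: AnnotationType::Error,
        }),
        footer: vec![],
        slices: vec![Slice {
            source: "let x = a.1foo;",
            line_start: 1,
            origin: Some("tests/field-access.lyg"), // hypothetical test file name
            fold: false,
            annotations: vec![SourceAnnotation {
                range: (10, 14),
                label: "parse error here",
                annotation_type: AnnotationType::Error,
            }],
        }],
        opt: FormatOptions { color: false, ..Default::default() },
    };
    // Renders a rustc-style diagnostic with carets under the annotated span.
    println!("{}", DisplayList::from(snippet));
}
```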
