# agoric-sdk unit testing
We've migrated to AVA (from tap/tape) for tests.
Running `yarn test` from the top of the agoric-sdk tree will run all tests in all packages. Running `yarn test` from the top of a package will run all tests of that package, in parallel.
`yarn test` really just runs `ava`, and any additional command-line arguments are passed through, so `yarn test --help` will tell you what options you can add.
Run `yarn test test/test-foo.js` to run only the tests within a single file. This accepts multiple filenames and globs (`yarn test test/subdir/test-*.js`) too.
`yarn test -m '*substring*'` will look at all test files and run only the test functions with `substring` in their names. This uses a simple pattern match (not a glob or regexp), and does not pay attention to the filename. You may wish to use short-but-distinctive test names, without spaces, to make this most useful. `yarn test -m foo` will only run tests which were defined with `test('foo', async t => ...)`. To run a single test in a single file, use something like `yarn test test/test-foo.js -m test1`.
Within a test file, changing `test(..)` to `test.only(..)` or `test.skip(..)` works as in tape.
If all tests passed, and nothing wrote to `console.log`, AVA's default output is a terse "NN tests passed".
Running `yarn test -v` enables verbose mode, which prints one line per test. The line it prints is a shortened form of the test filename, plus the test name itself. From what I can tell, it uses the filename glob you provide (our default is `test/**/test-*.js`) and only displays the parts that were matched by a wildcard. So in the `captp` package, where we have `test/test-crosstalk.js`, `test/test-disco.js`, and `test/test-loopback.js`, the output looks like:
```
$ yarn test -v
yarn run v1.22.4
$ ava -v
✔ crosstalk › prevent crosstalk
✔ disco › try disconnecting captp
✔ loopback › try loopback captp (209ms)
─
3 tests passed
Done in 1.24s.
```
Where `crosstalk › prevent crosstalk` means that `test/test-crosstalk.js` contained a `test('prevent crosstalk', t => ..)` definition.
Verbose mode appears to emit the file/test name after the test finishes. Any `console.log` output will thus appear before the file/test name.
By default, AVA runs all tests in parallel (one worker process per CPU core), which frequently speeds things up. Any console output is interleaved at random. If you want to debug tests by adding `console.log` statements, you will want to use `yarn test -s` (aka `--serial`) to disable parallel execution, and/or run just a single test function from a single test file.
You can add `t.log(msg)` calls in your test program and their output will be emitted after the file/test name.
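For instance (a minimal sketch):

```js
import test from 'ava';

test('example with t.log', async t => {
  // This appears with the test's own results, not interleaved with other tests:
  t.log('helpful context for this test');
  t.pass();
});
```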
`yarn test debug test/test-foo.js` will start the Node.js inspector and run the test, whereupon it will wait for a debugger to connect.
`yarn test --node-arguments "nodearg1 nodearg2"` will let you supply other Node.js arguments.
Unlike tape/tap, AVA test files cannot be run standalone (`node -r esm test/test-foo.js` will fail), but in practice the previous two options are probably sufficient.
Each package's `package.json` needs a devDependency on `ava`, and a clause that configures it. Run `yarn add --dev ava`, then edit `package.json` to add:
"ava": {
"files": [ "test/**/test-*.js" ],
"require": [ "esm" ],
"timeout": "2m"
}
All test files must have names like `test-XYZ.js` (we skip e.g. `test.js`, `testHelper.js`, `fooTest.js`). They must all be in the package's `test/` subdirectory.
Each test file must import AVA at the top:

```js
import test from 'ava';
```
Most of our code uses `harden` and other SES features, and all programs which use that code (e.g. in their imported libraries) must first install SES. So most test programs will start with:

```js
import 'install-ses';
import test from 'ava';
```
Once AVA is imported, each test looks like:

```js
test('name', async t => {
  do_stuff();
  test_assertions();
});
```
The `'name'` must be unique within the file, and is used to identify the test function in results, as well as when using the `-m` option to run specific named tests.
Any exceptions raised during the test function will flunk the test, so the recommended practice is to use `async` test functions and `await` all Promises before the test finishes. This way rejected Promises will flunk the test (assuming that's what you want). You don't have to `await` the Promise immediately, especially if it's not supposed to resolve yet. If you don't really care about the value, you should accumulate the Promise into an Array, and then do `await Promise.all(accumulated)` before the end of the test, so any leftover errors will be caught.
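A minimal sketch of that accumulation pattern (`doThing()` is a hypothetical async operation standing in for code under test):

```js
import test from 'ava';

// Hypothetical async operation; a rejection here should flunk the test.
const doThing = async n => n * 2;

test('no promise left behind', async t => {
  const accumulated = [];
  // Start operations whose values we don't need right away:
  accumulated.push(doThing(1));
  accumulated.push(doThing(2));
  // ... other work and assertions ...
  // Await everything before the test ends, so any rejection is caught:
  await Promise.all(accumulated);
  t.pass();
});
```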
Whenever possible, use `await fnThatReturnsAPromise()`, or something like `t.is(await fn(), value)`. This ensures that a rejected promise (which probably indicates a failure) will actually flunk the test. If the promise is expected to reject, use `await t.throwsAsync(p, { message: /message/ })`.
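For example (a sketch; `fnThatShouldReject()` is a hypothetical function under test):

```js
import test from 'ava';

// Hypothetical function whose returned promise is expected to reject.
const fnThatShouldReject = async () => {
  throw new Error('bad input');
};

test('rejects with a useful message', async t => {
  const p = fnThatShouldReject();
  // Passes only if the promise rejects and the message matches:
  await t.throwsAsync(p, { message: /bad input/ });
});
```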
"No Promise Left Behind": assume errors in the code under test might cause any Promise to be rejected, and make sure all such rejections will be caught by the test.
The AVA test assertions are similar to those in tape/tap; however, AVA does not offer multiple aliases for each, so there are fewer methods to choose from. A rough mapping from `tape` to AVA is:
| tape | AVA |
| --- | --- |
| `t.pass` / `t.fail` | `pass` / `fail` |
| `t.ok` / `true` / `assert` | `truthy` |
| `t.notOk` / `false` / `notok` | `falsy` |
| `t.equal` / `equals` / `isEqual` / `is` | `is` |
| `t.notEqual` / `isNot` / `not` | `not` |
| `t.deepEqual` / `deepEquals` | `deepEqual` |
| `t.notDeepEqual` | `notDeepEqual` |
| `t.throws(fn, str/regexp)` | `throws(fn, { message:/etc })` |
| `t.match` | `regex` |
| `t.doesNotMatch` | `notRegex` |
| `t.rejects(p, exp, msg)` | `throwsAsync(fn/p, { message:/etc })` |
Some notable differences:

- assertions that look for exceptions (`t.throws` and `t.throwsAsync`) take an "expectation" object instead of a string/regexp. This expectation object can have a `message` property which behaves like the old string/regexp, but it has other properties (`instanceOf`, `is`, `name`, `code`) that may be useful (see the sketch after this list)
- `t.like` matches a subset of object properties, which can reduce boilerplate in tests when you only care about certain fields
- `t.end()` is generally an error, and unnecessary
- our defensive practice of wrapping entire test functions in try/catch blocks, with a `t.fail` in the catch clause, is unnecessary
- `t.plan()` is still occasionally useful, but only if your assertion is being run in a callback or `then`, and you can't find a way to rewrite it to do the assertion at the top level of the function instead. ref the ava docs on when to use plan
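A brief sketch of the expectation-object and `t.like` styles (the values here are made up for illustration):

```js
import test from 'ava';

test('expectation objects and t.like', t => {
  // t.throws returns the thrown error; the second argument is an
  // expectation object rather than a string/regexp:
  const err = t.throws(() => JSON.parse('{'), { instanceOf: SyntaxError });
  t.truthy(err.message);

  // t.like checks only the listed properties and ignores the rest:
  const record = { id: 3, name: 'foo', createdAt: Date.now() };
  t.like(record, { id: 3, name: 'foo' });
});
```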
The `t.assert(condition)` method is special, and if it fails, AVA will edit the code and re-run the test to get more details. So you can do `t.assert(x === y, 'oops')`, and if it fails, you learn what both `x` and `y` were. This reduces the need to preemptively guess what details you'll need to diagnose the problem, using/abusing the message field like ``t.assert(x === y, `oops ${x} !== ${y}`);`` or adding a commented-out `console.log(x, y)` just in case.
The description in this avajs issue worked for me under WebStorm. The main thing is to move the JavaScript test to the `Application Parameters` box and put `~/agoric-sdk/node_modules/.bin/ava` in the `JavaScript file` box.
Most of our `package.json` `ava:` stanzas include a `require: ["esm"]` clause, which causes AVA to add `-r esm` when invoking the test. This is our current technique for enabling ESM module support. When we switch to Node's native ESM support (#527), we'll remove these entries.
Running `yarn test -w` will enable "watch mode", which leaves the test process running and re-runs the right tests every time a source or test file changes.
For other nifty AVA features, check out the AVA home page.
In general we need to remove unsolicited `console.log`s from our codebase (I'm the most guilty party, having added a boatload in SwingSet). These are impossible to read when multiple tests are being run in parallel.