Test runner executes after() in declared order #48736
The following monkey patch changes the behavior of

Hey @MoLow,
I am not sure this is a simple change.
If backwards-compatibility is an issue, adding a configuration option, similar to how you can specify "only" or "concurrency", would solve that problem.
I'm hesitant to add this rather than just ship a defer utility on top of `Symbol.dispose`/`Symbol.asyncDispose`.
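For illustration, a minimal sketch of what such a defer utility could look like; the names here (`createDeferScope`, `defer`, `dispose`) are hypothetical, and with `Symbol.asyncDispose` this shape could eventually back `await using`:

```javascript
// Cleanups registered during setup run in reverse order on dispose,
// so later (dependent) resources are torn down first.
function createDeferScope() {
  const cleanups = [];
  return {
    defer(fn) { cleanups.push(fn); },
    async dispose() {
      // Drain the stack newest-first, like a destructor chain.
      while (cleanups.length) await cleanups.pop()();
    },
  };
}

// Usage sketch:
async function demo() {
  const scope = createDeferScope();
  const order = [];
  scope.defer(() => order.push('registered first'));
  scope.defer(() => order.push('registered last'));
  await scope.dispose();
  return order; // the last-registered cleanup ran first
}
```

Because `defer` is called right after each resource is created, a failure partway through setup leaves only the already-registered cleanups to run.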
Note that reverse execution order is not something new; other test frameworks provide reverse execution in various forms. Mocha runs
Jest's jasmine2 runner executes
Ava runs
Mocha runs `after` in FIFO order though:

```js
describe("something", () => {
  after(() => console.log(1));
  after(() => console.log(2));
  after(() => console.log(3));
  it("foos", () => {});
});
```

This would log 1, 2, 3 and not 3, 2, 1. And so does Jest (with `afterAll`, in a much slower fashion and with more verbose output) with the default modern runner. The legacy runner (and jasmine in standalone) does log 3, 2, 1, but that doesn't seem to be the standard order.
I don't think we should change this. Running them in reverse order is, IMO, more mental overhead for users.
Need to call
Need to use the jasmine2 runner, as stated in Jest's documentation. I'm not sure what the default "modern" runner is, as I don't use Jest. I just wanted to point out that there are various testing frameworks that provide/perform teardown in reverse order. This includes the custom testing framework I developed for my application. Other languages rely on destructors doing things in the right order.
If I cannot use
I don't understand this. I use the test runner's hooks to clean up after my tests all the time. If the issue is that you're expecting the
Also, you can use a single
I'm wondering if we should abort a test context's `AbortController` on test completion, and not just cancellation, to allow this:

```js
test("mytest", async ({ signal }) => {
  const browser = await createBrowserInstance({ signal });
  const context = await createIncognitoContext(browser, { signal });
  const tab = await createBrowserTab(context, { signal });
  // your test here
});
```

Not entirely related to this issue. (Also, it's kind of beside the point, but in that specific case it's fine to just close the browser, since it owns the context and the tab, which would also be closed with it.)
How would you reverse the provided example? Like this?

```js
test("mytest", async (t) => {
  const browser = await create_browser_instance();
  const context = await create_incognito_context(browser);
  const tab = await create_browser_tab(context);
  t.after(() => destroy_browser_tab(tab));
  t.after(() => destroy_incognito_context(context));
  t.after(() => destroy_browser_instance(browser));
});
```

Ah, but now, for reasons, the creation of the tab failed. So now you have a browser and an incognito context lingering around until the process terminates. Fine if it's just one; not so much if there are several hundred. One cannot assume setup always succeeds.

That means the user needs to keep track of what should and what shouldn't be torn down. It can be done, yes, but it adds lots of boilerplate.

Interesting, and yes, the example is a bit contrived. Obviously killing the browser would kill everything else too. The objects I'm creating are not aware of each other, and I'd have to add a lot of extra boilerplate to add dependency tracking. One-liners that say "oh, when this test is done, please kill this for me" are, for me, easy to write and understand. And it works fine until there's a dependency chain involved.
I see what you mean now. Essentially, you want to write your test like this and have the hooks run in reverse order:

```js
test("mytest", async (t) => {
  const browser = await create_browser_instance();
  t.after(() => destroy_browser_instance(browser));
  const context = await create_incognito_context(browser);
  t.after(() => destroy_incognito_context(context));
  const tab = await create_browser_tab(context);
  t.after(() => destroy_browser_tab(tab));
});
```

It's still not a change that I would make, but I guess we can see how others feel.
Yes, exactly. I've since amended the monkey patch I posted earlier to support nested tests and the
Asking to change the default behavior is probably asking too much, but having some way of telling the runner to execute hooks in reverse would be nice and probably not too much effort.
Both Mocha and Jest do it the same way we do, with Jest even switching from reverse order to FIFO order, so I doubt we should change it, but I do see the use case and think we should address it. How would you feel about the abort signal passed to the test context always aborting when a test (or suite) completes, and not just when it is cancelled? (also cc @MoLow)

I think it makes sense. If anything was listening to that signal, it should abort at that point anyway.

@benjamingr how would it solve the problem outlined here?

You would use the signal for cleanup instead of after hooks, like so: #48736 (comment)

Yeah, I realized it later 😅
I'm not sure why this issue was closed; #48827 doesn't really solve the original problem, as (in my experience) event handlers are executed in registration order. So you'd still need to DIY reverse-order cleanup.
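A quick check of that claim: `AbortSignal` fires its listeners in the order they were added, so attaching one cleanup listener per resource still tears down oldest-first, not newest-first.

```javascript
const order = [];
const controller = new AbortController();

// Listeners fire in registration order (FIFO), not reverse.
controller.signal.addEventListener('abort', () => order.push('registered first'));
controller.signal.addEventListener('abort', () => order.push('registered second'));

controller.abort();
console.log(order); // ['registered first', 'registered second']
```

To get reverse-order teardown from a signal, you would still need to collect the cleanups yourself and run them from a single listener.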
I decided to abandon using the built-in test runner for running my tests entirely. Reasons being:

Problem 1 means one has to start Puppeteer on demand and put it behind some kind of synchronization mechanism; otherwise node would report various leaks due to having more than 8-10 browsers. Problem 2 makes closing the browser complicated: not closing the browser properly causes the entire script to hang forever. #48877 doesn't solve the termination problem because the test runner believes tests are still running. I ended up solving problems 1 and 2 with reference counting, but problem 3 then causes the browser to be terminated after each suite, as there are no tests running in between suites. The browser is then re-launched for the next suite. Not a show-stopper, but when you have dozens of suites to categorize thousands of tests, it starts to hurt performance quite a bit. Having more suites with fewer tests would make the performance impact worse.

Problem 4 means that any and all forms of "kill browser after test" are by definition unreliable. I had to specifically modify this behavior to prevent scripts from hanging forever due to the browser still being live. Problem 5 is the reason I opened this issue.

So I'm reverting to my hand-rolled test runner, which exhibits none of these problems and is, ironically, fewer lines than all the workarounds I had to write to fix the built-in runner. Perhaps I'll revisit the built-in test runner in a couple of years, but in its current state it is simply not practical for anything but basic tests.
Does
Would docs/a guide showing how to run setupTests-like logic help?
Can you repro?
What do you mean, sequentially?
Happy to discuss this; I don't think it was raised before?
Sure, though did the workaround with
--import is not documented. Global before/after operate on a per-file basis. So if you want to organize your tests into multiple files and also want a global before/after when all tests start/finish, that's not going to happen.
If you don't terminate the browser instance running in the background, the test runner hangs forever. If there is no hard guarantee that termination is performed no matter what... you're going to spend a lot of time SSH-ing into your test server and killing browser instances. As far as I understand, #48877 is merely an alternative to

```js
const puppeteer = require("puppeteer-core");
const { describe, before, test } = require("node:test");

let browser;

describe("suite", () => {
  before(async () => {
    browser = await puppeteer.launch({
      dumpio: false,
      pipe: true,
      executablePath: "/usr/bin/chromium",
      ignoreDefaultArgs: ["about:blank"],
      waitForInitialPage: false,
      args: [
        "--no-sandbox",
        "--disable-setuid-sandbox",
        "--disable-web-security",
        "--disable-gpu",
        "--font-render-hinting=none",
        "--window-size=1600,900",
        "--ignore-certificate-errors",
      ],
    });
  });

  // simulate termination failure by intentionally not calling await browser.close()
  test("test", async () => {});
});
```
Note the total duration below being 2 seconds. If I add

```js
const { describe, test } = require("node:test");

function sleepFor(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

describe("first suite", { concurrency: true }, () => {
  test("test 2", async () => {
    await sleepFor(1000);
  });
});

describe("second suite", { concurrency: true }, () => {
  test("test 2", async () => {
    await sleepFor(1000);
  });
});
```
Mentioned in the submitted issue and also here. Example below. Note that the output does not mention the error itself, only "Error: failed running after hook".

```js
const { describe, after, test } = require("node:test");

describe("suite", () => {
  test("test", async () => {
    after(() => {
      throw new Error("oops");
    });
    after(() => {
      console.log("This is never executed");
    });
  });
});
```

As I mentioned previously, in my experience event listeners are executed in registration order, so it doesn't change anything, because declared order == registration order.
Version
v18.16.0
Platform
Linux tester 5.15.0-75-generic #82-Ubuntu SMP Tue Jun 6 23:10:23 UTC 2023 x86_64 GNU/Linux
Subsystem
test_runner
What steps will reproduce the bug?
How often does it reproduce? Is there a required condition?
Always.
What is the expected behavior? Why is that the expected behavior?
I would expect the after functions to be executed in reverse order:

```
ℹ this is declared last
ℹ this is declared next
ℹ this is declared first
✔ mytest (3.16807ms)
```

What do you see instead?

```
ℹ this is declared first
ℹ this is declared next
ℹ this is declared last
✔ mytest (3.16807ms)
```
Additional information
Various frameworks provide setup/teardown before and after a test. Usually it's desired to perform setup in declared order but teardown in reverse order. This is very convenient because there may be inter-dependencies. Consider the contrived example of:
Once the test ends, regardless of success/failure, these should then be torn down in reverse order:
If `after()` were executed in reverse order, setup and teardown could be conveniently written as below, providing clean teardown regardless of at which point setup failed. However, not only is `after()` executed in declared order, subsequent `after()` hooks are skipped entirely. I also discovered `after()` eating the reported error, leaving a very confused developer as to why his test failed, with nothing but this as output:

```
▶ Mytest
✔ subtest 1 (17.906258ms)
✔ subtest 2 (32.88647ms)
✔ subtest 3 (21.395154ms)
✔ subtest 4 (27.778948ms)
▶ Mytest (242.726795ms) <-- Red triangle, nothing else.
ℹ tests 1
ℹ pass 0
ℹ fail 1
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms 361.519968
```

It took a lot of digging to figure out that the first `after()` was trying to destroy something that was still in use. To perform teardown in the correct order I'll have to do it manually:

Which makes me wonder why even bother with `after()`, since `try {} catch {} finally {}` provides the same functionality.
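The `try`/`finally` version of that manual teardown might look like this sketch (the `acquire`/`release` helpers are hypothetical stand-ins): nesting guarantees inner resources are released first, and each `finally` runs only if its own setup succeeded.

```javascript
const events = []; // records setup/teardown order for illustration
const acquire = async (name) => { events.push(`acquire ${name}`); return name; };
const release = async (name) => { events.push(`release ${name}`); };

// Nested try/finally gives reverse-order teardown: if acquiring "b" throws,
// only "a" is released; if everything succeeds, "b" is released before "a".
async function withResources(run) {
  const a = await acquire('a');
  try {
    const b = await acquire('b');
    try {
      await run(a, b);
    } finally {
      await release(b);
    }
  } finally {
    await release(a);
  }
}
```

Calling `withResources(async () => {})` acquires `a` then `b`, and releases `b` before `a`, which is exactly the destructor-like ordering the issue asks `after()` to provide.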