E2E: remove installation checking #22168
Conversation
Thanks!
I wouldn't worry about Firefox and Safari.
Once it is stable in Chrome and in Puppeteer, it will work here after remaking the screenshots.
Yes, but until then we'll have to make the screenshots manually for the webgpu examples. |
One can add changes locally to the Puppeteer launch config:

```js
const browser = await puppeteer.launch( {
	headless: ! process.env.VISIBLE,
	args: [
		'--use-gl=swiftshader',
		'--no-sandbox',
		'--enable-surface-synchronization',
		// addition
		'--enable-unsafe-webgpu',
		'--enable-dawn-features=use_tint_generator'
	],
	executablePath: '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary',
	// note: $USER is not expanded inside a JS string, so read it from process.env
	userDataDir: `/Users/${ process.env.USER }/Library/Application Support/Google/ChromeCanaryTestUnsecure/`
} );
```

and then execute `npm run make-screenshot <webgpu_samples>`. This can be a temporary experimental branch for the webgpu screenshots.
Also, in a situation like this, when you are not sure about the proper flags, image output is useful: 98cfa79#diff-cc4fa397e997f0bb32b468a53b6e33bf4e6d59c4a869faab0625e15d048128af
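A minimal sketch of that kind of debug output, assuming the test script already has a Puppeteer `page` for the example being rendered (the output directory and the `example` name are placeholders, not the repository's actual setup):

```js
const fs = require( 'fs' );

// Save the rendered frame to disk so the effect of the flags can be
// checked visually instead of only via pixel comparison.
async function dumpScreenshot( page, example ) {

	const dir = './e2e/debug-output'; // placeholder output directory

	if ( ! fs.existsSync( dir ) ) fs.mkdirSync( dir, { recursive: true } );

	await page.screenshot( { path: `${ dir }/${ example }.png` } );

}
```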
'--enable-unsafe-webgpu',
'--enable-dawn-features=use_tint_generator'

Seems like these flags produce black renders with the current version of Puppeteer.
@mrdoob SwiftShader does not work. WebGPU needs an additional Puppeteer instance with --use-gl=egl or something like that (it depends on the OS in CI and on your PC).
@munrocket I think WebGPU is still not available in puppeteer. |
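A rough sketch of what such a split could look like, assuming the WebGPU examples can be recognised by name; the `isWebGPU` check, the flag sets, and the `launchFor` helper are illustrative assumptions, not the repository's actual code:

```js
const puppeteer = require( 'puppeteer' );

const commonArgs = [ '--no-sandbox', '--enable-surface-synchronization' ];

// Launch a separate browser with a different GL backend only for the
// WebGPU examples; the flag choice depends on the OS in CI / locally.
async function launchFor( example ) {

	const isWebGPU = example.startsWith( 'webgpu_' ); // naming-convention assumption

	const args = isWebGPU
		? [ ...commonArgs, '--use-gl=egl', '--enable-unsafe-webgpu' ]
		: [ ...commonArgs, '--use-gl=swiftshader' ];

	return puppeteer.launch( { headless: ! process.env.VISIBLE, args } );

}
```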
This was added to simplify e2e adoption, and the check is useless right now.
P.S.
I just revisited the current e2e implementation after reading this tweet: https://twitter.com/RReverser/status/1417800541133058050
Basically we are doing almost the same thing, but we are also controlling time. Each frame is rendered in 16 ms of our virtual time to make the pictures exactly the same; we render 2 frames and take a screenshot after that.
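A rough sketch of that idea (not the repository's actual injection script; the `example` name, local port, and output path are placeholders):

```js
// Freeze time so each requestAnimationFrame callback sees time advance
// by exactly 16 ms, let two frames render, then take the screenshot.
await page.evaluateOnNewDocument( () => {

	let virtualTime = 0;
	const frameMs = 16;

	// Every rAF callback advances virtual time by one 16 ms frame.
	const realRAF = window.requestAnimationFrame.bind( window );
	window.requestAnimationFrame = ( callback ) => realRAF( () => {

		virtualTime += frameMs;
		callback( virtualTime );

	} );

	// Make time queries deterministic as well.
	performance.now = () => virtualTime;
	Date.now = () => virtualTime;

} );

await page.goto( `http://localhost:8080/examples/${ example }.html` ); // placeholder URL

// Wait for two rendered frames in virtual time, then capture.
await page.evaluate( () => new Promise( ( resolve ) => {

	requestAnimationFrame( () => requestAnimationFrame( resolve ) );

} ) );

await page.screenshot( { path: `screenshots/${ example }.png` } ); // placeholder path
```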
Ways to improve the current e2e implementation (without recreating all screenshots):