
perf: speed up network load times #10976

Merged: 8 commits merged from perf/network-loading into master on May 31, 2024

Conversation

@JammingBen (Contributor) commented May 29, 2024

Description

This PR improves network load times by taking the following actions:

Remove manual vendor chunking

Our manual chunking configuration put all vendor libraries into one chunk. As a result, this chunk was huge and took quite some time to load, especially over slower network connections.

While there might be a case for using (meaningful) manual chunks, it needs more testing. Also, I noticed that it can mess with dynamic imports.
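For illustration, this is a minimal sketch of what the removal looks like in a Vite config. The paths and the previous `manualChunks` shape are illustrative, not the actual project configuration:

```ts
// vite.config.ts (sketch) -- illustrative, not the actual project configuration
import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        // Before: something along these lines forced every vendor module
        // into a single, very large "vendor" chunk:
        //
        //   manualChunks: (id) => (id.includes('node_modules') ? 'vendor' : undefined)
        //
        // After: no manualChunks entry at all, so Rollup splits chunks
        // based on the actual (dynamic) import graph.
      }
    }
  }
})
```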

Lazy load dependencies

Large dependencies are now lazy loaded, meaning they are not automatically fetched on page load. The most critical one is the toast editor, which is simply an insanely huge library.

The following libs are affected (a sketch of the lazy-loading pattern follows the list):

  • uppy importer plugins
  • v-calendar dependency in OcDatepicker
  • emoji-mart dependency in OcEmojiPicker
  • epub reader app component (holds the epubjs dependencies)
  • TextEditor component (holds the toast-ui dependencies)
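
As a sketch of the pattern (component path and export name are illustrative), wrapping a heavy component in `defineAsyncComponent` moves its dependencies into a separate chunk that is only fetched when the component is actually rendered:

```ts
import { defineAsyncComponent } from 'vue'

// The toast-ui dependencies stay inside TextEditor.vue, so this chunk is
// only requested once a text editor is actually opened.
export const LazyTextEditor = defineAsyncComponent(
  () => import('./components/TextEditor.vue')
)
```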

Lazy load translations of core packages

Especially the translations in web-pkg are quite large, hence we now load them dynamically during the bootstrap process of Web.
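
A minimal sketch of the idea, assuming the translations ship as JSON next to each package (the path and function name are hypothetical, not the actual web-pkg API):

```ts
// Loading a package's translations on demand instead of bundling them into
// the entry chunk. The JSON becomes its own chunk and is only fetched when
// bootstrap actually runs this step.
export const loadWebPkgTranslations = async (): Promise<Record<string, unknown>> => {
  const { default: translations } = await import('./l10n/translations.json')
  return translations
}
```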

Fix circular import issues

All circular import issues have been fixed, since they may affect rollup's ability to create proper chunks.
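
To illustrate the kind of cycle that gets in Rollup's way (file names and values are made up), the usual fix is to move the shared piece into its own module:

```ts
// Before: a.ts and b.ts import from each other. Rollup can still bundle this,
// but it constrains chunking and can leave modules partially initialized.
//
//   // a.ts
//   import { helper } from './b'
//   export const config = { locale: 'en' }
//
//   // b.ts
//   import { config } from './a' // <-- closes the cycle
//   export const helper = () => config.locale
//
// After: the shared value lives in its own module, both files depend on it,
// and the cycle disappears.

// config.ts
export const config = { locale: 'en' }

// a.ts and b.ts now both do `import { config } from './config'`.
```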

Benchmarks

Network load time

  • master: load 300-400ms, finish 1.2s
  • v8.0.2: load 150-200ms, finish 1.5s
  • PR: load 200ms, finish 800ms

Network load time (Chrome profile "Fast 3G")

  • master: load 27s, finish 40s
  • v8.0.2: load 20s, finish 33s
  • PR: load 15s, finish 30s

Lighthouse (speed index)

  • master: 4.7s
  • v8.0.2: 3.7s
  • PR: 3.2s

Further steps

As the benchmarks show, while things have improved, they are still far from great. The following steps could (should?) be evaluated next:

Enable server text compression

Static JS and CSS files could easily be compressed by the server, reducing the size of the chunks by a lot (see https://developer.chrome.com/docs/lighthouse/performance/uses-text-compression?utm_source=lighthouse&utm_medium=devtools). I tested this locally by adding a middleware in oCIS (the Chi router already ships one) and things looked good. Is there a reason we don't have this?

Split CSS into multiple chunks

Currently we're setting cssCodeSplit: false, which results in one big CSS file. We should enable code splitting; however, there is an issue where some CSS definitions end up in chunks that are never loaded. Needs investigation.
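
For reference, the change itself is a one-line switch in the Vite config (sketch):

```ts
// vite.config.ts (sketch)
import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    // Currently false, which concatenates all styles into one big CSS file.
    // Turning it on makes Vite emit the CSS of each async chunk separately,
    // but see the caveat above about styles landing in chunks that never load.
    cssCodeSplit: true
  }
})
```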

Lazy load more components

Especially components that are registered by extensions (e.g. sidebar panels, search components) can easily be lazy loaded. I guess it would make sense, since extensions are being registered and loaded with the initial page load, although their components might not need to be.
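
A hedged sketch of what that could look like when an extension registers a panel. The registration shape, id and path are hypothetical, not the actual extension API:

```ts
import { defineAsyncComponent } from 'vue'

// The extension itself is still registered during the initial page load,
// but the panel's code is only fetched when the panel is first rendered.
export const sidebarPanelExtension = {
  id: 'com.example.sidebar-panel',
  type: 'sidebarPanel',
  component: defineAsyncComponent(() => import('./SidebarPanel.vue'))
}
```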

Lazy load routes

This one I'm really not sure about. I tried lazy loading all our routes, especially since the vue-router docs recommend doing so. While it has a positive effect on the initial page load time, it also has a negative one: when you navigate to another (not yet loaded) route after the initial load, there is a slight delay because the browser fetches the new chunk. In a production build I don't really notice it, but with pnpm vite and a slower network connection I definitely do.

We could overcome this issue by using a wrapper component that shows a loading spinner while the async component is loading.
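
A sketch of both pieces, with illustrative paths and a hypothetical spinner component:

```ts
import { defineAsyncComponent } from 'vue'
import type { RouteRecordRaw } from 'vue-router'
import LoadingSpinner from './LoadingSpinner.vue' // hypothetical spinner component

// Lazy route as recommended by the vue-router docs: the FilesApp chunk is
// only requested when the route is first visited.
export const routes: RouteRecordRaw[] = [
  { path: '/files', component: () => import('./apps/FilesApp.vue') }
]

// Wrapper idea from above: render a spinner while an async component's chunk
// is still being fetched, so the delay is visible but not jarring.
export const FilesAppWithSpinner = defineAsyncComponent({
  loader: () => import('./apps/FilesApp.vue'),
  loadingComponent: LoadingSpinner,
  delay: 200 // only show the spinner if loading takes longer than 200ms
})
```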

Another thing to note is that lazy routes would help with the CSS split, since CSS from lazy routes will definitely be loaded when the component of such a route is being rendered.

I'm torn on this one 🤷

Related Issue

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Technical debt
  • Tests
  • Documentation
  • Maintenance (e.g. dependency updates or tooling)

@JammingBen self-assigned this on May 29, 2024

update-docs bot commented May 29, 2024

Thanks for opening this pull request! The maintainers of this repository would appreciate it if you would create a changelog item based on your changes.

@JammingBen force-pushed the perf/network-loading branch 4 times, most recently from 6ef68c7 to 6bad216, on May 31, 2024 at 05:38
@JammingBen force-pushed the perf/network-loading branch from 6bad216 to 2260589 on May 31, 2024 at 06:03

sonarcloud bot commented May 31, 2024

@kulmann (Member) left a comment

🚀 🚀 🚀

@JammingBen merged commit 6dd1399 into master on May 31, 2024 (3 checks passed)
The delete-merged-branch bot deleted the perf/network-loading branch on May 31, 2024 at 11:19
@micbar mentioned this pull request on Jun 19, 2024