Revise approach for documenting multiple Kubernetes versions #23518

Open
Tracked by #44609
sftim opened this issue Aug 28, 2020 · 30 comments
Labels

  • area/web-development: Issues or PRs related to the kubernetes.io's infrastructure, design, or build processes
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

sftim (Contributor) commented Aug 28, 2020

This is a Feature Request

What would you like to be added
Kubernetes offers online documentation for the current release and the four previous minor releases.
In this issue I'm proposing that, aside from pages inside /docs/ (and its localized equivalents), we should always serve current content.

Possible approaches:

  • update redirects for the non-live websites to send people visiting /blog/, /training/, etc. to the live site (see the sketch after this list)
  • serve all documentation from one site with perhaps a prefix: /docs/v1.18/home/ and /docs/v1.19/home/ etc
  • something else
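
For illustration, a rough sketch of what the redirect rules for the first option might look like, shown here in netlify.toml syntax (the same rules could live in a _redirects file instead; the paths and target URLs are examples only, not an exhaustive list):

```toml
# Hypothetical rules for an archived-version site: send everything outside
# /docs/ back to the live site. status = 301 marks a permanent redirect, and
# force = true applies the rule even if the archived site still has a page
# at that path.
[[redirects]]
  from = "/blog/*"
  to = "https://kubernetes.io/blog/:splat"
  status = 301
  force = true

[[redirects]]
  from = "/training/*"
  to = "https://kubernetes.io/training/:splat"
  status = 301
  force = true
```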

Why is this needed

Comments
Let's discuss whether we want this and, if so, how we want it to work. It's going to be at least a medium-sized chunk of work to implement, I suspect.

/kind feature
/area web-development

k8s-ci-robot added the kind/feature and area/web-development labels on Aug 28, 2020

kbhawkey (Contributor) commented Sep 1, 2020

Yes, this brings up some good questions and ideas.
I think it makes sense to split off the versioning of docs content from the other site sections (blog, training, community, ...).
As an aside, I'd like to see the Case Studies and Partners sections consolidated into a single section.

fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Nov 30, 2020

fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Dec 30, 2020

sftim (Contributor, Author) commented Dec 30, 2020

FWIW, etcd-io/website#82 is a somewhat-related discussion about multiple versions for etcd documentation.

sftim (Contributor, Author) commented Dec 30, 2020

/remove-lifecycle rotten

k8s-ci-robot removed the lifecycle/rotten label on Dec 30, 2020

nate-double-u (Contributor) commented Dec 31, 2020

I expect that moving to a folder-based versioning system that includes all the versions would also make issue #12303 moot.

nate-double-u (Contributor) commented Dec 31, 2020

In Slack, @sftim, you mentioned wanting to surface all of the previous k8s documentation versions. I like this idea.
[edit] You mentioned previous versions back to v1.0. I assume that still means about 20 versions?

If we do surface all the old versions, we may want to reconsider how we're presenting those older versions. The "current +4 versions" drop-down box works well now because it is a short list, but if we include all previous versions that drop-down would quickly become unwieldy. Potentially we could set up a previous versions page that would provide some more context for each version.

nate-double-u (Contributor) commented Dec 31, 2020

> In Slack, @sftim, you mentioned wanting to surface all of the previous k8s documentation versions. I like this idea.
> [edit] You mentioned previous versions back to v1.0. I assume that still means about 20 versions?

> serve all documentation from one site with perhaps a prefix: /docs/v1.18/home/ and /docs/v1.19/home/ etc

Thinking about these two things together: this could greatly increase the size of the site.

I've been looking at this style of versioning for etcd, and I think that it works well for small- to medium-sized sites, at least the way I've seen it set up using folders (there may be other ways of achieving this organization).

The Kubernetes site is much bigger than the sites I've seen managing versioning with a folder system though.

If we were to place all ~20 previous versions each in their own folder under /content, I think we may start to see some deploy performance issues.

With all languages in, I think the Docs section of the Kubernetes website is ~6500 files and ~65 MB (current version, older versions would likely be smaller). If we include all ~20 versions in one repo, we'll inflate that to ~1.3 GB.
[edit] I suppose they're all already in one repo. Currently we're using branches to version, and git does some very nifty space saving magic for us. File size may not be the most important factor here.

I've seen the deploy log for v1.16, and it takes about 7 minutes to do the full deploy. Building the sites currently takes about 0.3 minutes of that (~23000 ms). If we have to build all 20 versions each time we deploy (or deploy preview), we may be adding about 6 minutes to the overall deploy (rounding down because older versions may be smaller; also, I'm not very familiar with Hugo & Netlify yet, and they may do some clever things to speed this up).

This would be fairly easy to test though - we could just copy the current /docs folder 20x over and do a draft PR to see how long it takes to deploy. There could be some clever de-duplication we could look into as well to help mitigate some of these concerns.

nate-double-u (Contributor) commented Jan 5, 2021

So, having written about how maybe not to do this, I've been thinking about how maybe we could do this :)

I haven't thought of a really clever way of doing it yet, and I keep coming back to @sftim's initial comment:

> update redirects for the non-live website to send people visiting /blog/, /training/ etc to the live site

This would likely be a lot of work up front, but is probably the simplest way to go -- the updates would be fairly straightforward, but there would be a lot of them. Setting up a matrix of issues (or a spreadsheet) to organize these updates would be helpful. We may be able to cherry-pick commits between versions to reduce this workload (I was able to employ this on the CNI version branches, but that was a much smaller site, and I was the only one working on the versioning system at the time).

The config.toml file has the currentUrl variable, which I think is only used for deprecation warnings right now. We could use something like it to link everything non-docs to the current live site (instead of the url variable, which gets set for each version).
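
For illustration only, a sketch of what that could look like in config.toml (liveSiteUrl is a made-up name; as far as I know, only currentUrl exists today):

```toml
[params]
  # Existing param, which (per the comment above) drives the deprecation
  # warning banner today.
  currentUrl = "https://kubernetes.io/docs/home/"

  # Hypothetical new param: non-docs links (blog, training, case studies, ...)
  # could be built against this instead of the per-version url/baseURL, so
  # archived builds always point back at the live site.
  liveSiteUrl = "https://kubernetes.io/"
```

Templates for the non-docs sections could then link via something like {{ .Site.Params.liveSiteUrl }} rather than a relative URL.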

> something else

Proxy model: this is sort of how we're already doing it, but we're using Netlify's whole service as the proxy. I've seen API services where each path gets sent to a different service provider/server (using tools like nginx). We could potentially build and publish only the content/_lang_/docs/ folders, and then proxy based on a version path. This would be very tricky to do in one site with Netlify, I think. I've done some research on how to do this with the _redirects file (https://docs.netlify.com/routing/redirects/rewrites-proxies/), but I think it would be hard given how complex the _redirects file already is.
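
For what it's worth, a minimal sketch of that proxy idea in netlify.toml syntax (status = 200 turns a redirect into a rewrite/proxy; the /docs/v1.18/ prefix is hypothetical, while the v1-18 subdomain follows the pattern the archived sites already use):

```toml
# Hypothetical proxy rule: requests under /docs/v1.18/ on the live site are
# served from the archived v1.18 site without changing the visible URL.
[[redirects]]
  from = "/docs/v1.18/*"
  to = "https://v1-18.docs.kubernetes.io/docs/:splat"
  status = 200

# The current version keeps being served by the live build, so no rule is
# needed for un-prefixed /docs/ paths.
```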

fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Apr 26, 2021

fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 26, 2021

sftim (Contributor, Author) commented Jun 8, 2021

/remove-lifecycle rotten
Still important

k8s-ci-robot removed the lifecycle/rotten label on Jun 8, 2021

nate-double-u (Contributor) commented:
PR #29504 may solve a lot of my concerns about build time if we go with a folder based versioning system and build each version every time we do a release or deploy preview.

k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Nov 23, 2021

k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Dec 23, 2021

nate-double-u (Contributor) commented:
/remove-lifecycle rotten

k8s-ci-robot removed the lifecycle/rotten label on Dec 23, 2021

nate-double-u (Contributor) commented:
Hugo has introduced a new feature that may help us to manage versions better:
Vertical merging of content mounts: https://github.com/gohugoio/hugo/releases/tag/v0.96.0

I haven’t played with it much yet, but if I’m reading this correctly, it looks like we may be able to have a base version and then, for any new version, only create deltas (and I think we may be able to do this for both release version and language).
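
If I'm reading the release notes correctly, a layered setup might look roughly like this (the directory names and the v1.24 target path are invented for illustration, and the precedence between overlapping mounts would need to be verified):

```toml
# Hypothetical sketch for config.toml, assuming Hugo >= 0.96.0 module mounts.
[module]
  # Only the pages that actually changed in v1.24 (the "delta").
  [[module.mounts]]
    source = "content-overrides/v1.24/en/docs"
    target = "content/en/docs/v1.24"

  # The shared base docs, mounted under the same versioned target path; with
  # vertical merging the two mounts union into one content tree.
  # (Check which mount wins when both provide the same file.)
  [[module.mounts]]
    source = "content-base/en/docs"
    target = "content/en/docs/v1.24"
```

Each additional version would then only need its own small overrides directory plus one extra mount, rather than a full copy of the docs.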

This may give us a path away from versioning the whole site the way we currently do with subdomains.

/cc @chalin

nate-double-u (Contributor) commented Apr 12, 2022

/remove-lifecycle stale
potential new methods available

k8s-ci-robot removed the lifecycle/stale label on Apr 12, 2022

sftim (Contributor, Author) commented Apr 13, 2022

/triage accepted
/priority important-longterm

k8s-ci-robot added the triage/accepted and priority/important-longterm labels on Apr 13, 2022

sftim (Contributor, Author) commented Jun 20, 2022

#23518 (comment) sounds great!

k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Sep 18, 2022

k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Oct 18, 2022

sftim (Contributor, Author) commented Oct 18, 2022

/remove-lifecycle rotten

k8s-ci-robot removed the lifecycle/rotten label on Oct 18, 2022

sftim (Contributor, Author) commented Dec 13, 2022

Issue #37871 is broadly similar

sftim (Contributor, Author) commented Jan 2, 2024

Anyone who'd like to help with this issue is very welcome to work on it.

sftim (Contributor, Author) commented Feb 14, 2024

@chrisnegus would you like to contribute towards this?

sftim (Contributor, Author) commented Feb 14, 2024

> If we were to place all ~20 previous versions each in their own folder under /content, I think we may start to see some deploy performance issues.
>
> With all languages in, I think the Docs section of the Kubernetes website is ~6500 files and ~65 MB (current version, older versions would likely be smaller). If we include all ~20 versions in one repo, we'll inflate that to ~1.3 GB. [edit] I suppose they're all already in one repo. Currently we're using branches to version, and git does some very nifty space-saving magic for us. File size may not be the most important factor here.
>
> I've seen the deploy log for v1.16, and it takes about 7 minutes to do the full deploy. Building the sites currently takes about 0.3 minutes of that (~23000 ms). If we have to build all 20 versions each time we deploy (or deploy preview), we may be adding about 6 minutes to the overall deploy (rounding down because older versions may be smaller; also, I'm not very familiar with Hugo & Netlify yet, and they may do some clever things to speed this up).

I wonder whether we can freeze the HTML for the archived minor release docs, maybe doing a full rebuild once a week. I suspect we do have the technology for that.

chrisnegus (Contributor) commented:
I'm starting work on this. I'll report back here when I have made some progress.

sftim (Contributor, Author) commented Oct 3, 2024

Also see #48171 about documenting version emulation
