
write buffer pooling #192

Closed
wants to merge 1 commit

Conversation

y3llowcake
Contributor

No description provided.

Contributor
@garyburd garyburd left a comment

Add a test. I suggest adding a loop over the pools at the existing

    for _, compress := range []bool{false, true} {

loop, following the pattern used for the writers. The pools to test are:

  • The nil pool.
  • A pool that fails the test if Get and Put are not called in strict alternation (Get / Put / Get / Put ...), starting with Get and ending with Put; see the sketch after this list.
  • sync.Pool
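
A minimal sketch of that alternating-order pool, assuming the interface{}-based pool interface used in this PR (all names here are illustrative, not test code from the PR):

    import "testing"

    // fakePool marks the test failed if Get and Put are not called in
    // strict alternation, starting with Get.
    type fakePool struct {
        t       *testing.T
        nextGet bool        // true when the next expected call is Get
        buf     interface{} // value handed back on the next Get
    }

    func newFakePool(t *testing.T) *fakePool { return &fakePool{t: t, nextGet: true} }

    func (p *fakePool) Get() interface{} {
        if !p.nextGet {
            p.t.Error("unexpected Get: expected Put")
        }
        p.nextGet = false
        return p.buf
    }

    func (p *fakePool) Put(v interface{}) {
        if p.nextGet {
            p.t.Error("unexpected Put: expected Get")
        }
        p.nextGet = true
        p.buf = v
    }

    // checkDone verifies that the last call was Put.
    func (p *fakePool) checkDone() {
        if !p.nextGet {
            p.t.Error("last call was Get, expected Put")
        }
    }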

@@ -109,6 +109,12 @@ type CloseError struct {
Text string
}

// BufferPool represents a pool of buffers.
Contributor

Mention that sync.Pool implements the interface. Also state that a shared pool must support concurrent Get() and Put().
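
One way the doc comment could incorporate both points (a sketch of the suggested wording, not necessarily what was merged):

    // BufferPool represents a pool of buffers. The *sync.Pool type
    // satisfies this interface.
    //
    // A pool that is shared between connections must allow Get and Put
    // to be called concurrently.
    type BufferPool interface {
        // Get gets a value from the pool or returns nil if the pool is empty.
        Get() interface{}
        // Put adds a value to the pool.
        Put(interface{})
    }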

@kisielk
Contributor

kisielk commented Dec 14, 2016

Since this pool interface is specific to this package, shouldn't it work with []byte instead of interface{}? It would require a wrapper around sync.Pool but it seems like it would make the API more specific.
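
For comparison, a []byte-typed interface would need roughly this kind of adapter around sync.Pool (hypothetical names; sync.Pool would no longer satisfy the interface directly):

    import "sync"

    // bytePool is the hypothetical []byte-specific pool interface.
    type bytePool interface {
        Get() []byte
        Put([]byte)
    }

    // syncBytePool adapts sync.Pool to bytePool.
    type syncBytePool struct {
        p sync.Pool
    }

    func (s *syncBytePool) Get() []byte {
        if v := s.p.Get(); v != nil {
            return v.([]byte)
        }
        return nil
    }

    func (s *syncBytePool) Put(b []byte) {
        // Storing a slice in sync.Pool boxes it into an interface value,
        // which allocates; a production adapter would likely store *[]byte.
        s.p.Put(b)
    }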

@garyburd
Contributor

Working with []byte increases friction for applications that use sync.Pool. How does working with []byte benefit applications?

@garyburd
Contributor

garyburd commented Dec 15, 2016

I am not enthusiastic about my suggestion to add the dialer and upgrader write pool fields in this PR. As I said in #193, I want to avoid adding new knobs to the API. On the other hand, I don't want an unexpected change in performance for existing users of the package. If anybody has thoughts on this issue, then please comment.

@y3llowcake
Contributor Author

How about we just expose boolean fields in Upgrader and Dialer, e.g.:

    type Upgrader struct { ... EnableBufferPooling bool }

This seems to expose fewer details and is arguably less brittle. It could also be reused as the signal for enabling pooling of compression and read buffers when we get around to it.

The tricky bit is managing pools for different buffer sizes internally, but this seems like a tractable problem.

@garyburd
Contributor

EnableBufferPooling sounds like a good plan. I suggest calling it EnableWriteBufferPooling in case it becomes possible to also enable read buffer pooling (see golang/go#15735).

How about rounding the buffer size up to the next power of 2 and keeping one pool for each power of 2?
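
A rough sketch of that bucketing scheme (illustrative names only, using math/bits for the rounding; this is not code from the PR):

    import (
        "math/bits"
        "sync"
    )

    // writeBufferPools keeps one pool per power-of-two size class.
    var writeBufferPools [64]sync.Pool

    // poolIndex returns the exponent of the smallest power of two >= size.
    func poolIndex(size int) int {
        if size <= 1 {
            return 0
        }
        return bits.Len(uint(size - 1))
    }

    // getWriteBuffer returns a buffer whose length is size rounded up to
    // the next power of two.
    func getWriteBuffer(size int) []byte {
        i := poolIndex(size)
        if v := writeBufferPools[i].Get(); v != nil {
            b := v.([]byte)
            return b[:cap(b)]
        }
        return make([]byte, 1<<uint(i))
    }

    // putWriteBuffer returns a buffer to the pool for its size class.
    func putWriteBuffer(buf []byte) {
        i := poolIndex(cap(buf))
        if cap(buf) == 1<<uint(i) { // only pool exact power-of-two capacities
            writeBufferPools[i].Put(buf)
        }
    }

Get always hands back a full-capacity slice, so a caller that asked for a smaller size can simply reslice it.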

@y3llowcake
Contributor Author

Yeah, this sounds reasonable. I'll try the new approach in this PR soon.

In the meantime, I'd appreciate it if you could take a look at #194. I'm hoping to make progress on fixing the existing flate performance situation, and hope that work will converge with this PR soon.

@FZambia FZambia mentioned this pull request Feb 13, 2017
@AlekSi
Contributor

AlekSi commented Feb 15, 2018

Any news on this one?

@theckman

theckman commented Aug 5, 2018

@y3llowcake any interest in continuing this PR?

Is there anyone else who is interested in this that wants to pick up the branch and look at resolving the merge conflicts?

@y3llowcake
Contributor Author

Sorry, no. I'm no longer using this library. Good luck.

@y3llowcake y3llowcake closed this Aug 5, 2018
@gorilla gorilla locked and limited conversation to collaborators Apr 19, 2019