This repository has been archived by the owner on Sep 6, 2022. It is now read-only.

feat: add CloseRead/CloseWrite on streams #10

Closed · wants to merge 2 commits

Conversation

@Stebalien (Member)

This changes the behavior of Close to behave as one would expect: it closes the stream. The new methods, CloseWrite/CloseRead allow for closing the stream in a single direction.

Note: This does not implement CancelWrite/CancelRead as our stream muxer protocols don't support that.

fixes #9

@Stebalien (Member, Author)

cc @libp2p/go-team

@vyzo (Contributor) left a comment:

SGTM

@marten-seemann (Contributor) left a comment:

I don't think this is the right way to go.

Close is the normal termination of a stream, i.e. it should be called by the sender when all data has been sent. As such, it only makes sense to speak of closing the write side of a stream, since only the sender can know when it's done writing.

What we're really talking about here is the abnormal termination of a stream. Distinguishing between normal and abnormal stream termination makes a big difference as soon as you're using an unreliable transport: you don't need to do retransmissions for abnormally terminated streams, whereas you must ensure reliable delivery on streams that were terminated normally.
At a high level, there are two reasons why a stream can be abnormally terminated:

  • The sender decides that it doesn't want to send the (full) reply any more.
  • The receiver decides that it's not interested in the (full) reply any more.

Mixing these two unrelated use cases into one (and labeling it "reset") has caused us a lot of trouble in the past.
In QUIC we expose two methods, CancelWrite and CancelRead, which exactly correspond to the two abnormal termination conditions described above.

@Stebalien (Member, Author)

This proposal is based on the Go standard library's TCPConn and UnixConn and the standard behavior of close(sock) in unix.

As such, it only makes sense to speak of closing the write side of a stream, since only the sender can know when it's done writing.

Not really. In this proposal, Close is effectively CloseRead + CloseWrite which means: flush everything, tell the other side I'm done writing, and tell libp2p that I no longer care about the stream (i.e., free internal resources). This is what we want in most cases as we usually don't care if the other side actually receives the response (we just want to try).

To reliably close the stream and get confirmation that Close actually worked, the user would call CloseWrite (send an EOF) and then Read to wait for the other side to close their end.

What we're really talking about here is the abnormal termination of a stream.

I agree that Reset should be broken into CancelRead and CancelWrite. However, except for QUIC, none of our current transports support that. Not even TCP supports this.

This proposal is entirely about Close, not Reset.

@Stebalien (Member, Author)

Design requirements as I see them:

  1. The user needs to be able to call an async "fire and forget" close that eventually frees the underlying resources (Close())
  2. The user needs to be able to close the stream for writing while continuing to read (CloseWrite()).
  3. The user needs to be able to terminate the stream on error (Reset()).

Mixing these two unrelated use cases into one (and labeling it "reset") has caused us a lot of trouble in the past.

Could you elaborate on this? I can't think of any cases where I'd want to abort reads or writes only and not just throw away the entire stream.

@marten-seemann (Contributor)

I agree that Reset should be broken into CancelRead and CancelWrite. However, except for QUIC, none of our current transports support that. Not even TCP supports this.

This proposal is entirely about Close, not Reset.

The point is, once you break Reset into CancelRead and CancelWrite, you don't need a CloseRead anymore, since it's basically the same as CancelRead (with the only difference that CancelRead also takes an error code, which is really useful for complex application protocols. H3 uses that a lot.).

This is what we want in most cases as we usually don't care if the other side actually receives the response (we just want to try).

We still want the transport to reliably deliver what we wrote, so the transport has to take care of this stream until all data has been acknowledged (in the case of TCP the kernel takes care of this, so in this case you can forget about the stream earlier). So what you're suggesting is more like send an EOF (i.e. normal stream termination in the send direction), combined with abnormal stream termination in the read direction.

In this proposal, Close is effectively CloseRead + CloseWrite which means: flush everything, tell the other side I'm done writing, and tell libp2p that I no longer care about the stream (i.e., free internal resources).

This can work, but this API encourages sloppy programming. We really want a client to send us an EOF when it's done sending a request. However, we're encouraging it to not do so, since everything works if it never calls CloseWrite(). It just needs to wait for the server to reset the read direction (= the client's write direction) of the stream after sending the response. We're basically making abnormal stream termination the default behavior.

@Stebalien (Member, Author)

The point is, once you break Reset into CancelRead and CancelWrite, you don't need a CloseRead anymore, since it's basically the same as CancelRead (with the only difference that CancelRead also takes an error code, which is really useful for complex application protocols. H3 uses that a lot.).

I get that. CloseRead is effectively an under-specced, crappy CancelRead. Unfortunately, again, most stream transports (our muxers, TCP, etc.) don't support canceling one direction.

What if we just stated that CloseRead should cause remote writes to return errors but may also be discarded or cause the stream to be reset?

We still want the transport to reliably deliver what we wrote, so the transport has to take care of this stream until all data has been acknowledged (in the case of TCP the kernel takes care of this, so in this case you can forget about the stream earlier). So what you're suggesting is more like send an EOF (i.e. normal stream termination in the send direction), combined with abnormal stream termination in the read direction.

In the ideal case, yes. However, CloseRead may also just throw away data as noted above (similar to TCP). CloseRead is just shutdown(sock, SHUT_RD).

This can work, but this API encourages sloppy programming. We really want a client to send us an EOF when it's done sending a request. However, we're encouraging it to not do so, since everything works if it never calls CloseWrite(). It just needs to wait for the server to reset the read direction (= the client's write direction) of the stream after sending the response. We're basically making abnormal stream termination the default behavior.

I agree we should encourage users to use CloseWrite. However:

  1. We currently leak streams when users don't read off the EOF.
  2. Users expect that Close() frees system resources and closes in both directions. This is a guarantee made by the net.Conn interface.

At a minimum, I think we need to rename Close to CloseWrite and introduce a Close function that "does the right thing" (behaves like a normal socket close).

Stebalien added a commit to libp2p/go-yamux that referenced this pull request May 28, 2019
@marten-seemann (Contributor)

Could you elaborate on this? I can't think of any cases where I'd want to abort reads or writes only and not just throw away the entire stream.

An instructive example is an HTTP POST to a site that doesn't exist. Say the client uploads a GB file, but the target on the server side has moved. In that case, the server would CancelRead, but still send the 404 response. Furthermore, this 404 response has to be delivered reliably, so the write side of the stream has to be properly EOFed.
Even though HTTP is not our primary use case, in my view we've failed at defining a stream interface if it doesn't even allow us to run HTTP on top of libp2p.

I get that. CloseRead is effectively an under-specced, crappy CancelRead. Unfortunately, again, most stream transports (our muxers, TCP, etc.) don't support canceling one direction.

There's an easy fix for that. You can implement a workaround CancelRead as a goroutine that Reads from the stream until the EOF. Of course, this should only be a temporary fix, and we should fix our stream muxers.

  1. Users expect that Close() frees system resources and closes in both directions. This is a guarantee made by the net.Conn interface.

That's an argument about taste. Streams don't implement the net.Conn interface, so we're not bound by any guarantees. We just implement io.Closer, which doesn't give any guarantees whatsoever.

  1. We currently leak streams when users don't read off the EOF.

Mostly because we've been lenient about EOFs for so long. If we define a request as "everything on the stream until EOF" instead of "the first x bytes that happen to be parseable", this wouldn't be a problem. A server wouldn't reply to a request until the stream was properly closed. A lot of helper functions in the Go standard library expect the EOF (e.g. ioutil.ReadAll), so continuing to be lenient about it will cause us problems in the long run.
On a practical note, we'd probably need to rev our stream multiplexers to be able to properly distinguish between the old and the new behavior.

@Stebalien (Member, Author) commented May 28, 2019

An instructive example is an HTTP POST to a site that doesn't exist. Say the client uploads a GB file, but the target on the server side has moved. In that case, the server would CancelRead, but still send the 404 response. Furthermore, this 404 response has to be delivered reliably, so the write side of the stream has to be properly EOFed.
Even though HTTP is not our primary use case, in my view we've failed at defining a stream interface if it doesn't even allow us to run HTTP on top of libp2p.

I see. Personally, I'd just stop reading, write the error, then call Close() (defined as either "ignore reads, close write" or "cancel reads, close write" depending on the implementation).

However, I agree that always being able to cancel the write-side would be much better.

There's an easy fix for that. You can implement a workaround CancelRead as a go-routine that Reads from the stream until the EOF. Of course, this should only be a temporary fix, and we should fix our stream muxers.

That's effectively what CloseRead() does (which I'm currently implementing in the stream muxers themselves). My concern is that there's no way to tell the other side that we're no longer interested in reading at the protocol level.

If that behavior is fine, then CloseRead is equivalent to CancelRead except that it follows the convention used by the net connection types.

That's an argument about taste here. Streams don't implement the net.Conn interface, so we're not bound by any guarantees. We just implement io.Closer, which doesn't give any guarantees whatsoever.

They do in fact implement the net.Conn interface (sans some thread-safety guarantees).

But my primary motivation here is the law of least surprise: users expect Close() to behave like libc's close. I have yet to see anyone use the stream interfaces correctly the first (or even nth) time.


Let's break this down (as I see it). Is this correct?

Naming Issues

  • Should the method that closes the stream for writing be called CloseWrite or Close?
  • Should the method that aborts reads on the stream be called CloseRead or CancelRead?

Note: if we go with Close as the "close for writing" method, we can still simulate the "fire and forget" Close with Close() + CancelRead().

Protocol Issues

What guarantees do we need around CloseRead/CancelRead:

  • Do we need to be able to send an error message? (requires protocol changes)
  • Do we need to send anything or can we just drop the data? (requires protocol changes)
  • Is it valid to reset the stream if we receive data after calling CloseRead? (Given your 404 use-case: no.)

Do we need a CancelWrite or is Reset (bidirectional cancel) sufficient? (requires protocol changes)

@hsanjuan

Before, we did Close followed by AwaitEOF (which, given enough time, would Reset()).

Is it correct to assume that now the process would be CloseWrite() + AwaitEOF()?

It's clear the current Close() is prone to programming errors (it's not obvious that a safeguard Reset is needed every time). As I understand it, the new Close() means "I don't want to use this anymore and will free all resources" without having to wait for an EOF or reset the stream (?), which is more natural. But we probably still need to do AwaitEOF()-style things if we want things to finish cleanly on both sides, right?

One advantage is that for bidirectional streams where one end writes a request and reads an answer, it could call CloseWrite after sending the request. The receiving end would get EOF for reading, so it would know that the stream is closed for reading. Then it could call CloseWrite after sending the answer and fully terminate its stream. Am I getting it correctly that this approach would correctly clean up streams on both sides without additional calls to a full Close()?

@Stebalien (Member, Author)

Is it correct to assume that now the process would be CloseWrite() + AwaitEOF()?

Yes.

As I understand this Close() means I don't want to use this anymore and will free all resources without having to wait for an EOF or Resetting the stream (?), which is more natural. But probably we still need to do AwaitEOF()-things if we want things to finish cleanly on both sides right?

Yes, ish. Close() will still try to finish cleanly, it just won't block until that happens, so there's no feedback if it fails. This is equivalent to a TCP socket close.

One advantange is that for bidirectional streams where one end writes a requests a reads an answer, it could call CloseWrite after sending the request. The receiving end would get EOF for reading so it would be known that the stream is closed for reading. Then it could call CloseWrite after sending the answer and fully terminate its stream. Am I getting it correctly that this approach would correctly clean up streams on both sides without additional calls to a full Close() ?

That is the correct approach, but I'd like to still require the Close() or Reset() so we can remove this hack. (Again, TCPConn has the same requirement: CloseRead and CloseWrite don't actually close the underlying socket file descriptor.)

However, we don't have to get rid of that hack, it's just kind of funky. I'm fine either way.

@marten-seemann (Contributor)

Before we did Close followed by AwaitEOF (which given enough time would Reset()).

Where AwaitEOF is a dirty hack that we only introduced because we didn't want to teach implementations to properly close streams. I've disliked it from the beginning, because it incentivizes implementations to do the wrong thing: not close streams. In addition to unclean protocol semantics, this comes at the cost of one long-lived goroutine for the receiver of the stream (for every stream!).

In the request - response scenario, there are two clean ways to handle this:

  1. Only consider a request complete when the stream has been EOFed, OR
  2. Leave the stream open indefinitely. An implementation that doesn't close its streams is misbehaving and will eventually starve itself of new streams (if peers set tight enough limits for concurrent streams).

The first option has the advantage that it immediately penalizes misbehaving peers. The second option performs slightly better in cases where the EOF arrives late due to network loss / reordering.

Now going from a world where we have misbehaving implementations to a world where we force implementations to behave properly will require some careful thought. One option that comes to mind is creating a new version of our stream multiplexers, being lenient with the old ones, and strict with the new ones. We might be able to come up with something better as well.

@raulk (Member) commented Jul 8, 2019

Where AwaitEOF is a dirty hack that we only introduced because we didn't want to teach implementations to properly close streams.

I think the expectation is that Close() triggers an EOF on the peer, who in turn uses that signal to close their write side, therefore triggering an EOF on your end. This is the transposition of the graceful TCP connection closure flow.

@raulk (Member) left a comment:

A few comments, @Stebalien.

// CloseRead closes the stream for writing but leaves it open for
// reading.
//
// CloseRead does not free the stream, users must still call Close or
@raulk (Member):

How about a CloseRead() followed by a CloseWrite()? Should the contract enforce the release of resources?

@Stebalien (Member, Author):

A nice implementation should, but I'm not sure we want to require that. This is kind of like shutdown(SHUT_RD) followed by shutdown(SHUT_WR): the user still needs to call close().

@raulk (Member) commented Jul 8, 2019

Let's dislodge this conversation. Stream interface refactors are traumatic, and we don't want to do these often. So let's make sure we're covering all concerns and potential use cases.

@marten-seemann, aside from request-scoped streams, users also use streams for long-lived streaming (pardon the pun), and we also reuse streams in some places via pools (or may have the intent to). (Not all multiplexers can signal opening, data, and closing in a single packet, like QUIC can.)

Let's take a step back and consider the cases we're modelling for from the perspective of a libp2p user.

For reference, here are my rough notes from researching this topic: https://gist.github.com/raulk/8db7e4cc7658ae9c9483aea1eed6b398.

(TCP has no stream capability but its semantics are well implanted in people's brains so it's a fair reference for the principle of least surprise.)

  • Graceful duplex closure: Stop writing and reading, do not issue errors; let the stack manage; I no longer care about this stream; I'm done using it.

    • libp2p candidate: Stream#Close().
    • TCP: close() closes the connection. When the app calls it, responsibility passes to the OS, which eventually destroys the socket as one would expect. Data received from the peer after calling close is discarded. The stack guarantees delivery of bytes written before close was called.
    • QUIC: we send the FIN bit and drain any incoming data on that stream.
    • Yamux and Mplex only support write-side closure, and like the TCP stack, they should drain outstanding bytes until the other party sends a FIN or we time out and RST the stream.
  • Read-side half-close: abrupt by definition (reader desists before writer has signalled end):

    • libp2p candidate: Stream#CloseRead().
    • TCP: shutdown(SHUT_RD), no wire-level signalling possible; behaviour is platform-dependent. Windows issues RST on incoming data; BSD swallows and acks incoming data; Linux accepts incoming data until buffer fills up, then congestion control kicks in. See https://stackoverflow.com/a/30317014.
    • QUIC: STOP_SENDING. Abrupt action.
    • Yamux and Mplex do not have signalling for this; we must decide how to implement.
  • Write-side half-close: signal there's no more data to be read. Can be abrupt or graceful.

    • libp2p candidate: Stream#CloseWrite().
    • TCP: shutdown(SHUT_WR). Graceful. Immediately issues a FIN, which the peer will ACK. We can still read. Connection won't be fully closed until peer sends a FIN and we ACK it.
    • QUIC: FIN bit for graceful; RESET_STREAM for ungraceful (delivery of previously streamed data is not guaranteed).
    • Yamux and Mplex both support write-side half-close.
  • Abrupt duplex closure:

    • libp2p candidate: Stream#Reset().
    • TCP: no APIs to signal abnormal duplex closure (I think).
    • QUIC: I guess a combination of RESET_STREAM + STOP_SENDING?
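The TCP write-side half-close in the third bullet can be exercised directly with Go's net.TCPConn on loopback (a sketch for illustration):

```go
package main

import (
	"io"
	"net"
)

// tcpHalfCloseDemo shows shutdown(SHUT_WR) semantics: after CloseWrite the
// client has sent a FIN (the server sees EOF) but can still read the reply.
func tcpHalfCloseDemo() (string, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer ln.Close()

	go func() {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		defer c.Close()
		req, _ := io.ReadAll(c) // read until the client's FIN (EOF)
		c.Write(append([]byte("got:"), req...))
	}()

	c, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		return "", err
	}
	defer c.Close()
	c.Write([]byte("ping"))
	c.(*net.TCPConn).CloseWrite() // FIN; we can still read
	resp, err := io.ReadAll(c)    // read the reply until the server's FIN
	return string(resp), err
}
```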

I'm not done yet. I'm wrapping up my day and I wanted to post my notes here for the benefit of everybody (and also to collect possible corrections). Will continue tomorrow, and hopefully come back with something that reconciles the different points of view and interests.


Questions:

  • @marten-seemann: how do we deal with unidirectional QUIC streams in libp2p? Do we even use them? If not, should we expose this capability?
  • @marten-seemann: why the terminology CancelWrite() and CancelRead() in quic-go instead of Close*()? Is it to convey that no retransmissions will be done for undelivered data?

@Stebalien (Member, Author)

TCP: close() closes the connection. When the app calls it, responsibility passes to the OS, which eventually destroys the socket as one would expect. Data received from the peer after calling close is discarded. The stack guarantees delivery of bytes written before close was called.

Note: "stack makes an effort to deliver the bytes written before close is called but makes no guarantees because this is all done after close returns"

Otherwise, that's a good summary.

mux/mux.go (outdated):
// Reset.
CloseWrite() error

// CloseRead closes the stream for writing but leaves it open for
@raulk (Member):

Wrong description (copypasta).

@raulk (Member) commented Oct 29, 2019

👉 Here's a matrix analysis of full-duplex, half-duplex, abrupt, and graceful stream/connection closure across TCP, QUIC, Yamux, and mplex. I highly suggest readers use it as a reference.


@Stebalien

Note: "stack makes an effort to deliver the bytes written before close is called but makes no guarantees because this is all done after close returns"

I think this is wrong; it would be very bad if it worked that way, because it would break TCP reliability. This is an active close, and at this stage the connection would be in the FIN-WAIT-1 state, where the TCP stack is still issuing retransmissions.


My conclusions to alter this changeset after this very thorough investigation:

  • Close() detaches the stream. No further writes or reads are accepted. If the underlying transport is reliable, retransmission of bytes prior to calling close is guaranteed. The stack handles stream closure in the background.
  • Closing the read side is abrupt by definition. QUIC allows specifying a close reason.
    • We should make CloseRead() accept options. There is no interface to honour, so it can be as simple as CloseRead(CloseReadOpt...).
  • How do you see CloseRead() working with our current multiplexers (let's focus on yamux and mplex)?
    • Would this send a RST flag? I think that triggers a full-duplex abrupt close.
    • Ideally, we'd error writes on the other end, but not reads.
  • Closing the write side can be graceful or abrupt.
    • Current multiplexers only allow for graceful closing (FIN). A RST would close the write side on the other end.
    • QUIC can signal abnormal write termination, there's no reason to hide this.
    • CloseWrite() can take options: CloseWrite(TryForce()).

@Stebalien (Member, Author)

If the underlying transport is reliable, retransmission of bytes prior to calling close is guaranteed

It's best-effort up to some timeout (e.g., what the Linux kernel does). Unfortunately, we can't guarantee anything unless we flush on close, which would likely break a bunch of code that doesn't expect Close to block.

Closing the read side is abrupt by definition. QUIC allows specifying a close reason.

👍

We should make CloseRead() accept options. There is no interface to honour, so it can be as simple as CloseRead(CloseReadOpt...).

That is, Go interfaces are implicit. Checking "does this thing have this function" isn't actually all that uncommon. (e.g., my own code: https://github.com/multiformats/go-multiaddr-net/blob/c9acf9f27c5020e78925937dc3de142d2d393cd1/net.go#L31-L77)
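The kind of implicit feature detection referred to here looks like this (sketch; the method set is the hypothetical one under discussion):

```go
package main

// supportsCloseRead feature-detects an optional CloseRead method via an
// anonymous interface assertion, in the spirit of the code linked above.
func supportsCloseRead(s interface{}) bool {
	_, ok := s.(interface{ CloseRead() error })
	return ok
}
```

Callers can fall back to the drain-until-EOF workaround when the assertion fails.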

On the other hand, Go also uses CloseRead(err) and CloseWrite(err) for io pipes. So the Go standard library doesn't even agree.

Which options would you like to support? If it's just errors, maybe CloseRead(err) and CloseWrite(err) works better (given that pipes follow that interface).

How you see CloseRead() working with our current multiplexers (let's focus on yamux and mplex)?

I'd really just make it client-side, like it is with TCP, and throw away any new data we get. In yamux, we can just never send a window update.

Why? Well, usually we call CloseRead because we no longer care, not because we really want to signal anything to the remote side (IMO).

Closing the write side can be graceful or abrupt.

Many transports don't support abruptly closing one direction. I'm all for providing some way to feature-detect on a stream but I'd rather not add too many requirements that all transports need to meet.

What's the motivation for abruptly closing the write side but not the read side?

@Kubuxu (Member) commented Oct 31, 2019

On the other hand, Go also uses CloseRead(err) and CloseWrite(err) for io pipes. So the Go standard library doesn't even agree.

The Go standard library uses that to allow simulating custom error conditions.

@aarshkshah1992 (Contributor)

@raulk @Stebalien If this hasn't been allocated, I'd love to pick this up.

@Stebalien (Member, Author)

This is waiting on a decision from @raulk.

@raulk (Member) left a comment:

How about the following?

  • Close() closes the write-side gracefully, and background-drains the read-side, unless the multiplexer supports cancelling reads (e.g. QUIC), in which case we send that signal.

  • CloseRead() and CloseWrite() are not defined in standard interfaces AFAIK. As such, there is no special signature to respect.

    • We can offer CloseRead(err) and CloseWrite(err) methods with no breakage of expectations.
    • Such signatures allow us to decorate the error and provide enhanced info to multiplexers.
  • Let's introduce a special error type.

type AnnotatedStreamError struct {
    Reason   int
}
  • On CloseRead(nil): Yamux and mplex drain the receive end into io.Discard; on QUIC, we call CancelRead(), with a 0 error code.

  • On CloseRead(AnnotatedStreamError): Yamux and mplex do the same as above, they do not support anything else. On QUIC, we call CancelRead(), passing in the supplied error code.

  • On CloseWrite(nil): Yamux and mplex close the stream; on QUIC, we call Close() (not CancelWrite(); see the godocs), which translates to a graceful FIN.

  • On CloseWrite(AnnotatedStreamError): Yamux and mplex do the same as above; they do not support conveying close reasons. On QUIC, we call CancelWrite(), passing in the supplied error code. This signals an abrupt termination of the write-side.

  • For error types other than AnnotatedStreamError, we send a muxer-determined fixed value, which in the case of QUIC can be max(uint64).

@Stebalien (Member, Author)

Yes! 👍

Let's introduce a special error type.

We may want to use an interface (slightly more idiomatic). On the other hand, being overly generic doesn't get us too far.

If we go with AnnotatedStreamError, I'd allow an Error field as well. Some transports may want to send string errors (although we need to be careful not to send sensitive data in the error).

import "errors"

// could be stream/libp2p specific? CloseErrorCode? StreamErrorCode?
type ErrorCode interface {
	ErrorCode() int
}

// we can provide a special wrapper type for convenience:
type CloseError struct {
	Code int
	Err  error // named Err so the type can still implement the error interface
}

// error/unwrap methods elided...

// getErrorCode extracts the code, falling back to a transport default.
func getErrorCode(err error) int {
	var errCode ErrorCode
	if errors.As(err, &errCode) {
		return errCode.ErrorCode()
	}
	return SomeDefault // muxer-determined fixed value
}

@raulk (Member) commented May 21, 2020

Closing this and opening a new PR from the same branch to wipe the comment slate clean.

@raulk raulk closed this May 21, 2020
@raulk raulk mentioned this pull request May 21, 2020
Stebalien added a commit to libp2p/go-yamux that referenced this pull request Aug 27, 2020
fixes libp2p/go-libp2p-core#10

fix: avoid returning accept errors

Instead, wait for shutdown.
Stebalien added a commit to libp2p/go-yamux that referenced this pull request Aug 28, 2020
Stebalien added a commit to libp2p/go-yamux that referenced this pull request Aug 28, 2020
Stebalien added a commit to libp2p/go-yamux that referenced this pull request Aug 28, 2020
Stebalien added a commit to libp2p/go-yamux that referenced this pull request Aug 29, 2020

Successfully merging this pull request may close these issues.

Question: Why doesn't Close() close the stream?
7 participants