
DHT Request pipelining #92

Closed · vyzo wants to merge 39 commits into master from feat/request-pipeline

Conversation

@vyzo (Contributor) commented Sep 17, 2017

The big lock, shared by SendMessage and SendRequest, restricts our ability to issue pipelined requests, with a potentially significant performance impact -- #88

This patch implements a goroutine pump that serializes message reads out of line. This allows us to hold the lock only for the duration of the write, which lets concurrent requests be pipelined.
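
For reference, a minimal sketch of the pattern (illustrative names, not the PR's actual code): one receiver goroutine drains replies in FIFO order and hands them to waiting requests, so the lock only has to cover the write.

```go
package dhtpipe

import (
	"context"
	"sync"
)

// message stands in for the DHT protobuf message type; msgReader and
// msgWriter stand in for the ggio delimited reader/writer over the stream.
type message = interface{}

type msgReader interface{ ReadMsg() (message, error) }
type msgWriter interface{ WriteMsg(message) error }

type result struct {
	reply message
	err   error
}

type sender struct {
	wlk sync.Mutex       // held only while writing a request
	w   msgWriter        // delimited writer over the stream
	rch chan chan result // waiters, in write order (buffered)
}

// pump serializes reads out of line: the protocol returns replies in
// request order, so each decoded reply goes to the oldest waiter.
func (s *sender) pump(ctx context.Context, r msgReader) {
	for {
		var waiter chan result
		select {
		case waiter = <-s.rch:
		case <-ctx.Done():
			return
		}
		reply, err := r.ReadMsg()
		waiter <- result{reply, err} // waiter is buffered; never blocks
	}
}

// SendRequest holds the lock only for the write and the waiter
// registration; the wait for the reply happens outside the lock, so other
// requests can be written (pipelined) in the meantime.
func (s *sender) SendRequest(ctx context.Context, req message) (message, error) {
	waiter := make(chan result, 1)

	s.wlk.Lock()
	err := s.w.WriteMsg(req)
	if err == nil {
		s.rch <- waiter // registered in the same order as the write
	}
	s.wlk.Unlock()
	if err != nil {
		return nil, err
	}

	select {
	case res := <-waiter:
		return res.reply, res.err
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}
```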

@vyzo (Contributor, Author) commented Sep 17, 2017

Summoning @whyrusleeping @Stebalien

@vyzo force-pushed the feat/request-pipeline branch 2 times, most recently from c0e98e4 to c4f4864 on September 17, 2017 10:21
@vyzo (Contributor, Author) commented Sep 17, 2017

Note that there is some subtlety in the fallback to the single-request-per-stream protocol: the single-message counter is only incremented in the case of a successful request.

When pipelined requests fail in the read, some of them (if there are more than 2) may fail in their retries due to the old protocol while we are still collecting enough samples to fall back.
We can accelerate the convergence by having read errors in the retry also increment the single-message counter.
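
A rough sketch of the convergence tweak described above (names and the threshold are illustrative, not the PR's exact code): the peer is demoted to one request per stream once enough single-message evidence accumulates, and counting read errors seen during retries as evidence makes that happen sooner.

```go
package dhtpipe

// streamReuseTries is an assumed threshold for how much evidence we need
// before giving up on stream reuse for a peer.
const streamReuseTries = 3

type fallbackState struct {
	singleMes int // evidence that the peer only handles one request per stream
}

// noteSingleMessageSuccess records a request that only succeeded via the
// old single-message protocol. Counting read errors hit during the retry
// here as well would let the fallback converge after fewer failed requests.
func (f *fallbackState) noteSingleMessageSuccess() {
	f.singleMes++
}

// useSingleMessageStreams reports whether further requests to this peer
// should skip pipelining and open one stream per request.
func (f *fallbackState) useSingleMessageStreams() bool {
	return f.singleMes > streamReuseTries
}
```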

@vyzo force-pushed the feat/request-pipeline branch 2 times, most recently from eb0cd02 to a4ee75c on September 17, 2017 10:57
@vyzo (Contributor, Author) commented Sep 20, 2017

Rebased on master for #93

@vyzo force-pushed the feat/request-pipeline branch from 1f9fe55 to c98efab on September 20, 2017 07:21
dht_net.go Outdated
select {
case res = <-resch:

case <-t.C:

Member:

I'd just add a deadline to the context.

Contributor Author:

good point, will do.

Contributor Author:

We do lose the ability to distinguish ErrReadTimeout, however.

Member:

You should be able to learn why this failed from ctx.Err().

Contributor Author:

right, that's what I return now -- it's just not ErrReadTimeout anymore but rather "context deadline exceeded".
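
A small sketch of what the deadline-based wait looks like (readMessageTimeout and the surrounding names are assumptions, not the PR's exact code): the timer is folded into a derived context, and the reason for failure is recovered from rctx.Err().

```go
package dhtpipe

import (
	"context"
	"time"
)

// readMessageTimeout stands in for the package's read timeout constant.
const readMessageTimeout = time.Minute

type result struct {
	reply interface{}
	err   error
}

// waitForReply waits for a pipelined reply under a context deadline. On
// timeout rctx.Err() is context.DeadlineExceeded ("context deadline
// exceeded") rather than a dedicated ErrReadTimeout.
func waitForReply(ctx context.Context, resch <-chan result) (interface{}, error) {
	rctx, cancel := context.WithTimeout(ctx, readMessageTimeout)
	defer cancel()

	select {
	case res := <-resch:
		return res.reply, res.err
	case <-rctx.Done():
		return nil, rctx.Err()
	}
}
```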

dht_net.go Outdated
defer s.Close()

w := ggio.NewDelimitedWriter(s)
return w.WriteMsg(pmes)

Member:

This should probably reset the stream on error instead of closing it (probably not that important but generally a good idea).

Member:

Note: It's safe to close the stream after resetting it (so you can leave the defer s.Close()).

Contributor Author:

sure, small thing to fix.
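
A minimal sketch of the reset-on-error pattern being discussed, with stand-in interfaces for the stream and the delimited writer (not the real inet.Stream / ggio types):

```go
package dhtpipe

// stream and msgWriter are stand-ins for inet.Stream and the ggio
// delimited writer used in dht_net.go.
type stream interface {
	Reset() error
	Close() error
}

type msgWriter interface{ WriteMsg(msg interface{}) error }

func sendMessage(s stream, w msgWriter, pmes interface{}) error {
	// Closing after a reset is harmless, so the deferred Close can stay.
	defer s.Close()

	if err := w.WriteMsg(pmes); err != nil {
		// Reset signals an abnormal termination to the remote side,
		// rather than the clean end-of-stream that Close implies.
		s.Reset()
		return err
	}
	return nil
}
```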

dht_net.go Outdated
case <-t.C:
return ErrReadTimeout
return nil, ErrReadTimeout

Member:

Again, I'd switch to a context deadline (not your code but easy cleanup).

dht_net.go Outdated
ms.r = ggio.NewDelimitedReader(nstr, inet.MessageSizeMax)
r := ggio.NewDelimitedReader(nstr, inet.MessageSizeMax)
rch := make(chan chan requestResult, requestResultBuffer)
go messageReceiver(ms.dht.ctx, rch, r)

Member:

We could save on long-running go routines by starting these only as needed and shutting them down when we have no more outstanding replies (i.e., by having some form of outstanding reply counter). If we don't keep these around, we could also afford to spin up a second goroutine to manage the outstanding requests.

Member:

In general, I'm not a fan of pipelining but out-of-order replies would require a protocol change.

Contributor Author:

Lazily spinning up the goroutine is perhaps not so hard to implement, but shutting it down gets tricky -- not sure it's worth the complexity. Let me think about it.
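
One possible shape for the lazy receiver suggested above (illustrative only, not the PR's code): an outstanding-reply counter guarded by a lock, with the receiver goroutine exiting once it has delivered the last expected reply. The shutdown bookkeeping is exactly the tricky part.

```go
package dhtpipe

import "sync"

type result struct {
	reply interface{}
	err   error
}

type lazySender struct {
	lk          sync.Mutex
	outstanding int              // replies still expected on the stream
	waiters     chan chan result // FIFO of waiters (buffered)
	running     bool             // whether a receiver goroutine is alive
}

// register enqueues a waiter (expected to be buffered, cap 1) and lazily
// starts the receiver. readMsg is the blocking read of the next reply.
func (s *lazySender) register(waiter chan result, readMsg func() result) {
	s.lk.Lock()
	s.outstanding++
	s.waiters <- waiter
	if !s.running {
		s.running = true
		go s.receive(readMsg)
	}
	s.lk.Unlock()
}

// receive delivers replies in FIFO order and exits once nothing is
// outstanding, so no goroutine lingers on an idle stream.
func (s *lazySender) receive(readMsg func() result) {
	for {
		waiter := <-s.waiters
		waiter <- readMsg()

		s.lk.Lock()
		s.outstanding--
		if s.outstanding == 0 {
			s.running = false // the next request starts a fresh receiver
			s.lk.Unlock()
			return
		}
		s.lk.Unlock()
	}
}
```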

dht_net.go Outdated
}

return nil
}
}

func (ms *messageSender) SendRequest(ctx context.Context, pmes *pb.Message) (*pb.Message, error) {
ms.lk.Lock()
defer ms.lk.Unlock()
defer log.EventBegin(ctx, "dhtSendRequest", ms.dht.self, ms.p, pmes).Done()

Contributor:

Don't technically need to prefix the event type with "dht"; the log object is created as a logger for the dht subsystem.

Contributor Author:

ipfs log only gives you the event name in the event field, so that's kind of necessary to disambiguate events.

Contributor Author:

I mean for practical purposes with grep; otherwise the system field does it just fine.

Contributor Author:

I will remove the prefix; the canonical way to process the event log is jq, and grep becomes only slightly more complicated: it can be a double grep for dht and then SendRequest.

dht_net.go Outdated
case res = <-resch:

case <-rctx.Done():
return nil, rctx.Err()

Contributor:

If the context is cancelled, do we want to kill off the stream? or is that handled elsewhere?

Contributor Author:

This is the specific request's context, so it's inappropriate to kill the entire stream over it.

Contributor Author:

Thinking a bit more about this, I think we do want to kill the whole stream after all.
The issue is that if we start having slow responses (over 1m), the pipeline will fill with everything timing out and become unusable until the responses are received (regardless of whether we have stopped waiting).

Contributor Author:

Will implement with a "reset" directive to the message receiver.
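
One way the teardown could look (a sketch of the idea, not necessarily how the PR implements the "reset" directive): the request that gives up resets the stream, which unblocks the receiver's pending read with an error; the receiver then fails every remaining waiter and exits instead of leaving the pipeline clogged behind replies nobody will consume.

```go
package dhtpipe

import "context"

type result struct {
	reply interface{}
	err   error
}

type msgReader interface{ ReadMsg() (interface{}, error) }

type sender struct {
	rch chan chan result // waiters, in write order
}

// receiver is the pump with teardown: once a read fails (e.g. because a
// timed-out request reset the stream), every queued waiter is failed with
// the same error and the goroutine exits. The write path must also stop
// registering new waiters on the dead stream; that part is omitted here.
func (s *sender) receiver(ctx context.Context, r msgReader) {
	for {
		var waiter chan result
		select {
		case waiter = <-s.rch:
		case <-ctx.Done():
			return
		}

		reply, err := r.ReadMsg()
		waiter <- result{reply, err}
		if err != nil {
			for {
				select {
				case w := <-s.rch:
					w <- result{nil, err}
				default:
					return
				}
			}
		}
	}
}
```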

dht_net.go Outdated
case next, ok = <-rch:
if !ok {
return
}

Contributor:

This might be a little cleaner if we move the logic at the bottom of the loop into this case; it would save us from having to pre-declare those variables.

Contributor Author:

ok, was just trying to avoid excessive indentation.

@whyrusleeping (Contributor):

This tentatively looks good to me. I'd like to see some tests added that exercise some of the different scenarios (single vs. pipelined, slow handlers).

How complicated this is getting makes me think we should make a new DHT protocol handler (version bump) that has message IDs in the protobuf, which should simplify this significantly. Then, in the future when we have a 1.0 release, we can drop the old code. Really, this should have been using a message-based interface from the get-go.

cc @Stebalien

@vyzo (Contributor, Author) commented Sep 22, 2017

I will add some more tests as these are important cases we want to make sure we handle right. I am testing with a live node for now.

Re: message IDs: yes, that would make the pump goroutine completely unnecessary and let us handle it with just two locks (read and write) and a queue map (for messages read out of order under contention).
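
For illustration, what request/response matching could look like if a protocol bump added a message ID to the protobuf (entirely hypothetical; the current wire format has no such field): replies may arrive in any order, so a map from ID to waiter replaces the FIFO pump, and only a read lock and a write lock are needed.

```go
package dhtpipe

import "sync"

type reply struct {
	id  uint64 // would come from a new protobuf field
	msg interface{}
	err error
}

type idSender struct {
	wlk    sync.Mutex // serializes writes to the stream
	nextID uint64

	plk     sync.Mutex            // guards pending
	pending map[uint64]chan reply // waiters keyed by request ID
}

// register allocates an ID for an outgoing request and a channel on which
// its reply will be delivered.
func (s *idSender) register() (uint64, chan reply) {
	ch := make(chan reply, 1)
	s.plk.Lock()
	s.nextID++
	id := s.nextID
	s.pending[id] = ch
	s.plk.Unlock()
	return id, ch
}

// dispatch is called by the single reader loop for every reply it decodes;
// out-of-order replies simply find their waiter by ID.
func (s *idSender) dispatch(r reply) {
	s.plk.Lock()
	ch, ok := s.pending[r.id]
	delete(s.pending, r.id)
	s.plk.Unlock()
	if ok {
		ch <- r // buffered, so a caller that gave up cannot block the reader
	}
}
```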

@vyzo (Contributor, Author) commented Sep 22, 2017

Some analysis of performance in concurrent requests:

  • go-ipfs master, without pipelining:
{"event":"findPeerSingleBegin","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T08:32:30.917943022Z"}
{"event":"findPeerSingleBegin","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T08:32:30.918892911Z"}
{"event":"findPeerSingleBegin","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T08:32:30.920377591Z"}
{"duration":157141601,"event":"findPeerSingle","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T08:32:31.076695602Z"}
{"duration":313461714,"event":"findPeerSingle","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T08:32:31.231225466Z"}
{"duration":509875531,"event":"findPeerSingle","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T08:32:31.428626557Z"}
  • with request pipelining:
{"event":"findPeerSingleBegin","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T14:54:58.09138292Z"}
{"event":"findPeerSingleBegin","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T14:54:58.091476318Z"}
{"event":"findPeerSingleBegin","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T14:54:58.09169362Z"}
{"duration":107551379,"event":"findPeerSingle","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T14:54:58.199282563Z"}
{"duration":175856230,"event":"findPeerSingle","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T14:54:58.267292144Z"}
{"duration":256608137,"event":"findPeerSingle","peerID":"QmdFJrz6FXQU15UVCV1Evs4Ftq8xVvAtxfacC23cXySpQZ","system":"dht","time":"2017-09-21T14:54:58.348002273Z"}

@vyzo mentioned this pull request Sep 22, 2017
@whyrusleeping (Contributor):

@vyzo update here?

@vyzo (Contributor, Author) commented Oct 14, 2017

will get back to it soon.

@whyrusleeping added the status/ready (Ready to be worked) label Oct 17, 2017
@whyrusleeping removed the status/ready (Ready to be worked) label Oct 17, 2017
@whyrusleeping reopened this Oct 17, 2017
@whyrusleeping added the status/in-progress (In progress) label Oct 17, 2017
@ghost removed the status/in-progress (In progress) label Oct 17, 2017
@ghost assigned whyrusleeping Oct 17, 2017
@ghost added the status/in-progress (In progress) label Oct 17, 2017
@whyrusleeping (Contributor):

whoops, sorry.

@vyzo (Contributor, Author) commented Sep 6, 2018

So I have rebased this, but I don't know what this business with the delayed tests is.
Another artifact of the rebase is that the bufferedDelimitedWriter use has disappeared -- I don't know what this business is either; it will need to be repatched in.

@vyzo force-pushed the feat/request-pipeline branch from e174e62 to 4737c8a on September 6, 2018 09:37
@bigs (Contributor) commented Sep 6, 2018

@vyzo yeah, that's the same situation I've run into. Anything I can assist with?

@vyzo (Contributor, Author) commented Sep 7, 2018

@bigs can you repatch in the buffered writer stuff? Seems like this is important.

@Stebalien (Member):

> So I have rebased this, but I don't know what this business with the delayed tests is.

@laser was trying to test network latency. The first attempt, using delayed blockstores, wasn't really sufficient. Later tests used the mocknet but I don't think they ever really removed the delayed datastore stuff (we can probably get rid of that).

@ghost added the topic/filecoin (Topic filecoin) label Oct 26, 2018
@anacrolix self-assigned this Jan 21, 2019
@raulk (Member) commented Feb 11, 2019

@anacrolix – were you intending to review this PR? We need to merge master into it. The conflicts don't look too bad.

@anacrolix (Contributor):

I'll try to get on to this now.

@anacrolix (Contributor):

I've been poking around this code in master. Why don't we just send a single request per stream?

@whyrusleeping (Contributor):

@anacrolix because opening a new stream is expensive right now. Once we get multistream 2, we can just do that.

@Stebalien (Member):

See: #167

@anacrolix (Contributor):

I am experimenting with an alternative to this that optimistically reuses streams and creates new ones if they're blocked. I believe it won't suffer from pipelining issues, cross-polluting timeouts, etc.

@anacrolix mentioned this pull request Feb 20, 2019
@bigs removed their assignment Jan 29, 2020
@Stebalien (Member):

The DHT code has moved on and this would need to be re-implemented.

@Stebalien closed this May 29, 2020