
Connect fails for /dnsaddr/.../wss but not /dns[4|6]/../wss #9204

Closed · 3 tasks done
olizilla opened this issue Aug 17, 2022 · 13 comments
Labels
kind/bug (A bug in existing code, including security flaws) · need/triage (Needs initial labeling and prioritization) · P1 (High: Likely tackled by core team if no one steps up)

Comments

@olizilla (Member)

Checklist

Installation method

ipfs-update or dist.ipfs.tech

Version

Kubo version: 0.14.0
Repo version: 12
System version: arm64/darwin
Golang version: go1.18.3

Config

No response

Description

We have a dnsaddr TXT record configured to point to dns4 & dns6 multiaddrs.

dig +short TXT _dnsaddr.elastic.dag.house
"dnsaddr=/dns6/elastic.dag.house/tcp/443/wss/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm"
"dnsaddr=/dns4/elastic.dag.house/tcp/443/wss/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm"

swarm connect fails:

$ ipfs swarm connect /dnsaddr/elastic.dag.house/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm
Error: connect QmQzqxhK82kAmKvARFZSkUVS6fo9sySaiogAnx5EnZ6ZmC failure: failed to dial QmQzqxhK82kAmKvARFZSkUVS6fo9sySaiogAnx5EnZ6ZmC:
  * [/ip6/2606:4700::6812:147e/tcp/443/wss] dial tcp [2606:4700::6812:147e]:443: connect: no route to host
  * [/ip4/127.0.0.1/tcp/3000/ws] dial tcp 127.0.0.1:3000: connect: connection refused
  * [/ip6/2606:4700::6812:157e/tcp/443/wss] dial tcp [2606:4700::6812:157e]:443: connect: no route to host
  * [/ip4/104.18.21.126/tcp/443/wss] remote error: tls: handshake failure
  * [/ip4/104.18.20.126/tcp/443/wss] remote error: tls: handshake failure
  * [/ip4/10.0.2.40/tcp/3000/ws] dial tcp 10.0.2.40:3000: i/o timeout

The intention is to let the user's node pick either ip4 or ip6.

However, attempting to connect to the /dns6 multiaddr directly succeeds:

$ ipfs swarm connect /dns6/elastic.dag.house/tcp/443/wss/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm
connect QmQzqxhK82kAmKvARFZSkUVS6fo9sySaiogAnx5EnZ6ZmC success

It's expected that connecting to the /dnsaddr also succeeds, but it appears to be over-resolving to the /ip multiaddrs and losing the domain info needed to complete the TLS handshake.
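
For illustration, here is a minimal Go sketch using the public go-multiaddr and go-multiaddr-dns APIs. The suspected over-resolution happens inside go-libp2p's dialer, so this only mirrors the behaviour by resolving recursively:

package main

import (
	"context"
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
	madns "github.com/multiformats/go-multiaddr-dns"
)

func main() {
	// The /dnsaddr from the report above.
	addr := ma.StringCast("/dnsaddr/elastic.dag.house/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm")

	// Resolve recursively: /dnsaddr expands via the _dnsaddr TXT record,
	// and the resulting /dns4 and /dns6 components are resolved further,
	// all the way down to /ip4 and /ip6.
	resolved, err := madns.DefaultResolver.Resolve(context.Background(), addr)
	if err != nil {
		panic(err)
	}
	for _, a := range resolved {
		// Prints /ip4/.../tcp/443/wss/... and /ip6/.../tcp/443/wss/...
		// addresses; the hostname the /wss TLS handshake needs for SNI
		// is gone at this point.
		fmt.Println(a)
	}
}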

@olizilla added the kind/bug and need/triage labels on Aug 17, 2022
@Jorropo (Contributor) commented Aug 17, 2022

I am confident this is a libp2p bug: the dnsaddr code is missing the check that keeps websocket addresses from being resolved down to IPs.

/cc @marten-seemann

@marten-seemann (Member)

Was this fixed by libp2p/go-libp2p#1592?

@Jorropo (Contributor) commented Aug 17, 2022

@marten-seemann I've checked on Kubo 0.15.0-rc1 (which uses go-libp2p 0.21, which has the fix according to GitHub) and it still reproduces.

@marten-seemann (Member)

Are you using an nginx in front of your kubo node, or do you have the cert configured in libp2p directly?

@Jorropo (Contributor) commented Aug 17, 2022

Are you using an nginx in front of your kubo node

Is this question for me? I don't think the question makes sense: an nginx in front would be dag.house's, and they aren't running Kubo but Elastic IPFS.
If your question was about a "normal" (non-reverse) proxy in front of my own node: no, I use naked connections.

or do you have the cert configured in libp2p directly?

$ curl -vvv https://elastic.dag.house
*   Trying 2606:4700::6812:147e:443...
* TCP_NODELAY set
* Connected to elastic.dag.house (2606:4700::6812:147e) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; ST=California; L=San Francisco; O=Cloudflare, Inc.; CN=sni.cloudflaressl.com
*  start date: Feb 11 00:00:00 2022 GMT
*  expire date: Feb 10 23:59:59 2023 GMT
*  subjectAltName: host "elastic.dag.house" matched cert's "*.dag.house"
*  issuer: C=US; O=Cloudflare, Inc.; CN=Cloudflare Inc ECC CA-3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55e88e9212f0)
> GET / HTTP/2
> Host: elastic.dag.house
> user-agent: curl/7.68.0
> accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 256)!

Curl thinks so.

@lidel moved this to 🏃‍♀️ In Progress in IPFS Shipyard Team on Aug 17, 2022
@lidel added the P1 (High: Likely tackled by core team if no one steps up) label on Aug 17, 2022
@lidel (Member) commented Aug 17, 2022

fwiw, I confirmed that manually dialing the /dns4 and /dns6 addresses from the DNSAddr TXT records connects fine (Kubo 0.14):

$ dig +short TXT _dnsaddr.elastic.dag.house
"dnsaddr=/dns4/elastic.dag.house/tcp/443/wss/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm"
"dnsaddr=/dns6/elastic.dag.house/tcp/443/wss/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm"
$ ipfs swarm connect /dns4/elastic.dag.house/tcp/443/wss/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm
connect QmQzqxhK82kAmKvARFZSkUVS6fo9sySaiogAnx5EnZ6ZmC success
$ ipfs swarm peers | grep QmQzqxhK82kAmKvARFZSkUVS6fo9sySaiogAnx5EnZ6ZmC
/ip4/104.18.21.126/tcp/443/wss/p2p/QmQzqxhK82kAmKvARFZSkUVS6fo9sySaiogAnx5EnZ6ZmC

The error occurs only when /dnsaddr resolution is added to the picture.
Both Kubo 0.14 and 0.15.0-rc1 (with go-libp2p 0.21) produce:

$ ipfs swarm connect /dnsaddr/elastic.dag.house/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm
Error: connect QmQzqxhK82kAmKvARFZSkUVS6fo9sySaiogAnx5EnZ6ZmC failure: failed to dial QmQzqxhK82kAmKvARFZSkUVS6fo9sySaiogAnx5EnZ6ZmC:
  * [/ip6/2606:4700::6812:157e/tcp/443/wss] dial tcp [2606:4700::6812:157e]:443: connect: network is unreachable
  * [/ip6/2606:4700::6812:147e/tcp/443/wss] dial tcp [2606:4700::6812:147e]:443: connect: network is unreachable
  * [/ip4/104.18.20.126/tcp/443/wss] remote error: tls: handshake failure
  * [/ip4/104.18.21.126/tcp/443/wss] remote error: tls: handshake failure

(we did not hit the problem with /dnsaddr/bootstrap.libp2p.io because it has TCP and QUIC in addition to /wss)

@lidel changed the title from "Connect fails for dnsaddr pointing to dns multiaddrs with tls" to "Connect fails for /dnsaddr/.../wss but not /dns[4|6]/../wss" on Aug 17, 2022
@olizilla (Member, Author) commented Aug 17, 2022

js-ipfs fails on the same inputs. I was going to raise an issue there too, but it turns out js-ipfs doesn't support dnsaddrs: ipfs/js-ipfs#2289

❯ js-ipfs swarm connect /dnsaddr/elastic.dag.house/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm
The dial request has no valid addresses

❯ js-ipfs swarm connect /dns4/elastic.dag.house/tcp/443/wss/p2p/bafzbeibhqavlasjc7dvbiopygwncnrtvjd2xmryk5laib7zyjor6kf3avm
/dns4/elastic.dag.house/tcp/443/wss/p2p/QmQzqxhK82kAmKvARFZSkUVS6fo9sySaiogAnx5EnZ6ZmC

@aschmahmann (Contributor)

@marten-seemann unfortunately libp2p/go-libp2p#1592 is insufficient. There were a couple of deficiencies called out in that PR (libp2p/go-libp2p#1592 (comment)), e.g. #9199 and this issue.

I suspect @olizilla is correct: the peerstore is storing the "root" address and all fully resolved addresses, but none of the intermediate ones, which is what would be needed under the current setup in go-libp2p. Being smarter about address resolution (e.g. as suggested in libp2p/go-libp2p#1597 and libp2p/go-libp2p#1592) would be better, though.

@marten-seemann (Member)

Right, libp2p/go-libp2p#1597 would be the right fix for that. At the moment, we're aggressively resolving any address down to the IP. What we should do is resolve /dnsaddr down to a /dns, and leave the rest to the transport.
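
A hedged sketch of what such a transport-aware stop condition could look like; needsHostname is a hypothetical helper for illustration, not go-libp2p's actual code:

package resolve

import (
	ma "github.com/multiformats/go-multiaddr"
)

// needsHostname reports whether an address's transport must see the DNS
// name at dial time. /wss is the case at hand: the TLS client needs the
// hostname for SNI and certificate verification, so /dns, /dns4 and /dns6
// components must not be resolved down to /ip4 and /ip6.
// (Hypothetical helper, not go-libp2p's implementation.)
func needsHostname(addr ma.Multiaddr) bool {
	_, err := addr.ValueForProtocol(ma.P_WSS)
	return err == nil
}

Under that rule the dialer would still expand /dnsaddr via its TXT record, but would hand an address like /dns6/elastic.dag.house/tcp/443/wss/p2p/... to the WebSocket transport with the hostname intact.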

@Jorropo (Contributor) commented Aug 17, 2022

What we should do is resolve /dnsaddr down to a /dns, and leave the rest to the transport.

If done incorrectly, this would create issues with #9199: the transport "magic" DI code would need to support a new resolver interface so the transport can call into the DoH implementations we may use.
This would also require an option to pass this DNS resolver interface (whatever it may be) to libp2p.
I guess it would pass this object: https://pkg.go.dev/github.com/libp2p/go-libp2p#MultiaddrResolver
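
For reference, wiring a resolver in through that option looks roughly like this. A sketch against the go-libp2p 0.2x option signature (which takes a *madns.Resolver); the stdlib resolver stands in for a DoH-backed madns.BasicResolver:

package main

import (
	"net"

	"github.com/libp2p/go-libp2p"
	madns "github.com/multiformats/go-multiaddr-dns"
)

func main() {
	// Build a madns.Resolver around the stdlib resolver. A DoH-backed
	// implementation of madns.BasicResolver (like the one Kubo uses)
	// could be swapped in here instead.
	rslv, err := madns.NewResolver(madns.WithDefaultResolver(net.DefaultResolver))
	if err != nil {
		panic(err)
	}

	// Hand it to libp2p via the MultiaddrResolver option linked above.
	host, err := libp2p.New(libp2p.MultiaddrResolver(rslv))
	if err != nil {
		panic(err)
	}
	defer host.Close()
}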

@aschmahmann (Contributor)

@Jorropo AFAICT that isn't related to this issue; it's a separate, already-existing issue related to #9199. Also, a PR dealing with the DI code is already linked from that PR: libp2p/go-libp2p#1607.

@BigLep moved this from 🏃‍♀️ In Progress to 🥞 Todo in IPFS Shipyard Team on Aug 30, 2022
@BigLep (Contributor) commented Oct 4, 2022

2022-10-04: is anyone dependent on this landing before IPFS Camp 2022 (e.g., does anyone have a talk dependent on WebSockets)?

@marten-seemann (Member)

@BigLep This should have been fixed by go-libp2p v0.23. libp2p/go-libp2p#1597, to be more specific.

@BigLep closed this as completed on Nov 17, 2022
Repository owner moved this from 🥞 Todo to 🎉 Done in IPFS Shipyard Team on Nov 17, 2022