This is an implementation of the datastore interface backed by Amazon S3.
NOTE: Go plugins only work on Linux and macOS at the moment. You can track the progress of this issue here: golang/go#19282
- Grab a plugin release from the releases section matching your Kubo version and install the plugin file in `~/.ipfs/plugins`.
- Follow the instructions in the plugin's README.md.
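The install step above can be sketched as follows (the `s3plugin.so` file name matches the one referenced later in this README; adjust it to whatever file the release actually ships):

```shell
# Sketch: install a downloaded plugin binary into the local IPFS repo.
# Assumes the release asset has been unpacked into the current directory.
mkdir -p ~/.ipfs/plugins
cp s3plugin.so ~/.ipfs/plugins/
chmod +x ~/.ipfs/plugins/s3plugin.so
```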
The plugin can be manually built/installed for different versions of Kubo (starting with 0.23.0) with:

```bash
git checkout go-ds-s3-plugin/v<kubo-version>
make plugin
make install-plugin
```
To build against a different version, `go get` the Kubo release you want to build for, make sure any other dependencies are aligned to what Kubo uses, then run `make install` and test.
If you are building against dist-released versions of Kubo, you need to build using the same version of Go that was used to build the release (here). If you are building against your own build of Kubo, you must align your plugin to use it.
If you are updating this repo to produce a new version of the plugin:

- Submit a PR so that integration tests run.
- Make a new tag `go-ds-s3-plugin/v<kubo_version>` and push it. This will build and release the plugin's prebuilt binaries.
As Go plugins can be finicky to compile and install correctly, you may want to consider bundling this plugin and rebuilding Kubo. If you do it this way, you won't need to install the `.so` file in your local repo (i.e. following the Building and Installing section above), and you won't need to worry about getting all the versions to match up.
```bash
# We use go modules for everything.
> export GO111MODULE=on

# Clone kubo.
> git clone https://github.com/ipfs/kubo
> cd kubo

# Pull in the datastore plugin (you can specify a version other than latest if you'd like).
> go get github.com/ourzora/go-ds-s3@latest

# Add the plugin to the preload list.
> echo -en "\ns3ds github.com/ourzora/go-ds-s3/plugin 0" >> plugin/loader/preload_list

# Try to build kubo with the plugin (this first pass will fail).
> make build

# Update the deptree.
> go mod tidy

# Now rebuild kubo with the plugin.
> make build

# (Optionally) install kubo.
> make install
```
For a brand new ipfs instance (no data stored yet):

- Copy `s3plugin.so` to `$IPFS_DIR/plugins/go-ds-s3.so` (or run `make install` if you are installing locally).
- Run `ipfs init`.
- Edit `$IPFS_DIR/config` to include s3 details for the first Datastore mount (see Configuration below).
- Overwrite `$IPFS_DIR/datastore_spec` (don't do this on an instance with existing data: it will be lost; see Configuration below).
The config file should include the following:
```json
{
  "Datastore": {
  ...
    "Spec": {
      "mounts": [
        {
          "child": {
            "type": "s3ds",
            "region": "$bucketregion",
            "bucket": "$bucketname",
            "rootDirectory": "$bucketsubdirectory",
            "accessKey": "",
            "secretKey": "",
            "keyTransform": "$keytransformmethod"
          },
          "mountpoint": "/blocks",
          "prefix": "s3.datastore",
          "type": "measure"
        },
```
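For reference, here is one way the fragment above could be completed into a full `Spec`. The second mount mirrors Kubo's default levelds datastore for everything outside `/blocks` (the `prefix` and `compression` values shown are the usual Kubo defaults; verify them against your generated config):

```json
{
  "Datastore": {
    "Spec": {
      "mounts": [
        {
          "child": {
            "type": "s3ds",
            "region": "$bucketregion",
            "bucket": "$bucketname",
            "rootDirectory": "$bucketsubdirectory",
            "accessKey": "",
            "secretKey": "",
            "keyTransform": "$keytransformmethod"
          },
          "mountpoint": "/blocks",
          "prefix": "s3.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    }
  }
}
```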
If the access and secret key are blank, they will be loaded from the usual locations under `~/.aws/`.
The key transform allows you to specify how data is stored behind S3 keys. It must be one of the available methods:

- `default` - no sharding.
- `suffix` - shards by storing block data at a key with a `data` suffix, e.g. `CIQJ7IHPGOFUJT5UMXIW6CUDSNH6AVKMEOXI3UM3VLYJRZUISUMGCXQ/data`.
- `next-to-last/2` - shards by storing block data based on the second-to-last 2 characters of its key, e.g. `CX/CIQJ7IHPGOFUJT5UMXIW6CUDSNH6AVKMEOXI3UM3VLYJRZUISUMGCXQ`.
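As an illustration, the `next-to-last/2` prefix can be computed like this (a minimal sketch reproducing the example above, not the plugin's actual implementation; `shardNextToLast2` is a hypothetical helper):

```go
package main

import "fmt"

// shardNextToLast2 prefixes a block key with the two characters
// immediately before its last character, as in the next-to-last/2
// example above. (Sketch only; not the plugin's real code.)
func shardNextToLast2(key string) string {
	n := len(key)
	return key[n-3:n-1] + "/" + key
}

func main() {
	fmt.Println(shardNextToLast2("CIQJ7IHPGOFUJT5UMXIW6CUDSNH6AVKMEOXI3UM3VLYJRZUISUMGCXQ"))
	// prints CX/CIQJ7IHPGOFUJT5UMXIW6CUDSNH6AVKMEOXI3UM3VLYJRZUISUMGCXQ
}
```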
If you are on another S3-compatible provider, e.g. Linode, then your config should be:
```json
{
  "Datastore": {
  ...
    "Spec": {
      "mounts": [
        {
          "child": {
            "type": "s3ds",
            "region": "$bucketregion",
            "bucket": "$bucketname",
            "rootDirectory": "$bucketsubdirectory",
            "regionEndpoint": "us-east-1.linodeobjects.com",
            "accessKey": "",
            "secretKey": "",
            "keyTransform": "$keytransformmethod"
          },
          "mountpoint": "/blocks",
          "prefix": "s3.datastore",
          "type": "measure"
        },
```
If you are configuring a brand new ipfs instance without any data, you can overwrite the datastore_spec file with:
```json
{"mounts":[{"bucket":"$bucketname","mountpoint":"/blocks","region":"$bucketregion","rootDirectory":"$bucketsubdirectory"},{"mountpoint":"/","path":"datastore","type":"levelds"}],"type":"mount"}
```
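Overwriting the spec file can be sketched as below; `IPFS_PATH` is assumed to point at your IPFS repo (`~/.ipfs` is the usual default), and this destroys the existing spec, so only do it on a brand-new repo:

```shell
# Sketch: overwrite datastore_spec on a brand-new repo only (existing data is lost).
IPFS_PATH="${IPFS_PATH:-$HOME/.ipfs}"
mkdir -p "$IPFS_PATH"
cat > "$IPFS_PATH/datastore_spec" <<'EOF'
{"mounts":[{"bucket":"$bucketname","mountpoint":"/blocks","region":"$bucketregion","rootDirectory":"$bucketsubdirectory"},{"mountpoint":"/","path":"datastore","type":"levelds"}],"type":"mount"}
EOF
```

The quoted heredoc keeps the `$bucket...` placeholders literal; substitute your real values before writing the file.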
Otherwise, you need to do a datastore migration.
Feel free to join in. All are welcome. Open an issue!
This repository falls under the IPFS Code of Conduct.
MIT