Save and load block data using IPFS #374
Conversation
Codecov Report
@@ Coverage Diff @@
## master #374 +/- ##
==========================================
- Coverage 61.81% 61.66% -0.16%
==========================================
Files 262 263 +1
Lines 22978 23003 +25
==========================================
- Hits 14204 14184 -20
- Misses 7277 7303 +26
- Partials 1497 1516 +19
I wrote a quick test to figure out if we could use the offline/online versions of the ipfs API object interchangeably, and it looks like we can, at least with the mock ipfs api object, as the test passes: https://gist.github.com/evan-forbes/ad8fd510874f2a8c7d990ab080087da9 Although, after discussing with @liamsi, it seems that making API changes to incorporate offline and online usage is outside the scope of this PR and will be handled in a separate ADR/issue. There also needs to be more testing and investigation into the exact impact of using the offline vs the online version of the API. Therefore, I'm marking this PR ready for review.
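For context, a minimal sketch (not the linked gist; it only uses the public go-blockservice/go-merkledag constructors, and every name below is illustrative) of the kind of round-trip check involved — writing through one DAG service and reading it back through an offline DAG service backed by the same blockstore:
package store_test

import (
	"bytes"
	"context"
	"testing"

	bsrv "github.com/ipfs/go-blockservice"
	ds "github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"
	blockstore "github.com/ipfs/go-ipfs-blockstore"
	offline "github.com/ipfs/go-ipfs-exchange-offline"
	dag "github.com/ipfs/go-merkledag"
)

// TestOfflineReadsSharedBlockstore checks that a node added via one DAG
// service is readable via a second, offline-exchange-backed DAG service,
// as long as both wrap the same blockstore.
func TestOfflineReadsSharedBlockstore(t *testing.T) {
	ctx := context.Background()

	// One shared, in-memory blockstore behind both DAG services.
	bstore := blockstore.NewBlockstore(dssync.MutexWrap(ds.NewMapDatastore()))

	writer := dag.NewDAGService(bsrv.New(bstore, offline.Exchange(bstore)))
	reader := dag.NewDAGService(bsrv.New(bstore, offline.Exchange(bstore)))

	nd := dag.NodeWithData([]byte("block data"))
	if err := writer.Add(ctx, nd); err != nil {
		t.Fatal(err)
	}

	got, err := reader.Get(ctx, nd.Cid())
	if err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(got.RawData(), nd.RawData()) {
		t.Fatal("round-trip mismatch")
	}
}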
// NOTE: The existence of meta should imply the existence of the
// block. So, make sure meta is only saved after blocks are saved.
panic(fmt.Sprintf("Error reading block: %v", err))
return nil, err
We need to have an error check here to return a meaningful error to the caller in case a block is not found locally
Just for my understanding: if the dag passed in has the offline mode enabled, is that sufficient to ensure that we failed to find the block locally? Or is there some sentinel error or error type to check against?
good point, resolved here 507f04d
Just for my understanding: if the dag passed in has the offline mode enabled, is that sufficient to ensure that we failed to find the block locally? Or is there some sentinel error or error type to check against?
That's a great point and I think we should have a test that ensures that the correct error is thrown and the timeout is not reached. Added here 51c7f44
if the dag passed in has the offline mode enabled, is that sufficient to ensure that we failed to find the block locally?
I guess so, but there could be other errors as well, e.g. related to erasure coding. Also, the retrieval code has its own error type, but usually, for DAG-related not-found errors, I check for ipld.ErrNotFound.
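For illustration, a hedged sketch of that check (ErrBlockNotFound and lookupBlockData are hypothetical names, not from this codebase), mapping ipld.ErrNotFound onto a caller-friendly "not found locally" error while letting other failures, e.g. erasure-coding related ones, pass through:
import (
	"context"
	"errors"

	"github.com/ipfs/go-cid"
	format "github.com/ipfs/go-ipld-format"
)

// ErrBlockNotFound (hypothetical) is what the store would hand back to callers.
var ErrBlockNotFound = errors.New("store: block data not found locally")

// lookupBlockData (hypothetical) translates the DAG-level sentinel into a
// store-level error; any other error is returned unchanged.
func lookupBlockData(ctx context.Context, dagSvc format.DAGService, root cid.Cid) (format.Node, error) {
	nd, err := dagSvc.Get(ctx, root)
	if errors.Is(err, format.ErrNotFound) {
		return nil, ErrBlockNotFound
	}
	if err != nil {
		return nil, err
	}
	return nd, nil
}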
// If all the nodes restart after committing a block,
// we need this to reload the precommits to catch-up nodes to the
// most recent height. Otherwise they'd stall at H-1.
func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
func (bs *BlockStore) SaveBlock(
I am kinda concerned about this method, as we already have PutBlock being called in consensus, and I don't think relying just on Blockstore.SaveBlock for such a network-intensive operation is a good idea. If we save the block two times (overhead) just to persist the Meta, it sounds reasonable to me now to change the blockstore into a MetaStore or HeaderStore and no longer handle block data reads/writes at all.
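Very roughly, and purely to illustrate the direction (the interface below is hypothetical, not an API in the codebase; types refers to the existing tendermint types package), such a store would only ever deal with metadata:
// HeaderStore is a hypothetical sketch of a metadata-only store: block data
// itself would live in (and be retrieved from) IPFS, never in this store.
type HeaderStore interface {
	// SaveHeader persists only the block meta and the seen commit.
	SaveHeader(meta *types.BlockMeta, seenCommit *types.Commit)
	// LoadHeader returns the meta for the given height, or nil if unknown.
	LoadHeader(height int64) *types.BlockMeta
	Height() int64
	Base() int64
}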
it sounds reasonable to me now to change the blockstore into a MetaStore or HeaderStore and no longer handle block data reads/writes at all.
You are right about this. But we currently already store the block twice. We need to change this indeed.
The question is if in this PR or in a separate one. Swapping the blockstore with a header store is basically reviving and slightly rethinking @marbar3778's PR: #218.
My suggestion was to tackle this in a separate PR but if you feel strongly about this and @evan-forbes agrees, I'm OK with doing this in this PR as well.
This actually might require more thought. Also, the API @renaynay started, which will enable retrieving the state-relevant portion, might be necessary for this (prioritized https://github.com/lazyledger/lazyledger-core/issues/221 to be included in the next larger milestone).
See: #218 (comment)
But to not be blocked on that API (https://github.com/lazyledger/lazyledger-core/issues/221), we could make it so that wherever the Txs are needed for replay, the full block (instead of the reserved-nid Txs) is downloaded via IPFS (if not locally available anyway).
Yeah, Ismail also shared your concern over this, particularly when loading block data. I think it's improved slightly by using the offline version of the dag api 8619f27. Unfortunately, until we add some caching mechanism, I don't think we have another way to ensure that the data in the block being saved is added to IPFS other than PutBlock.
Out of curiosity, is saving data to IPFS that network-intensive? I know there's certainly overhead, as opposed to saving to a local-only DB, but unlike fetching data there's no blocking involved, right?
I really like the idea of breaking up the Blockstore struct into two different entities, one that strictly uses the local store, and another focused on retrieving block data from IPFS. I think that makes a lot of sense for LazyLedger. However, I do feel that is out of the scope of this PR. This PR's main purpose is to get us to the next step of removing BlockID and to stop saving block data twice.
I am ok to tackle this in a separate PR.
Out of curiosity, is saving data to IPFS that network-intensive?
It is, as down the road Bitswap provides the data to the DHT and that's kinda network-intensive. Further, after #375 is merged, PutBlock may even become blocking on providing. The providing itself is also a concern, as it definitely should be out of the Blockstore, and currently there is no way to disable it. Luckily, the offline dag thing should solve that.
Again, I am fully OK to work on this in another PR, as a proper solution for this is not obvious and would probably require some level of refactoring across multiple components.
Yeah, let's try to tackle the story behind this in another PR. That PR will require some clearly outlined reasoning and an implementation plan in an issue or maybe even (as part of) a full-blown ADR (specifically around storage, i.e. storage for validator nodes, storage for partial nodes that passively participate in the consensus–those that will replace the current tendermint fullnodes).
I think on master, only block-proposers call PutBlock. The other nodes currently only explicitly store the block data via the tendermint store (#218 tries to change that). I think for this PR it's OK to skip the online/offline story.
BTW, I could imagine that if PutBlock is called twice and there is no update on the data, the network IO would not really be triggered (not sure though, would need to look into the interplay between the ipfs block store and bitswap).
store/store.go
Outdated
}

// NewBlockStore returns a new BlockStore with the given DB,
// initialized to the last height that was committed to the DB.
func NewBlockStore(db dbm.DB, dagAPI ipld.DAGService) *BlockStore {
func NewBlockStore(db dbm.DB, dagAPI format.DAGService) *BlockStore {
We need to ensure we only pass an offline DAG in here.
Good idea. I changed the API provider to have an online or offline option, and then started only using the offline version for the blockstore: 8619f27
although, I couldn't find a way to quickly check inside the function that the DAGService is offline
This actually is not trivial and requires passing IPFS's blockstore and just the DAG 😞
This is how the func for an offline DAG would look:
func OfflineDAG(bstore blockstore.Blockstore) format.DAGService {
return dag.NewDAGService(bsrv.New(bstore, offline.Exchange(bstore)))
}
ipfs/provider.go
Outdated
type APIProvider func(online bool) (coreiface.APIDagService, io.Closer, error)

type onlineOfflineProvider struct {
	online, offline coreiface.APIDagService
	closer          io.Closer
}

func NewOnlineOfflineProvider(online coreiface.CoreAPI, closer io.Closer) (APIProvider, error) {
	offline, err := online.WithOptions(OfflineOption)
	if err != nil {
		return nil, err
	}

	provider := onlineOfflineProvider{
		online:  online.Dag(),
		offline: offline.Dag(),
	}

	return provider.Provide, nil
}

func (o onlineOfflineProvider) Provide(online bool) (coreiface.APIDagService, io.Closer, error) {
	switch online {
	case true:
		return o.online, o.closer, nil
	case false:
		return o.offline, o.closer, nil
	}
	// the compiler doesn't know this code is unreachable
	return nil, nil, nil
}

func OfflineOption(settings *options.ApiSettings) error {
	settings.Offline = true
	return nil
}
this allows us to still be able to use a dag-only mock, and to have an offline APIDagService for the blockstore
Sorry that I am responding to this quite late, but I am against this way of having an offline DAG. Its usage is confusing; it feels like we have two IPFSs, offline and online, instead of one, and we provide the two of them independently in the node constructor.
Instead, I would suggest going this way:
func OfflineDAG(bstore blockstore.Blockstore) format.DAGService {
return dag.NewDAGService(bsrv.New(bstore, offline.Exchange(bstore)))
}
or directly in the constructor:
func NewBlockStore(db dbm.DB, bstore blockstore.Blockstore) *BlockStore {
bs := LoadBlockStoreState(db)
return &BlockStore{
base: bs.Base,
height: bs.Height,
db: db,
dag: dag.NewDAGService(bsrv.New(bstore, offline.Exchange(bstore))),
}
}
It just ensures the 'offline-ness' of our Blockstore by receiving or wrapping an offline blockstore from IPFS. Blockstores wrapping each other also makes more logical sense to me than passing DAGs or even full APIs. And, importantly, this continues our road of decomposing IPFS into pieces; specifically, here we would already rely on IPFS's blockstore instead of the very high-level API. Besides this, we would also simplify testability with an easily mockable Blockstore, and this is just a less-code solution. The blockstore could be taken from the IpfsNode object. Hope that also makes sense to you.
Sorry again for the late response, I should've suggested this earlier.
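For what it's worth, a small hedged sketch of that wiring, assuming the embedded node is a go-ipfs *core.IpfsNode (whose Blockstore field satisfies blockstore.Blockstore); the helper name is made up:
import (
	core "github.com/ipfs/go-ipfs/core"
	dbm "github.com/tendermint/tm-db"
)

// newBlockStoreFromIPFSNode (hypothetical) hands the embedded IPFS node's
// blockstore to the suggested NewBlockStore(db, bstore) constructor, which
// would then wrap it with an offline exchange internally.
func newBlockStoreFromIPFSNode(db dbm.DB, node *core.IpfsNode) *BlockStore {
	return NewBlockStore(db, node.Blockstore)
}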
all bueno, it's better that we get it right the first time! 🙂 I like this better too, it's less hacky and ensures that all store.Blockstores are offline. The thing that I think can be improved is readability. I can see this being incredibly confusing to someone unfamiliar with what we're doing. The word blockstore is used way too much, and the purpose isn't directly obvious. It will get even more confusing when we have to modify the APIProvider to also return a blockstore.Blockstore, which we pass into the store.Blockstore to make a dag, which is also returned by the APIProvider, but we only use that dag api for consensus... We know it's because we want to enforce the online and offline usage of the DAG API, but I think we could make that more readable by doing something like this
// Maybe name this something else, like Gateway or Connector
// APIProvider allows customizable IPFS core APIs.
type APIProvider func() (DAGProvider, io.Closer, error)
// DAGProvider returns an online or offline DAG API
type DAGProvider func(online bool) coreiface.APIDagService
and
func NewBlockStore(db dbm.DB, dagProvider ipfs.DAGProvider) *BlockStore {
bs := LoadBlockStoreState(db)
return &BlockStore{
base: bs.Base,
height: bs.Height,
db: db,
dag: dagProvider(false),
}
}
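And a hedged sketch of how such a DAGProvider closure could be assembled (simplified to return a plain format.DAGService rather than coreiface.APIDagService; names and wiring here are illustrative, not the actual implementation):
import (
	bsrv "github.com/ipfs/go-blockservice"
	blockstore "github.com/ipfs/go-ipfs-blockstore"
	exchange "github.com/ipfs/go-ipfs-exchange-interface"
	offline "github.com/ipfs/go-ipfs-exchange-offline"
	format "github.com/ipfs/go-ipld-format"
	dag "github.com/ipfs/go-merkledag"
)

// NewDAGProvider (hypothetical) closes over a single IPFS blockstore and the
// node's online exchange (e.g. bitswap). The offline branch wraps the same
// blockstore with an offline exchange, so reads never hit the network.
func NewDAGProvider(bstore blockstore.Blockstore, online exchange.Interface) func(bool) format.DAGService {
	return func(wantOnline bool) format.DAGService {
		if wantOnline {
			return dag.NewDAGService(bsrv.New(bstore, online))
		}
		return dag.NewDAGService(bsrv.New(bstore, offline.Exchange(bstore)))
	}
}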
we should for sure use the mechanism you mentioned
func OfflineDAG(bstore blockstore.Blockstore) format.DAGService {
return dag.NewDAGService(bsrv.New(bstore, offline.Exchange(bstore)))
}
in the DAGProvider instead of the one used in onlineOfflineProvider
I guess we could also do something like this too
// maybe name this something different
// APIProvider allows customizable IPFS core APIs.
type APIProvider func() (exchange.Interface, blockstore.Blockstore, io.Closer, error)
and just construct the DAGService wherever needed
The issue with this approach:
type APIProvider func() (exchange.Interface, blockstore.Blockstore, io.Closer, error)
is that I have also started to use another component from IPFS in another PR, so passing the full Node instead of io.Closer above is inevitable. Furthermore, I expect other components to be used by us as well.
Take a look at the new PutBlock semantics here. It now accepts ContentRouting from IPFS/libp2p and I need to pass it down from the IpfsNode.
So overall, to keep readability, we can also rename APIProvider to NodeProvider
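Roughly, and again only as a hedged sketch with hypothetical naming, that would shift the provider from handing out individual pieces to handing out the whole embedded node:
import core "github.com/ipfs/go-ipfs/core"

// NodeProvider (hypothetical) returns the full embedded IPFS node; callers
// pick out what they need (node.DAG, node.Blockstore, node.Routing, ...),
// and the node itself already satisfies io.Closer.
type NodeProvider func() (*core.IpfsNode, error)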
I think #400 can solve our problems
any chance this could be merged instead of cherry-picking it into my PR? I would like to avoid scope creep in my PR, as there are already quite a few changes.
I think it can. The current justified concern is that calling PutBlock in store.SaveBlock could potentially be very network-IO intensive, because PutBlock currently also "provides" to the DHT (think of it as announcing to a bunch of peers so they update their distributed lookup table). Potential workarounds to unblock this faster:
IMO, 1. is clearly preferable. @Wondertan Can you re-visit your PR from the perspective that even this change would be temporary? If there are no further consequences other than it being (temporarily) confusing, we should just capture this in an issue and move on with this and Marko's PR.
How resource-intensive are we talking? If it's a couple of KB or even MB per block that gets garbage-collected after a few seconds, that should be fine IMO.
The networking IO can take up to several minutes (depending on block size and on how many ipfs peers you are connected to). Especially after @Wondertan's change that makes providing more explicit: #375 (previously it was implicitly handled by IPFS: async and also throttled, and we couldn't be sure when it would be done). Providing, and ideally any IPFS networking IO, needs to be decoupled from storing.
I'm updating this PR to the latest master and quickly making the changes @Wondertan suggested. If everything goes smoothly, it should be ready to go within the hour.
The e2e test is failing locally for me, even when temporarily reverting the most recent change in #413, and I'm still trying to debug why.
@liamsi, it is not clear to me now what long-term approach we should take here. It is clear that IPFS brought us an architectural change that combines downloading, saving, and sharing into one operation. Previously, all of these were separate and independent. In general, we should answer whether we want to fully adopt that approach or not. If not, then workarounds like OfflineDAG are inevitable; if yes, additional architectural/design investigation should be done to adopt it properly. Personally, I am in favor of 'yes', as I am already going this way in the DA interface, but how to make that work for the Blockstore is still not clear. Ideally, if 'yes', we should eliminate tm's Blockstore and make it a Headerstore instead, but the path to that is not clear to me right now.
That is what @marbar3778's PR (#218), which is kind of blocked on this change, is trying to achieve. Regarding the long term (which, again, is a bit orthogonal to this PR), it depends a bit on how we decide on the tasks and relations of the node types (celestiaorg/celestia-specs#178). For example: the fact that we even call PutBlock on block proposal stems from the (now invalidated) idea that validators would do DAS (see: celestiaorg/celestia-specs#3 (comment)). Tendermint does not even store the block at that point. In fact, we could now instead fall back to calling PutBlock everywhere where we (or rather the existing tendermint code base) used to call store.SaveBlock. Note that providing can happen completely async again! It is not consensus-critical anymore that DAS can happen as quickly as possible: block production won't depend on DAS anymore. That said, your manual providing is still important: we need to have some control and some clarity over when we write to the DHT (and when it is done).
I don't have a full answer to how the exact long-term solution should look, at least not today ;) We must first decide which node types care about the consensus messages and which only care about data after the block is finalized. Until we decide on this, it is probably safe to assume that only validators and full nodes (actual full nodes that download all data) care about consensus messages, as the alternative is just an optimization (which we should pursue IMO) that allows DAS (partial) nodes to start sampling after they see a proposal (instead of waiting until the commit).
If we stick to this, we still need more control over resources here. E.g., as said above, storing should be blocking, but other aspects like providing to the content routing can surely be async. For triggering rebroadcasting, it would also be good to give node operators some control over general resource consumption. While I like the abstractions of IPFS, I don't like the fact that the networking IO parts are extremely implicit (only after your change did providing become more explicit). So the question for you is: are you OK with merging this PR as it is?
Yes!
Thanks to a lot of help from @liamsi, we managed to make a lot of progress debugging the e2e test. Unfortunately, there are still some issues we have to work out/decide on before moving on. In short, the tests pass if we stop manually providing, but continue to fail if we keep manually providing and revert saving and loading block data via IPFS. That's really weird, given that both master and this branch passed the e2e tests before merging. We should expect reverting the changes to
The second issue we at least found a solution to, although the underlying cause is still not clear. The solution was to not attempt to load the block during rpc requests; instead, we load the block only if the block meta can be found first. While this is hacky, it will be fixed in #218, when only the header is searched for and returned via rpc. More importantly, I still don't understand why this happens. The blockstore is using an offline DAG, and, at least during other tests, the offline dag is working as intended, in that it will not hang if the data is not stored locally; it returns an error relatively quickly.
As far as options go, I'm planning on continuing to debug tomorrow. I might try to bootstrap each of the validators' embedded IPFS nodes. I am also going to try using the old hacky solution of offline dags, to hopefully find precisely what is stopping block data from being transferred over IPFS.
@evan-forbes, regarding the e2e tests stalling on an unknown height: I had a similar issue in #375, and it was reproducible 100% of the time until I reworked context handling, as I mentioned here. Not sure if this is helpful right now, but I guess it might be related.
I guess that this is because PutBlock is async, and the same goes for the other test. I think PutBlock should still be blocking as before, and instead of passing nil for content routing, we need to hijack ipfs.MockRouting() in there.
So, I thought I had fixed it, but that was a fluke. The change does seem to make the e2e tests fail later, but they still fail 98% of the time locally. I still don't know which change introduced in this PR is causing the test to stall. If a dedicated DHT bootstrap node is used as the only bootstrap peer, or all bootstrap peers are removed, then the e2e test passes. Both might be similar to just turning off providing, which also makes the e2e test pass.
typical logs
p2p/ipld/write.go
Outdated
rowRoots := eds.RowRoots()
colRoots := eds.ColumnRoots()

// commit the batch to ipfs
err = batchAdder.Commit()
if err != nil {
	return err
}

prov := newProvider(ctx, croute, int32(squareSize*4), logger.With("height", block.Height))
for _, root := range rowRoots {
	prov.Provide(plugin.MustCidFromNamespacedSha256(root))
}
for _, root := range colRoots {
	prov.Provide(plugin.MustCidFromNamespacedSha256(root))
}
Providing does not require nodes to be committed, and this could potentially increase PutBlock execution time on multicore systems, as before starting to provide we wait until all the data is written. We should start providing as early as possible.
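A hedged sketch of the reordering being suggested, reusing the identifiers from the diff above (so not a self-contained snippet): start providing as soon as the roots are known, and only then block on the batch commit:
// Kick off providing as early as possible: the row/column roots are already
// known before the batch is committed, so announcing them need not wait.
prov := newProvider(ctx, croute, int32(squareSize*4), logger.With("height", block.Height))
for _, root := range rowRoots {
	prov.Provide(plugin.MustCidFromNamespacedSha256(root))
}
for _, root := range colRoots {
	prov.Provide(plugin.MustCidFromNamespacedSha256(root))
}

// Only now block on committing the batch to ipfs.
err = batchAdder.Commit()
if err != nil {
	return err
}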
force-pushed from d11a26b to befb3c6
sorry for the force push, I didn't find the bug until restarting from scratch, and the old branch was a little messy from debug attempts. Besides some minor cleaning up, I don't think I changed anything else besides the bug fix. Still, I'd feel better if this was given a quick glance over by someone else before merging, thanks!
I'll surely take a look. Also, what's the problem with force pushing, that you're sorry for it? IMO clean history is important.
But can you explain what the bugs were, specifically? For
You're right, clean history is important! In other scenarios, like strictly cleaning up the history before merging or asking for a review, I don't think it's that big of a deal. In this case, I restarted from scratch, so while the important changes are the same, the only way for a reviewer to know for sure is to basically re-review each line of code. If I had made the changes via normal commits, then reviewers could view the precise changes and have confidence in them without having to re-review each line of code. Typically, I squash merge, as that ensures that every commit added to master passes tests/is in a working state. I didn't mean to bring up a very bikesheddy issue though. I'm just apologizing for basically asking for a re-review after it was already approved. 🤷
As for the e2e test, I'm still not 100% sure, as the logs returned are not deterministic and I can't manage to recreate it in a test. Sometimes, after waiting long enough to time out, we get logs that say
other times, we get opaque peer disconnection logs that only say
With the current design, every
Looks good to me! Only left some questions
There are a lot of upcoming changes that diverge from the original needs which spawned this PR a month ago, and those changes are not yet fully decided upon. That being said, we will still be able to use the changes in this PR in at least a few, if not all, of the planned node types. The changes here are made significantly more useful once we incorporate the decoupling of providing introduced in #427. For those reasons, we're merging this now, but I expect that we will need further changes to both the mechanics and the API.
This reverts commit 8da1644.
* Revert "Remove the PartSetHeader from VoteSetMaj23, VoteSetBits, and state.State (#479)" This reverts commit 3ba47c2. * Revert "Stop Signing over the `PartSetHeader` for Votes and remove it from the `Header` (#457)" This reverts commit 818be04. * Revert "Decouple PartSetHeader from BlockID (#441)" This reverts commit 9d4265d. * Revert "Save and load block data using IPFS (#374)" This reverts commit 8da1644. * fix merge error * Revert "Add the ipfs dag api object in Blockstore (#356)" and get rid of embedded ipfs objects This reverts commit 40acb17. * remove ipfs node provider from new node * stop init ipfs repos * remove IPFS node config * clean up remaining ipfs stuff
Description
This PR builds off of #372 and #356 to save and load block data via IPFS. It should be noted that it does not completely remove the use of block parts or the PartSetHeader. It should also be noted that it is not yet explicit or configurable as to when it retrieves data from IPFS, per the request of #373.

Closes: #373