cache parsed neuron data #20
Some more thoughts on the above. It seems to me that both types of caching would be interesting. For the request caching, I would think one should basically create a directory hierarchy, starting from a root directory specified by an option such as options(catmaid.cache.root="/path/to/cache"), that mirrors the request url. Underneath that there should be an rds object named by the md5 hash of the content (or perhaps the etag). One could then imagine having a second option, options(catmaid.cache.expiry=3600), which sets the cache expiry time in seconds.
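A rough sketch of how that request-level cache could be laid out, assuming catmaid.cache.root holds a path and using the digest package for md5 hashing (the helper names here are hypothetical):

```r
# Sketch only: disk cache for raw requests, mirroring the request url as a
# directory hierarchy under the cache root suggested above.
library(digest)

request_cache_dir <- function(url) {
  root <- getOption("catmaid.cache.root", "~/.catmaid-cache")
  file.path(root, gsub("[^A-Za-z0-9._-]+", "/", url))
}

cache_store <- function(url, content) {
  dir <- request_cache_dir(url)
  dir.create(dir, recursive = TRUE, showWarnings = FALSE)
  # name the rds file by the md5 hash of the content
  saveRDS(content, file.path(dir, paste0(digest(content, algo = "md5"), ".rds")))
}

cache_fetch <- function(url) {
  files <- list.files(request_cache_dir(url), pattern = "\\.rds$", full.names = TRUE)
  if (!length(files)) return(NULL)
  newest <- files[order(file.mtime(files), decreasing = TRUE)[1]]
  age <- as.numeric(difftime(Sys.time(), file.mtime(newest), units = "secs"))
  # honour the catmaid.cache.expiry option (in seconds)
  if (age > getOption("catmaid.cache.expiry", 3600)) return(NULL)
  readRDS(newest)
}
```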
For the parsed result caching, something like the md5 of the raw contents as the directory and then the function name as the file.
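A minimal sketch of that layout, reusing the same cache root option (the function names and the "parsed" subdirectory are hypothetical):

```r
# Sketch only: cache parsed results under <root>/parsed/<md5 of raw content>/,
# with one rds file per consuming function.
library(digest)

parsed_cache_file <- function(raw_content, fun_name) {
  root <- getOption("catmaid.cache.root", "~/.catmaid-cache")
  dir <- file.path(root, "parsed", digest(raw_content, algo = "md5"))
  dir.create(dir, recursive = TRUE, showWarnings = FALSE)
  file.path(dir, paste0(fun_name, ".rds"))
}

# parse once, then reuse the stored result on later calls
cache_parsed <- function(raw_content, fun_name, parse_fun) {
  f <- parsed_cache_file(raw_content, fun_name)
  if (file.exists(f)) return(readRDS(f))
  res <- parse_fun(raw_content)
  saveRDS(res, f)
  res
}
```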
I have noticed a couple of options for this, but nothing looks perfect so far.
Option 1 has the big advantage of being on CRAN. Either option may need
See #119 for a cache / mock testing approach.
Not sure of a good strategy for this yet. One simple thing would be to hash the returned json and at least save ourselves the trouble of re-parsing.
As a little test, something like this:
breaks down to about
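A sketch of the kind of comparison being described, i.e. timing an md5 hash of the json against a full parse (the choice of jsonlite, digest, and microbenchmark, and the made-up payload, are assumptions here):

```r
# Sketch only: compare the cost of hashing the returned JSON with re-parsing it.
library(jsonlite)
library(digest)
library(microbenchmark)

# made-up payload standing in for a real CATMAID response
json <- toJSON(replicate(1000, list(id = sample(1e6, 1), x = runif(3)),
                         simplify = FALSE))

microbenchmark(
  hash  = digest(json, algo = "md5"),
  parse = fromJSON(json),
  times = 20
)
```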
So it looks like this strategy could give a 3-5x speedup, which sounds interesting. But then the question is where we would do this. If we insert something in catmaid_fetch we could make something very general and save the json parsing. But if we worked with read.neuron.catmaid, we should be able to save everything. Another strategy would be to cache the request itself: this could involve catmaid_fetch again and a hash of the url/post data along with some kind of timestamp checking.
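For that last idea, a minimal sketch of a cached wrapper around catmaid_fetch (the wrapper name, the in-memory store, and the assumption that catmaid_fetch takes path and body arguments are all hypothetical):

```r
# Sketch only: in-memory cache keyed on a hash of the request path/post data,
# with a simple timestamp check against the expiry option.
library(digest)

.request_cache <- new.env(parent = emptyenv())

catmaid_fetch_cached <- function(path, body = NULL, ...) {
  key <- digest(list(path = path, body = body), algo = "md5")
  expiry <- getOption("catmaid.cache.expiry", 3600)
  if (exists(key, envir = .request_cache, inherits = FALSE)) {
    hit <- get(key, envir = .request_cache)
    # serve from cache if the stored result is younger than the expiry
    if (as.numeric(difftime(Sys.time(), hit$time, units = "secs")) < expiry)
      return(hit$value)
  }
  value <- catmaid_fetch(path, body = body, ...)
  assign(key, list(value = value, time = Sys.time()), envir = .request_cache)
  value
}
```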