PoC `withCache` utility (#600)

Conversation
Just my $0.02, I think it should be separate from the `storefront` object.
@davidhousedev that's great feedback. The real goal of this utility is to provide an easy cache API for 3p requests. And in that sense, it doesn't really make sense to be on the `storefront` object, because they aren't storefront requests. Initially I felt that people should just use the Cache API directly. Perhaps we could provide a helper function that you'd call to pass something to your context:

```js
import {createRequestHandler, createCacheHandler} from '@shopify/remix-oxygen';

export default {
  async fetch(request, env, executionContext) {
    const {storefront} = createStorefrontClient(...);
    const handleRequest = createRequestHandler({
      ...,
      getLoadContext: async () => ({
        ...,
        storefront,
        sanityCache: createCacheHandler(await caches.open('sanity')),
        dynamicYield: createCacheHandler(await caches.open('dynamicYield')),
      }),
    });
  },
};
```

And then from within a route loader:

```js
export async function loader({context: {storefront, sanityCache}}) {
  const [storefrontData, sanityData] = await Promise.all([
    storefront.query(`some query`),
    sanityCache(['some', 'key'], async function () {
      return fetch('sanity-api');
    }), // or some other flavor like Fran mentions above
  ]);
}
```
Yep, 100% agree — inside `storefront` isn't the right place. I also like your notion of a helper function for individual clients in `getLoadContext`. Then we also expose these lower-level things so folks can have more control — and so app integration partners have a happy path to create their own clients (like your examples of Sanity and DY).
Thanks! Questions, and forgive me if they don't make sense: …

Thanks!
I'm not sure about the implications of opening multiple caches. Probably it's not a perf issue but, alternatively, you can just add a prefix in the cacheKey, something like `['sanity', ...]`. Or create a custom helper:

```ts
getLoadContext: () => ({
  storefront,
  sanityCache: (key: string, action: Function) => withCache(['sanity', key], action),
  // or even:
  fetchSanity: (body: string) =>
    withCache(['sanity', body], async () => (await fetch('sanity-api/....', {body})).json()),
})
```

However, I think this would probably be simplified if we provide something like `fetchWithCache`. Just to clarify, I'm not against the idea of using different caches, just providing alternatives here for brainstorming.
Yes, it supports strings for simplicity, or arrays of serializable things.

I don't see any advantage for a null parameter since it can be called conditionally, as you said.

Perhaps in …
@frandiox I think I agree with that. It seems like it probably would be overkill to have a separate cache per 3p. And it's easy enough to add another element to the array key to make sure it's scoped.
@frandiox Good point re: cache namespaces. We can definitely adopt that pattern.
While I see the utility of …
Ah, interesting. I guess the generated SDK eventually calls raw `fetch`. Related question: would you expect to be able to return a `Response`?
To me, I think …
I don't really like the null return option, because sometimes you still want the result (like you want the error), but you don't want it to be cached. I like the other idea of the strategy as part of the return signature. The one downside of that approach is you can't just have something super simple like …
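The "strategy as part of the return signature" idea can be sketched with a toy in-memory cache. Everything below is hypothetical: `withCache`, `CacheNone`, and `CacheShort` are stand-ins for the proposed API, and a plain `Map` plays the role of the platform cache:

```javascript
// Hypothetical sketch, not the real Hydrogen API: a toy in-memory withCache
// where the action decides caching by returning {data, cache}.
const CacheNone = () => ({mode: 'no-store'});
const CacheShort = () => ({mode: 'public', maxAge: 1});

const store = new Map();

async function withCache(key, action) {
  const cacheKey = JSON.stringify(key);
  if (store.has(cacheKey)) return store.get(cacheKey);

  const {data, cache} = await action();
  // The result is always returned to the caller, but only persisted
  // when the action opted in to a cacheable strategy:
  if (cache.mode !== 'no-store') store.set(cacheKey, data);
  return data;
}

(async () => {
  const product = await withCache(['3p-api', 'product'], async () => ({
    data: {title: 'Snowboard'},
    cache: CacheShort(),
  }));
  console.log(product.title); // "Snowboard"
})();
```

The key property is that an error payload still reaches the caller; it just never lands in the cache, which is exactly what the null-return option cannot express.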
@frandiox I wonder the best way to separate it from `storefront`.
Yep! Good point :). As long as …
Good question. I think we can work with any JSON-serializable return value from the action. One thing that comes to mind about the return is the typing. What do you expect the return type to look like? We like the return type from …
We could support responses, but it would make something like … harder.

We can clone responses, but that means you need to await two …
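The cloning concern above can be shown directly with the standard `Response` API. This snippet just illustrates the constraint, not anything in the proposal, and it assumes a runtime with a global `Response` (Node 18+, workerd, browsers):

```javascript
// A Response body is a stream that can only be consumed once, so caching a
// Response means calling clone() before reading, then awaiting two bodies.
(async () => {
  const original = new Response(JSON.stringify({ok: true}));
  const forCache = original.clone(); // must clone BEFORE reading the body

  // Each copy consumes its own body; reading original.json() twice would throw.
  const [served, stored] = await Promise.all([original.json(), forCache.json()]);
  console.log(served.ok, stored.ok); // true true
})();
```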
Yeah 100% agree.
Do you mean the one where you must return …?
I've changed it to be returned by the existing utility.
The return type is the same as what you return from the action. Or do you have something else in mind?
@blittle @davidhousedev Some other possibilities for the …
I like this option more than wrapping the returned value in a …

Regarding return types: maybe it's the time I'm spending with Rust, but I see a lot of value in being required to handle an error from a function that may error. It's easy to forget a …

Really appreciate the work you're doing on this! 🙇🏻♂️
👋 thanks for taking the time to iterate on this proposal! A couple thoughts swirling in my head: do we really need a generic `withCache`? …
@jplhomer previously in h1, we bound …
@jplhomer Alas, we do, and it would. I've walked @blittle through this problem space before; happy to get into detail out-of-band. I'll outline it below:

We build our pages via a headless CMS. We query the data from the CMS over GraphQL. We structure our pages as a stack of "modules", where each module can be independently reordered on the page to enable rich customization options. We have two options when querying for page data:

1. Fetch the entire page in one request.
2. Fetch each module in its own request.

Option 1 is not technically feasible: if we try to fetch the entire page in one request, we hit GraphQL query complexity limits and our query is rejected. Option 2 is only feasible if we're able to put all queries behind the same cache key. If each request is cached independently, we expose a race condition where one module might be revalidated before another even though both modules have been updated in the CMS. The result would be a page where one module is marketing one promotion (Free product A) while a module further down the page is marketing a different promotion. Again, happy to discuss further!
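A sketch of how Option 2 could sit on top of the proposed utility: all module queries run inside a single cache entry, so they can never be revalidated out of sync. The in-memory `withCache`, `fetchModule`, and the module list are all made-up stand-ins:

```javascript
// Hypothetical sketch of "Option 2": every module query runs inside ONE
// withCache entry, so the whole page revalidates atomically. A Map stands in
// for the real cache, and fetchModule for a real CMS query.
const store = new Map();

async function withCache(key, action) {
  const cacheKey = JSON.stringify(key);
  if (!store.has(cacheKey)) store.set(cacheKey, await action());
  return store.get(cacheKey);
}

// Pretend CMS query for a single page module:
const fetchModule = async (name) => ({module: name, promo: 'Free product A'});

async function loadPage(handle, moduleNames) {
  // One key for the whole page: modules can never be cached out of sync.
  return withCache(['cms-page', handle, ...moduleNames], () =>
    Promise.all(moduleNames.map((name) => fetchModule(name))),
  );
}

(async () => {
  const modules = await loadPage('home', ['hero', 'banner']);
  console.log(modules.length); // 2
})();
```

Because the whole `Promise.all` result is stored under one key, a revalidation either refreshes every module or none of them, which closes the race described above.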
Are there plans to include a utility to generate a cache key from an object? Third-party APIs might want to pass user-specific headers, which need to be part of the key. Having something like this built in could be helpful.
You can pass an object or an array as the cache key already; it will be stringified internally for you:

```js
withCache(['my-3p-api', request.headers.get('x-authentication')], CacheShort(), () => {...})
```
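On the question of generating a key from an object: plain `JSON.stringify` is sensitive to property order, so a "cache key from object" utility would likely need a stable serialization. A hypothetical helper (not part of Hydrogen) might look like:

```javascript
// Hypothetical helper: serialize a value deterministically by sorting object
// keys, so logically-equal objects always map to the same cache key.
function stableKey(value) {
  if (Array.isArray(value)) return `[${value.map(stableKey).join(',')}]`;
  if (value && typeof value === 'object') {
    const entries = Object.keys(value)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${stableKey(value[k])}`);
    return `{${entries.join(',')}}`;
  }
  return JSON.stringify(value);
}

// Same logical key, different property order, identical cache key:
console.log(
  stableKey({url: 'my-3p-api', auth: 'abc'}) ===
    stableKey({auth: 'abc', url: 'my-3p-api'}),
); // true
```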
@blittle — thoughts on cache observability? Any path here to allow devs to look under the hood on cache hits/misses?
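One possible shape for the observability question, sketched against a toy in-memory cache: wrap the `withCache` factory so every lookup reports a HIT/MISS event to a caller-supplied callback. `createObservableWithCache` and `onCacheEvent` are invented names, not a real API:

```javascript
// Hypothetical sketch: wrap a withCache-style helper so every lookup reports a
// HIT/MISS event to a callback. A Map stands in for the real cache.
const store = new Map();

function createObservableWithCache(onCacheEvent) {
  return async function withCache(key, action) {
    const cacheKey = JSON.stringify(key);
    const hit = store.has(cacheKey);
    onCacheEvent({key: cacheKey, status: hit ? 'HIT' : 'MISS'});
    if (!hit) store.set(cacheKey, await action());
    return store.get(cacheKey);
  };
}

const events = [];
const withCache = createObservableWithCache((event) => events.push(event));

(async () => {
  await withCache(['sanity', 'home'], async () => ({title: 'Home'}));
  await withCache(['sanity', 'home'], async () => ({title: 'Home'}));
  console.log(events.map((e) => e.status).join(',')); // MISS,HIT
})();
```

In a real implementation the event could also carry the chosen strategy and the stale/fresh state, and default to a debug log in development.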
We have a use case where we have to make two Storefront API calls in order to handle translated variant options. These need to be in sync to ensure they marry up to one another, meaning they can't live under two separate cache keys.

```graphql
# Returns options in English so we can map the always-English URL query params
# to the translated options by options[index]
query DefaultLangProduct($handle: String!) {
  product(handle: $handle) {
    options {
      name
      values
    }
  }
}

# Returns options in the user's language, along with all other product data
query Product(
  $country: CountryCode
  $language: LanguageCode
  $handle: String!
  $selectedOptions: [SelectedOptionInput!]!
) @inContext(country: $country, language: $language) {
  product(handle: $handle) {
    id
    title
    descriptionHtml
    options {
      name
      values
    }
    # etc.
  }
}
```

```js
const [defaultLangProduct, product] = await Promise.all([
  storefront.query(DEFAULT_LANGUAGE_PRODUCT_QUERY, {
    variables: {
      handle,
      selectedOptions,
    },
    cache: CacheNone(),
  }),
  storefront.query(PRODUCT_QUERY, {
    variables: {
      handle,
      selectedOptions,
      country: locale.country,
      language: locale.language,
    },
    cache: CacheNone(),
  }),
]);
```

Is there a way to combine the cache keys of two storefront API calls so we can pass it to `withCache`? For now we're doing this, which is a bit grim because the cache entry is invalidated after a build (in case the code changed):

```js
const [defaultLangProduct, product] = await withCache(
  // This uuid is specific to this call site. It includes the build number so we
  // can safely change the code below without having to remember to update the uuid.
  [env.BUILD_NUMBER, '876979b6-c2ad-4d05-8865-7b16b5d7fa48'],
  CacheLong(),
  () =>
    Promise.all([
      storefront.query(DEFAULT_LANGUAGE_PRODUCT_QUERY, {
        variables: {
          handle,
          selectedOptions,
        },
        cache: CacheNone(),
      }),
      storefront.query(PRODUCT_QUERY, {
        variables: {
          handle,
          selectedOptions,
          country: locale.country,
          language: locale.language,
        },
        cache: CacheNone(),
      }),
    ]),
);
```
You can check how we create a cache key for a single storefront call and then concatenate the other one. Something like: …

Where: …
Thanks @frandiox. It got kinda complex, so we ended up caching them both independently as before, but handled the mismatch more gracefully (redirect to the default variant) and set custom CF cache tags so that we can purge the cache on product update webhooks. It would have been great if the Storefront API supported query batching or …
Proof of concept for a `storefront.withCache` utility similar to `useQuery`. This comes from an internal draft doc by @blittle.

Notes:

- We already have a `storefront.cache`, so I've changed its name to `storefront.withCache` for now.
- It's not a standalone `withCache` returned by `const {storefront, withCache} = createStorefrontClient(...)`.

Questions:

- Should this really be inside `storefront`? It's not going to be used for storefront queries, so perhaps it should be returned from `const {storefront, withCache} = createStorefrontClient(...)` instead, or a different utility?
- h1 had `useQuery` and `fetchSync` to make cache easier with fetch; would it make sense to have both `withCache` and `fetchWithCache` exposed?
- There's a `shouldCacheResult` parameter to know if the result should be cached or not (similar to the internal `shouldCacheResponse` in `storefront.query`, useful when there are GraphQL errors in the payload). Should we allow signaling "no cache" with a return value of `null` instead? Or, alternatively, allow passing the cache strategy as part of the result?

How to test:

Add the following code to `root.tsx` and refresh the page several times.
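The test snippet itself isn't included in this extract. Below is a hypothetical stand-in for what a `root.tsx` loader exercising `withCache` might look like, with `CacheShort` and the context wiring stubbed out so the shape runs standalone:

```javascript
// Hypothetical stand-in for the missing test snippet. The real loader would get
// withCache from the Hydrogen context; here a Map-backed stub is used instead.
const CacheShort = () => ({mode: 'public', maxAge: 1});

const store = new Map();
const context = {
  withCache: async (key, strategy, action) => {
    const cacheKey = JSON.stringify(key);
    if (!store.has(cacheKey)) store.set(cacheKey, await action());
    return store.get(cacheKey);
  },
};

// Roughly what you could drop into a root.tsx loader and reload a few times:
async function loader({context}) {
  return context.withCache(['poc-test'], CacheShort(), async () => ({
    now: Date.now(),
  }));
}

(async () => {
  const first = await loader({context});
  const second = await loader({context}); // a "refresh" hits the cached entry
  console.log(first.now === second.now); // true
})();
```

With the real utility, refreshing the page within the cache window should serve the same cached value, and devtools network activity for the wrapped request should disappear after the first load.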