multi_cached not preserving original keys when calling the underlying function #563
Needs a test/reproducer.
I don't want to disrupt PR #564, so I'm moving this bit of discussion here instead.
Well, that's an extreme example, but I suppose "legal" under Interpretation 2. :) I was expecting something more like this...
I agree that these functions should know nothing about the cache. My interpretation was that separation of concerns is achieved by:
On subsequent review, I'm realizing why this is confusing. In the decorator, before reading from the cache, the cache keys are built via `key_builder` (aiocache/aiocache/decorators.py, lines 377 to 380 in 5007dc2).
When writing to the cache, by contrast, the keys are derived from the keys returned by the decorated function (aiocache/aiocache/decorators.py, lines 355 to 360 in 79c3f70).
This means that the cache keys used to set and get generally do not match each other, unless the keys returned by the decorated function are the same as the keys listed in the argument named by `keys_from_attr`.

Conclusion: so, that seems to imply that Interpretation 1 needs to be correct. It's done a bit oddly at the moment though, because the cache is read with the transformed keys but written with keys derived from whatever the function returns. Should there be a check that the decorated function actually returns keys that match those in the `keys_from_attr` argument?
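To make the mismatch concrete, here's a toy illustration (plain Python, not aiocache's actual internals):

```python
# Toy model of the two paths described above (not aiocache internals).
def key_builder(key):
    return f"ns:{key}"

requested = ["a", "b"]

# Get path: keys from the decorated argument, transformed by key_builder.
get_keys = [key_builder(k) for k in requested]   # ["ns:a", "ns:b"]

# Set path: keys derived from whatever the function returned.
result = {"a": 1, "b": 2}                        # function echoes its inputs
set_keys = list(result)                          # ["a", "b"]

assert set(get_keys) != set(set_keys)            # get and set never line up
```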
My impression of this @multi_cached decorator is that it is fairly specific and expects a function which returns the keys it was passed. I.e., the function needs to be specifically designed as something which receives a list of inputs and returns values indexed by those inputs. So, supporting arbitrary functions like these seems out of scope (and the second wouldn't work when …). A sketch of what I mean follows.
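A minimal sketch of the kind of function the decorator expects (names here are hypothetical; aiocache's default in-memory cache is assumed):

```python
import asyncio
from aiocache import multi_cached

@multi_cached("user_ids")  # cache keys are taken from the user_ids argument
async def fetch_users(user_ids):
    # Fetching logic only: return values indexed by exactly the keys passed in.
    return {uid: {"id": uid} for uid in user_ids}  # stand-in for real I/O

async def build_report(user_ids):
    # Application-specific logic stays outside the cached function.
    users = await fetch_users(user_ids=user_ids)
    return sorted(u["id"] for u in users.values())

print(asyncio.run(build_report(["a", "b"])))
```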
Or something to that extent (i.e. splitting your fetching logic from your application-specific logic).
But, now your cache code needs to understand how the function transforms its input to its output, which seems to me like you're mixing the cache and function code again. I don't think they should be coupled that tightly. I can see the logic of only transforming it on the get and expecting the function to return keys that match this, but the code actually transforms it both on the get and the set, so this behaviour didn't work before either.
The only use of …
It can't use the keys returned by the function or it wouldn't be a cache; it needs to look them up without calling the function.
Likewise, we need to save whatever was returned from the function, so what would either of these have to do with that? I'm not sure I'm following the confusion on that part...
Yes, I think it should be treated as a simple API that is not uncommon. If a function does not implement this simple API design, then it is not appropriate to put the @multi_cached decorator on it.
I'm not sure what else you could propose.
There's some discussion on that here: #490 (comment) Currently, I'd say that returning anything other than the requested keys is undefined behaviour, which we should narrow down and decide how it should work. Current ideas that could support returning contradictory values include eager loading (a function returns extra keys to cache, expecting a call for those keys in the future) and a no-cache (excluding keys that shouldn't be cached, maybe because the request timed out or something).
I'm not sure what you're suggesting. If the result doesn't include the correct keys, how am I supposed to know which output key relates to which input key?
Closing because the issue is completed, but feel free to continue the discussion.
@Dreamsorcerer Thank you for your thoughtful consideration of these points.
I agree with you. This paradigm makes the most sense, all considered.
I'll add a few responses for posterity, but I think none of it changes the outcome of this discussion. I.e., that the @multi_cached decorator expects a function which returns the keys it was passed.
I had been thinking about the inverse of that situation, where the decorator is being applied to an existing function that returns complex keys. But either way, I think you're right that a simple wrapper could/should be applied to keep the caching logic separate from the function logic.
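E.g. something like this sketch (hypothetical names, assuming the existing function returns tuple keys):

```python
from aiocache import multi_cached

async def fetch_profiles(user_ids):
    # Existing function with complex keys: not directly multi_cached-friendly.
    return {("profile", uid): {"id": uid} for uid in user_ids}

@multi_cached("user_ids")
async def fetch_profiles_cached(user_ids):
    # Thin wrapper: re-index the result by the original input keys,
    # keeping the caching concern out of fetch_profiles itself.
    results = await fetch_profiles(user_ids)
    return {uid: results[("profile", uid)] for uid in user_ids}
```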
I originally thought this was the problem that `key_builder` was meant to solve. But of course it's even better to keep these responsibilities completely separate, as you're suggesting.
Right. I missed this distinction until now.
Well, ok, yeah...I suppose I was stating the obvious there. ;)
Sort of. For the other decorators, the cache serializes the exact output of the function. With multi_cached, each item of the output gets serialized individually. For the complex-key functions I was considering in Interpretation 2, …
I had been thinking of a straight 1-to-1 replacement of the keys, something like:
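The original snippet didn't survive here, but it was roughly a positional re-keying along these lines (a reconstruction; names hypothetical):

```python
attr_keys = ["a", "b"]                    # keys from the keys_from_attr argument
result = {("x", "a"): 1, ("x", "b"): 2}   # complex keys returned by the function

# Straight 1-to-1 replacement: pair the attribute keys against the
# response values by position.
rekeyed = dict(zip(attr_keys, result.values()))
print(rekeyed)  # {'a': 1, 'b': 2} — correct only if the orderings happen to match
```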
...but of course that's no good because there's no guarantee that the response keys have the same order as the attribute keys. Maybe set logic would be better.
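One way the set-logic idea might look (hypothetical sketch, assuming the function returns the requested keys themselves, possibly reordered or partial):

```python
attr_keys = ["a", "b"]
result = {"b": 2, "a": 1}  # same keys, different order — handled fine now

# Keep only entries whose keys were actually requested, order-independent.
rekeyed = {k: result[k] for k in set(attr_keys) & set(result)}
```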
It's better, but now it's getting into the territory of the discussion in #490 that you mentioned. For instance, it doesn't consider the case where only some of the requested keys are returned by the function. At the moment, I don't have substantial suggestions for handling the empty / partial response cases.
@Dreamsorcerer The fix for this issue is not merged into the 0.12 branch, and hence the fix is not available in the latest pip package.
I don't think it's backwards compatible.
@jkgenser and I were testing `multi_cached` in our project, and found that transformed keys are being passed into the underlying function call after `key_builder` is called. Created #564 to track.