revisit hash_always_miss and hash_ignore_busy vs. stale objects #3670
Comments
FTR: This has been discussed during bugwash with no clear decision. Some notes:
Regarding this ticket, one solution which was discussed is to […]
I think a crucial insight is that, by definition, we only need to prune when we finish fetching a new objcore. That means it is something the be-worker can do, after getting out of the critical path. Likewise, it is theoretically enough to prune a single object. However, that leaves no margin for VCL or parameter changes, so two should probably be attempted. The problem is finding an objcore to prune, because they may be stuck in the middle of the objhdr list, surrounded by a LOT of other objcores with incompatible […]. In many cases (but not all) […]. Anyway, it sounds like it might be a good idea to start simulating the dynamic behaviour of this area...
This ticket has been discussed between @bsdphk and myself. Brief summary: We did not come up with a particularly nice solution; any ideas to clean up the object list in retrospect seem overly complicated. We discussed the following idea as a likely "best" (reasonably simple) compromise: For […]. This approach is not a complete solution, as it would only remove at most one "surplus" object per insert. Noted after the discussion: We might want to consider moving stale ocs found during always_miss / ignore_busy to the end of the object list, such that at least the chances increase for other old objects to be cleaned up. Side notes: […]
I'm not sure I would do an extra lookup at the end; I would probably force hash_always_miss through the initial lookup, so it comes in with a stale_oc like everybody else.
... is not what I meant. I meant to move a found stale_oc to the end of the oc list such that another stale_oc has a chance to be picked up by the next lookup.
This has been discussed during bugwash - no clear result; please see the edit versions if interested.
My suggestion from the bugwash discussion, for a lookup with hash_always_miss raised: […]
I have a hard time seeing how there could be more than a single compatible oc if we implement this, but other than that: Yes.
If you start two fetch tasks for the same hash and the same variant, you can have two fresh compatible ocs once you cleared […]
I think the pseudo-IMS will go a long way towards reducing the problem; I'd prefer not to add expensive machinery to oh until we know it is (still) needed.
bugwash: It seems we have got close to the second last edit of #3670 (comment):
Should be revisited after #4073.
For ordinary requests, we (try to) replace stale cache objects when we fetch new ones:

- In `HSH_Lookup()`, we return the "best object" via the `ocp` argument.
- In `cnt_lookup()`, we either pass this object directly to `VBF_Fetch()` as the `oldoc` argument for a bgfetch, or indirectly by saving it in `req->stale_oc` for the `miss` case.
- In `vbf_stp_fetchend()`, when we have a new cache object, we kill a single stale object which was saved before, if any.

Now this mechanism clearly does not work with `hash_always_miss`, as previously documented in #2945: If this mode is present, we insert a new object, never cleaning up old stale objects (because we do not track them).

A different but seemingly similar issue exists with `hash_ignore_busy`: For concurrent requests, we insert multiple objects, but only remove one.

This ticket is to question if this behavior is in fact what we want. I see the following issues: […]
On the other end, I could imagine cases where these facilities are used to deliberately create fallback objects in cache, and I wonder whether use cases relying on this property of inserting additional objects exist.
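As an illustration of that kind of deliberate use, here is a minimal VCL sketch of such a fallback pattern; the `X-Refresh` header, the backend address, and the restart logic are assumptions, not taken from this ticket, and whether the older object is actually served after the restart depends on grace/keep settings and on the error response not being cached:

```vcl
vcl 4.1;

backend be { .host = "127.0.0.1"; .port = "8080"; }

sub vcl_recv {
    # Hypothetical refresh pass: force a new fetch, deliberately leaving
    # the previous object in cache as a fallback.
    if (req.http.X-Refresh && req.restarts == 0) {
        set req.hash_always_miss = true;
    }
}

sub vcl_deliver {
    # If the forced fetch came back as a server error, restart; the second
    # pass does a normal lookup and may still find the older object.
    if (req.http.X-Refresh && req.restarts == 0 && resp.status >= 500) {
        return (restart);
    }
}
```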
Demo
vcl:
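A minimal sketch of a VCL that exhibits the accumulation, assuming a hypothetical `X-Force-Miss` header, backend address, and TTL/keep values (not the original demo):

```vcl
vcl 4.1;

backend be { .host = "127.0.0.1"; .port = "8080"; }

sub vcl_recv {
    # Every request carrying this header is forced to miss, inserting a
    # new object without evicting the stale one.
    if (req.http.X-Force-Miss) {
        set req.hash_always_miss = true;
    }
}

sub vcl_backend_response {
    # Short TTL, long keep: expired objects stay on the object list, so
    # the surplus objects remain visible.
    set beresp.ttl = 1s;
    set beresp.grace = 0s;
    set beresp.keep = 1h;
}
```

Repeating the same request with the header set and watching `varnishstat -f MAIN.n_object` should then show the object count growing by one per request, since nothing removes the previously inserted objects.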