release-24.1: colexec: fix memory leak in simpleProjectOp #139097
Open
yuzefovich wants to merge 1 commit into cockroachdb:release-24.1 from yuzefovich:backport24.1-138804
+75 −40
Conversation
Thanks for opening a backport. Please check the backport criteria before merging:
If your backport adds new functionality, please ensure that the following additional criteria are satisfied:
Also, please add a brief release justification to the body of your PR to justify this backport.
blathers-crl bot added the `backport` label (Label PR's that are backports to older release branches) on Jan 15, 2025
Backport 1/1 commits from #138804.
/cc @cockroachdb/release
This commit fixes a bounded memory leak in `simpleProjectOp` that has been present since version 20.2. The leak was introduced via the combination of:

- 57eb4f8, in which we began tracking all batches seen by the operator so that we could decide whether we need to allocate a fresh projecting batch or not
- 895125b, in which we started using dynamic sizing for batches (where we'd start with size 1 and grow exponentially up to 1024, while previously we would always use 1024).

Both changes combined made it so that the `simpleProjectOp` would keep _all_ batches of different sizes alive until the query shutdown. The problem was later exacerbated when we introduced dynamic batch sizing all over the place (for example, in the spilling queue).
Let's discuss the reasoning for why we needed something like the tracking introduced in the first change mentioned above. The simple project op works by putting a small wrapper (a "lens") `projectingBatch` over the batch coming from the input. That wrapper can be modified later by other operators (for example, a new vector might be appended), which would also modify the original batch coming from the input, so we need to allow for the wrapper to be mutated accordingly. At the same time, the input operator might decide to produce a fresh batch (for example, because of the dynamically growing size) which wouldn't have the same "upstream" mutations applied to it, so we need to allocate a fresh projecting batch and let the upstream do its modifications. In the first change we solved this by keeping a map from the input batch to the projecting batch.
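For illustration, here is a minimal Go sketch of the "lens" idea. The types `Vec`, `Batch`, and `simpleBatch` are simplified stand-ins invented for this example; the real `projectingBatch` lives in the colexec package and wraps a `coldata.Batch`:

```go
package main

import "fmt"

// Vec, Batch, and simpleBatch are simplified stand-ins for the real
// coldata.Vec and coldata.Batch types in pkg/col/coldata.
type Vec []int64

type Batch interface {
	ColVec(i int) Vec
	Width() int
}

type simpleBatch struct{ vecs []Vec }

func (b *simpleBatch) ColVec(i int) Vec { return b.vecs[i] }
func (b *simpleBatch) Width() int       { return len(b.vecs) }

// projectingBatch is the "lens": it exposes only the projected columns of
// the wrapped input batch without copying any data. Because the underlying
// vectors are shared, mutations made through the wrapper are visible on the
// original batch and vice versa.
type projectingBatch struct {
	Batch            // the wrapped input batch
	projection []int // output column i maps to input column projection[i]
}

func (b *projectingBatch) ColVec(i int) Vec { return b.Batch.ColVec(b.projection[i]) }
func (b *projectingBatch) Width() int       { return len(b.projection) }

func main() {
	input := &simpleBatch{vecs: []Vec{{1, 2}, {10, 20}, {100, 200}}}
	proj := &projectingBatch{Batch: input, projection: []int{2, 0}}
	fmt.Println(proj.ColVec(0)) // [100 200]: input column 2, shared, not copied
}
```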
This commit addresses the same issue by only checking whether the input batch is the same one that the `simpleProjectOp` saw on the previous call. If they are the same, then we can reuse the last projecting batch; otherwise, we need to allocate a fresh one and memoize it. In other words, we effectively reduce the map to at most one entry.
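Continuing the simplified types from the sketch above, the single-entry memoization might look roughly like this (the field and method names are illustrative, not the actual `simpleProjectOp` fields):

```go
// simpleProjectOp replaces the per-batch map with a single memoized pairing.
type simpleProjectOp struct {
	input      interface{ Next() Batch }
	projection []int

	// prevBatch and prevProjBatch form the "map with at most one entry".
	prevBatch     Batch
	prevProjBatch *projectingBatch
}

func (p *simpleProjectOp) Next() Batch {
	batch := p.input.Next()
	// Pointer identity: the input returned the same batch object as on the
	// previous call, so the memoized wrapper (including any upstream
	// mutations applied to it) can be reused.
	if batch == p.prevBatch {
		return p.prevProjBatch
	}
	// A fresh input batch (e.g. because the dynamic batch size grew):
	// allocate a new wrapper and memoize it. The previous wrapper becomes
	// garbage instead of staying alive in a map until query shutdown.
	p.prevBatch = batch
	p.prevProjBatch = &projectingBatch{Batch: batch, projection: p.projection}
	return p.prevProjBatch
}
```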
This means that with dynamic batch size growth we'd create a few `projectingBatch` wrappers, but that is acceptable given that we expect the dynamic size heuristics to quickly settle on a size.
This commit adjusts the contract of `Operator.Next` to explicitly mention that implementations should reuse the same batch whenever possible. This is already the case pretty much everywhere, except when the dynamic batch size grows or shrinks.
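In terms of the simplified types above, the adjusted contract reads roughly as follows (the real interface is `colexecop.Operator`, whose `Next` returns a `coldata.Batch`; the comment wording here is a paraphrase, not the actual doc comment):

```go
// Operator paraphrases the adjusted contract using the simplified Batch
// stand-in from the sketches above.
type Operator interface {
	// Next returns the next batch. Implementations should return the same
	// batch object across calls whenever possible, producing a fresh one
	// only when unavoidable (e.g. when the dynamic batch size grows or
	// shrinks), so that downstream operators like simpleProjectOp can
	// cheaply memoize per-batch state.
	Next() Batch
}
```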
Before we introduced dynamic batch sizing, different batches could only be produced by the unordered synchronizers, so that's another place this commit adjusts. If we did nothing there, the simplistic mapping with at most one entry could result in "thrashing": in the extreme case where the inputs to the synchronizer produce batches in round-robin fashion, we'd end up creating a new `projectingBatch` every time, which would be quite wasteful. In this commit we modify both the parallel and the serial unordered synchronizers to always emit the same output batch, which is populated by manually inserting vectors from the input batch.
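A rough sketch of that synchronizer behavior, again with the simplified stand-in types (only the serial variant is shown; batch length, selection vectors, draining, and error handling are omitted, and the names are illustrative):

```go
// mutableBatch is the single output batch the synchronizer owns; its
// vectors are repointed at the current input's vectors on every call, so
// downstream always observes one batch object.
type mutableBatch struct{ vecs []Vec }

func (b *mutableBatch) ColVec(i int) Vec { return b.vecs[i] }
func (b *mutableBatch) Width() int       { return len(b.vecs) }

type serialUnorderedSynchronizer struct {
	inputs []Operator
	next   int           // round-robin position across inputs
	output *mutableBatch // the only batch this operator ever emits
}

func (s *serialUnorderedSynchronizer) Next() Batch {
	in := s.inputs[s.next].Next()
	s.next = (s.next + 1) % len(s.inputs)
	if s.output == nil {
		s.output = &mutableBatch{vecs: make([]Vec, in.Width())}
	}
	// Insert vector references (not data) from the input batch into the
	// reused output batch.
	for i := range s.output.vecs {
		s.output.vecs[i] = in.ColVec(i)
	}
	return s.output
}
```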
I did some manual testing of the impact of this change. I used a table and a query (with some window functions) from a customer cluster that has been seeing some OOMs. I had a one-node cluster running locally with `--max-go-memory=256MiB` (so that the memory needed by the query forced GC all the time, since it exceeded the soft memory limit) and `distsql_workmem=128MiB` (since we cannot spill to disk some of the state in the row-by-row window functions). Before this patch I observed max RAM usage of 1.42GB, and after this patch 0.56GB (the latter is pretty close to what we report in EXPLAIN ANALYZE).
Fixes: #138803.
Release note (bug fix): A bounded memory leak that could previously occur when evaluating some memory-intensive queries via the vectorized engine in CockroachDB has now been fixed. The leak has been present since version 20.2.
Release justification: bug fix.