[FEATURE] Improving Lucene Engine Query Performance by reducing number of times a single Lucene k-NN query gets executed #2115
Comments
I'm in favor of solution (1) over solution (2). I cannot think of a major advantage of doing rewrite vs createWeight, unless there is some kind of benefit around caching, but I cannot think of any.
I am also in favor of 1, but I was wondering whether fixing it from the Core side would help us or not; hence I added solution 2. Since the change in core is not simple and might impact latency if not implemented correctly, I think we should go with solution 1. But I would like to hear more from other maintainers.
How about caching the faiss search result for a short time? Do we know whether the query comes from the same request, using something like a query UUID? The blast radius could be smaller than caching the SearchContext.
@heemin32 this is not for the faiss engine, this is for the Lucene engine. Also, could you elaborate on how caching would work in the case of Lucene?
You are saying that for faiss, the query will happen only one time? Hmm. But for inner hits, because the query is filtered to its single parent, I guess exact search will get hit and caching might not help here.
@heemin32 I didn't mention faiss anywhere. This double query execution for Lucene happens because the Lucene query is executed during rewrite, and as the links added in the description show, rewrite runs for both the query and fetch phases (when there is more than 1 shard). Hence the extra latency. This issue doesn't talk about extra latency during inner hits.
I am also in favor of 1! It looks good to me. @navneet1v
@heemin32 @navneet1v I think it would not happen with a native k-NN query; the rewrite comes from
@jmazanec15 I think the major advantage is when there are no hits in, so solution 1 is better for me. @navneet1v
Yes, that's correct. The issue we are discussing in this GH issue doesn't happen for native engines.
@luyuncheng thanks for putting up the thoughts. @junqiu-lei will be working on the fix. I think we should be able to fix this before the 2.18 release.
With option 1, we could use inheritance instead of delegation, allowing us to inherit all other methods unchanged.
@heemin32 I would prefer delegation/composition here over inheritance, so that we can avoid creating new queries in OpenSearch whenever Lucene adds a new query.
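To make the delegation-vs-inheritance point concrete, here is a minimal sketch using stub types (not Lucene's actual API; the class names are illustrative only). With delegation, one wrapper class can hold any query implementation, whereas inheritance would require one subclass per concrete Lucene query type:

```java
// Stub query interface standing in for Lucene's Query class.
interface Query {
    String describe();
}

// Two example "Lucene" k-NN query types (illustrative stand-ins).
class KnnFloatVectorQuery implements Query {
    public String describe() { return "knn-float"; }
}

class KnnByteVectorQuery implements Query {
    public String describe() { return "knn-byte"; }
}

// A single delegating wrapper works for every query type above.
// With inheritance we would instead need one subclass per Lucene
// query class, and a new one each time Lucene adds a query type.
class LuceneEngineKnnQuery implements Query {
    private final Query delegate;

    LuceneEngineKnnQuery(Query delegate) {
        this.delegate = delegate;
    }

    public String describe() {
        return "wrapped(" + delegate.describe() + ")";
    }
}
```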
@kotwanikunal One of the approaches I was thinking about is to unify NativeEngineKnnVectorQuery and the LuceneEngineKNNQuery mentioned in the Solution. I see a few benefits if we are able to pull it off.
One of the approaches is to change KNNQuery to a generic Query. This will allow us to hold both KNNQuery and Lucene queries. There are some challenges, though.
It's worth looking into whether there is a solution around these challenges.
@kotwanikunal I saw the PR #2305 and I am excited to see benchmarking results with that change. On the note of unification, we can do this as a two-step process. Unification is always good and helps reduce a lot of code branches, but it should not spill over and delay this fix for the Lucene engine. @shatejas on this
I am not sure if we would be able to add rescoring support just like this in the Lucene engine. The reason is that Lucene currently uses the FlatVectors as the vectors for the HNSW graph. So when we try to access the flat vectors via the Codec, it will give the same quantized vectors and not full-precision vectors. I see that in the BQ support for Lucene they are trying to access floatVectorValues via the codec (apache/lucene#13651). But since it is still in PR, we cannot say when it will be available. Please correct me if there is something I am missing.
That sounds like a good plan. I prioritized getting through the benchmarks and new flame graphs. Added them here: #2305 (comment) |
The change was merged into 2.x on 12/11/2024: 8daedac. On the benchmarking dashboard, we can see that the latency for 2.19 has dropped in line with the merge. Dashboards: https://opensearch.org/benchmarks/ -> Vectorsearch-lucene-Cohere-1m-768D (Start date: Dec 3, 2024 @ 19:19:19.215)
Description
Currently, Lucene engine k-NN queries get executed during the rewrite phase and not in the Weight class. On a recent deep-dive, we observed that the rewrite function of a query can be called multiple times in the overall search flow.
Please check this code trace showing rewrite running before the start of the fetch phase.
The same was observed in the flame graphs: when we have more than 1 shard, the rewrite on the query is called again during the fetch phase. This leads to the Lucene engine k-NN query running more than once and adds latency.
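The double execution described above can be modeled with a minimal, self-contained sketch (stub types, not Lucene's actual API): if the expensive k-NN search runs eagerly inside rewrite, and rewrite is invoked once in the query phase and again in the fetch phase, the search executes twice.

```java
// Stub model of the problem: the expensive k-NN search runs inside
// rewrite(), and rewrite() is invoked in both the query phase and,
// when there is more than one shard, again in the fetch phase.
interface Query {
    Query rewrite();
}

class ExpensiveKnnQuery implements Query {
    int searchCount = 0; // how many times the simulated k-NN search ran

    @Override
    public Query rewrite() {
        searchCount++;   // stands in for the actual HNSW graph search
        return this;     // Lucene really returns a query over the top-k results
    }
}

public class DoubleRewriteDemo {
    public static void main(String[] args) {
        ExpensiveKnnQuery q = new ExpensiveKnnQuery();
        q.rewrite(); // query phase
        q.rewrite(); // fetch phase (more than 1 shard triggers a second rewrite)
        System.out.println("k-NN search executed " + q.searchCount + " times");
    }
}
```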
Flame Graph
Number of shards: 2
KNN engine: Lucene
Dataset: 1M 128D sift.
Tool used: OSB
Docker image: opensearchstaging/opensearch:2.17.0.10284
Heap: 16GB
RAM: 64GB
Cores: 16
Search JFR: search.jfr.zip
Why we need Query and its rewrite in fetch phase
In a quick scan of the OpenSearch core code for the fetch phase, I found use cases that might require running the query rewrite again and then using the result during the fetch phase. Below are the references where the query that was rewritten in the DefaultSearchContext during FetchPhase is added to FetchPhaseSearchContext and used.
Explain Sub phase: This is used to provide the explanation of why a particular document is part of the results.
PercolateQuery Highlighting: Not sure what this query type is, but it does use the visitor pattern of the Query (Lucene interface) to do something.
Inner Hits: This is used to get the child hits when there is a parent-child relationship between fields. Not sure about this use case, as it is doing some more funky logic on the query. This needs a deeper dive.
Possible Solution
Solution 1
One solution we can explore is wrapping all the Lucene queries in another query clause, let's say LuceneEngineKNNQuery, and having a class member in this class that holds the actual Lucene query. Now when createWeight is called, we can first rewrite the query and then call the scorer on top of it. This will ensure that the Lucene k-NN query is executed only once.
Sample Code:
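A hedged sketch of what Solution 1 could look like, using stub types that only loosely mirror Lucene's Query/Weight API (the real createWeight takes an IndexSearcher, ScoreMode, and boost). The wrapper's own rewrite is a no-op, so the repeated rewrites in the query and fetch phases are cheap; the expensive inner rewrite happens exactly once, inside createWeight:

```java
// Stub stand-ins for Lucene's Weight and Query abstractions.
interface Weight { }

interface Query {
    default Query rewrite() { return this; }
    Weight createWeight();
}

// Stand-in for a Lucene k-NN query whose rewrite() performs the search.
class KnnVectorQuery implements Query {
    int searchCount = 0; // counts simulated graph searches

    @Override
    public Query rewrite() {
        searchCount++;               // stands in for the actual graph search
        return new DocAndScoreStub(); // Lucene returns a doc-and-score query here
    }

    @Override
    public Weight createWeight() {
        throw new UnsupportedOperationException("must be rewritten first");
    }
}

// Stand-in for the cheap query produced by the rewrite.
class DocAndScoreStub implements Query {
    @Override
    public Weight createWeight() { return new Weight() { }; }
}

// The wrapper from Solution 1: rewrite() is inherited as a no-op, so
// calling it repeatedly costs nothing; the k-NN search is deferred to
// createWeight, which the search flow invokes only once per shard.
class LuceneEngineKNNQuery implements Query {
    private final Query luceneQuery;

    LuceneEngineKNNQuery(Query luceneQuery) {
        this.luceneQuery = luceneQuery;
    }

    @Override
    public Weight createWeight() {
        Query rewritten = luceneQuery.rewrite(); // executed exactly once, here
        return rewritten.createWeight();
    }
}
```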
Solution 2
Another solution we can implement is caching the SearchContext at the shard level, so that when the fetch phase is executed we use the same SearchContext and don't need to rewrite the queries.
Another approach is to defer the rewrite and make it lazy, so that only the Fetch Pre-Processors that need the rewrite perform it; once it is done by one fetch processor, none of the others need to run the rewrite again, as the query has already changed.
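The lazy-rewrite variant of Solution 2 amounts to memoizing the rewritten query. A minimal sketch with stub types and a hypothetical context class (names are illustrative, not OpenSearch APIs): the first fetch sub-phase that needs the rewritten query pays the rewrite cost, and later sub-phases reuse the cached result.

```java
// Stub query interface standing in for Lucene's Query.
interface Query {
    Query rewrite();
}

// Hypothetical holder that defers and memoizes the rewrite, so the
// rewrite runs at most once no matter how many fetch sub-phases ask.
class LazyRewriteContext {
    private final Query original;
    private Query rewritten;  // cached after the first call
    int rewriteCount = 0;     // visible for the demo only

    LazyRewriteContext(Query original) {
        this.original = original;
    }

    Query rewrittenQuery() {
        if (rewritten == null) {        // only the first caller rewrites
            rewriteCount++;
            rewritten = original.rewrite();
        }
        return rewritten;               // everyone else reuses the result
    }
}
```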
Pros and Cons
Both solutions have their own pros and cons, like: