configure stack size #3
base: master
Conversation
NOTE: this is a breaking change (it adds an exported field to an exported struct), so once merged we will need to increment the major version.
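To see why adding an exported field counts as a breaking change, here is a minimal sketch. The struct and field names below are illustrative stand-ins, not the actual vellum API:

```go
package main

import "fmt"

// Hypothetical "before" shape of an exported options struct.
type BuilderOptsV1 struct {
	Encoder           int
	RegistryTableSize int
}

// Hypothetical "after" shape: one new exported field is added.
type BuilderOptsV2 struct {
	Encoder           int
	RegistryTableSize int
	StackSize         int // the newly added exported field
}

func main() {
	// Keyed literals keep compiling across such a change...
	opts := BuilderOptsV2{Encoder: 1, RegistryTableSize: 10000, StackSize: 64}

	// ...but any caller that used an unkeyed (positional) literal,
	// e.g. BuilderOptsV1{1, 10000}, fails to compile once the field
	// count changes, which is why this warrants a semver major bump.
	fmt.Println(opts.StackSize)
}
```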
This is a migration of: couchbase/vellum#28
Also @abhinavdangeti can you check CLA status, I no longer have permission to see that list.
I reverted the change to the benchmark. First, when we want to compare before/after results, it's generally unfair to change the benchmark in the same change. Second, I don't think it was an appropriate change anyway. To benchmark building a vellum, you should actually build it each time; reusing the builder on subsequent iterations means we're not really doing the same work on each loop through the benchmark. That said, one could still argue that reuse is what real applications would do, so I propose we have 2 benchmarks: one that reuses the builder and one that does not. The first is more useful if you're actually trying to measure improvements to the building process. The second is more useful for measuring how much reusing the builder helps. I will add this new benchmark to master and we can merge it in here, and see the results of both benchmarks before and after.
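The two benchmark shapes described above can be sketched as follows. This uses a minimal stand-in builder so the example is self-contained; the real vellum builder API differs:

```go
package main

import (
	"fmt"
	"testing"
)

// Minimal stand-in for a vellum-like builder, just to make the two
// benchmark shapes concrete. The real API is different.
type builder struct{ keys [][]byte }

func newBuilder() *builder         { return &builder{} }
func (b *builder) Insert(k []byte) { b.keys = append(b.keys, k) }
func (b *builder) Reset()          { b.keys = b.keys[:0] }

var keys = [][]byte{[]byte("cat"), []byte("dog"), []byte("fish")}

// Shape 1: construct a fresh builder every iteration, so each loop
// does the full work of building. This is the one to use when
// measuring improvements to the building process itself.
func benchmarkFresh(b *testing.B) {
	for i := 0; i < b.N; i++ {
		bld := newBuilder()
		for _, k := range keys {
			bld.Insert(k)
		}
	}
}

// Shape 2: reuse one builder across iterations, resetting between
// them. This measures how much reuse helps, which is closer to what
// a long-running application would do.
func benchmarkReuse(b *testing.B) {
	bld := newBuilder()
	for i := 0; i < b.N; i++ {
		bld.Reset()
		for _, k := range keys {
			bld.Insert(k)
		}
	}
}

func main() {
	// testing.Benchmark lets us run benchmark funcs outside `go test`.
	fresh := testing.Benchmark(benchmarkFresh)
	reuse := testing.Benchmark(benchmarkReuse)
	fmt.Println("fresh ran:", fresh.N > 0, "reuse ran:", reuse.N > 0)
}
```

The key difference is simply where the builder is constructed: inside the loop (so its cost is included in every iteration) versus outside (so only the reset-and-insert cost is measured).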
OK, so here are the updated benchmark results (old is current master, new is this branch)
Thanks for getting this benchmarked @mschoch.
Since we've already reviewed the code, and these numbers look mostly good, I'm OK to merge this.
OK, so since this will be a new major version, I wanted to go a little slower. Today I ran some more full-stack index-building tests to see if this helps in the real world. The test uses search-benchmark-game, which indexes just over 5 million documents. All tests ran with bleve v2.0.0. First, I did 2 runs with stock bleve v2.0.0:
Then I used a go.mod replace directive to point to this new vellum and ran 2 more runs:
So, some improvement, but nothing huge. Then, just to sanity-check that there is some signal here and it isn't all noise, I switched back to stock bleve v2.0.2 and ran it one more time:
So, my unscientific conclusion is that there is some real-world benefit, but at the moment it isn't very large.
Just to be a bit more scientific, converting to seconds for easier comparison: old:
new:
Taking the fastest of the old runs (1014) and the slowest of the new runs (1002), we get a diff of 12 seconds, which is right around 1% of the ballpark times.
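The worst-case comparison above works out as follows (a quick sketch of the arithmetic, using the two numbers quoted):

```go
package main

import "fmt"

func main() {
	oldFastest := 1014.0 // fastest of the old (stock) runs, in seconds
	newSlowest := 1002.0 // slowest of the new (this branch) runs, in seconds

	// Pairing the best old time with the worst new time gives a
	// conservative lower bound on the improvement.
	diff := oldFastest - newSlowest
	pct := diff / oldFastest * 100

	fmt.Printf("diff: %.0fs, %.1f%% of the old time\n", diff, pct)
	// prints: diff: 12s, 1.2% of the old time
}
```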
Hey @richardartoul, I don't see that you've signed the CLA to make contributions to the project. Would you follow the instructions listed here: https://github.com/blevesearch/vellum/blob/master/CONTRIBUTING.md Once you've done that, we can go ahead with merging this into the code base.