Increase expected size of output sample buffer, to correctly handle input termination. #22
**Problem:** when using the full API (the `.process()` method in this wrapper), the number of output samples is incorrect for a chunked input at the end of a stream. This is because the calculation of `new_size` assumes that the number of output samples generated will be at most the number of input samples times the resampling ratio. While that's generally true for chunks processed mid-stream (as well as at the start of a stream), when the input is terminated the output can exceed this expected length, because the resampler's buffer is being flushed of any remaining data.

**Solution:** an ideal solution would be to determine the actual number of samples that will be flushed on an input-terminating call. This is complicated to do, for various reasons related to issues such as libsndfile/libsamplerate#6.
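To make the failure mode concrete, here is a minimal toy model, not the real resampler: `FakeResampler`, its `HOLD` constant, and its nearest-neighbour interpolation are all invented for illustration. It holds back a few input frames between calls, as a real filter's internal state would, and flushes them when the input is terminated, so the final call can return more frames than `len(chunk) * ratio`.

```python
class FakeResampler:
    """Toy stand-in for a streaming resampler (hypothetical, for
    illustration only): real resamplers keep internal filter state
    that is flushed when the input ends."""
    HOLD = 32  # frames held back between mid-stream calls

    def __init__(self):
        self._held = []

    def process(self, data, ratio, end_of_input=False):
        data = self._held + list(data)
        if end_of_input:
            self._held = []            # flush everything on the final call
        else:
            self._held = data[-self.HOLD:]
            data = data[:-self.HOLD]
        if not data:
            return []
        out_len = int(len(data) * ratio)
        # nearest-neighbour resampling: only the output *length* matters here
        return [data[min(int(i / ratio), len(data) - 1)] for i in range(out_len)]

r = FakeResampler()
mid = r.process([0.0] * 256, 2.0)                      # mid-stream chunk
last = r.process([0.0] * 256, 2.0, end_of_input=True)  # terminating chunk
assert len(mid) <= 256 * 2.0   # mid-stream output fits the assumed bound
assert len(last) > 256 * 2.0   # the flush exceeds len(chunk) * ratio
```

The terminating call returns the held-back frames on top of the new chunk's output, which is exactly the surplus the old `new_size` calculation did not account for.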
As a hacky workaround, adding a "fudge factor" of 10,000 samples ought to be sufficient. In case that's not enough, a `RuntimeError` will be raised.

**Testing:** here's a modified version of the example code in the README ... perhaps this chunked usage should be included in the documentation?
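The workaround's sizing logic can be sketched as follows. This is a hedged sketch, not the wrapper's actual code: the names `output_buffer_size`, `store_output`, and `FUDGE_SAMPLES` are invented here (the wrapper's own variable is `new_size`), but the shape matches the description above: allocate the mid-stream worst case plus a fixed fudge, and raise `RuntimeError` if the resampler ever produces more.

```python
import math

FUDGE_SAMPLES = 10_000  # headroom for the end-of-stream flush

def output_buffer_size(num_input_frames, ratio):
    # Mid-stream worst case (input frames times ratio) plus the fudge,
    # so the buffer also survives the flush on the terminating call.
    return math.ceil(num_input_frames * ratio) + FUDGE_SAMPLES

def store_output(frames_generated, buffer_size):
    # Mirrors the guard described above: fail loudly instead of
    # silently truncating when the fudge factor is not enough.
    if frames_generated > buffer_size:
        raise RuntimeError(
            "resampler produced more frames than the output buffer holds; "
            "the fudge factor was insufficient"
        )

size = output_buffer_size(256, 2.0)
assert size == 512 + 10_000
store_output(576, size)  # a flushed final chunk of 576 frames still fits
```

A fixed fudge trades memory for correctness: it avoids the hard problem of predicting the flush length exactly, at the cost of failing (loudly) if a flush ever exceeds 10,000 frames.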