things-cli fails with "sqlite3.OperationalError: unable to open database file" when listing a large number of todos (~2500) #125
Comments
Each query creates a new DB connection. This change is to close each new connection rather than leaving them open, to allow for runs which involve large numbers of queries.
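For reference, here is a minimal sketch of the close-after-each-query pattern described above; the function name and query are placeholders, not the actual things.py code:

```python
import sqlite3
from contextlib import closing

def run_query(db_path: str, sql: str, params: tuple = ()) -> list:
    """Open a connection for a single query and close it before returning."""
    # closing() guarantees con.close() runs even if the query raises, so
    # repeated calls never accumulate open connections.
    with closing(sqlite3.connect(db_path)) as con:
        return con.execute(sql, params).fetchall()
```

With this shape, a run that issues thousands of queries never holds more than one connection open at a time.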
Actually, the tests with Python 3.13 warn about the connection being left open.
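The exact warning isn't quoted above, but Python 3.13's sqlite3 module emits a ResourceWarning when a connection is finalized without close() having been called. A small, self-contained way to surface it (illustrative only):

```python
import gc
import sqlite3
import warnings

# ResourceWarning is ignored by default, so make it visible.
warnings.simplefilter("always", ResourceWarning)

def leak_connection() -> None:
    con = sqlite3.connect(":memory:")  # opened but never closed
    con.execute("SELECT 1")

leak_connection()  # the abandoned connection is finalized here (or at the
gc.collect()       # next GC pass), and Python 3.13+ prints a ResourceWarning
```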
Thanks, in that case I think it'd make sense to roll back the second of those commits: the intention of that second commit was an optional performance improvement, leaving the connection open (separate transactions, but shared connection), and that warning makes it clear that it's bad practice to do so. I.e. the latter of the two commits would need to be rolled back. I tested both versions (with just the first commit and with both), and while sharing a connection did speed things up (75% or so faster IIRC), it's already very fast even on large Things databases. Would you prefer a PR for the revert of that second commit, or would you prefer to revert it directly? Btw, thank you for this great tool!
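For context, a sketch of roughly what the rolled-back optimization looks like: one cached connection reused across queries instead of one connection per query. Names are illustrative, not the actual commit:

```python
import sqlite3

_cached_con: sqlite3.Connection | None = None  # shared across all queries

def run_query_shared(db_path: str, sql: str) -> list:
    """Reuse a single connection for every query; faster, but never closed."""
    global _cached_con
    if _cached_con is None:
        _cached_con = sqlite3.connect(db_path)
    # Leaving _cached_con open for the life of the process is exactly what
    # the Python 3.13 warning above flags as bad practice.
    return _cached_con.execute(sql).fetchall()
```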
Thanks. A PR that makes sure that
@mbhutton @AlexanderWillner Thank you for your work here. 75% faster! 🥳
Note: this is solved by an upcoming pull request, but I'm filing it here as a bug report for reference.
To Reproduce
Steps to reproduce the behavior, against a Things database containing a large number of to-dos (~2500):
things-cli todos
Expected behavior
It runs successfully
Observed behavior
It fails with:
sqlite3.OperationalError: unable to open database file
Python: 3.13.0
things-cli: version 0.2.1
Root cause appears to be that each SQL query creates a new connection which is never closed, and so when querying tags for each task, the number of open connections is at least as high as the number of tasks.
The working fix is to close the connection after each query.
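A rough sketch of the failure mode (the path and loop count are placeholders): each unclosed connection holds an open file descriptor, so a long run eventually hits the process's open-file limit and sqlite3.connect() fails with the OperationalError above.

```python
import sqlite3

connections = []
for _ in range(5000):
    # Placeholder path; in things-cli this would be the Things SQLite database.
    con = sqlite3.connect("things.sqlite")
    con.execute("SELECT 1")
    connections.append(con)  # kept open, never closed
# With a typical default limit of a few hundred to a few thousand open files,
# sqlite3.connect() typically raises "sqlite3.OperationalError: unable to open
# database file" long before the loop completes.
```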