Use appropriate HTTP status code instead of internal server error when >60k sequences submitted #3401
Comments
Agree with Chaoran that you possibly don't check for the HTTP status code. Things shouldn't be stuck in the aux table though; we should clear it when the error happens, that's something that's fixable. Maybe things do get cleared after a while already, I haven't looked into it.
I changed the title: we can't prevent submission of too many sequences, but we can avoid returning an internal server error. Or actually, the backend can just batch insertions to the DB, then we can accept more than 65k.
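For illustration, a minimal sketch of the batching idea in Python (the actual backend code and schema may differ; the column names and the psycopg2 setup here are assumptions):

```python
# Illustrative sketch only: split a large submission into fixed-size batches
# so no single INSERT statement has to carry all ~65k rows at once.
# Column names are made up for the example; only the table name comes from the issue.
import psycopg2
from psycopg2.extras import execute_values

BATCH_SIZE = 10_000  # rows per INSERT statement

def insert_sequences(conn, rows):
    """rows: list of (submission_id, metadata_json, sequence_data) tuples."""
    insert_sql = (
        "INSERT INTO upload_metadata_aux (submission_id, metadata, sequence_data) "
        "VALUES %s"
    )
    with conn.cursor() as cur:
        # execute_values groups the rows into one INSERT per page of
        # BATCH_SIZE rows, so each statement stays a manageable size.
        execute_values(cur, insert_sql, rows, page_size=BATCH_SIZE)
    conn.commit()
```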
They were still in the aux table after an hour, so I don't think we are clearing it - I was also unable to submit more sequences - but potentially this is a local bug; I would have to check on a preview.
That's partially true: the ingest pipeline doesn't fail if the status code is not OK. We should log if it fails though, and I don't think I saw that, but maybe it got lost - sadly I didn't keep the logs.
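For reference, a rough sketch of what checking and logging the status code in a submission call could look like (endpoint, payload, and function names are placeholders, not the actual ingest pipeline code):

```python
# Placeholder sketch: fail loudly instead of silently continuing when the
# backend returns a non-2xx response to a submission request.
import logging
import requests

logger = logging.getLogger("ingest")

def submit_batch(url: str, files: dict, token: str) -> dict:
    response = requests.post(
        url,
        files=files,
        headers={"Authorization": f"Bearer {token}"},
        timeout=300,
    )
    if not response.ok:
        # Log the body so e.g. a 500 from the backend shows up in the pipeline logs
        logger.error("Submission failed with %s: %s", response.status_code, response.text)
        response.raise_for_status()  # make the pipeline fail rather than proceed
    return response.json()
```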
Yes, we should definitely clear the aux table. I thought that we already do, but maybe it isn't working well. Generally, I think we don't cover many error cases very well and also don't have many tests for them.
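A hypothetical sketch of such a cleanup step (the column name and the psycopg2 setup are assumptions, not the actual schema; only the table name comes from the issue):

```python
# Hypothetical cleanup: if processing a submission fails partway through,
# delete the orphaned rows from the aux table so they don't block
# subsequent submissions.
import psycopg2

def clear_aux_rows(conn, submission_id: str) -> int:
    """Remove leftover rows for a failed submission; return how many were deleted."""
    with conn.cursor() as cur:
        cur.execute(
            "DELETE FROM upload_metadata_aux WHERE submission_id = %s",
            (submission_id,),
        )
        deleted = cur.rowcount
    conn.commit()
    return deleted
```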
Just as an update - I did this via the API - there is no warning for the submitter: the submission POST request succeeds. But the sequences are stuck in the upload_metadata_aux table without an accession, and I see this same error message in the backend logs - it would be great to somehow prevent this type of submission via the API.
#1613 (comment)