When importing a large quantity of data, it is possible to exceed Heroku's 30-second request timeout, which apparently triggers a retry and results in duplicate data.

Possible solutions:
- Don't retry on timeout; instead allow whatever is in process to complete, with a message to the user that, while the request timed out, their data may be partially present.
- Don't retry on timeout; instead delete all data that was imported during the failed request (perhaps via some kind of atomic transaction that rolls back on error).
- Move the actual import process to a background task instead of running it during the request. The request only needs to stay alive for the file upload, which is relatively quick. Of course, this would require resources for a background queue.
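A minimal sketch of the second option, using Python's built-in sqlite3 as a stand-in for the real database; the table and function names here are hypothetical, not from the actual app. The point is that wrapping the whole import in one transaction means a mid-import failure (such as a timeout exception) rolls back everything, so a retry never sees partial data.

```python
import sqlite3

def import_rows(conn, rows):
    """Insert all rows atomically: the `with conn:` block commits on
    success and rolls back on any exception, so a failure mid-import
    leaves no partial data behind."""
    with conn:
        conn.executemany("INSERT INTO items (name) VALUES (?)", rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")

# A successful import commits all rows at once.
import_rows(conn, [("alpha",), ("beta",)])

# A failing import (malformed row partway through) rolls back entirely,
# so the earlier rows of the bad batch are not persisted either.
try:
    import_rows(conn, [("gamma",), (None, None)])  # second row is malformed
except Exception:
    pass
```

With this shape, a Heroku retry of the same request would simply re-run the whole import rather than stacking duplicates on top of a half-finished one.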