I would like to process large files containing JSON objects separated by CRLF (i.e. newline-delimited JSON).
I would like to start iterating over the objects, as processed by jq, before the whole file has been read into memory. As far as I understand, the command-line jq produces results in this case as the data is piped in from stdin, not only once the pipe has been closed.
Would it be possible to implement this in jq.py by passing an open file, a TextIOWrapper, or perhaps a file name? Am I wrong that jq does not process the whole file before producing results in this case? If it is not possible to implement, what are the barriers?