I tried using PyFilesystem's s3fs to process a large quantity of data from Amazon S3 (more than fits on disk), and the operation failed because I ran out of space for the temp files s3fs creates.

Would it make sense to add an option, or to change the default, so that files are loaded directly into memory instead of being written to disk?
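For context, here is a minimal sketch of the pattern that hits this, assuming the fs-s3fs package; the bucket name and the processing function are placeholders:

```python
from fs_s3fs import S3FS

def process(data: bytes) -> None:
    # Placeholder for the real per-object processing.
    print(len(data))

s3 = S3FS("my-bucket")  # placeholder bucket

# Each open() is backed by a local temp file, so iterating over many
# large objects can exhaust local disk space.
for path in s3.walk.files():
    with s3.open(path, "rb") as f:
        process(f.read())
```

As a workaround today, the object body can be streamed straight into memory with boto3, which is roughly what an in-memory option could do internally (illustrative only, not an existing s3fs API):

```python
import io
import boto3

client = boto3.client("s3")

# Read the whole object into an in-memory buffer, never touching disk.
# "my-bucket" and "some/key" are placeholders.
obj = client.get_object(Bucket="my-bucket", Key="some/key")
buf = io.BytesIO(obj["Body"].read())
```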