This won't scale much further: if I add any more training data, it won't all fit in memory. In this project they keep a list of all the training images and load only what is required for each training batch.
My current approach might be a bit quicker to train since the I/O is front-loaded, but eventually something like this will be required.
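The linked project isn't quoted in this thread, so the following is only a minimal sketch of the per-batch loading idea, assuming a Keras-style pipeline; the class name, the `train_paths`/`train_labels` inputs, and the preprocessing are all hypothetical.

```python
import math

import numpy as np
from PIL import Image
from tensorflow import keras


class LazyImageBatches(keras.utils.Sequence):
    """Keep only file paths in memory; decode pixel data one batch at a time."""

    def __init__(self, image_paths, labels, batch_size=32):
        super().__init__()
        self.image_paths = image_paths  # paths, not decoded images
        self.labels = labels
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return math.ceil(len(self.image_paths) / self.batch_size)

    def __getitem__(self, idx):
        start = idx * self.batch_size
        paths = self.image_paths[start:start + self.batch_size]
        # Only this batch's images are decoded; they are freed afterwards.
        # Assumes all images share one size so np.stack succeeds.
        X = np.stack([np.asarray(Image.open(p), dtype=np.float32) / 255.0
                      for p in paths])
        y = np.asarray(self.labels[start:start + self.batch_size])
        return X, y
```

A generator like this plugs straight into `model.fit(LazyImageBatches(train_paths, train_labels))`, so peak memory is bounded by the batch size rather than the dataset size.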
* Changed prepare samples to use an np array for X
The previous method, which accumulated samples in a Python list, used too much memory: with 20 GB worth of 1080p images, usage would exceed my computer's 16 GB of RAM. With the updated code it stays under 1 GB (a rough sketch of the change follows the commit list). Partially resolves #2.
* Update utils.py
Co-authored-by: Kevin Hughes <[email protected]>
* Update with suggestions
Co-authored-by: Kevin Hughes <[email protected]>
* Put back load_imgs
* Fix with upstream
Co-authored-by: Kevin Hughes <[email protected]>
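The actual prepare-samples code isn't shown in the thread, so this is only a rough sketch of the kind of change the first commit describes, under the assumption that samples are downscaled before training; the function name, sample shape, and resize step are hypothetical. The saving comes from filling a preallocated array in place, so each full-resolution source image can be freed as soon as its much smaller sample is written, instead of keeping every intermediate alive in a list and copying them all at once with `np.array(...)`.

```python
import numpy as np
from PIL import Image

SAMPLE_SHAPE = (90, 160, 3)  # hypothetical downscaled sample size (H, W, C)


def prepare_samples(image_paths):
    # Preallocate the final array up front instead of growing a list.
    X = np.empty((len(image_paths), *SAMPLE_SHAPE), dtype=np.uint8)
    for i, path in enumerate(image_paths):
        with Image.open(path) as img:
            # PIL's resize takes (width, height).
            small = img.resize((SAMPLE_SHAPE[1], SAMPLE_SHAPE[0]))
            # Write directly into row i; the 1080p source is freed
            # when the `with` block exits.
            X[i] = np.asarray(small.convert("RGB"))
    return X
```

Storing the samples as `uint8` rather than a list of float arrays also cuts the footprint further, which is consistent with the large reduction reported above.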