To speed this up we could use asynchronous requests if an S3 backend is used. This circumvents Laravel's file system abstraction and uses the S3 SDK directly. Basically in the place shown above, detect if an S3 adapter is used for the storage disk. Then extract the S3 client from it and upload the files asynchronously as described here: https://stackoverflow.com/a/65365224/1796523
Make the number of parallel connections configurable. The default number should be 10.
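A minimal sketch of the detection step, assuming Laravel with Flysystem 1.x (class and method names differ in Flysystem 3, and the disk name used here is hypothetical):

```php
<?php

use Illuminate\Support\Facades\Storage;
use League\Flysystem\AwsS3v3\AwsS3Adapter;

// Resolve the storage disk that the tiles are uploaded to.
$disk = Storage::disk('tiles');

// Unwrap Laravel's abstraction to get the underlying Flysystem adapter.
$adapter = $disk->getDriver()->getAdapter();

if ($adapter instanceof AwsS3Adapter) {
    // Extract the raw S3 client and upload asynchronously,
    // as shown in the StackOverflow code below.
    $s3Client = $adapter->getClient();
} else {
    // Fall back to the existing sequential upload through
    // the filesystem abstraction.
}
```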
Code from StackOverflow:
$files = glob('/path/to/your/files/*'); // This will return an array of all files in your folder

try {
    // Init of S3 client
    $s3Client = new \Aws\S3\S3Client([
        'version' => 'latest',
        'region' => '', // Desired AWS region
        'credentials' => [
            'key' => '', // Your AWS key
            'secret' => '', // Your AWS key secret
        ],
    ]);

    // Logic about your requests and how to execute them
    $uploads = function ($files) use ($s3Client) {
        foreach ($files as $file) {
            yield $s3Client->putObjectAsync([
                'Bucket' => '', // Name of your bucket
                'Key' => basename($file),
                'SourceFile' => $file,
            ]);
        }
    };

    // Execute your requests with Guzzle because $s3Client->putObjectAsync() returns \GuzzleHttp\Promise\Promise
    \GuzzleHttp\Promise\Each::ofLimit(
        $uploads($files),
        3, // How many concurrent requests to start
        function ($response, $index) { // Callback on success
            var_dump('Success: ' . $index);
        },
        function ($reason, $index) { // Callback on failure
            var_dump('Error: ' . $index);
        }
    )->wait();
} catch (\Aws\S3\Exception\S3Exception $e) {
    var_dump($e->getMessage());
}
A tiled image can easily produce 100,000 files. It is extremely slow to upload them sequentially to an S3 backend here:
core/app/Jobs/TileSingleImage.php
Lines 92 to 107 in e4756df