feat: Large file download optimization #2656 #2665
base: master
Conversation
Files with resolved review comments:
...rage/storage-api/src/main/kotlin/com/tencent/bkrepo/common/storage/config/CacheProperties.kt
...-service/src/main/kotlin/com/tencent/bkrepo/common/storage/core/cache/CacheStorageService.kt
...backend/common/common-storage/storage-service/src/test/resources/storage-cache-fs.properties
This change means that a user downloading over a single connection can no longer trigger the repository backend's chunked download; we may need to guide users toward download tools that use multiple connections.
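The multi-connection approach suggested above can be sketched as follows. This is a hypothetical client-side illustration, not bkrepo code; `splitRanges` and `rangeHeader` are invented names. Each computed range would become the `Range` header of one parallel HTTP request.

```kotlin
// Hypothetical client-side sketch: split a large artifact into byte ranges so a
// download tool can fetch them over several parallel connections.
// splitRanges and rangeHeader are illustrative names, not bkrepo APIs.

fun splitRanges(totalSize: Long, connections: Int): List<LongRange> {
    require(totalSize > 0 && connections > 0)
    val chunk = (totalSize + connections - 1) / connections  // ceiling division
    return (0 until connections)
        .map { i -> (i * chunk)..minOf((i + 1) * chunk, totalSize) - 1 }
        .filter { it.first <= it.last }  // drop empty trailing ranges
}

// Each range maps directly to an HTTP Range request header (inclusive bounds).
fun rangeHeader(r: LongRange): String = "bytes=${r.first}-${r.last}"

fun main() {
    // e.g. a 1 GiB artifact fetched over 4 connections
    val ranges = splitRanges(1L shl 30, 4)
    ranges.forEach { println(rangeHeader(it)) }
}
```

A tool would issue one ranged GET per range concurrently and write each response body at its range offset in the output file.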
Yes. We also considered re-enabling the original chunked download, but since the original large-file chunked download does not populate the cache and is degraded to a plain download at peak load anyway, the current large-file download path is still the better option.
#2667: because this adds a chunk cache directory, the existing monitoring and cleanup logic needs corresponding changes. Since that code is already fairly complex, a separate issue was opened to optimize this part.
The two could be combined. Right now only the next chunk is prefetched; if several upcoming chunks were downloaded into the cache concurrently, single-connection download speed would improve.
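The concurrent prefetch idea could look roughly like this. A minimal sketch, assuming an in-memory cache and a synchronous chunk loader; `ChunkCache` and its methods are invented names, not the actual `CacheStorageService` interface.

```kotlin
// Hypothetical sketch: while serving one chunk, warm the cache with the next
// `ahead` chunks on a background pool, so a single-connection reader finds
// them already cached. Not the real CacheStorageService API.

import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.Executors

class ChunkCache(private val loader: (Int) -> ByteArray) {
    private val cache = ConcurrentHashMap<Int, ByteArray>()
    private val pool = Executors.newFixedThreadPool(4)

    // Serve chunk `index`, then schedule the next `ahead` chunks concurrently.
    // computeIfAbsent deduplicates loads if a prefetch races with a read.
    fun read(index: Int, ahead: Int, lastIndex: Int): ByteArray {
        val data = cache.computeIfAbsent(index) { loader(it) }
        for (next in index + 1..minOf(index + ahead, lastIndex)) {
            pool.submit { cache.computeIfAbsent(next) { loader(it) } }
        }
        return data
    }

    fun shutdown() = pool.shutdown()
}

fun main() {
    // Fake storage read: chunk i is 4 bytes of value i.
    val cache = ChunkCache { idx -> ByteArray(4) { idx.toByte() } }
    val chunks = (0..3).map { cache.read(it, ahead = 2, lastIndex = 3) }
    cache.shutdown()
    chunks.forEachIndexed { i, c -> check(c.size == 4 && c[0] == i.toByte()) }
    println("ok")
}
```

As the comment below notes, the trade-off is extra load on the server and storage backend, and prefetched chunks may be wasted if the download is cancelled.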
From the server's point of view we care more about throughput and stability. Speeding up a single connection with a background 1:n download mode adds load on both the server and the storage backend, and prefetched chunks may never be read at all, for example when the user cancels or the download is interrupted. So in summary, users who want faster downloads should use chunked (ranged) downloads. One more point: the "slow single connection" we are discussing is really a slow storage backend. If the backend itself does not throttle, background 1:n chunked download is even less necessary. That is why, in the code design, I favor generality over writing special logic for one particular storage backend; that is my view.
What was the original rationale for CosClient supporting background chunked download? Could we now disable that feature, or remove the code, and uniformly recommend client-side chunked download instead?
It was added to improve single-connection speed, and the feature is still needed. COS download acceleration is not only for end users' client downloads: we ourselves also act as a client downloading from COS, and that is exactly where COS chunked download pays off.
No description provided.