Allow option to delete + redownload files #21

Open

txtsd opened this issue Sep 9, 2015 · 2 comments

Comments

txtsd commented Sep 9, 2015

Re-downloading a file should be as easy as pressing a button.

Currently there is no such feature, so the user must:

  • Delete the file
  • Open browser
  • Find and click the download link again
  • Accept download parameters in GigaGet

The suggestion is a simple redownload button in the file details view.

JRHaigh commented Aug 21, 2016

Presumably your intention is to redownload corrupt files. So far, every time I've downloaded files larger than several megabytes, the files have been corrupt, each time with a different hash, so deleting and redownloading is not a dependable solution to corruption.
    Aside from avoiding the corruption in the first place, a far more reliable solution would be a ‘repair’ operation: rather than deleting the existing file, it would be compared block-by-block against freshly downloaded data. Wherever the two differ, those blocks would be downloaded a third time, and the third copy is much more likely to finally match either the existing block or the redownloaded one. If it matches the existing block, the existing block is left unchanged; if it matches the redownloaded block, the redownloaded block is written over the existing one; if it matches neither, the block is fetched again until a matching pair is found, or the attempt is given up and the existing block left unchanged.
    This repair mechanism would vastly reduce the download amplification factor and the inconvenience of multiple manual restarts, especially for large files.
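    For concreteness, a minimal sketch of that repair pass in Python might look like the following. This is an illustration only, not GigaGet code: the fetch_block helper, the 1 MiB block size, and the retry limit are all assumptions, and it presumes the server honours HTTP Range requests.

        import os
        from urllib.request import Request, urlopen

        BLOCK_SIZE = 1 << 20   # compare in 1 MiB blocks (assumed size)
        MAX_EXTRA_FETCHES = 5  # give up on a block after this many retries

        def fetch_block(url, offset, length):
            # Fetch `length` bytes starting at `offset` via an HTTP Range
            # request; assumes the server honours Range headers.
            end = offset + length - 1
            req = Request(url, headers={"Range": "bytes=%d-%d" % (offset, end)})
            with urlopen(req) as resp:
                return resp.read()

        def repair(path, url):
            size = os.path.getsize(path)
            with open(path, "r+b") as f:
                for offset in range(0, size, BLOCK_SIZE):
                    f.seek(offset)
                    existing = f.read(BLOCK_SIZE)
                    fresh = fetch_block(url, offset, len(existing))
                    if fresh == existing:
                        continue            # both copies agree: block is sound
                    seen = {existing, fresh}
                    for _ in range(MAX_EXTRA_FETCHES):
                        third = fetch_block(url, offset, len(existing))
                        if third == existing:
                            break           # existing block confirmed: leave it
                        if third in seen:
                            f.seek(offset)  # two downloads agree: overwrite
                            f.write(third)
                            break
                        seen.add(third)
                    # if no pair ever matched, the existing block stays unchanged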

JRHaigh commented Aug 22, 2016

This repair mechanism would vastly reduce the download amplification factor and the inconvenience of multiple manual restarts, especially for large files.

So to put some example figures to that, suppose that a 1GiB file has a 1/2 chance of not being corrupt and that the error rate is uniform. A 2GiB file therefore has a 1/4 chance of not being corrupt, a 3GiB file a 1/8 chance, a 4GiB file a 1/16 chance, and so on. A 10GiB file would have a 1/1024 chance of not being corrupt, so, on average, without the repair mechanism that I suggest, you'd have a download amplification factor of about 1024 and would be bored sick of manually restarting the thing! :-]
    With the repair feature that I described in my last message, it would reduce to about 2. Suppose that this 10GiB file is split into 10240 blocks of 1MiB each. A 1MiB block would have a (1/2)^(1/1024) chance of not being corrupt, about 0.9993. Assuming that the chance of a block being corrupt in the same way twice, and thus a corrupt block going unnoticed, is negligible, the download amplification factor of the redownload is (1/2)^(-1/1024), about 1.0007. Counting the initial download and the 1023/1024 chance of it being corrupt, the overall average download amplification factor in this example would be 1 + (1023/1024)*(1/2)^(-1/1024), about 1.9997. (Please correct me if you spot a mistake.)
    So that's about a 512-fold reduction in the average download amplification factor! Having the firm guarantee that I should never need to download a file more than about twice, no matter how large it is, is something that I could really rely on. It's not as efficient as having a hash for each block, as the BitTorrent protocol does, which keeps the download amplification factor only ever slightly above 1, but it's better than the unboundedly large factors that GigaGet presently allows.
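    These figures are easy to check; for example, the following few lines of Python reproduce the arithmetic above (the numbers only, nothing to do with GigaGet's code):

        # 10GiB file with a (1/2)-per-GiB chance of surviving uncorrupted,
        # split into 10240 blocks of 1MiB for the repair pass.
        p_file_ok = 0.5 ** 10                       # 1/1024
        naive_amp = 1 / p_file_ok                   # expected full downloads: 1024

        p_block_ok = 0.5 ** (1 / 1024)              # ~0.9993 per 1MiB block
        block_amp = 0.5 ** (-1 / 1024)              # expected fetches per block: ~1.0007
        repair_amp = 1 + (1023 / 1024) * block_amp  # overall factor: ~1.9997

        print(naive_amp, round(p_block_ok, 4), round(block_amp, 4), round(repair_amp, 4))
        # 1024.0 0.9993 1.0007 1.9997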
