-
This seemed like a better fit for the Discussions section. I hope you didn't mind! :) Your best shot at getting an informed response to this question would be to subscribe to and ask on the netatalk-admins mailing list. This is where most of the long-time users are present. No guarantee of course, but it is worth a shot I think. And yes, the "admins" list is a bit of a misnomer. https://sourceforge.net/projects/netatalk/lists/netatalk-admins
-
Is your feature request related to a problem? Please describe.
I would like to tune Netatalk disk I/O for better interoperability with ZFS.
Describe the solution you'd like
The default values for `server quantum` and `dsireadbuf` (according to the manual) imply that Netatalk will read ahead up to 12 MB as a prefetch mechanism when it detects sequential reads.
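For reference, this is the afp.conf excerpt I am reasoning about, with the manual's documented defaults written out explicitly (the numeric quantum assumes the documented default of 0x100000 bytes):

```ini
; afp.conf excerpt: the two read-ahead knobs with their documented defaults
[Global]
; maximum DSI transfer block, in bytes (manual default: 0x100000 = 1 MiB)
server quantum = 1048576
; read-ahead buffer as a multiple of server quantum
; (manual default: 12, hence ~12 MB of read-ahead)
dsireadbuf = 12
```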
I believe (maybe naively) that `server quantum` roughly equates to ZFS `recordsize`, and `dsireadbuf` roughly equates to the ZFS sysctl `vfs.zfs.prefetch.max_distance`.

So does Netatalk on OpenZFS result in double prefetching? If Netatalk detects a sequential read, it should reach a high read-ahead value quickly (I assume the read-ahead grows the longer the sequential read lasts). OpenZFS will in turn observe Netatalk's read-ahead and will itself also start reading ahead, in front of Netatalk's read-ahead. Since OpenZFS by default reads ahead up to 64 MB, there is a risk of significant read amplification?
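For concreteness, here is one way to inspect the ZFS side of the comparison. `tank/share` is a placeholder dataset name, and the exact tunable names vary across OpenZFS versions and platforms, so I list them by pattern rather than by name:

```sh
# FreeBSD: list prefetch-distance ZFS tunables (names differ between versions)
sysctl vfs.zfs | grep -i distance

# Linux: the equivalent OpenZFS module parameters
grep -H . /sys/module/zfs/parameters/*distance* 2>/dev/null

# Dataset record size (default 128K); "tank/share" is a placeholder
zfs get recordsize tank/share
```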
Describe alternatives you've considered
In the case of Netatalk on an intelligent filesystem like ZFS, is it better to reduce `dsireadbuf` and allow ZFS to perform the read-ahead instead (considering it knows the block layout)? I believe yes.

And should `server quantum` be set to match the ZFS `recordsize`? I believe this is not important and likely to have undesirable side effects.
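As a sketch, the change I have in mind is just this (the value 2 is illustrative; `server quantum` is left at its default):

```ini
; afp.conf excerpt: shrink Netatalk's read-ahead and let ZFS prefetch instead
[Global]
; 2 x server quantum (~2 MB with the default 1 MiB quantum)
dsireadbuf = 2
```

Additional context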
The default values (12 MB of read-ahead for Netatalk and 64 MB for ZFS) work great for large files and reach peak speed easily, but latency can suffer, especially with small files.
In my own testing, reducing `dsireadbuf` to 2 or 4 has zero impact on reading large files, but gives a small benefit when reading small files. Most notably, it seems to improve parallel I/O performance during Netatalk reads (I assume because the disks spend less time on wasted reads, leaving time for other operations). This is observational; I have no empirical evidence, as it is hard to measure the respective buffers in each layer.
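For anyone who wants to reproduce the observation, this is roughly how I watched the disks during a sequential-read test; `tank` is a placeholder pool name:

```sh
# Per-vdev read throughput, refreshed every second, while a client reads a file
zpool iostat -v tank 1

# Linux: ARC prefetch hit/miss counters before and after the test
grep prefetch /proc/spl/kstat/zfs/arcstats

# FreeBSD equivalent
sysctl kstat.zfs.misc.arcstats | grep prefetch
```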
Therefore, additional explanation of these Netatalk values would help justify or dismiss this tuning idea for Netatalk on OpenZFS.
Thanks for your thoughts.