feat: Make the consumer crate no-std #471
Merged · +41 −17
I think the only potentially controversial part of this is the decision to switch from hash tables to B-trees for the temporary data structures that are built while processing an update. Here's my reasoning.
`HashMap` and `HashSet` aren't in `alloc` because the default hasher requires entropy. We could add a `std` feature to `accesskit_consumer` and then bring in `hashbrown` as a dependency only when `std` is disabled. But for simplicity, I'd rather not add an extra feature and different code paths if we can avoid it.

I don't like depending on a random number generator anyway, even when using `std`. Predictability is a valuable attribute in software, and I think it's worth maximizing at as many levels of the stack as possible. Also, at least on Windows, using the random number generator adds an extra library dependency that non-Rust users have to link when statically linking AccessKit.

As for performance, according to the Rust collections documentation, B-tree operations are O(log(n)), whereas the expected performance of hash table operations is O(1). Note, though, that the latter is only *expected* performance: it isn't guaranteed, because hashing is probabilistic. In my opinion, log(n) never gets too large, and having that predictable upper bound is better.
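For context, here's a minimal sketch of what the chosen approach looks like in a `no_std` crate: the B-tree collections come straight from `alloc`, with no hasher and no optional dependency. The type and field names below are illustrative, not the actual `accesskit_consumer` code.

```rust
// Illustrative sketch only: a no_std library crate using `alloc` collections.
#![no_std]
extern crate alloc;

use alloc::collections::{BTreeMap, BTreeSet};
use alloc::vec::Vec;

// Hypothetical stand-in for a node identifier.
type NodeId = u64;

pub struct PendingState {
    // BTreeMap/BTreeSet need no hasher, so no entropy source
    // (and no hashbrown dependency) is required.
    pending_nodes: BTreeMap<NodeId, Vec<NodeId>>,
    seen: BTreeSet<NodeId>,
}

impl PendingState {
    pub fn new() -> Self {
        Self {
            pending_nodes: BTreeMap::new(),
            seen: BTreeSet::new(),
        }
    }

    pub fn add_pending(&mut self, parent: NodeId, child: NodeId) {
        self.pending_nodes.entry(parent).or_insert_with(Vec::new).push(child);
        self.seen.insert(child);
    }
}
```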
I'm aware that the page I linked above also says to use hash tables unless one has a specific reason to do otherwise. I now disagree with that advice; I think it overvalues best-case performance at the expense of predictability.
Switching to B-trees does increase total binary size. I mitigated this somewhat by going to the trouble of avoiding removal of entries from the temporary `pending_nodes` and `pending_children` maps. This should be good for speed as well, and since, as I mentioned, the maps are temporary, the stale entries go away as soon as update processing is finished. With this mitigation, the added binary size, for a size-optimized build with `panic = "abort"` on x86-64, is just under 10 KB. When I switch the platform adapters to also use B-trees for their own structures, that should drop by a few KB.
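To illustrate the idea (this is a hedged sketch, not the actual `accesskit_consumer` code; the key and value types are made up): entries are read in place rather than removed, so `BTreeMap`'s entry-removal code never gets pulled into the binary, and the whole map is simply dropped when update processing ends.

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for a node identifier.
type NodeId = u64;

// Visits the pending children of the given parents.
fn attach_pending_children(pending_children: &BTreeMap<NodeId, Vec<NodeId>>, parents: &[NodeId]) {
    for parent in parents {
        // Look the entry up instead of removing it; leaving stale entries
        // behind is harmless because the map only lives for one update.
        if let Some(children) = pending_children.get(parent) {
            for _child in children {
                // ... attach the child to the tree here (omitted) ...
            }
        }
    }
}

fn main() {
    let mut pending_children = BTreeMap::new();
    pending_children.insert(1, vec![2, 3]);
    attach_pending_children(&pending_children, &[1]);
    // `pending_children` is dropped here, freeing all entries at once.
}
```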