Commit
* explicitly specify hasher in default HashMap impl (#3), as per clippy's implicit hasher lint
* set aHash as default hasher, add DefaultBuildHasher type (#4)
  * add a public DefaultHashBuilder type (aHash)
    * hashbrown switched from Fx Hash to aHash, which is already a good indicator
    * aHash provides better performance than Fx Hash on strings when running on x86 processors, which is the exact use case I am targeting
    * having a DefaultHashBuilder type allows users to be slightly more insulated from changes to the hashing algorithm in the future
  * clean up benchmarks
    * apply some clippy lints
    * change the mutex benchmark so it compiles (use aHash instead of Fx Hash)
  * bump version requirements, add the num_cpus dependency
    * set the number of threads equal to the hardware core count
  * clean up an incorrect automerge (Fx Hash/aHash)
* relax memory orderings (#5)
  * relax atomic memory orderings
    these are almost certainly wrong
  * use acquire/acquire CAS semantics when swinging the bucket array pointer
    this is to ensure that all writes to the returned bucket array are seen; it would be better if we had consume loads, but that's not possible, unfortunately
* add read-modify-write functions (#8)
  * simplify the * implementations to use the *_and functions
    look at all that code we deleted. beautiful
  * write and document the public interface for the RMW ops:
    * insert_or_modify - insert a new value or mutate an existing one
    * insert_with_or_modify - insert_or_modify using a function to get the value to insert
    * insert_or_modify_and - insert_or_modify that returns the result of invoking a function on the previous value
    * insert_with_or_modify_and - insert_with_or_modify that returns the result of invoking a function on the previous value
    * modify - mutate an existing value
    * modify_and - modify that returns the result of invoking a function on the previous value
  * fix README.md so it won't fail assertions (d'oh)
  * refactor insert_or_modify to use insert_or_modify_and instead of insert_with_or_modify_and, because it's simpler to understand and less code
  * use usize::next_power_of_two (kind of dumb to roll my own, but lessons learned and all that)
  * normalize the order of trait bounds in generics
  * fix a use-after-free/double-free bug with buckets

it was previously possible for buckets to be involved in use-after-free and double-free bugs. say some bucket is shared between three bucket arrays, and that bucket is then removed from the latest array. if all threads have a copy of the first bucket array, then when they try to traverse the bucket array linked list they will only get as far as the second bucket array, which still contains a pointer to the now-deleted bucket. despite that pointer being marked as a redirect, we still have to read the bucket's key to determine whether we need to walk the list, so subsequent threads would be reading this memory after it was freed.

the fix is to keep an epoch count in each bucket array that is incremented each time the buckets grow, and to return the bucket array that was used to fulfill a get/insert/remove operation. when we walk the bucket array list, we require that the final bucket array has an epoch at least as high as that of the returned bucket array. it is supremely unlikely that this epoch tag will overflow, since bucket arrays double in size when they grow. the hash map's bucket array after swinging is therefore guaranteed to be up to date from the point of view of the current thread, meaning there is no way for subsequent threads to read pointers to now-freed buckets.
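Below is a minimal sketch of that epoch check, for illustration only: BucketArray, epoch, next, and walk_to_epoch are names invented for this example, plain owned pointers stand in for the atomic pointers and GC guards the real map needs, and the bucket contents are elided.

```rust
// Simplified model of the epoch check described above (hypothetical names).
struct BucketArray {
    epoch: usize,                   // incremented each time the buckets grow
    next: Option<Box<BucketArray>>, // the array that supplanted this one, if any
}

/// Walk the bucket-array list until reaching an array whose epoch is at
/// least `min_epoch` (the epoch of the array that fulfilled the operation).
/// This keeps the caller off stale arrays that may still point at buckets
/// a newer array has already removed.
fn walk_to_epoch(mut current: &BucketArray, min_epoch: usize) -> &BucketArray {
    while current.epoch < min_epoch {
        match current.next.as_deref() {
            Some(next) => current = next,
            None => break, // shouldn't happen: the fulfilling array is reachable
        }
    }
    current
}

fn main() {
    let newest = BucketArray { epoch: 2, next: None };
    let older = BucketArray { epoch: 1, next: Some(Box::new(newest)) };
    let oldest = BucketArray { epoch: 0, next: Some(Box::new(older)) };

    // A thread still holding `oldest` must not stop at the epoch-1 array if
    // its operation was fulfilled by the epoch-2 array.
    assert_eq!(walk_to_epoch(&oldest, 2).epoch, 2);
}
```

In the real map the minimum epoch comes back from the get/insert/remove operation itself, so the code swinging the bucket array pointer knows how far forward it must walk.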
this commit also contains an improvement to HashMap::drop that removes its reliance on a std::collections::HashSet<usize>. we know that all buckets with a redirect tag are either duplicated in the next bucket array or have been removed and are waiting to be destroyed when the GC runs. so the way to know whether a bucket should be deallocated in HashMap::drop is to check whether the bucket pointer has the redirect tag set; if it is set, we can defer freeing it until later. otherwise we are ok to free it immediately, since it won't be present in subsequent bucket arrays and of course won't be visible to other threads (guarantees of &mut). a simplified sketch of this tag check appears after the commit list below.

finally, BucketArray::grow_into has been improved to check for a supplanted flag while iterating and to set that supplanted flag when it finishes operating. threads that encounter the supplanted flag can simply skip the grow_into operation, since it has already been completed by another thread.

* implement HashMap::modify{,_and}
  this implementation seems pretty efficient; it doesn't make any duplicate checks or allocations inside the CAS loop.
* remove the K: Clone bound for insert_or_modify
* implement insert_or_modify, but a bug is exposed
  * BucketArray::grow_into has a double-free/use-after-free bug
* keep hammering on it
  * still no progress on the use-after-free/double-free bug in grow_into
* add some more tests to cover more overlapping operations
* refactor some code to "simplify it" according to my twisted desires
* bump versions of dependencies
* bump version to 0.2.0
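Below is a hypothetical, heavily simplified sketch of the redirect-tag check that HashMap::drop relies on, as described above. It assumes the redirect tag lives in an otherwise-unused low alignment bit of each bucket pointer; Bucket, REDIRECT_TAG, and drop_buckets are illustrative names rather than the crate's actual internals, and the real Drop impl walks atomic pointers inside bucket arrays.

```rust
// Assumed representation: the pointer's lowest bit is the redirect tag
// (buckets are at least 8-byte aligned, so the bit is otherwise unused).
const REDIRECT_TAG: usize = 1;

struct Bucket {
    key: u64,
    value: u64,
}

fn drop_buckets(buckets: &mut Vec<*mut Bucket>) {
    for &ptr in buckets.iter() {
        if ptr.is_null() {
            continue;
        }
        if ((ptr as usize) & REDIRECT_TAG) != 0 {
            // Tagged as redirect: either duplicated in a newer bucket array
            // or queued for the GC, so freeing it here would be a double free.
            continue;
        }
        // Untagged: this array is the sole owner, free it now.
        unsafe { drop(Box::from_raw(ptr)) };
    }
    buckets.clear();
}

fn main() {
    let owned = Box::into_raw(Box::new(Bucket { key: 1, value: 10 }));
    let shared = Box::into_raw(Box::new(Bucket { key: 2, value: 20 }));
    let tagged = ((shared as usize) | REDIRECT_TAG) as *mut Bucket;

    let mut buckets = vec![owned, tagged];
    drop_buckets(&mut buckets);

    // The tagged bucket was skipped; in the real map the GC (or a newer
    // bucket array) reclaims it later. Here we free it manually.
    unsafe { drop(Box::from_raw(shared)) };
}
```

Because the tag alone says who is responsible for each bucket, Drop no longer needs the separate HashSet<usize> bookkeeping mentioned above.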