Optimize generating many bitsets for serialization #365
Comments
Let me know what you think, @lemire!
@lemire Pinging again if you have a chance to look.
Yes... this is a reasonable use case, but please review my comments on the PR. I think we want a non-invasive solution. That is, we want something that does not fatten the structures and code for everyone just to handle this one case. You are probably limited by memory allocation/garbage collection, and thus you try to create pools of arrays to get around the problem. That is fine, but it is also not a general-purpose solution. So we want to push the complexity, the potential bugs, and all such things away from what most people use. See my sketch of a counterproposal.
Thanks for taking a look, I really appreciate it! I think you're right that what I basically wanted was a custom allocator, so I responded with an alternative proposal that makes this explicit. It's slightly more invasive than your proposal, but much more generic, and I think it will lend itself to a much wider variety of incremental optimizations.
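For illustration only, here is a minimal Go sketch of what the "custom allocator" direction could look like, using a sync.Pool of container-sized buffers. All names here are invented for the sketch and are not part of the roaring API:

```go
package allocsketch

import "sync"

// containerPool hands out reusable backing arrays for bitmap containers so
// that short-lived bitmaps don't allocate fresh slices on every build cycle.
// A full bitmap container is 1024 uint64 words (65536 bits).
var containerPool = sync.Pool{
	New: func() interface{} {
		buf := make([]uint64, 1024)
		return &buf
	},
}

// getContainer borrows a container buffer from the pool and zeroes it.
func getContainer() *[]uint64 {
	buf := containerPool.Get().(*[]uint64)
	for i := range *buf {
		(*buf)[i] = 0
	}
	return buf
}

// putContainer returns a buffer to the pool once the bitmap is done with it.
func putContainer(buf *[]uint64) {
	containerPool.Put(buf)
}
```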
There's another approach, which I have been thinking of implementing in Java, and which should be easy to implement so long as the integers added to the bitmaps are sorted: avoid allocating memory in the first place, since the bitmaps being built are themselves temporary. A serialized bitmap has two sections: the header and the containers. The structure of the header doesn't lend itself to streaming; it describes:
- the number of containers,
- which containers are run-encoded,
- the key and cardinality of each container,
- the byte offset of each container within the serialized output.
These quantities need to be known in their entirety before writing the header. Though the header can't be streamed, the containers can: they are just written out contiguously and indexed by the offsets in the header. This lends itself to a lot of object reuse while streaming integers straight to an output: it requires a reusable bitset of run-container locations, reusable lists of key/cardinality pairs and container offsets, a running count of containers, and a reusable container to accumulate added integers into (or even just another reusable set to buffer values destined for the same container). As values are consumed, the high 16 bits of each value need to be tracked, and whenever a value with new high bits arrives:
- serialize the accumulated container to the output, noting in the reusable bitset whether it was run-encoded,
- record its key, cardinality, and byte offset in the reusable lists, and increment the container count,
- clear the reusable container and start accumulating values under the new key.
At the end, the header needs to be written to another destination, with the size of the header (which is calculable by this point) added to each container offset. If the destination is a file, it may be possible to move the serialized containers into the file containing the header without performing a copy (this is possible in Java, but I don't know how to do it in Go). This approach has another rather nice advantage: it avoids a binary search in the container index for each input value. The Java implementation has the concept of a …
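To make the flow concrete, here is a minimal, self-contained Go sketch of the idea, assuming the input values arrive sorted. The container encoding is a stand-in, and none of these names come from the roaring API:

```go
package main

import (
	"bytes"
	"fmt"
)

// headerEntry accumulates the per-container bookkeeping that must be known
// before the header can be written.
type headerEntry struct {
	key         uint16 // high 16 bits shared by the values in the container
	cardinality int
	offset      int // byte offset of the container within the body
}

// writeSorted streams sorted values into body, reusing a single low-bits
// buffer, and returns the header bookkeeping collected along the way.
func writeSorted(values []uint32, body *bytes.Buffer) []headerEntry {
	var header []headerEntry
	var cur []uint16 // reusable buffer of low 16-bit values for the current key
	curKey := uint16(0)
	started := false

	flush := func() {
		header = append(header, headerEntry{
			key:         curKey,
			cardinality: len(cur),
			offset:      body.Len(),
		})
		for _, lo := range cur {
			// Stand-in for real container encoding (array/bitmap/run).
			body.WriteByte(byte(lo))
			body.WriteByte(byte(lo >> 8))
		}
		cur = cur[:0] // reuse the buffer for the next container
	}

	for _, v := range values {
		key := uint16(v >> 16)
		if started && key != curKey {
			flush() // new high bits: emit the accumulated container
		}
		curKey, started = key, true
		cur = append(cur, uint16(v))
	}
	if started {
		flush()
	}
	return header
}

func main() {
	var body bytes.Buffer
	hdr := writeSorted([]uint32{1, 2, 70000, 70001}, &body)
	fmt.Println(hdr, body.Len())
}
```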
It does seem that what is needed is a bitmap writer. Of course, it would work much better if the input is sorted, but if not, you could still partially sort it in a buffer before serialization. A bitmap writer for already-sorted data could beat anything else in performance and keep allocations very scarce.
A bitmap writer would be excellent for this particular use case, yeah, although it would still be nice to reduce allocations when reading many short-lived bitmaps. I think the frozen implementation is supposed to help with this, but I'd prefer if we could avoid that.
I recommend working on a bitmap writer. It is a well-defined problem, it is relatively easy to implement safely and efficiently, the API won't be confusing, and it will not be a significant burden on the code base. Furthermore, the Java implementation already has a powerful and fast bitmap writer (although not specialized for serialization), so we have validation. If well implemented, it can be faster and simpler than anything else for the task that motivates you. It is still not trivial to do: if you look at what Richard did in Java, it still requires handling different densities and data distributions efficiently so that the right container is picked. But, at least, if you assume that the input data is sorted, it is a tractable problem. It does not preclude further work in other directions later.
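As a rough illustration only, such a writer's Go API might look something like this; the interface and its method names are entirely hypothetical, and the real design would be decided in this library:

```go
package writersketch

import "io"

// BitmapWriter sketches the shape a serialization-oriented writer could take.
// It is a hypothetical interface, not part of the roaring package.
type BitmapWriter interface {
	// Add appends a value; values must be supplied in non-decreasing order.
	Add(x uint32) error
	// WriteTo streams the finished bitmap to w without materializing an
	// intermediate roaring.Bitmap.
	WriteTo(w io.Writer) (int64, error)
	// Reset prepares the writer for the next bitmap while keeping its
	// internal buffers for reuse.
	Reset()
}
```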
One of our heaviest workloads uses this library to index a lot of ingested data in real time. In one part of the workload we basically have dozens of goroutines doing something like the following in a loop:
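A minimal Go sketch of that pattern (generate a batch of IDs, add them to a bitmap, serialize, clear, repeat); `batches`, `things`, and `sink` are placeholders standing in for our internal code:

```go
package ingest

import (
	"io"

	"github.com/RoaringBitmap/roaring"
)

// indexAndSerialize stands in for one goroutine's loop: build an index from
// each batch of generated IDs ("things"), write it out, then clear the bitmap
// so the same value can be reused on the next pass.
func indexAndSerialize(batches <-chan []uint32, sink io.Writer) error {
	bm := roaring.NewBitmap()
	for things := range batches {
		for _, x := range things {
			bm.Add(x)
		}
		if _, err := bm.WriteTo(sink); err != nil {
			return err
		}
		// Clear lets us reuse the Bitmap value, but its internal containers
		// are still dropped and reallocated on the next iteration.
		bm.Clear()
	}
	return nil
}
```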
We've already heavily optimized the code that generates `things` to be very efficient and avoid allocations/pointers. However, when it's time to actually serialize the index we're building, we have to go through this library, and it allocates heavily even though we're trying to reuse a single data structure. I've provided a benchmark that demonstrates this, along with some improvements, in this PR: #364
I realize that adding pooling of internal data structures may be a big change, but I tried to structure the PR so that it's completely "opt in" via the new `ClearRetainDatastructures()` API.

EDIT: I can't provide screenshots of profiling for obvious reasons, but I just went back and checked: the allocations removed by this PR represent ~56% of all allocations in our workload, so it will have a very substantial performance impact for us (and, I expect, for others who use the new API).