Go Client v7.3.0 #433
Merged
Conversation
This fix supports unmarshalling ordered maps into hash maps (map[K]V) and []MapPair in structs.
The client would use a single buffer on the connection and would grow it on demand when it needed a bigger buffer, but would not shrink it. This helped avoid buffer pools and the associated synchronization, but resulted in huge memory use when there were a few large records in the results, even if they were infrequent. This changeset does two things: 1. Uses a memory pool for large records only. Large records are defined as records bigger than aerospike.DefaultBufferSize. This is a tiered pool with different buffer sizes. The pool uses sync.Pool under the covers, releasing unused buffers back to the runtime. 2. By using bigger aerospike.DefaultBufferSize values, the user can imitate the old behavior, so no memory pool is used. This change should result in much lower memory use by the client.
Signed-off-by: Swarit Pandey <[email protected]>
Add a histogram and use the median value over intervals to readjust the connection buffer up or down to optimize memory use.
"math/rand" is fast enough now
…b-batches to Get requests. If the number of keys in a sub-batch sent to a node is equal to or less than the value set in BatchPolicy.DirectGetThreshold, the client uses direct get instead of batch commands to reduce the load on the server.
…lying policy is nil
…le key batch commands to single operate commands
Improve the BatchOperate test
justinlee-aerospike approved these changes on May 8, 2024
changes look good
This is a major feature release of the Go client and touches some fundamental aspects of its inner workings.
We suggest complete testing of your application before using it in production.
New Features
CLIENT-2238 Convert batch calls with just one key per node in sub-batches to Get requests.
If the number of keys in a sub-batch sent to a node is equal to or less than the value set in BatchPolicy.DirectGetThreshold, the client uses direct Get requests instead of batch commands to reduce the load on the server.
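For illustration, here is a minimal sketch of opting into this behavior from application code. The host, namespace, set, key count, and threshold value are placeholders; BatchPolicy.DirectGetThreshold is the field introduced by this change.

```go
package main

import (
	"log"

	aerospike "github.com/aerospike/aerospike-client-go/v7"
)

func main() {
	client, err := aerospike.NewClient("127.0.0.1", 3000)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Sub-batches of at most 3 keys per node are sent as direct Get
	// requests instead of batch commands; the threshold value here is
	// arbitrary and should be tuned for your workload.
	bp := aerospike.NewBatchPolicy()
	bp.DirectGetThreshold = 3

	keys := make([]*aerospike.Key, 0, 10)
	for i := 0; i < 10; i++ {
		key, kerr := aerospike.NewKey("test", "demo", i)
		if kerr != nil {
			log.Fatal(kerr)
		}
		keys = append(keys, key)
	}

	// BatchGet splits the keys into per-node sub-batches; sub-batches at or
	// below the threshold fall back to single Get commands.
	records, err := client.BatchGet(bp, keys)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("fetched %d records", len(records))
}
```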
CLIENT-2274 Use constant-sized connection buffers and resize the connection buffers over time.
The client used to keep a single buffer per connection and grow it on demand whenever a bigger buffer was needed, but never shrink it. This helped avoid buffer pools and the associated synchronization, but resulted in excessive memory use when a few large records showed up in the results, even if they were infrequent. This changeset does the following:
1. Uses a memory pool for large records only. Large records are defined as records bigger than aerospike.PoolCutOffBufferSize. This is a tiered pool with different buffer sizes. The pool uses sync.Pool under the covers, releasing unused buffers back to the runtime.
2. By using bigger aerospike.DefaultBufferSize values, the user can imitate the old behavior, so no memory pool is used most of the time.
3. Setting aerospike.MinBufferSize prevents the pool from using buffer sizes that are too small and would have to be grown frequently.
4. Buffers are resized every 5 seconds to the median size of the buffers used over the previous period, within the above limits.
This change should result in much lower memory use by the client.
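A hedged tuning sketch, assuming these knobs are settable package-level variables (as the notes above imply) and that they should be set before the client is created so new connections pick them up; the sizes are illustrative, not recommendations.

```go
package main

import (
	"log"

	aerospike "github.com/aerospike/aerospike-client-go/v7"
)

func main() {
	// Sketch only: assumed to be settable package-level variables.

	// Default connection buffer size; using bigger values imitates the old
	// behavior, so the pool is used less often.
	aerospike.DefaultBufferSize = 128 * 1024 // 128 KiB

	// Records bigger than this are considered large and are served from
	// the tiered sync.Pool instead of the connection buffer.
	aerospike.PoolCutOffBufferSize = 1024 * 1024 // 1 MiB

	// Prevents the pool from using buffer sizes that are too small and
	// would have to be grown frequently.
	aerospike.MinBufferSize = 8 * 1024 // 8 KiB

	client, err := aerospike.NewClient("127.0.0.1", 3000)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// ... run the workload as usual ...
}
```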
CLIENT-2702 Support Client Transaction Metrics. The native client can now track transaction latencies using histograms. Enable them using the Client.EnableMetrics API.
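A hedged sketch of turning metrics on. The exact signature of EnableMetrics, and whether nil selects a default metrics policy, are assumptions here; verify against the v7.3.0 package documentation.

```go
package main

import (
	"log"

	aerospike "github.com/aerospike/aerospike-client-go/v7"
)

func main() {
	client, err := aerospike.NewClient("127.0.0.1", 3000)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Assumed signature: EnableMetrics(*MetricsPolicy), with nil selecting
	// a default policy. Check the v7.3.0 docs for the exact form and for
	// how to read back the latency histograms.
	client.EnableMetrics(nil)

	// ... run the workload; the client now records per-transaction
	// latencies in histograms ...
}
```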
Improvements
- Set MaxRecvMsgSize to handle big records for the proxy client.
- Removed the xrand sub-package since the native API is fast enough.
- Improved the WritePolicy.SendKey documentation, thanks to Rishabh Sairawat.
- Replaced ioutil.ReadFile with os.ReadFile. PR chore: deprecate io/ioutil #430, thanks to Swarit Pandey.
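Since ioutil.ReadFile has been a thin wrapper around os.ReadFile since Go 1.16, the same mechanical replacement applies to application code; a minimal sketch (the file path is an arbitrary example):

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Before (deprecated since Go 1.16):
	//   data, err := ioutil.ReadFile("cert.pem")
	// After: os.ReadFile has the identical signature and behavior.
	data, err := os.ReadFile("cert.pem") // arbitrary example path
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes", len(data))
}
```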
Fixes
- Fixed []MapPair return in reflection. This fix supports unmarshalling ordered maps into map[K]V and []MapPair in structs.