SAMZA-2688: [Elasticity] introduce configs and sub-partition concept aka SystemStreamPartitionKeyHash #1531
base: master
Conversation
@@ -71,6 +74,7 @@ public IncomingMessageEnvelope(SystemStreamPartition systemStreamPartition, Stri
    this.message = message;
    this.size = size;
    this.arrivalTime = Instant.now().toEpochMilli();
    this.hashCodeForKeyHashComputation = key != null ? key.hashCode() : offset != null ? offset.hashCode() : hashCode();
Is there a benefit to caching and storing it, rather than computing and exposing it via a function?
I did consider this originally, but if there are multiple calls to getSystemStreamPartitionKeyHash then it is worth having this cached.
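For context, a minimal sketch of the trade-off being discussed; the class below is simplified and hypothetical, with only the field and getter names taken from the diff above:

// Hypothetical, simplified envelope: caches the hash once at construction so
// repeated reads do not recompute it.
public class EnvelopeSketch {
  private final Object key;
  private final String offset;
  private final int hashCodeForKeyHashComputation; // cached, as in this PR

  public EnvelopeSketch(Object key, String offset) {
    this.key = key;
    this.offset = offset;
    // Fall back to the offset's hash, then this object's own hash, for keyless envelopes.
    this.hashCodeForKeyHashComputation =
        key != null ? key.hashCode() : offset != null ? offset.hashCode() : hashCode();
  }

  // The reviewer's alternative: compute on demand via a function, with no stored field.
  public int computeKeyHashOnDemand() {
    return key != null ? key.hashCode() : offset != null ? offset.hashCode() : hashCode();
  }
}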
 * Aggregate object representing a portion of {@link SystemStreamPartition} consisting of envelopes within the
 * SystemStreamPartition that have envelope.key % job's elasticity factor = keyHash of this object.
 */
public class SystemStreamPartitionKeyHash extends SystemStreamPartition {
Would it be possible to add the changes (i.e., keyHash) within the SystemStreamPartition class itself?
Since logically this is representing a key-range (which is pretty much a "partition" of data, albeit different from the input Kafka partition).
Theoretically this should be possible. But the problem I foresee is with the serde of the job model: the backwards compatibility of reading an old job model containing the old SSP serde might be impacted by a new definition and serde of SSP. Let me test this and get back on this thread.
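For illustration, a hedged sketch of the subclass approach as it stands in this PR; the constructor shape is an assumption, not from the diff. Keeping keyHash in a subclass leaves the existing SystemStreamPartition serde, and any old job model containing it, untouched:

// Sketch only: keyHash lives in the subclass, so the old SSP serde is unaffected.
public class SystemStreamPartitionKeyHash extends SystemStreamPartition {
  private final int keyHash;

  // Assumed constructor: copy the SSP fields and attach the keyHash.
  public SystemStreamPartitionKeyHash(SystemStreamPartition ssp, int keyHash) {
    super(ssp.getSystem(), ssp.getStream(), ssp.getPartition());
    this.keyHash = keyHash;
  }

  public int getKeyHash() {
    return keyHash;
  }
}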
@@ -136,6 +136,11 @@
      "task.transactional.state.retain.existing.state";
  private static final boolean DEFAULT_TRANSACTIONAL_STATE_RETAIN_EXISTING_STATE = true;

  // Job elasticity related configs
  // Takes effect only when task.elasticity.factor is > 1; otherwise there is no elasticity
  private static final String TASK_ELASTICITY_FACTOR = "task.elasticity.factor";
Is this more like a task-to-partition mapping factor?
I am looking at this more as a split multiple: if the factor = 2, then each original task is split into 2 virtual/elastic tasks. It can also be looked at as a task-to-partition/SSP factor, where factor = 2 means each SSP is read by 2 virtual/elastic tasks; that would lend the same semantics.
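To make the split-multiple reading concrete, a hedged sketch of the routing math implied by the class javadoc above (envelope.key % elasticity factor = keyHash); the method itself is illustrative, not part of this PR:

// Illustrative only: with factor = 2, each SSP is read by 2 elastic tasks, and
// an envelope belongs to the elastic task whose keyHash equals the envelope's
// key hash mod the factor. floorMod guards against negative hashCode values.
static int keyHashFor(int hashCodeForKeyHashComputation, int elasticityFactor) {
  return Math.floorMod(hashCodeForKeyHashComputation, elasticityFactor);
}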
took a pass
Feature: Elasticity for Samza jobs. Throughput via parallelism is tied to the number of tasks, which is equal to the partition count of the input streams. If a job is facing lag and is already at the maximum container count (= number of tasks = number of input partitions), the only choice left is to repartition the input. This PR is part of a feature that aims to increase throughput by scaling the task count beyond the input partition count. In this PR, the config and the basic class for elasticity are introduced.
Changes: Introduce the config "task.elasticity.factor", which defaults to 1. If the factor is X > 1, each task is split into X elastic tasks. Also introduce SystemStreamPartitionKeyHash, which represents the portion of an SSP that an elastic task will consume.
Tests: Existing tests pass.
API changes: New config "task.elasticity.factor", which enables the elasticity feature when set to a value > 1.
Upgrade/usage instructions: Add the above config with a value > 1 to enable the feature.
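For example, a minimal sketch of enabling the feature in a job's properties file; all other settings of an existing job are assumed and omitted here:

# Hypothetical job config snippet: split each task into 2 elastic tasks.
# Leaving this at the default of 1 disables the feature.
task.elasticity.factor=2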