diff --git a/CHANGELOG.md b/CHANGELOG.md
index 23d5249e45..52b475abdd 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,7 +2,58 @@
## [Unreleased](https://github.com/aklivity/zilla/tree/HEAD)
-[Full Changelog](https://github.com/aklivity/zilla/compare/0.9.54...HEAD)
+[Full Changelog](https://github.com/aklivity/zilla/compare/0.9.55...HEAD)
+
+**Implemented enhancements:**
+
+- Support `extraEnv` in helm chart [\#520](https://github.com/aklivity/zilla/issues/520)
+- `kubernetes autoscaling` feature \(enhanced\) [\#518](https://github.com/aklivity/zilla/issues/518)
+- Shard MQTT topic space for client-id specific subset [\#427](https://github.com/aklivity/zilla/issues/427)
+- Distribute MQTT topic space across different Kafka topics [\#426](https://github.com/aklivity/zilla/issues/426)
+- `AsyncAPI` integration \(baseline\) [\#257](https://github.com/aklivity/zilla/issues/257)
+- `OpenAPI` integration \(baseline\) [\#255](https://github.com/aklivity/zilla/issues/255)
+- `mqtt-kafka` feature \(baseline\) [\#190](https://github.com/aklivity/zilla/issues/190)
+- `telemetry metrics` feature \(baseline\) [\#188](https://github.com/aklivity/zilla/issues/188)
+- `grpc-kafka` feature \(baseline\) [\#183](https://github.com/aklivity/zilla/issues/183)
+
+**Fixed bugs:**
+
+- Etag header field name MUST be converted to lowercase prior to its encoding in HTTP/2 [\#551](https://github.com/aklivity/zilla/issues/551)
+- BudgetDebitor fails to claim budget after some time [\#548](https://github.com/aklivity/zilla/issues/548)
+- Unexpected flush causes NPE in connection pool [\#546](https://github.com/aklivity/zilla/issues/546)
+- \[Consumer Group\] Race condition while joining simultaneously to the same group id [\#542](https://github.com/aklivity/zilla/issues/542)
+- MQTT client connections cause errors/crashes [\#527](https://github.com/aklivity/zilla/issues/527)
+- Sporadic GitHub Actions build failures [\#526](https://github.com/aklivity/zilla/issues/526)
+- Unable to write to streams buffer under bidi-stream [\#368](https://github.com/aklivity/zilla/issues/368)
+- Fix flow control bug in mqtt-kafka publish [\#524](https://github.com/aklivity/zilla/pull/524) ([bmaidics](https://github.com/bmaidics))
+
+**Closed issues:**
+
+- Feature: Adding contributors section to the README.md file. [\#545](https://github.com/aklivity/zilla/issues/545)
+- gRPC method call doesn't respond when status code is not OK [\#504](https://github.com/aklivity/zilla/issues/504)
+
+**Merged pull requests:**
+
+- Fix mqtt connect decoding bug when remainingLength.size \> 1 [\#554](https://github.com/aklivity/zilla/pull/554) ([bmaidics](https://github.com/bmaidics))
+- Etag header field name MUST be converted to lowercase prior to its encoding in HTTP/2 [\#552](https://github.com/aklivity/zilla/pull/552) ([akrambek](https://github.com/akrambek))
+- Don't send window before connection budgetId is assigned [\#549](https://github.com/aklivity/zilla/pull/549) ([akrambek](https://github.com/akrambek))
+- Use coordinator member list to check if the heartbeat is allowed [\#547](https://github.com/aklivity/zilla/pull/547) ([akrambek](https://github.com/akrambek))
+- Retry sync group request if there is inflight request [\#543](https://github.com/aklivity/zilla/pull/543) ([akrambek](https://github.com/akrambek))
+- Add "Back to Top" in Readme.md [\#539](https://github.com/aklivity/zilla/pull/539) ([PrajwalGraj](https://github.com/PrajwalGraj))
+- Create an appropriate buffer with the size that accommodates signal frame payload [\#537](https://github.com/aklivity/zilla/pull/537) ([akrambek](https://github.com/akrambek))
+- Adjust padding for larger message header and don't include partial data while computing crc32c [\#536](https://github.com/aklivity/zilla/pull/536) ([akrambek](https://github.com/akrambek))
+- Fix dump command to truncate output file if exists [\#534](https://github.com/aklivity/zilla/pull/534) ([attilakreiner](https://github.com/attilakreiner))
+- fix typos in README.md [\#532](https://github.com/aklivity/zilla/pull/532) ([shresthasurav](https://github.com/shresthasurav))
+- Fixed a typo in README.md [\#529](https://github.com/aklivity/zilla/pull/529) ([saakshii12](https://github.com/saakshii12))
+- Sporadic GitHub Actions build failure fix [\#522](https://github.com/aklivity/zilla/pull/522) ([akrambek](https://github.com/akrambek))
+- Propagate gRPC status code when not ok [\#519](https://github.com/aklivity/zilla/pull/519) ([jfallows](https://github.com/jfallows))
+- Add extraEnv to Deployment in the helm chart [\#511](https://github.com/aklivity/zilla/pull/511) ([attilakreiner](https://github.com/attilakreiner))
+- Client topic space [\#507](https://github.com/aklivity/zilla/pull/507) ([bmaidics](https://github.com/bmaidics))
+- Mqtt topic space [\#493](https://github.com/aklivity/zilla/pull/493) ([bmaidics](https://github.com/bmaidics))
+
+## [0.9.55](https://github.com/aklivity/zilla/tree/0.9.55) (2023-10-11)
+
+[Full Changelog](https://github.com/aklivity/zilla/compare/0.9.54...0.9.55)
**Implemented enhancements:**
@@ -226,7 +277,7 @@
**Implemented enhancements:**
-- `kubernetes autoscaling` feature [\#189](https://github.com/aklivity/zilla/issues/189)
+- `kubernetes autoscaling` feature \(baseline\) [\#189](https://github.com/aklivity/zilla/issues/189)
**Closed issues:**
@@ -525,7 +576,7 @@
**Fixed bugs:**
- Error running http.kafka.oneway from zilla-examples [\#117](https://github.com/aklivity/zilla/issues/117)
-- Zillla build fails on timeout [\#102](https://github.com/aklivity/zilla/issues/102)
+- Zilla build fails on timeout [\#102](https://github.com/aklivity/zilla/issues/102)
**Merged pull requests:**
diff --git a/README.md b/README.md
index 4186c373e0..b83f911dc5 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,4 @@
+
@@ -61,7 +62,7 @@ Returns an `etag` header with `HTTP` response. Supports conditional `GET if-none
- [x] **Filtering** — Streams messages from a Kafka topic, filtered by message key and/or headers, with key and/or header values extracted from segments of the `HTTP` path if needed.
- [x] **Reliable Delivery** — Supports `event-id` and `last-event-id` header to recover from an interrupted stream without message loss, and without the client needing to acknowledge message receipt.
-- [x] **Continous Authorization** — Supports a `challenge` event, triggering the client to send up-to-date authorization credentials, such as JWT token, before expiration. The response stream is terminated if the authorization expires. Multiple SSE streams on the same `HTTP/2` connection and authorized by the same JWT token can be reauthorized by a single `challenge` event response.
+- [x] **Continuous Authorization** — Supports a `challenge` event, triggering the client to send up-to-date authorization credentials, such as JWT token, before expiration. The response stream is terminated if the authorization expires. Multiple SSE streams on the same `HTTP/2` connection and authorized by the same JWT token can be reauthorized by a single `challenge` event response.
### gRPC-Kafka Proxying
@@ -86,7 +87,7 @@ Returns an `etag` header with `HTTP` response. Supports conditional `GET if-none
- [x] **Filtering** — Local cache indexes message key and headers upon retrieval from Kafka, supporting efficiently filtered reads from cached topics.
- [x] **Fan-in, Fan-out** — Local cache uses a small number of connections to interact with Kafka brokers, independent of the number of connected clients.
- [x] **Authorization** — Specific routed topics can be guarded to enforce required client privileges.
-- [x] **Helm Chart** — Generic Zilla Helm chart avaliable.
+- [x] **Helm Chart** — Generic Zilla Helm chart available.
- [x] **Auto-reconfigure** — Detect changes in `zilla.yaml` and reconfigure Zilla automatically.
- [x] **Prometheus Integration** — Export Zilla metrics to Prometheus for observability and auto-scaling.
- [x] **Declarative Configuration** — API mappings and endpoints inside Zilla are declaratively configured via YAML.
@@ -175,7 +176,7 @@ Looking to contribute to Zilla? Check out the [Contributing to Zilla](./.github/
✨We value all contributions, whether it is source code, documentation, bug reports, feature requests or feedback!
## License
-Zilla is made available under the [Aklivity Community License](./LICENSE-AklivityCommunity). This is an open source-derived license that gives you the freedom to deploy, modify and run Zilla as you see fit, as long as you are not turning into a commercialized “as-a-service” offering. Running Zilla in the cloud for your own workloads, production or not, is completely fine.
+Zilla is made available under the [Aklivity Community License](./LICENSE-AklivityCommunity). This is an open source-derived license that gives you the freedom to deploy, modify and run Zilla as you see fit, as long as you are not turning it into a standalone commercialized “Zilla-as-a-service” offering. Running Zilla in the cloud for your own workloads, production or not, is completely fine.
@@ -188,3 +189,5 @@ Zilla is made available under the [Aklivity Community License](./LICENSE-Aklivit
[release-latest-image]: https://img.shields.io/github/v/tag/aklivity/zilla?label=release
[release-latest]: https://github.com/aklivity/zilla/pkgs/container/zilla
[zilla-roadmap]: https://github.com/orgs/aklivity/projects/4/views/1
+
+(🔼 Back to top)
diff --git a/build/flyweight-maven-plugin/pom.xml b/build/flyweight-maven-plugin/pom.xml
index b2b166c147..2ddf4f110a 100644
--- a/build/flyweight-maven-plugin/pom.xml
+++ b/build/flyweight-maven-plugin/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
build
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/build/pom.xml b/build/pom.xml
index ec1611c894..09e5ce85fa 100644
--- a/build/pom.xml
+++ b/build/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
zilla
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/cloud/docker-image/pom.xml b/cloud/docker-image/pom.xml
index 0d9db655d6..366dbcb4b8 100644
--- a/cloud/docker-image/pom.xml
+++ b/cloud/docker-image/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
cloud
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/cloud/helm-chart/pom.xml b/cloud/helm-chart/pom.xml
index c2d531cea4..773cda2290 100644
--- a/cloud/helm-chart/pom.xml
+++ b/cloud/helm-chart/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
cloud
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/cloud/helm-chart/src/main/helm/templates/deployment.yaml b/cloud/helm-chart/src/main/helm/templates/deployment.yaml
index 1c4e041b21..6b0928e605 100644
--- a/cloud/helm-chart/src/main/helm/templates/deployment.yaml
+++ b/cloud/helm-chart/src/main/helm/templates/deployment.yaml
@@ -56,6 +56,9 @@ spec:
- name: {{ $env.name }}
value: {{ $env.value }}
{{- end }}
+ {{- if .Values.extraEnv }}
+ {{- toYaml .Values.extraEnv | nindent 12 }}
+ {{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
diff --git a/cloud/helm-chart/src/main/helm/values.yaml b/cloud/helm-chart/src/main/helm/values.yaml
index 8662c0d336..e0ad0ded9b 100644
--- a/cloud/helm-chart/src/main/helm/values.yaml
+++ b/cloud/helm-chart/src/main/helm/values.yaml
@@ -34,7 +34,7 @@ readinessProbePort: 0
initContainers: []
args: ["start", "-v", "-e"]
-env: []
+extraEnv: []
configPath: /etc/zilla
configMaps: {}
secrets: {}
diff --git a/cloud/pom.xml b/cloud/pom.xml
index 8a208a64c5..43883fa218 100644
--- a/cloud/pom.xml
+++ b/cloud/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
zilla
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/conf/pom.xml b/conf/pom.xml
index 4afe6abdf1..79faaafea4 100644
--- a/conf/pom.xml
+++ b/conf/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
zilla
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/binding-amqp.spec/pom.xml b/incubator/binding-amqp.spec/pom.xml
index 6281dd3849..fd8e41a733 100644
--- a/incubator/binding-amqp.spec/pom.xml
+++ b/incubator/binding-amqp.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/binding-amqp/pom.xml b/incubator/binding-amqp/pom.xml
index 0de5a39a03..2148ad40a1 100644
--- a/incubator/binding-amqp/pom.xml
+++ b/incubator/binding-amqp/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/catalog-inline.spec/pom.xml b/incubator/catalog-inline.spec/pom.xml
index 57127a7f50..5c0f73d77b 100644
--- a/incubator/catalog-inline.spec/pom.xml
+++ b/incubator/catalog-inline.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/catalog-inline/pom.xml b/incubator/catalog-inline/pom.xml
index b45b5c1c10..784ec78d73 100644
--- a/incubator/catalog-inline/pom.xml
+++ b/incubator/catalog-inline/pom.xml
@@ -6,7 +6,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/catalog-schema-registry.spec/pom.xml b/incubator/catalog-schema-registry.spec/pom.xml
index 50b26b1dd9..582b277f01 100644
--- a/incubator/catalog-schema-registry.spec/pom.xml
+++ b/incubator/catalog-schema-registry.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/catalog-schema-registry/pom.xml b/incubator/catalog-schema-registry/pom.xml
index b511cfe551..2374139f7c 100644
--- a/incubator/catalog-schema-registry/pom.xml
+++ b/incubator/catalog-schema-registry/pom.xml
@@ -6,7 +6,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/command-dump/pom.xml b/incubator/command-dump/pom.xml
index 8c385b7009..8f685e398d 100644
--- a/incubator/command-dump/pom.xml
+++ b/incubator/command-dump/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/command-dump/src/main/java/io/aklivity/zilla/runtime/command/dump/internal/airline/ZillaDumpCommand.java b/incubator/command-dump/src/main/java/io/aklivity/zilla/runtime/command/dump/internal/airline/ZillaDumpCommand.java
index 3f666f6870..a99769938d 100644
--- a/incubator/command-dump/src/main/java/io/aklivity/zilla/runtime/command/dump/internal/airline/ZillaDumpCommand.java
+++ b/incubator/command-dump/src/main/java/io/aklivity/zilla/runtime/command/dump/internal/airline/ZillaDumpCommand.java
@@ -15,8 +15,9 @@
package io.aklivity.zilla.runtime.command.dump.internal.airline;
import static java.lang.Integer.parseInt;
-import static java.nio.file.StandardOpenOption.APPEND;
import static java.nio.file.StandardOpenOption.CREATE;
+import static java.nio.file.StandardOpenOption.TRUNCATE_EXISTING;
+import static java.nio.file.StandardOpenOption.WRITE;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import static org.agrona.LangUtil.rethrowUnchecked;
@@ -167,7 +168,7 @@ public void run()
final LongPredicate filter = filtered.isEmpty() ? b -> true : filtered::contains;
try (Stream files = Files.walk(directory, 3);
- WritableByteChannel writer = Files.newByteChannel(output, CREATE, APPEND))
+ WritableByteChannel writer = Files.newByteChannel(output, CREATE, WRITE, TRUNCATE_EXISTING))
{
final RingBufferSpy[] streamBuffers = files
.filter(this::isStreamsFile)
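The switch above from `CREATE, APPEND` to `CREATE, WRITE, TRUNCATE_EXISTING` makes repeated `zilla dump` runs overwrite the output file instead of growing it with stale bytes. A minimal standalone sketch of the difference in behavior (not Zilla code; the file name is illustrative):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;

import static java.nio.charset.StandardCharsets.UTF_8;
import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.TRUNCATE_EXISTING;
import static java.nio.file.StandardOpenOption.WRITE;

public class TruncateDemo
{
    public static void main(String[] args) throws IOException
    {
        Path output = Files.createTempFile("dump", ".pcap");

        // first run writes some content
        try (WritableByteChannel writer =
                Files.newByteChannel(output, CREATE, WRITE, TRUNCATE_EXISTING))
        {
            writer.write(ByteBuffer.wrap("first-run".getBytes(UTF_8)));
        }

        // second run: TRUNCATE_EXISTING discards the previous bytes,
        // whereas CREATE + APPEND would have written after them
        try (WritableByteChannel writer =
                Files.newByteChannel(output, CREATE, WRITE, TRUNCATE_EXISTING))
        {
            writer.write(ByteBuffer.wrap("second".getBytes(UTF_8)));
        }

        System.out.println(Files.readString(output)); // prints "second"
    }
}
```

Note that `WRITE` must be stated explicitly here: `newByteChannel` defaults to `READ` when no read/write option is given.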
diff --git a/incubator/command-generate/pom.xml b/incubator/command-generate/pom.xml
index 4af7c4670f..e7c56cade9 100644
--- a/incubator/command-generate/pom.xml
+++ b/incubator/command-generate/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/command-log/pom.xml b/incubator/command-log/pom.xml
index 720840fe23..2ea247ad72 100644
--- a/incubator/command-log/pom.xml
+++ b/incubator/command-log/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/command-tune/pom.xml b/incubator/command-tune/pom.xml
index d9533cbf9f..b83e1a9479 100644
--- a/incubator/command-tune/pom.xml
+++ b/incubator/command-tune/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/exporter-otlp.spec/pom.xml b/incubator/exporter-otlp.spec/pom.xml
index d7b8486106..0f907eea98 100644
--- a/incubator/exporter-otlp.spec/pom.xml
+++ b/incubator/exporter-otlp.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/exporter-otlp/pom.xml b/incubator/exporter-otlp/pom.xml
index b95f243e54..4b8eb520db 100644
--- a/incubator/exporter-otlp/pom.xml
+++ b/incubator/exporter-otlp/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/pom.xml b/incubator/pom.xml
index 3357a0bcd4..a1a273b93d 100644
--- a/incubator/pom.xml
+++ b/incubator/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
zilla
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/validator-avro.spec/pom.xml b/incubator/validator-avro.spec/pom.xml
index 2505f0e0cb..a556e34ac9 100644
--- a/incubator/validator-avro.spec/pom.xml
+++ b/incubator/validator-avro.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/validator-avro/pom.xml b/incubator/validator-avro/pom.xml
index 886e0cddf7..a7dcf9277a 100644
--- a/incubator/validator-avro/pom.xml
+++ b/incubator/validator-avro/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/validator-core.spec/pom.xml b/incubator/validator-core.spec/pom.xml
index 984f9e450d..e6b00613d6 100644
--- a/incubator/validator-core.spec/pom.xml
+++ b/incubator/validator-core.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/validator-core/pom.xml b/incubator/validator-core/pom.xml
index 78346b5e47..742e95f172 100644
--- a/incubator/validator-core/pom.xml
+++ b/incubator/validator-core/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/validator-json.spec/pom.xml b/incubator/validator-json.spec/pom.xml
index bb9d9827e0..47d3c6ae04 100644
--- a/incubator/validator-json.spec/pom.xml
+++ b/incubator/validator-json.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/incubator/validator-json/pom.xml b/incubator/validator-json/pom.xml
index 755781e3e6..42702d0cd9 100644
--- a/incubator/validator-json/pom.xml
+++ b/incubator/validator-json/pom.xml
@@ -6,7 +6,7 @@
io.aklivity.zilla
incubator
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/manager/pom.xml b/manager/pom.xml
index b0e2851b33..f250b554c1 100644
--- a/manager/pom.xml
+++ b/manager/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
zilla
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/pom.xml b/pom.xml
index f3e2fbea6b..28e03a3dd2 100644
--- a/pom.xml
+++ b/pom.xml
@@ -7,7 +7,7 @@
4.0.0
io.aklivity.zilla
zilla
- 0.9.55
+ 0.9.56
pom
zilla
https://github.com/aklivity/zilla
diff --git a/runtime/binding-echo/pom.xml b/runtime/binding-echo/pom.xml
index 1c728dc857..8f77846c17 100644
--- a/runtime/binding-echo/pom.xml
+++ b/runtime/binding-echo/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
runtime
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/runtime/binding-fan/pom.xml b/runtime/binding-fan/pom.xml
index fd5ee4a437..a71c2f8e45 100644
--- a/runtime/binding-fan/pom.xml
+++ b/runtime/binding-fan/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
runtime
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/runtime/binding-filesystem/pom.xml b/runtime/binding-filesystem/pom.xml
index cfb806828a..9b7f9a07b7 100644
--- a/runtime/binding-filesystem/pom.xml
+++ b/runtime/binding-filesystem/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
runtime
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/runtime/binding-grpc-kafka/pom.xml b/runtime/binding-grpc-kafka/pom.xml
index b76bc8ed5c..a6174ce5af 100644
--- a/runtime/binding-grpc-kafka/pom.xml
+++ b/runtime/binding-grpc-kafka/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
runtime
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/runtime/binding-grpc-kafka/src/main/java/io/aklivity/zilla/runtime/binding/grpc/kafka/internal/stream/GrpcKafkaProxyFactory.java b/runtime/binding-grpc-kafka/src/main/java/io/aklivity/zilla/runtime/binding/grpc/kafka/internal/stream/GrpcKafkaProxyFactory.java
index 29bb3cc2ed..2241278ed8 100644
--- a/runtime/binding-grpc-kafka/src/main/java/io/aklivity/zilla/runtime/binding/grpc/kafka/internal/stream/GrpcKafkaProxyFactory.java
+++ b/runtime/binding-grpc-kafka/src/main/java/io/aklivity/zilla/runtime/binding/grpc/kafka/internal/stream/GrpcKafkaProxyFactory.java
@@ -1320,8 +1320,9 @@ protected void onKafkaData(
if (grpcStatus != null &&
!HEADER_VALUE_GRPC_OK.value().equals(grpcStatus.value().value()))
{
+ OctetsFW value = grpcStatus.value();
String16FW status = statusRW
- .set(grpcStatus.value().buffer(), grpcStatus.offset(), grpcStatus.sizeof())
+ .set(value.buffer(), value.offset(), value.sizeof())
.build();
doGrpcAbort(traceId, authorization, status);
}
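The hunk above narrows the copied range from the enclosing header flyweight (`grpcStatus.offset()`, `grpcStatus.sizeof()`) to just its value field. A generic sketch of the same off-by-length mistake, using a plain length-prefixed buffer rather than Zilla's `OctetsFW` API (all names here are illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SizeofDemo
{
    // a tiny length-prefixed "header": [1-byte name length][name][1-byte value length][value]
    static byte[] encodeHeader(String name, String value)
    {
        byte[] n = name.getBytes(StandardCharsets.US_ASCII);
        byte[] v = value.getBytes(StandardCharsets.US_ASCII);
        ByteBuffer buf = ByteBuffer.allocate(2 + n.length + v.length);
        buf.put((byte) n.length).put(n).put((byte) v.length).put(v);
        return buf.array();
    }

    public static void main(String[] args)
    {
        byte[] header = encodeHeader("grpc-status", "13");

        int nameLen = header[0];
        int valueOffset = 1 + nameLen + 1;
        int valueLen = header[1 + nameLen];

        // buggy version: copying header.length bytes from valueOffset reads
        // past the value; correct version copies exactly valueLen bytes
        byte[] status = Arrays.copyOfRange(header, valueOffset, valueOffset + valueLen);

        System.out.println(new String(status, StandardCharsets.US_ASCII)); // prints "13"
    }
}
```

The fix in the hunk follows the same rule: take the offset and size from the field being copied, not from the structure that contains it.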
@@ -1410,7 +1411,8 @@ private void doGrpcAbort(
long authorization,
String16FW status)
{
- if (GrpcKafkaState.replyOpened(state) && !GrpcKafkaState.replyClosed(state))
+ if (GrpcKafkaState.replyOpening(state) &&
+ !GrpcKafkaState.replyClosed(state))
{
replySeq = correlater.replySeq;
diff --git a/runtime/binding-grpc-kafka/src/test/java/io/aklivity/zilla/runtime/blinding/grpc/kafka/internal/stream/GrpcKafkaProduceProxyIT.java b/runtime/binding-grpc-kafka/src/test/java/io/aklivity/zilla/runtime/blinding/grpc/kafka/internal/stream/GrpcKafkaProduceProxyIT.java
index 1470ed8a24..3128c2ebb2 100644
--- a/runtime/binding-grpc-kafka/src/test/java/io/aklivity/zilla/runtime/blinding/grpc/kafka/internal/stream/GrpcKafkaProduceProxyIT.java
+++ b/runtime/binding-grpc-kafka/src/test/java/io/aklivity/zilla/runtime/blinding/grpc/kafka/internal/stream/GrpcKafkaProduceProxyIT.java
@@ -68,6 +68,16 @@ public void shouldRejectUnaryRpc() throws Exception
k3po.finish();
}
+ @Test
+ @Configuration("produce.proxy.rpc.yaml")
+ @Specification({
+ "${grpc}/unary.rpc.error/client",
+ "${kafka}/unary.rpc.error/server"})
+ public void shouldRejectUnaryRpcWithError() throws Exception
+ {
+ k3po.finish();
+ }
+
@Test
@Configuration("produce.proxy.rpc.yaml")
@Specification({
diff --git a/runtime/binding-grpc/pom.xml b/runtime/binding-grpc/pom.xml
index 675cb9ec74..dc8c56c37c 100644
--- a/runtime/binding-grpc/pom.xml
+++ b/runtime/binding-grpc/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
runtime
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/runtime/binding-grpc/src/main/java/io/aklivity/zilla/runtime/binding/grpc/internal/stream/GrpcClientFactory.java b/runtime/binding-grpc/src/main/java/io/aklivity/zilla/runtime/binding/grpc/internal/stream/GrpcClientFactory.java
index 419b50743d..3634c3f24e 100644
--- a/runtime/binding-grpc/src/main/java/io/aklivity/zilla/runtime/binding/grpc/internal/stream/GrpcClientFactory.java
+++ b/runtime/binding-grpc/src/main/java/io/aklivity/zilla/runtime/binding/grpc/internal/stream/GrpcClientFactory.java
@@ -113,6 +113,8 @@ public class GrpcClientFactory implements GrpcStreamFactory
private final GrpcResetExFW.Builder grpcResetExRW = new GrpcResetExFW.Builder();
private final GrpcMessageFW.Builder grpcMessageRW = new GrpcMessageFW.Builder();
+ private final GrpcAbortExFW grpcAbortedStatusRO;
+
private final MutableDirectBuffer writeBuffer;
private final MutableDirectBuffer metadataBuffer;
private final MutableDirectBuffer extBuffer;
@@ -139,6 +141,11 @@ public GrpcClientFactory(
this.grpcTypeId = context.supplyTypeId(GrpcBinding.NAME);
this.bindings = new Long2ObjectHashMap<>();
this.helper = new HttpGrpcResponseHeaderHelper();
+
+ this.grpcAbortedStatusRO = grpcAbortExRW.wrap(new UnsafeBuffer(new byte[32]), 0, 32)
+ .typeId(grpcTypeId)
+ .status(HEADER_VALUE_GRPC_ABORTED)
+ .build();
}
@Override
@@ -234,6 +241,7 @@ private final class GrpcClient
private int replyPad;
private int state;
+ private String grpcStatus;
private GrpcClient(
MessageConsumer application,
@@ -403,7 +411,14 @@ private void onAppWindow(
replyPad = padding;
state = GrpcState.openReply(state);
- delegate.doNetWindow(traceId, authorization, budgetId, padding, replyAck, replyMax);
+ if (GrpcState.replyClosing(state))
+ {
+ doAppAbortDeferred(traceId, authorization);
+ }
+ else
+ {
+ delegate.doNetWindow(traceId, authorization, budgetId, padding, replyAck, replyMax);
+ }
assert replyAck <= replySeq;
@@ -450,21 +465,49 @@ private void doAppData(
assert replySeq <= replyAck + replyMax;
}
- private void doAppAbort(
+ private void doAppAbortDeferring(
+ long traceId,
+ long authorization,
+ String16FW grpcStatus)
+ {
+ this.grpcStatus = grpcStatus != null ? grpcStatus.asString() : null;
+ this.state = GrpcState.closingReply(state);
+
+ if (GrpcState.replyOpened(state))
+ {
+ doAppAbortDeferred(traceId, authorization);
+ }
+ }
+
+ private void doAppAbortDeferred(
long traceId,
long authorization)
{
- if (!GrpcState.replyClosed(state))
+ GrpcAbortExFW abortEx = grpcStatus != null
+ ? grpcAbortExRW.wrap(extBuffer, 0, extBuffer.capacity())
+ .typeId(grpcTypeId)
+ .status(grpcStatus)
+ .build()
+ : grpcAbortExRW.wrap(extBuffer, 0, extBuffer.capacity())
+ .typeId(grpcTypeId)
+ .status(HEADER_VALUE_GRPC_INTERNAL_ERROR)
+ .build();
+
+ doAppAbort(traceId, authorization, abortEx);
+ }
+
+ private void doAppAbort(
+ long traceId,
+ long authorization,
+ Flyweight extension)
+ {
+ if (GrpcState.replyOpening(state) &&
+ !GrpcState.replyClosed(state))
{
state = GrpcState.closeReply(state);
- GrpcAbortExFW abortEx = grpcAbortExRW.wrap(extBuffer, 0, extBuffer.capacity())
- .typeId(grpcTypeId)
- .status(HEADER_VALUE_GRPC_ABORTED)
- .build();
-
doAbort(application, originId, routedId, replyId, replySeq, replyAck, replyMax,
- traceId, authorization, abortEx);
+ traceId, authorization, extension);
}
}
@@ -500,15 +543,19 @@ private void doAppWindow(
private void doAppReset(
long traceId,
- long authorization,
- Flyweight extension)
+ long authorization)
{
if (!GrpcState.initialClosed(state))
{
state = GrpcState.closeInitial(state);
+ GrpcResetExFW resetEx = grpcResetExRW.wrap(extBuffer, 0, extBuffer.capacity())
+ .typeId(grpcTypeId)
+ .status(HEADER_VALUE_GRPC_ABORTED)
+ .build();
+
doReset(application, originId, routedId, initialId, initialSeq, initialAck, initialMax,
- traceId, authorization, extension);
+ traceId, authorization, resetEx);
}
}
}
@@ -725,21 +772,13 @@ private void onNetBegin(
replyMax = maximum;
state = GrpcState.openingReply(state);
+ delegate.doAppBegin(traceId, authorization, affinity);
+
if (!HTTP_HEADER_VALUE_STATUS_200.equals(status) ||
grpcStatus != null && !HEADER_VALUE_GRPC_OK.equals(grpcStatus))
{
- final String16FW newGrpcStatus = grpcStatus == null ? HEADER_VALUE_GRPC_INTERNAL_ERROR : grpcStatus;
- GrpcResetExFW resetEx = grpcResetExRW.wrap(extBuffer, 0, extBuffer.capacity())
- .typeId(grpcTypeId)
- .status(newGrpcStatus)
- .build();
-
- delegate.doAppReset(traceId, authorization, resetEx);
- doNetAbort(traceId, authorization);
- }
- else
- {
- delegate.doAppBegin(traceId, authorization, affinity);
+ delegate.doAppAbortDeferring(traceId, authorization, grpcStatus);
+ doNetReset(traceId, authorization);
}
}
@@ -823,10 +862,9 @@ private void onNetEnd(
}
else
{
- delegate.doAppAbort(traceId, authorization);
+ delegate.doAppAbortDeferring(traceId, authorization,
+ grpcStatus != null ? grpcStatus.value() : HEADER_VALUE_GRPC_INTERNAL_ERROR);
}
-
-
}
private void onNetAbort(
@@ -846,7 +884,7 @@ private void onNetAbort(
state = GrpcState.closeReply(state);
- delegate.doAppAbort(traceId, authorization);
+ delegate.doAppAbort(traceId, authorization, grpcAbortedStatusRO);
}
private void onNetReset(
@@ -857,12 +895,7 @@ private void onNetReset(
state = GrpcState.closeInitial(state);
- GrpcResetExFW resetEx = grpcResetExRW.wrap(extBuffer, 0, extBuffer.capacity())
- .typeId(grpcTypeId)
- .status(HEADER_VALUE_GRPC_ABORTED)
- .build();
-
- delegate.doAppReset(traceId, authorization, resetEx);
+ delegate.doAppReset(traceId, authorization);
}
private void onNetWindow(
diff --git a/runtime/binding-grpc/src/test/java/io/aklivity/zilla/runtime/binding/grpc/internal/streams/client/UnaryRpcIT.java b/runtime/binding-grpc/src/test/java/io/aklivity/zilla/runtime/binding/grpc/internal/streams/client/UnaryRpcIT.java
index 7855ea51e3..f110c239c6 100644
--- a/runtime/binding-grpc/src/test/java/io/aklivity/zilla/runtime/binding/grpc/internal/streams/client/UnaryRpcIT.java
+++ b/runtime/binding-grpc/src/test/java/io/aklivity/zilla/runtime/binding/grpc/internal/streams/client/UnaryRpcIT.java
@@ -124,4 +124,25 @@ public void serverSendsWriteAbortOnOpenRequestResponse() throws Exception
}
+ @Test
+ @Configuration("client.when.yaml")
+ @Specification({
+ "${app}/server.send.write.abort.on.open.response/client",
+ "${net}/response.with.grpc.error/server"
+ })
+ public void shouldAbortResponseWithGrpcError() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Configuration("client.when.yaml")
+ @Specification({
+ "${app}/response.missing.grpc.status/client",
+ "${net}/response.missing.grpc.status/server",
+ })
+ public void shouldAbortResponseMissingGrpcStatus() throws Exception
+ {
+ k3po.finish();
+ }
}
diff --git a/runtime/binding-http-filesystem/pom.xml b/runtime/binding-http-filesystem/pom.xml
index 554866aa2d..f829fedfbc 100644
--- a/runtime/binding-http-filesystem/pom.xml
+++ b/runtime/binding-http-filesystem/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
runtime
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/runtime/binding-http-filesystem/src/main/java/io/aklivity/zilla/runtime/binding/http/filesystem/internal/stream/HttpFileSystemProxyFactory.java b/runtime/binding-http-filesystem/src/main/java/io/aklivity/zilla/runtime/binding/http/filesystem/internal/stream/HttpFileSystemProxyFactory.java
index dfa0dc4767..b0e8b3c234 100644
--- a/runtime/binding-http-filesystem/src/main/java/io/aklivity/zilla/runtime/binding/http/filesystem/internal/stream/HttpFileSystemProxyFactory.java
+++ b/runtime/binding-http-filesystem/src/main/java/io/aklivity/zilla/runtime/binding/http/filesystem/internal/stream/HttpFileSystemProxyFactory.java
@@ -55,7 +55,7 @@ public final class HttpFileSystemProxyFactory implements HttpFileSystemStreamFac
private static final String8FW HEADER_STATUS_NAME = new String8FW(":status");
private static final String16FW HEADER_STATUS_VALUE_200 = new String16FW("200");
private static final String16FW HEADER_STATUS_VALUE_304 = new String16FW("304");
- private static final String8FW HEADER_ETAG_NAME = new String8FW("Etag");
+ private static final String8FW HEADER_ETAG_NAME = new String8FW("etag");
private static final String8FW HEADER_CONTENT_TYPE_NAME = new String8FW("content-type");
private static final String8FW HEADER_CONTENT_LENGTH_NAME = new String8FW("content-length");
private static final int READ_PAYLOAD_MASK = 1 << FileSystemCapabilities.READ_PAYLOAD.ordinal();
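The one-character rename above matters because HTTP/2 rejects uppercase field names, so an `Etag` header that is valid in HTTP/1.1 becomes malformed once encoded on an `h2` connection. Zilla's fix is simply to declare the constant in lowercase; the helper below is an illustrative sketch of normalizing names defensively before encoding, not the binding's actual code path:

```java
import java.util.Locale;
import java.util.Map;
import java.util.TreeMap;

public class LowercaseHeaders
{
    // HTTP/2 requires lowercase field names prior to encoding,
    // e.g. "Etag" must be sent as "etag"
    static Map<String, String> normalize(Map<String, String> headers)
    {
        Map<String, String> out = new TreeMap<>();
        headers.forEach((name, value) ->
            out.put(name.toLowerCase(Locale.ROOT), value));
        return out;
    }

    public static void main(String[] args)
    {
        Map<String, String> headers = Map.of("Etag", "\"abc123\"", "Content-Type", "text/html");
        System.out.println(normalize(headers));
        // prints {content-type=text/html, etag="abc123"}
    }
}
```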
diff --git a/runtime/binding-http-kafka/pom.xml b/runtime/binding-http-kafka/pom.xml
index aeb254f3e0..882b289080 100644
--- a/runtime/binding-http-kafka/pom.xml
+++ b/runtime/binding-http-kafka/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
runtime
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/runtime/binding-http/pom.xml b/runtime/binding-http/pom.xml
index c21ac66cfb..0af4e77706 100644
--- a/runtime/binding-http/pom.xml
+++ b/runtime/binding-http/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
runtime
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/runtime/binding-kafka-grpc/pom.xml b/runtime/binding-kafka-grpc/pom.xml
index e6f4288577..af0cbe0f46 100644
--- a/runtime/binding-kafka-grpc/pom.xml
+++ b/runtime/binding-kafka-grpc/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
runtime
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/runtime/binding-kafka-grpc/src/main/java/io/aklivity/zilla/runtime/binding/kafka/grpc/internal/stream/KafkaGrpcRemoteServerFactory.java b/runtime/binding-kafka-grpc/src/main/java/io/aklivity/zilla/runtime/binding/kafka/grpc/internal/stream/KafkaGrpcRemoteServerFactory.java
index baa6d57abb..f8d41b4da3 100644
--- a/runtime/binding-kafka-grpc/src/main/java/io/aklivity/zilla/runtime/binding/kafka/grpc/internal/stream/KafkaGrpcRemoteServerFactory.java
+++ b/runtime/binding-kafka-grpc/src/main/java/io/aklivity/zilla/runtime/binding/kafka/grpc/internal/stream/KafkaGrpcRemoteServerFactory.java
@@ -786,9 +786,9 @@ private void doKafkaEnd(
if (KafkaGrpcState.initialOpened(state) &&
!KafkaGrpcState.initialClosed(state))
{
- initialSeq = delegate.initialSeq;
- initialAck = delegate.initialAck;
- initialMax = delegate.initialMax;
+ initialSeq = delegate.replySeq;
+ initialAck = delegate.replyAck;
+ initialMax = delegate.replyMax;
state = KafkaGrpcState.closeInitial(state);
doKafkaTombstone(traceId, authorization, HEADER_VALUE_GRPC_OK);
@@ -806,9 +806,9 @@ private void doKafkaAbort(
if (KafkaGrpcState.initialOpening(state) &&
!KafkaGrpcState.initialClosed(state))
{
- initialSeq = delegate.initialSeq;
- initialAck = delegate.initialAck;
- initialMax = delegate.initialMax;
+ initialSeq = delegate.replySeq;
+ initialAck = delegate.replyAck;
+ initialMax = delegate.replyMax;
state = KafkaGrpcState.closeInitial(state);
doKafkaTombstone(traceId, authorization, status);
@@ -1453,7 +1453,6 @@ private void onGrpcAbort(
final String16FW status = abortEx != null ? abortEx.status() : HEADER_VALUE_GRPC_ABORTED;
correlater.doKafkaAbort(traceId, authorization, status);
-
}
private void onGrpcReset(
@@ -1464,6 +1463,7 @@ private void onGrpcReset(
final int maximum = reset.maximum();
final long traceId = reset.traceId();
final long authorization = reset.authorization();
+ final OctetsFW extension = reset.extension();
assert acknowledge <= sequence;
assert sequence <= initialSeq;
@@ -1472,7 +1472,7 @@ private void onGrpcReset(
initialAck = acknowledge;
initialMax = maximum;
- state = KafkaGrpcState.closingInitial(state);
+ state = KafkaGrpcState.closeInitial(state);
cleanup(traceId, authorization);
@@ -1501,7 +1501,7 @@ private void onGrpcWindow(
initialBud = budgetId;
initialPad = padding;
initialCap = capabilities;
- state = KafkaGrpcState.openReply(state);
+ state = KafkaGrpcState.openInitial(state);
assert initialAck <= initialSeq;
@@ -1592,7 +1592,7 @@ private void doGrpcBegin(
OctetsFW service,
OctetsFW method)
{
- state = KafkaGrpcState.openingReply(state);
+ state = KafkaGrpcState.openingInitial(state);
grpc = newGrpcStream(this::onGrpcMessage, originId, routedId, initialId, initialSeq, initialAck, initialMax,
traceId, authorization, affinity, server.condition.scheme(), server.condition.authority(),
@@ -1621,7 +1621,8 @@ private void doGrpcAbort(
long traceId,
long authorization)
{
- if (KafkaGrpcState.replyOpened(state) && !KafkaGrpcState.replyClosed(state))
+ if (KafkaGrpcState.initialOpening(state) &&
+ !KafkaGrpcState.initialClosed(state))
{
final GrpcAbortExFW grpcAbortEx =
grpcAbortExRW.wrap(extBuffer, 0, extBuffer.capacity())
@@ -1639,9 +1640,9 @@ private void doGrpcEnd(
long traceId,
long authorization)
{
- if (!KafkaGrpcState.replyClosed(state))
+ if (!KafkaGrpcState.initialClosed(state))
{
- state = KafkaGrpcState.closeReply(state);
+ state = KafkaGrpcState.closeInitial(state);
doEnd(grpc, originId, routedId, initialId, initialSeq, initialAck, initialMax,
traceId, authorization);
@@ -1666,8 +1667,7 @@ private void doGrpcReset(
long traceId,
long authorization)
{
- if (KafkaGrpcState.replyOpening(state) &&
- !KafkaGrpcState.replyClosed(state))
+ if (!KafkaGrpcState.replyClosed(state))
{
state = KafkaGrpcState.closeReply(state);
@@ -1682,6 +1682,7 @@ private void doGrpcReset(
}
}
}
+
private void doBegin(
MessageConsumer receiver,
long originId,
diff --git a/runtime/binding-kafka/pom.xml b/runtime/binding-kafka/pom.xml
index f56738efc3..1b76792ac8 100644
--- a/runtime/binding-kafka/pom.xml
+++ b/runtime/binding-kafka/pom.xml
@@ -8,7 +8,7 @@
        <groupId>io.aklivity.zilla</groupId>
        <artifactId>runtime</artifactId>
-       <version>0.9.55</version>
+       <version>0.9.56</version>
        <relativePath>../pom.xml</relativePath>
diff --git a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/KafkaConfiguration.java b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/KafkaConfiguration.java
index f70dadadc7..c1cc175f09 100644
--- a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/KafkaConfiguration.java
+++ b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/KafkaConfiguration.java
@@ -47,6 +47,7 @@ public class KafkaConfiguration extends Configuration
public static final IntPropertyDef KAFKA_CLIENT_PRODUCE_MAX_REQUEST_MILLIS;
public static final IntPropertyDef KAFKA_CLIENT_PRODUCE_MAX_RESPONSE_MILLIS;
public static final IntPropertyDef KAFKA_CLIENT_PRODUCE_MAX_BYTES;
+ public static final IntPropertyDef KAFKA_CLIENT_PRODUCE_RECORD_FRAMING_SIZE;
public static final PropertyDef KAFKA_CACHE_DIRECTORY;
public static final LongPropertyDef KAFKA_CACHE_PRODUCE_CAPACITY;
public static final PropertyDef KAFKA_CACHE_CLEANUP_POLICY;
@@ -88,6 +89,7 @@ public class KafkaConfiguration extends Configuration
KAFKA_CLIENT_PRODUCE_MAX_REQUEST_MILLIS = config.property("client.produce.max.request.millis", 0);
KAFKA_CLIENT_PRODUCE_MAX_RESPONSE_MILLIS = config.property("client.produce.max.response.millis", 120000);
KAFKA_CLIENT_PRODUCE_MAX_BYTES = config.property("client.produce.max.bytes", Integer.MAX_VALUE);
+ KAFKA_CLIENT_PRODUCE_RECORD_FRAMING_SIZE = config.property("client.produce.record.framing.size", 512);
KAFKA_CLIENT_SASL_SCRAM_NONCE = config.property(NonceSupplier.class, "client.sasl.scram.nonce",
KafkaConfiguration::decodeNonceSupplier, KafkaConfiguration::defaultNonceSupplier);
KAFKA_CLIENT_GROUP_REBALANCE_TIMEOUT = config.property(Duration.class, "client.group.rebalance.timeout",
@@ -172,6 +174,11 @@ public int clientProduceMaxBytes()
return KAFKA_CLIENT_PRODUCE_MAX_BYTES.getAsInt(this);
}
+ public int clientProduceRecordFramingSize()
+ {
+ return KAFKA_CLIENT_PRODUCE_RECORD_FRAMING_SIZE.getAsInt(this);
+ }
+
public Path cacheDirectory()
{
return KAFKA_CACHE_DIRECTORY.get(this);
diff --git a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/config/KafkaBindingConfig.java b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/config/KafkaBindingConfig.java
index 7f8cd4c6aa..25fc45cb80 100644
--- a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/config/KafkaBindingConfig.java
+++ b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/config/KafkaBindingConfig.java
@@ -92,15 +92,6 @@ public KafkaTopicConfig topic(
: null;
}
- public boolean bootstrap(
- String topic)
- {
- return topic != null &&
- options != null &&
- options.bootstrap != null &&
- options.bootstrap.contains(topic);
- }
-
public KafkaSaslConfig sasl()
{
return options != null ? options.sasl : null;
diff --git a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheServerBootstrapFactory.java b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheBootstrapFactory.java
similarity index 80%
rename from runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheServerBootstrapFactory.java
rename to runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheBootstrapFactory.java
index 44f10a622d..9f17334f44 100644
--- a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheServerBootstrapFactory.java
+++ b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheBootstrapFactory.java
@@ -23,6 +23,8 @@
import org.agrona.DirectBuffer;
import org.agrona.MutableDirectBuffer;
+import org.agrona.collections.Int2IntHashMap;
+import org.agrona.collections.Int2ObjectHashMap;
import org.agrona.collections.Long2LongHashMap;
import org.agrona.collections.MutableInteger;
import org.agrona.concurrent.UnsafeBuffer;
@@ -31,6 +33,7 @@
import io.aklivity.zilla.runtime.binding.kafka.internal.KafkaBinding;
import io.aklivity.zilla.runtime.binding.kafka.internal.KafkaConfiguration;
import io.aklivity.zilla.runtime.binding.kafka.internal.config.KafkaBindingConfig;
+import io.aklivity.zilla.runtime.binding.kafka.internal.types.Array32FW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.ArrayFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.KafkaConfigFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.KafkaOffsetFW;
@@ -46,19 +49,22 @@
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.FlushFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaBeginExFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaBootstrapBeginExFW;
+import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaConsumerAssignmentFW;
+import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaConsumerDataExFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaDataExFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaDescribeDataExFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaFetchFlushExFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaFlushExFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaMetaDataExFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaResetExFW;
+import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.KafkaTopicPartitionFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.ResetFW;
import io.aklivity.zilla.runtime.binding.kafka.internal.types.stream.WindowFW;
import io.aklivity.zilla.runtime.engine.EngineContext;
import io.aklivity.zilla.runtime.engine.binding.BindingHandler;
import io.aklivity.zilla.runtime.engine.binding.function.MessageConsumer;
-public final class KafkaCacheServerBootstrapFactory implements BindingHandler
+public final class KafkaCacheBootstrapFactory implements BindingHandler
{
private static final String16FW CONFIG_NAME_CLEANUP_POLICY = new String16FW("cleanup.policy");
private static final String16FW CONFIG_NAME_MAX_MESSAGE_BYTES = new String16FW("max.message.bytes");
@@ -78,6 +84,8 @@ public final class KafkaCacheServerBootstrapFactory implements BindingHandler
private static final Consumer<OctetsFW.Builder> EMPTY_EXTENSION = ex -> {};
+ private static final MessageConsumer NO_RECEIVER = (m, b, i, l) -> {};
+
private final BeginFW beginRO = new BeginFW();
private final DataFW dataRO = new DataFW();
private final EndFW endRO = new EndFW();
@@ -107,7 +115,7 @@ public final class KafkaCacheServerBootstrapFactory implements BindingHandler
private final LongUnaryOperator supplyReplyId;
private final LongFunction<KafkaBindingConfig> supplyBinding;
- public KafkaCacheServerBootstrapFactory(
+ public KafkaCacheBootstrapFactory(
KafkaConfiguration config,
EngineContext context,
LongFunction<KafkaBindingConfig> supplyBinding)
@@ -152,7 +160,7 @@ public MessageConsumer newStream(
final KafkaBindingConfig binding = supplyBinding.apply(routedId);
- if (binding != null && binding.bootstrap(topicName))
+ if (binding != null)
{
final KafkaTopicConfig topic = binding.topic(topicName);
final long resolvedId = routedId;
@@ -362,6 +370,9 @@ private final class KafkaBootstrapStream
private final List<KafkaBootstrapFetchStream> fetchStreams;
private final Long2LongHashMap nextOffsetsById;
private final long defaultOffset;
+ private final Int2ObjectHashMap<String> consumers;
+ private final Int2IntHashMap leadersByAssignedId;
+ private final Int2IntHashMap leadersByPartitionId;
private int state;
@@ -376,6 +387,11 @@ private final class KafkaBootstrapStream
private long replyBudgetId;
+ private KafkaUnmergedConsumerStream consumerStream;
+ private String groupId;
+ private String consumerId;
+ private int timeout;
+
KafkaBootstrapStream(
MessageConsumer sender,
long originId,
@@ -401,6 +417,9 @@ private final class KafkaBootstrapStream
this.fetchStreams = new ArrayList<>();
this.nextOffsetsById = new Long2LongHashMap(-1L);
this.defaultOffset = defaultOffset;
+ this.consumers = new Int2ObjectHashMap<>();
+ this.leadersByPartitionId = new Int2IntHashMap(-1);
+ this.leadersByAssignedId = new Int2IntHashMap(-1);
}
private void onBootstrapInitial(
@@ -444,6 +463,23 @@ private void onBootstrapInitialBegin(
assert state == 0;
state = KafkaState.openingInitial(state);
+ final OctetsFW extension = begin.extension();
+ final DirectBuffer buffer = extension.buffer();
+ final int offset = extension.offset();
+ final int limit = extension.limit();
+
+ final KafkaBeginExFW beginEx = kafkaBeginExRO.wrap(buffer, offset, limit);
+ final KafkaBootstrapBeginExFW bootstrapBeginEx = beginEx.bootstrap();
+
+ this.groupId = bootstrapBeginEx.groupId().asString();
+ this.consumerId = bootstrapBeginEx.consumerId().asString();
+ this.timeout = bootstrapBeginEx.timeout();
+
+ if (groupId != null && !groupId.isEmpty())
+ {
+ this.consumerStream = new KafkaUnmergedConsumerStream(this);
+ }
+
describeStream.doDescribeInitialBegin(traceId);
}
@@ -639,11 +675,25 @@ private void onTopicMetaDataChanged(
long traceId,
ArrayFW<KafkaPartitionFW> partitions)
{
- partitions.forEach(partition -> onPartitionMetaDataChangedIfNecessary(traceId, partition));
-
+ leadersByPartitionId.clear();
+ partitions.forEach(p -> leadersByPartitionId.put(p.partitionId(), p.leaderId()));
partitionCount.value = 0;
partitions.forEach(partition -> partitionCount.value++);
- assert fetchStreams.size() >= partitionCount.value;
+
+ if (this.consumerStream != null)
+ {
+ this.consumerStream.doConsumerInitialBeginIfNecessary(traceId);
+ }
+ else
+ {
+ leadersByAssignedId.clear();
+ partitions.forEach(p ->
+ {
+ onPartitionMetaDataChangedIfNecessary(traceId, p);
+ leadersByAssignedId.put(p.partitionId(), p.leaderId());
+ });
+ assert leadersByAssignedId.size() == partitionCount.value;
+ }
}
private void onPartitionMetaDataChangedIfNecessary(
@@ -674,6 +724,59 @@ private void onPartitionMetaDataChangedIfNecessary(
assert leader.leaderId == leaderId;
}
+ private void onPartitionMetaDataChangedIfNecessary(
+ long traceId,
+ int partitionId,
+ int leaderId)
+ {
+ final long partitionOffset = nextPartitionOffset(partitionId);
+
+ KafkaBootstrapFetchStream leader = findPartitionLeader(partitionId);
+
+ if (leader != null && leader.leaderId != leaderId)
+ {
+ leader.leaderId = leaderId;
+ leader.doFetchInitialBeginIfNecessary(traceId, partitionOffset);
+ }
+
+ if (leader == null)
+ {
+ leader = new KafkaBootstrapFetchStream(partitionId, leaderId, this);
+ leader.doFetchInitialBegin(traceId, partitionOffset);
+ fetchStreams.add(leader);
+ }
+
+ assert leader != null;
+ assert leader.partitionId == partitionId;
+ assert leader.leaderId == leaderId;
+ }
+
+ private void onTopicConsumerDataChanged(
+ long traceId,
+ Array32FW<KafkaTopicPartitionFW> partitions,
+ Array32FW<KafkaConsumerAssignmentFW> newAssignments)
+ {
+ leadersByAssignedId.clear();
+ partitions.forEach(p ->
+ {
+ int partitionId = p.partitionId();
+ int leaderId = leadersByPartitionId.get(partitionId);
+ leadersByAssignedId.put(partitionId, leaderId);
+
+ onPartitionMetaDataChangedIfNecessary(traceId, partitionId, leaderId);
+ });
+
+ consumers.clear();
+ newAssignments.forEach(a ->
+ {
+ a.partitions().forEach(p ->
+ {
+ final String consumerId = a.consumerId().asString();
+ consumers.put(p.partitionId(), consumerId);
+ });
+ });
+ }
+
private void onPartitionLeaderReady(
long traceId,
long partitionId)
@@ -1542,4 +1645,268 @@ private void doFetchReplyReset(
traceId, bootstrap.authorization);
}
}
+
+ private final class KafkaUnmergedConsumerStream
+ {
+ private final KafkaBootstrapStream bootstrap;
+
+ private long initialId;
+ private long replyId;
+ private MessageConsumer receiver = NO_RECEIVER;
+
+ private int state;
+
+ private long initialSeq;
+ private long initialAck;
+ private int initialMax;
+
+ private long replySeq;
+ private long replyAck;
+ private int replyMax;
+
+ private KafkaUnmergedConsumerStream(
+ KafkaBootstrapStream bootstrap)
+ {
+ this.bootstrap = bootstrap;
+ }
+
+ private void doConsumerInitialBeginIfNecessary(
+ long traceId)
+ {
+ if (!KafkaState.initialOpening(state))
+ {
+ doConsumerInitialBegin(traceId);
+ }
+ }
+
+ private void doConsumerInitialBegin(
+ long traceId)
+ {
+ assert state == 0;
+
+ state = KafkaState.openingInitial(state);
+
+ this.initialId = supplyInitialId.applyAsLong(bootstrap.resolvedId);
+ this.replyId = supplyReplyId.applyAsLong(initialId);
+ this.receiver = newStream(this::onConsumerReply,
+ bootstrap.routedId, bootstrap.resolvedId, initialId, initialSeq, initialAck, initialMax,
+ traceId, bootstrap.authorization, 0L,
+ ex -> ex.set((b, o, l) -> kafkaBeginExRW.wrap(b, o, l)
+ .typeId(kafkaTypeId)
+ .consumer(c -> c
+ .groupId(bootstrap.groupId)
+ .consumerId(bootstrap.consumerId)
+ .timeout(bootstrap.timeout)
+ .topic(bootstrap.topic)
+ .partitionIds(p -> bootstrap.leadersByPartitionId.forEach((k, v) ->
+ p.item(tp -> tp.partitionId(k))))
+ )
+ .build()
+ .sizeof()));
+ }
+
+ private void doConsumerInitialEndIfNecessary(
+ long traceId)
+ {
+ if (!KafkaState.initialClosed(state))
+ {
+ doConsumerInitialEnd(traceId);
+ }
+ }
+
+ private void doConsumerInitialEnd(
+ long traceId)
+ {
+ state = KafkaState.closedInitial(state);
+
+ doEnd(receiver, bootstrap.routedId, bootstrap.resolvedId, initialId, initialSeq, initialAck, initialMax,
+ traceId, bootstrap.authorization, EMPTY_EXTENSION);
+ }
+
+ private void doConsumerInitialAbortIfNecessary(
+ long traceId)
+ {
+ if (KafkaState.initialOpening(state) && !KafkaState.initialClosed(state))
+ {
+ doConsumerInitialAbort(traceId);
+ }
+ }
+
+ private void doConsumerInitialAbort(
+ long traceId)
+ {
+ state = KafkaState.closedInitial(state);
+
+ doAbort(receiver, bootstrap.routedId, bootstrap.resolvedId, initialId, initialSeq, initialAck, initialMax,
+ traceId, bootstrap.authorization, EMPTY_EXTENSION);
+ }
+
+ private void onConsumerReply(
+ int msgTypeId,
+ DirectBuffer buffer,
+ int index,
+ int length)
+ {
+ switch (msgTypeId)
+ {
+ case BeginFW.TYPE_ID:
+ final BeginFW begin = beginRO.wrap(buffer, index, index + length);
+ onConsumerReplyBegin(begin);
+ break;
+ case DataFW.TYPE_ID:
+ final DataFW data = dataRO.wrap(buffer, index, index + length);
+ onConsumerReplyData(data);
+ break;
+ case EndFW.TYPE_ID:
+ final EndFW end = endRO.wrap(buffer, index, index + length);
+ onConsumerReplyEnd(end);
+ break;
+ case AbortFW.TYPE_ID:
+ final AbortFW abort = abortRO.wrap(buffer, index, index + length);
+ onConsumerReplyAbort(abort);
+ break;
+ case ResetFW.TYPE_ID:
+ final ResetFW reset = resetRO.wrap(buffer, index, index + length);
+ onConsumerInitialReset(reset);
+ break;
+ case WindowFW.TYPE_ID:
+ final WindowFW window = windowRO.wrap(buffer, index, index + length);
+ onConsumerInitialWindow(window);
+ break;
+ default:
+ break;
+ }
+ }
+
+ private void onConsumerReplyBegin(
+ BeginFW begin)
+ {
+ final long traceId = begin.traceId();
+
+ state = KafkaState.openingReply(state);
+
+ doConsumerReplyWindow(traceId, 0, 8192);
+ }
+
+ private void onConsumerReplyData(
+ DataFW data)
+ {
+ final long sequence = data.sequence();
+ final long acknowledge = data.acknowledge();
+ final long traceId = data.traceId();
+ final int reserved = data.reserved();
+ final OctetsFW extension = data.extension();
+
+ assert acknowledge <= sequence;
+ assert sequence >= replySeq;
+
+ replySeq = sequence + reserved;
+
+ assert replyAck <= replySeq;
+
+ if (replySeq > replyAck + replyMax)
+ {
+ bootstrap.doBootstrapCleanup(traceId);
+ }
+ else
+ {
+ final KafkaDataExFW kafkaDataEx = extension.get(kafkaDataExRO::wrap);
+ final KafkaConsumerDataExFW kafkaConsumerDataEx = kafkaDataEx.consumer();
+ final Array32FW<KafkaTopicPartitionFW> partitions = kafkaConsumerDataEx.partitions();
+ final Array32FW<KafkaConsumerAssignmentFW> assignments = kafkaConsumerDataEx.assignments();
+ bootstrap.onTopicConsumerDataChanged(traceId, partitions, assignments);
+
+ doConsumerReplyWindow(traceId, 0, replyMax);
+ }
+ }
+
+ private void onConsumerReplyEnd(
+ EndFW end)
+ {
+ final long traceId = end.traceId();
+
+ state = KafkaState.closedReply(state);
+
+ bootstrap.doBootstrapReplyBeginIfNecessary(traceId);
+ bootstrap.doBootstrapReplyEndIfNecessary(traceId);
+
+ doConsumerInitialEndIfNecessary(traceId);
+ }
+
+ private void onConsumerReplyAbort(
+ AbortFW abort)
+ {
+ final long traceId = abort.traceId();
+
+ state = KafkaState.closedReply(state);
+
+ bootstrap.doBootstrapReplyAbortIfNecessary(traceId);
+
+ doConsumerInitialAbortIfNecessary(traceId);
+ }
+
+ private void onConsumerInitialReset(
+ ResetFW reset)
+ {
+ final long traceId = reset.traceId();
+
+ state = KafkaState.closedInitial(state);
+
+ bootstrap.doBootstrapInitialResetIfNecessary(traceId);
+
+ doConsumerReplyResetIfNecessary(traceId);
+ }
+
+ private void onConsumerInitialWindow(
+ WindowFW window)
+ {
+ if (!KafkaState.initialOpened(state))
+ {
+ final long traceId = window.traceId();
+
+ state = KafkaState.openedInitial(state);
+
+ bootstrap.doBootstrapInitialWindow(traceId, 0L, 0, 0, 0);
+ }
+ }
+
+ private void doConsumerReplyWindow(
+ long traceId,
+ int minReplyNoAck,
+ int minReplyMax)
+ {
+ final long newReplyAck = Math.max(replySeq - minReplyNoAck, replyAck);
+
+ if (newReplyAck > replyAck || minReplyMax > replyMax || !KafkaState.replyOpened(state))
+ {
+ replyAck = newReplyAck;
+ assert replyAck <= replySeq;
+
+ replyMax = minReplyMax;
+
+ state = KafkaState.openedReply(state);
+
+ doWindow(receiver, bootstrap.routedId, bootstrap.resolvedId, replyId, replySeq, replyAck, replyMax,
+ traceId, bootstrap.authorization, 0L, bootstrap.replyPad);
+ }
+ }
+
+ private void doConsumerReplyResetIfNecessary(
+ long traceId)
+ {
+ if (!KafkaState.replyClosed(state))
+ {
+ doConsumerReplyReset(traceId);
+ }
+ }
+
+ private void doConsumerReplyReset(
+ long traceId)
+ {
+ state = KafkaState.closedReply(state);
+
+ doReset(receiver, bootstrap.routedId, bootstrap.resolvedId, replyId, replySeq, replyAck, replyMax,
+ traceId, bootstrap.authorization);
+ }
+ }
}
diff --git a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheClientFactory.java b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheClientFactory.java
index 4d389f5785..ecfa9675e0 100644
--- a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheClientFactory.java
+++ b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheClientFactory.java
@@ -82,6 +82,9 @@ public KafkaCacheClientFactory(
final KafkaMergedFactory cacheMergedFactory = new KafkaMergedFactory(
config, context, bindings::get, accountant.creditor());
+ final KafkaCacheBootstrapFactory cacheBootstrapFactory = new KafkaCacheBootstrapFactory(
+ config, context, bindings::get);
+
final Int2ObjectHashMap<BindingHandler> factories = new Int2ObjectHashMap<>();
factories.put(KafkaBeginExFW.KIND_META, cacheMetaFactory);
factories.put(KafkaBeginExFW.KIND_DESCRIBE, cacheDescribeFactory);
@@ -91,6 +94,7 @@ public KafkaCacheClientFactory(
factories.put(KafkaBeginExFW.KIND_FETCH, cacheFetchFactory);
factories.put(KafkaBeginExFW.KIND_PRODUCE, cacheProduceFactory);
factories.put(KafkaBeginExFW.KIND_MERGED, cacheMergedFactory);
+ factories.put(KafkaBeginExFW.KIND_BOOTSTRAP, cacheBootstrapFactory);
this.kafkaTypeId = context.supplyTypeId(KafkaBinding.NAME);
this.factories = factories;
diff --git a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheServerFactory.java b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheServerFactory.java
index d936e962ad..cf1b2379e3 100644
--- a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheServerFactory.java
+++ b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaCacheServerFactory.java
@@ -58,7 +58,7 @@ public KafkaCacheServerFactory(
final Long2ObjectHashMap<KafkaBindingConfig> bindings = new Long2ObjectHashMap<>();
final Int2ObjectHashMap<BindingHandler> factories = new Int2ObjectHashMap<>();
- final KafkaCacheServerBootstrapFactory cacheBootstrapFactory = new KafkaCacheServerBootstrapFactory(
+ final KafkaCacheBootstrapFactory cacheBootstrapFactory = new KafkaCacheBootstrapFactory(
config, context, bindings::get);
final KafkaCacheMetaFactory cacheMetaFactory = new KafkaCacheMetaFactory(
diff --git a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientConnectionPool.java b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientConnectionPool.java
index dc3b57ee97..c71a83b2d4 100644
--- a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientConnectionPool.java
+++ b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientConnectionPool.java
@@ -17,7 +17,6 @@
import static io.aklivity.zilla.runtime.binding.kafka.internal.types.ProxyAddressProtocol.STREAM;
import static io.aklivity.zilla.runtime.engine.budget.BudgetCreditor.NO_BUDGET_ID;
-import static io.aklivity.zilla.runtime.engine.budget.BudgetCreditor.NO_CREDITOR_INDEX;
import static io.aklivity.zilla.runtime.engine.concurrent.Signaler.NO_CANCEL_ID;
import static java.lang.System.currentTimeMillis;
@@ -666,6 +665,7 @@ private void onStreamBegin(
state = KafkaState.openingInitial(state);
doStreamBegin(authorization, traceId);
+ doStreamWindow(authorization, traceId);
}
private void onStreamData(
@@ -787,23 +787,26 @@ private void doStreamWindow(
long authorization,
long traceId)
{
- final long initialSeqOffsetPeek = initialSeqOffset.peekLong();
-
- if (initialSeqOffsetPeek != NO_OFFSET)
+ if (KafkaState.initialOpened(connection.state))
{
- assert initialAck <= connection.initialAck - initialSeqOffsetPeek + initialAckSnapshot;
-
- initialAck = connection.initialAck - initialSeqOffsetPeek + initialAckSnapshot;
+ final long initialSeqOffsetPeek = initialSeqOffset.peekLong();
- if (initialAck == initialSeq)
+ if (initialSeqOffsetPeek != NO_OFFSET)
{
- initialSeqOffset.removeLong();
- initialAckSnapshot = initialAck;
+ assert initialAck <= connection.initialAck - initialSeqOffsetPeek + initialAckSnapshot;
+
+ initialAck = connection.initialAck - initialSeqOffsetPeek + initialAckSnapshot;
+
+ if (initialAck == initialSeq)
+ {
+ initialSeqOffset.removeLong();
+ initialAckSnapshot = initialAck;
+ }
}
- }
- doWindow(sender, originId, routedId, initialId, initialSeq, initialAck, connection.initialMax,
- traceId, authorization, connection.initialBudId, connection.initialPad);
+ doWindow(sender, originId, routedId, initialId, initialSeq, initialAck, connection.initialMax,
+ traceId, authorization, connection.initialBudId, connection.initialPad);
+ }
}
private void doStreamBegin(
@@ -814,8 +817,6 @@ private void doStreamBegin(
doBegin(sender, originId, routedId, replyId, replySeq, replyAck, replyMax,
traceId, authorization, connection.initialBudId, EMPTY_EXTENSION);
-
- doStreamWindow(authorization, traceId);
}
private void doStreamData(
@@ -1077,6 +1078,12 @@ private void doConnectionBegin(
if (KafkaState.closed(state))
{
state = 0;
+ initialAck = 0;
+ initialSeq = 0;
+ initialMax = 0;
+ replyAck = 0;
+ replySeq = 0;
+ initialBudId = NO_BUDGET_ID;
}
if (!KafkaState.initialOpening(state))
@@ -1574,10 +1581,10 @@ private void cleanupConnection(
private void cleanupBudgetCreditorIfNecessary()
{
- if (initialBudId != NO_CREDITOR_INDEX)
+ if (initialBudId != NO_BUDGET_ID)
{
creditor.release(initialBudId);
- initialBudId = NO_CREDITOR_INDEX;
+ initialBudId = NO_BUDGET_ID;
}
}
diff --git a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientGroupFactory.java b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientGroupFactory.java
index 50df3db6f7..ec5321f274 100644
--- a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientGroupFactory.java
+++ b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientGroupFactory.java
@@ -3371,6 +3371,16 @@ private void onNetworkSignal(
case SIGNAL_SYNC_GROUP_REQUEST:
assignment = payload;
doEncodeRequestIfNecessary(traceId, initialBudgetId);
+
+ if (decoder != decodeSyncGroupResponse)
+ {
+ final DirectBuffer buffer = payload.value();
+ final int offset = 0;
+ final int sizeof = payload.sizeof();
+
+ signaler.signalNow(originId, routedId, initialId, traceId, SIGNAL_SYNC_GROUP_REQUEST, 0,
+ buffer, offset, sizeof);
+ }
break;
case SIGNAL_HEARTBEAT_REQUEST:
encoders.add(encodeHeartbeatRequest);
@@ -3938,7 +3948,8 @@ private void doHeartbeatRequest(
{
final String memberId = delegate.groupMembership.memberIds.getOrDefault(delegate.groupId, UNKNOWN_MEMBER_ID);
- if (!memberId.equals(UNKNOWN_MEMBER_ID))
+ if (KafkaState.initialOpened(state) &&
+ !memberId.equals(UNKNOWN_MEMBER_ID))
{
if (heartbeatRequestId != NO_CANCEL_ID)
{
diff --git a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientProduceFactory.java b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientProduceFactory.java
index d6c2cb2b5b..4e2e32816c 100644
--- a/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientProduceFactory.java
+++ b/runtime/binding-kafka/src/main/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/KafkaClientProduceFactory.java
@@ -86,8 +86,6 @@ public final class KafkaClientProduceFactory extends KafkaClientSaslHandshaker i
{
private static final int PRODUCE_REQUEST_RECORDS_OFFSET_MAX = 512;
- private static final int KAFKA_RECORD_FRAMING = 100; // TODO
-
private static final int FLAGS_CON = 0x00;
private static final int FLAGS_FIN = 0x01;
private static final int FLAGS_INIT = 0x02;
@@ -178,6 +176,7 @@ public final class KafkaClientProduceFactory extends KafkaClientSaslHandshaker i
private final KafkaProduceClientDecoder decodeReject = this::decodeReject;
private final int produceMaxWaitMillis;
+ private final int produceRecordFramingSize;
private final long produceRequestMaxDelay;
private final int kafkaTypeId;
private final int proxyTypeId;
@@ -201,6 +200,7 @@ public KafkaClientProduceFactory(
{
super(config, context);
this.produceMaxWaitMillis = config.clientProduceMaxResponseMillis();
+ this.produceRecordFramingSize = config.clientProduceRecordFramingSize();
this.produceRequestMaxDelay = config.clientProduceMaxRequestMillis();
this.kafkaTypeId = context.supplyTypeId(KafkaBinding.NAME);
this.proxyTypeId = context.supplyTypeId("proxy");
@@ -539,7 +539,8 @@ private int flushRecordInit(
final int valueSize = payload != null ? payload.sizeof() : 0;
client.valueCompleteSize = valueSize + client.encodeableRecordBytesDeferred;
- final int maxEncodeableBytes = client.encodeSlotLimit + client.valueCompleteSize + KAFKA_RECORD_FRAMING;
+
+ final int maxEncodeableBytes = client.encodeSlotLimit + client.valueCompleteSize + produceRecordFramingSize;
if (client.encodeSlot != NO_SLOT &&
maxEncodeableBytes > encodePool.slotCapacity())
{
@@ -969,7 +970,8 @@ private void onApplicationData(
assert initialAck <= initialSeq;
- if (initialSeq > initialAck + initialMax)
+ if (initialSeq > initialAck + initialMax ||
+ extension.sizeof() > produceRecordFramingSize)
{
cleanupApplication(traceId, EMPTY_OCTETS);
client.cleanupNetwork(traceId);
@@ -1094,7 +1096,7 @@ private void doAppWindow(
state = KafkaState.openedInitial(state);
doWindow(application, originId, routedId, initialId, initialSeq, initialAck, initialMax,
- traceId, client.authorization, client.initialBud, KAFKA_RECORD_FRAMING);
+ traceId, client.authorization, client.initialBud, produceRecordFramingSize);
}
}
@@ -1191,6 +1193,7 @@ private final class KafkaProduceClient extends KafkaSaslClient
private int encodeableRecordCount;
private int encodeableRecordBytes;
private int encodeableRecordBytesDeferred;
+ private int encodeableRecordValueBytes;
private int flushableRequestBytes;
private int decodeSlot = NO_SLOT;
@@ -1567,7 +1570,8 @@ private void flush(
final int length = limit - progress;
if (encodeSlot != NO_SLOT &&
flushableRequestBytes > 0 &&
- encodeSlotLimit + length + KAFKA_RECORD_FRAMING + flushableRecordHeadersBytes > encodePool.slotCapacity())
+ encodeSlotLimit + length + produceRecordFramingSize + flushableRecordHeadersBytes >
+ encodePool.slotCapacity())
{
doNetworkData(traceId, budgetId, EMPTY_BUFFER, 0, 0);
}
@@ -1652,6 +1656,7 @@ private void doEncodeRecordInit(
encodeSlotBuffer.putBytes(encodeSlotLimit, encodeBuffer, 0, encodeProgress);
encodeSlotLimit += encodeProgress;
+ encodeableRecordValueBytes = 0;
if (headersCount > 0)
{
@@ -1678,7 +1683,8 @@ private void doEncodeRecordCont(
{
final int length = value.sizeof();
- final int encodeableBytes = KAFKA_RECORD_FRAMING + encodeSlotLimit + length + flushableRecordHeadersBytes;
+ final int encodeableBytes = produceRecordFramingSize + encodeSlotLimit +
+ length + flushableRecordHeadersBytes;
if (encodeableBytes >= encodePool.slotCapacity())
{
doEncodeRequestIfNecessary(traceId, budgetId);
@@ -1689,6 +1695,7 @@ private void doEncodeRecordCont(
encodeSlotBuffer.putBytes(encodeSlotLimit, value.buffer(), value.offset(), length);
encodeSlotLimit += length;
+ encodeableRecordValueBytes += length;
if ((flags & FLAGS_FIN) == 0)
{
@@ -1893,7 +1900,8 @@ private void doEncodeProduceRequest(
final ByteBuffer encodeSlotByteBuffer = encodePool.byteBuffer(encodeSlot);
final int encodeSlotBytePosition = encodeSlotByteBuffer.position();
- encodeSlotByteBuffer.limit(encodeSlotBytePosition + encodeSlotLimit);
+ final int partialValueSize = flushFlags != FLAGS_FIN ? encodeableRecordValueBytes : 0;
+ encodeSlotByteBuffer.limit(encodeSlotBytePosition + encodeSlotLimit - partialValueSize);
encodeSlotByteBuffer.position(encodeSlotBytePosition + encodeSlotOffset + crcLimit);
final CRC32C crc = crc32c;
diff --git a/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/CacheBootstrapIT.java b/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/CacheBootstrapIT.java
index 40752fe1a0..d679728ebe 100644
--- a/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/CacheBootstrapIT.java
+++ b/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/CacheBootstrapIT.java
@@ -36,6 +36,7 @@
public class CacheBootstrapIT
{
private final K3poRule k3po = new K3poRule()
+ .addScriptRoot("net", "io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap")
.addScriptRoot("app", "io/aklivity/zilla/specs/binding/kafka/streams/application/merged");
private final TestRule timeout = new DisableOnDebug(new Timeout(10, SECONDS));
@@ -70,4 +71,14 @@ public void shouldReceiveMergedMessageValues() throws Exception
k3po.awaitBarrier("SENT_MESSAGE_C2");
k3po.finish();
}
+
+ @Test
+ @Configuration("cache.yaml")
+ @Specification({
+ "${net}/group.fetch.message.value/client",
+ "${app}/unmerged.group.fetch.message.value/server"})
+ public void shouldReceiveGroupMessageValue() throws Exception
+ {
+ k3po.finish();
+ }
}
diff --git a/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/CacheProduceIT.java b/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/CacheProduceIT.java
index 0f67e22554..7c9503eb0f 100644
--- a/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/CacheProduceIT.java
+++ b/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/CacheProduceIT.java
@@ -360,7 +360,6 @@ public void shouldRejectMessageValue() throws Exception
k3po.finish();
}
- @Ignore("GitHub Actions")
@Test
@Configuration("cache.yaml")
@Specification({
diff --git a/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/ClientMergedIT.java b/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/ClientMergedIT.java
index 8ae2d6fd51..7803a7ce1f 100644
--- a/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/ClientMergedIT.java
+++ b/runtime/binding-kafka/src/test/java/io/aklivity/zilla/runtime/binding/kafka/internal/stream/ClientMergedIT.java
@@ -49,7 +49,7 @@ public class ClientMergedIT
.countersBufferCapacity(8192)
.configure(ENGINE_BUFFER_SLOT_CAPACITY, 8192)
.configure(KAFKA_CLIENT_META_MAX_AGE_MILLIS, 1000)
- .configure(KAFKA_CLIENT_PRODUCE_MAX_BYTES, 116)
+ .configure(KAFKA_CLIENT_PRODUCE_MAX_BYTES, 528)
.configurationRoot("io/aklivity/zilla/specs/binding/kafka/config")
.external("net0")
.clean();
@@ -234,7 +234,7 @@ public void shouldProduceMergedMessageValues() throws Exception
@Configure(
name = "zilla.binding.kafka.client.produce.max.bytes",
value = "200000")
- @ScriptProperty("padding ${512 + 100}")
+ @ScriptProperty("padding ${512 + 512}")
public void shouldProduceMergedMessageValue10k() throws Exception
{
k3po.finish();
@@ -248,7 +248,7 @@ public void shouldProduceMergedMessageValue10k() throws Exception
@Configure(
name = "zilla.binding.kafka.client.produce.max.bytes",
value = "200000")
- @ScriptProperty("padding ${512 + 100}")
+ @ScriptProperty("padding ${512 + 512}")
public void shouldProduceMergedMessageValue100k() throws Exception
{
k3po.finish();
diff --git a/runtime/binding-mqtt-kafka/pom.xml b/runtime/binding-mqtt-kafka/pom.xml
index fa58eb20fe..1930466c2f 100644
--- a/runtime/binding-mqtt-kafka/pom.xml
+++ b/runtime/binding-mqtt-kafka/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
runtime
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/config/MqttKafkaConditionConfig.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/config/MqttKafkaConditionConfig.java
index 8df04215ba..3b4747193c 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/config/MqttKafkaConditionConfig.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/config/MqttKafkaConditionConfig.java
@@ -14,15 +14,20 @@
*/
package io.aklivity.zilla.runtime.binding.mqtt.kafka.config;
+import java.util.List;
+
import io.aklivity.zilla.runtime.engine.config.ConditionConfig;
public class MqttKafkaConditionConfig extends ConditionConfig
{
- public final String topic;
+ public final List<String> topics;
+ public final MqttKafkaConditionKind kind;
public MqttKafkaConditionConfig(
- String topic)
+ List<String> topics,
+ MqttKafkaConditionKind kind)
{
- this.topic = topic;
+ this.topics = topics;
+ this.kind = kind;
}
}
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/config/MqttKafkaConditionKind.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/config/MqttKafkaConditionKind.java
new file mode 100644
index 0000000000..67f9755a0d
--- /dev/null
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/config/MqttKafkaConditionKind.java
@@ -0,0 +1,20 @@
+/*
+ * Copyright 2021-2023 Aklivity Inc
+ *
+ * Licensed under the Aklivity Community License (the "License"); you may not use
+ * this file except in compliance with the License. You may obtain a copy of the
+ * License at
+ *
+ * https://www.aklivity.io/aklivity-community-license/
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ */
+package io.aklivity.zilla.runtime.binding.mqtt.kafka.config;
+public enum MqttKafkaConditionKind
+{
+ PUBLISH,
+ SUBSCRIBE
+}
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/MqttKafkaConfiguration.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/MqttKafkaConfiguration.java
index 2dc397183d..945170aded 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/MqttKafkaConfiguration.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/MqttKafkaConfiguration.java
@@ -37,6 +37,8 @@ public class MqttKafkaConfiguration extends Configuration
public static final PropertyDef TIME;
public static final BooleanPropertyDef WILL_AVAILABLE;
public static final IntPropertyDef WILL_STREAM_RECONNECT_DELAY;
+ public static final BooleanPropertyDef BOOTSTRAP_AVAILABLE;
+ public static final IntPropertyDef BOOTSTRAP_STREAM_RECONNECT_DELAY;
static
{
@@ -53,6 +55,8 @@ public class MqttKafkaConfiguration extends Configuration
MqttKafkaConfiguration::decodeLongSupplier, MqttKafkaConfiguration::defaultTime);
WILL_AVAILABLE = config.property("will.available", true);
WILL_STREAM_RECONNECT_DELAY = config.property("will.stream.reconnect", 2);
+ BOOTSTRAP_AVAILABLE = config.property("bootstrap.available", true);
+ BOOTSTRAP_STREAM_RECONNECT_DELAY = config.property("bootstrap.stream.reconnect", 2);
MQTT_KAFKA_CONFIG = config;
}
@@ -102,6 +106,17 @@ public int willStreamReconnectDelay()
return WILL_STREAM_RECONNECT_DELAY.getAsInt(this);
}
+ public boolean bootstrapAvailable()
+ {
+ return BOOTSTRAP_AVAILABLE.get(this);
+ }
+
+ public int bootstrapStreamReconnectDelay()
+ {
+ return BOOTSTRAP_STREAM_RECONNECT_DELAY.getAsInt(this);
+ }
+
+
private static StringSupplier decodeStringSupplier(
String fullyQualifiedMethodName)
{
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaBindingConfig.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaBindingConfig.java
index 53b57260ff..17089b24b4 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaBindingConfig.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaBindingConfig.java
@@ -16,19 +16,29 @@
import static java.util.stream.Collectors.toList;
+import java.util.ArrayList;
import java.util.List;
+import java.util.function.Function;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.stream.MqttKafkaSessionFactory;
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.Array32FW;
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.MqttTopicFilterFW;
import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.String16FW;
import io.aklivity.zilla.runtime.engine.config.BindingConfig;
import io.aklivity.zilla.runtime.engine.config.KindConfig;
public class MqttKafkaBindingConfig
{
+ private final List<MqttKafkaRouteConfig> bootstrapRoutes;
+
public final long id;
public final KindConfig kind;
public final MqttKafkaOptionsConfig options;
public final List<MqttKafkaRouteConfig> routes;
+ public final List<Function<String, String>> clients;
public MqttKafkaSessionFactory.KafkaSignalStream willProxy;
@@ -38,7 +48,18 @@ public MqttKafkaBindingConfig(
this.id = binding.id;
this.kind = binding.kind;
this.options = (MqttKafkaOptionsConfig) binding.options;
- this.routes = binding.routes.stream().map(MqttKafkaRouteConfig::new).collect(toList());
+ this.routes = binding.routes.stream().map(r -> new MqttKafkaRouteConfig(options, r)).collect(toList());
+ this.clients = options != null && options.clients != null ?
+ asAccessor(options.clients) : null;
+ final List<MqttKafkaRouteConfig> bootstrapRoutes = new ArrayList<>();
+ routes.forEach(r ->
+ {
+ if (options != null && options.clients != null &&
+ options.clients.stream().anyMatch(r::matchesClient))
+ {
+ bootstrapRoutes.add(r);
+ }
+ });
+ this.bootstrapRoutes = bootstrapRoutes;
}
public MqttKafkaRouteConfig resolve(
@@ -50,6 +71,26 @@ public MqttKafkaRouteConfig resolve(
.orElse(null);
}
+ public MqttKafkaRouteConfig resolve(
+ long authorization,
+ String topic)
+ {
+ return routes.stream()
+ .filter(r -> r.authorized(authorization) && r.matches(topic))
+ .findFirst()
+ .orElse(null);
+ }
+
+ public List<MqttKafkaRouteConfig> resolveAll(
+ long authorization,
+ Array32FW<MqttTopicFilterFW> filters)
+ {
+ return routes.stream()
+ .filter(r -> r.authorized(authorization) &&
+ filters.anyMatch(f -> r.matches(f.pattern().asString())))
+ .collect(Collectors.toList());
+ }
+
public String16FW messagesTopic()
{
return options.topics.messages;
@@ -64,4 +105,38 @@ public String16FW retainedTopic()
{
return options.topics.retained;
}
+
+ public List<MqttKafkaRouteConfig> bootstrapRoutes()
+ {
+ return bootstrapRoutes;
+ }
+
+ private List<Function<String, String>> asAccessor(
+ List clients)
+ {
+ List<Function<String, String>> accessors = new ArrayList<>();
+
+ if (clients != null)
+ {
+ for (String client : clients)
+ {
+ Matcher topicMatch =
+ Pattern.compile(client.replace("{identity}", "(?<identity>[^\\s/]+)").replace("#", ".*"))
+ .matcher("");
+
+ Function<String, String> accessor = topic ->
+ {
+ String result = null;
+ if (topic != null && topicMatch.reset(topic).matches())
+ {
+ result = topicMatch.group("identity");
+ }
+ return result;
+ };
+ accessors.add(accessor);
+ }
+ }
+
+ return accessors;
+ }
}
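The `asAccessor` rewrite above turns each `clients` topic template into a regex whose named `identity` group extracts the client identity from a topic. A standalone, hedged sketch of that idea (the template `place/{identity}/#` is an invented example, not taken from this change, and unlike the binding this version builds a fresh `Matcher` per call rather than reusing one):

```java
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the asAccessor(...) logic: "{identity}" becomes a named
// capturing group and "#" becomes ".*", so applying the accessor to a
// matching topic yields the identity segment, or null otherwise.
public class IdentityAccessorSketch
{
    public static Function<String, String> asAccessor(
        String client)
    {
        Pattern pattern = Pattern.compile(
            client.replace("{identity}", "(?<identity>[^\\s/]+)")
                  .replace("#", ".*"));

        return topic ->
        {
            String result = null;
            if (topic != null)
            {
                Matcher topicMatch = pattern.matcher(topic);
                if (topicMatch.matches())
                {
                    result = topicMatch.group("identity");
                }
            }
            return result;
        };
    }
}
```

Applying `asAccessor("place/{identity}/#")` to `place/client-1/sensors/temp` would yield `client-1`, while a topic outside the template yields `null`.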
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionConfigAdapter.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionConfigAdapter.java
index ca3ed0595c..2902f92a1d 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionConfigAdapter.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionConfigAdapter.java
@@ -14,18 +14,26 @@
*/
package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config;
+import java.util.ArrayList;
+import java.util.List;
+
import jakarta.json.Json;
+import jakarta.json.JsonArray;
+import jakarta.json.JsonArrayBuilder;
import jakarta.json.JsonObject;
import jakarta.json.JsonObjectBuilder;
import jakarta.json.bind.adapter.JsonbAdapter;
import io.aklivity.zilla.runtime.binding.mqtt.kafka.config.MqttKafkaConditionConfig;
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.config.MqttKafkaConditionKind;
import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaBinding;
import io.aklivity.zilla.runtime.engine.config.ConditionConfig;
import io.aklivity.zilla.runtime.engine.config.ConditionConfigAdapterSpi;
public class MqttKafkaConditionConfigAdapter implements ConditionConfigAdapterSpi, JsonbAdapter<ConditionConfig, JsonObject>
{
+ private static final String SUBSCRIBE_NAME = "subscribe";
+ private static final String PUBLISH_NAME = "publish";
private static final String TOPIC_NAME = "topic";
@Override
@@ -39,12 +47,24 @@ public JsonObject adaptToJson(
ConditionConfig condition)
{
MqttKafkaConditionConfig mqttKafkaCondition = (MqttKafkaConditionConfig) condition;
+ JsonArrayBuilder topics = Json.createArrayBuilder();
+
+ mqttKafkaCondition.topics.forEach(s ->
+ {
+ JsonObjectBuilder subscribeJson = Json.createObjectBuilder();
+ subscribeJson.add(TOPIC_NAME, s);
+ topics.add(subscribeJson);
+ });
JsonObjectBuilder object = Json.createObjectBuilder();
- if (mqttKafkaCondition.topic != null)
+ if (mqttKafkaCondition.kind == MqttKafkaConditionKind.SUBSCRIBE)
+ {
+ object.add(SUBSCRIBE_NAME, topics);
+ }
+ else if (mqttKafkaCondition.kind == MqttKafkaConditionKind.PUBLISH)
{
- object.add(TOPIC_NAME, mqttKafkaCondition.topic);
+ object.add(PUBLISH_NAME, topics);
}
return object.build();
@@ -54,10 +74,30 @@ public JsonObject adaptToJson(
public ConditionConfig adaptFromJson(
JsonObject object)
{
- String topic = object.containsKey(TOPIC_NAME)
- ? object.getString(TOPIC_NAME)
- : null;
+ List<String> topics = new ArrayList<>();
+ MqttKafkaConditionKind kind = null;
+
+ if (object.containsKey(SUBSCRIBE_NAME))
+ {
+ kind = MqttKafkaConditionKind.SUBSCRIBE;
+ JsonArray subscribesJson = object.getJsonArray(SUBSCRIBE_NAME);
+ subscribesJson.forEach(s ->
+ {
+ String topic = s.asJsonObject().getString(TOPIC_NAME);
+ topics.add(topic);
+ });
+ }
+ else if (object.containsKey(PUBLISH_NAME))
+ {
+ kind = MqttKafkaConditionKind.PUBLISH;
+ JsonArray publishesJson = object.getJsonArray(PUBLISH_NAME);
+ publishesJson.forEach(p ->
+ {
+ String topic = p.asJsonObject().getString(TOPIC_NAME);
+ topics.add(topic);
+ });
+ }
- return new MqttKafkaConditionConfig(topic);
+ return new MqttKafkaConditionConfig(topics, kind);
}
}
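The adapter changes above give route conditions a new shape: instead of a single `topic`, a condition is keyed by `subscribe` or `publish`, each holding an array of `{ topic: ... }` objects. A hypothetical configuration fragment illustrating the shape (binding and topic names are invented for illustration, not taken from this change):

```yaml
routes:
  - when:
      - subscribe:
          - topic: sensors/#
      - publish:
          - topic: sensors/one
    exit: kafka0
```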
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionMatcher.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionMatcher.java
new file mode 100644
index 0000000000..866ebdc322
--- /dev/null
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionMatcher.java
@@ -0,0 +1,114 @@
+/*
+ * Copyright 2021-2023 Aklivity Inc
+ *
+ * Licensed under the Aklivity Community License (the "License"); you may not use
+ * this file except in compliance with the License. You may obtain a copy of the
+ * License at
+ *
+ * https://www.aklivity.io/aklivity-community-license/
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ */
+package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.config.MqttKafkaConditionConfig;
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.config.MqttKafkaConditionKind;
+
+public class MqttKafkaConditionMatcher
+{
+ private final List<Matcher> matchers;
+ public final MqttKafkaConditionKind kind;
+
+ public MqttKafkaConditionMatcher(
+ MqttKafkaConditionConfig condition)
+ {
+ this.matchers = asTopicMatchers(condition.topics);
+ this.kind = condition.kind;
+ }
+
+ public boolean matches(
+ String topic)
+ {
+ boolean match = false;
+ if (matchers != null)
+ {
+ for (Matcher matcher : matchers)
+ {
+ if (matcher.reset(topic).matches())
+ {
+ match = true;
+ break;
+ }
+ }
+ }
+ return match;
+ }
+
+
+ private static List<Matcher> asTopicMatchers(
+ List<String> wildcards)
+ {
+ final List<Matcher> matchers = new ArrayList<>();
+ for (String wildcard : wildcards)
+ {
+ String patternBegin = wildcard.startsWith("/") ? "(" : "^(?!\\/)(";
+ String fixedPattern = patternBegin + asRegexPattern(wildcard, 0, true) + ")?\\/?\\#?";
+ String nonFixedPattern = patternBegin + asRegexPattern(wildcard, 0, false) + ")?\\/?\\#";
+ matchers.add(Pattern.compile(nonFixedPattern + "|" + fixedPattern).matcher(""));
+ }
+ return matchers;
+ }
+
+ private static String asRegexPattern(
+ String wildcard,
+ int level,
+ boolean fixedLength)
+ {
+ if (wildcard.isEmpty())
+ {
+ return "";
+ }
+
+ String[] parts = wildcard.split("/", 2);
+ String currentPart = parts[0];
+ String remainingParts = (parts.length > 1) ? parts[1] : "";
+
+ String pattern;
+ if ("".equals(currentPart))
+ {
+ pattern = "\\/";
+ level--;
+ }
+ else
+ {
+ currentPart = currentPart
+ .replace(".", "\\.")
+ .replace("$", "\\$")
+ .replace("+", "[^/]*")
+ .replace("#", ".*");
+ pattern = (level > 0) ? "(\\/\\+|\\/" + currentPart + ")" : "(\\+|" + currentPart + ")";
+ }
+
+ String nextPart = asRegexPattern(remainingParts, level + 1, fixedLength);
+ if (level > 0)
+ {
+ pattern = "(" + pattern;
+ }
+ pattern += nextPart;
+
+ if ("".equals(nextPart))
+ {
+ String endParentheses = fixedLength ? ")" : ")?";
+ pattern += "(\\/\\#)?" + endParentheses.repeat(Math.max(0, level));
+ }
+ return pattern;
+ }
+}
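The patterns built by `asTopicMatchers` above are deliberately permissive: they also accept partial (prefix) matches of a filter. For intuition only, a much-simplified exact-match sketch of plain MQTT wildcard semantics (`+` matches one topic level, `#` matches this level and below; this is not the pattern the class above builds, and a bare `#` filter is not handled here):

```java
import java.util.regex.Pattern;

// Simplified MQTT topic-filter matching: quote the filter literally, then
// splice in regex for the wildcards. "+" becomes one non-"/" segment and
// a trailing "/#" becomes an optional "/..." remainder.
public class MqttFilterSketch
{
    public static boolean matches(
        String filter,
        String topic)
    {
        String regex = Pattern.quote(filter)
            .replace("+", "\\E[^/]+\\Q")      // exactly one level
            .replace("/#", "\\E(/.*)?\\Q");   // this level and below
        return topic.matches(regex);
    }
}
```

For example, `sensor/+/temp` matches `sensor/a/temp` but not `sensor/a/b/temp`, and `sensor/#` matches both `sensor` and `sensor/a/b`.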
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfig.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfig.java
index 6b4a3e5225..a7634f39bb 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfig.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfig.java
@@ -14,18 +14,23 @@
*/
package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config;
+import java.util.List;
+
import io.aklivity.zilla.runtime.engine.config.OptionsConfig;
public class MqttKafkaOptionsConfig extends OptionsConfig
{
public final MqttKafkaTopicsConfig topics;
public final String serverRef;
+ public final List<String> clients;
public MqttKafkaOptionsConfig(
MqttKafkaTopicsConfig topics,
- String serverRef)
+ String serverRef,
+ List<String> clients)
{
this.topics = topics;
this.serverRef = serverRef;
+ this.clients = clients;
}
}
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfigAdapter.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfigAdapter.java
index 2b1b665593..a9a0ded706 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfigAdapter.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfigAdapter.java
@@ -15,7 +15,12 @@
package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config;
+import java.util.ArrayList;
+import java.util.List;
+
import jakarta.json.Json;
+import jakarta.json.JsonArray;
+import jakarta.json.JsonArrayBuilder;
import jakarta.json.JsonObject;
import jakarta.json.JsonObjectBuilder;
import jakarta.json.bind.adapter.JsonbAdapter;
@@ -29,6 +34,7 @@ public class MqttKafkaOptionsConfigAdapter implements OptionsConfigAdapterSpi, J
{
private static final String TOPICS_NAME = "topics";
private static final String SERVER_NAME = "server";
+ private static final String CLIENTS_NAME = "clients";
private static final String SESSIONS_NAME = "sessions";
private static final String MESSAGES_NAME = "messages";
private static final String RETAINED_NAME = "retained";
@@ -55,6 +61,7 @@ public JsonObject adaptToJson(
String serverRef = mqttKafkaOptions.serverRef;
MqttKafkaTopicsConfig topics = mqttKafkaOptions.topics;
+ List<String> clients = mqttKafkaOptions.clients;
if (serverRef != null)
{
@@ -83,6 +90,12 @@ public JsonObject adaptToJson(
object.add(TOPICS_NAME, newTopics);
}
+ if (clients != null && !clients.isEmpty())
+ {
+ JsonArrayBuilder clientsBuilder = Json.createArrayBuilder();
+ clients.forEach(clientsBuilder::add);
+ object.add(CLIENTS_NAME, clientsBuilder.build());
+ }
return object.build();
}
@@ -93,6 +106,16 @@ public OptionsConfig adaptFromJson(
{
JsonObject topics = object.getJsonObject(TOPICS_NAME);
String server = object.getString(SERVER_NAME, null);
+ JsonArray clientsJson = object.getJsonArray(CLIENTS_NAME);
+
+ List<String> clients = new ArrayList<>();
+ if (clientsJson != null)
+ {
+ for (int i = 0; i < clientsJson.size(); i++)
+ {
+ clients.add(clientsJson.getString(i));
+ }
+ }
String16FW newSessions = new String16FW(topics.getString(SESSIONS_NAME));
String16FW newMessages = new String16FW(topics.getString(MESSAGES_NAME));
@@ -100,6 +123,6 @@ public OptionsConfig adaptFromJson(
MqttKafkaTopicsConfig newTopics = new MqttKafkaTopicsConfig(newSessions, newMessages, newRetained);
- return new MqttKafkaOptionsConfig(newTopics, server);
+ return new MqttKafkaOptionsConfig(newTopics, server, clients);
}
}
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaRouteConfig.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaRouteConfig.java
index 2fb5af14a3..f00e15db70 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaRouteConfig.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaRouteConfig.java
@@ -14,26 +14,44 @@
*/
package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config;
+import static java.util.stream.Collectors.toList;
+
+import java.util.List;
+import java.util.Optional;
import java.util.function.LongPredicate;
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.config.MqttKafkaConditionConfig;
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.config.MqttKafkaConditionKind;
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.String16FW;
import io.aklivity.zilla.runtime.engine.config.RouteConfig;
public class MqttKafkaRouteConfig
{
+ private final Optional<MqttKafkaWithResolver> with;
+ private final List<MqttKafkaConditionMatcher> when;
+ private final LongPredicate authorized;
+
public final long id;
+ public final long order;
- //TODO: add when: capabilities, with: kafka_topic_name
- //private final List when;
- private final LongPredicate authorized;
+ public final String16FW messages;
+ public final String16FW retained;
public MqttKafkaRouteConfig(
+ MqttKafkaOptionsConfig options,
RouteConfig route)
{
this.id = route.id;
- // this.when = route.when.stream()
- // .map(MqttKafkaConditionConfig.class::cast)
- // .map(MqttKafkaConditionMatcher::new)
- // .collect(toList());
+ this.order = route.order;
+ this.with = Optional.ofNullable(route.with)
+ .map(MqttKafkaWithConfig.class::cast)
+ .map(c -> new MqttKafkaWithResolver(options, c));
+ this.messages = with.isPresent() ? with.get().messages() : options.topics.messages;
+ this.retained = options.topics.retained;
+ this.when = route.when.stream()
+ .map(MqttKafkaConditionConfig.class::cast)
+ .map(MqttKafkaConditionMatcher::new)
+ .collect(toList());
this.authorized = route.authorized;
}
@@ -43,9 +61,18 @@ boolean authorized(
return authorized.test(authorization);
}
- // boolean matches(
- // String topic)
- // {
- // return when.isEmpty() || when.stream().anyMatch(m -> m.matches(topic));
- // }
+ public boolean matchesClient(
+ String client)
+ {
+ return !when.isEmpty() && when.stream()
+ .filter(m -> m.kind == MqttKafkaConditionKind.SUBSCRIBE)
+ .allMatch(m -> m.matches(client));
+ }
+
+ public boolean matches(
+ String topic)
+ {
+ return when.isEmpty() || when.stream()
+ .anyMatch(m -> m.matches(topic));
+ }
}
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithConfig.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithConfig.java
new file mode 100644
index 0000000000..5d7b0b77d7
--- /dev/null
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithConfig.java
@@ -0,0 +1,28 @@
+/*
+ * Copyright 2021-2023 Aklivity Inc
+ *
+ * Licensed under the Aklivity Community License (the "License"); you may not use
+ * this file except in compliance with the License. You may obtain a copy of the
+ * License at
+ *
+ * https://www.aklivity.io/aklivity-community-license/
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ */
+package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config;
+
+import io.aklivity.zilla.runtime.engine.config.WithConfig;
+
+public class MqttKafkaWithConfig extends WithConfig
+{
+ public final String messages;
+
+ public MqttKafkaWithConfig(
+ String messages)
+ {
+ this.messages = messages;
+ }
+}
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithConfigAdapter.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithConfigAdapter.java
new file mode 100644
index 0000000000..0f345bf758
--- /dev/null
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithConfigAdapter.java
@@ -0,0 +1,62 @@
+/*
+ * Copyright 2021-2023 Aklivity Inc
+ *
+ * Licensed under the Aklivity Community License (the "License"); you may not use
+ * this file except in compliance with the License. You may obtain a copy of the
+ * License at
+ *
+ * https://www.aklivity.io/aklivity-community-license/
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ */
+package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config;
+
+import jakarta.json.Json;
+import jakarta.json.JsonObject;
+import jakarta.json.JsonObjectBuilder;
+import jakarta.json.bind.adapter.JsonbAdapter;
+
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaBinding;
+import io.aklivity.zilla.runtime.engine.config.WithConfig;
+import io.aklivity.zilla.runtime.engine.config.WithConfigAdapterSpi;
+
+public class MqttKafkaWithConfigAdapter implements WithConfigAdapterSpi, JsonbAdapter<WithConfig, JsonObject>
+{
+ private static final String MESSAGES_NAME = "messages";
+
+ @Override
+ public String type()
+ {
+ return MqttKafkaBinding.NAME;
+ }
+
+ @Override
+ public JsonObject adaptToJson(
+ WithConfig with)
+ {
+ MqttKafkaWithConfig config = (MqttKafkaWithConfig) with;
+
+ JsonObjectBuilder object = Json.createObjectBuilder();
+
+ if (config.messages != null)
+ {
+ object.add(MESSAGES_NAME, config.messages);
+ }
+
+ return object.build();
+ }
+
+ @Override
+ public WithConfig adaptFromJson(
+ JsonObject object)
+ {
+ String topic = object.containsKey(MESSAGES_NAME)
+ ? object.getString(MESSAGES_NAME)
+ : null;
+
+ return new MqttKafkaWithConfig(topic);
+ }
+}
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithResolver.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithResolver.java
new file mode 100644
index 0000000000..4d8850821e
--- /dev/null
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithResolver.java
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2021-2023 Aklivity Inc
+ *
+ * Licensed under the Aklivity Community License (the "License"); you may not use
+ * this file except in compliance with the License. You may obtain a copy of the
+ * License at
+ *
+ * https://www.aklivity.io/aklivity-community-license/
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ */
+package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config;
+
+
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.String16FW;
+
+public class MqttKafkaWithResolver
+{
+ private final String16FW messages;
+
+ public MqttKafkaWithResolver(
+ MqttKafkaOptionsConfig options,
+ MqttKafkaWithConfig with)
+ {
+ this.messages = with.messages == null ? options.topics.messages : new String16FW(with.messages);
+ }
+
+ public String16FW messages()
+ {
+ return messages;
+ }
+}
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaProxyFactory.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaProxyFactory.java
index 08e8367cbf..7648a531bb 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaProxyFactory.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaProxyFactory.java
@@ -14,8 +14,6 @@
*/
package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.stream;
-import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaConfiguration.WILL_STREAM_RECONNECT_DELAY;
-
import org.agrona.DirectBuffer;
import org.agrona.collections.Int2ObjectHashMap;
import org.agrona.collections.Long2ObjectHashMap;
@@ -58,7 +56,7 @@ public MqttKafkaProxyFactory(
config, context, bindings::get);
final MqttKafkaSessionFactory sessionFactory = new MqttKafkaSessionFactory(
- config, context, instanceId, bindings::get, WILL_STREAM_RECONNECT_DELAY);
+ config, context, instanceId, bindings::get);
factories.put(MqttBeginExFW.KIND_PUBLISH, publishFactory);
factories.put(MqttBeginExFW.KIND_SUBSCRIBE, subscribeFactory);
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaPublishFactory.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaPublishFactory.java
index 258fa73218..f9bf3eb307 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaPublishFactory.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaPublishFactory.java
@@ -17,6 +17,8 @@
import static java.time.Instant.now;
import java.nio.ByteOrder;
+import java.util.List;
+import java.util.function.Function;
import java.util.function.LongFunction;
import java.util.function.LongUnaryOperator;
@@ -136,17 +138,24 @@ public MessageConsumer newStream(
final long routedId = begin.routedId();
final long initialId = begin.streamId();
final long authorization = begin.authorization();
+ final OctetsFW extension = begin.extension();
+ final MqttBeginExFW mqttBeginEx = extension.get(mqttBeginExRO::tryWrap);
+
+ assert mqttBeginEx.kind() == MqttBeginExFW.KIND_PUBLISH;
+ final MqttPublishBeginExFW mqttPublishBeginEx = mqttBeginEx.publish();
final MqttKafkaBindingConfig binding = supplyBinding.apply(routedId);
- final MqttKafkaRouteConfig resolved = binding != null ? binding.resolve(authorization) : null;
+ final MqttKafkaRouteConfig resolved = binding != null ?
+ binding.resolve(authorization, mqttPublishBeginEx.topic().asString()) : null;
MessageConsumer newStream = null;
if (resolved != null)
{
final long resolvedId = resolved.id;
+ final String16FW messagesTopic = resolved.messages;
newStream = new MqttPublishProxy(mqtt, originId, routedId, initialId, resolvedId,
- binding.messagesTopic(), binding.retainedTopic())::onMqttMessage;
+ messagesTopic, binding.retainedTopic(), binding.clients)::onMqttMessage;
}
return newStream;
@@ -161,8 +170,7 @@ private final class MqttPublishProxy
private final long replyId;
private final KafkaMessagesProxy messages;
private final KafkaRetainedProxy retained;
- private final String16FW kafkaMessagesTopic;
- private final String16FW kafkaRetainedTopic;
+ private final List<Function<String, String>> clients;
private int state;
@@ -176,6 +184,7 @@ private final class MqttPublishProxy
private int replyPad;
private KafkaKeyFW key;
+ private KafkaKeyFW hashKey;
private Array32FW topicNameHeaders;
private OctetsFW clientIdOctets;
@@ -188,17 +197,17 @@ private MqttPublishProxy(
long initialId,
long resolvedId,
String16FW kafkaMessagesTopic,
- String16FW kafkaRetainedTopic)
+ String16FW kafkaRetainedTopic,
+ List<Function<String, String>> clients)
{
this.mqtt = mqtt;
this.originId = originId;
this.routedId = routedId;
this.initialId = initialId;
this.replyId = supplyReplyId.applyAsLong(initialId);
- this.messages = new KafkaMessagesProxy(originId, resolvedId, this);
- this.retained = new KafkaRetainedProxy(originId, resolvedId, this);
- this.kafkaMessagesTopic = kafkaMessagesTopic;
- this.kafkaRetainedTopic = kafkaRetainedTopic;
+ this.messages = new KafkaMessagesProxy(originId, resolvedId, this, kafkaMessagesTopic);
+ this.retained = new KafkaRetainedProxy(originId, resolvedId, this, kafkaRetainedTopic);
+ this.clients = clients;
}
private void onMqttMessage(
@@ -296,12 +305,39 @@ private void onMqttBegin(
.value(topicNameBuffer, 0, topicNameBuffer.capacity())
.build();
- messages.doKafkaBegin(traceId, authorization, affinity, kafkaMessagesTopic);
+ final String clientHashKey = clientHashKey(topicName);
+ if (clientHashKey != null)
+ {
+ final DirectBuffer clientHashKeyBuffer = new String16FW(clientHashKey).value();
+ final MutableDirectBuffer hashKeyBuffer = new UnsafeBuffer(new byte[clientHashKeyBuffer.capacity() + 4]);
+ hashKey = new KafkaKeyFW.Builder()
+ .wrap(hashKeyBuffer, 0, hashKeyBuffer.capacity())
+ .length(clientHashKeyBuffer.capacity())
+ .value(clientHashKeyBuffer, 0, clientHashKeyBuffer.capacity())
+ .build();
+ }
+
+ messages.doKafkaBegin(traceId, authorization, affinity);
this.retainAvailable = (mqttPublishBeginEx.flags() & 1 << MqttPublishFlags.RETAIN.value()) != 0;
if (retainAvailable)
{
- retained.doKafkaBegin(traceId, authorization, affinity, kafkaRetainedTopic);
+ retained.doKafkaBegin(traceId, authorization, affinity);
+ }
+ }
+
+ private String clientHashKey(
+ String topicName)
+ {
+ String clientHashKey = null;
+ if (clients != null)
+ {
+ for (Function<String, String> client : clients)
+ {
+ clientHashKey = client.apply(topicName);
+ break;
+ }
}
+ return clientHashKey;
}
private void onMqttData(
@@ -320,7 +356,7 @@ private void onMqttData(
assert acknowledge <= sequence;
assert sequence >= initialSeq;
- initialSeq = sequence;
+ initialSeq = sequence + reserved;
assert initialAck <= initialSeq;
@@ -357,7 +393,8 @@ private void onMqttData(
addHeader(helper.kafkaContentTypeHeaderName, mqttPublishDataEx.contentType());
}
- if (payload.sizeof() != 0 && mqttPublishDataEx.format() != null)
+ if (payload.sizeof() != 0 && mqttPublishDataEx.format() != null &&
+ !mqttPublishDataEx.format().get().equals(MqttPayloadFormat.NONE))
{
addHeader(helper.kafkaFormatHeaderName, mqttPublishDataEx.format());
}
@@ -365,7 +402,7 @@ private void onMqttData(
if (mqttPublishDataEx.responseTopic().length() != -1)
{
final String16FW responseTopic = mqttPublishDataEx.responseTopic();
- addHeader(helper.kafkaReplyToHeaderName, kafkaMessagesTopic);
+ addHeader(helper.kafkaReplyToHeaderName, messages.topic);
addHeader(helper.kafkaReplyKeyHeaderName, responseTopic);
addFiltersHeader(responseTopic);
@@ -388,6 +425,7 @@ private void onMqttData(
.timestamp(now().toEpochMilli())
.partition(p -> p.partitionId(-1).partitionOffset(-1))
.key(b -> b.set(key))
+ .hashKey(this::setHashKey)
.headers(kafkaHeadersRW.build()))
.build();
@@ -414,6 +452,15 @@ private void onMqttData(
}
}
+ private void setHashKey(
+ KafkaKeyFW.Builder builder)
+ {
+ if (hashKey != null)
+ {
+ builder.set(hashKey);
+ }
+ }
+
private void addFiltersHeader(
String16FW responseTopic)
{
@@ -604,6 +651,8 @@ private void doMqttWindow(
initialAck = newInitialAck;
initialMax = newInitialMax;
+ assert initialAck <= initialSeq;
+
doWindow(mqtt, originId, routedId, initialId, initialSeq, initialAck, initialMax,
traceId, authorization, budgetId, padding, 0, capabilities);
}
@@ -711,6 +760,7 @@ final class KafkaMessagesProxy
private final long initialId;
private final long replyId;
private final MqttPublishProxy delegate;
+ private final String16FW topic;
private int state;
@@ -726,20 +776,22 @@ final class KafkaMessagesProxy
private KafkaMessagesProxy(
long originId,
long routedId,
- MqttPublishProxy delegate)
+ MqttPublishProxy delegate,
+ String16FW topic)
{
this.originId = originId;
this.routedId = routedId;
this.delegate = delegate;
this.initialId = supplyInitialId.applyAsLong(routedId);
this.replyId = supplyReplyId.applyAsLong(initialId);
+ this.topic = topic;
+
}
private void doKafkaBegin(
long traceId,
long authorization,
- long affinity,
- String16FW kafkaMessagesTopic)
+ long affinity)
{
initialSeq = delegate.initialSeq;
initialAck = delegate.initialAck;
@@ -747,7 +799,7 @@ private void doKafkaBegin(
state = MqttKafkaState.openingInitial(state);
kafka = newKafkaStream(this::onKafkaMessage, originId, routedId, initialId, initialSeq, initialAck, initialMax,
- traceId, authorization, affinity, kafkaMessagesTopic);
+ traceId, authorization, affinity, topic);
}
private void doKafkaData(
@@ -1016,6 +1068,7 @@ final class KafkaRetainedProxy
private final long initialId;
private final long replyId;
private final MqttPublishProxy delegate;
+ private final String16FW topic;
private int state;
@@ -1031,20 +1084,21 @@ final class KafkaRetainedProxy
private KafkaRetainedProxy(
long originId,
long routedId,
- MqttPublishProxy delegate)
+ MqttPublishProxy delegate,
+ String16FW topic)
{
this.originId = originId;
this.routedId = routedId;
this.delegate = delegate;
this.initialId = supplyInitialId.applyAsLong(routedId);
this.replyId = supplyReplyId.applyAsLong(initialId);
+ this.topic = topic;
}
private void doKafkaBegin(
long traceId,
long authorization,
- long affinity,
- String16FW kafkaRetainedTopic)
+ long affinity)
{
initialSeq = delegate.initialSeq;
initialAck = delegate.initialAck;
@@ -1052,7 +1106,7 @@ private void doKafkaBegin(
state = MqttKafkaState.openingInitial(state);
kafka = newKafkaStream(this::onKafkaMessage, originId, routedId, initialId, initialSeq, initialAck, initialMax,
- traceId, authorization, affinity, kafkaRetainedTopic);
+ traceId, authorization, affinity, topic);
}
private void doKafkaData(
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSessionFactory.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSessionFactory.java
index 1b9393a9a1..17479704a8 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSessionFactory.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSessionFactory.java
@@ -90,7 +90,6 @@
import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.stream.ResetFW;
import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.stream.SignalFW;
import io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.stream.WindowFW;
-import io.aklivity.zilla.runtime.engine.Configuration.IntPropertyDef;
import io.aklivity.zilla.runtime.engine.EngineContext;
import io.aklivity.zilla.runtime.engine.binding.BindingHandler;
import io.aklivity.zilla.runtime.engine.binding.function.MessageConsumer;
@@ -215,8 +214,7 @@ public MqttKafkaSessionFactory(
MqttKafkaConfiguration config,
EngineContext context,
InstanceId instanceId,
- LongFunction<MqttKafkaBindingConfig> supplyBinding,
- IntPropertyDef reconnectDelay)
+ LongFunction<MqttKafkaBindingConfig> supplyBinding)
{
this.kafkaTypeId = context.supplyTypeId(KAFKA_TYPE_NAME);
this.mqttTypeId = context.supplyTypeId(MQTT_TYPE_NAME);
@@ -247,7 +245,7 @@ public MqttKafkaSessionFactory(
this.willDeliverIds = new Object2ObjectHashMap<>();
this.sessionExpiryIds = new Object2LongHashMap<>(-1);
this.instanceId = instanceId;
- this.reconnectDelay = reconnectDelay.getAsInt(config);
+ this.reconnectDelay = config.willStreamReconnectDelay();
}
@Override
diff --git a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSubscribeFactory.java b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSubscribeFactory.java
index 853b280aa3..900ab943e3 100644
--- a/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSubscribeFactory.java
+++ b/runtime/binding-mqtt-kafka/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSubscribeFactory.java
@@ -15,17 +15,22 @@
package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.stream;
+import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.stream.MqttKafkaSessionFactory.MQTT_CLIENTS_GROUP_ID;
import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.MqttPublishFlags.RETAIN;
import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.MqttSubscribeFlags.NO_LOCAL;
import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.MqttSubscribeFlags.RETAIN_AS_PUBLISHED;
import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.types.MqttSubscribeFlags.SEND_RETAINED;
import static io.aklivity.zilla.runtime.engine.buffer.BufferPool.NO_SLOT;
+import static io.aklivity.zilla.runtime.engine.concurrent.Signaler.NO_CANCEL_ID;
+import static java.lang.System.currentTimeMillis;
+import static java.util.concurrent.TimeUnit.SECONDS;
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongFunction;
import java.util.function.LongUnaryOperator;
+import java.util.function.Supplier;
import org.agrona.DirectBuffer;
import org.agrona.MutableDirectBuffer;
@@ -72,6 +77,7 @@
import io.aklivity.zilla.runtime.engine.binding.BindingHandler;
import io.aklivity.zilla.runtime.engine.binding.function.MessageConsumer;
import io.aklivity.zilla.runtime.engine.buffer.BufferPool;
+import io.aklivity.zilla.runtime.engine.concurrent.Signaler;
public class MqttKafkaSubscribeFactory implements MqttKafkaStreamFactory
{
@@ -83,7 +89,8 @@ public class MqttKafkaSubscribeFactory implements MqttKafkaStreamFactory
private static final int SEND_RETAIN_FLAG = 1 << SEND_RETAINED.ordinal();
private static final int RETAIN_FLAG = 1 << RETAIN.ordinal();
private static final int RETAIN_AS_PUBLISHED_FLAG = 1 << RETAIN_AS_PUBLISHED.ordinal();
- public static final int DATA_FIN_FLAG = 0x03;
+ private static final int SIGNAL_CONNECT_BOOTSTRAP_STREAM = 1;
+ private static final int DATA_FIN_FLAG = 0x03;
private final OctetsFW emptyRO = new OctetsFW().wrap(new UnsafeBuffer(0L, 0), 0, 0);
private final BeginFW beginRO = new BeginFW();
private final DataFW dataRO = new DataFW();
@@ -115,7 +122,7 @@ public class MqttKafkaSubscribeFactory implements MqttKafkaStreamFactory
private final KafkaBeginExFW.Builder kafkaBeginExRW = new KafkaBeginExFW.Builder();
private final KafkaFlushExFW.Builder kafkaFlushExRW = new KafkaFlushExFW.Builder();
- private final Array32FW.Builder<MqttTopicFilterFW.Builder, MqttTopicFilterFW> sendRetainedFiltersRW =
+ private final Array32FW.Builder<MqttTopicFilterFW.Builder, MqttTopicFilterFW> filtersRW =
new Array32FW.Builder<>(new MqttTopicFilterFW.Builder(), new MqttTopicFilterFW());
private final Array32FW.Builder subscriptionIdsRW =
@@ -124,15 +131,22 @@ public class MqttKafkaSubscribeFactory implements MqttKafkaStreamFactory
private final MutableDirectBuffer writeBuffer;
private final MutableDirectBuffer extBuffer;
private final MutableDirectBuffer subscriptionIdsBuffer;
- private final MutableDirectBuffer retainFilterBuffer;
+ private final MutableDirectBuffer filterBuffer;
private final BindingHandler streamFactory;
+ private final Signaler signaler;
private final BufferPool bufferPool;
private final LongUnaryOperator supplyInitialId;
private final LongUnaryOperator supplyReplyId;
+ private final Supplier<Long> supplyTraceId;
private final int mqttTypeId;
private final int kafkaTypeId;
private final LongFunction<MqttKafkaBindingConfig> supplyBinding;
private final MqttKafkaHeaderHelper helper;
+ private final int reconnectDelay;
+ private final boolean bootstrapAvailable;
+ private final List<KafkaMessagesBootstrap> bootstrapStreams;
+
+ private int reconnectAttempt;
public MqttKafkaSubscribeFactory(
MqttKafkaConfiguration config,
@@ -144,13 +158,46 @@ public MqttKafkaSubscribeFactory(
this.writeBuffer = new UnsafeBuffer(new byte[context.writeBuffer().capacity()]);
this.extBuffer = new UnsafeBuffer(new byte[context.writeBuffer().capacity()]);
this.subscriptionIdsBuffer = new UnsafeBuffer(new byte[context.writeBuffer().capacity()]);
- this.retainFilterBuffer = new UnsafeBuffer(new byte[context.writeBuffer().capacity()]);
+ this.filterBuffer = new UnsafeBuffer(new byte[context.writeBuffer().capacity()]);
this.streamFactory = context.streamFactory();
+ this.signaler = context.signaler();
this.bufferPool = context.bufferPool();
this.supplyInitialId = context::supplyInitialId;
this.supplyReplyId = context::supplyReplyId;
+ this.supplyTraceId = context::supplyTraceId;
this.supplyBinding = supplyBinding;
this.helper = new MqttKafkaHeaderHelper();
+ this.bootstrapAvailable = config.bootstrapAvailable();
+ this.reconnectDelay = config.bootstrapStreamReconnectDelay();
+ this.bootstrapStreams = new ArrayList<>();
+ }
+
+ @Override
+ public void onAttached(
+ long bindingId)
+ {
+ if (bootstrapAvailable)
+ {
+ MqttKafkaBindingConfig binding = supplyBinding.apply(bindingId);
+ List<MqttKafkaRouteConfig> bootstrap = binding.bootstrapRoutes();
+ bootstrap.forEach(r ->
+ {
+ final KafkaMessagesBootstrap stream = new KafkaMessagesBootstrap(binding.id, r);
+ bootstrapStreams.add(stream);
+ stream.doKafkaBeginAt(currentTimeMillis());
+ });
+ }
+ }
+
+ @Override
+ public void onDetached(
+ long bindingId)
+ {
+ for (KafkaMessagesBootstrap stream : bootstrapStreams)
+ {
+ stream.doKafkaEnd(supplyTraceId.get(), 0);
+ }
+ bootstrapStreams.clear();
}
@Override
@@ -166,21 +213,21 @@ public MessageConsumer newStream(
final long routedId = begin.routedId();
final long initialId = begin.streamId();
final long authorization = begin.authorization();
+ final OctetsFW extension = begin.extension();
+ final MqttBeginExFW mqttBeginEx = extension.get(mqttBeginExRO::tryWrap);
- final MqttKafkaBindingConfig binding = supplyBinding.apply(routedId);
+ assert mqttBeginEx.kind() == MqttBeginExFW.KIND_SUBSCRIBE;
+ final MqttSubscribeBeginExFW mqttSubscribeBeginEx = mqttBeginEx.subscribe();
- final MqttKafkaRouteConfig resolved = binding != null ?
- binding.resolve(authorization) : null;
+ final MqttKafkaBindingConfig binding = supplyBinding.apply(routedId);
+ final List<MqttKafkaRouteConfig> routes = binding != null ?
+ binding.resolveAll(authorization, mqttSubscribeBeginEx.filters()) : null;
MessageConsumer newStream = null;
- if (resolved != null)
+ if (routes != null && !routes.isEmpty())
{
- final long resolvedId = resolved.id;
- final String16FW kafkaMessagesTopic = binding.messagesTopic();
- final String16FW kafkaRetainedTopic = binding.retainedTopic();
- newStream = new MqttSubscribeProxy(mqtt, originId, routedId, initialId, resolvedId,
- kafkaMessagesTopic, kafkaRetainedTopic)::onMqttMessage;
+ newStream = new MqttSubscribeProxy(mqtt, originId, routedId, initialId, routes)::onMqttMessage;
}
return newStream;
@@ -193,7 +240,7 @@ private final class MqttSubscribeProxy
private final long routedId;
private final long initialId;
private final long replyId;
- private final KafkaMessagesProxy messages;
+ private final Long2ObjectHashMap<KafkaMessagesProxy> messages;
private final KafkaRetainedProxy retained;
private int state;
@@ -209,7 +256,6 @@ private final class MqttSubscribeProxy
private long replyBud;
private int mqttSharedBudget;
- private final IntArrayList messagesSubscriptionIds;
private final IntArrayList retainedSubscriptionIds;
private final Long2ObjectHashMap retainAsPublished;
private final List retainedSubscriptions;
@@ -221,21 +267,20 @@ private MqttSubscribeProxy(
long originId,
long routedId,
long initialId,
- long resolvedId,
- String16FW kafkaMessagesTopic,
- String16FW kafkaRetainedTopic)
+ List<MqttKafkaRouteConfig> routes)
{
this.mqtt = mqtt;
this.originId = originId;
this.routedId = routedId;
this.initialId = initialId;
this.replyId = supplyReplyId.applyAsLong(initialId);
- this.messagesSubscriptionIds = new IntArrayList();
this.retainedSubscriptionIds = new IntArrayList();
this.retainedSubscriptions = new ArrayList<>();
this.retainAsPublished = new Long2ObjectHashMap<>();
- this.messages = new KafkaMessagesProxy(originId, resolvedId, kafkaMessagesTopic, this);
- this.retained = new KafkaRetainedProxy(originId, resolvedId, kafkaRetainedTopic, this);
+ this.messages = new Long2ObjectHashMap<>();
+ routes.forEach(r -> messages.put(r.order, new KafkaMessagesProxy(originId, r, this)));
+ final MqttKafkaRouteConfig retainedRoute = routes.get(0);
+ this.retained = new KafkaRetainedProxy(originId, retainedRoute.id, retainedRoute.retained, this);
}
private void onMqttMessage(
@@ -305,18 +350,7 @@ private void onMqttBegin(
clientId = newString16FW(mqttSubscribeBeginEx.clientId());
Array32FW<MqttTopicFilterFW> filters = mqttSubscribeBeginEx.filters();
- filters.forEach(filter ->
- {
- int subscriptionId = (int) filter.subscriptionId();
- if (!messagesSubscriptionIds.contains(subscriptionId))
- {
- messagesSubscriptionIds.add(subscriptionId);
- }
- if ((filter.flags() & SEND_RETAIN_FLAG) != 0)
- {
- retainAvailable = true;
- }
- });
+ filters.forEach(f -> retainAvailable |= (f.flags() & SEND_RETAIN_FLAG) != 0);
final List retainedFilters = new ArrayList<>();
if (retainAvailable)
@@ -335,7 +369,7 @@ private void onMqttBegin(
{
retained.doKafkaBegin(traceId, authorization, affinity, retainedFilters);
}
- messages.doKafkaBegin(traceId, authorization, affinity, filters);
+ messages.values().forEach(m -> m.doKafkaBegin(traceId, authorization, affinity, filters));
}
private void onMqttFlush(
@@ -356,55 +390,36 @@ private void onMqttFlush(
assert initialAck <= initialSeq;
+ final MqttKafkaBindingConfig binding = supplyBinding.apply(routedId);
+
final OctetsFW extension = flush.extension();
final MqttFlushExFW mqttFlushEx = extension.get(mqttFlushExRO::tryWrap);
assert mqttFlushEx.kind() == MqttFlushExFW.KIND_SUBSCRIBE;
final MqttSubscribeFlushExFW mqttSubscribeFlushEx = mqttFlushEx.subscribe();
- Array32FW<MqttTopicFilterFW> filters = mqttSubscribeFlushEx.filters();
- messagesSubscriptionIds.clear();
+ final Array32FW<MqttTopicFilterFW> filters = mqttSubscribeFlushEx.filters();
- final KafkaFlushExFW kafkaFlushEx =
- kafkaFlushExRW.wrap(writeBuffer, FlushFW.FIELD_OFFSET_EXTENSION, writeBuffer.capacity())
- .typeId(kafkaTypeId)
- .merged(m -> m.fetch(f ->
+ final List<MqttKafkaRouteConfig> routes = binding != null ?
+ binding.resolveAll(authorization, filters) : null;
+
+ if (routes != null)
+ {
+ routes.forEach(r ->
{
- f.capabilities(c -> c.set(KafkaCapabilities.FETCH_ONLY));
- filters.forEach(filter ->
+ final long routeOrder = r.order;
+ if (!messages.containsKey(routeOrder))
{
- if ((filter.flags() & SEND_RETAIN_FLAG) != 0)
- {
- retainAvailable = true;
- }
- f.filtersItem(fi ->
- {
- final int subscriptionId = (int) filter.subscriptionId();
- fi.conditionsItem(ci ->
- {
- if (!messagesSubscriptionIds.contains(subscriptionId))
- {
- messagesSubscriptionIds.add(subscriptionId);
- }
- buildHeaders(ci, filter.pattern().asString());
- });
-
- final boolean noLocal = (filter.flags() & NO_LOCAL_FLAG) != 0;
- if (noLocal)
- {
- final DirectBuffer valueBuffer = clientId.value();
- fi.conditionsItem(i -> i.not(n -> n.condition(c -> c.header(h ->
- h.nameLen(helper.kafkaLocalHeaderName.sizeof())
- .name(helper.kafkaLocalHeaderName)
- .valueLen(valueBuffer.capacity())
- .value(valueBuffer, 0, valueBuffer.capacity())))));
- }
- });
- });
- }))
- .build();
-
- messages.doKafkaFlush(traceId, authorization, budgetId, reserved, kafkaFlushEx);
+ KafkaMessagesProxy messagesProxy = new KafkaMessagesProxy(originId, r, this);
+ messages.put(routeOrder, messagesProxy);
+ messagesProxy.doKafkaBegin(traceId, authorization, 0, filters);
+ }
+ else
+ {
+ messages.get(routeOrder).doKafkaFlush(traceId, authorization, budgetId, reserved, filters);
+ }
+ });
+ }
if (retainAvailable)
{
@@ -461,7 +476,8 @@ private void onMqttData(
doMqttReset(traceId);
- messages.doKafkaAbort(traceId, authorization);
+ messages.values().forEach(m -> m.doKafkaAbort(traceId, authorization));
+ messages.clear();
if (retainAvailable)
{
retained.doKafkaAbort(traceId, authorization);
@@ -486,10 +502,11 @@ private void onMqttEnd(
assert initialAck <= initialSeq;
- messages.doKafkaEnd(traceId, initialSeq, authorization);
+ messages.values().forEach(m -> m.doKafkaEnd(traceId, authorization));
+ messages.clear();
if (retainAvailable)
{
- retained.doKafkaEnd(traceId, initialSeq, authorization);
+ retained.doKafkaEnd(traceId, authorization);
}
}
@@ -510,7 +527,8 @@ private void onMqttAbort(
assert initialAck <= initialSeq;
- messages.doKafkaAbort(traceId, authorization);
+ messages.values().forEach(m -> m.doKafkaAbort(traceId, authorization));
+ messages.clear();
if (retainAvailable)
{
retained.doKafkaAbort(traceId, authorization);
@@ -537,7 +555,8 @@ private void onMqttReset(
assert replyAck <= replySeq;
- messages.doKafkaReset(traceId);
+ messages.values().forEach(m -> m.doKafkaReset(traceId));
+ messages.clear();
if (retainAvailable)
{
retained.doKafkaReset(traceId);
@@ -575,11 +594,11 @@ private void onMqttWindow(
{
retained.doKafkaWindow(traceId, authorization, budgetId, padding, capabilities);
}
- else if (messages.messageSlotOffset != messages.messageSlotLimit)
+ else
{
- messages.flushData(traceId, authorization, budgetId);
+ messages.values().forEach(m -> m.flushDataIfNecessary(traceId, authorization, budgetId));
}
- messages.doKafkaWindow(traceId, authorization, budgetId, padding, capabilities);
+ messages.values().forEach(m -> m.doKafkaWindow(traceId, authorization, budgetId, padding, capabilities));
}
private void doMqttBegin(
@@ -616,8 +635,6 @@ private void doMqttFlush(
long budgetId,
int reserved)
{
- replySeq = messages.replySeq;
-
doFlush(mqtt, originId, routedId, replyId, replySeq, replyAck, replyMax,
traceId, authorization, budgetId, reserved, emptyRO);
}
@@ -628,7 +645,6 @@ private void doMqttAbort(
{
if (!MqttKafkaState.replyClosed(state))
{
- replySeq = messages.replySeq;
state = MqttKafkaState.closeReply(state);
doAbort(mqtt, originId, routedId, replyId, replySeq, replyAck, replyMax, traceId, authorization);
@@ -641,7 +657,6 @@ private void doMqttEnd(
{
if (!MqttKafkaState.replyClosed(state))
{
- replySeq = messages.replySeq;
state = MqttKafkaState.closeReply(state);
doEnd(mqtt, originId, routedId, replyId, replySeq, replyAck, replyMax, traceId, authorization);
@@ -655,9 +670,6 @@ private void doMqttWindow(
int padding,
int capabilities)
{
- initialAck = messages.initialAck;
- initialMax = messages.initialMax;
-
doWindow(mqtt, originId, routedId, initialId, initialSeq, initialAck, initialMax,
traceId, authorization, budgetId, padding, 0, capabilities);
}
@@ -684,6 +696,244 @@ private int replyWindow()
}
}
+ private final class KafkaMessagesBootstrap
+ {
+ private final String16FW topic;
+ private MessageConsumer kafka;
+ private final long originId;
+ private final long routedId;
+ private final long initialId;
+ private final long replyId;
+ private int state;
+
+ private long initialSeq;
+ private long initialAck;
+ private int initialMax;
+
+ private long replySeq;
+ private long replyAck;
+ private int replyMax;
+ private int replyPad;
+ private long reconnectAt;
+
+ private KafkaMessagesBootstrap(
+ long originId,
+ MqttKafkaRouteConfig route)
+ {
+ this.originId = originId;
+ this.routedId = route.id;
+ this.topic = route.messages;
+ this.initialId = supplyInitialId.applyAsLong(routedId);
+ this.replyId = supplyReplyId.applyAsLong(initialId);
+ }
+
+ private void doKafkaBeginAt(
+ long timeMillis)
+ {
+ this.reconnectAt = signaler.signalAt(
+ timeMillis,
+ SIGNAL_CONNECT_BOOTSTRAP_STREAM,
+ this::onSignalConnectBootstrapStream);
+ }
+
+ private void onSignalConnectBootstrapStream(
+ int signalId)
+ {
+ assert signalId == SIGNAL_CONNECT_BOOTSTRAP_STREAM;
+
+ this.reconnectAt = NO_CANCEL_ID;
+ doKafkaBegin(supplyTraceId.get(), 0, 0);
+ }
+
+ private void doKafkaBegin(
+ long traceId,
+ long authorization,
+ long affinity)
+ {
+ reconnectAttempt = 0;
+ state = MqttKafkaState.openingInitial(state);
+
+ kafka = newKafkaBootstrapStream(this::onKafkaMessage, originId, routedId, initialId, initialSeq, initialAck,
+ initialMax, traceId, authorization, affinity, topic);
+ }
+
+ private void doKafkaEnd(
+ long traceId,
+ long authorization)
+ {
+ if (!MqttKafkaState.initialClosed(state))
+ {
+ state = MqttKafkaState.closeInitial(state);
+
+ doEnd(kafka, originId, routedId, initialId, 0, 0, 0, traceId, authorization);
+
+ signaler.cancel(reconnectAt);
+ reconnectAt = NO_CANCEL_ID;
+ }
+ }
+
+ private void doKafkaAbort(
+ long traceId,
+ long authorization)
+ {
+ if (!MqttKafkaState.initialClosed(state))
+ {
+ state = MqttKafkaState.closeInitial(state);
+
+ doAbort(kafka, originId, routedId, initialId, 0, 0, 0, traceId, authorization);
+ }
+ }
+
+ private void doKafkaWindow(
+ long traceId,
+ long authorization,
+ long budgetId,
+ int padding,
+ int capabilities)
+ {
+ replyMax = 8192;
+
+ doWindow(kafka, originId, routedId, replyId, replySeq, replyAck, replyMax,
+ traceId, authorization, budgetId, padding, 0, capabilities);
+ }
+
+ private void onKafkaMessage(
+ int msgTypeId,
+ DirectBuffer buffer,
+ int index,
+ int length)
+ {
+ switch (msgTypeId)
+ {
+ case BeginFW.TYPE_ID:
+ final BeginFW begin = beginRO.wrap(buffer, index, index + length);
+ onKafkaBegin(begin);
+ break;
+ case EndFW.TYPE_ID:
+ final EndFW end = endRO.wrap(buffer, index, index + length);
+ onKafkaEnd(end);
+ break;
+ case AbortFW.TYPE_ID:
+ final AbortFW abort = abortRO.wrap(buffer, index, index + length);
+ onKafkaAbort(abort);
+ break;
+ case ResetFW.TYPE_ID:
+ final ResetFW reset = resetRO.wrap(buffer, index, index + length);
+ onKafkaReset(reset);
+ break;
+ }
+ }
+
+ private void onKafkaBegin(
+ BeginFW begin)
+ {
+ final long sequence = begin.sequence();
+ final long acknowledge = begin.acknowledge();
+ final int maximum = begin.maximum();
+ final long traceId = begin.traceId();
+ final long authorization = begin.authorization();
+
+ assert acknowledge <= sequence;
+ assert sequence >= replySeq;
+ assert acknowledge >= replyAck;
+
+ replySeq = sequence;
+ replyAck = acknowledge;
+ replyMax = maximum;
+ state = MqttKafkaState.openingReply(state);
+
+ assert replyAck <= replySeq;
+
+ doKafkaWindow(traceId, authorization, 0, 0, 0);
+ }
+
+ private void onKafkaEnd(
+ EndFW end)
+ {
+ final long sequence = end.sequence();
+ final long acknowledge = end.acknowledge();
+ final long traceId = end.traceId();
+ final long authorization = end.authorization();
+
+ assert acknowledge <= sequence;
+ assert sequence >= replySeq;
+
+ replySeq = sequence;
+ state = MqttKafkaState.closeReply(state);
+
+ assert replyAck <= replySeq;
+
+ doKafkaEnd(traceId, authorization);
+
+ if (reconnectDelay != 0)
+ {
+ if (reconnectAt != NO_CANCEL_ID)
+ {
+ signaler.cancel(reconnectAt);
+ }
+
+ reconnectAt = signaler.signalAt(
+ currentTimeMillis() + SECONDS.toMillis(reconnectDelay),
+ SIGNAL_CONNECT_BOOTSTRAP_STREAM,
+ this::onSignalConnectBootstrapStream);
+ }
+ }
+
+ private void onKafkaAbort(
+ AbortFW abort)
+ {
+ final long sequence = abort.sequence();
+ final long acknowledge = abort.acknowledge();
+ final long traceId = abort.traceId();
+ final long authorization = abort.authorization();
+
+ assert acknowledge <= sequence;
+ assert sequence >= replySeq;
+
+ replySeq = sequence;
+ state = MqttKafkaState.closeReply(state);
+
+ assert replyAck <= replySeq;
+
+ doKafkaAbort(traceId, authorization);
+
+ if (reconnectDelay != 0)
+ {
+ if (reconnectAt != NO_CANCEL_ID)
+ {
+ signaler.cancel(reconnectAt);
+ }
+
+ reconnectAt = signaler.signalAt(
+ currentTimeMillis() + SECONDS.toMillis(reconnectDelay),
+ SIGNAL_CONNECT_BOOTSTRAP_STREAM,
+ this::onSignalConnectBootstrapStream);
+ }
+ }
+
+ private void onKafkaReset(
+ ResetFW reset)
+ {
+ final long sequence = reset.sequence();
+ final long acknowledge = reset.acknowledge();
+
+ assert acknowledge <= sequence;
+
+ if (reconnectDelay != 0)
+ {
+ if (reconnectAt != NO_CANCEL_ID)
+ {
+ signaler.cancel(reconnectAt);
+ }
+
+ reconnectAt = signaler.signalAt(
+ currentTimeMillis() + Math.min(50 << reconnectAttempt++, SECONDS.toMillis(reconnectDelay)),
+ SIGNAL_CONNECT_BOOTSTRAP_STREAM,
+ this::onSignalConnectBootstrapStream);
+ }
+ }
+ }
+
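The reconnect scheduling above uses a capped exponential backoff: `50 << reconnectAttempt++` grows the delay geometrically from 50ms, bounded by the configured `reconnectDelay` (in seconds). A minimal standalone sketch of that arithmetic, with illustrative names (`nextDelayMillis` is not part of the Zilla codebase):

```java
// Sketch of the capped exponential backoff used when rescheduling the
// bootstrap stream after a Kafka RESET. Names here are illustrative.
public final class BackoffSketch
{
    static long nextDelayMillis(
        int attempt,
        long reconnectDelaySeconds)
    {
        // 50ms, 100ms, 200ms, ... doubling per attempt,
        // capped at the configured reconnect delay
        return Math.min(50L << attempt, reconnectDelaySeconds * 1000L);
    }

    public static void main(String[] args)
    {
        assert nextDelayMillis(0, 5) == 50L;
        assert nextDelayMillis(3, 5) == 400L;
        assert nextDelayMillis(10, 5) == 5000L; // 51200ms capped to 5000ms
    }
}
```

Note that `onKafkaEnd` and `onKafkaAbort` skip the backoff and always wait the full `reconnectDelay`, while only `onKafkaReset` ramps up via `reconnectAttempt`.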
final class KafkaMessagesProxy
{
private final String16FW topic;
@@ -692,6 +942,8 @@ final class KafkaMessagesProxy
private final long routedId;
private final long initialId;
private final long replyId;
+ private final IntArrayList messagesSubscriptionIds;
+ private final MqttKafkaRouteConfig routeConfig;
private final MqttSubscribeProxy mqtt;
private int dataSlot = NO_SLOT;
@@ -712,16 +964,23 @@ final class KafkaMessagesProxy
private KafkaMessagesProxy(
long originId,
- long routedId,
- String16FW topic,
+ MqttKafkaRouteConfig route,
MqttSubscribeProxy mqtt)
{
this.originId = originId;
- this.routedId = routedId;
- this.topic = topic;
+ this.routedId = route.id;
+ this.topic = route.messages;
+ this.routeConfig = route;
this.mqtt = mqtt;
this.initialId = supplyInitialId.applyAsLong(routedId);
this.replyId = supplyReplyId.applyAsLong(initialId);
+ this.messagesSubscriptionIds = new IntArrayList();
+ }
+
+ public boolean matchesTopicFilter(
+ String topicFilter)
+ {
+ return routeConfig.matches(topicFilter);
}
private void doKafkaBegin(
@@ -732,13 +991,30 @@ private void doKafkaBegin(
{
if (!MqttKafkaState.initialOpening(state))
{
+ final Array32FW.Builder filterBuilder =
+ filtersRW.wrap(filterBuffer, 0, filterBuffer.capacity());
+
+ filters.forEach(f ->
+ {
+ if (matchesTopicFilter(f.pattern().asString()))
+ {
+ int subscriptionId = (int) f.subscriptionId();
+ if (!messagesSubscriptionIds.contains(subscriptionId))
+ {
+ messagesSubscriptionIds.add(subscriptionId);
+ }
+ filterBuilder.item(fb -> fb
+ .subscriptionId(subscriptionId).qos(f.qos()).flags(f.flags()).pattern(f.pattern()));
+ }
+ });
+
initialSeq = mqtt.initialSeq;
initialAck = mqtt.initialAck;
initialMax = mqtt.initialMax;
state = MqttKafkaState.openingInitial(state);
kafka = newKafkaStream(this::onKafkaMessage, originId, routedId, initialId, initialSeq, initialAck, initialMax,
- traceId, authorization, affinity, mqtt.clientId, topic, filters, KafkaOffsetType.LIVE);
+ traceId, authorization, affinity, mqtt.clientId, topic, filterBuilder.build(), KafkaOffsetType.LIVE);
}
}
@@ -747,17 +1023,57 @@ private void doKafkaFlush(
long authorization,
long budgetId,
int reserved,
- Flyweight extension)
+ Array32FW filters)
{
initialSeq = mqtt.initialSeq;
+ messagesSubscriptionIds.clear();
+
+ final KafkaFlushExFW kafkaFlushEx =
+ kafkaFlushExRW.wrap(writeBuffer, FlushFW.FIELD_OFFSET_EXTENSION, writeBuffer.capacity())
+ .typeId(kafkaTypeId)
+ .merged(m -> m.fetch(f ->
+ {
+ f.capabilities(c -> c.set(KafkaCapabilities.FETCH_ONLY));
+ filters.forEach(filter ->
+ {
+ if (matchesTopicFilter(filter.pattern().asString()))
+ {
+ final int subscriptionId = (int) filter.subscriptionId();
+ if (!messagesSubscriptionIds.contains(subscriptionId))
+ {
+ messagesSubscriptionIds.add(subscriptionId);
+ }
+ if ((filter.flags() & SEND_RETAIN_FLAG) != 0)
+ {
+ mqtt.retainAvailable = true;
+ }
+ f.filtersItem(fi ->
+ {
+ fi.conditionsItem(ci -> buildHeaders(ci, filter.pattern().asString()));
+
+ final boolean noLocal = (filter.flags() & NO_LOCAL_FLAG) != 0;
+ if (noLocal)
+ {
+ final DirectBuffer valueBuffer = mqtt.clientId.value();
+ fi.conditionsItem(i -> i.not(n -> n.condition(c -> c.header(h ->
+ h.nameLen(helper.kafkaLocalHeaderName.sizeof())
+ .name(helper.kafkaLocalHeaderName)
+ .valueLen(valueBuffer.capacity())
+ .value(valueBuffer, 0, valueBuffer.capacity())))));
+ }
+ });
+ }
+ });
+ }))
+ .build();
+
doFlush(kafka, originId, routedId, initialId, initialSeq, initialAck, initialMax,
- traceId, authorization, budgetId, reserved, extension);
+ traceId, authorization, budgetId, reserved, kafkaFlushEx);
}
private void doKafkaEnd(
long traceId,
- long sequence,
long authorization)
{
if (MqttKafkaState.initialOpened(state) && !MqttKafkaState.initialClosed(state))
@@ -900,11 +1216,11 @@ private void onKafkaData(
int flag = 0;
subscriptionIdsRW.wrap(subscriptionIdsBuffer, 0, subscriptionIdsBuffer.capacity());
- for (int i = 0; i < mqtt.messagesSubscriptionIds.size(); i++)
+ for (int i = 0; i < messagesSubscriptionIds.size(); i++)
{
if (((filters >> i) & 1) == 1)
{
- long subscriptionId = mqtt.messagesSubscriptionIds.get(i);
+ long subscriptionId = messagesSubscriptionIds.get(i);
subscriptionIdsRW.item(si -> si.set((int) subscriptionId));
}
}
@@ -1164,6 +1480,17 @@ private void doKafkaWindow(
}
}
}
+
+ public void flushDataIfNecessary(
+ long traceId,
+ long authorization,
+ long budgetId)
+ {
+ if (messageSlotOffset != messageSlotLimit)
+ {
+ flushData(traceId, authorization, budgetId);
+ }
+ }
}
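In `onKafkaData` above, the per-proxy `messagesSubscriptionIds` list is indexed by a bitmask (`filters`) carried in the Kafka data extension: bit `i` set means merged-fetch filter `i` matched, so subscription id `i` is forwarded to the MQTT client. A self-contained sketch of that decode, under assumed names (`matchedSubscriptionIds` is illustrative, not a Zilla API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of mapping a merged-fetch filter bitmask back to the MQTT
// subscription ids registered for this Kafka messages stream.
public final class FilterBitmaskSketch
{
    static List<Integer> matchedSubscriptionIds(
        long filters,
        List<Integer> subscriptionIds)
    {
        List<Integer> matched = new ArrayList<>();
        for (int i = 0; i < subscriptionIds.size(); i++)
        {
            if (((filters >> i) & 1L) == 1L) // bit i set => filter i matched
            {
                matched.add(subscriptionIds.get(i));
            }
        }
        return matched;
    }

    public static void main(String[] args)
    {
        // bits 0 and 2 set => first and third registered subscription ids
        assert matchedSubscriptionIds(0b101L, List.of(7, 8, 9))
            .equals(List.of(7, 9));
    }
}
```

This is why the refactor moves `messagesSubscriptionIds` from the shared `mqtt` proxy onto each `KafkaMessagesProxy`: with multiple routed Kafka topics, each stream's bitmask must index its own filter list.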
final class KafkaRetainedProxy
@@ -1212,7 +1539,8 @@ private void doKafkaBegin(
replyAck = 0;
replyMax = 0;
- sendRetainedFiltersRW.wrap(retainFilterBuffer, 0, retainFilterBuffer.capacity());
+ final Array32FW.Builder filterBuilder =
+ filtersRW.wrap(filterBuffer, 0, filterBuffer.capacity());
newRetainedFilters.forEach(f ->
{
@@ -1221,14 +1549,14 @@ private void doKafkaBegin(
{
mqtt.retainedSubscriptionIds.add(subscriptionId);
}
- sendRetainedFiltersRW.item(fb -> fb
+ filterBuilder.item(fb -> fb
.subscriptionId(subscriptionId).qos(f.qos).flags(f.flags).pattern(f.filter));
final boolean rap = (f.flags & RETAIN_AS_PUBLISHED_FLAG) != 0;
mqtt.retainAsPublished.put(f.id, rap);
});
mqtt.retainedSubscriptions.addAll(newRetainedFilters);
- Array32FW retainedFilters = sendRetainedFiltersRW.build();
+ Array32FW retainedFilters = filterBuilder.build();
initialSeq = mqtt.initialSeq;
initialAck = mqtt.initialAck;
@@ -1250,7 +1578,8 @@ private void doKafkaFlush(
{
initialSeq = mqtt.initialSeq;
- sendRetainedFiltersRW.wrap(retainFilterBuffer, 0, retainFilterBuffer.capacity());
+ final Array32FW.Builder filterBuilder =
+ filtersRW.wrap(filterBuffer, 0, filterBuffer.capacity());
retainedFiltersList.forEach(f ->
{
@@ -1259,13 +1588,13 @@ private void doKafkaFlush(
{
mqtt.retainedSubscriptionIds.add(subscriptionId);
}
- sendRetainedFiltersRW.item(fb -> fb
+ filterBuilder.item(fb -> fb
.subscriptionId(subscriptionId).qos(f.qos).flags(f.flags).pattern(f.filter));
final boolean rap = (f.flags & RETAIN_AS_PUBLISHED_FLAG) != 0;
mqtt.retainAsPublished.put(f.id, rap);
});
- Array32FW retainedFilters = sendRetainedFiltersRW.build();
+ Array32FW retainedFilters = filterBuilder.build();
final KafkaFlushExFW retainedKafkaFlushEx =
kafkaFlushExRW.wrap(writeBuffer, FlushFW.FIELD_OFFSET_EXTENSION, writeBuffer.capacity())
@@ -1274,20 +1603,9 @@ private void doKafkaFlush(
{
f.capabilities(c -> c.set(KafkaCapabilities.FETCH_ONLY));
retainedFilters.forEach(filter ->
- {
f.filtersItem(fi ->
- {
- final int subscriptionId = (int) filter.subscriptionId();
fi.conditionsItem(ci ->
- {
- if (!mqtt.messagesSubscriptionIds.contains(subscriptionId))
- {
- mqtt.messagesSubscriptionIds.add(subscriptionId);
- }
- buildHeaders(ci, filter.pattern().asString());
- });
- });
- });
+ buildHeaders(ci, filter.pattern().asString()))));
}))
.build();
@@ -1297,7 +1615,6 @@ private void doKafkaFlush(
private void doKafkaEnd(
long traceId,
- long sequence,
long authorization)
{
if (!MqttKafkaState.initialClosed(state))
@@ -1516,7 +1833,7 @@ private void onKafkaEnd(
assert replyAck <= replySeq;
- mqtt.messages.flushData(traceId, authorization, mqtt.replyBud);
+ mqtt.messages.values().forEach(m -> m.flushData(traceId, authorization, mqtt.replyBud));
}
private void onKafkaFlush(
@@ -1537,7 +1854,7 @@ private void onKafkaFlush(
assert replyAck <= replySeq;
mqtt.retainedSubscriptionIds.clear();
- doKafkaEnd(traceId, sequence, authorization);
+ doKafkaEnd(traceId, authorization);
}
private void onKafkaAbort(
@@ -1813,9 +2130,8 @@ private MessageConsumer newKafkaStream(
m.topic(topic);
m.partitionsItem(p ->
p.partitionId(offsetType.value())
- .partitionOffset(offsetType.value()));
+ .partitionOffset(offsetType.value()));
filters.forEach(filter ->
-
m.filtersItem(f ->
{
f.conditionsItem(ci -> buildHeaders(ci, filter.pattern().asString()));
@@ -1856,6 +2172,47 @@ private MessageConsumer newKafkaStream(
return receiver;
}
+ private MessageConsumer newKafkaBootstrapStream(
+ MessageConsumer sender,
+ long originId,
+ long routedId,
+ long streamId,
+ long sequence,
+ long acknowledge,
+ int maximum,
+ long traceId,
+ long authorization,
+ long affinity,
+ String16FW topic)
+ {
+ final KafkaBeginExFW kafkaBeginEx =
+ kafkaBeginExRW.wrap(writeBuffer, BeginFW.FIELD_OFFSET_EXTENSION, writeBuffer.capacity())
+ .typeId(kafkaTypeId)
+ .bootstrap(b -> b.topic(topic).groupId(MQTT_CLIENTS_GROUP_ID))
+ .build();
+
+ final BeginFW begin = beginRW.wrap(writeBuffer, 0, writeBuffer.capacity())
+ .originId(originId)
+ .routedId(routedId)
+ .streamId(streamId)
+ .sequence(sequence)
+ .acknowledge(acknowledge)
+ .maximum(maximum)
+ .traceId(traceId)
+ .authorization(authorization)
+ .affinity(affinity)
+ .extension(kafkaBeginEx.buffer(), kafkaBeginEx.offset(), kafkaBeginEx.sizeof())
+ .build();
+
+ MessageConsumer receiver =
+ streamFactory.newStream(begin.typeId(), begin.buffer(), begin.offset(), begin.sizeof(), sender);
+
+ receiver.accept(begin.typeId(), begin.buffer(), begin.offset(), begin.sizeof());
+
+ return receiver;
+ }
+
private void buildHeaders(
KafkaConditionFW.Builder conditionBuilder,
String pattern)
diff --git a/runtime/binding-mqtt-kafka/src/main/moditect/module-info.java b/runtime/binding-mqtt-kafka/src/main/moditect/module-info.java
index 056d27a5d6..e08083a7ce 100644
--- a/runtime/binding-mqtt-kafka/src/main/moditect/module-info.java
+++ b/runtime/binding-mqtt-kafka/src/main/moditect/module-info.java
@@ -24,6 +24,9 @@
provides io.aklivity.zilla.runtime.engine.config.ConditionConfigAdapterSpi
with io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config.MqttKafkaConditionConfigAdapter;
+ provides io.aklivity.zilla.runtime.engine.config.WithConfigAdapterSpi
+ with io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config.MqttKafkaWithConfigAdapter;
+
provides io.aklivity.zilla.runtime.engine.config.OptionsConfigAdapterSpi
with io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config.MqttKafkaOptionsConfigAdapter;
diff --git a/runtime/binding-mqtt-kafka/src/main/resources/META-INF/services/io.aklivity.zilla.runtime.engine.config.WithConfigAdapterSpi b/runtime/binding-mqtt-kafka/src/main/resources/META-INF/services/io.aklivity.zilla.runtime.engine.config.WithConfigAdapterSpi
new file mode 100644
index 0000000000..9d5ae9d7ce
--- /dev/null
+++ b/runtime/binding-mqtt-kafka/src/main/resources/META-INF/services/io.aklivity.zilla.runtime.engine.config.WithConfigAdapterSpi
@@ -0,0 +1 @@
+io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config.MqttKafkaWithConfigAdapter
diff --git a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/MqttKafkaConfigurationTest.java b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/MqttKafkaConfigurationTest.java
index c962645286..d1573b664c 100644
--- a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/MqttKafkaConfigurationTest.java
+++ b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/MqttKafkaConfigurationTest.java
@@ -15,6 +15,8 @@
package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal;
+import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaConfiguration.BOOTSTRAP_AVAILABLE;
+import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaConfiguration.BOOTSTRAP_STREAM_RECONNECT_DELAY;
import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaConfiguration.INSTANCE_ID;
import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaConfiguration.LIFETIME_ID;
import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaConfiguration.SESSION_ID;
@@ -31,6 +33,8 @@ public class MqttKafkaConfigurationTest
public static final String TIME_NAME = "zilla.binding.mqtt.kafka.time";
public static final String WILL_AVAILABLE_NAME = "zilla.binding.mqtt.kafka.will.available";
public static final String WILL_STREAM_RECONNECT_DELAY_NAME = "zilla.binding.mqtt.kafka.will.stream.reconnect";
+ public static final String BOOTSTRAP_AVAILABLE_NAME = "zilla.binding.mqtt.kafka.bootstrap.available";
+ public static final String BOOTSTRAP_STREAM_RECONNECT_DELAY_NAME = "zilla.binding.mqtt.kafka.bootstrap.stream.reconnect";
public static final String SESSION_ID_NAME = "zilla.binding.mqtt.kafka.session.id";
public static final String WILL_ID_NAME = "zilla.binding.mqtt.kafka.will.id";
public static final String LIFETIME_ID_NAME = "zilla.binding.mqtt.kafka.lifetime.id";
@@ -42,6 +46,8 @@ public void shouldVerifyConstants()
assertEquals(TIME.name(), TIME_NAME);
assertEquals(WILL_AVAILABLE.name(), WILL_AVAILABLE_NAME);
assertEquals(WILL_STREAM_RECONNECT_DELAY.name(), WILL_STREAM_RECONNECT_DELAY_NAME);
+ assertEquals(BOOTSTRAP_AVAILABLE.name(), BOOTSTRAP_AVAILABLE_NAME);
+ assertEquals(BOOTSTRAP_STREAM_RECONNECT_DELAY.name(), BOOTSTRAP_STREAM_RECONNECT_DELAY_NAME);
assertEquals(SESSION_ID.name(), SESSION_ID_NAME);
assertEquals(WILL_ID.name(), WILL_ID_NAME);
assertEquals(LIFETIME_ID.name(), LIFETIME_ID_NAME);
diff --git a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionConfigAdapterTest.java b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionConfigAdapterTest.java
index 6c4eb05319..e739a320f4 100644
--- a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionConfigAdapterTest.java
+++ b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionConfigAdapterTest.java
@@ -19,6 +19,8 @@
import static org.hamcrest.Matchers.not;
import static org.hamcrest.Matchers.nullValue;
+import java.util.List;
+
import jakarta.json.bind.Jsonb;
import jakarta.json.bind.JsonbBuilder;
import jakarta.json.bind.JsonbConfig;
@@ -27,6 +29,7 @@
import org.junit.Test;
import io.aklivity.zilla.runtime.binding.mqtt.kafka.config.MqttKafkaConditionConfig;
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.config.MqttKafkaConditionKind;
public class MqttKafkaConditionConfigAdapterTest
{
@@ -40,25 +43,83 @@ public void initJson()
jsonb = JsonbBuilder.create(config);
}
+ @Test
+ public void shouldReadSubscribeCondition()
+ {
+ String text =
+ "{" +
+ "\"subscribe\":" +
+ "[" +
+ "{" +
+ "\"topic\": \"test\"" +
+ "}" +
+ "]" +
+ "}";
+
+ MqttKafkaConditionConfig condition = jsonb.fromJson(text, MqttKafkaConditionConfig.class);
+
+ assertThat(condition, not(nullValue()));
+ assertThat(condition.topics, not(nullValue()));
+ assertThat(condition.topics.size(), equalTo(1));
+ assertThat(condition.topics.get(0), equalTo("test"));
+ }
+
@Test
public void shouldReadCondition()
{
String text =
- "{ }";
+ "{" +
+ "\"publish\":" +
+ "[" +
+ "{" +
+ "\"topic\": \"test\"" +
+ "}" +
+ "]" +
+ "}";
MqttKafkaConditionConfig condition = jsonb.fromJson(text, MqttKafkaConditionConfig.class);
assertThat(condition, not(nullValue()));
+ assertThat(condition.topics, not(nullValue()));
+ assertThat(condition.topics.size(), equalTo(1));
+ assertThat(condition.topics.get(0), equalTo("test"));
+ }
+
+ @Test
+ public void shouldWriteSubscribeCondition()
+ {
+ MqttKafkaConditionConfig condition = new MqttKafkaConditionConfig(List.of("test"), MqttKafkaConditionKind.SUBSCRIBE);
+
+ String text = jsonb.toJson(condition);
+
+ assertThat(text, not(nullValue()));
+ assertThat(text, equalTo(
+ "{" +
+ "\"subscribe\":" +
+ "[" +
+ "{" +
+ "\"topic\":\"test\"" +
+ "}" +
+ "]" +
+ "}"));
}
@Test
- public void shouldWriteCondition()
+ public void shouldWritePublishCondition()
{
- MqttKafkaConditionConfig condition = new MqttKafkaConditionConfig("test");
+ MqttKafkaConditionConfig condition = new MqttKafkaConditionConfig(List.of("test"), MqttKafkaConditionKind.PUBLISH);
String text = jsonb.toJson(condition);
assertThat(text, not(nullValue()));
- assertThat(text, equalTo("{\"topic\":\"test\"}"));
+ assertThat(text, equalTo(
+ "{" +
+ "\"publish\":" +
+ "[" +
+ "{" +
+ "\"topic\":\"test\"" +
+ "}" +
+ "]" +
+ "}"));
}
}
diff --git a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionMatcherTest.java b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionMatcherTest.java
new file mode 100644
index 0000000000..9afe0f9459
--- /dev/null
+++ b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaConditionMatcherTest.java
@@ -0,0 +1,118 @@
+/*
+ * Copyright 2021-2023 Aklivity Inc
+ *
+ * Licensed under the Aklivity Community License (the "License"); you may not use
+ * this file except in compliance with the License. You may obtain a copy of the
+ * License at
+ *
+ * https://www.aklivity.io/aklivity-community-license/
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ */
+package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.util.List;
+
+import org.junit.Test;
+
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.config.MqttKafkaConditionConfig;
+import io.aklivity.zilla.runtime.binding.mqtt.kafka.config.MqttKafkaConditionKind;
+
+public class MqttKafkaConditionMatcherTest
+{
+ @Test
+ public void shouldMatchSimpleConditions()
+ {
+ MqttKafkaConditionConfig condition = new MqttKafkaConditionConfig(List.of("/some/hierarchical/topic/name"),
+ MqttKafkaConditionKind.SUBSCRIBE);
+ MqttKafkaConditionMatcher matcher = new MqttKafkaConditionMatcher(condition);
+
+ assertTrue(matcher.matches("/some/hierarchical/topic/name"));
+ assertTrue(matcher.matches("/some/hierarchical/topic/name/#"));
+ assertTrue(matcher.matches("/some/hierarchical/+/name/#"));
+ assertTrue(matcher.matches("/some/+/topic/+"));
+ assertTrue(matcher.matches("/some/hierarchical/topic/+"));
+ assertTrue(matcher.matches("/some/#"));
+ assertTrue(matcher.matches("/some/hierarchical/#"));
+ assertTrue(matcher.matches("#"));
+ assertTrue(matcher.matches("/#"));
+ }
+
+ @Test
+ public void shouldNotMatchSimpleConditions()
+ {
+ MqttKafkaConditionConfig condition = new MqttKafkaConditionConfig(
+ List.of("/some/hierarchical/topic/name"),
+ MqttKafkaConditionKind.SUBSCRIBE);
+ MqttKafkaConditionMatcher matcher = new MqttKafkaConditionMatcher(condition);
+
+ assertFalse(matcher.matches("/some/+"));
+ assertFalse(matcher.matches("/some/hierarchical/+"));
+ assertFalse(matcher.matches("/some/hierarchical/topic/name/something"));
+ assertFalse(matcher.matches("some/hierarchical/topic/name"));
+ }
+
+ @Test
+ public void shouldMatchSimpleConditions2()
+ {
+ MqttKafkaConditionConfig condition = new MqttKafkaConditionConfig(
+ List.of("/some/hierarchical/topic"), MqttKafkaConditionKind.SUBSCRIBE);
+ MqttKafkaConditionMatcher matcher = new MqttKafkaConditionMatcher(condition);
+
+ assertTrue(matcher.matches("/some/hierarchical/topic"));
+ assertTrue(matcher.matches("/some/hierarchical/topic/#"));
+ assertTrue(matcher.matches("/some/hierarchical/+/#"));
+ assertTrue(matcher.matches("/some/+/topic"));
+ assertTrue(matcher.matches("/some/+/#"));
+ assertTrue(matcher.matches("/some/hierarchical/+"));
+ assertTrue(matcher.matches("/some/#"));
+ assertTrue(matcher.matches("#"));
+ assertTrue(matcher.matches("/#"));
+ }
+
+ @Test
+ public void shouldNotMatchSimpleConditions2()
+ {
+ MqttKafkaConditionConfig condition = new MqttKafkaConditionConfig(
+ List.of("/some/hierarchical/topic"), MqttKafkaConditionKind.SUBSCRIBE);
+ MqttKafkaConditionMatcher matcher = new MqttKafkaConditionMatcher(condition);
+
+ assertFalse(matcher.matches("/some/+"));
+ assertFalse(matcher.matches("/some/something/else"));
+ assertFalse(matcher.matches("/some/hierarchical/topic/name"));
+ assertFalse(matcher.matches("some/hierarchical/topic"));
+ }
+
+ @Test
+ public void shouldMatchWildcardConditions()
+ {
+ MqttKafkaConditionConfig condition = new MqttKafkaConditionConfig(
+ List.of("device/#"), MqttKafkaConditionKind.SUBSCRIBE);
+ MqttKafkaConditionMatcher matcher = new MqttKafkaConditionMatcher(condition);
+
+ assertTrue(matcher.matches("device/one"));
+ assertTrue(matcher.matches("device/two"));
+ assertTrue(matcher.matches("device/+"));
+ assertTrue(matcher.matches("device/#"));
+ assertTrue(matcher.matches("device/rain/one"));
+ assertTrue(matcher.matches("#"));
+ }
+
+ @Test
+ public void shouldNotMatchWildcardConditions()
+ {
+ MqttKafkaConditionConfig condition = new MqttKafkaConditionConfig(
+ List.of("device/#"), MqttKafkaConditionKind.SUBSCRIBE);
+ MqttKafkaConditionMatcher matcher = new MqttKafkaConditionMatcher(condition);
+
+ assertFalse(matcher.matches("/device/one"));
+ assertFalse(matcher.matches("devices/one"));
+ assertFalse(matcher.matches("/#"));
+ }
+}
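The test cases above exercise MQTT wildcard semantics: `+` matches exactly one topic level and `#` matches all remaining levels. A simplified sketch of filter-against-concrete-topic matching under those rules (the real `MqttKafkaConditionMatcher` is more general, since the configured condition topic may itself contain wildcards, as in the `device/#` cases):

```java
// Simplified MQTT topic-filter match against a concrete topic name:
// '+' matches one level, '#' matches any remaining levels.
public final class TopicFilterSketch
{
    static boolean matches(
        String filter,
        String topic)
    {
        String[] f = filter.split("/", -1);
        String[] t = topic.split("/", -1);
        for (int i = 0; i < f.length; i++)
        {
            if (f[i].equals("#"))
            {
                return true; // multi-level wildcard covers the rest
            }
            if (i >= t.length || !f[i].equals("+") && !f[i].equals(t[i]))
            {
                return false;
            }
        }
        return f.length == t.length;
    }

    public static void main(String[] args)
    {
        assert matches("/some/+/topic/+", "/some/hierarchical/topic/name");
        assert matches("/some/#", "/some/hierarchical/topic/name");
        // leading-slash mismatch: "some/..." has no empty root level
        assert !matches("some/hierarchical/topic/name", "/some/hierarchical/topic/name");
    }
}
```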
diff --git a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfigAdapterTest.java b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfigAdapterTest.java
index edfdce43e3..cc74c235da 100644
--- a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfigAdapterTest.java
+++ b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaOptionsConfigAdapterTest.java
@@ -19,6 +19,8 @@
import static org.hamcrest.Matchers.not;
import static org.hamcrest.Matchers.nullValue;
+import java.util.Arrays;
+
import jakarta.json.bind.Jsonb;
import jakarta.json.bind.JsonbBuilder;
import jakarta.json.bind.JsonbConfig;
@@ -48,11 +50,16 @@ public void shouldReadOptions()
"\"server\":\"mqtt-1.example.com:1883\"," +
"\"topics\":" +
"{" +
- "\"sessions\":\"sessions\"," +
- "\"messages\":\"messages\"," +
- "\"retained\":\"retained\"," +
- "}" +
- "}";
+ "\"sessions\":\"sessions\"," +
+ "\"messages\":\"messages\"," +
+ "\"retained\":\"retained\"," +
+ "}," +
+ "\"clients\":" +
+ "[" +
+ "\"/clients/{identity}/#\"," +
+ "\"/department/clients/{identity}/#\"" +
+ "]" +
+ "}";
MqttKafkaOptionsConfig options = jsonb.fromJson(text, MqttKafkaOptionsConfig.class);
@@ -62,6 +69,10 @@ public void shouldReadOptions()
assertThat(options.topics.messages.asString(), equalTo("messages"));
assertThat(options.topics.retained.asString(), equalTo("retained"));
assertThat(options.serverRef, equalTo("mqtt-1.example.com:1883"));
+ assertThat(options.clients, not(nullValue()));
+ assertThat(options.clients.size(), equalTo(2));
+ assertThat(options.clients.get(0), equalTo("/clients/{identity}/#"));
+ assertThat(options.clients.get(1), equalTo("/department/clients/{identity}/#"));
}
@Test
@@ -71,20 +82,72 @@ public void shouldWriteOptions()
new MqttKafkaTopicsConfig(
new String16FW("sessions"),
new String16FW("messages"),
- new String16FW("retained")), "mqtt-1.example.com:1883");
+ new String16FW("retained")),
+ "mqtt-1.example.com:1883",
+ Arrays.asList("/clients/{identity}/#", "/department/clients/{identity}/#"));
String text = jsonb.toJson(options);
assertThat(text, not(nullValue()));
assertThat(text, equalTo(
- "{" +
+ "{" +
"\"server\":\"mqtt-1.example.com:1883\"," +
"\"topics\":" +
"{" +
- "\"sessions\":\"sessions\"," +
- "\"messages\":\"messages\"," +
- "\"retained\":\"retained\"" +
+ "\"sessions\":\"sessions\"," +
+ "\"messages\":\"messages\"," +
+ "\"retained\":\"retained\"" +
+ "}," +
+ "\"clients\":" +
+ "[" +
+ "\"/clients/{identity}/#\"," +
+ "\"/department/clients/{identity}/#\"" +
+ "]" +
+ "}"));
+ }
+
+ @Test
+ public void shouldReadOptionsWithoutClients()
+ {
+ String text =
+ "{" +
+ "\"topics\":" +
+ "{" +
+ "\"sessions\":\"sessions\"," +
+ "\"messages\":\"messages\"," +
+ "\"retained\":\"retained\"" +
"}" +
+ "}";
+
+ MqttKafkaOptionsConfig options = jsonb.fromJson(text, MqttKafkaOptionsConfig.class);
+
+ assertThat(options, not(nullValue()));
+ assertThat(options.topics, not(nullValue()));
+ assertThat(options.topics.sessions.asString(), equalTo("sessions"));
+ assertThat(options.topics.messages.asString(), equalTo("messages"));
+ assertThat(options.topics.retained.asString(), equalTo("retained"));
+ }
+
+ @Test
+ public void shouldWriteOptionsWithoutClients()
+ {
+ MqttKafkaOptionsConfig options = new MqttKafkaOptionsConfig(
+ new MqttKafkaTopicsConfig(
+ new String16FW("sessions"),
+ new String16FW("messages"),
+ new String16FW("retained")), null, null);
+
+ String text = jsonb.toJson(options);
+
+ assertThat(text, not(nullValue()));
+ assertThat(text, equalTo(
+ "{" +
+ "\"topics\":" +
+ "{" +
+ "\"sessions\":\"sessions\"," +
+ "\"messages\":\"messages\"," +
+ "\"retained\":\"retained\"" +
+ "}" +
"}"));
}
}
diff --git a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithConfigAdapterTest.java b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithConfigAdapterTest.java
new file mode 100644
index 0000000000..0662393ef2
--- /dev/null
+++ b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/config/MqttKafkaWithConfigAdapterTest.java
@@ -0,0 +1,63 @@
+/*
+ * Copyright 2021-2023 Aklivity Inc
+ *
+ * Licensed under the Aklivity Community License (the "License"); you may not use
+ * this file except in compliance with the License. You may obtain a copy of the
+ * License at
+ *
+ * https://www.aklivity.io/aklivity-community-license/
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ */
+package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.config;
+
+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.hamcrest.Matchers.equalTo;
+import static org.hamcrest.Matchers.not;
+import static org.hamcrest.Matchers.nullValue;
+
+import jakarta.json.bind.Jsonb;
+import jakarta.json.bind.JsonbBuilder;
+import jakarta.json.bind.JsonbConfig;
+
+import org.junit.Before;
+import org.junit.Test;
+
+public class MqttKafkaWithConfigAdapterTest
+{
+ private Jsonb jsonb;
+
+ @Before
+ public void initJson()
+ {
+ JsonbConfig config = new JsonbConfig()
+ .withAdapters(new MqttKafkaWithConfigAdapter());
+ jsonb = JsonbBuilder.create(config);
+ }
+
+ @Test
+ public void shouldReadWith()
+ {
+ String text =
+ "{\"messages\":\"test\"}";
+
+ MqttKafkaWithConfig with = jsonb.fromJson(text, MqttKafkaWithConfig.class);
+
+ assertThat(with, not(nullValue()));
+ assertThat(with.messages, equalTo("test"));
+ }
+
+ @Test
+ public void shouldWriteWith()
+ {
+ MqttKafkaWithConfig with = new MqttKafkaWithConfig("test");
+
+ String text = jsonb.toJson(with);
+
+ assertThat(text, not(nullValue()));
+ assertThat(text, equalTo("{\"messages\":\"test\"}"));
+ }
+}
diff --git a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaPublishProxyIT.java b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaPublishProxyIT.java
index 8dd6772f7d..cd3afb4100 100644
--- a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaPublishProxyIT.java
+++ b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaPublishProxyIT.java
@@ -14,6 +14,7 @@
*/
package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.stream;
+import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaConfigurationTest.BOOTSTRAP_AVAILABLE_NAME;
import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaConfigurationTest.WILL_AVAILABLE_NAME;
import static io.aklivity.zilla.runtime.engine.EngineConfiguration.ENGINE_BUFFER_SLOT_CAPACITY;
import static java.util.concurrent.TimeUnit.SECONDS;
@@ -182,6 +183,29 @@ public void shouldSendOneMessageWithChangedTopicName() throws Exception
k3po.finish();
}
+ @Test
+ @Configuration("proxy.when.topic.with.messages.yaml")
+ @Configure(name = WILL_AVAILABLE_NAME, value = "false")
+ @Specification({
+ "${mqtt}/publish.topic.space/client",
+ "${kafka}/publish.topic.space/server"})
+ public void shouldSendUsingTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Configuration("proxy.when.client.topic.space.yaml")
+ @Configure(name = WILL_AVAILABLE_NAME, value = "false")
+ @Configure(name = BOOTSTRAP_AVAILABLE_NAME, value = "false")
+ @Specification({
+ "${mqtt}/publish.client.topic.space/client",
+ "${kafka}/publish.client.topic.space/server"})
+ public void shouldSendUsingClientTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
@Test
@Configuration("proxy.yaml")
@Configure(name = WILL_AVAILABLE_NAME, value = "false")
diff --git a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSubscribeProxyIT.java b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSubscribeProxyIT.java
index 7eecb79af7..210fd89747 100644
--- a/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSubscribeProxyIT.java
+++ b/runtime/binding-mqtt-kafka/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/kafka/internal/stream/MqttKafkaSubscribeProxyIT.java
@@ -14,6 +14,7 @@
*/
package io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.stream;
+import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaConfigurationTest.BOOTSTRAP_STREAM_RECONNECT_DELAY_NAME;
import static io.aklivity.zilla.runtime.binding.mqtt.kafka.internal.MqttKafkaConfigurationTest.WILL_AVAILABLE_NAME;
import static io.aklivity.zilla.runtime.engine.EngineConfiguration.ENGINE_BUFFER_SLOT_CAPACITY;
import static io.aklivity.zilla.runtime.engine.EngineConfiguration.ENGINE_DRAIN_ON_CLOSE;
@@ -162,6 +163,61 @@ public void shouldReceiveOneMessageWithChangedTopicName() throws Exception
k3po.finish();
}
+ @Test
+ @Configuration("proxy.when.topic.with.messages.yaml")
+ @Configure(name = WILL_AVAILABLE_NAME, value = "false")
+ @Specification({
+ "${mqtt}/subscribe.topic.space/client",
+ "${kafka}/subscribe.topic.space/server"})
+ public void shouldFilterTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Configuration("proxy.when.client.topic.space.yaml")
+ @Configure(name = WILL_AVAILABLE_NAME, value = "false")
+ @Specification({
+ "${mqtt}/subscribe.client.topic.space/client",
+ "${kafka}/subscribe.client.topic.space/server"})
+ public void shouldFilterClientTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Configuration("proxy.when.client.topic.space.yaml")
+ @Configure(name = WILL_AVAILABLE_NAME, value = "false")
+ @Configure(name = BOOTSTRAP_STREAM_RECONNECT_DELAY_NAME, value = "1")
+ @Specification({
+ "${kafka}/subscribe.bootstrap.stream.end.reconnect/server"})
+ public void shouldReconnectBootstrapStreamOnKafkaEnd() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Configuration("proxy.when.client.topic.space.yaml")
+ @Configure(name = WILL_AVAILABLE_NAME, value = "false")
+ @Configure(name = BOOTSTRAP_STREAM_RECONNECT_DELAY_NAME, value = "1")
+ @Specification({
+ "${kafka}/subscribe.bootstrap.stream.abort.reconnect/server"})
+ public void shouldReconnectBootstrapStreamOnKafkaAbort() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Configuration("proxy.when.client.topic.space.yaml")
+ @Configure(name = WILL_AVAILABLE_NAME, value = "false")
+ @Configure(name = BOOTSTRAP_STREAM_RECONNECT_DELAY_NAME, value = "1")
+ @Specification({
+ "${kafka}/subscribe.bootstrap.stream.reset.reconnect/server"})
+ public void shouldReconnectBootstrapStreamOnKafkaReset() throws Exception
+ {
+ k3po.finish();
+ }
+
@Test
@Configuration("proxy.yaml")
@Configure(name = WILL_AVAILABLE_NAME, value = "false")
diff --git a/runtime/binding-mqtt/pom.xml b/runtime/binding-mqtt/pom.xml
index 3ae82de409..2a1cf01533 100644
--- a/runtime/binding-mqtt/pom.xml
+++ b/runtime/binding-mqtt/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/binding-mqtt/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/internal/stream/MqttServerFactory.java b/runtime/binding-mqtt/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/internal/stream/MqttServerFactory.java
index 7d1a6d5ce8..2af460c5de 100644
--- a/runtime/binding-mqtt/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/internal/stream/MqttServerFactory.java
+++ b/runtime/binding-mqtt/src/main/java/io/aklivity/zilla/runtime/binding/mqtt/internal/stream/MqttServerFactory.java
@@ -298,6 +298,7 @@ public final class MqttServerFactory implements MqttStreamFactory
private final MqttServerDecoder decodePacketType = this::decodePacketType;
private final MqttServerDecoder decodeConnect = this::decodeConnect;
private final MqttServerDecoder decodeConnectPayload = this::decodeConnectPayload;
+ private final MqttServerDecoder decodeConnectWillMessage = this::decodeConnectWillMessage;
private final MqttServerDecoder decodePublish = this::decodePublish;
private final MqttServerDecoder decodeSubscribe = this::decodeSubscribe;
private final MqttServerDecoder decodeUnsubscribe = this::decodeUnsubscribe;
@@ -847,7 +848,7 @@ private int decodeConnect(
progress = server.onDecodeConnect(traceId, authorization, buffer, progress, limit, mqttConnect);
- final int decodedLength = progress - offset - 2;
+ final int decodedLength = progress - offset - mqttPacketHeaderRO.sizeof();
server.decodableRemainingBytes -= decodedLength;
}
@@ -886,6 +887,27 @@ private int decodeConnectPayload(
return progress;
}
+ private int decodeConnectWillMessage(
+ MqttServer server,
+ final long traceId,
+ final long authorization,
+ final long budgetId,
+ final DirectBuffer buffer,
+ final int offset,
+ final int limit)
+ {
+ int progress = offset;
+
+ progress = server.onDecodeConnectWillMessage(traceId, authorization, buffer, progress, limit);
+ server.decodableRemainingBytes -= progress - offset;
+ if (server.decodableRemainingBytes == 0)
+ {
+ server.decoder = decodePacketType;
+ }
+
+ return progress;
+ }
+
private int decodePublish(
MqttServer server,
final long traceId,
@@ -920,7 +942,7 @@ private int decodePublish(
{
final String topic = mqttPublishHeader.topic;
final int topicKey = topicKey(topic);
- MqttServer.MqttPublishStream publisher = server.publishStreams.get(topicKey);
+ MqttServer.MqttPublishStream publisher = server.publishes.get(topicKey);
if (publisher == null)
{
@@ -1043,7 +1065,7 @@ else if ((subscribe.typeAndFlags() & 0b1111_1111) != SUBSCRIBE_FIXED_HEADER)
if (reasonCode == 0)
{
- if (!MqttState.replyOpened(server.sessionStream.state))
+ if (!MqttState.replyOpened(server.session.state))
{
//We don't know the server capabilities yet
break decode;
@@ -1239,8 +1261,8 @@ private final class MqttServer
private final long replyId;
private final long encodeBudgetId;
- private final Int2ObjectHashMap<MqttPublishStream> publishStreams;
- private final Long2ObjectHashMap<MqttSubscribeStream> subscribeStreams;
+ private final Int2ObjectHashMap<MqttPublishStream> publishes;
+ private final Long2ObjectHashMap<MqttSubscribeStream> subscribes;
private final Int2ObjectHashMap topicAliases;
private final Int2IntHashMap subscribePacketIds;
private final Object2IntHashMap<String> unsubscribePacketIds;
@@ -1248,7 +1270,7 @@ private final class MqttServer
private final Function credentials;
private final MqttConnectProperty authField;
- private MqttSessionStream sessionStream;
+ private MqttSessionStream session;
private String16FW clientId;
@@ -1322,8 +1344,8 @@ private MqttServer(
this.replyId = replyId;
this.encodeBudgetId = budgetId;
this.decoder = decodeInitialType;
- this.publishStreams = new Int2ObjectHashMap<>();
- this.subscribeStreams = new Long2ObjectHashMap<>();
+ this.publishes = new Int2ObjectHashMap<>();
+ this.subscribes = new Long2ObjectHashMap<>();
this.topicAliases = new Int2ObjectHashMap<>();
this.subscribePacketIds = new Int2IntHashMap(-1);
this.unsubscribePacketIds = new Object2IntHashMap<>(-1);
@@ -1552,7 +1574,7 @@ private void onKeepAliveTimeoutSignal(
final long now = System.currentTimeMillis();
if (now >= keepAliveTimeoutAt)
{
- sessionStream.doSessionAbort(traceId);
+ session.doSessionAbort(traceId);
onDecodeError(traceId, authorization, KEEP_ALIVE_TIMEOUT);
decoder = decodeIgnoreAll;
}
@@ -1749,16 +1771,78 @@ else if (this.authField.equals(MqttConnectProperty.PASSWORD))
reasonCode = BAD_USER_NAME_OR_PASSWORD;
break decode;
}
+
+ this.sessionId = sessionAuth;
+
+ this.session = new MqttSessionStream(originId, resolved.id, 0);
+
+ final MqttBeginExFW.Builder builder = mqttSessionBeginExRW.wrap(sessionExtBuffer, 0, sessionExtBuffer.capacity())
+ .typeId(mqttTypeId)
+ .session(s -> s
+ .flags(connectFlags & (CLEAN_START_FLAG_MASK | WILL_FLAG_MASK))
+ .expiry(sessionExpiry)
+ .clientId(clientId)
+ );
+ session.doSessionBegin(traceId, affinity, builder.build());
+
+ if (willFlagSet)
+ {
+ decoder = decodeConnectWillMessage;
+ }
else
{
- resolveSession(traceId, resolved.id);
+ progress = connectPayloadLimit;
+ }
+ }
+
+ if (reasonCode != SUCCESS)
+ {
+ doCancelConnectTimeout();
+
+ if (reasonCode != BAD_USER_NAME_OR_PASSWORD)
+ {
+ doEncodeConnack(traceId, authorization, reasonCode, assignedClientId, false, null);
+ }
+
+ if (session != null)
+ {
+ session.doSessionAppEnd(traceId, EMPTY_OCTETS);
}
+ doNetworkEnd(traceId, authorization);
+
+ decoder = decodeIgnoreAll;
+ progress = limit;
+ }
+
+ return progress;
+ }
+
+ private int onDecodeConnectWillMessage(
+ long traceId,
+ long authorization,
+ DirectBuffer buffer,
+ int progress,
+ int limit)
+ {
+ byte reasonCode = SUCCESS;
+ decode:
+ {
+ final MqttConnectPayload payload = mqttConnectPayloadRO.reset();
+ int connectPayloadLimit = payload.decode(buffer, progress, limit, connectFlags);
- if (willFlagSet && !MqttState.initialOpened(sessionStream.state))
+ final boolean willFlagSet = isSetWillFlag(connectFlags);
+
+ reasonCode = payload.reasonCode;
+
+ if (reasonCode != SUCCESS)
{
break decode;
}
+ if (willFlagSet && !MqttState.initialOpened(session.state))
+ {
+ break decode;
+ }
if (isSetWillRetain(connectFlags))
{
@@ -1776,8 +1860,6 @@ else if (this.authField.equals(MqttConnectProperty.PASSWORD))
break decode;
}
- this.sessionId = sessionAuth;
-
final int flags = connectFlags;
final int willFlags = decodeWillFlags(flags);
final int willQos = decodeWillQos(flags);
@@ -1809,11 +1891,11 @@ else if (this.authField.equals(MqttConnectProperty.PASSWORD))
final MqttWillMessageFW will = willMessageBuilder.build();
final int willPayloadSize = willMessageBuilder.sizeof();
- if (!sessionStream.hasSessionWindow(willPayloadSize))
+ if (!session.hasSessionWindow(willPayloadSize))
{
break decode;
}
- sessionStream.doSessionData(traceId, willPayloadSize, sessionDataExBuilder.build(), will);
+ session.doSessionData(traceId, willPayloadSize, sessionDataExBuilder.build(), will);
}
progress = connectPayloadLimit;
}
@@ -1827,9 +1909,9 @@ else if (this.authField.equals(MqttConnectProperty.PASSWORD))
doEncodeConnack(traceId, authorization, reasonCode, assignedClientId, false, null);
}
- if (sessionStream != null)
+ if (session != null)
{
- sessionStream.doSessionAppEnd(traceId, EMPTY_OCTETS);
+ session.doSessionAppEnd(traceId, EMPTY_OCTETS);
}
doNetworkEnd(traceId, authorization);
@@ -1841,28 +1923,6 @@ else if (this.authField.equals(MqttConnectProperty.PASSWORD))
return progress;
}
- private void resolveSession(
- long traceId,
- long resolvedId)
- {
- final int flags = connectFlags & (CLEAN_START_FLAG_MASK | WILL_FLAG_MASK);
-
- final MqttBeginExFW.Builder builder = mqttSessionBeginExRW.wrap(sessionExtBuffer, 0, sessionExtBuffer.capacity())
- .typeId(mqttTypeId)
- .session(s -> s
- .flags(flags)
- .expiry(sessionExpiry)
- .clientId(clientId)
- );
-
- if (sessionStream == null)
- {
- sessionStream = new MqttSessionStream(originId, resolvedId, 0);
- }
-
- sessionStream.doSessionBegin(traceId, affinity, builder.build());
- }
-
private MqttPublishStream resolvePublishStream(
long traceId,
long authorization,
@@ -1879,7 +1939,7 @@ private MqttPublishStream resolvePublishStream(
final long resolvedId = resolved.id;
final int topicKey = topicKey(topic);
- stream = publishStreams.computeIfAbsent(topicKey, s -> new MqttPublishStream(routedId, resolvedId, topic));
+ stream = publishes.computeIfAbsent(topicKey, s -> new MqttPublishStream(routedId, resolvedId, topic));
stream.doPublishBegin(traceId, affinity);
}
else
@@ -1915,7 +1975,7 @@ else if (mqttPublishHeaderRO.retained && !retainAvailable(capabilities))
else
{
final int topicKey = topicKey(mqttPublishHeaderRO.topic);
- MqttPublishStream stream = publishStreams.get(topicKey);
+ MqttPublishStream stream = publishes.get(topicKey);
final MqttDataExFW.Builder builder = mqttPublishDataExRW.wrap(dataExtBuffer, 0, dataExtBuffer.capacity())
.typeId(mqttTypeId)
@@ -2066,8 +2126,8 @@ private void onDecodeSubscribe(
final MqttSessionStateFW.Builder state =
mqttSessionStateFW.wrap(sessionStateBuffer, 0, sessionStateBuffer.capacity());
- sessionStream.unAckedSubscriptions.addAll(newSubscriptions);
- sessionStream.subscriptions.forEach(sub ->
+ session.unAckedSubscriptions.addAll(newSubscriptions);
+ session.subscriptions.forEach(sub ->
state.subscriptionsItem(subscriptionBuilder ->
subscriptionBuilder
.subscriptionId(sub.id)
@@ -2086,11 +2146,11 @@ private void onDecodeSubscribe(
final MqttSessionStateFW sessionState = state.build();
final int payloadSize = sessionState.sizeof();
- if (!sessionStream.hasSessionWindow(payloadSize))
+ if (!session.hasSessionWindow(payloadSize))
{
break decode;
}
- sessionStream.doSessionData(traceId, payloadSize, sessionDataExBuilder.build(), sessionState);
+ session.doSessionData(traceId, payloadSize, sessionDataExBuilder.build(), sessionState);
}
}
doSignalKeepAliveTimeout(traceId);
@@ -2131,7 +2191,7 @@ private void openSubscribeStreams(
subscriptionsByRouteId.forEach((key, value) ->
{
- MqttSubscribeStream stream = subscribeStreams.computeIfAbsent(key, s ->
+ MqttSubscribeStream stream = subscribes.computeIfAbsent(key, s ->
new MqttSubscribeStream(routedId, key, implicitSubscribe));
stream.packetId = packetId;
value.removeIf(s -> s.reasonCode > GRANTED_QOS_2);
@@ -2181,17 +2241,17 @@ private void onDecodeUnsubscribe(
}
else
{
- List<Subscription> unAckedSubscriptions = sessionStream.unAckedSubscriptions.stream()
+ List<Subscription> unAckedSubscriptions = session.unAckedSubscriptions.stream()
.filter(s -> topicFilters.contains(s.filter) && subscribePacketIds.containsKey(s.id))
.collect(Collectors.toList());
if (!unAckedSubscriptions.isEmpty())
{
- sessionStream.deferredUnsubscribes.put(packetId, topicFilters);
+ session.deferredUnsubscribes.put(packetId, topicFilters);
return;
}
boolean matchingSubscription = topicFilters.stream().anyMatch(tf ->
- sessionStream.subscriptions.stream().anyMatch(s -> s.filter.equals(tf)));
+ session.subscriptions.stream().anyMatch(s -> s.filter.equals(tf)));
if (matchingSubscription)
{
topicFilters.forEach(filter -> unsubscribePacketIds.put(filter, packetId));
@@ -2214,7 +2274,7 @@ private void doSendSessionState(
.typeId(mqttTypeId)
.session(sessionBuilder -> sessionBuilder.kind(k -> k.set(MqttSessionDataKind.STATE)));
- List<Subscription> currentState = sessionStream.subscriptions();
+ List<Subscription> currentState = session.subscriptions();
List<Subscription> newState = currentState.stream()
.filter(subscription -> !topicFilters.contains(subscription.filter))
.collect(Collectors.toList());
@@ -2233,7 +2293,7 @@ private void doSendSessionState(
final MqttSessionStateFW sessionState = sessionStateBuilder.build();
final int payloadSize = sessionState.sizeof();
- sessionStream.doSessionData(traceId, payloadSize, sessionDataExBuilder.build(), sessionState);
+ session.doSessionData(traceId, payloadSize, sessionDataExBuilder.build(), sessionState);
}
private void sendUnsuback(
@@ -2257,7 +2317,7 @@ private void sendUnsuback(
final MqttBindingConfig binding = bindings.get(routedId);
final MqttRouteConfig resolved =
binding != null ? binding.resolveSubscribe(sessionId, topicFilter) : null;
- final MqttSubscribeStream stream = subscribeStreams.get(resolved.id);
+ final MqttSubscribeStream stream = subscribes.get(resolved.id);
Optional<Subscription> subscription = stream.getSubscriptionByFilter(topicFilter, newState);
@@ -2305,11 +2365,11 @@ private void onDecodeDisconnect(
{
if (disconnect != null && disconnect.reasonCode() == DISCONNECT_WITH_WILL_MESSAGE)
{
- sessionStream.doSessionAbort(traceId);
+ session.doSessionAbort(traceId);
}
else
{
- sessionStream.doSessionAppEnd(traceId, EMPTY_OCTETS);
+ session.doSessionAppEnd(traceId, EMPTY_OCTETS);
}
}
@@ -2893,7 +2953,7 @@ private void encodeNetwork(
{
cleanupEncodeSlot();
- if (publishStreams.isEmpty() && subscribeStreams.isEmpty() && decoder == decodeIgnoreAll)
+ if (publishes.isEmpty() && subscribes.isEmpty() && decoder == decodeIgnoreAll)
{
doNetworkEnd(traceId, authorization);
}
@@ -2984,11 +3044,11 @@ private void cleanupNetwork(
private void cleanupStreamsUsingAbort(
long traceId)
{
- publishStreams.values().forEach(s -> s.cleanupAbort(traceId));
- subscribeStreams.values().forEach(s -> s.cleanupAbort(traceId));
- if (sessionStream != null)
+ publishes.values().forEach(s -> s.cleanupAbort(traceId));
+ subscribes.values().forEach(s -> s.cleanupAbort(traceId));
+ if (session != null)
{
- sessionStream.cleanupAbort(traceId);
+ session.cleanupAbort(traceId);
}
}
@@ -2996,11 +3056,11 @@ private void closeStreams(
long traceId,
long authorization)
{
- publishStreams.values().forEach(s -> s.doPublishAppEnd(traceId));
- subscribeStreams.values().forEach(s -> s.doSubscribeAppEnd(traceId));
- if (sessionStream != null)
+ publishes.values().forEach(s -> s.doPublishAppEnd(traceId));
+ subscribes.values().forEach(s -> s.doSubscribeAppEnd(traceId));
+ if (session != null)
{
- sessionStream.cleanupEnd(traceId);
+ session.cleanupEnd(traceId);
}
}
@@ -3399,7 +3459,7 @@ private void onSessionData(
subscription.reasonCode = filter.reasonCode();
newState.add(subscription);
});
- List<Subscription> currentSubscriptions = sessionStream.subscriptions();
+ List<Subscription> currentSubscriptions = session.subscriptions();
if (newState.size() >= currentSubscriptions.size())
{
List<Subscription> newSubscriptions = newState.stream()
@@ -3441,7 +3501,7 @@ private void onSessionData(
}
}
}
- sessionStream.setSubscriptions(newState);
+ session.setSubscriptions(newState);
}
}
@@ -3971,7 +4031,7 @@ private void doPublishAppEnd(
if (!MqttState.initialClosed(state))
{
doCancelPublishExpiration();
- publishStreams.remove(topicKey);
+ publishes.remove(topicKey);
doEnd(application, originId, routedId, initialId, initialSeq, initialAck, initialMax,
traceId, sessionId, EMPTY_OCTETS);
}
@@ -3991,7 +4051,7 @@ private void setPublishNetClosed()
if (MqttState.closed(state))
{
- publishStreams.remove(topicKey);
+ publishes.remove(topicKey);
}
}
@@ -4004,7 +4064,7 @@ private void setPublishAppClosed()
if (MqttState.closed(state))
{
- publishStreams.remove(topicKey);
+ publishes.remove(topicKey);
}
}
@@ -4184,7 +4244,7 @@ private void setNetClosed()
if (MqttState.closed(state))
{
- subscribeStreams.remove(routedId);
+ subscribes.remove(routedId);
}
}
@@ -4315,10 +4375,10 @@ private void onSubscribeWindow(
if (!subscriptions.isEmpty() && !adminSubscribe)
{
- if (!sessionStream.deferredUnsubscribes.isEmpty())
+ if (!session.deferredUnsubscribes.isEmpty())
{
Iterator<Map.Entry<Integer, List<String>>> iterator =
- sessionStream.deferredUnsubscribes.entrySet().iterator();
+ session.deferredUnsubscribes.entrySet().iterator();
List<String> ackedTopicFilters = new ArrayList<>();
while (iterator.hasNext())
{
@@ -4446,7 +4506,7 @@ private void setSubscribeAppClosed()
{
if (MqttState.closed(state))
{
- subscribeStreams.remove(routedId);
+ subscribes.remove(routedId);
}
}
diff --git a/runtime/binding-mqtt/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/internal/stream/server/ConnectionIT.java b/runtime/binding-mqtt/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/internal/stream/server/ConnectionIT.java
index 61294723de..307d5f0bed 100644
--- a/runtime/binding-mqtt/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/internal/stream/server/ConnectionIT.java
+++ b/runtime/binding-mqtt/src/test/java/io/aklivity/zilla/runtime/binding/mqtt/internal/stream/server/ConnectionIT.java
@@ -71,7 +71,7 @@ public void shouldConnect() throws Exception
@Configuration("server.credentials.username.yaml")
@Specification({
"${net}/connect.username.authentication.successful/client",
- "${app}/session.connect/server"})
+ "${app}/session.connect.authorization/server"})
public void shouldAuthenticateUsernameAndConnect() throws Exception
{
k3po.finish();
@@ -90,7 +90,7 @@ public void shouldFailUsernameAuthentication() throws Exception
@Configuration("server.credentials.password.yaml")
@Specification({
"${net}/connect.password.authentication.successful/client",
- "${app}/session.connect/server"})
+ "${app}/session.connect.authorization/server"})
public void shouldAuthenticatePasswordAndConnect() throws Exception
{
k3po.finish();
diff --git a/runtime/binding-proxy/pom.xml b/runtime/binding-proxy/pom.xml
index ce2e709c26..294b6a49d8 100644
--- a/runtime/binding-proxy/pom.xml
+++ b/runtime/binding-proxy/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/binding-sse-kafka/pom.xml b/runtime/binding-sse-kafka/pom.xml
index 72601309ea..f40027e35a 100644
--- a/runtime/binding-sse-kafka/pom.xml
+++ b/runtime/binding-sse-kafka/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/binding-sse/pom.xml b/runtime/binding-sse/pom.xml
index e3b30cd69a..c377a5c6cc 100644
--- a/runtime/binding-sse/pom.xml
+++ b/runtime/binding-sse/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/binding-tcp/pom.xml b/runtime/binding-tcp/pom.xml
index 9132c2e26c..c346899c1d 100644
--- a/runtime/binding-tcp/pom.xml
+++ b/runtime/binding-tcp/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/binding-tls/pom.xml b/runtime/binding-tls/pom.xml
index ee095271b2..a087140e0f 100644
--- a/runtime/binding-tls/pom.xml
+++ b/runtime/binding-tls/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/binding-ws/pom.xml b/runtime/binding-ws/pom.xml
index 39fa0df767..01492c9e1b 100644
--- a/runtime/binding-ws/pom.xml
+++ b/runtime/binding-ws/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/command-metrics/pom.xml b/runtime/command-metrics/pom.xml
index 5e1ce6c8f4..089fe3ef72 100644
--- a/runtime/command-metrics/pom.xml
+++ b/runtime/command-metrics/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/command-start/pom.xml b/runtime/command-start/pom.xml
index 382e070139..6568c64827 100644
--- a/runtime/command-start/pom.xml
+++ b/runtime/command-start/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/command-stop/pom.xml b/runtime/command-stop/pom.xml
index bfc4706975..552a3910e3 100644
--- a/runtime/command-stop/pom.xml
+++ b/runtime/command-stop/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/command/pom.xml b/runtime/command/pom.xml
index f98f9caae1..8de7fcdd43 100644
--- a/runtime/command/pom.xml
+++ b/runtime/command/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/engine/pom.xml b/runtime/engine/pom.xml
index 08ff1f358a..3fc56c7ae2 100644
--- a/runtime/engine/pom.xml
+++ b/runtime/engine/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/Engine.java b/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/Engine.java
index c9586708d6..ed793a8f25 100644
--- a/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/Engine.java
+++ b/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/Engine.java
@@ -99,6 +99,7 @@ public final class Engine implements Collector, AutoCloseable
private final URL rootConfigURL;
private final Collection<DispatchAgent> dispatchers;
private final boolean readonly;
+ private final EngineConfiguration config;
private Future watcherTaskRef;
Engine(
@@ -114,6 +115,7 @@ public final class Engine implements Collector, AutoCloseable
Collection affinities,
boolean readonly)
{
+ this.config = config;
this.nextTaskId = new AtomicInteger();
this.factory = Executors.defaultThreadFactory();
@@ -248,6 +250,11 @@ public void start() throws Exception
@Override
public void close() throws Exception
{
+ if (config.drainOnClose())
+ {
+ dispatchers.forEach(DispatchAgent::drain);
+ }
+
final List<Throwable> errors = new ArrayList<>();
watcherTask.close();
diff --git a/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/internal/registry/DispatchAgent.java b/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/internal/registry/DispatchAgent.java
index d06cdf1730..fc15bb933e 100644
--- a/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/internal/registry/DispatchAgent.java
+++ b/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/internal/registry/DispatchAgent.java
@@ -310,7 +310,7 @@ public DispatchAgent(
this.timerWheel = new DeadlineTimerWheel(MILLISECONDS, currentTimeMillis(), 512, 1024);
this.tasksByTimerId = new Long2ObjectHashMap<>();
this.futuresById = new Long2ObjectHashMap<>();
- this.signaler = new ElektronSignaler(executor);
+ this.signaler = new ElektronSignaler(executor, Math.max(config.bufferSlotCapacity(), 512));
this.poller = new Poller();
@@ -713,17 +713,6 @@ public int doWork()
@Override
public void onClose()
{
- final long closeAt = System.nanoTime();
- while (config.drainOnClose() &&
- streamsBuffer.consumerPosition() < streamsBuffer.producerPosition())
- {
- ThreadHints.onSpinWait();
-
- if (System.nanoTime() - closeAt >= Duration.ofSeconds(30).toNanos())
- {
- break;
- }
- }
configuration.detachAll();
poller.onClose();
@@ -769,6 +758,20 @@ public void onClose()
}
}
+ public void drain()
+ {
+ final long closeAt = System.nanoTime();
+ while (streamsBuffer.consumerPosition() < streamsBuffer.producerPosition())
+ {
+ ThreadHints.onSpinWait();
+
+ if (System.nanoTime() - closeAt >= Duration.ofSeconds(30).toNanos())
+ {
+ break;
+ }
+ }
+ }
+
@Override
public String toString()
{
@@ -1669,9 +1672,10 @@ public Affinity resolveAffinity(
return affinity;
}
- private static SignalFW.Builder newSignalRW()
+ private static SignalFW.Builder newSignalRW(
+ int capacity)
{
- MutableDirectBuffer buffer = new UnsafeBuffer(new byte[512]);
+ MutableDirectBuffer buffer = new UnsafeBuffer(new byte[capacity]);
return new SignalFW.Builder().wrap(buffer, 0, buffer.capacity());
}
@@ -1688,16 +1692,18 @@ private Int2ObjectHashMap[] initDispatcher()
private final class ElektronSignaler implements Signaler
{
- private final ThreadLocal<SignalFW.Builder> signalRW = withInitial(DispatchAgent::newSignalRW);
+ private final ThreadLocal<SignalFW.Builder> signalRW;
private final ExecutorService executorService;
private long nextFutureId;
private ElektronSignaler(
- ExecutorService executorService)
+ ExecutorService executorService,
+ int slotCapacity)
{
this.executorService = executorService;
+ signalRW = withInitial(() -> newSignalRW(slotCapacity));
}
public void executeTaskAt(
diff --git a/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/internal/registry/NamespaceRegistry.java b/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/internal/registry/NamespaceRegistry.java
index 6e0ca671d4..ce36736c1d 100644
--- a/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/internal/registry/NamespaceRegistry.java
+++ b/runtime/engine/src/main/java/io/aklivity/zilla/runtime/engine/internal/registry/NamespaceRegistry.java
@@ -240,7 +240,7 @@ else if (metricGroupId == originTypeId)
private void detachBinding(
BindingConfig config)
{
- int bindingId = supplyLabelId.applyAsInt(config.name);
+ int bindingId = NamespacedId.localId(config.id);
BindingRegistry context = bindingsById.remove(bindingId);
if (context != null)
{
@@ -264,7 +264,7 @@ private void attachVault(
private void detachVault(
VaultConfig config)
{
- int vaultId = supplyLabelId.applyAsInt(config.name);
+ int vaultId = NamespacedId.localId(config.id);
VaultRegistry context = vaultsById.remove(vaultId);
if (context != null)
{
@@ -287,7 +287,7 @@ private void attachGuard(
private void detachGuard(
GuardConfig config)
{
- int guardId = supplyLabelId.applyAsInt(config.name);
+ int guardId = NamespacedId.localId(config.id);
GuardRegistry context = guardsById.remove(guardId);
if (context != null)
{
@@ -310,7 +310,7 @@ private void attachCatalog(
private void detachCatalog(
CatalogConfig config)
{
- int catalogId = supplyLabelId.applyAsInt(config.name);
+ int catalogId = NamespacedId.localId(config.id);
CatalogRegistry context = catalogsById.remove(catalogId);
if (context != null)
{
@@ -330,7 +330,7 @@ private void attachMetric(
private void detachMetric(
MetricConfig config)
{
- int metricId = supplyLabelId.applyAsInt(config.name);
+ int metricId = NamespacedId.localId(config.id);
metricsById.remove(metricId);
}
@@ -349,7 +349,7 @@ private void attachExporter(
private void detachExporter(
ExporterConfig config)
{
- int exporterId = supplyLabelId.applyAsInt(config.name);
+ int exporterId = NamespacedId.localId(config.id);
ExporterRegistry registry = exportersById.remove(exporterId);
if (registry != null)
{
diff --git a/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/internal/EngineIT.java b/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/internal/EngineIT.java
index 158e3ec490..a0713f833d 100644
--- a/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/internal/EngineIT.java
+++ b/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/internal/EngineIT.java
@@ -18,7 +18,6 @@
import static java.util.concurrent.TimeUnit.SECONDS;
import static org.junit.rules.RuleChain.outerRule;
-import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.DisableOnDebug;
@@ -118,7 +117,6 @@ public void shouldReceiveClientSentWriteClose() throws Exception
k3po.finish();
}
- @Ignore("GitHub Actions")
@Test
@Configuration("server.yaml")
@Specification({
diff --git a/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/internal/ReconfigureFileIT.java b/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/internal/ReconfigureFileIT.java
index e57e33072d..73d3f294c4 100644
--- a/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/internal/ReconfigureFileIT.java
+++ b/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/internal/ReconfigureFileIT.java
@@ -145,7 +145,7 @@ public void shouldReconfigureWhenModifiedUsingSymlink() throws Exception
k3po.finish();
}
- @Ignore("Github Actions")
+ @Ignore("Fails on JDK 13")
@Test
@Configuration("zilla.reconfigure.modify.complex.chain.json")
@Specification({
diff --git a/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/test/EngineRule.java b/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/test/EngineRule.java
index 0adfc1f262..c2660008a8 100644
--- a/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/test/EngineRule.java
+++ b/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/test/EngineRule.java
@@ -303,6 +303,7 @@ public void evaluate() throws Throwable
final List<Throwable> errors = new ArrayList<>();
final ErrorHandler errorHandler = ex ->
{
+ ex.printStackTrace();
errors.add(ex);
baseThread.interrupt();
};
diff --git a/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/test/internal/k3po/ext/behavior/ZillaScope.java b/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/test/internal/k3po/ext/behavior/ZillaScope.java
index d9c7393577..47ea8a7753 100644
--- a/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/test/internal/k3po/ext/behavior/ZillaScope.java
+++ b/runtime/engine/src/test/java/io/aklivity/zilla/runtime/engine/test/internal/k3po/ext/behavior/ZillaScope.java
@@ -16,7 +16,6 @@
package io.aklivity.zilla.runtime.engine.test.internal.k3po.ext.behavior;
import static io.aklivity.zilla.runtime.engine.internal.stream.BudgetId.ownerIndex;
-import static io.aklivity.zilla.runtime.engine.test.internal.k3po.ext.behavior.ZillaTransmission.HALF_DUPLEX;
import java.nio.file.Path;
import java.util.function.LongSupplier;
@@ -214,7 +213,7 @@ public void doClose(
ZillaTarget target = supplyTarget(channel);
target.doClose(channel, handlerFuture);
- if (!readClosed && channel.getConfig().getTransmission() == HALF_DUPLEX)
+ if (!readClosed)
{
final ChannelFuture abortFuture = Channels.future(channel);
source.doAbortInput(channel, abortFuture);
diff --git a/runtime/exporter-prometheus/pom.xml b/runtime/exporter-prometheus/pom.xml
index 715d6f853f..a1e166d9cb 100644
--- a/runtime/exporter-prometheus/pom.xml
+++ b/runtime/exporter-prometheus/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/guard-jwt/pom.xml b/runtime/guard-jwt/pom.xml
index 29d532a47f..3686c69746 100644
--- a/runtime/guard-jwt/pom.xml
+++ b/runtime/guard-jwt/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/metrics-grpc/pom.xml b/runtime/metrics-grpc/pom.xml
index 3d5bbe2e03..e5b3f16308 100644
--- a/runtime/metrics-grpc/pom.xml
+++ b/runtime/metrics-grpc/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/metrics-http/pom.xml b/runtime/metrics-http/pom.xml
index ac1c532d72..2c413a0768 100644
--- a/runtime/metrics-http/pom.xml
+++ b/runtime/metrics-http/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/metrics-stream/pom.xml b/runtime/metrics-stream/pom.xml
index 8f1900578e..8d66674d9e 100644
--- a/runtime/metrics-stream/pom.xml
+++ b/runtime/metrics-stream/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/pom.xml b/runtime/pom.xml
index 3de5d223ff..82473e074c 100644
--- a/runtime/pom.xml
+++ b/runtime/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>zilla</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/runtime/vault-filesystem/pom.xml b/runtime/vault-filesystem/pom.xml
index 78341b8f74..d506cc3068 100644
--- a/runtime/vault-filesystem/pom.xml
+++ b/runtime/vault-filesystem/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>runtime</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/specs/binding-echo.spec/pom.xml b/specs/binding-echo.spec/pom.xml
index 39a5e2cbef..cb8596b529 100644
--- a/specs/binding-echo.spec/pom.xml
+++ b/specs/binding-echo.spec/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>specs</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/specs/binding-fan.spec/pom.xml b/specs/binding-fan.spec/pom.xml
index bb262afdad..6b1f6f725f 100644
--- a/specs/binding-fan.spec/pom.xml
+++ b/specs/binding-fan.spec/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>specs</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/specs/binding-filesystem.spec/pom.xml b/specs/binding-filesystem.spec/pom.xml
index 2a49a709fa..ef7fad24a2 100644
--- a/specs/binding-filesystem.spec/pom.xml
+++ b/specs/binding-filesystem.spec/pom.xml
@@ -8,7 +8,7 @@
<groupId>io.aklivity.zilla</groupId>
<artifactId>specs</artifactId>
- <version>0.9.55</version>
+ <version>0.9.56</version>
<relativePath>../pom.xml</relativePath>
diff --git a/specs/binding-grpc-kafka.spec/pom.xml b/specs/binding-grpc-kafka.spec/pom.xml
index 97e0388662..88ff04cebd 100644
--- a/specs/binding-grpc-kafka.spec/pom.xml
+++ b/specs/binding-grpc-kafka.spec/pom.xml
@@ -8,7 +8,7 @@
         <groupId>io.aklivity.zilla</groupId>
         <artifactId>specs</artifactId>
-        <version>0.9.55</version>
+        <version>0.9.56</version>
         <relativePath>../pom.xml</relativePath>
diff --git a/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/grpc/produce/unary.rpc.error/client.rpt b/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/grpc/produce/unary.rpc.error/client.rpt
new file mode 100644
index 0000000000..dfc9f9f631
--- /dev/null
+++ b/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/grpc/produce/unary.rpc.error/client.rpt
@@ -0,0 +1,42 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/grpc0"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+write zilla:begin.ext ${grpc:beginEx()
+ .typeId(zilla:id("grpc"))
+ .scheme("http")
+ .authority("localhost:8080")
+ .service("example.EchoService")
+ .method("EchoUnary")
+ .metadata("custom", "test")
+ .metadata("idempotency-key", "59410e57-3e0f-4b61-9328-f645a7968ac8")
+ .build()}
+connected
+
+write ${grpc:protobuf()
+ .string(1, "Hello World")
+ .build()}
+write flush
+
+write close
+
+read zilla:abort.ext ${grpc:abortEx()
+ .typeId(zilla:id("grpc"))
+ .status("9")
+ .build()}
+read aborted
diff --git a/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/grpc/produce/unary.rpc.error/server.rpt b/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/grpc/produce/unary.rpc.error/server.rpt
new file mode 100644
index 0000000000..1d240780ec
--- /dev/null
+++ b/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/grpc/produce/unary.rpc.error/server.rpt
@@ -0,0 +1,42 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/grpc0"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+accepted
+
+read zilla:begin.ext ${grpc:matchBeginEx()
+ .typeId(zilla:id("grpc"))
+ .scheme("http")
+ .authority("localhost:8080")
+ .service("example.EchoService")
+ .method("EchoUnary")
+ .metadata("custom", "test")
+ .metadata("idempotency-key", "59410e57-3e0f-4b61-9328-f645a7968ac8")
+ .build()}
+connected
+
+read ${grpc:protobuf()
+ .string(1, "Hello World")
+ .build()}
+
+read closed
+
+write zilla:abort.ext ${grpc:abortEx()
+ .typeId(zilla:id("grpc"))
+ .status("9")
+ .build()}
+write abort
diff --git a/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/kafka/produce/unary.rpc.error/client.rpt b/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/kafka/produce/unary.rpc.error/client.rpt
new file mode 100644
index 0000000000..5574e9268d
--- /dev/null
+++ b/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/kafka/produce/unary.rpc.error/client.rpt
@@ -0,0 +1,110 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("PRODUCE_ONLY")
+ .topic("requests")
+ .partition(-1, -2)
+ .ackMode("LEADER_ONLY")
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("test")
+ .header("zilla:identity", "test")
+ .header("zilla:service", "example.EchoService")
+ .header("zilla:method", "EchoUnary")
+ .header("zilla:reply-to", "responses")
+ .header("zilla:correlation-id", "59410e57-3e0f-4b61-9328-f645a7968ac8-479f2c3fb58bc3f04bbe15440a657670")
+ .build()
+ .build()}
+write ${grpc:protobuf()
+ .string(1, "Hello World")
+ .build()}
+write flush
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("test")
+ .header("zilla:identity", "test")
+ .header("zilla:service", "example.EchoService")
+ .header("zilla:method", "EchoUnary")
+ .header("zilla:reply-to", "responses")
+ .header("zilla:correlation-id", "59410e57-3e0f-4b61-9328-f645a7968ac8-479f2c3fb58bc3f04bbe15440a657670")
+ .build()
+ .build()}
+
+write flush
+
+write close
+write notify SENT_ASYNC_REQUEST
+read closed
+
+connect await SENT_ASYNC_REQUEST
+ "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("FETCH_ONLY")
+ .topic("responses")
+ .partition(-1, -2)
+ .filter()
+ .header("zilla:correlation-id", "59410e57-3e0f-4b61-9328-f645a7968ac8-479f2c3fb58bc3f04bbe15440a657670")
+ .build()
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .partition(0, 1, 1)
+ .progress(0, 2)
+ .progress(1, 1)
+ .key("test")
+ .header("zilla:status", "9")
+ .build()
+ .build()}
+read zilla:data.null
+
+read advised zilla:flush ${kafka:matchFlushEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .fetch()
+ .progress(0, 2, 2, 2)
+ .build()
+ .build()}
+
+write close
+read closed
diff --git a/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/kafka/produce/unary.rpc.error/server.rpt b/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/kafka/produce/unary.rpc.error/server.rpt
new file mode 100644
index 0000000000..cd2dbd2925
--- /dev/null
+++ b/specs/binding-grpc-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/kafka/streams/kafka/produce/unary.rpc.error/server.rpt
@@ -0,0 +1,109 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("PRODUCE_ONLY")
+ .topic("requests")
+ .partition(-1, -2)
+ .ackMode("LEADER_ONLY")
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("test")
+ .header("zilla:identity", "test")
+ .header("zilla:service", "example.EchoService")
+ .header("zilla:method", "EchoUnary")
+ .header("zilla:reply-to", "responses")
+ .header("zilla:correlation-id", "59410e57-3e0f-4b61-9328-f645a7968ac8-479f2c3fb58bc3f04bbe15440a657670")
+ .build()
+ .build()}
+
+read ${grpc:protobuf()
+ .string(1, "Hello World")
+ .build()}
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("test")
+ .header("zilla:identity", "test")
+ .header("zilla:service", "example.EchoService")
+ .header("zilla:method", "EchoUnary")
+ .header("zilla:reply-to", "responses")
+ .header("zilla:correlation-id", "59410e57-3e0f-4b61-9328-f645a7968ac8-479f2c3fb58bc3f04bbe15440a657670")
+ .build()
+ .build()}
+read zilla:data.null
+
+read closed
+write close
+
+accepted
+
+read zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("FETCH_ONLY")
+ .topic("responses")
+ .partition(-1, -2)
+ .filter()
+ .header("zilla:correlation-id", "59410e57-3e0f-4b61-9328-f645a7968ac8-479f2c3fb58bc3f04bbe15440a657670")
+ .build()
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .timestamp(kafka:timestamp())
+ .partition(0, 1, 1)
+ .progress(0, 2)
+ .progress(1, 1)
+ .key("test")
+ .header("zilla:status", "9")
+ .build()
+ .build()}
+
+write flush
+
+write advise zilla:flush ${kafka:flushEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .fetch()
+ .progress(0, 2, 2, 2)
+ .build()
+ .build()}
+
+read closed
+write close
diff --git a/specs/binding-grpc-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/kafka/streams/GrpcProduceIT.java b/specs/binding-grpc-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/kafka/streams/GrpcProduceIT.java
index c32b62a8fa..9c90117277 100644
--- a/specs/binding-grpc-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/kafka/streams/GrpcProduceIT.java
+++ b/specs/binding-grpc-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/kafka/streams/GrpcProduceIT.java
@@ -98,6 +98,15 @@ public void shouldExchangeMessageInUnary() throws Exception
k3po.finish();
}
+ @Test
+ @Specification({
+ "${grpc}/unary.rpc.error/client",
+ "${grpc}/unary.rpc.error/server"})
+ public void shouldRejectUnaryRpcWithError() throws Exception
+ {
+ k3po.finish();
+ }
+
@Test
@Specification({
"${grpc}/unary.rpc.rejected/client",
diff --git a/specs/binding-grpc-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/kafka/streams/KafkaProduceIT.java b/specs/binding-grpc-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/kafka/streams/KafkaProduceIT.java
index e2a9ff62d9..4776f9219b 100644
--- a/specs/binding-grpc-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/kafka/streams/KafkaProduceIT.java
+++ b/specs/binding-grpc-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/kafka/streams/KafkaProduceIT.java
@@ -107,6 +107,15 @@ public void shouldRejectUnaryRpc() throws Exception
k3po.finish();
}
+ @Test
+ @Specification({
+ "${kafka}/unary.rpc.error/client",
+ "${kafka}/unary.rpc.error/server"})
+ public void shouldRejectUnaryRpcWithError() throws Exception
+ {
+ k3po.finish();
+ }
+
@Test
@Specification({
"${kafka}/unary.rpc.sent.write.abort/client",
diff --git a/specs/binding-grpc.spec/pom.xml b/specs/binding-grpc.spec/pom.xml
index da14f3e444..d48dff8e31 100644
--- a/specs/binding-grpc.spec/pom.xml
+++ b/specs/binding-grpc.spec/pom.xml
@@ -8,7 +8,7 @@
         <groupId>io.aklivity.zilla</groupId>
         <artifactId>specs</artifactId>
-        <version>0.9.55</version>
+        <version>0.9.56</version>
         <relativePath>../pom.xml</relativePath>
diff --git a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/response.missing.grpc.status/client.rpt b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/response.missing.grpc.status/client.rpt
new file mode 100644
index 0000000000..fc13096d57
--- /dev/null
+++ b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/response.missing.grpc.status/client.rpt
@@ -0,0 +1,45 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/app0"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+write zilla:begin.ext ${grpc:beginEx()
+ .typeId(zilla:id("grpc"))
+ .scheme("http")
+ .authority("localhost:8080")
+ .service("example.EchoService")
+ .method("EchoUnary")
+ .metadata("custom", "test")
+ .build()}
+connected
+
+write ${grpc:protobuf()
+ .string(1, "Hello World")
+ .build()}
+write flush
+
+write close
+
+read ${grpc:protobuf()
+ .string(1, "Hello World")
+ .build()}
+
+read zilla:abort.ext ${grpc:abortEx()
+ .typeId(zilla:id("grpc"))
+ .status("13")
+ .build()}
+read aborted
diff --git a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/response.missing.grpc.status/server.rpt b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/response.missing.grpc.status/server.rpt
new file mode 100644
index 0000000000..2bbfdf4e81
--- /dev/null
+++ b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/response.missing.grpc.status/server.rpt
@@ -0,0 +1,46 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/app0"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+accepted
+
+read zilla:begin.ext ${grpc:matchBeginEx()
+ .typeId(zilla:id("grpc"))
+ .scheme("http")
+ .authority("localhost:8080")
+ .service("example.EchoService")
+ .method("EchoUnary")
+ .metadata("custom", "test")
+ .build()}
+connected
+
+read ${grpc:protobuf()
+ .string(1, "Hello World")
+ .build()}
+
+read closed
+
+write ${grpc:protobuf()
+ .string(1, "Hello World")
+ .build()}
+write flush
+
+write zilla:abort.ext ${grpc:abortEx()
+ .typeId(zilla:id("grpc"))
+ .status("13")
+ .build()}
+write abort
diff --git a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/server.send.write.abort.on.open.response/client.rpt b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/server.send.write.abort.on.open.response/client.rpt
index 6ea70ed517..adbca04b17 100644
--- a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/server.send.write.abort.on.open.response/client.rpt
+++ b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/server.send.write.abort.on.open.response/client.rpt
@@ -36,7 +36,7 @@ write close
read zilla:abort.ext ${grpc:abortEx()
.typeId(zilla:id("grpc"))
- .status("10")
+ .status("9")
.build()}
read aborted
diff --git a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/server.send.write.abort.on.open.response/server.rpt b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/server.send.write.abort.on.open.response/server.rpt
index cdd5453625..f7289de25b 100644
--- a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/server.send.write.abort.on.open.response/server.rpt
+++ b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/application/unary.rpc/server.send.write.abort.on.open.response/server.rpt
@@ -35,7 +35,7 @@ read closed
write zilla:abort.ext ${grpc:abortEx()
.typeId(zilla:id("grpc"))
- .status("10")
+ .status("9")
.build()}
write flush
diff --git a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.missing.grpc.status/client.rpt b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.missing.grpc.status/client.rpt
new file mode 100644
index 0000000000..866729ae1b
--- /dev/null
+++ b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.missing.grpc.status/client.rpt
@@ -0,0 +1,53 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/net0"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+write zilla:begin.ext ${http:beginEx()
+ .typeId(zilla:id("http"))
+ .header(":method", "POST")
+ .header(":scheme", "http")
+ .header(":authority", "localhost:8080")
+ .header(":path", "/example.EchoService/EchoUnary")
+ .header("content-type", "application/grpc")
+ .header("te", "trailers")
+ .header("custom", "test")
+ .build()}
+connected
+
+write ${grpc:message()
+ .string(1, "Hello World")
+ .build()}
+write flush
+
+write close
+
+read zilla:begin.ext ${http:matchBeginEx()
+ .typeId(zilla:id("http"))
+ .header(":status", "200")
+ .header("content-type", "application/grpc")
+ .header("grpc-encoding", "identity")
+ .build()}
+
+read ${grpc:message()
+ .string(1, "Hello World")
+ .build()}
+
+read zilla:end.ext ${http:endEx()
+ .typeId(zilla:id("http"))
+ .build()}
+read closed
diff --git a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.missing.grpc.status/server.rpt b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.missing.grpc.status/server.rpt
new file mode 100644
index 0000000000..d3ce8b85fe
--- /dev/null
+++ b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.missing.grpc.status/server.rpt
@@ -0,0 +1,55 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/net0"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+accepted
+
+read zilla:begin.ext ${http:matchBeginEx()
+ .typeId(zilla:id("http"))
+ .header(":method", "POST")
+ .header(":scheme", "http")
+ .header(":authority", "localhost:8080")
+ .header(":path", "/example.EchoService/EchoUnary")
+ .header("content-type", "application/grpc")
+ .header("te", "trailers")
+ .header("custom", "test")
+ .build()}
+connected
+
+read ${grpc:message()
+ .string(1, "Hello World")
+ .build()}
+
+read closed
+
+write zilla:begin.ext ${http:beginEx()
+ .typeId(zilla:id("http"))
+ .header(":status", "200")
+ .header("content-type", "application/grpc")
+ .header("grpc-encoding", "identity")
+ .build()}
+write flush
+
+write ${grpc:message()
+ .string(1, "Hello World")
+ .build()}
+write flush
+
+write zilla:end.ext ${http:endEx()
+ .typeId(zilla:id("http"))
+ .build()}
+write close
diff --git a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.with.grpc.error/client.rpt b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.with.grpc.error/client.rpt
new file mode 100644
index 0000000000..ffc2ab5410
--- /dev/null
+++ b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.with.grpc.error/client.rpt
@@ -0,0 +1,47 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/net0"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+write zilla:begin.ext ${http:beginEx()
+ .typeId(zilla:id("http"))
+ .header(":method", "POST")
+ .header(":scheme", "http")
+ .header(":authority", "localhost:8080")
+ .header(":path", "/example.EchoService/EchoUnary")
+ .header("content-type", "application/grpc")
+ .header("te", "trailers")
+ .header("custom", "test")
+ .build()}
+connected
+
+write ${grpc:message()
+ .string(1, "Hello World")
+ .build()}
+write flush
+
+write close
+
+read zilla:begin.ext ${http:matchBeginEx()
+ .typeId(zilla:id("http"))
+ .header(":status", "200")
+ .header("content-type", "application/grpc")
+ .header("grpc-encoding", "identity")
+ .header("grpc-status", "9")
+ .build()}
+
+read closed
diff --git a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.with.grpc.error/server.rpt b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.with.grpc.error/server.rpt
new file mode 100644
index 0000000000..85625a5ced
--- /dev/null
+++ b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/response.with.grpc.error/server.rpt
@@ -0,0 +1,48 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/net0"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+accepted
+
+read zilla:begin.ext ${http:matchBeginEx()
+ .typeId(zilla:id("http"))
+ .header(":method", "POST")
+ .header(":scheme", "http")
+ .header(":authority", "localhost:8080")
+ .header(":path", "/example.EchoService/EchoUnary")
+ .header("content-type", "application/grpc")
+ .header("te", "trailers")
+ .header("custom", "test")
+ .build()}
+connected
+
+read ${grpc:message()
+ .string(1, "Hello World")
+ .build()}
+
+read closed
+
+write zilla:begin.ext ${http:beginEx()
+ .typeId(zilla:id("http"))
+ .header(":status", "200")
+ .header("content-type", "application/grpc")
+ .header("grpc-encoding", "identity")
+ .header("grpc-status", "9")
+ .build()}
+write flush
+
+write close
diff --git a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/server.send.write.abort.on.open.response/client.rpt b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/server.send.write.abort.on.open.response/client.rpt
index 70f18e1c69..d5af75fc51 100644
--- a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/server.send.write.abort.on.open.response/client.rpt
+++ b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/server.send.write.abort.on.open.response/client.rpt
@@ -45,7 +45,7 @@ read zilla:begin.ext ${http:matchBeginEx()
read zilla:end.ext ${http:endEx()
.typeId(zilla:id("http"))
- .trailer("grpc-status", "10")
+ .trailer("grpc-status", "9")
.build()}
read closed
diff --git a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/server.send.write.abort.on.open.response/server.rpt b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/server.send.write.abort.on.open.response/server.rpt
index 4a815657f8..f60d03ad9d 100644
--- a/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/server.send.write.abort.on.open.response/server.rpt
+++ b/specs/binding-grpc.spec/src/main/scripts/io/aklivity/zilla/specs/binding/grpc/streams/network/unary.rpc/server.send.write.abort.on.open.response/server.rpt
@@ -46,6 +46,6 @@ write flush
write zilla:end.ext ${http:endEx()
.typeId(zilla:id("http"))
- .trailer("grpc-status", "10")
+ .trailer("grpc-status", "9")
.build()}
write close
diff --git a/specs/binding-grpc.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/streams/application/UnaryRpcIT.java b/specs/binding-grpc.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/streams/application/UnaryRpcIT.java
index 95dcb86929..5621868579 100644
--- a/specs/binding-grpc.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/streams/application/UnaryRpcIT.java
+++ b/specs/binding-grpc.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/streams/application/UnaryRpcIT.java
@@ -94,4 +94,14 @@ public void serverSendsWriteAbortOnOpenRequestResponse() throws Exception
{
k3po.finish();
}
+
+ @Test
+ @Specification({
+ "${app}/response.missing.grpc.status/client",
+ "${app}/response.missing.grpc.status/server",
+ })
+ public void shouldAbortResponseMissingGrpcStatus() throws Exception
+ {
+ k3po.finish();
+ }
}
diff --git a/specs/binding-grpc.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/streams/network/UnaryRpcIT.java b/specs/binding-grpc.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/streams/network/UnaryRpcIT.java
index 2d0cc03267..8bc47c27ad 100644
--- a/specs/binding-grpc.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/streams/network/UnaryRpcIT.java
+++ b/specs/binding-grpc.spec/src/test/java/io/aklivity/zilla/specs/binding/grpc/streams/network/UnaryRpcIT.java
@@ -86,6 +86,26 @@ public void shouldTimeoutOnNoResponse() throws Exception
k3po.finish();
}
+ @Test
+ @Specification({
+ "${net}/response.with.grpc.error/client",
+ "${net}/response.with.grpc.error/server",
+ })
+ public void shouldAbortResponseOnGrpcError() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Specification({
+ "${net}/response.missing.grpc.status/client",
+ "${net}/response.missing.grpc.status/server",
+ })
+ public void shouldAbortResponseMissingGrpcStatus() throws Exception
+ {
+ k3po.finish();
+ }
+
@Test
@Specification({
"${net}/server.send.read.abort.on.open.request/client",
diff --git a/specs/binding-http-filesystem.spec/pom.xml b/specs/binding-http-filesystem.spec/pom.xml
index d74b4866bf..4a65d066b0 100644
--- a/specs/binding-http-filesystem.spec/pom.xml
+++ b/specs/binding-http-filesystem.spec/pom.xml
@@ -8,7 +8,7 @@
         <groupId>io.aklivity.zilla</groupId>
         <artifactId>specs</artifactId>
-        <version>0.9.55</version>
+        <version>0.9.56</version>
         <relativePath>../pom.xml</relativePath>
diff --git a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.modified/client.rpt b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.modified/client.rpt
index ca3a31900c..b679a74afc 100644
--- a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.modified/client.rpt
+++ b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.modified/client.rpt
@@ -36,7 +36,7 @@ read zilla:begin.ext ${http:beginEx()
.header(":status", "200")
.header("content-type", "text/html")
.header("content-length", "77")
- .header("Etag", "BBBBBBBBBBBBBBBB")
+ .header("etag", "BBBBBBBBBBBBBBBB")
.build()}
read "\n"
"Welcome\n"
diff --git a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.modified/server.rpt b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.modified/server.rpt
index d1b0fda750..3db6319c3b 100644
--- a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.modified/server.rpt
+++ b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.modified/server.rpt
@@ -37,7 +37,7 @@ write zilla:begin.ext ${http:beginEx()
.header(":status", "200")
.header("content-type", "text/html")
.header("content-length", "77")
- .header("Etag", "BBBBBBBBBBBBBBBB")
+ .header("etag", "BBBBBBBBBBBBBBBB")
.build()}
write "\n"
"Welcome\n"
diff --git a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.not.modified/client.rpt b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.not.modified/client.rpt
index e14c1cc55e..36496f4c85 100644
--- a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.not.modified/client.rpt
+++ b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.not.modified/client.rpt
@@ -36,7 +36,7 @@ read zilla:begin.ext ${http:beginEx()
.header(":status", "304")
.header("content-type", "text/html")
.header("content-length", "0")
- .header("Etag", "AAAAAAAAAAAAAAAA")
+ .header("etag", "AAAAAAAAAAAAAAAA")
.build()}
read closed
diff --git a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.not.modified/server.rpt b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.not.modified/server.rpt
index f4e940663e..bea8f126d1 100644
--- a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.not.modified/server.rpt
+++ b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.map.not.modified/server.rpt
@@ -37,6 +37,6 @@ write zilla:begin.ext ${http:beginEx()
.header(":status", "304")
.header("content-type", "text/html")
.header("content-length", "0")
- .header("Etag", "AAAAAAAAAAAAAAAA")
+ .header("etag", "AAAAAAAAAAAAAAAA")
.build()}
write close
diff --git a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.with.query/client.rpt b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.with.query/client.rpt
index bb55ba738a..b7bca32c41 100644
--- a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.with.query/client.rpt
+++ b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.with.query/client.rpt
@@ -34,7 +34,7 @@ read zilla:begin.ext ${http:beginEx()
.header(":status", "200")
.header("content-type", "text/html")
.header("content-length", "77")
- .header("Etag", "c7183509522eb56e5cf927a3b2e8c15a")
+ .header("etag", "c7183509522eb56e5cf927a3b2e8c15a")
.build()}
read "\n"
"Welcome\n"
diff --git a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.with.query/server.rpt b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.with.query/server.rpt
index 2af257ee6b..a1c23860a1 100644
--- a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.with.query/server.rpt
+++ b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file.with.query/server.rpt
@@ -35,7 +35,7 @@ write zilla:begin.ext ${http:beginEx()
.header(":status", "200")
.header("content-type", "text/html")
.header("content-length", "77")
- .header("Etag", "c7183509522eb56e5cf927a3b2e8c15a")
+ .header("etag", "c7183509522eb56e5cf927a3b2e8c15a")
.build()}
write "\n"
"Welcome\n"
diff --git a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file/client.rpt b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file/client.rpt
index 03dc8ab7e5..bea66acbf3 100644
--- a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file/client.rpt
+++ b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file/client.rpt
@@ -34,7 +34,7 @@ read zilla:begin.ext ${http:beginEx()
.header(":status", "200")
.header("content-type", "text/html")
.header("content-length", "77")
- .header("Etag", "c7183509522eb56e5cf927a3b2e8c15a")
+ .header("etag", "c7183509522eb56e5cf927a3b2e8c15a")
.build()}
read "\n"
"Welcome\n"
diff --git a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file/server.rpt b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file/server.rpt
index 3bee9470c0..5ab9299de5 100644
--- a/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file/server.rpt
+++ b/specs/binding-http-filesystem.spec/src/main/scripts/io/aklivity/zilla/specs/binding/http/filesystem/streams/http/client.read.file/server.rpt
@@ -35,7 +35,7 @@ write zilla:begin.ext ${http:beginEx()
.header(":status", "200")
.header("content-type", "text/html")
.header("content-length", "77")
- .header("Etag", "c7183509522eb56e5cf927a3b2e8c15a")
+ .header("etag", "c7183509522eb56e5cf927a3b2e8c15a")
.build()}
write "\n"
"Welcome\n"
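Context for the `Etag` → `etag` renames above: HTTP/2 (RFC 9113, Section 8.2) requires field names to be lowercase prior to encoding, which is the bug tracked as #551 in the changelog. A minimal sketch of that normalization step (hypothetical helper, not Zilla's actual HPACK encoder):

```java
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

public class Http2HeaderNames
{
    // HTTP/2 field names MUST be converted to lowercase prior to encoding
    // (RFC 9113 §8.2); mixed-case names make the message malformed.
    public static Map<String, String> normalize(
        Map<String, String> headers)
    {
        Map<String, String> normalized = new LinkedHashMap<>();
        headers.forEach((name, value) ->
            normalized.put(name.toLowerCase(Locale.ROOT), value));
        return normalized;
    }

    public static void main(String[] args)
    {
        Map<String, String> normalized =
            normalize(Map.of("Etag", "BBBBBBBBBBBBBBBB"));
        assert normalized.containsKey("etag");
    }
}
```

Note the scripts update both the write (server) and read (client) sides, so the expectation matches the lowercased wire form.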
diff --git a/specs/binding-http-kafka.spec/pom.xml b/specs/binding-http-kafka.spec/pom.xml
index 8ec8b0a26e..3db65e0514 100644
--- a/specs/binding-http-kafka.spec/pom.xml
+++ b/specs/binding-http-kafka.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-http.spec/pom.xml b/specs/binding-http.spec/pom.xml
index 8db973bfee..4e042fc500 100644
--- a/specs/binding-http.spec/pom.xml
+++ b/specs/binding-http.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-kafka-grpc.spec/pom.xml b/specs/binding-kafka-grpc.spec/pom.xml
index 8eb160468b..467dac42a7 100644
--- a/specs/binding-kafka-grpc.spec/pom.xml
+++ b/specs/binding-kafka-grpc.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-kafka.spec/pom.xml b/specs/binding-kafka.spec/pom.xml
index 763e238114..0a65aa0b50 100644
--- a/specs/binding-kafka.spec/pom.xml
+++ b/specs/binding-kafka.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-kafka.spec/src/main/java/io/aklivity/zilla/specs/binding/kafka/internal/KafkaFunctions.java b/specs/binding-kafka.spec/src/main/java/io/aklivity/zilla/specs/binding/kafka/internal/KafkaFunctions.java
index ef3aa8660d..df04add264 100644
--- a/specs/binding-kafka.spec/src/main/java/io/aklivity/zilla/specs/binding/kafka/internal/KafkaFunctions.java
+++ b/specs/binding-kafka.spec/src/main/java/io/aklivity/zilla/specs/binding/kafka/internal/KafkaFunctions.java
@@ -996,6 +996,27 @@ public KafkaBootstrapBeginExBuilder topic(
return this;
}
+ public KafkaBootstrapBeginExBuilder groupId(
+ String groupId)
+ {
+ bootstrapBeginExRW.groupId(groupId);
+ return this;
+ }
+
+ public KafkaBootstrapBeginExBuilder consumerId(
+ String consumerId)
+ {
+ bootstrapBeginExRW.consumerId(consumerId);
+ return this;
+ }
+
+ public KafkaBootstrapBeginExBuilder timeout(
+ int timeout)
+ {
+ bootstrapBeginExRW.timeout(timeout);
+ return this;
+ }
+
public KafkaBeginExBuilder build()
{
final KafkaBootstrapBeginExFW bootstrapBeginEx = bootstrapBeginExRW.build();
@@ -4103,6 +4124,15 @@ public static final class KafkaBeginExMatcherBuilder
private Integer kind;
private Predicate<KafkaBeginExFW> caseMatcher;
+ public KafkaBootstrapBeginExMatcherBuilder bootstrap()
+ {
+ final KafkaBootstrapBeginExMatcherBuilder matcherBuilder = new KafkaBootstrapBeginExMatcherBuilder();
+
+ this.kind = KafkaApi.BOOTSTRAP.value();
+ this.caseMatcher = matcherBuilder::match;
+ return matcherBuilder;
+ }
+
public KafkaMergedBeginExMatcherBuilder merged()
{
final KafkaMergedBeginExMatcherBuilder matcherBuilder = new KafkaMergedBeginExMatcherBuilder();
@@ -4540,6 +4570,70 @@ private boolean matchMetadata(
}
}
+ public final class KafkaBootstrapBeginExMatcherBuilder
+ {
+ private String16FW topic;
+ private String16FW groupId;
+ private String16FW consumerId;
+
+ private KafkaBootstrapBeginExMatcherBuilder()
+ {
+ }
+
+ public KafkaBootstrapBeginExMatcherBuilder topic(
+ String topic)
+ {
+ this.topic = new String16FW(topic);
+ return this;
+ }
+
+ public KafkaBootstrapBeginExMatcherBuilder groupId(
+ String groupId)
+ {
+ this.groupId = new String16FW(groupId);
+ return this;
+ }
+
+ public KafkaBootstrapBeginExMatcherBuilder consumerId(
+ String consumerId)
+ {
+ this.consumerId = new String16FW(consumerId);
+ return this;
+ }
+
+ public KafkaBeginExMatcherBuilder build()
+ {
+ return KafkaBeginExMatcherBuilder.this;
+ }
+
+ private boolean match(
+ KafkaBeginExFW beginEx)
+ {
+ final KafkaBootstrapBeginExFW bootstrapBeginEx = beginEx.bootstrap();
+ return matchTopic(bootstrapBeginEx) &&
+ matchGroupId(bootstrapBeginEx) &&
+ matchConsumerId(bootstrapBeginEx);
+ }
+
+ private boolean matchTopic(
+ final KafkaBootstrapBeginExFW bootstrapBeginEx)
+ {
+ return topic == null || topic.equals(bootstrapBeginEx.topic());
+ }
+
+ private boolean matchGroupId(
+ final KafkaBootstrapBeginExFW bootstrapBeginEx)
+ {
+ return groupId == null || groupId.equals(bootstrapBeginEx.groupId());
+ }
+
+ private boolean matchConsumerId(
+ final KafkaBootstrapBeginExFW bootstrapBeginEx)
+ {
+ return consumerId == null || consumerId.equals(bootstrapBeginEx.consumerId());
+ }
+ }
+
public final class KafkaMergedBeginExMatcherBuilder
{
private KafkaCapabilities capabilities;
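The new `KafkaBootstrapBeginExMatcherBuilder` follows the convention of the surrounding matchers: a field left unset acts as a wildcard, while a set field must compare equal. That convention in isolation (standalone sketch, not Zilla code):

```java
import java.util.Objects;
import java.util.function.Predicate;

public class NullSkippingMatcher
{
    // Mirrors the matcher-builder convention: an unset (null) expectation
    // matches any actual value; a set one must be equal.
    public static Predicate<String> expecting(
        String expected)
    {
        return actual -> expected == null || Objects.equals(expected, actual);
    }

    public static void main(String[] args)
    {
        assert expecting(null).test("anything");       // unset field: always matches
        assert expecting("client-1").test("client-1");
        assert !expecting("client-1").test("client-2");
    }
}
```

This is why a script matcher that sets only `topic` will match a bootstrap begin-ex carrying any `groupId` or `consumerId`.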
diff --git a/specs/binding-kafka.spec/src/main/resources/META-INF/zilla/kafka.idl b/specs/binding-kafka.spec/src/main/resources/META-INF/zilla/kafka.idl
index fb0078d524..e0c1808aa3 100644
--- a/specs/binding-kafka.spec/src/main/resources/META-INF/zilla/kafka.idl
+++ b/specs/binding-kafka.spec/src/main/resources/META-INF/zilla/kafka.idl
@@ -221,6 +221,9 @@ scope kafka
struct KafkaBootstrapBeginEx
{
string16 topic;
+ string16 groupId = null;
+ string16 consumerId = null;
+ int32 timeout = 0;
}
struct KafkaMergedBeginEx
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/group.fetch.message.value/client.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/group.fetch.message.value/client.rpt
new file mode 100644
index 0000000000..f2e0afbda3
--- /dev/null
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/group.fetch.message.value/client.rpt
@@ -0,0 +1,32 @@
+#
+# Copyright 2021-2023 Aklivity Inc.
+#
+# Aklivity licenses this file to you under the Apache License,
+# version 2.0 (the "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at:
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+
+connect "zilla://streams/app0"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("test")
+ .groupId("client-1")
+ .consumerId("consumer-1")
+ .timeout(45000)
+ .build()
+ .build()}
+
+connected
+
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/group.fetch.message.value/server.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/group.fetch.message.value/server.rpt
new file mode 100644
index 0000000000..505be66266
--- /dev/null
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/group.fetch.message.value/server.rpt
@@ -0,0 +1,36 @@
+#
+# Copyright 2021-2023 Aklivity Inc.
+#
+# Aklivity licenses this file to you under the Apache License,
+# version 2.0 (the "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at:
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+
+property deltaMillis 0L
+property newTimestamp ${kafka:timestamp() + deltaMillis}
+
+accept "zilla://streams/app0"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+accepted
+
+read zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("test")
+ .groupId("client-1")
+ .consumerId("consumer-1")
+ .timeout(45000)
+ .build()
+ .build()}
+
+connected
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/unmerged.group.fetch.message.value/client.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/unmerged.group.fetch.message.value/client.rpt
new file mode 100644
index 0000000000..05d9edd9da
--- /dev/null
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/unmerged.group.fetch.message.value/client.rpt
@@ -0,0 +1,217 @@
+#
+# Copyright 2021-2023 Aklivity Inc.
+#
+# Aklivity licenses this file to you under the Apache License,
+# version 2.0 (the "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at:
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+
+connect "zilla://streams/app1"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .describe()
+ .topic("test")
+ .config("cleanup.policy")
+ .config("max.message.bytes")
+ .config("segment.bytes")
+ .config("segment.index.bytes")
+ .config("segment.ms")
+ .config("retention.bytes")
+ .config("retention.ms")
+ .config("delete.retention.ms")
+ .config("min.compaction.lag.ms")
+ .config("max.compaction.lag.ms")
+ .config("min.cleanable.dirty.ratio")
+ .build()
+ .build()}
+
+connected
+
+read zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .describe()
+ .topic("test")
+ .config("cleanup.policy")
+ .config("max.message.bytes")
+ .config("segment.bytes")
+ .config("segment.index.bytes")
+ .config("segment.ms")
+ .config("retention.bytes")
+ .config("retention.ms")
+ .config("delete.retention.ms")
+ .config("min.compaction.lag.ms")
+ .config("max.compaction.lag.ms")
+ .config("min.cleanable.dirty.ratio")
+ .build()
+ .build()}
+
+read zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .describe()
+ .config("cleanup.policy", "delete")
+ .config("max.message.bytes", 1000012)
+ .config("segment.bytes", 1073741824)
+ .config("segment.index.bytes", 10485760)
+ .config("segment.ms", 604800000)
+ .config("retention.bytes", -1)
+ .config("retention.ms", 604800000)
+ .config("delete.retention.ms", 86400000)
+ .config("min.compaction.lag.ms", 0)
+ .config("max.compaction.lag.ms", 9223372036854775807)
+ .config("min.cleanable.dirty.ratio", 0.5)
+ .build()
+ .build()}
+
+read notify RECEIVED_CONFIG
+
+connect await RECEIVED_CONFIG
+ "zilla://streams/app1"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .meta()
+ .topic("test")
+ .build()
+ .build()}
+
+connected
+
+read zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .meta()
+ .topic("test")
+ .build()
+ .build()}
+
+read zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .meta()
+ .partition(0, 1)
+ .partition(1, 2)
+ .build()
+ .build()}
+
+
+read notify PARTITION_COUNT_2
+
+connect await PARTITION_COUNT_2
+ "zilla://streams/app1"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .group()
+ .groupId("client-1")
+ .protocol("highlander")
+ .timeout(45000)
+ .metadata(kafka:memberMetadata()
+ .consumerId("consumer-1")
+ .topic("test")
+ .partitionId(0)
+ .partitionId(1)
+ .build()
+ .build())
+ .build()
+ .build()}
+
+connected
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .group()
+ .groupId("client-1")
+ .protocol("highlander")
+ .timeout(30000)
+ .build()
+ .build()}
+
+read advised zilla:flush ${kafka:flushEx()
+ .typeId(zilla:id("kafka"))
+ .group()
+ .leaderId("memberId-1")
+ .memberId("memberId-1")
+ .members("memberId-1", kafka:memberMetadata()
+ .consumerId("consumer-1")
+ .topic("test")
+ .partitionId(0)
+ .partitionId(1)
+ .build()
+ .build())
+ .build()
+ .build()}
+
+write ${kafka:memberAssignment()
+ .member("memberId-1")
+ .assignment()
+ .topic("test")
+ .partitionId(0)
+ .partitionId(1)
+ .consumer()
+ .id("consumer-1")
+ .partitionId(0)
+ .partitionId(1)
+ .build()
+ .build()
+ .build()
+ .build()}
+write flush
+
+read ${kafka:topicAssignment()
+ .topic()
+ .id("test")
+ .partitionId(0)
+ .consumer()
+ .id("consumer-1")
+ .partitionId(0)
+ .build()
+ .build()
+ .build()}
+
+read notify RECEIVED_CONSUMER
+
+connect await RECEIVED_CONSUMER
+ "zilla://streams/app1"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .fetch()
+ .topic("test")
+ .partition(0, -2)
+ .build()
+ .build()}
+
+connected
+
+read zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .fetch()
+ .topic("test")
+ .partition(0, 1, 2)
+ .build()
+ .build()}
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .fetch()
+ .partition(0, 1, 2)
+ .build()
+ .build()}
+read "Hello, world #A1"
+
+
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/unmerged.group.fetch.message.value/server.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/unmerged.group.fetch.message.value/server.rpt
new file mode 100644
index 0000000000..180178cfba
--- /dev/null
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/bootstrap/unmerged.group.fetch.message.value/server.rpt
@@ -0,0 +1,214 @@
+#
+# Copyright 2021-2023 Aklivity Inc.
+#
+# Aklivity licenses this file to you under the Apache License,
+# version 2.0 (the "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at:
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+
+property deltaMillis 0L
+property newTimestamp ${kafka:timestamp() + deltaMillis}
+
+accept "zilla://streams/app1"
+ option zilla:window 8192
+ option zilla:transmission "half-duplex"
+
+accepted
+
+read zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .describe()
+ .topic("test")
+ .config("cleanup.policy")
+ .config("max.message.bytes")
+ .config("segment.bytes")
+ .config("segment.index.bytes")
+ .config("segment.ms")
+ .config("retention.bytes")
+ .config("retention.ms")
+ .config("delete.retention.ms")
+ .config("min.compaction.lag.ms")
+ .config("max.compaction.lag.ms")
+ .config("min.cleanable.dirty.ratio")
+ .build()
+ .build()}
+
+connected
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .describe()
+ .topic("test")
+ .config("cleanup.policy")
+ .config("max.message.bytes")
+ .config("segment.bytes")
+ .config("segment.index.bytes")
+ .config("segment.ms")
+ .config("retention.bytes")
+ .config("retention.ms")
+ .config("delete.retention.ms")
+ .config("min.compaction.lag.ms")
+ .config("max.compaction.lag.ms")
+ .config("min.cleanable.dirty.ratio")
+ .build()
+ .build()}
+write flush
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .describe()
+ .config("cleanup.policy", "delete")
+ .config("max.message.bytes", 1000012)
+ .config("segment.bytes", 1073741824)
+ .config("segment.index.bytes", 10485760)
+ .config("segment.ms", 604800000)
+ .config("retention.bytes", -1)
+ .config("retention.ms", 604800000)
+ .config("delete.retention.ms", 86400000)
+ .config("min.compaction.lag.ms", 0)
+ .config("max.compaction.lag.ms", 9223372036854775807)
+ .config("min.cleanable.dirty.ratio", 0.5)
+ .build()
+ .build()}
+write flush
+
+accepted
+
+read zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .meta()
+ .topic("test")
+ .build()
+ .build()}
+
+connected
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .meta()
+ .topic("test")
+ .build()
+ .build()}
+write flush
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .meta()
+ .partition(0, 1)
+ .partition(1, 2)
+ .build()
+ .build()}
+write flush
+
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .group()
+ .groupId("client-1")
+ .protocol("highlander")
+ .timeout(45000)
+ .metadata(kafka:memberMetadata()
+ .consumerId("consumer-1")
+ .topic("test")
+ .partitionId(0)
+ .partitionId(1)
+ .build()
+ .build())
+ .build()
+ .build()}
+
+connected
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .group()
+ .groupId("client-1")
+ .protocol("highlander")
+ .timeout(30000)
+ .build()
+ .build()}
+write flush
+
+write advise zilla:flush ${kafka:flushEx()
+ .typeId(zilla:id("kafka"))
+ .group()
+ .leaderId("memberId-1")
+ .memberId("memberId-1")
+ .members("memberId-1", kafka:memberMetadata()
+ .consumerId("consumer-1")
+ .topic("test")
+ .partitionId(0)
+ .partitionId(1)
+ .build()
+ .build())
+ .build()
+ .build()}
+
+read ${kafka:memberAssignment()
+ .member("memberId-1")
+ .assignment()
+ .topic("test")
+ .partitionId(0)
+ .partitionId(1)
+ .consumer()
+ .id("consumer-1")
+ .partitionId(0)
+ .partitionId(1)
+ .build()
+ .build()
+ .build()
+ .build()}
+
+write ${kafka:topicAssignment()
+ .topic()
+ .id("test")
+ .partitionId(0)
+ .consumer()
+ .id("consumer-1")
+ .partitionId(0)
+ .build()
+ .build()
+ .build()}
+write flush
+
+accepted
+
+read zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .fetch()
+ .topic("test")
+ .partition(0, -2)
+ .build()
+ .build()}
+
+connected
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .fetch()
+ .topic("test")
+ .partition(0, 1, 2)
+ .build()
+ .build()}
+write flush
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .fetch()
+ .timestamp(newTimestamp)
+ .partition(0, 1, 2)
+ .build()
+ .build()}
+write "Hello, world #A1"
+write flush
+
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.committed.aborted/client.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.committed.aborted/client.rpt
index 54250942d5..38f9f2218c 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.committed.aborted/client.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.committed.aborted/client.rpt
@@ -69,6 +69,8 @@ read zilla:begin.ext ${kafka:beginEx()
.build()
.build()}
+read notify RECEIVED_SKIPPED_MESSAGE
+
read advised zilla:flush ${kafka:matchFlushEx()
.typeId(zilla:id("kafka"))
.fetch()
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.committed.aborted/server.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.committed.aborted/server.rpt
index d8f744f9fd..3aa8e19482 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.committed.aborted/server.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.committed.aborted/server.rpt
@@ -73,6 +73,8 @@ write zilla:begin.ext ${kafka:beginEx()
.build()}
write flush
+write await RECEIVED_SKIPPED_MESSAGE
+
write advise zilla:flush ${kafka:flushEx()
.typeId(zilla:id("kafka"))
.fetch()
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.uncommitted.aborted/client.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.uncommitted.aborted/client.rpt
index f6966136dc..5f9630fb42 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.uncommitted.aborted/client.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.uncommitted.aborted/client.rpt
@@ -90,6 +90,8 @@ read zilla:data.ext ${kafka:matchDataEx()
.build()}
read "Hello, world"
+read notify RECEIVED_SKIPPED_MESSAGE
+
read advised zilla:flush ${kafka:flushEx()
.typeId(zilla:id("kafka"))
.fetch()
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.uncommitted.aborted/server.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.uncommitted.aborted/server.rpt
index 906730b372..d70b145538 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.uncommitted.aborted/server.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/fetch/isolation.read.uncommitted.aborted/server.rpt
@@ -98,6 +98,8 @@ write zilla:data.ext ${kafka:dataEx()
write "Hello, world"
write flush
+write await RECEIVED_SKIPPED_MESSAGE
+
write advise zilla:flush ${kafka:flushEx()
.typeId(zilla:id("kafka"))
.fetch()
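The `read notify RECEIVED_SKIPPED_MESSAGE` / `write await RECEIVED_SKIPPED_MESSAGE` pairs added above are k3po barriers: the server holds its flush advisory until the client signals it has consumed the preceding frames, removing a race between the two scripts. The semantics are roughly those of a one-shot latch (analogy only, not k3po's implementation):

```java
import java.util.concurrent.CountDownLatch;

public class BarrierSketch
{
    // One-shot barrier: the server-side await blocks until the
    // client-side notify fires, then the server may proceed.
    public static boolean demonstrate()
    {
        CountDownLatch receivedSkippedMessage = new CountDownLatch(1);

        Thread client = new Thread(() ->
        {
            // ... client reads the preceding data/flush frames ...
            receivedSkippedMessage.countDown(); // read notify RECEIVED_SKIPPED_MESSAGE
        });
        client.start();

        try
        {
            receivedSkippedMessage.await();     // write await RECEIVED_SKIPPED_MESSAGE
            client.join();
            return true;                        // server now advises zilla:flush
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args)
    {
        assert demonstrate();
    }
}
```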
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.100k/client.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.100k/client.rpt
index b21502d3b5..cd5237ac4a 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.100k/client.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.100k/client.rpt
@@ -73,7 +73,7 @@ read zilla:begin.ext ${kafka:beginEx()
write zilla:data.ext ${kafka:dataEx()
.typeId(zilla:id("kafka"))
.produce()
- .deferred(102400 - 8192 + 512 + 100)
+ .deferred(102400 - 8192 + 512 + 512)
.timestamp(newTimestamp)
.build()
.build()}
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.100k/server.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.100k/server.rpt
index cf93c76e87..6cd3a8d747 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.100k/server.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.100k/server.rpt
@@ -18,7 +18,7 @@ property serverAddress "zilla://streams/app0"
accept ${serverAddress}
option zilla:window 8192
- option zilla:padding 612
+ option zilla:padding 1024
option zilla:transmission "half-duplex"
accepted
@@ -71,7 +71,7 @@ write zilla:begin.ext ${kafka:beginEx()
read zilla:data.ext ${kafka:matchDataEx()
.typeId(zilla:id("kafka"))
.produce()
- .deferred(102400 - 8192 + 512 + 100)
+ .deferred(102400 - 8192 + 512 + 512)
.build()
.build()}
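The `deferred` values in these produce scripts follow a consistent pattern, deferred = payload size − initial window + padding, with the padding component raised from 512 + 100 to 512 + 512 in step with the `zilla:padding` option moving from 612 to 1024. The arithmetic (inferred from the script values, not from Zilla internals):

```java
public class DeferredBytes
{
    // deferred = bytes that do not fit in the first data frame:
    // total payload minus the initial window, plus padding overhead.
    public static int deferred(
        int payloadSize,
        int window,
        int padding)
    {
        return payloadSize - window + padding;
    }

    public static void main(String[] args)
    {
        assert deferred(102400, 8192, 512 + 512) == 95232; // 100k message value
        assert deferred(10240, 8192, 512 + 512) == 3072;   // 10k message value
    }
}
```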
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.10k/client.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.10k/client.rpt
index dcd8f9d0de..b054e49965 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.10k/client.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.10k/client.rpt
@@ -73,7 +73,7 @@ read zilla:begin.ext ${kafka:beginEx()
write zilla:data.ext ${kafka:dataEx()
.typeId(zilla:id("kafka"))
.produce()
- .deferred(10240 - 8192 + 512 + 100)
+ .deferred(10240 - 8192 + 512 + 512)
.timestamp(newTimestamp)
.build()
.build()}
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.10k/server.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.10k/server.rpt
index c248b9e910..6b3eaeb77b 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.10k/server.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.value.10k/server.rpt
@@ -18,7 +18,7 @@ property serverAddress "zilla://streams/app0"
accept ${serverAddress}
option zilla:window 8192
- option zilla:padding 612
+ option zilla:padding 1024
option zilla:transmission "half-duplex"
accepted
@@ -71,7 +71,7 @@ write zilla:begin.ext ${kafka:beginEx()
read zilla:data.ext ${kafka:matchDataEx()
.typeId(zilla:id("kafka"))
.produce()
- .deferred(10240 - 8192 + 512 + 100)
+ .deferred(10240 - 8192 + 512 + 512)
.build()
.build()}
read zilla:data.ext ${kafka:matchDataEx()
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.values.sequential/client.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.values.sequential/client.rpt
index 8ddf6911ff..5b8050c2df 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.values.sequential/client.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.values.sequential/client.rpt
@@ -81,7 +81,7 @@ write zilla:data.ext ${kafka:dataEx()
.produce()
.build()
.build()}
-write ${kafka:randomBytes(7580)}
+write ${kafka:randomBytes(8192-(512+512))}
write flush
write zilla:data.ext ${kafka:dataEx()
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.values.sequential/server.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.values.sequential/server.rpt
index b9d3f4903c..5a2d505c1c 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.values.sequential/server.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/application/produce/message.values.sequential/server.rpt
@@ -79,7 +79,7 @@ read zilla:data.ext ${kafka:matchDataEx()
.produce()
.build()
.build()}
-read [0..7580]
+read [0..7168]
read zilla:data.ext ${kafka:matchDataEx()
.typeId(zilla:id("kafka"))
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/network/produce.v3/message.values.sequential/client.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/network/produce.v3/message.values.sequential/client.rpt
index 6e967c671d..9b48621d87 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/network/produce.v3/message.values.sequential/client.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/network/produce.v3/message.values.sequential/client.rpt
@@ -78,7 +78,7 @@ write zilla:begin.ext ${proxy:beginEx()
connected
-write 7690
+write 7278
0s
3s
${newRequestId}
@@ -90,9 +90,9 @@ write 7690
4s "test"
1
0
- 7650 # record set size
+ 7238 # record set size
0L # first offset
- 7638 # length
+ 7226 # length
-1
[0x02]
0x4e8723aa
@@ -104,13 +104,13 @@ write 7690
-1s
-1
1 # records
- ${kafka:varint(7587)}
+ ${kafka:varint(7175)}
[0x00]
${kafka:varint(0)}
${kafka:varint(0)}
${kafka:varint(-1)} # key
- ${kafka:varint(7580)} # value
- ${kafka:randomBytes(7580)}
+ ${kafka:varint(7168)} # value
+ ${kafka:randomBytes(7168)}
${kafka:varint(0)} # headers
read 44
diff --git a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/network/produce.v3/message.values.sequential/server.rpt b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/network/produce.v3/message.values.sequential/server.rpt
index 0c2dae01c2..a6aaebb5a4 100644
--- a/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/network/produce.v3/message.values.sequential/server.rpt
+++ b/specs/binding-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/kafka/streams/network/produce.v3/message.values.sequential/server.rpt
@@ -74,7 +74,7 @@ read zilla:begin.ext ${proxy:beginEx()
connected
-read 7690
+read 7278
0s
3s
(int:requestId)
@@ -86,9 +86,9 @@ read 7690
4s "test"
1
0
- 7650 # record set size
+ 7238 # record set size
0L # first offset
- 7638 # length
+ 7226 # length
-1
[0x02]
[0..4]
@@ -100,13 +100,13 @@ read 7690
-1s
-1
1 # records
- ${kafka:varint(7587)}
+ ${kafka:varint(7175)}
[0x00]
${kafka:varint(0)}
${kafka:varint(0)}
${kafka:varint(-1)} # key
- ${kafka:varint(7580)} # value
- [0..7580]
+ ${kafka:varint(7168)} # value
+ [0..7168]
${kafka:varint(0)} # headers
write 44
diff --git a/specs/binding-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/kafka/internal/KafkaFunctionsTest.java b/specs/binding-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/kafka/internal/KafkaFunctionsTest.java
index 9486983a4f..685ea47c96 100644
--- a/specs/binding-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/kafka/internal/KafkaFunctionsTest.java
+++ b/specs/binding-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/kafka/internal/KafkaFunctionsTest.java
@@ -182,6 +182,9 @@ public void shouldGenerateBootstrapBeginExtension()
.typeId(0x01)
.bootstrap()
.topic("topic")
+ .groupId("group")
+ .consumerId("consumer")
+ .timeout(1000)
.build()
.build();
@@ -192,6 +195,34 @@ public void shouldGenerateBootstrapBeginExtension()
final KafkaBootstrapBeginExFW bootstrapBeginEx = beginEx.bootstrap();
assertEquals("topic", bootstrapBeginEx.topic().asString());
+ assertEquals("group", bootstrapBeginEx.groupId().asString());
+ assertEquals("consumer", bootstrapBeginEx.consumerId().asString());
+ assertEquals(1000, bootstrapBeginEx.timeout());
+ }
+
+ @Test
+ public void shouldMatchBootstrapBeginExtension() throws Exception
+ {
+ BytesMatcher matcher = KafkaFunctions.matchBeginEx()
+ .bootstrap()
+ .topic("test")
+ .groupId("group")
+ .consumerId("consumer")
+ .build()
+ .build();
+
+ ByteBuffer byteBuf = ByteBuffer.allocate(1024);
+
+ new KafkaBeginExFW.Builder()
+ .wrap(new UnsafeBuffer(byteBuf), 0, byteBuf.capacity())
+ .typeId(0x01)
+ .bootstrap(f -> f
+ .topic("test")
+ .groupId("group")
+ .consumerId("consumer"))
+ .build();
+
+ assertNotNull(matcher.match(byteBuf));
}
@Test
diff --git a/specs/binding-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/kafka/streams/application/BootstrapIT.java b/specs/binding-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/kafka/streams/application/BootstrapIT.java
index 476c8ed828..bf01fda115 100644
--- a/specs/binding-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/kafka/streams/application/BootstrapIT.java
+++ b/specs/binding-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/kafka/streams/application/BootstrapIT.java
@@ -44,4 +44,14 @@ public void shouldRequestBootstrapPartitionOffsetsEarliest() throws Exception
{
k3po.finish();
}
+
+
+ @Test
+ @Specification({
+ "${app}/group.fetch.message.value/client",
+ "${app}/group.fetch.message.value/server"})
+ public void shouldReceiveGroupMessageValue() throws Exception
+ {
+ k3po.finish();
+ }
}
diff --git a/specs/binding-mqtt-kafka.spec/pom.xml b/specs/binding-mqtt-kafka.spec/pom.xml
index 6c60815e30..ab8692eeeb 100644
--- a/specs/binding-mqtt-kafka.spec/pom.xml
+++ b/specs/binding-mqtt-kafka.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/config/proxy.when.client.topic.space.yaml b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/config/proxy.when.client.topic.space.yaml
new file mode 100644
index 0000000000..2e59cf3a9c
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/config/proxy.when.client.topic.space.yaml
@@ -0,0 +1,45 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+---
+name: test
+bindings:
+ mqtt0:
+ type: mqtt-kafka
+ kind: proxy
+ options:
+ topics:
+ sessions: mqtt-sessions
+ messages: mqtt-messages
+ retained: mqtt-retained
+ clients:
+ - /clients/{identity}/#
+ routes:
+ - when:
+ - publish:
+ - topic: /clients/#
+ - subscribe:
+ - topic: /clients/#
+ with:
+ messages: mqtt-clients
+ exit: kafka0
+ - when:
+ - subscribe:
+ - topic: /sensor-clients/#
+ - subscribe:
+ - topic: /device-clients/#
+ with:
+ messages: mqtt-devices
+ exit: kafka0
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/config/proxy.when.capabilities.with.kafka.topic.yaml b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/config/proxy.when.topic.with.messages.yaml
similarity index 62%
rename from specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/config/proxy.when.capabilities.with.kafka.topic.yaml
rename to specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/config/proxy.when.topic.with.messages.yaml
index 8b674c4091..e0fbc9a402 100644
--- a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/config/proxy.when.capabilities.with.kafka.topic.yaml
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/config/proxy.when.topic.with.messages.yaml
@@ -13,25 +13,33 @@
# specific language governing permissions and limitations under the License.
#
-#TODO: this should be the final config
---
name: test
bindings:
- sse0:
+ mqtt0:
type: mqtt-kafka
kind: proxy
options:
topics:
- sessions: mqtt_sessions
- messages: mqtt_messages
+ sessions: mqtt-sessions
+ messages: mqtt-messages
+ retained: mqtt-retained
routes:
- exit: kafka0
when:
- - capabilities: session
+ - subscribe:
+ - topic: sensor/one
+ - publish:
+ - topic: sensor/one
with:
- - topic: mqtt_sessions
+ messages: mqtt-sensors
- exit: kafka0
when:
- - capabilities: publish_and_subscribe
+ - subscribe:
+ - topic: device/#
+ - topic: sensor/two
+ - publish:
+ - topic: device/#
+ - topic: sensor/two
with:
- - topic: mqtt_messages
+ messages: mqtt-devices
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/schema/mqtt.kafka.schema.patch.json b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/schema/mqtt.kafka.schema.patch.json
index e49e47ccde..3bc637ff87 100644
--- a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/schema/mqtt.kafka.schema.patch.json
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/schema/mqtt.kafka.schema.patch.json
@@ -71,6 +71,15 @@
"messages"
],
"additionalProperties": false
+ },
+ "clients":
+ {
+ "title": "Clients",
+ "type": "array",
+ "items":
+ {
+ "type": "string"
+ }
}
},
"required":
@@ -79,15 +88,96 @@
],
"additionalProperties": false
},
- "routes": false
+ "routes":
+ {
+ "items":
+ {
+ "properties":
+ {
+ "when":
+ {
+ "items":
+ {
+ "anyOf":
+ [
+ {
+ "properties":
+ {
+ "subscribe":
+ {
+ "title": "Subscribe",
+ "type": "array",
+ "items":
+ {
+ "topic":
+ {
+ "title": "Topic",
+ "type": "string"
+ }
+ }
+ }
+ }
+ },
+ {
+ "properties":
+ {
+ "publish":
+ {
+              "title": "Publish",
+ "type": "array",
+ "items":
+ {
+ "topic":
+ {
+ "title": "Topic",
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ ]
+ }
+ },
+ "with":
+ {
+ "items":
+ {
+ "properties":
+ {
+ "messages":
+ {
+ "title": "Messages Topic",
+ "type": "string"
+ }
+ },
+ "additionalProperties": false
+ }
+ }
+ },
+ "required":
+ [
+ "with"
+ ]
+ }
+ }
},
+ "required":
+ [
+ "options"
+ ],
"anyOf":
[
{
"required":
[
- "exit",
- "options"
+ "exit"
+ ]
+ },
+ {
+ "required":
+ [
+ "routes"
]
}
]
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.client.topic.space/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.client.topic.space/client.rpt
new file mode 100644
index 0000000000..6bbf950c0b
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.client.topic.space/client.rpt
@@ -0,0 +1,85 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("PRODUCE_ONLY")
+ .topic("mqtt-clients")
+ .partition(-1, -2)
+ .ackMode("LEADER_ONLY")
+ .build()
+ .build()}
+
+connected
+
+write notify PRODUCE_CONNECTED
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("/clients/client-2/sensors/one")
+ .hashKey("client-2")
+ .header("zilla:filter", "")
+ .header("zilla:filter", "clients")
+ .header("zilla:filter", "client-2")
+ .header("zilla:filter", "sensors")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-1")
+ .build()
+ .build()}
+write "Hello, world"
+write flush
+
+
+connect await PRODUCE_CONNECTED
+ "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("PRODUCE_ONLY")
+ .topic("mqtt-clients")
+ .partition(-1, -2)
+ .ackMode("LEADER_ONLY")
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("/clients/client-1/device/one")
+ .hashKey("client-1")
+ .header("zilla:filter", "")
+ .header("zilla:filter", "clients")
+ .header("zilla:filter", "client-1")
+ .header("zilla:filter", "device")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-2")
+ .build()
+ .build()}
+write "Hello, again"
+write flush
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.client.topic.space/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.client.topic.space/server.rpt
new file mode 100644
index 0000000000..6a4fae9bfb
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.client.topic.space/server.rpt
@@ -0,0 +1,83 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("PRODUCE_ONLY")
+ .topic("mqtt-clients")
+ .partition(-1, -2)
+ .ackMode("LEADER_ONLY")
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("/clients/client-2/sensors/one")
+ .hashKey("client-2")
+ .header("zilla:filter", "")
+ .header("zilla:filter", "clients")
+ .header("zilla:filter", "client-2")
+ .header("zilla:filter", "sensors")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-1")
+ .build()
+ .build()}
+read "Hello, world"
+
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("PRODUCE_ONLY")
+ .topic("mqtt-clients")
+ .partition(-1, -2)
+ .ackMode("LEADER_ONLY")
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("/clients/client-1/device/one")
+ .hashKey("client-1")
+ .header("zilla:filter", "")
+ .header("zilla:filter", "clients")
+ .header("zilla:filter", "client-1")
+ .header("zilla:filter", "device")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-2")
+ .build()
+ .build()}
+
+read "Hello, again"
+
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.multiple.clients/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.multiple.clients/client.rpt
index f3fe1188d3..bdbf1dba7c 100644
--- a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.multiple.clients/client.rpt
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.multiple.clients/client.rpt
@@ -29,7 +29,7 @@ write zilla:begin.ext ${kafka:beginEx()
connected
-write notify CLIENT1_CONNECTED
+write notify PRODUCE_CONNECTED
write zilla:data.ext ${kafka:dataEx()
.typeId(zilla:id("kafka"))
.merged()
@@ -80,7 +80,7 @@ write "message3"
write flush
-connect await CLIENT1_CONNECTED
+connect await PRODUCE_CONNECTED
"zilla://streams/kafka0"
option zilla:window 8192
option zilla:transmission "duplex"
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.topic.space/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.topic.space/client.rpt
new file mode 100644
index 0000000000..92ae1cf67a
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.topic.space/client.rpt
@@ -0,0 +1,81 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("PRODUCE_ONLY")
+ .topic("mqtt-sensors")
+ .partition(-1, -2)
+ .ackMode("LEADER_ONLY")
+ .build()
+ .build()}
+
+connected
+
+write notify PRODUCE_CONNECTED
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("sensor/one")
+ .header("zilla:filter", "sensor")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-1")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+
+write "Hello, world"
+write flush
+
+
+connect await PRODUCE_CONNECTED
+ "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("PRODUCE_ONLY")
+ .topic("mqtt-devices")
+ .partition(-1, -2)
+ .ackMode("LEADER_ONLY")
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("device/one")
+ .header("zilla:filter", "device")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-1")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+
+write "Hello, again"
+write flush
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.topic.space/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.topic.space/server.rpt
new file mode 100644
index 0000000000..fbf99c2535
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/publish.topic.space/server.rpt
@@ -0,0 +1,80 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("PRODUCE_ONLY")
+ .topic("mqtt-sensors")
+ .partition(-1, -2)
+ .ackMode("LEADER_ONLY")
+ .build()
+ .build()}
+
+
+connected
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("sensor/one")
+ .header("zilla:filter", "sensor")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-1")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+
+read "Hello, world"
+
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("PRODUCE_ONLY")
+ .topic("mqtt-devices")
+ .partition(-1, -2)
+ .ackMode("LEADER_ONLY")
+ .build()
+ .build()}
+
+
+connected
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .deferred(0)
+ .partition(-1, -1)
+ .key("device/one")
+ .header("zilla:filter", "device")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-1")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+
+read "Hello, again"
+
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.abort.reconnect/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.abort.reconnect/client.rpt
new file mode 100644
index 0000000000..e817886363
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.abort.reconnect/client.rpt
@@ -0,0 +1,47 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+
+connected
+
+read aborted
+read notify RECEIVED_ABORT
+write abort
+
+
+connect await RECEIVED_ABORT
+ "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+connected
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.abort.reconnect/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.abort.reconnect/server.rpt
new file mode 100644
index 0000000000..ab951c4895
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.abort.reconnect/server.rpt
@@ -0,0 +1,46 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+
+connected
+
+write abort
+read aborted
+
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+
+connected
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.end.reconnect/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.end.reconnect/client.rpt
new file mode 100644
index 0000000000..6e87b204c1
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.end.reconnect/client.rpt
@@ -0,0 +1,47 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+
+connected
+
+read closed
+read notify RECEIVED_CLOSE
+write close
+
+
+connect await RECEIVED_CLOSE
+ "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+connected
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.end.reconnect/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.end.reconnect/server.rpt
new file mode 100644
index 0000000000..83e1a057ed
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.end.reconnect/server.rpt
@@ -0,0 +1,46 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+
+connected
+
+write close
+read closed
+
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+
+connected
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.reset.reconnect/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.reset.reconnect/client.rpt
new file mode 100644
index 0000000000..bb2aeab831
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.reset.reconnect/client.rpt
@@ -0,0 +1,46 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+
+connected
+
+write aborted
+read notify SENT_RESET
+
+
+connect await SENT_RESET
+ "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+connected
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.reset.reconnect/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.reset.reconnect/server.rpt
new file mode 100644
index 0000000000..f26cca1a7a
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.bootstrap.stream.reset.reconnect/server.rpt
@@ -0,0 +1,45 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+
+connected
+
+read abort
+
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+
+connected
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.client.topic.space/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.client.topic.space/client.rpt
new file mode 100644
index 0000000000..d48a05ae95
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.client.topic.space/client.rpt
@@ -0,0 +1,72 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+connected
+
+
+connect await RECEIVED_BOOTSTRAP_CONNECTED
+ "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("FETCH_ONLY")
+ .topic("mqtt-devices")
+ .filter()
+ .headers("zilla:filter")
+ .sequence("")
+ .sequence("sensor-clients")
+ .sequence("client-2")
+ .sequence("sensors")
+ .sequence("one")
+ .build()
+ .build()
+ .evaluation("EAGER")
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .filters(1)
+ .partition(0, 1, 2)
+ .progress(0, 2)
+ .progress(1, 1)
+ .key("/sensor-clients/client-2/sensors/one")
+ .header("zilla:filter", "")
+ .header("zilla:filter", "sensor-clients")
+ .header("zilla:filter", "client-2")
+ .header("zilla:filter", "sensors")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-2")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+read "Hello, world"
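
The client script above shows how a publish into a client topic space lands in Kafka: the record key is the full MQTT topic, every topic segment (including the empty segment produced by the leading `/`) becomes a `zilla:filter` header, and `zilla:local` carries the publishing client id. A minimal sketch of that mapping, with hypothetical helper and field names (not the binding's actual implementation):

```python
def to_kafka_record(topic, client_id, payload):
    """Sketch: map an MQTT publish in a client topic space onto a Kafka
    record shape. One zilla:filter header per topic segment; a leading
    '/' yields an empty first segment, as in the script above."""
    headers = [("zilla:filter", segment) for segment in topic.split("/")]
    headers.append(("zilla:local", client_id))   # publishing client id
    headers.append(("zilla:format", "TEXT"))
    return {"key": topic, "headers": headers, "value": payload}
```

For `/sensor-clients/client-2/sensors/one` this yields five `zilla:filter` headers, matching the five header assertions in the `matchDataEx` frame above.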
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.client.topic.space/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.client.topic.space/server.rpt
new file mode 100644
index 0000000000..8eab312e00
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.client.topic.space/server.rpt
@@ -0,0 +1,75 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .bootstrap()
+ .topic("mqtt-clients")
+ .groupId("mqtt-clients")
+ .build()
+ .build()}
+
+connected
+write notify RECEIVED_BOOTSTRAP_CONNECTED
+
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("FETCH_ONLY")
+ .topic("mqtt-devices")
+ .filter()
+ .headers("zilla:filter")
+ .sequence("")
+ .sequence("sensor-clients")
+ .sequence("client-2")
+ .sequence("sensors")
+ .sequence("one")
+ .build()
+ .build()
+ .evaluation("EAGER")
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .timestamp(kafka:timestamp())
+ .filters(1)
+ .partition(0, 1, 2)
+ .progress(0, 2)
+ .progress(1, 1)
+ .key("/sensor-clients/client-2/sensors/one")
+ .header("zilla:filter", "")
+ .header("zilla:filter", "sensor-clients")
+ .header("zilla:filter", "client-2")
+ .header("zilla:filter", "sensors")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-2")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+write "Hello, world"
+write flush
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.topic.space/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.topic.space/client.rpt
new file mode 100644
index 0000000000..780d896cd8
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.topic.space/client.rpt
@@ -0,0 +1,127 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("FETCH_ONLY")
+ .topic("mqtt-devices")
+ .filter()
+ .headers("zilla:filter")
+ .sequence("device")
+ .skip(1)
+ .build()
+ .build()
+ .evaluation("EAGER")
+ .build()
+ .build()}
+
+connected
+write notify FETCH_CONNECTED
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .filters(1)
+ .partition(0, 1, 2)
+ .progress(0, 2)
+ .progress(1, 1)
+ .key("device/one")
+ .header("zilla:filter", "device")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-2")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+read "Hello, world"
+
+write advise zilla:flush ${kafka:flushEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .fetch()
+ .capabilities("FETCH_ONLY")
+ .filter()
+ .headers("zilla:filter")
+ .sequence("device")
+ .skip(1)
+ .build()
+ .build()
+ .filter()
+ .headers("zilla:filter")
+ .sequence("sensor")
+ .skipMany()
+ .build()
+ .build()
+ .build()
+ .build()}
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .filters(2)
+ .partition(0, 1, 2)
+ .progress(0, 2)
+ .progress(1, 1)
+ .key("sensor/two")
+ .header("zilla:filter", "sensor")
+ .header("zilla:filter", "two")
+ .header("zilla:local", "client-2")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+read "Hello, again"
+
+
+connect await FETCH_CONNECTED
+ "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${kafka:beginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("FETCH_ONLY")
+ .topic("mqtt-sensors")
+ .filter()
+ .headers("zilla:filter")
+ .sequence("sensor")
+ .skipMany()
+ .build()
+ .build()
+ .evaluation("EAGER")
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${kafka:matchDataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .filters(1)
+ .partition(0, 1, 2)
+ .progress(0, 2)
+ .progress(1, 1)
+ .key("sensor/one")
+ .header("zilla:filter", "sensor")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-2")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+read "Hi, world"
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.topic.space/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.topic.space/server.rpt
new file mode 100644
index 0000000000..4f772dad18
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/kafka/subscribe.topic.space/server.rpt
@@ -0,0 +1,132 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/kafka0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("FETCH_ONLY")
+ .topic("mqtt-devices")
+ .filter()
+ .headers("zilla:filter")
+ .sequence("device")
+ .skip(1)
+ .build()
+ .build()
+ .evaluation("EAGER")
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .timestamp(kafka:timestamp())
+ .filters(1)
+ .partition(0, 1, 2)
+ .progress(0, 2)
+ .progress(1, 1)
+ .key("device/one")
+ .header("zilla:filter", "device")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-2")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+write "Hello, world"
+write flush
+
+read advised zilla:flush ${kafka:flushEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .fetch()
+ .capabilities("FETCH_ONLY")
+ .filter()
+ .headers("zilla:filter")
+ .sequence("device")
+ .skip(1)
+ .build()
+ .build()
+ .filter()
+ .headers("zilla:filter")
+ .sequence("sensor")
+ .skipMany()
+ .build()
+ .build()
+ .build()
+ .build()}
+
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .timestamp(kafka:timestamp())
+ .filters(2)
+ .partition(0, 1, 2)
+ .progress(0, 2)
+ .progress(1, 1)
+ .key("sensor/two")
+ .header("zilla:filter", "sensor")
+ .header("zilla:filter", "two")
+ .header("zilla:local", "client-2")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+write "Hello, again"
+write flush
+write notify SENT_SENSOR_TWO_DATA
+
+accepted
+
+read zilla:begin.ext ${kafka:matchBeginEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .capabilities("FETCH_ONLY")
+ .topic("mqtt-sensors")
+ .filter()
+ .headers("zilla:filter")
+ .sequence("sensor")
+ .skipMany()
+ .build()
+ .build()
+ .evaluation("EAGER")
+ .build()
+ .build()}
+
+connected
+
+write await SENT_SENSOR_TWO_DATA
+write zilla:data.ext ${kafka:dataEx()
+ .typeId(zilla:id("kafka"))
+ .merged()
+ .timestamp(kafka:timestamp())
+ .filters(1)
+ .partition(0, 1, 2)
+ .progress(0, 2)
+ .progress(1, 1)
+ .key("sensor/one")
+ .header("zilla:filter", "sensor")
+ .header("zilla:filter", "one")
+ .header("zilla:local", "client-2")
+ .header("zilla:format", "TEXT")
+ .build()
+ .build()}
+write "Hi, world"
+write flush
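
The scripts above encode MQTT topic filters as Kafka header-sequence filters: each literal segment becomes a `sequence(...)` entry, the single-level `+` wildcard becomes `skip(1)`, and the multi-level `#` wildcard becomes `skipMany()`. A small sketch of that translation, using hypothetical tuple tags rather than the binding's real builder API:

```python
def filter_headers(mqtt_filter):
    """Sketch: encode an MQTT topic filter as a zilla:filter header
    sequence. '+' -> skip(1) (exactly one segment), '#' -> skipMany()
    (zero or more segments), anything else matches exactly."""
    headers = []
    for segment in mqtt_filter.split("/"):
        if segment == "+":
            headers.append(("skip", 1))
        elif segment == "#":
            headers.append(("skipMany", None))
        else:
            headers.append(("sequence", segment))
    return headers
```

Under this encoding `device/+` becomes a literal `device` entry followed by `skip(1)`, and `sensor/#` becomes a literal `sensor` entry followed by `skipMany()`, mirroring the two filters carried by the flush advisory above.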
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.client.topic.space/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.client.topic.space/client.rpt
new file mode 100644
index 0000000000..efb49b192f
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.client.topic.space/client.rpt
@@ -0,0 +1,62 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/mqtt0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${mqtt:beginEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .clientId("client-1")
+ .topic("/clients/client-2/sensors/one")
+ .build()
+ .build()}
+
+connected
+
+write notify PUBLISH_CONNECTED
+write zilla:data.ext ${mqtt:dataEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .build()
+ .build()}
+write "Hello, world"
+write flush
+
+
+connect await PUBLISH_CONNECTED
+ "zilla://streams/mqtt0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${mqtt:beginEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .clientId("client-2")
+ .topic("/clients/client-1/device/one")
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.ext ${mqtt:dataEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .build()
+ .build()}
+write "Hello, again"
+write flush
+
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.client.topic.space/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.client.topic.space/server.rpt
new file mode 100644
index 0000000000..a2fe1202ae
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.client.topic.space/server.rpt
@@ -0,0 +1,56 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/mqtt0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${mqtt:matchBeginEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .clientId("client-1")
+ .topic("/clients/client-2/sensors/one")
+ .build()
+ .build()}
+connected
+
+read zilla:data.ext ${mqtt:matchDataEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .build()
+ .build()}
+read "Hello, world"
+
+
+accepted
+
+read zilla:begin.ext ${mqtt:matchBeginEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .clientId("client-2")
+ .topic("/clients/client-1/device/one")
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${mqtt:matchDataEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .build()
+ .build()}
+read "Hello, again"
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.multiple.clients/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.multiple.clients/client.rpt
index bf7b5195af..1290fee700 100644
--- a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.multiple.clients/client.rpt
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.multiple.clients/client.rpt
@@ -27,7 +27,7 @@ write zilla:begin.ext ${mqtt:beginEx()
connected
-write notify RECEIVED_REPLY_BEGIN
+write notify PUBLISH_CONNECTED
write zilla:data.ext ${mqtt:dataEx()
.typeId(zilla:id("mqtt"))
.publish()
@@ -58,7 +58,7 @@ write zilla:data.ext ${mqtt:dataEx()
write "message3"
write flush
-connect await RECEIVED_REPLY_BEGIN
+connect await PUBLISH_CONNECTED
"zilla://streams/mqtt0"
option zilla:window 8192
option zilla:transmission "duplex"
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.topic.space/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.topic.space/client.rpt
new file mode 100644
index 0000000000..ddd06cb609
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.topic.space/client.rpt
@@ -0,0 +1,66 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/mqtt0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${mqtt:beginEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .clientId("client-1")
+ .topic("sensor/one")
+ .build()
+ .build()}
+
+connected
+
+write notify PUBLISH_CONNECTED
+write zilla:data.ext ${mqtt:dataEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .format("TEXT")
+ .build()
+ .build()}
+
+write "Hello, world"
+write flush
+
+
+connect await PUBLISH_CONNECTED
+ "zilla://streams/mqtt0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${mqtt:beginEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .clientId("client-1")
+ .topic("device/one")
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.ext ${mqtt:dataEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .format("TEXT")
+ .build()
+ .build()}
+
+write "Hello, again"
+write flush
+
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.topic.space/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.topic.space/server.rpt
new file mode 100644
index 0000000000..0ae3bc8ad9
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/publish.topic.space/server.rpt
@@ -0,0 +1,61 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/mqtt0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${mqtt:matchBeginEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .clientId("client-1")
+ .topic("sensor/one")
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${mqtt:matchDataEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .format("TEXT")
+ .build()
+ .build()}
+
+read "Hello, world"
+
+
+accepted
+
+read zilla:begin.ext ${mqtt:matchBeginEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .clientId("client-1")
+ .topic("device/one")
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${mqtt:matchDataEx()
+ .typeId(zilla:id("mqtt"))
+ .publish()
+ .format("TEXT")
+ .build()
+ .build()}
+
+read "Hello, again"
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.client.topic.space/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.client.topic.space/client.rpt
new file mode 100644
index 0000000000..4d00bf0ace
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.client.topic.space/client.rpt
@@ -0,0 +1,39 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect await RECEIVED_BOOTSTRAP_CONNECTED
+ "zilla://streams/mqtt0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${mqtt:beginEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .clientId("client-1")
+ .filter("/sensor-clients/client-2/sensors/one", 1)
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${mqtt:matchDataEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .topic("/sensor-clients/client-2/sensors/one")
+ .subscriptionId(1)
+ .format("TEXT")
+ .build()
+ .build()}
+read "Hello, world"
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.client.topic.space/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.client.topic.space/server.rpt
new file mode 100644
index 0000000000..bccec51e45
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.client.topic.space/server.rpt
@@ -0,0 +1,41 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/mqtt0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${mqtt:beginEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .clientId("client-1")
+ .filter("/sensor-clients/client-2/sensors/one", 1)
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.ext ${mqtt:dataEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .topic("/sensor-clients/client-2/sensors/one")
+ .subscriptionId(1)
+ .format("TEXT")
+ .build()
+ .build()}
+write "Hello, world"
+write flush
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.topic.space/client.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.topic.space/client.rpt
new file mode 100644
index 0000000000..78ed241569
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.topic.space/client.rpt
@@ -0,0 +1,67 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+connect "zilla://streams/mqtt0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+write zilla:begin.ext ${mqtt:beginEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .clientId("client")
+ .filter("device/+", 1)
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.ext ${mqtt:matchDataEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .topic("device/one")
+ .subscriptionId(1)
+ .format("TEXT")
+ .build()
+ .build()}
+read "Hello, world"
+
+write advise zilla:flush ${mqtt:flushEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .filter("device/+", 1)
+ .filter("sensor/#", 2)
+ .build()
+ .build()}
+
+read zilla:data.ext ${mqtt:matchDataEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .topic("sensor/two")
+ .subscriptionId(2)
+ .format("TEXT")
+ .build()
+ .build()}
+read "Hello, again"
+
+
+read zilla:data.ext ${mqtt:matchDataEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .topic("sensor/one")
+ .subscriptionId(2)
+ .format("TEXT")
+ .build()
+ .build()}
+read "Hi, world"
diff --git a/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.topic.space/server.rpt b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.topic.space/server.rpt
new file mode 100644
index 0000000000..e48a115221
--- /dev/null
+++ b/specs/binding-mqtt-kafka.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/mqtt/subscribe.topic.space/server.rpt
@@ -0,0 +1,71 @@
+#
+# Copyright 2021-2023 Aklivity Inc
+#
+# Licensed under the Aklivity Community License (the "License"); you may not use
+# this file except in compliance with the License. You may obtain a copy of the
+# License at
+#
+# https://www.aklivity.io/aklivity-community-license/
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OF ANY KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations under the License.
+#
+
+accept "zilla://streams/mqtt0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+
+accepted
+
+read zilla:begin.ext ${mqtt:beginEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .clientId("client")
+ .filter("device/+", 1)
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.ext ${mqtt:dataEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .topic("device/one")
+ .subscriptionId(1)
+ .format("TEXT")
+ .build()
+ .build()}
+write "Hello, world"
+write flush
+
+read advised zilla:flush ${mqtt:flushEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .filter("device/+", 1)
+ .filter("sensor/#", 2)
+ .build()
+ .build()}
+
+write zilla:data.ext ${mqtt:dataEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .topic("sensor/two")
+ .subscriptionId(2)
+ .format("TEXT")
+ .build()
+ .build()}
+write "Hello, again"
+write flush
+
+write zilla:data.ext ${mqtt:dataEx()
+ .typeId(zilla:id("mqtt"))
+ .subscribe()
+ .topic("sensor/one")
+ .subscriptionId(2)
+ .format("TEXT")
+ .build()
+ .build()}
+write "Hi, world"
+write flush
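
On the MQTT side of these scripts, each delivered message carries the `subscriptionId` of the filter it matched: `device/one` matches filter 1 (`device/+`) while `sensor/two` and `sensor/one` match filter 2 (`sensor/#`). A sketch of that resolution under standard MQTT wildcard semantics, with hypothetical function names:

```python
def matching_subscription_ids(filters, topic):
    """Sketch: return the subscription ids whose filter matches a
    published topic. '+' matches exactly one topic segment; '#'
    matches the remainder of the topic."""
    def matches(topic_filter):
        fsegs = topic_filter.split("/")
        tsegs = topic.split("/")
        for i, fs in enumerate(fsegs):
            if fs == "#":
                return True
            if i >= len(tsegs) or fs not in ("+", tsegs[i]):
                return False
        return len(fsegs) == len(tsegs)

    return [sub_id for topic_filter, sub_id in filters if matches(topic_filter)]
```

With the filter list from the flush advisory above, `sensor/two` resolves to subscription id 2 and `device/one` to id 1, matching the `subscriptionId(...)` assertions in the data frames.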
diff --git a/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/config/SchemaTest.java b/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/config/SchemaTest.java
index f70d2087fb..9e4b04ea42 100644
--- a/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/config/SchemaTest.java
+++ b/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/config/SchemaTest.java
@@ -48,4 +48,20 @@ public void shouldValidateProxyWithOptions()
assertThat(config, not(nullValue()));
}
+
+ @Test
+ public void shouldValidateProxyWhenTopicWithMessages()
+ {
+ JsonObject config = schema.validate("proxy.when.topic.with.messages.yaml");
+
+ assertThat(config, not(nullValue()));
+ }
+
+ @Test
+ public void shouldValidateProxyWhenClientTopicSpace()
+ {
+ JsonObject config = schema.validate("proxy.when.client.topic.space.yaml");
+
+ assertThat(config, not(nullValue()));
+ }
}
diff --git a/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/KafkaIT.java b/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/KafkaIT.java
index b683ff1426..12a4faa740 100644
--- a/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/KafkaIT.java
+++ b/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/KafkaIT.java
@@ -179,6 +179,24 @@ public void shouldSendMultipleClients() throws Exception
k3po.finish();
}
+ @Test
+ @Specification({
+ "${kafka}/publish.topic.space/client",
+ "${kafka}/publish.topic.space/server"})
+ public void shouldSendUsingTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Specification({
+ "${kafka}/publish.client.topic.space/client",
+ "${kafka}/publish.client.topic.space/server"})
+ public void shouldSendUsingClientTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
@Test
@Specification({
"${kafka}/publish.with.user.properties.distinct/client",
@@ -467,6 +485,51 @@ public void shouldFilterIsolatedExactAndWildcard() throws Exception
k3po.finish();
}
+ @Test
+ @Specification({
+ "${kafka}/subscribe.topic.space/client",
+ "${kafka}/subscribe.topic.space/server"})
+ public void shouldFilterTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Specification({
+ "${kafka}/subscribe.client.topic.space/client",
+ "${kafka}/subscribe.client.topic.space/server"})
+ public void shouldFilterClientTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Specification({
+ "${kafka}/subscribe.bootstrap.stream.end.reconnect/client",
+ "${kafka}/subscribe.bootstrap.stream.end.reconnect/server"})
+ public void shouldReconnectBootstrapStreamOnKafkaEnd() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Specification({
+ "${kafka}/subscribe.bootstrap.stream.abort.reconnect/client",
+ "${kafka}/subscribe.bootstrap.stream.abort.reconnect/server"})
+ public void shouldReconnectBootstrapStreamOnKafkaAbort() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Specification({
+ "${kafka}/subscribe.bootstrap.stream.reset.reconnect/client",
+ "${kafka}/subscribe.bootstrap.stream.reset.reconnect/server"})
+ public void shouldReconnectBootstrapStreamOnKafkaReset() throws Exception
+ {
+ k3po.finish();
+ }
+
@Test
@Specification({
"${kafka}/unsubscribe.after.subscribe/client",
diff --git a/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/MqttIT.java b/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/MqttIT.java
index cbd728c02b..791a7efaa7 100644
--- a/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/MqttIT.java
+++ b/specs/binding-mqtt-kafka.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/kafka/streams/MqttIT.java
@@ -125,6 +125,24 @@ public void shouldSendMultipleClients() throws Exception
k3po.finish();
}
+ @Test
+ @Specification({
+ "${mqtt}/publish.topic.space/client",
+ "${mqtt}/publish.topic.space/server"})
+ public void shouldSendUsingTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Specification({
+ "${mqtt}/publish.client.topic.space/client",
+ "${mqtt}/publish.client.topic.space/server"})
+ public void shouldSendUsingClientTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
@Test
@Specification({
"${mqtt}/publish.retained/client",
@@ -395,6 +413,26 @@ public void shouldFilterIsolatedExactAndWildcard() throws Exception
k3po.finish();
}
+ @Test
+ @Specification({
+ "${mqtt}/subscribe.topic.space/client",
+ "${mqtt}/subscribe.topic.space/server"})
+ public void shouldFilterTopicSpace() throws Exception
+ {
+ k3po.finish();
+ }
+
+ @Test
+ @Specification({
+ "${mqtt}/subscribe.client.topic.space/client",
+ "${mqtt}/subscribe.client.topic.space/server"})
+ public void shouldFilterClientTopicSpace() throws Exception
+ {
+ k3po.start();
+ k3po.notifyBarrier("RECEIVED_BOOTSTRAP_CONNECTED");
+ k3po.finish();
+ }
+
@Test
@Specification({
"${mqtt}/unsubscribe.after.subscribe/client",
diff --git a/specs/binding-mqtt.spec/pom.xml b/specs/binding-mqtt.spec/pom.xml
index 010be2d9a5..02f4961b9f 100644
--- a/specs/binding-mqtt.spec/pom.xml
+++ b/specs/binding-mqtt.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-mqtt.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/streams/application/session.connect.authorization/client.rpt b/specs/binding-mqtt.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/streams/application/session.connect.authorization/client.rpt
new file mode 100644
index 0000000000..fdb49465b2
--- /dev/null
+++ b/specs/binding-mqtt.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/streams/application/session.connect.authorization/client.rpt
@@ -0,0 +1,44 @@
+#
+# Copyright 2021-2023 Aklivity Inc.
+#
+# Aklivity licenses this file to you under the Apache License,
+# version 2.0 (the "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at:
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+
+connect "zilla://streams/app0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+ option zilla:authorization 1L
+
+write zilla:begin.ext ${mqtt:beginEx()
+ .typeId(zilla:id("mqtt"))
+ .session()
+ .flags("CLEAN_START")
+ .clientId("client")
+ .build()
+ .build()}
+
+read zilla:begin.ext ${mqtt:matchBeginEx()
+ .typeId(zilla:id("mqtt"))
+ .session()
+ .flags("CLEAN_START")
+ .qosMax(2)
+ .packetSizeMax(66560)
+ .capabilities("RETAIN", "WILDCARD", "SUBSCRIPTION_IDS", "SHARED_SUBSCRIPTIONS")
+ .clientId("client")
+ .build()
+ .build()}
+
+connected
+
+read zilla:data.empty
+
diff --git a/specs/binding-mqtt.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/streams/application/session.connect.authorization/server.rpt b/specs/binding-mqtt.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/streams/application/session.connect.authorization/server.rpt
new file mode 100644
index 0000000000..bd2eb8ff17
--- /dev/null
+++ b/specs/binding-mqtt.spec/src/main/scripts/io/aklivity/zilla/specs/binding/mqtt/streams/application/session.connect.authorization/server.rpt
@@ -0,0 +1,46 @@
+#
+# Copyright 2021-2023 Aklivity Inc.
+#
+# Aklivity licenses this file to you under the Apache License,
+# version 2.0 (the "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at:
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+
+accept "zilla://streams/app0"
+ option zilla:window 8192
+ option zilla:transmission "duplex"
+ option zilla:authorization 1L
+
+accepted
+
+read zilla:begin.ext ${mqtt:matchBeginEx()
+ .typeId(zilla:id("mqtt"))
+ .session()
+ .flags("CLEAN_START")
+ .clientId("client")
+ .build()
+ .build()}
+
+write zilla:begin.ext ${mqtt:beginEx()
+ .typeId(zilla:id("mqtt"))
+ .session()
+ .flags("CLEAN_START")
+ .qosMax(2)
+ .packetSizeMax(66560)
+ .capabilities("RETAIN", "WILDCARD", "SUBSCRIPTION_IDS", "SHARED_SUBSCRIPTIONS")
+ .clientId("client")
+ .build()
+ .build()}
+
+connected
+
+write zilla:data.empty
+write flush
diff --git a/specs/binding-mqtt.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/streams/application/SessionIT.java b/specs/binding-mqtt.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/streams/application/SessionIT.java
index ce701e9387..3e136d862f 100644
--- a/specs/binding-mqtt.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/streams/application/SessionIT.java
+++ b/specs/binding-mqtt.spec/src/test/java/io/aklivity/zilla/specs/binding/mqtt/streams/application/SessionIT.java
@@ -46,6 +46,15 @@ public void shouldConnect() throws Exception
k3po.finish();
}
+ @Test
+ @Specification({
+ "${app}/session.connect.authorization/client",
+ "${app}/session.connect.authorization/server"})
+ public void shouldConnectAndAuthorize() throws Exception
+ {
+ k3po.finish();
+ }
+
@Test
@Specification({
"${app}/session.connect.with.session.expiry/client",
diff --git a/specs/binding-proxy.spec/pom.xml b/specs/binding-proxy.spec/pom.xml
index 62d4ced9ce..028896531d 100644
--- a/specs/binding-proxy.spec/pom.xml
+++ b/specs/binding-proxy.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-sse-kafka.spec/pom.xml b/specs/binding-sse-kafka.spec/pom.xml
index 042e145829..774ddc5e2c 100644
--- a/specs/binding-sse-kafka.spec/pom.xml
+++ b/specs/binding-sse-kafka.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-sse.spec/pom.xml b/specs/binding-sse.spec/pom.xml
index 8af9feb699..e4379aa7be 100644
--- a/specs/binding-sse.spec/pom.xml
+++ b/specs/binding-sse.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-tcp.spec/pom.xml b/specs/binding-tcp.spec/pom.xml
index ecb0dd5737..867716f571 100644
--- a/specs/binding-tcp.spec/pom.xml
+++ b/specs/binding-tcp.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-tls.spec/pom.xml b/specs/binding-tls.spec/pom.xml
index 34b5df9512..80622e82fd 100644
--- a/specs/binding-tls.spec/pom.xml
+++ b/specs/binding-tls.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/binding-ws.spec/pom.xml b/specs/binding-ws.spec/pom.xml
index 7d3d2ec6e0..5fbd7b3b07 100644
--- a/specs/binding-ws.spec/pom.xml
+++ b/specs/binding-ws.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/engine.spec/pom.xml b/specs/engine.spec/pom.xml
index 387ff27f9f..33ee97b41e 100644
--- a/specs/engine.spec/pom.xml
+++ b/specs/engine.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/exporter-prometheus.spec/pom.xml b/specs/exporter-prometheus.spec/pom.xml
index 222ebe9391..4a95dfa7b4 100644
--- a/specs/exporter-prometheus.spec/pom.xml
+++ b/specs/exporter-prometheus.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/guard-jwt.spec/pom.xml b/specs/guard-jwt.spec/pom.xml
index 931e85242e..659a6eec67 100644
--- a/specs/guard-jwt.spec/pom.xml
+++ b/specs/guard-jwt.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/metrics-grpc.spec/pom.xml b/specs/metrics-grpc.spec/pom.xml
index d88060faf7..87b1719e77 100644
--- a/specs/metrics-grpc.spec/pom.xml
+++ b/specs/metrics-grpc.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/metrics-http.spec/pom.xml b/specs/metrics-http.spec/pom.xml
index 99f6b1eae3..9fa84e4e48 100644
--- a/specs/metrics-http.spec/pom.xml
+++ b/specs/metrics-http.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/metrics-stream.spec/pom.xml b/specs/metrics-stream.spec/pom.xml
index 7a99818bd5..7ad4797a58 100644
--- a/specs/metrics-stream.spec/pom.xml
+++ b/specs/metrics-stream.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/pom.xml b/specs/pom.xml
index f2058879f1..cf74162d19 100644
--- a/specs/pom.xml
+++ b/specs/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
zilla
- 0.9.55
+ 0.9.56
../pom.xml
diff --git a/specs/vault-filesystem.spec/pom.xml b/specs/vault-filesystem.spec/pom.xml
index 83e92af9ec..1794163774 100644
--- a/specs/vault-filesystem.spec/pom.xml
+++ b/specs/vault-filesystem.spec/pom.xml
@@ -8,7 +8,7 @@
io.aklivity.zilla
specs
- 0.9.55
+ 0.9.56
../pom.xml