Update additional cases (without Charmed)
izmalk committed Nov 29, 2024
1 parent 00398ee commit 2728fd2
Showing 38 changed files with 287 additions and 238 deletions.
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -40,13 +40,13 @@ juju model-config logging-config="<root>=INFO;unit=DEBUG"
# Build the charm locally
charmcraft pack

# Deploy the latest ZooKeeper release
# Deploy the latest Apache ZooKeeper release
juju deploy zookeeper --channel edge -n 3

# Deploy the charm
juju deploy ./*.charm -n 3

# After ZooKeeper has initialised, relate the applications
# After Apache ZooKeeper has initialised, relate the applications
juju relate kafka zookeeper
```
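To follow the relation settling, `juju status` can be polled until all units report `active/idle`; a convenience sketch:

```shell
# Watch the model status every two seconds until the
# kafka and zookeeper units settle into active/idle
juju status --watch 2s
```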

16 changes: 8 additions & 8 deletions README.md
@@ -14,7 +14,7 @@ The Charmed Operator can be found on [Charmhub](https://charmhub.io/kafka) and i
- SASL/SCRAM authentication for Broker-Broker and Client-Broker connections, enabled by default.
- Access control management supported with user-provided ACL lists.

As currently Kafka requires a paired ZooKeeper deployment in production, this operator makes use of the [ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions.
As Apache Kafka currently requires a paired Apache ZooKeeper deployment in production, this operator makes use of the [Apache ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions.

### Features checklist

@@ -33,7 +33,7 @@ The following are some of the most important planned features and their implemen

## Requirements

For production environments, it is recommended to deploy at least 5 nodes for Zookeeper and 3 for Kafka.
For production environments, it is recommended to deploy at least 5 nodes for Apache ZooKeeper and 3 for Apache Kafka.

The following requirements are meant for a production environment:

@@ -51,7 +51,7 @@ For more information on how to perform typical tasks, see the How to guides sect

### Deployment

The Kafka and ZooKeeper operators can both be deployed as follows:
The Apache Kafka and Apache ZooKeeper operators can both be deployed as follows:

```shell
$ juju deploy zookeeper -n 5
@@ -70,18 +70,18 @@ To watch the process, the `juju status` command can be used. Once all the units
juju run-action kafka/leader get-admin-credentials --wait
```

Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks, e.g `bin/kafka-config.sh` to update cluster configuration, `bin/kafka-topics.sh` for topic management, and many more! The Kafka Charmed Operator provides these commands for administrators to run their desired cluster configurations securely with SASL authentication, either from within the cluster or as an external client.
Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks, e.g. `bin/kafka-config.sh` to update cluster configuration, `bin/kafka-topics.sh` for topic management, and many more! The Charmed Apache Kafka Operator provides these commands for administrators to run their desired cluster configurations securely with SASL authentication, either from within the cluster or as an external client.

For example, to list the current topics on the Kafka cluster, run the following command:
For example, to list the current topics on the Apache Kafka cluster, run the following command:

```shell
BOOTSTRAP_SERVERS=$(juju run-action kafka/leader get-admin-credentials --wait | grep "bootstrap.servers" | cut -d "=" -f 2)
juju ssh kafka/leader "charmed-kafka.topics --bootstrap-server $BOOTSTRAP_SERVERS --list --command-config /var/snap/charmed-kafka/common/client.properties"
```

Note that Charmed Apache Kafka cluster is secure-by-default: when no other application is related to Kafka, listeners are disabled, thus preventing any incoming connection. However, even for running the commands above, listeners must be enabled. If there are no other applications, you can deploy a `data-integrator` charm and relate it to Kafka to enable listeners.
Note that the Charmed Apache Kafka cluster is secure by default: when no other application is related to Apache Kafka, listeners are disabled, thus preventing any incoming connection. However, even for running the commands above, listeners must be enabled. If there are no other applications, you can deploy a `data-integrator` charm and relate it to Apache Kafka to enable listeners.
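As a sketch of that last step (the `topic-name` and `extra-user-roles` config options and the `get-credentials` action are assumptions based on typical `data-integrator` usage; check the charm's documentation for the exact names):

```shell
# Deploy data-integrator with a placeholder topic and client roles
# (option names are assumptions; verify with `juju config data-integrator`)
juju deploy data-integrator --config topic-name=test-topic --config extra-user-roles=producer,consumer

# Relate it to Apache Kafka; this enables the client listeners
juju relate data-integrator kafka

# Fetch the credentials generated for the relation
juju run-action data-integrator/leader get-credentials --wait
```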

Available Kafka bin commands can be found with:
Available Apache Kafka bin commands can be found with:

```
snap info charmed-kafka
@@ -119,7 +119,7 @@ Use the same action without a password parameter to randomly generate a password
Currently, the Charmed Apache Kafka Operator supports 1 or more storage volumes. A 10G storage volume will be installed by default for `log.dirs`.
This is used for log storage, mounted at `/var/snap/kafka/common`.

When storage is added or removed, the Kafka service will restart to ensure it uses the new volumes. Additionally, log + charm status messages will prompt users to manually reassign partitions so that the new storage volumes are populated. By default, Kafka will not assign partitions to new directories/units until existing topic partitions are assigned to it, or a new topic is created.
When storage is added or removed, the Apache Kafka service will restart to ensure it uses the new volumes. Additionally, log and charm status messages will prompt users to manually reassign partitions so that the new storage volumes are populated. By default, Apache Kafka will not assign partitions to new directories/units until existing topic partitions are reassigned to them, or a new topic is created.
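For illustration only, a manual reassignment after adding storage might look like the following; the `charmed-kafka.reassign-partitions` alias is an assumption modelled on the `charmed-kafka.topics` example above, `my-topic`, `topics.json`, and the broker list are placeholders, and `BOOTSTRAP_SERVERS` is assumed to be set as in the earlier example:

```shell
# Describe current partition placement for the placeholder topic
juju ssh kafka/leader "charmed-kafka.topics --bootstrap-server $BOOTSTRAP_SERVERS \
  --describe --topic my-topic \
  --command-config /var/snap/charmed-kafka/common/client.properties"

# Generate a candidate reassignment plan across brokers 0,1,2
# (assumes the snap aliases upstream kafka-reassign-partitions.sh)
juju ssh kafka/leader "charmed-kafka.reassign-partitions --bootstrap-server $BOOTSTRAP_SERVERS \
  --topics-to-move-json-file topics.json --broker-list 0,1,2 --generate \
  --command-config /var/snap/charmed-kafka/common/client.properties"
```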

## Relations

2 changes: 1 addition & 1 deletion actions.yaml
@@ -28,7 +28,7 @@ set-tls-private-key:

get-admin-credentials:
description: Get administrator authentication credentials for client commands
The returned client_properties can be used for Kafka bin commands using `--bootstrap-server` and `--command-config` for admin level administration
The returned client_properties can be used for Apache Kafka bin commands using `--bootstrap-server` and `--command-config` for admin-level administration.
This action must be called on the leader unit.

get-listeners:
6 changes: 3 additions & 3 deletions config.yaml
@@ -37,7 +37,7 @@ options:
type: int
default: 1073741824
message_max_bytes:
description: The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.This can be set per topic with the topic level max.message.bytes config.
description: The largest record batch size allowed by Apache Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level max.message.bytes config.
type: int
default: 1048588
offsets_topic_num_partitions:
@@ -81,7 +81,7 @@ options:
type: int
default: 11
zookeeper_ssl_cipher_suites:
description: Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.
description: Specifies the enabled cipher suites to be used in Apache ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.
type: string
default: ""
profile:
@@ -113,6 +113,6 @@ options:
type: float
default: 0.8
expose_external:
description: "String to determine how to expose the Kafka cluster externally from the Kubernetes cluster. Possible values: 'nodeport', 'none'"
description: "String to determine how to expose the Apache Kafka cluster externally from the Kubernetes cluster. Possible values: 'nodeport', 'none'"
type: string
default: "nodeport"
14 changes: 7 additions & 7 deletions docs/explanation/e-cluster-configuration.md
@@ -1,18 +1,18 @@
# Overview of a cluster configuration content

[Apache Kafka](https://kafka.apache.org) is an open-source distributed event streaming platform that requires an external solution to coordinate and sync metadata between all active brokers.
One of such solutions is [ZooKeeper](https://zookeeper.apache.org).
One such solution is [Apache ZooKeeper](https://zookeeper.apache.org).

Here are some of the responsibilities of ZooKeeper in a Kafka cluster:
Here are some of the responsibilities of Apache ZooKeeper in an Apache Kafka cluster:

- **Cluster membership**: through regular heartbeats, it keeps track of the brokers entering and leaving the cluster, providing an up-to-date list of brokers.
- **Controller election**: one of the Kafka brokers is responsible for managing the leader/follower status for all the partitions. ZooKeeper is used to elect a controller and to make sure there is only one of it.
- **Topic configuration**: each topic can be replicated on multiple partitions. ZooKeeper keeps track of the locations of the partitions and replicas, so that high-availability is still attained when a broker shuts down. Topic-specific configuration overrides (e.g. message retention and size) are also stored in ZooKeeper.
- **Access control and authentication**: ZooKeeper stores access control lists (ACL) for Kafka resources, to ensure only the proper, authorized, users or groups can read or write on each topic.
- **Controller election**: one of the Apache Kafka brokers is responsible for managing the leader/follower status for all the partitions. Apache ZooKeeper is used to elect a controller and to make sure there is only one at a time.
- **Topic configuration**: each topic can be replicated on multiple partitions. Apache ZooKeeper keeps track of the locations of the partitions and replicas, so that high-availability is still attained when a broker shuts down. Topic-specific configuration overrides (e.g. message retention and size) are also stored in Apache ZooKeeper.
- **Access control and authentication**: Apache ZooKeeper stores access control lists (ACL) for Apache Kafka resources, to ensure only the proper, authorized, users or groups can read or write on each topic.

The values for the configuration parameters mentioned above are stored in znodes, the hierarchical unit data structure in ZooKeeper.
The values for the configuration parameters mentioned above are stored in znodes, the hierarchical unit data structure in Apache ZooKeeper.
A znode is represented by its path and can both have data associated with it and child nodes.
ZooKeeper clients interact with its data structure similarly to a remote file system that would be sync-ed between the ZooKeeper units for high availability.
Apache ZooKeeper clients interact with its data structure similarly to a remote file system that is synced between the Apache ZooKeeper units for high availability.
For a Charmed Apache Kafka related to a Charmed Apache ZooKeeper:
- the list of the broker ids of the cluster can be found in `/kafka/brokers/ids`
- the endpoint used to access the broker with id `0` can be found in `/kafka/brokers/ids/0`
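These znodes can be inspected directly from a Charmed Apache ZooKeeper unit; a sketch, assuming the snap exposes the upstream `zkCli.sh` shell as a `charmed-zookeeper` sub-command (the exact alias may differ):

```shell
# List the registered broker ids (the command alias is an assumption)
juju ssh zookeeper/leader 'charmed-zookeeper.zkcli ls /kafka/brokers/ids'

# Read the endpoint data stored for broker 0
juju ssh zookeeper/leader 'charmed-zookeeper.zkcli get /kafka/brokers/ids/0'
```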
10 changes: 5 additions & 5 deletions docs/explanation/e-hardening.md
@@ -5,7 +5,7 @@ a secure deployment of [Charmed Apache Kafka](https://github.com/canonical/kafka
The document is divided into the following sections:

1. Environment, outlining the recommendation for deploying a secure environment
2. Applications, outlining the product features that enable a secure deployment of a Kafka cluster
2. Applications, outlining the product features that enable a secure deployment of an Apache Kafka cluster
3. Additional resources, providing any further information about security and compliance

## Environment
@@ -58,7 +58,7 @@ Juju user credentials must be stored securely and rotated regularly to limit the
In the following sections, we provide guidance on how to harden your deployment using:

1. Operating System
2. Kafka and ZooKeeper Security Upgrades
2. Apache Kafka and Apache ZooKeeper Security Upgrades
3. Encryption
4. Authentication
5. Monitoring and Auditing
@@ -68,7 +68,7 @@ In the following we provide guidance on how to harden your deployment using:
Charmed Apache Kafka and Charmed Apache ZooKeeper currently run on top of Ubuntu 22.04. Deploy a [Landscape Client Charm](https://charmhub.io/landscape-client?) in order to
connect the underlying VM to a Landscape User Account to manage security upgrades and integrate Ubuntu Pro subscriptions.

### Kafka and ZooKeeper Security Upgrades
### Apache Kafka and Apache ZooKeeper Security Upgrades

Charmed Apache Kafka and Charmed Apache ZooKeeper operators install a pinned revision of the [Charmed Apache Kafka snap](https://snapcraft.io/charmed-kafka)
and [Charmed Apache ZooKeeper snap](https://snapcraft.io/charmed-zookeeper), respectively, in order to provide reproducible and secure environments.
@@ -79,7 +79,7 @@ For more information on how to refresh the charm, see the [how-to upgrade](https
### Encryption

Charmed Apache Kafka must be deployed with encryption enabled.
To do that, you need to relate Kafka and ZooKeeper charms to one of the TLS certificate operator charms.
To do that, you need to relate Apache Kafka and Apache ZooKeeper charms to one of the TLS certificate operator charms.
Please refer to the [Charming Security page](https://charmhub.io/topics/security-with-x-509-certificates) for more information on how to select the right certificate
provider for your use-case.

@@ -107,7 +107,7 @@ Refer to How-To user guide for more information on:
* [how to integrate the Charmed Apache Kafka deployment with COS](/t/charmed-kafka-how-to-enable-monitoring/10283)
* [how to customise the alerting rules and dashboards](/t/charmed-kafka-documentation-how-to-integrate-custom-alerting-rules-and-dashboards/13431)

External user access to Kafka is logged to the `kafka-authorizer.log` that is pushes to [Loki endpoint](https://charmhub.io/loki-k8s) and exposed via [Grafana](https://charmhub.io/grafana), both components being part of the COS stack.
External user access to Apache Kafka is logged to the `kafka-authorizer.log` file that is pushed to a [Loki endpoint](https://charmhub.io/loki-k8s) and exposed via [Grafana](https://charmhub.io/grafana), both components being part of the COS stack.
Access denials are logged at INFO level, whereas allowed accesses are logged at DEBUG level. Depending on the auditing needs,
customize the logging level either for all logs via the [`log_level`](https://charmhub.io/kafka/configurations?channel=3/stable#log_level) config option or
for the `authorizerAppender` alone in the `log4j.properties` file. Refer to the Reference documentation for more information about
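For example, to record allowed accesses as well as denials for audit purposes, the cluster-wide level can be raised (a sketch using the `log_level` config option linked above):

```shell
# Raise the global logging level so DEBUG entries
# (allowed accesses) also reach kafka-authorizer.log
juju config kafka log_level=DEBUG
```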