
Apache renaming #278

Draft: wants to merge 3 commits into base `main`
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -40,13 +40,13 @@ juju model-config logging-config="<root>=INFO;unit=DEBUG"
# Build the charm locally
charmcraft pack

# Deploy the latest ZooKeeper release
# Deploy the latest Apache ZooKeeper release
juju deploy zookeeper --channel edge -n 3

# Deploy the charm
juju deploy ./*.charm -n 3

# After ZooKeeper has initialised, relate the applications
# After Apache ZooKeeper has initialised, relate the applications
juju relate kafka zookeeper
```

@@ -71,4 +71,4 @@ tox # runs 'lint' and 'unit' environments

## Canonical Contributor Agreement

Canonical welcomes contributions to the Charmed Kafka Operator. Please check out our [contributor agreement](https://ubuntu.com/legal/contributors) if you're interested in contributing to the solution.
Canonical welcomes contributions to the Charmed Apache Kafka Operator. Please check out our [contributor agreement](https://ubuntu.com/legal/contributors) if you're interested in contributing to the solution.
16 changes: 8 additions & 8 deletions README.md
@@ -14,7 +14,7 @@ The Charmed Operator can be found on [Charmhub](https://charmhub.io/kafka) and i
- SASL/SCRAM authentication for broker-broker and client-broker connections, enabled by default.
- Access control management supported with user-provided ACL lists.

As currently Kafka requires a paired ZooKeeper deployment in production, this operator makes use of the [ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions.
As currently Apache Kafka requires a paired Apache ZooKeeper deployment in production, this operator makes use of the [Apache ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions.

### Features checklist

@@ -33,7 +33,7 @@ The following are some of the most important planned features and their implemen

## Requirements

For production environments, it is recommended to deploy at least 5 nodes for Zookeeper and 3 for Kafka.
For production environments, it is recommended to deploy at least 5 nodes for Apache ZooKeeper and 3 for Apache Kafka.

The following requirements apply to a production environment:

@@ -51,7 +51,7 @@ For more information on how to perform typical tasks, see the How to guides sect

### Deployment

The Kafka and ZooKeeper operators can both be deployed as follows:
The Apache Kafka and Apache ZooKeeper operators can both be deployed as follows:

```shell
$ juju deploy zookeeper -n 5
@@ -70,18 +70,18 @@ To watch the process, the `juju status` command can be used. Once all the units
juju run-action kafka/leader get-admin-credentials --wait
```

Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks, e.g `bin/kafka-config.sh` to update cluster configuration, `bin/kafka-topics.sh` for topic management, and many more! The Kafka Charmed Operator provides these commands for administrators to run their desired cluster configurations securely with SASL authentication, either from within the cluster or as an external client.
Apache Kafka ships with `bin/*.sh` commands for various administrative tasks, e.g. `bin/kafka-config.sh` to update cluster configuration, `bin/kafka-topics.sh` for topic management, and many more! The Charmed Apache Kafka Operator provides these commands for administrators to run their desired cluster configurations securely with SASL authentication, either from within the cluster or as an external client.

For example, to list the current topics on the Kafka cluster, run the following command:
For example, to list the current topics on the Apache Kafka cluster, run the following command:

```shell
BOOTSTRAP_SERVERS=$(juju run-action kafka/leader get-admin-credentials --wait | grep "bootstrap.servers" | cut -d "=" -f 2)
juju ssh kafka/leader "charmed-kafka.topics --bootstrap-server $BOOTSTRAP_SERVERS --list --command-config /var/snap/charmed-kafka/common/client.properties"
```

Note that Charmed Apache Kafka cluster is secure-by-default: when no other application is related to Kafka, listeners are disabled, thus preventing any incoming connection. However, even for running the commands above, listeners must be enabled. If there are no other applications, you can deploy a `data-integrator` charm and relate it to Kafka to enable listeners.
Note that the Charmed Apache Kafka cluster is secure by default: when no other application is related to Apache Kafka, listeners are disabled, preventing any incoming connection. Even the commands above, however, require listeners to be enabled. If there are no other applications, you can deploy a `data-integrator` charm and relate it to Apache Kafka to enable listeners.
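As an illustrative sketch of that workflow (the `data-integrator` config option names below are assumptions to verify against that charm's documentation):

```shell
# Deploy a data-integrator application; the topic-name and extra-user-roles
# option names are assumptions -- check `juju config data-integrator`
juju deploy data-integrator --config topic-name=test-topic --config extra-user-roles=producer,consumer

# Relate it to the Kafka application, which enables the client listeners
juju relate data-integrator kafka
```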

Available Kafka bin commands can be found with:
Available Apache Kafka bin commands can be found with:

```
snap info charmed-kafka
@@ -119,7 +119,7 @@ Use the same action without a password parameter to randomly generate a password
Currently, the Charmed Apache Kafka Operator supports 1 or more storage volumes. A 10G storage volume will be installed by default for `log.dirs`.
This is used for log storage, mounted at `/var/snap/kafka/common`.

When storage is added or removed, the Kafka service will restart to ensure it uses the new volumes. Additionally, log + charm status messages will prompt users to manually reassign partitions so that the new storage volumes are populated. By default, Kafka will not assign partitions to new directories/units until existing topic partitions are assigned to it, or a new topic is created.
When storage is added or removed, the Apache Kafka service will restart to ensure it uses the new volumes. Additionally, log and charm status messages will prompt users to manually reassign partitions so that the new storage volumes are populated. By default, Apache Kafka will not assign partitions to new directories/units until existing topic partitions are assigned to them, or a new topic is created.
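Manual reassignment uses the stock Apache Kafka tooling. A minimal sketch, assuming the snap exposes the upstream `kafka-reassign-partitions.sh` tool under a `charmed-kafka.reassign-partitions` alias (an assumption to verify with `snap info charmed-kafka`) and that `topics.json`/`reassign.json` are files you prepare per the upstream tool's documentation:

```shell
# Generate a candidate reassignment plan for the topics listed in topics.json
# (the command alias and file names are illustrative assumptions)
juju ssh kafka/leader "charmed-kafka.reassign-partitions \
  --bootstrap-server $BOOTSTRAP_SERVERS \
  --topics-to-move-json-file topics.json \
  --broker-list '0,1,2' --generate"

# Execute the reviewed plan so new volumes receive partitions
juju ssh kafka/leader "charmed-kafka.reassign-partitions \
  --bootstrap-server $BOOTSTRAP_SERVERS \
  --reassignment-json-file reassign.json --execute"
```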

## Relations

2 changes: 1 addition & 1 deletion actions.yaml
@@ -28,7 +28,7 @@ set-tls-private-key:

get-admin-credentials:
description: Get administrator authentication credentials for client commands
The returned client_properties can be used for Kafka bin commands using `--bootstrap-server` and `--command-config` for admin level administration
The returned client_properties can be used for Apache Kafka bin commands using `--bootstrap-server` and `--command-config` for admin-level administration.
This action must be called on the leader unit.

get-listeners:
6 changes: 3 additions & 3 deletions config.yaml
@@ -37,7 +37,7 @@ options:
type: int
default: 1073741824
message_max_bytes:
description: The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.This can be set per topic with the topic level max.message.bytes config.
description: The largest record batch size allowed by Apache Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level max.message.bytes config.
type: int
default: 1048588
offsets_topic_num_partitions:
@@ -81,7 +81,7 @@ options:
type: int
default: 11
zookeeper_ssl_cipher_suites:
description: Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.
description: Specifies the enabled cipher suites to be used in Apache ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.
type: string
default: ""
profile:
@@ -113,6 +113,6 @@ options:
type: float
default: 0.8
expose_external:
description: "String to determine how to expose the Kafka cluster externally from the Kubernetes cluster. Possible values: 'nodeport', 'none'"
description: "String to determine how to expose the Apache Kafka cluster externally from the Kubernetes cluster. Possible values: 'nodeport', 'none'"
type: string
default: "nodeport"
18 changes: 9 additions & 9 deletions docs/explanation/e-cluster-configuration.md
@@ -1,19 +1,19 @@
# Overview of a cluster configuration content

[Apache Kafka](https://kafka.apache.org) is an open-source distributed event streaming platform that requires an external solution to coordinate and sync metadata between all active brokers.
One of such solutions is [ZooKeeper](https://zookeeper.apache.org).
One such solution is [Apache ZooKeeper](https://zookeeper.apache.org).

Here are some of the responsibilities of ZooKeeper in a Kafka cluster:
Here are some of the responsibilities of Apache ZooKeeper in an Apache Kafka cluster:

- **Cluster membership**: through regular heartbeats, it keeps track of the brokers entering and leaving the cluster, providing an up-to-date list of brokers.
- **Controller election**: one of the Kafka brokers is responsible for managing the leader/follower status for all the partitions. ZooKeeper is used to elect a controller and to make sure there is only one of it.
- **Topic configuration**: each topic can be replicated on multiple partitions. ZooKeeper keeps track of the locations of the partitions and replicas, so that high-availability is still attained when a broker shuts down. Topic-specific configuration overrides (e.g. message retention and size) are also stored in ZooKeeper.
- **Access control and authentication**: ZooKeeper stores access control lists (ACL) for Kafka resources, to ensure only the proper, authorized, users or groups can read or write on each topic.
- **Controller election**: one of the Apache Kafka brokers is responsible for managing the leader/follower status for all the partitions. Apache ZooKeeper is used to elect a controller and to make sure there is only one.
- **Topic configuration**: each topic can be replicated on multiple partitions. Apache ZooKeeper keeps track of the locations of the partitions and replicas, so that high-availability is still attained when a broker shuts down. Topic-specific configuration overrides (e.g. message retention and size) are also stored in Apache ZooKeeper.
- **Access control and authentication**: Apache ZooKeeper stores access control lists (ACLs) for Apache Kafka resources, to ensure only the proper, authorized users or groups can read or write on each topic.

The values for the configuration parameters mentioned above are stored in znodes, the hierarchical unit data structure in ZooKeeper.
The values for the configuration parameters mentioned above are stored in znodes, the hierarchical unit data structure in Apache ZooKeeper.
A znode is represented by its path and can both have data associated with it and children nodes.
ZooKeeper clients interact with its data structure similarly to a remote file system that would be sync-ed between the ZooKeeper units for high availability.
For a Charmed Kafka related to a Charmed ZooKeeper:
Apache ZooKeeper clients interact with its data structure similarly to a remote file system that would be synced between the Apache ZooKeeper units for high availability.
For a Charmed Apache Kafka related to a Charmed Apache ZooKeeper:
- the list of the broker ids of the cluster can be found in `/kafka/brokers/ids`
- the endpoint used to access the broker with id `0` can be found in `/kafka/brokers/ids/0`
- the credentials for the Charmed Kafka users can be found in `/kafka/config/users`
- the credentials for the Charmed Apache Kafka users can be found in `/kafka/config/users`
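Those paths can be browsed with any ZooKeeper client. A minimal sketch using the stock `zkCli.sh` shell shipped with upstream Apache ZooKeeper (the server address below is illustrative):

```shell
# Connect to a ZooKeeper unit (address is an illustrative placeholder)
./zkCli.sh -server 10.0.0.5:2181

# Inside the interactive shell: list the registered broker ids...
ls /kafka/brokers/ids
# ...and inspect the endpoint data stored for broker 0
get /kafka/brokers/ids/0
```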
32 changes: 16 additions & 16 deletions docs/explanation/e-hardening.md
@@ -1,11 +1,11 @@
# Security Hardening Guide

This document provides guidance and instructions to achieve
a secure deployment of [Charmed Kafka](https://github.com/canonical/kafka-bundle), including setting up and managing a secure environment.
a secure deployment of [Charmed Apache Kafka](https://github.com/canonical/kafka-bundle), including setting up and managing a secure environment.
The document is divided into the following sections:

1. Environment, outlining the recommendation for deploying a secure environment
2. Applications, outlining the product features that enable a secure deployment of a Kafka cluster
2. Applications, outlining the product features that enable a secure deployment of an Apache Kafka cluster
3. Additional resources, providing any further information about security and compliance

## Environment
@@ -17,7 +17,7 @@ The environment where applications operate can be divided into two components:

### Cloud

Charmed Kafka can be deployed on top of several clouds and virtualization layers:
Charmed Apache Kafka can be deployed on top of several clouds and virtualization layers:

| Cloud | Security guide |
|-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -58,36 +58,36 @@ Juju user credentials must be stored securely and rotated regularly to limit the
In the following sections, we provide guidance on how to harden your deployment using:

1. Operating System
2. Kafka and ZooKeeper Security Upgrades
2. Apache Kafka and Apache ZooKeeper Security Upgrades
3. Encryption
4. Authentication
5. Monitoring and Auditing

### Operating System

Charmed Kafka and Charmed ZooKeeper currently run on top of Ubuntu 22.04. Deploy a [Landscape Client Charm](https://charmhub.io/landscape-client?) in order to
Charmed Apache Kafka and Charmed Apache ZooKeeper currently run on top of Ubuntu 22.04. Deploy a [Landscape Client Charm](https://charmhub.io/landscape-client?) in order to
connect the underlying VM to a Landscape User Account to manage security upgrades and integrate Ubuntu Pro subscriptions.

### Kafka and ZooKeeper Security Upgrades
### Apache Kafka and Apache ZooKeeper Security Upgrades

Charmed Kafka and Charmed ZooKeeper operators install a pinned revision of the [Charmed Kafka snap](https://snapcraft.io/charmed-kafka)
and [Charmed ZooKeeper snap](https://snapcraft.io/charmed-zookeeper), respectively, in order to provide reproducible and secure environments.
New versions of Charmed Kafka and Charmed ZooKeeper may be released to provide patching of vulnerabilities (CVEs).
Charmed Apache Kafka and Charmed Apache ZooKeeper operators install a pinned revision of the [Charmed Apache Kafka snap](https://snapcraft.io/charmed-kafka)
and [Charmed Apache ZooKeeper snap](https://snapcraft.io/charmed-zookeeper), respectively, in order to provide reproducible and secure environments.
New versions of Charmed Apache Kafka and Charmed Apache ZooKeeper may be released to provide patching of vulnerabilities (CVEs).
It is important to refresh the charm regularly to make sure the workload is as secure as possible.
For more information on how to refresh the charm, see the [how-to upgrade](https://charmhub.io/kafka/docs/h-upgrade) guide.
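A refresh is a single Juju operation per application; a minimal sketch (the channel name below is illustrative):

```shell
# Refresh both charms to the latest revision in their current channel
juju refresh kafka
juju refresh zookeeper

# Or pin an explicit channel (channel name is illustrative)
juju refresh kafka --channel 3/stable
```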

### Encryption

Charmed Kafka must be deployed with encryption enabled.
To do that, you need to relate Kafka and ZooKeeper charms to one of the TLS certificate operator charms.
Charmed Apache Kafka must be deployed with encryption enabled.
To do that, you need to relate Apache Kafka and Apache ZooKeeper charms to one of the TLS certificate operator charms.
Please refer to the [Charming Security page](https://charmhub.io/topics/security-with-x-509-certificates) for more information on how to select the right certificate
provider for your use-case.

For more information on encryption setup, see the [How to enable encryption](https://charmhub.io/kafka/docs/h-enable-encryption) guide.
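As an illustrative sketch using the `self-signed-certificates` operator (suitable for testing only; production deployments should select a provider per the guidance above):

```shell
# Deploy a TLS certificates provider (self-signed: testing only)
juju deploy self-signed-certificates

# Relate both charms to the provider to enable TLS
juju relate kafka self-signed-certificates
juju relate zookeeper self-signed-certificates
```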

### Authentication

Charmed Kafka supports the following authentication layers:
Charmed Apache Kafka supports the following authentication layers:

1. [SCRAM-based SASL Authentication](/t/charmed-kafka-how-to-manage-app/10285)
2. [certificate-based authentication (mTLS)](/t/create-mtls-client-credentials/11079)
@@ -98,21 +98,21 @@ Please refer to the [listener reference documentation](/t/charmed-kafka-document

### Monitoring and Auditing

Charmed Kafka provides native integration with the [Canonical Observability Stack (COS)](https://charmhub.io/topics/canonical-observability-stack).
Charmed Apache Kafka provides native integration with the [Canonical Observability Stack (COS)](https://charmhub.io/topics/canonical-observability-stack).
To reduce the blast radius of infrastructure disruptions, the general recommendation is to deploy COS and the observed application into
separate environments, isolated from one another. Refer to the [COS production deployments best practices](https://charmhub.io/topics/canonical-observability-stack/reference/best-practices)
for more information.

Refer to How-To user guide for more information on:
* [how to integrate the Charmed Kafka deployment with COS](/t/charmed-kafka-how-to-enable-monitoring/10283)
* [how to integrate the Charmed Apache Kafka deployment with COS](/t/charmed-kafka-how-to-enable-monitoring/10283)
* [how to customise the alerting rules and dashboards](/t/charmed-kafka-documentation-how-to-integrate-custom-alerting-rules-and-dashboards/13431)

External user access to Kafka is logged to the `kafka-authorizer.log` that is pushes to [Loki endpoint](https://charmhub.io/loki-k8s) and exposed via [Grafana](https://charmhub.io/grafana), both components being part of the COS stack.
External user access to Apache Kafka is logged to `kafka-authorizer.log`, which is pushed to a [Loki endpoint](https://charmhub.io/loki-k8s) and exposed via [Grafana](https://charmhub.io/grafana), both components being part of the COS stack.
Access denials are logged at INFO level, whereas allowed accesses are logged at DEBUG level. Depending on the auditing needs,
customize the logging level either for all logs via the [`log_level`](https://charmhub.io/kafka/configurations?channel=3/stable#log_level) config option or
tune only the logging level of the `authorizerAppender` in the `log4j.properties` file. Refer to the Reference documentation for more information about
the [file system paths](/t/charmed-kafka-documentation-reference-file-system-paths/13262).
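For example, a coarse-grained adjustment via the charm config might look like the following (option name per the config reference linked above; raising the level to DEBUG captures allowed accesses as well as denials):

```shell
# Raise the broker log level so allowed accesses (logged at DEBUG) are captured
juju config kafka log_level=DEBUG
```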

## Additional Resources

For further information and details on the security and cryptographic specifications used by Charmed Kafka, please refer to the [Security Explanation page](/t/charmed-kafka-documentation-explanation-security/15714).
For further information and details on the security and cryptographic specifications used by Charmed Apache Kafka, please refer to the [Security Explanation page](/t/charmed-kafka-documentation-explanation-security/15714).