This documentation is intended for maintainers of Detect Secrets Stream.
## Make a change to the production database

- Before altering the production database, scale down the services that continuously use it and could otherwise block the change:

    ```shell
    kubectl scale --replicas=0 deployment/scan-worker
    kubectl scale --replicas=0 deployment/sqlexporter
    ```

- Make the changes to the database, for example:

    ```sql
    alter table token add column token_hash varchar;
    ```

- Scale the services back up:

    ```shell
    kubectl scale --replicas=<number_of_replicas> deployment/scan-worker
    kubectl scale --replicas=1 deployment/sqlexporter
    ```
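To avoid guessing `<number_of_replicas>` when scaling back up, you can record the replica count before scaling down. A minimal sketch, assuming `kubectl` is already pointed at the production cluster:

```shell
# Remember the scan-worker replica count so it can be restored afterwards
REPLICAS=$(kubectl get deployment scan-worker -o jsonpath='{.spec.replicas}')

kubectl scale --replicas=0 deployment/scan-worker
kubectl scale --replicas=0 deployment/sqlexporter

# ... apply the database change here ...

kubectl scale --replicas="$REPLICAS" deployment/scan-worker
kubectl scale --replicas=1 deployment/sqlexporter
```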
## Set up database roles

Create a role for the scan worker with read/write access to the tables and sequences it uses:

```sql
CREATE ROLE scan_worker_role;
GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA public TO scan_worker_role;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO scan_worker_role;
GRANT TRUNCATE ON TABLE public.vmt_report TO scan_worker_role;
```

Create a login role and user with read-only access to the `vmt_report` table:

```sql
CREATE ROLE token_viewer_role WITH LOGIN;
GRANT CONNECT ON DATABASE dss TO token_viewer_role;
GRANT USAGE ON SCHEMA public TO token_viewer_role;
GRANT SELECT ON public.vmt_report TO token_viewer_role;
CREATE USER token_viewer WITH IN GROUP token_viewer_role PASSWORD [redacted];
```
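Note that `GRANT ... ON ALL TABLES IN SCHEMA public` only applies to tables that exist when the grant runs; tables created later need a fresh grant. To verify what the roles can actually do, a sketch assuming `psql` access to the `dss` database:

```shell
# List the table-level privileges currently held by the two roles
psql -d dss -c "
    SELECT grantee, table_name, privilege_type
    FROM information_schema.role_table_grants
    WHERE grantee IN ('scan_worker_role', 'token_viewer_role')
    ORDER BY grantee, table_name;"
```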
## Connect to Kafka using kafka CLI

It's recommended to use the IBM Cloud Event Streams service to set up a Kafka queue.
- Download the kafka CLI from https://kafka.apache.org/downloads
- Extract the downloaded package into a local directory
- Obtain the configuration and bootstrap server
    - If using IBM Cloud Event Streams, go to the IBM Cloud account, open the Event Streams instance page, and click `Launch Dashboard` -> `Consumer groups`, then click `Connect to this service` on the top right. Copy the bootstrap server from `Connect a client` -> `Bootstrap server`, and copy the configuration from `Sample code` -> `Sample configuration properties`.
- Obtain an API key
    - From the IBM Cloud account, locate the Event Streams resource in the resources list and click on it
    - On the left pane, click `Service credentials` -> `New credential`
    - Once created, click on the newly created credential, then `View Credentials`, and copy the `apikey` field from the JSON output
- Go to your local directory and create an admin config file (`config/admin.properties`) and paste the previously copied configuration into it (see the sketch after this list)
- Test that the CLI works properly by listing all topics in the queue:

    ```shell
    bin/kafka-topics.sh --bootstrap-server <bootstrap-server> --command-config config/admin.properties --list
    ```
- (Optional) Set `KAFKA_HEAP_OPTS="-Xms512m -Xmx1g"` if the JVM runs out of memory when connecting to Kafka
- (Optional) Update `config/tools-log4j.properties` to change the log level
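For reference, the pasted configuration typically follows the standard SASL_SSL/PLAIN pattern used by IBM Cloud Event Streams. This is only a sketch; the authoritative values are the ones copied from `Sample configuration properties`, with `<bootstrap-server>` and `<apikey>` standing in for your instance's values:

```shell
# Write the admin client config; Event Streams authenticates with the
# literal username "token" and the service credential apikey as password.
cat > config/admin.properties <<'EOF'
bootstrap.servers=<bootstrap-server>
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="token" password="<apikey>";
EOF
```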
## Alter partitions for a topic

- Follow the steps in Connect to Kafka using kafka CLI to set up the CLI
- Alter the partition count with

    ```shell
    bin/kafka-topics.sh --bootstrap-server <bootstrap-server> --command-config config/admin.properties --alter --topic <topic_name> --partitions <new_partition_count>
    ```

- You can only increase the partition count, never decrease it
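Before altering, it can help to confirm the topic's current partition count; a sketch using the same admin config:

```shell
# Describe the topic to see its current partition count and replication factor
bin/kafka-topics.sh --bootstrap-server <bootstrap-server> \
    --command-config config/admin.properties --describe --topic <topic_name>
```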
## Check the current offset for a consumer group

- Follow the steps in Connect to Kafka using kafka CLI to set up the CLI
- Describe the group's offsets with

    ```shell
    bin/kafka-consumer-groups.sh --bootstrap-server <bootstrap-server> --command-config config/admin.properties --group <consumer_group_name> --describe --offsets
    ```

- Offsets are tracked per consumer group, so each consumer group will have a different offset
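If you don't know the consumer group's name, you can list all groups first; a sketch using the same admin config:

```shell
# List all consumer groups known to the cluster
bin/kafka-consumer-groups.sh --bootstrap-server <bootstrap-server> \
    --command-config config/admin.properties --list
```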
## Consume or produce messages

- Follow the steps in Connect to Kafka using kafka CLI to set up the CLI
- For consuming messages: instead of creating `config/admin.properties`, create `config/consumer.properties`. Then use `bin/kafka-console-consumer.sh` (see the sketch below).
- For producing messages: instead of creating `config/admin.properties`, create `config/producer.properties`. Then use `bin/kafka-console-producer.sh`.
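A sketch of both commands, with `<topic_name>` as a hypothetical placeholder and the consumer/producer properties files containing the same connection settings as the admin config:

```shell
# Consume messages from the beginning of a topic; Ctrl-C to stop
bin/kafka-console-consumer.sh --bootstrap-server <bootstrap-server> \
    --consumer.config config/consumer.properties \
    --topic <topic_name> --from-beginning

# Produce messages; each line typed on stdin becomes one message
bin/kafka-console-producer.sh --bootstrap-server <bootstrap-server> \
    --producer.config config/producer.properties \
    --topic <topic_name>
```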