# Operation Guide

This documentation is intended for maintainers of Detect Secrets Stream.

## Making changes to the production database

1. Before altering the production database, scale down the services that could block the change, since they use the database continuously (the sketch after this list shows how to record the current replica count first):

   ```sh
   kubectl scale --replicas=0 deployment/scan-worker
   kubectl scale --replicas=0 deployment/sqlexporter
   ```

2. Make the changes to the database, such as `alter table token add column token_hash varchar;`

3. Scale the services back up:

   ```sh
   kubectl scale --replicas=<number_of_replicas> deployment/scan-worker
   kubectl scale --replicas=1 deployment/sqlexporter
   ```
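
If the original replica count isn't known offhand, it can be read off the deployment before scaling down; a minimal sketch, assuming the deployments live in the currently selected namespace:

```sh
# Record the current replica count so it can be restored in step 3.
kubectl get deployment/scan-worker -o jsonpath='{.spec.replicas}'
```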

## Add database roles

```sql
CREATE ROLE scan_worker_role;
GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA public TO scan_worker_role;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO scan_worker_role;
GRANT TRUNCATE ON TABLE public.vmt_report TO scan_worker_role;
```
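
A role by itself cannot log in, so a service user still needs to be attached to it. A minimal sketch, assuming `psql` access as an admin user; the user name, host, and password below are placeholders, not values from this repository:

```sh
# Hypothetical example: create a login user that inherits scan_worker_role.
psql "host=<db-host> dbname=dss user=admin" <<'SQL'
CREATE USER scan_worker WITH IN GROUP scan_worker_role PASSWORD '<password>';
SQL
```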

## Add a role for the token metadata viewer

```sql
CREATE ROLE token_viewer_role WITH LOGIN;
GRANT CONNECT ON DATABASE dss TO token_viewer_role;
GRANT USAGE ON SCHEMA public TO token_viewer_role;
GRANT SELECT ON public.vmt_report TO token_viewer_role;

CREATE USER token_viewer WITH IN GROUP token_viewer_role PASSWORD '[redacted]';
```
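
As a sanity check, the new user should be able to connect and read the report table; a sketch, with the host left as a placeholder:

```sh
# Verify token_viewer can connect to dss and read vmt_report.
psql "host=<db-host> dbname=dss user=token_viewer" \
  -c 'SELECT COUNT(*) FROM public.vmt_report;'
```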

## Kafka

It's recommended to use the IBM Event Streams service on IBM Cloud to set up a Kafka queue.

### Connect to Kafka using the Kafka CLI

1. Download the Kafka CLI from <https://kafka.apache.org/downloads>
2. Extract the downloaded package into a local directory
3. Obtain the configuration and the bootstrap server
   1. If using IBM Event Streams, go to the IBM Cloud account, open the Event Streams instance page, click Launch Dashboard -> Consumer groups, then click Connect to this service at the top right
   2. Connect a client -> Bootstrap server
   3. Sample code -> Sample configuration properties. Copy the configuration.
4. Obtain an API key
   1. From the IBM Cloud account, locate the Event Streams resource in the resources list and click on it
   2. On the left pane, click Service credentials -> New credential
   3. Once created, click the newly created credential, then View Credentials, and copy the `apikey` field from the JSON output
5. Go to your local directory, create an admin config file (`config/admin.properties`), and paste the previously copied configuration into it (see the sketch after this list)
6. Test that the CLI works by listing all topics in the queue with `bin/kafka-topics.sh --bootstrap-server <bootstrap-server> --command-config config/admin.properties --list`
   1. (Optional) Set `KAFKA_HEAP_OPTS="-Xms512m -Xmx1g"` if the JVM runs out of memory when connecting to Kafka
   2. (Optional) Update `config/tools-log4j.properties` to change the log level
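
For step 5, the file typically ends up looking like the sketch below. The authoritative contents come from the Sample configuration properties panel; `<bootstrap-server>` and `<apikey>` are placeholders, and the SASL settings shown (PLAIN over SASL_SSL with the literal username `token`) are assumptions based on common IBM Event Streams setups:

```sh
# Sketch: write the admin config copied from the Event Streams dashboard.
mkdir -p config
cat > config/admin.properties <<'EOF'
bootstrap.servers=<bootstrap-server>
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="token" password="<apikey>";
EOF
```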

### Increase partitions for a topic

1. Follow the steps in Connect to Kafka using the Kafka CLI to set up the CLI
2. Run `bin/kafka-topics.sh --bootstrap-server <bootstrap-server> --command-config config/admin.properties --alter --topic <topic_name> --partitions <new_partition_count>` (a verification sketch follows this list)
   1. Note that the partition count can only be increased, never decreased
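
To confirm the change took effect, the topic can be described afterwards, reusing the same placeholders:

```sh
# Check the partition count after the --alter call.
bin/kafka-topics.sh --bootstrap-server <bootstrap-server> \
  --command-config config/admin.properties --describe --topic <topic_name>
```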

### Display partition offsets

1. Follow the steps in Connect to Kafka using the Kafka CLI to set up the CLI
2. Run `bin/kafka-consumer-groups.sh --bootstrap-server <bootstrap-server> --command-config config/admin.properties --group <consumer_group_name> --describe --offsets`
   1. Offsets are tracked per consumer group, so each consumer group will report different values (the output includes CURRENT-OFFSET, LOG-END-OFFSET, and LAG per partition)

### Consume or produce a message using the CLI

1. Follow the steps in Connect to Kafka using the Kafka CLI to set up the CLI, with one change each (sketches of both invocations follow this list):
   1. To consume messages: instead of creating `config/admin.properties`, create `config/consumer.properties`, then use `bin/kafka-console-consumer.sh`
   2. To produce messages: instead of creating `config/admin.properties`, create `config/producer.properties`, then use `bin/kafka-console-producer.sh`
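
Minimal sketches of the two invocations; `<topic_name>` is a placeholder, and the consumer/producer config files are assumed to carry the same connection properties as `config/admin.properties`:

```sh
# Consume a topic from the beginning (Ctrl-C to stop).
bin/kafka-console-consumer.sh --bootstrap-server <bootstrap-server> \
  --consumer.config config/consumer.properties \
  --topic <topic_name> --from-beginning

# Produce messages interactively: each line typed becomes one message.
bin/kafka-console-producer.sh --bootstrap-server <bootstrap-server> \
  --producer.config config/producer.properties --topic <topic_name>
```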