
Add toxic-config component to configure Toxiproxy instances with latency toxics #14473

Merged
merged 2 commits into main from af/add-toxic-config-component
Nov 8, 2022

Conversation

andrew-farries
Contributor

Description

As part of #9198 we will run a Toxiproxy instance in the application cluster to allow us to simulate a high-latency connection to the Gitpod database. Installation and configuration of this Toxiproxy instance are done in preceding PRs.

Toxiproxy allows declarative configuration of proxies via a config file, but configuration of toxics has to be done dynamically while Toxiproxy is running.

This PR adds a new component toxic-config, intended to run as a sidecar container in a Kubernetes pod alongside Toxiproxy, that configures a given proxy with a latency toxic.

For example, with a Toxiproxy instance running on localhost:8474 with a proxy called mysql configured:

go run . --proxy mysql --latency=1000 --jitter=250

will configure the mysql proxy with a latency toxic with latency and jitter set to the provided values.

A later PR will run this process as a sidecar container in the Toxiproxy pod.

Related Issue(s)

Part of #9198

How to test

  • Port forward to the toxiproxy service in the preview environment:
kubectl port-forward svc/toxiproxy 8474:8474
  • Run toxic-config against the toxiproxy instance:
cd components/toxic-config
go run . --proxy mysql --latency=1000 --jitter=250
  • See that the mysql proxy now has a latency toxic configured.
docker run --rm --net=host --entrypoint="/toxiproxy-cli" -it ghcr.io/shopify/toxiproxy inspect mysql

Release Notes

NONE

Documentation

Werft options:

  • /werft with-local-preview
    If enabled this will build install/preview
  • /werft with-preview
  • /werft with-slow-database
  • /werft with-large-vm
  • /werft with-integration-tests=all
    Valid options are all, workspace, webapp, ide

@werft-gitpod-dev-com

started the job as gitpod-build-af-add-toxic-config-component.9 because the annotations in the pull request description changed
(with .werft/ from main)

@easyCZ
Member

easyCZ commented Nov 8, 2022

Thanks for the change. Why does this need to be configured in go code rather than a configuration file? I feel like I'm missing a piece here.

@andrew-farries
Contributor Author

Toxiproxy allows declarative configuration of proxies via a config file, but configuration of toxics has to be done dynamically while Toxiproxy is running. The ability to configure toxics via the config file is a requested feature (Shopify/toxiproxy#447), but currently unimplemented.

I think this stems from our use of Toxiproxy in a way that falls slightly outside its intended use case of automated testing, where toxics are much more dynamic and are added and removed by the tests themselves. For our use case, we want to configure one toxic and then leave it in place indefinitely, for which a config file would be more suitable.

FWIW, we do configure Toxiproxy with a mysql proxy via a config file:

https://github.com/gitpod-io/gitpod/blob/main/install/installer/pkg/components/toxiproxy/configmap.go

It would be nice to have the latency toxic configured in there too, but that isn't currently possible.
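For context, the config file format only describes proxies, not toxics. A minimal sketch of such a file (the listen and upstream addresses are illustrative):

```json
[
  {
    "name": "mysql",
    "listen": "0.0.0.0:3306",
    "upstream": "db:3306",
    "enabled": true
  }
]
```

There is no field for toxics in this schema, which is why a sidecar has to add them over the runtime API instead.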

@andrew-farries
Contributor Author

Longer term, we could make the change upstream to allow toxic configuration via config file. That would allow us to delete this toxic-config sidecar.

@easyCZ
Member

easyCZ commented Nov 8, 2022

@andrew-farries Thanks, that makes more sense now.

@roboquat roboquat merged commit ddf6f3e into main Nov 8, 2022
@roboquat roboquat deleted the af/add-toxic-config-component branch November 8, 2022 15:43