# Falco

Falco is a behavioral activity monitor designed to detect anomalous activity in your applications. You can use Falco to monitor the runtime security of your Kubernetes applications and internal components.

To learn more about Falco, have a look at the official documentation.

## Introduction

This chart adds Falco to all nodes in your cluster using a DaemonSet.

It also provides a Deployment for generating sample Falco alerts, which is useful for testing purposes.

## Adding the falcosecurity repository

Before installing the chart, add the falcosecurity charts repository:

```shell
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```

## Installing the Chart

To install the chart with the release name `falco`, run:

```shell
helm install falco falcosecurity/falco
```

After a few seconds, Falco should be running.

> Tip: List all releases using `helm list`. A release is a name used to track a specific deployment.

## Uninstalling the Chart

To uninstall the `falco` deployment:

```shell
helm uninstall falco
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the Falco chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| `image.registry` | The image registry to pull from | `docker.io` |
| `image.repository` | The image repository to pull from | `falcosecurity/falco` |
| `image.tag` | The image tag to pull | `0.25.0` |
| `image.pullPolicy` | The image pull policy | `IfNotPresent` |
| `image.pullSecrets` | The image pull secrets | `[]` |
| `containerd.enabled` | Enable ContainerD support | `true` |
| `containerd.socket` | The path of the ContainerD socket | `/run/containerd/containerd.sock` |
| `docker.enabled` | Enable Docker support | `true` |
| `docker.socket` | The path of the Docker daemon socket | `/var/run/docker.sock` |
| `resources.requests.cpu` | CPU requested for being run in a node | `100m` |
| `resources.requests.memory` | Memory requested for being run in a node | `512Mi` |
| `resources.limits.cpu` | CPU limit | `200m` |
| `resources.limits.memory` | Memory limit | `1024Mi` |
| `extraArgs` | Specify additional container args | `[]` |
| `rbac.create` | If true, create & use RBAC resources | `true` |
| `serviceAccount.create` | Create a serviceAccount | `true` |
| `serviceAccount.name` | Use this value as the serviceAccountName | |
| `fakeEventGenerator.enabled` | Run falcosecurity/event-generator for sample events | `false` |
| `fakeEventGenerator.args` | Arguments for falcosecurity/event-generator | `run --loop ^syscall` |
| `fakeEventGenerator.replicas` | How many replicas of falcosecurity/event-generator to run | `1` |
| `daemonset.updateStrategy.type` | The updateStrategy for updating the daemonset | `RollingUpdate` |
| `daemonset.env` | Extra environment variables passed to daemonset pods | `{}` |
| `daemonset.podAnnotations` | Extra pod annotations to be added to pods created by the daemonset | `{}` |
| `podSecurityPolicy.create` | If true, create & use a podSecurityPolicy | `false` |
| `proxy.httpProxy` | The HTTP proxy server to use if Falco is behind a firewall | |
| `proxy.httpsProxy` | The HTTPS proxy server to use if Falco is behind a firewall | |
| `proxy.noProxy` | Hosts that should bypass the proxy | |
| `timezone` | Set the daemonset's timezone | |
| `priorityClassName` | Set the daemonset's priorityClassName | |
| `ebpf.enabled` | Enable eBPF support for Falco instead of the falco-probe kernel module | `false` |
| `ebpf.settings.hostNetwork` | Needed to enable eBPF JIT at runtime for performance reasons | `true` |
| `auditLog.enabled` | Enable K8s audit log support for Falco | `false` |
| `auditLog.dynamicBackend.enabled` | Deploy the AuditSink where Falco listens for K8s audit log events | `false` |
| `auditLog.dynamicBackend.url` | Point the AuditSink client config at a fixed url (useful for development) instead of the default webserver service | |
| `falco.rulesFile` | The location of the rules files | `[/etc/falco/falco_rules.yaml, /etc/falco/falco_rules.local.yaml, /etc/falco/rules.available/application_rules.yaml, /etc/falco/rules.d]` |
| `falco.timeFormatISO8601` | Display times using ISO 8601 instead of the local time zone | `false` |
| `falco.jsonOutput` | Output events in JSON instead of text | `false` |
| `falco.jsonIncludeOutputProperty` | Include the output property in JSON output | `true` |
| `falco.logStderr` | Send Falco debugging information logs to stderr | `true` |
| `falco.logSyslog` | Send Falco debugging information logs to syslog | `true` |
| `falco.logLevel` | The minimum level of Falco debugging information to include in logs | `info` |
| `falco.priority` | The minimum rule priority level to load and run | `debug` |
| `falco.bufferedOutputs` | Use buffered outputs to channels | `false` |
| `falco.syscallEventDrops.actions` | Actions to take when system calls are dropped from the circular buffer | `[log, alert]` |
| `falco.syscallEventDrops.rate` | Rate at which log/alert messages are emitted | `.03333` |
| `falco.syscallEventDrops.maxBurst` | Max burst of messages emitted | `10` |
| `falco.outputs.rate` | Number of tokens gained per second | `1` |
| `falco.outputs.maxBurst` | Maximum number of tokens outstanding | `1000` |
| `falco.syslogOutput.enabled` | Enable syslog output for security notifications | `true` |
| `falco.fileOutput.enabled` | Enable file output for security notifications | `false` |
| `falco.fileOutput.keepAlive` | Open the file once instead of every time a new notification arrives | `false` |
| `falco.fileOutput.filename` | The filename for logging notifications | `./events.txt` |
| `falco.stdoutOutput.enabled` | Enable stdout output for security notifications | `true` |
| `falco.webserver.enabled` | Enable the Falco embedded webserver to accept K8s audit events | `true` |
| `falco.webserver.listenPort` | Port where the Falco embedded webserver listens for connections | `8765` |
| `falco.webserver.k8sAuditEndpoint` | Endpoint where the Falco embedded webserver accepts K8s audit events | `/k8s-audit` |
| `falco.programOutput.enabled` | Enable program output for security notifications | `false` |
| `falco.programOutput.keepAlive` | Start the program once instead of re-spawning it when a notification arrives | `false` |
| `falco.programOutput.program` | Command to execute for program output | `mail -s "Falco Notification" [email protected]` |
| `falco.httpOutput.enabled` | Enable HTTP output for security notifications | `false` |
| `falco.httpOutput.url` | URL to notify via the HTTP output when a notification arrives | `http://some.url` |
| `falco.grpc.enabled` | Enable the Falco gRPC server | `false` |
| `falco.grpc.threadiness` | Number of threads (and contexts) the gRPC server will use; `0` means "auto" | `0` |
| `falco.grpc.unixSocketPath` | Unix socket the gRPC server will create | `unix:///var/run/falco/falco.sock` |
| `falco.grpc.listenPort` | Port where the Falco gRPC server listens for connections | `5060` |
| `falco.grpc.privateKey` | Key file path for the Falco gRPC server | `/etc/falco/certs/server.key` |
| `falco.grpc.certChain` | Cert file path for the Falco gRPC server | `/etc/falco/certs/server.crt` |
| `falco.grpc.rootCerts` | CA root file path for the Falco gRPC server | `/etc/falco/certs/ca.crt` |
| `falco.grpcOutput.enabled` | Enable the gRPC output; events are kept in memory until read by a gRPC client | `false` |
| `customRules` | Third-party rules enabled for Falco | `{}` |
| `integrations.gcscc.enabled` | Enable the Google Cloud Security Command Center integration | `false` |
| `integrations.gcscc.webhookUrl` | The URL where the sysdig-gcscc-connector webhook is listening | `http://sysdig-gcscc-connector.default.svc.cluster.local:8080/events` |
| `integrations.gcscc.webhookAuthenticationToken` | Token used for webhook authentication | `b27511f86e911f20b9e0f9c8104b4ec4` |
| `integrations.natsOutput.enabled` | Enable the NATS Output integration | `false` |
| `integrations.natsOutput.natsUrl` | The NATS URL where Falco publishes security alerts | `nats://nats.nats-io.svc.cluster.local:4222` |
| `integrations.pubsubOutput.credentialsData` | Contents retrieved from `cat $HOME/.config/gcloud/legacy_credentials/<email>/adc.json` | |
| `integrations.pubsubOutput.enabled` | Enable the GCloud Pub/Sub Output integration | `false` |
| `integrations.pubsubOutput.projectID` | GCloud project ID where the Pub/Sub will be created | |
| `integrations.snsOutput.enabled` | Enable the Amazon SNS Output integration | `false` |
| `integrations.snsOutput.topic` | The SNS topic where Falco publishes security alerts | |
| `integrations.snsOutput.aws_access_key_id` | The AWS Access Key Id credential for access to SNS | |
| `integrations.snsOutput.aws_secret_access_key` | The AWS Secret Access Key credential for access to SNS | |
| `integrations.snsOutput.aws_default_region` | The AWS region where SNS is deployed | |
| `nodeSelector` | The node selection constraint | `{}` |
| `affinity` | The affinity constraint | `{}` |
| `tolerations` | The tolerations for scheduling | `node-role.kubernetes.io/master:NoSchedule` |
| `scc.create` | Create OpenShift's Security Context Constraint | `true` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:

```shell
helm install falco --set falco.jsonOutput=true falcosecurity/falco
```

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example:

```shell
helm install falco -f values.yaml falcosecurity/falco
```

> Tip: You can use the default values.yaml as a reference.
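For instance, a small custom values file that overrides a handful of the parameters from the table above might look like the following sketch (the specific values shown are illustrative, not recommendations):

```yaml
# my-values.yaml (illustrative): a few common overrides for the Falco chart
image:
  tag: 0.25.0

falco:
  jsonOutput: true          # emit events as JSON instead of text
  logLevel: info
  fileOutput:
    enabled: true
    filename: /var/log/falco-events.txt

resources:
  requests:
    cpu: 100m
    memory: 512Mi
```

It would then be passed to the chart with `helm install falco -f my-values.yaml falcosecurity/falco`.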

## Loading custom rules

Falco ships with a sensible default ruleset. It is a good starting point, but sooner or later you will need to add custom rules that fit your needs.

So the question is: how can we load custom rules into our Falco deployment?

We are going to create a file containing the custom rules, so that we can keep it in a Git repository:

```shell
cat custom-rules.yaml
```

And the file looks like this:

```yaml
customRules:
  rules-traefik.yaml: |-
    - macro: traefik_consider_syscalls
      condition: (evt.num < 0)

    - macro: app_traefik
      condition: container and container.image startswith "traefik"

    # Restricting listening ports to selected set

    - list: traefik_allowed_inbound_ports_tcp
      items: [443, 80, 8080]

    - rule: Unexpected inbound tcp connection traefik
      desc: Detect inbound traffic to traefik using tcp on a port outside of expected set
      condition: inbound and evt.rawres >= 0 and not fd.sport in (traefik_allowed_inbound_ports_tcp) and app_traefik
      output: Inbound network connection to traefik on unexpected port (command=%proc.cmdline pid=%proc.pid connection=%fd.name sport=%fd.sport user=%user.name %container.info image=%container.image)
      priority: NOTICE

    # Restricting spawned processes to selected set

    - list: traefik_allowed_processes
      items: ["traefik"]

    - rule: Unexpected spawned process traefik
      desc: Detect a process started in a traefik container outside of an expected set
      condition: spawned_process and not proc.name in (traefik_allowed_processes) and app_traefik
      output: Unexpected process spawned in traefik container (command=%proc.cmdline pid=%proc.pid user=%user.name %container.info image=%container.image)
      priority: NOTICE
```

The next step is to use the custom-rules.yaml file when installing the Falco Helm chart:

```shell
helm install falco -f custom-rules.yaml falcosecurity/falco
```

And we will see something like this in the logs:

```
Tue Jun  5 15:08:57 2018: Loading rules from file /etc/falco/rules.d/rules-traefik.yaml:
```

This means that our Falco installation has loaded the rules and is ready to help us.

## Automating the generation of the custom-rules.yaml file

Editing YAML files with multi-line strings can be error prone, so a script is provided to automate this step and make your life easier.

The script lives in the falco-extras repository, in the scripts directory.

Imagine that you would like to add rules for your Redis, MongoDB, and Traefik containers; you have to:

```shell
git clone https://github.com/draios/falco-extras.git
cd falco-extras
./scripts/rules2helm rules/rules-mongo.yaml rules/rules-redis.yaml rules/rules-traefik.yaml > custom-rules.yaml
helm install falco -f custom-rules.yaml falcosecurity/falco
```

And that's all: in a few seconds you will see your pods up and running with the MongoDB, Redis, and Traefik rules enabled.
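Conceptually, the conversion is simple: each rules file becomes an indented YAML block scalar under the `customRules` key, exactly like the hand-written example above. A minimal shell sketch of that idea (not the actual rules2helm implementation, and the function name is illustrative) looks like this:

```shell
# Illustrative sketch: wrap Falco rule files into a customRules values
# snippet suitable for `helm install -f`.
rules_to_values() {
  echo "customRules:"
  for f in "$@"; do
    # Use the file name as the key; indent the body four spaces so that
    # YAML treats it as a single block scalar.
    echo "  $(basename "$f"): |-"
    sed 's/^/    /' "$f"
  done
}
```

Running `rules_to_values rules-traefik.yaml > custom-rules.yaml` would produce a values file in the same shape as the one shown earlier.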

## Enabling K8s audit event support

This has been tested with Kops and Minikube. You will need the following components:

- A Kubernetes cluster of version v1.13 or greater
- The apiserver configured with the Dynamic Auditing feature, via the following flags:
  - `--audit-dynamic-configuration`
  - `--feature-gates=DynamicAuditing=true`
  - `--runtime-config=auditregistration.k8s.io/v1alpha1=true`

You can do this with the scripts provided by the Falco engineers by just running:

```shell
cd examples/k8s_audit_config
bash enable-k8s-audit.sh minikube dynamic
```

Or, in the case of Kops:

```shell
cd examples/k8s_audit_config
APISERVER_HOST=api.my-kops-cluster.com bash ./enable-k8s-audit.sh kops dynamic
```

Then you can install the Falco chart with the audit log flags enabled:

```shell
helm install falco --set auditLog.enabled=true --set auditLog.dynamicBackend.enabled=true falcosecurity/falco
```

And that's it: you will start to see the K8s audit log related alerts.
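With `auditLog.dynamicBackend.enabled=true`, the chart deploys an AuditSink resource that points the apiserver at Falco's embedded webserver. For reference, such a resource looks roughly like the following sketch (the metadata, namespace, and policy shown are illustrative; the chart renders its own manifest):

```yaml
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: falco-audit-sink        # illustrative name
spec:
  policy:
    level: RequestResponse
    stages: ["ResponseComplete", "ResponseStarted"]
  webhook:
    clientConfig:
      service:
        namespace: default      # namespace of the Falco release
        name: falco
        port: 8765              # falco.webserver.listenPort
        path: /k8s-audit        # falco.webserver.k8sAuditEndpoint
```

Setting `auditLog.dynamicBackend.url` instead makes the `clientConfig` point at a fixed URL rather than the webserver service.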

### Known validation failed error

You may run into an error like the following one:

```shell
helm install falco --set auditLog.enabled=true falcosecurity/falco
Error: validation failed: unable to recognize "": no matches for kind "AuditSink" in version "auditregistration.k8s.io/v1alpha1"
```

This means that the apiserver cannot recognize the auditregistration.k8s.io resource, i.e. the dynamic auditing feature hasn't been enabled properly. You need to enable it, or ensure that you're using a Kubernetes version v1.13 or greater.

## Enabling gRPC

The Falco gRPC server and the Falco gRPC Outputs APIs are not enabled by default. Moreover, Falco supports running a gRPC server with two main binding types:

- Over a local unix socket, with no authentication
- Over the network, with mandatory mutual TLS authentication (mTLS)

> Tip: Once gRPC is enabled, you can deploy falco-exporter to export metrics to Prometheus.

### gRPC over unix socket (default)

The preferred way to use gRPC is over a unix socket.

To install Falco with gRPC enabled over a unix socket:

```shell
helm install falco \
  --set falco.grpc.enabled=true \
  --set falco.grpcOutput.enabled=true \
  falcosecurity/falco
```

### gRPC over network

The gRPC server over the network can only be used with mutual TLS authentication between the clients and the server. Generating the certificates is covered in the Falco documentation.

To install Falco with gRPC enabled over the network:

```shell
helm install falco \
  --set falco.grpc.enabled=true \
  --set falco.grpcOutput.enabled=true \
  --set falco.grpc.unixSocketPath="" \
  --set-file certs.server.key=/path/to/server.key \
  --set-file certs.server.crt=/path/to/server.crt \
  --set-file certs.ca.crt=/path/to/ca.crt \
  falcosecurity/falco
```