Edge Patterns
Many patterns include both a data center and one or more edge clusters. The following diagram outlines the general deployment flow for applications on an edge cluster. Edge OpenShift clusters are typically smaller than data center clusters and might be deployed on a three-node cluster that allows workloads on master nodes, or even on a single-node cluster (SNO). These edge clusters can be deployed on bare metal, on local virtual machines, or in a public or private cloud.
GitOps for edge
After provisioning the edge cluster, import or join it with the hub or data center cluster. For instructions on importing the cluster, see Importing a cluster.
After importing the cluster, ACM (Advanced Cluster Management) on the data center deploys an ACM agent and agent-addon pod into the edge cluster. ACM then installs OpenShift GitOps, which deploys the required applications based on the specified criteria.
`,url:"https://validatedpatterns.io/learn/about-validated-patterns/",breadcrumb:"/learn/about-validated-patterns/"},"https://validatedpatterns.io/contribute/contribute-to-docs/":{title:"Contributor's guide",tags:[],content:` Contribute to Validated Patterns documentation Different ways to contribute There are a few different ways you can contribute to Validated Patterns documentation:
`,url:"https://validatedpatterns.io/learn/about-validated-patterns/",breadcrumb:"/learn/about-validated-patterns/"},"https://validatedpatterns.io/patterns/coco-pattern/coco-pattern-azure-requirements/":{title:"Azure requirements",tags:[],content:`Azure requirements This demo currently has been tested only on azure. The configuration tested used the openshift-install. OpenShift documentation contains details on how to do this.
The documentation outlines the minimum required configuration for an Azure account.
Changes required
Do not accept the default sizes for the OpenShift install. It is recommended to increase the workers to at least Standard_D8s_v5. This can be done by running openshift-install create install-config first and adjusting the workers under platform, for example:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    azure:
      type: Standard_D8s_v5
  replicas: 3
On a cloud provider, the virtual machines for the Kata containers use "peer pods", which run directly on the cloud provider's hypervisor (see the diagram below). This means that access is required to the "confidential computing" virtual machine class. On Azure, the Standard_DCas_v5 class of virtual machines is used. These virtual machines are NOT available in all regions. Users will also need to raise the quota limits for Standard_DC2as_v5 virtual machines.
DNS for the OpenShift cluster also MUST be provided by Azure DNS.
Azure configuration required for the validated pattern
The validated pattern requires access to Azure APIs to provision peer-pod VMs and to obtain certificates from Let's Encrypt.
Azure configuration information must be provided in two places:
A secret must be loaded using a values-secret file (see Secrets management in the Validated Patterns framework). The values-secret.yaml.template file provides the appropriate structure; a general sketch of the format follows this list.
A broader set of information about the cluster is required in values-global.yaml (see below).
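As a rough illustration only, the framework's v2.0 values-secret format looks like the sketch below; the secret and field names here are placeholders, so take the real ones from values-secret.yaml.template:
version: "2.0"
secrets:
  - name: azure              # placeholder secret name
    fields:
      - name: client-secret  # placeholder field name
        value: REPLACE_ME    # e.g. set from your Azure service principal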
global:
  azure:
    clientID: ''        # Service principal ID
    subscriptionID: ''
    tenantID: ''        # Tenant ID
    DNSResGroup: ''     # Resource group for the Azure DNS hosted zone
    hostedZoneName: ''  # The hosted zone name
    clusterResGroup: '' # Resource group of the cluster
    clusterSubnet: ''   # Subnet of the cluster
    clusterNSG: ''      # Network security group of the worker nodes in the cluster
    clusterRegion: ''
Contributor's guide
Contribute to Validated Patterns documentation
Different ways to contribute
There are a few different ways you can contribute to Validated Patterns documentation:
Email the Validated Patterns team at [email protected].
Create a GitHub or Jira issue.
Submit a pull request (PR). To create a PR, create a local clone of your own fork of the Validated Patterns docs repository, make your changes, and submit a PR. This option is best if you have substantial changes.
[source,yaml]
[source,go]
[source,javascript]
`,url:"https://validatedpatterns.io/contribute/contribute-to-docs/",breadcrumb:"/contribute/contribute-to-docs/"},"https://validatedpatterns.io/patterns/emerging-disease-detection/edd-getting-started/":{title:"Getting started",tags:[],content:` Deploying the Emerging Disease Detection pattern Prerequisites An OpenShift cluster (Go to the OpenShift console). Cluster must have a dynamic StorageClass to provision PersistentVolumes.
`,url:"https://validatedpatterns.io/contribute/contribute-to-docs/",breadcrumb:"/contribute/contribute-to-docs/"},"https://validatedpatterns.io/patterns/coco-pattern/coco-pattern-getting-started/":{title:"Getting started",tags:[],content:` Deploying Install an OpenShift Cluster on Azure
Update the required Azure configuration and secrets
./pattern.sh make install
Wait: the cluster needs to reboot all nodes at least once and reprovision the ingress to use the Let's Encrypt certificates.
If the services do not come up, use the ArgoCD UI to triage potential timeouts.
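If the ArgoCD UI is not handy, a quick CLI check of application status is also possible (a sketch; assumes cluster-admin access and the Argo CD Application CRD installed by the framework):
# List all Argo CD applications and their sync/health status
oc get applications.argoproj.io -A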
Simple confidential container tests
The pattern deploys some simple tests of CoCo. A "Hello OpenShift" application (e.g. a curl returns "Hello OpenShift!") has been deployed in three form factors:
A vanilla Kubernetes pod: oc get pods -n hello-openshift standard
A confidential container: oc get pods -n hello-openshift secure
A confidential container with a relaxed policy: oc get pods -n hello-openshift insecure-policy
In this case the insecure policy is designed to allow a user to exec into the confidential container. Typically this is disabled by an immutable policy established at pod creation time.
Running oc get pods -o yaml for either of the pods running a confidential container should show runtimeClassName: kata-remote for the pod.
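For example, a minimal check using the pod names the pattern deploys:
# Both confidential pods should report the kata-remote runtime class
oc get pod secure -n hello-openshift -o jsonpath='{.spec.runtimeClassName}{"\n"}'
oc get pod insecure-policy -n hello-openshift -o jsonpath='{.spec.runtimeClassName}{"\n"}'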
Logging into Azure once the pods have been provisioned will show that each of the two confidential pods has been provisioned with its own Standard_DC2as_v5 virtual machine.
oc exec testing
In an OpenShift cluster without confidential containers, Role-Based Access Control (RBAC) may be used to prevent users from execing into a container to mutate it. However:
Cluster admins can always circumvent this restriction.
Anyone logged into the node directly can also circumvent it.
Confidential containers can prevent this. Running oc exec -n hello-openshift -it secure -- bash will result in a denial of access, irrespective of the user undertaking the action, including kubeadmin. Running the same command against either the standard pod (oc exec -n hello-openshift -it standard -- bash) or the CoCo pod with the policy disabled (oc exec -n hello-openshift -it insecure-policy -- bash) will allow shell access.
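Put together, the three checks look like this (expected outcomes noted in the comments):
# Denied: the immutable CoCo policy blocks exec, even for kubeadmin
oc exec -n hello-openshift -it secure -- bash
# Allowed: a vanilla pod has no such protection
oc exec -n hello-openshift -it standard -- bash
# Allowed: this CoCo pod's relaxed policy permits exec
oc exec -n hello-openshift -it insecure-policy -- bash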
Confidential Data Hub testing
Part of the CoCo VM is a component called the Confidential Data Hub (CDH), which simplifies access to the Trustee Key Broker Service for end applications. Find out more about how the CDH and Trustee work together here.
The CDH is exposed only to containers within the pod, via a localhost URL. The CoCo container with the insecure policy can be used for testing the behaviour.
Run oc exec -n hello-openshift -it insecure-policy -- bash to get a shell into a confidential container.
Trustee's configuration specifies the list of secrets that the KBS can access via the kbsSecretResources attribute.
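As an illustration of where that attribute lives, a partial KbsConfig sketch follows; kbsSecretResources and the passphrase secret come from this pattern, while the metadata values are assumptions, so defer to the pattern's shipped configuration:
apiVersion: confidentialcontainers.org/v1alpha1
kind: KbsConfig
metadata:
  name: kbsconfig            # assumed name
  namespace: trustee-operator-system
spec:
  # Kubernetes secrets (in this namespace) that the KBS may serve to attested workloads
  kbsSecretResources:
    - passphrase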
Secrets within the CDH can be accessed (by default) at http://127.0.0.1:8006/cdh/resource/default/$K8S_SECRET/$K8S_SECRET_KEY.
In this case, http://127.0.0.1:8006/cdh/resource/default/passphrase/passphrase will by default return a string that was randomly generated when the pattern was deployed.
This should be the same as the result of oc get secrets -n trustee-operator-system passphrase -o yaml | yq '.data.passphrase' | base64 -d
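A minimal end-to-end check, assuming the first command is run from a shell inside the insecure-policy container:
# Inside the confidential container: fetch the secret via the CDH
curl http://127.0.0.1:8006/cdh/resource/default/passphrase/passphrase
# From a workstation with cluster access: read the same secret directly for comparison
oc get secret -n trustee-operator-system passphrase -o jsonpath='{.data.passphrase}' | base64 -d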
Tailing the logs of the kbs container, e.g. oc logs -n trustee-operator-system kbs-deployment-5b574bccd6-twjxh -f, shows the evidence flowing to the KBS from the CDH.
`,url:"https://validatedpatterns.io/patterns/coco-pattern/coco-pattern-getting-started/",breadcrumb:"/patterns/coco-pattern/coco-pattern-getting-started/"},"https://validatedpatterns.io/patterns/emerging-disease-detection/edd-getting-started/":{title:"Getting started",tags:[],content:` Deploying the Emerging Disease Detection pattern Prerequisites An OpenShift cluster (Go to the OpenShift console). Cluster must have a dynamic StorageClass to provision PersistentVolumes.
A GitHub account (and a token for it with repository permissions, to read from and write to your forks)
For installation tooling dependencies, see Patterns quick start.
The use of this pattern depends on having a Red Hat OpenShift cluster. In this version of the validated pattern there is no dedicated hub / edge cluster for the Emerging Disease Detection pattern. This single-node pattern can be extended as a managed cluster to a central hub.
main:
  multiSourceConfig:
    enabled: true
    clusterGroupChartVersion: "0.9.*"
    helmRepoUrl: registry.internal.disconnected.net/hybridcloudpatterns
  patternsOperator:
    source: cs-community-operator-index-v4-16
  gitops:
    operatorSource: cs-redhat-operator-index-v4-16
values-hub.yaml:
acm:
  mce_operator:
    source: cs-redhat-operator-index-v4-16
clusterGroup:
  subscriptions:
    acm:
      name: advanced-cluster-management
      namespace: open-cluster-management
      channel: release-2.11
      source: cs-redhat-operator-index-v4-16
Deploy the pattern
At this point we can clone Multicloud Gitops onto a VM that lives in the disconnected network and deploy the pattern. The only thing we need to do first is to point the installation script at the mirrored Helm chart inside the disconnected registry.
# Points to the mirrored VP install chart
export PATTERN_DISCONNECTED_HOME=registry.internal.disconnected.net/hybridcloudpatterns
./pattern.sh make install
After a while the cluster will converge to its desired final state and the MultiCloud Gitops pattern will be installed successfully.
`,url:"https://validatedpatterns.io/blog/2024-10-12-disconnected/",breadcrumb:"/blog/2024-10-12-disconnected/"},"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/":{title:"The Slimming of Common",tags:[],content:` Preamble Historically Validated Patterns, shipped all the helm charts and all the ansible code needed to deploy a pattern within the git repository of the pattern itself. The common subfolder in any pattern is a git subtree containing all of the common repository at a certain point in time. Some thoughts around the choice of git subtrees can be found here
`,url:"https://validatedpatterns.io/blog/2024-10-12-disconnected/",breadcrumb:"/blog/2024-10-12-disconnected/"},"https://validatedpatterns.io/patterns/coco-pattern/":{title:"Confidential Containers pattern",tags:[],content:`About coco-pattern Confidential computing is a technology for securing data in use. It uses a Trusted Execution Environment provided within the hardware of the processor to prevent access from others who have access to the system. Confidential containers is a project to standardize the consumption of confidential computing by making the security boundary for confidential computing to be a Kubernetes pod. [Kata containers](https://katacontainers.io/) is used to establish the boundary via a shim VM.
A core goal of confidential computing is to use this technology to isolate the workload from both Kubernetes and hypervisor administrators.
This pattern uses Red Hat OpenShift sandboxed containers to deploy and configure confidential containers on Microsoft Azure.
It deploys three copies of 'Hello OpenShift' to demonstrate some of the security boundaries that are enforced with confidential containers.
Requirements
An Azure account with the required access rights
An OpenShift cluster within the Azure environment, updated beyond 4.16.10
Security considerations
This pattern is a demonstration only and contains configuration that is not best practice:
The default configuration deploys everything in a single cluster for testing purposes. The RATS architecture mandates that the Key Broker Service (e.g. Trustee) is in a trusted security zone.
The Attestation Service has wide open security policies.
Future work
Deploying the 'trusted' environment, including the KBS, on a separate cluster from the secured workloads
Deploying to alternative environments that support confidential computing, including bare-metal x86 clusters, IBM Cloud, and IBM Z
Finishing the sample AI application
Architecture
Confidential Containers typically has two environments: a trusted zone and an untrusted zone. Trustee and the sandboxed containers operator are deployed in these zones, respectively.
For demonstration purposes, the pattern is currently converged on one cluster.
References
OpenShift sandboxed containers documentation
OpenShift confidential containers solution blog
`,url:"https://validatedpatterns.io/patterns/coco-pattern/",breadcrumb:"/patterns/coco-pattern/"},"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/":{title:"The Slimming of Common",tags:[],content:` Preamble Historically Validated Patterns, shipped all the helm charts and all the ansible code needed to deploy a pattern within the git repository of the pattern itself. The common subfolder in any pattern is a git subtree containing all of the common repository at a certain point in time. Some thoughts around the choice of git subtrees can be found here
While having common as a git subtree in every pattern repository has served us fairly well, it came with a number of trade-offs and less-than-ideal aspects:
Most people are not really familiar with git subtrees, and updating common was fairly cumbersome
The pieces inside common (Helm charts, Ansible & scripts) could not be updated independently
Folks would just change the local common folder and not submit changes to the upstream repository, ultimately causing merge conflicts when updating common or when they eventually merged downstream
At the time, ArgoCD did not support multi-source, so we did not really have many choices other than shipping the whole of common as a folder. Now the multi-source feature has become quite stable, and it is possible to ship your values overrides for Argo in one repository and your Helm charts in another (be it a git repository, a Helm repository, or even an OCI-compliant registry).
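For context, a minimal sketch of a multi-source Argo CD Application (the repository URLs and chart name are illustrative, not the framework's actual values):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-pattern
  namespace: openshift-gitops
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
  sources:
    # Helm chart pulled from a chart repository
    - repoURL: https://charts.example.com
      chart: clustergroup
      targetRevision: "0.9.*"
      helm:
        valueFiles:
          - $values/values-global.yaml
    # Values overrides kept in the pattern's own git repository
    - repoURL: https://github.com/example/my-pattern.git
      targetRevision: main
      ref: values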