diff --git a/blog/2021-12-31-medical-diagnosis/index.html b/blog/2021-12-31-medical-diagnosis/index.html index 463a23061..84c28916b 100644 --- a/blog/2021-12-31-medical-diagnosis/index.html +++ b/blog/2021-12-31-medical-diagnosis/index.html @@ -3746,7 +3746,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly at a git repository containing a helm chart. This is especially useful when developing a chart. There are two cases to distinguish here. The clustergroup chart. Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path and chartVersion fields:
-clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs
+clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high.
+This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario.
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to one another, proving multi-region resiliency by keeping the application running in the event of a regional failure.
+In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file.
+NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters.
+Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails.
+Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area in a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported.
+A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously.
+Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily introduces latency that grows with the distance between the regions (e.g., the latency between two regions within Europe will always be lower than between Europe and Asia), so take this into account when designing your infrastructure deployment in the pattern’s values files. This is the main reason this Regional DR pattern is configured in active-passive mode.
+The pattern requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling.
+Prerequisites Installing this pattern requires:
+One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions.
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its logical and physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed.
+As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the OpenShift installer binary.
+To install the pattern, follow these steps:
+Fork the Pattern. Follow the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here.
+Detailed configuration instructions can be found here.
+Owners For any request, bug report or comment about this pattern, please forward it to:
+Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin
diff --git a/blog/2022-03-23-acm-mustonlyhave/index.html b/blog/2022-03-23-acm-mustonlyhave/index.html index 09b8f01c2..947939b9b 100644 --- a/blog/2022-03-23-acm-mustonlyhave/index.html +++ b/blog/2022-03-23-acm-mustonlyhave/index.html @@ -3816,7 +3816,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly at a git repository containing a helm chart. This is especially useful when developing a chart. There are two cases to distinguish here. The clustergroup chart.
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path and chartVersion fields:
-clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs
+clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high.
+This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario.
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to one another, proving multi-region resiliency by keeping the application running in the event of a regional failure.
+In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file.
+NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters.
+Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails.
+Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area in a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported.
+A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously.
+Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily introduces latency that grows with the distance between the regions (e.g., the latency between two regions within Europe will always be lower than between Europe and Asia), so take this into account when designing your infrastructure deployment in the pattern’s values files. This is the main reason this Regional DR pattern is configured in active-passive mode.
+The pattern requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling.
+Prerequisites Installing this pattern requires:
+One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions.
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its logical and physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed.
+As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the OpenShift installer binary.
+To install the pattern, follow these steps:
+Fork the Pattern. Follow the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here.
+Detailed configuration instructions can be found here.
+Owners For any request, bug report or comment about this pattern, please forward it to:
+Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin
diff --git a/blog/2022-03-30-multicloud-gitops/index.html b/blog/2022-03-30-multicloud-gitops/index.html index b07f565f1..d99574c77 100644 --- a/blog/2022-03-30-multicloud-gitops/index.html +++ b/blog/2022-03-30-multicloud-gitops/index.html @@ -3740,7 +3740,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly at a git repository containing a helm chart. This is especially useful when developing a chart. There are two cases to distinguish here. The clustergroup chart.
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path and chartVersion fields:
-clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs
+clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high.
+This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario.
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to one another, proving multi-region resiliency by keeping the application running in the event of a regional failure.
+In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file.
+NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters.
+Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails.
+Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area in a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported.
+A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously.
+Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily introduces latency that grows with the distance between the regions (e.g., the latency between two regions within Europe will always be lower than between Europe and Asia), so take this into account when designing your infrastructure deployment in the pattern’s values files. This is the main reason this Regional DR pattern is configured in active-passive mode.
+The pattern requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling.
+Prerequisites Installing this pattern requires:
+One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions.
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its logical and physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed.
+As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the OpenShift installer binary.
+To install the pattern, follow these steps:
+Fork the Pattern. Follow the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here.
+Detailed configuration instructions can be found here.
+Owners For any request, bug report or comment about this pattern, please forward it to:
+Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin
diff --git a/blog/2022-06-30-ansible-edge-gitops/index.html b/blog/2022-06-30-ansible-edge-gitops/index.html index 97cde23bf..eba90fe13 100644 --- a/blog/2022-06-30-ansible-edge-gitops/index.html +++ b/blog/2022-06-30-ansible-edge-gitops/index.html @@ -3739,7 +3739,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly at a git repository containing a helm chart. This is especially useful when developing a chart. There are two cases to distinguish here. The clustergroup chart.
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2022-07-15-push-vs-pull/index.html b/blog/2022-07-15-push-vs-pull/index.html index 93bf9b750..bd10aa4e5 100644 --- a/blog/2022-07-15-push-vs-pull/index.html +++ b/blog/2022-07-15-push-vs-pull/index.html @@ -3740,7 +3740,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area in a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between Availability Zones is faster and can be performed synchronously. However, when data is replicated across regions, the synchronization process necessarily incurs latency proportional to the distance between the two regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2022-08-24-clustergroups/index.html b/blog/2022-08-24-clustergroups/index.html index 540a1cfb4..f054d6b0a 100644 --- a/blog/2022-08-24-clustergroups/index.html +++ b/blog/2022-08-24-clustergroups/index.html @@ -3739,7 +3739,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area in a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between Availability Zones is faster and can be performed synchronously. However, when data is replicated across regions, the synchronization process necessarily incurs latency proportional to the distance between the two regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2022-09-02-route/index.html b/blog/2022-09-02-route/index.html index e11cc12d1..4ace2f42e 100644 --- a/blog/2022-09-02-route/index.html +++ b/blog/2022-09-02-route/index.html @@ -3816,7 +3816,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area in a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between Availability Zones is faster and can be performed synchronously. However, when data is replicated across regions, the synchronization process necessarily incurs latency proportional to the distance between the two regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2022-10-12-acm-provisioning/index.html b/blog/2022-10-12-acm-provisioning/index.html index e2cdcb145..e49b1aa04 100644 --- a/blog/2022-10-12-acm-provisioning/index.html +++ b/blog/2022-10-12-acm-provisioning/index.html @@ -3826,7 +3826,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2022-11-20-argo-rollouts/index.html b/blog/2022-11-20-argo-rollouts/index.html index 0bb61fe48..b6ad29045 100644 --- a/blog/2022-11-20-argo-rollouts/index.html +++ b/blog/2022-11-20-argo-rollouts/index.html @@ -3942,7 +3942,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected between them, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronously. However, to avoid adding a lot of latency to the data synchronization process, data replicated across regions necessarily incurs latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason why this Regional DR is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can optionally be used for policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its logical and physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that acts as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done with the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2022-12-01-multicluster-devsecops/index.html b/blog/2022-12-01-multicluster-devsecops/index.html index 1bcb8b042..91132979a 100644 --- a/blog/2022-12-01-multicluster-devsecops/index.html +++ b/blog/2022-12-01-multicluster-devsecops/index.html @@ -3745,7 +3745,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected between them, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronously. However, to avoid adding a lot of latency to the data synchronization process, data replicated across regions necessarily incurs latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason why this Regional DR is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can optionally be used for policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its logical and physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that acts as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done with the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2023-11-17-argo-configmanagement-plugins/index.html b/blog/2023-11-17-argo-configmanagement-plugins/index.html index 88a64c4b2..bb1a2e9a9 100644 --- a/blog/2023-11-17-argo-configmanagement-plugins/index.html +++ b/blog/2023-11-17-argo-configmanagement-plugins/index.html @@ -3824,7 +3824,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected between them, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronously. However, to avoid adding a lot of latency to the data synchronization process, data replicated across regions necessarily incurs latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason why this Regional DR is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can optionally be used for policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its logical and physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that acts as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done with the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2023-12-01-new-pattern-tiers/index.html b/blog/2023-12-01-new-pattern-tiers/index.html index 1f06347b3..68483b5db 100644 --- a/blog/2023-12-01-new-pattern-tiers/index.html +++ b/blog/2023-12-01-new-pattern-tiers/index.html @@ -3739,7 +3739,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected between them, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronously. However, to avoid adding a lot of latency to the data synchronization process, data replicated across regions necessarily incurs latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason why this Regional DR is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2023-12-05-nutanix-testing/index.html index 915258b17..7d68c15db 100644 --- a/blog/2023-12-05-nutanix-testing/index.html +++ b/blog/2023-12-05-nutanix-testing/index.html @@ -3740,7 +3740,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
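The values-global.yaml tweak for developing the clustergroup chart from git, which the surrounding hunk shows with its whitespace collapsed, reads like this when the indentation is restored (the repository URL and branch are the example's own placeholders):

```yaml
# values-global.yaml — point the framework at a git checkout of the
# clustergroup chart instead of a released chart version.
spec:
  clusterGroupName: hub
  multiSourceConfig:
    enabled: true
    clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart
    clusterGroupChartGitRevision: dev-branch
```

For every other chart, the same hunk shows that only `repoURL`, `path`, and `chartVersion` need to be set on the application entry, with `chartVersion` carrying the git revision.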
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to one another, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single area in a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between Availability Zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily introduces latency that grows with the distance between the two regions (e.g. 
the latency between two regions in Europe will always be lower than the latency between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used to install the pattern, deploy the active and passive clusters, and manage application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2023-12-15-understanding-namespaces/index.html index 5bd5b7388..425cd57ba 100644 --- a/blog/2023-12-15-understanding-namespaces/index.html +++ b/blog/2023-12-15-understanding-namespaces/index.html @@ -3825,7 +3825,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to one another, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single area in a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between Availability Zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily introduces latency that grows with the distance between the two regions (e.g. 
the latency between two regions in Europe will always be lower than the latency between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used to install the pattern, deploy the active and passive clusters, and manage application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2023-12-20-private-repos/index.html index 959cda0b4..5dafa3eaf 100644 --- a/blog/2023-12-20-private-repos/index.html +++ b/blog/2023-12-20-private-repos/index.html @@ -3782,7 +3782,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to one another, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single area in a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between Availability Zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily introduces latency that grows with the distance between the two regions (e.g. 
the latency between two regions in Europe will always be lower than the latency between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used to install the pattern, deploy the active and passive clusters, and manage application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2024-01-16-deploying-mcg-with-cisco-flashstack-portworx/index.html index d75049bc9..5a148f152 100644 --- a/blog/2024-01-16-deploying-mcg-with-cisco-flashstack-portworx/index.html +++ b/blog/2024-01-16-deploying-mcg-with-cisco-flashstack-portworx/index.html @@ -3738,7 +3738,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to one another, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single area in a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between Availability Zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily introduces latency that grows with the distance between the two regions (e.g. 
the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern's values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster orchestrates application deployments and data synchronization. A connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster; this is required for deploying the active and passive OCP clusters. The Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest keeping hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data; it takes care of the plumbing between the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define the installation details of both clusters, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2024-01-26-more-secrets-options/index.html b/blog/2024-01-26-more-secrets-options/index.html index 2613b99a2..2803aa20c 100644 --- a/blog/2024-01-26-more-secrets-options/index.html +++ b/blog/2024-01-26-more-secrets-options/index.html @@ -3881,7 +3881,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
diff --git a/blog/2024-02-07-hcp-htpasswd-config/index.html b/blog/2024-02-07-hcp-htpasswd-config/index.html index 7ee705238..c07893fee 100644
diff --git a/blog/2024-03-05-intel-accelerated-patterns/index.html b/blog/2024-03-05-intel-accelerated-patterns/index.html index 8e21a9aed..883db5cdd 100644
diff --git a/blog/2024-07-12-in-cluster-git/index.html b/blog/2024-07-12-in-cluster-git/index.html index 49f376fb5..77509c6bc 100644
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@rehat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2024-07-19-write-token-kubeconfig/index.html b/blog/2024-07-19-write-token-kubeconfig/index.html index 394795a4f..4ee7af31a 100644 --- a/blog/2024-07-19-write-token-kubeconfig/index.html +++ b/blog/2024-07-19-write-token-kubeconfig/index.html @@ -3751,7 +3751,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple instances of OpenShift Container Platform clusters, connected between them, to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily adds latency that grows with the distance between the two regions (e.g. 
the latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the values files of the pattern). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger which is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which will work as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2024-08-30-push-secrets/index.html b/blog/2024-08-30-push-secrets/index.html index ca32a0d3f..a8ca19467 100644 --- a/blog/2024-08-30-push-secrets/index.html +++ b/blog/2024-08-30-push-secrets/index.html @@ -3784,7 +3784,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple instances of OpenShift Container Platform clusters, connected between them, to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily adds latency that grows with the distance between the two regions (e.g. 
the latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the values files of the pattern). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger which is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which will work as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2024-09-13-using-hypershift/index.html b/blog/2024-09-13-using-hypershift/index.html index b4f600a16..7b9b5c775 100644 --- a/blog/2024-09-13-using-hypershift/index.html +++ b/blog/2024-09-13-using-hypershift/index.html @@ -3802,7 +3802,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple instances of OpenShift Container Platform clusters, connected between them, to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily adds latency that grows with the distance between the two regions (e.g. 
the latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the values files of the pattern). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger which is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which will work as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2024-09-26-slimming-of-common/index.html b/blog/2024-09-26-slimming-of-common/index.html index 3d21d6b78..5dba2aff5 100644 --- a/blog/2024-09-26-slimming-of-common/index.html +++ b/blog/2024-09-26-slimming-of-common/index.html @@ -3800,7 +3800,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping applications running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area of a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. When data is replicated across regions, however, the process necessarily incurs latency proportional to the distance between the regions (e.g. the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB, writing its data on a Persistent Volume Claim backed by ODF We have developed a DR trigger that is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for eventual policies The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2024-10-12-disconnected/index.html b/blog/2024-10-12-disconnected/index.html index 9ff6fc77b..8e73ef4e2 100644 --- a/blog/2024-10-12-disconnected/index.html +++ b/blog/2024-10-12-disconnected/index.html @@ -3838,7 +3838,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly at a git repository containing a Helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping applications running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area of a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. When data is replicated across regions, however, the process necessarily incurs latency proportional to the distance between the regions (e.g. the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB, writing its data on a Persistent Volume Claim backed by ODF We have developed a DR trigger that is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for eventual policies The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/2024-11-07-clustergroup-sequencing/index.html b/blog/2024-11-07-clustergroup-sequencing/index.html index f3f4cc8f2..b190ccfe7 100644 --- a/blog/2024-11-07-clustergroup-sequencing/index.html +++ b/blog/2024-11-07-clustergroup-sequencing/index.html @@ -3943,7 +3943,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly at a git repository containing a Helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping applications running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area of a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. When data is replicated across regions, however, the process necessarily incurs latency proportional to the distance between the regions (e.g. the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB, writing its data on a Persistent Volume Claim backed by ODF We have developed a DR trigger that is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for eventual policies The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/index.html b/blog/index.html index 06810ddfe..68e01ca34 100644 --- a/blog/index.html +++ b/blog/index.html @@ -3742,7 +3742,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly at a git repository containing a Helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." 
chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. +The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping applications running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. 
+NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area of a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. When data is replicated across regions, however, the process necessarily incurs latency proportional to the distance between the regions (e.g. the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. 
+It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. +The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB, writing its data on a Persistent Volume Claim backed by ODF We have developed a DR trigger that is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for eventual policies The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. 
Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. 
Upstream documentation can be found at the HyperShift Upstream Project Docs. PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin
chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. +The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. 
A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@rehat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/page/3/index.html b/blog/page/3/index.html index 5a1b263ba..33484d8f4 100644 --- a/blog/page/3/index.html +++ b/blog/page/3/index.html @@ -3752,7 +3752,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@rehat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/page/4/index.html b/blog/page/4/index.html index 49bae0cb8..b1c834df8 100644 --- a/blog/page/4/index.html +++ b/blog/page/4/index.html @@ -3746,7 +3746,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@rehat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog/page/5/index.html b/blog/page/5/index.html index 228ae7901..b058974c1 100644 --- a/blog/page/5/index.html +++ b/blog/page/5/index.html @@ -3738,7 +3738,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia), so consider this when designing your infrastructure deployment in the values files of the pattern. This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the synchronization intervals. ACS can be used for eventual policies. The clusters are connected by Submariner, and to achieve a faster recovery time we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies This Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The installed Advanced Cluster Management will neither run the applications nor store any of their data, but it will take care of the plumbing between the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Define the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/acm/index.html b/blog_tags/acm/index.html index 4b53fa098..4e9ce2aa2 100644 --- a/blog_tags/acm/index.html +++ b/blog_tags/acm/index.html @@ -3740,7 +3740,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. However, when data is replicated across regions, the synchronization process necessarily incurs latency based on the distance between the two regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia), so consider this when designing your infrastructure deployment in the values files of the pattern. This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the synchronization intervals. ACS can be used for eventual policies. The clusters are connected by Submariner, and to achieve a faster recovery time we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies This Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The installed Advanced Cluster Management will neither run the applications nor store any of their data, but it will take care of the plumbing between the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Define the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/acs/index.html b/blog_tags/acs/index.html index 55779c242..5695be52f 100644 --- a/blog_tags/acs/index.html +++ b/blog_tags/acs/index.html @@ -3744,7 +3744,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. However, when data is replicated across regions, the synchronization process necessarily incurs latency based on the distance between the two regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia), so consider this when designing your infrastructure deployment in the values files of the pattern. This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the synchronization intervals. ACS can be used for eventual policies. The clusters are connected by Submariner, and to achieve a faster recovery time we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies This Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The installed Advanced Cluster Management will neither run the applications nor store any of their data, but it will take care of the plumbing between the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Define the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/announce/index.html b/blog_tags/announce/index.html index e3b55b5ec..feafd0e32 100644 --- a/blog_tags/announce/index.html +++ b/blog_tags/announce/index.html @@ -3742,7 +3742,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. However, when data is replicated across regions, the synchronization process necessarily incurs latency based on the distance between the two regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia), so consider this when designing your infrastructure deployment in the values files of the pattern. This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the synchronization intervals. ACS can be used for eventual policies. The clusters are connected by Submariner, and to achieve a faster recovery time we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies This Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The installed Advanced Cluster Management will neither run the applications nor store any of their data, but it will take care of the plumbing between the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Define the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/ansible-edge-gitops/index.html b/blog_tags/ansible-edge-gitops/index.html index 5a618e8c0..1dfa6b221 100644 --- a/blog_tags/ansible-edge-gitops/index.html +++ b/blog_tags/ansible-edge-gitops/index.html @@ -3740,7 +3740,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/devops/index.html b/blog_tags/devops/index.html index 9dda438ce..9cca09161 100644 --- a/blog_tags/devops/index.html +++ b/blog_tags/devops/index.html @@ -3749,7 +3749,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one related to only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected between them to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronously. However, replicating data across regions necessarily incurs latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the values files of the pattern). This is the main reason why this Regional DR is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger which is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which will work as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/devsecops/index.html b/blog_tags/devsecops/index.html index f4dcbecaf..4503c3bb4 100644 --- a/blog_tags/devsecops/index.html +++ b/blog_tags/devsecops/index.html @@ -3749,7 +3749,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one related to only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected between them to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronously. However, replicating data across regions necessarily incurs latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the values files of the pattern). This is the main reason why this Regional DR is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger which is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which will work as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/git/index.html b/blog_tags/git/index.html index 3eb7c612f..4adb5884f 100644 --- a/blog_tags/git/index.html +++ b/blog_tags/git/index.html @@ -3738,7 +3738,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one related to only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected between them to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronously. However, replicating data across regions necessarily incurs latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the values files of the pattern). This is the main reason why this Regional DR is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger which is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which will work as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/gitops/index.html b/blog_tags/gitops/index.html index 4b4c81052..1e8136817 100644 --- a/blog_tags/gitops/index.html +++ b/blog_tags/gitops/index.html @@ -3748,7 +3748,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one related to only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected between them to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronously. However, replicating data across regions necessarily incurs latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the values files of the pattern). This is the main reason why this Regional DR is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed. Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger which is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, to achieve a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which will work as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/how-to/index.html index f338042b7..a8900ae8e 100644 --- a/blog_tags/how-to/index.html +++ b/blog_tags/how-to/index.html @@ -3744,7 +3744,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@rehat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/index.html b/blog_tags/index.html index d48a45c5e..22d2d97fc 100644 --- a/blog_tags/index.html +++ b/blog_tags/index.html @@ -3737,7 +3737,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@rehat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/medical-diagnosis/index.html b/blog_tags/medical-diagnosis/index.html index f59b0ba12..f8d6f81bc 100644 --- a/blog_tags/medical-diagnosis/index.html +++ b/blog_tags/medical-diagnosis/index.html @@ -3739,7 +3739,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@rehat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/multi-cloud-gitops/index.html b/blog_tags/multi-cloud-gitops/index.html index d16bb5307..700d7cf47 100644 --- a/blog_tags/multi-cloud-gitops/index.html +++ b/blog_tags/multi-cloud-gitops/index.html @@ -3739,7 +3739,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB writing its data to a Persistent Volume Claim backed by ODF We have developed a DR trigger which is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for optional policies The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define the installation details of both clusters, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/nutanix/index.html b/blog_tags/nutanix/index.html index 7225c709c..fae59bcd8 100644 --- a/blog_tags/nutanix/index.html +++ b/blog_tags/nutanix/index.html @@ -3739,7 +3739,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one confined to a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily incurs latency proportional to the distance between the regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB writing its data to a Persistent Volume Claim backed by ODF We have developed a DR trigger which is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for optional policies The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define the installation details of both clusters, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/openshift-platform-plus/index.html b/blog_tags/openshift-platform-plus/index.html index 93d6f7afc..10a2e0bbd 100644 --- a/blog_tags/openshift-platform-plus/index.html +++ b/blog_tags/openshift-platform-plus/index.html @@ -3749,7 +3749,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one confined to a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily incurs latency proportional to the distance between the regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB writing its data to a Persistent Volume Claim backed by ODF We have developed a DR trigger which is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for optional policies The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define the installation details of both clusters, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/openshift/index.html b/blog_tags/openshift/index.html index ba3ae5746..0c7f9d1a4 100644 --- a/blog_tags/openshift/index.html +++ b/blog_tags/openshift/index.html @@ -3739,7 +3739,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one confined to a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily incurs latency proportional to the distance between the regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB writing its data to a Persistent Volume Claim backed by ODF We have developed a DR trigger which is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for optional policies The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define the installation details of both clusters, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/page/2/index.html b/blog_tags/page/2/index.html index 206df7872..ea0a54b66 100644 --- a/blog_tags/page/2/index.html +++ b/blog_tags/page/2/index.html @@ -3737,7 +3737,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one confined to a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily incurs latency proportional to the distance between the regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB writing its data to a Persistent Volume Claim backed by ODF We have developed a DR trigger which is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for optional policies The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define the installation details of both clusters, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/page/3/index.html b/blog_tags/page/3/index.html index d1e97cde4..c0387798b 100644 --- a/blog_tags/page/3/index.html +++ b/blog_tags/page/3/index.html @@ -3737,7 +3737,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replication across regions, however, necessarily introduces latency that grows with the distance between the regions (e.g. 
the latency between two regions in Europe will always be less than between Europe and Asia), so consider this when designing your infrastructure deployment in the pattern’s values files. This is the main reason why this Regional DR is configured in active-passive mode. +It requires an existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that will act as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done with the openshift-installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/page/4/index.html b/blog_tags/page/4/index.html index 00f04ae83..c8b949607 100644 --- a/blog_tags/page/4/index.html +++ b/blog_tags/page/4/index.html @@ -3737,7 +3737,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly at a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replication across regions, however, necessarily introduces latency that grows with the distance between the regions (e.g. 
the latency between two regions in Europe will always be less than between Europe and Asia), so consider this when designing your infrastructure deployment in the pattern’s values files. This is the main reason why this Regional DR is configured in active-passive mode. +It requires an existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that will act as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done with the openshift-installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/page/5/index.html b/blog_tags/page/5/index.html index 77366d7f7..86b1ff0a4 100644 --- a/blog_tags/page/5/index.html +++ b/blog_tags/page/5/index.html @@ -3737,7 +3737,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly at a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which cover only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replication across regions, however, necessarily introduces latency that grows with the distance between the regions (e.g. 
the latency between two regions in Europe will always be less than between Europe and Asia), so consider this when designing your infrastructure deployment in the pattern’s values files. This is the main reason why this Regional DR is configured in active-passive mode. +It requires an existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that will act as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done with the openshift-installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/patterns/index.html b/blog_tags/patterns/index.html index cf57b1343..8b26e30f1 100644 --- a/blog_tags/patterns/index.html +++ b/blog_tags/patterns/index.html @@ -3743,7 +3743,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly at a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
the latency between two regions within Europe will always be lower than between Europe and Asia), so take this into account when designing your infrastructure deployment in the pattern's values files. This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF We have developed a DR trigger which is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for eventual policies The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which works as the orchestrator for the other clusters involved. The installed Advanced Cluster Management will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters' installation details, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Define the configuration for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/quay/index.html b/blog_tags/quay/index.html index 1942f4d80..474d08b67 100644 --- a/blog_tags/quay/index.html +++ b/blog_tags/quay/index.html @@ -3744,7 +3744,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a Helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move into the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single area within a region (an availability zone), while Regional DR addresses failures of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets over a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. However, when data is replicated across regions, the synchronization process necessarily incurs latency proportional to the distance between the regions (e.g. 
the latency between two regions within Europe will always be lower than between Europe and Asia), so take this into account when designing your infrastructure deployment in the pattern's values files. This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF We have developed a DR trigger which is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for eventual policies The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which works as the orchestrator for the other clusters involved. The installed Advanced Cluster Management will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters' installation details, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Define the configuration for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/route/index.html b/blog_tags/route/index.html index cdc391e3e..de1045865 100644 --- a/blog_tags/route/index.html +++ b/blog_tags/route/index.html @@ -3739,7 +3739,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a Helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move into the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single area within a region (an availability zone), while Regional DR addresses failures of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets over a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. However, when data is replicated across regions, the synchronization process necessarily incurs latency proportional to the distance between the regions (e.g. 
the latency between two regions within Europe will always be lower than between Europe and Asia), so take this into account when designing your infrastructure deployment in the pattern's values files. This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF We have developed a DR trigger which is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for eventual policies The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which works as the orchestrator for the other clusters involved. The installed Advanced Cluster Management will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters' installation details, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Define the configuration for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/secrets/index.html b/blog_tags/secrets/index.html index dc6f56720..3c77d3f6a 100644 --- a/blog_tags/secrets/index.html +++ b/blog_tags/secrets/index.html @@ -3738,7 +3738,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a Helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move into the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single area within a region (an availability zone), while Regional DR addresses failures of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets over a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. However, when data is replicated across regions, the synchronization process necessarily incurs latency proportional to the distance between the regions (e.g. 
the latency between two regions within Europe will always be lower than between Europe and Asia), so take this into account when designing your infrastructure deployment in the pattern's values files. This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF We have developed a DR trigger which is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for eventual policies The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster which works as the orchestrator for the other clusters involved. The installed Advanced Cluster Management will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters' installation details, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Define the configuration for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/security/index.html b/blog_tags/security/index.html index 53e9af2fa..36e230308 100644 --- a/blog_tags/security/index.html +++ b/blog_tags/security/index.html @@ -3744,7 +3744,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a Helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at the HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move into the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@rehat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/sequencing/index.html b/blog_tags/sequencing/index.html index 8a64d0cdf..d727ef64c 100644 --- a/blog_tags/sequencing/index.html +++ b/blog_tags/sequencing/index.html @@ -3740,7 +3740,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@rehat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/subscriptions/index.html b/blog_tags/subscriptions/index.html index fc939d4e8..3480a9582 100644 --- a/blog_tags/subscriptions/index.html +++ b/blog_tags/subscriptions/index.html @@ -3740,7 +3740,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@rehat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/validated-pattern/index.html b/blog_tags/validated-pattern/index.html index d553b4a91..ec2cd4ac9 100644 --- a/blog_tags/validated-pattern/index.html +++ b/blog_tags/validated-pattern/index.html @@ -3743,7 +3743,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institution and mission critical organizations are moving in the cloud, the possible impact of having a provider failure, might this be only related to only one region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat Openshift in such scenario. 
+The Regional Disaster Recovery Pattern, is designed to setup an multiple instances of Openshift Container Platform cluster connectedbetween them to prove multi-region resiliency by maintaing the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the value file. +NOTE: please consider using longer times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their name suggests, Metro DR refers to a metropolitan area disasters, which occur when the disaster covers only a single area in a region (availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers less latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +The synchronization between Availability Zones is faster and can be performed synchronous. However, in order don’t include a lot of latency on the data synchronization process, when data is replicated across regions, it necessary includes latencies based on the distance between both regions (e.g. 
The latency between two regions on Europe, will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason because this RegionalDR is configured in an Active-Passive mode. +It requires an already existing Openshift cluster, which will be used for installing the pattern, deploying active and passive clusters manage the application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronizations. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating applications failover between Red Had Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF We have developed a DR trigger which will be used to start the DR process The end user needs to configure which PV’s need synchronization and the latencies ACS Can be used for eventual policies The clusters are connected by submariner and, to have a faster recovery time, we suggest having hybernated clusters ready to be used Red Hat Technologies Red Hat Openshift Container Platform Red Hat Openshift Data Foundation Red Hat Openshift GitOps Red Hat Openshift Advanced Cluster Management Red Hat Openshift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat Openshift Container Platform v4.13 Red Hat Openshift Container Platform v4.14 Red Hat Openshift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This patterns is designed to be installed in an Openshift cluster which will work as the orchestrator for the other clusters involved. The Adanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters installation details as would be done using the Openshift-installer binary. +For installing the pattern, follow the next steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/blog_tags/xray/index.html b/blog_tags/xray/index.html index 0c4b868e4..707a50132 100644 --- a/blog_tags/xray/index.html +++ b/blog_tags/xray/index.html @@ -3741,7 +3741,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one confined to a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we work with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single availability zone within a region, while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets over a cross-regional network might introduce unbearable latency for data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily incurs latency based on the distance between the regions (e.g. 
The latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern's values files). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used to install the pattern, deploy the active and passive clusters, and manage application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters' installation details, as would be done with the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/ci/index.html b/ci/index.html index db307cf41..24b4d7064 100644 --- a/ci/index.html +++ b/ci/index.html @@ -3742,7 +3742,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." 
chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one confined to a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single availability zone within a region, while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets over a cross-regional network might introduce unbearable latency for data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily incurs latency based on the distance between the regions (e.g. the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern's values files). This is the main reason why this Regional DR pattern is configured in active-passive mode. 
+It requires an existing OpenShift cluster, which is used to install the pattern, deploy the active and passive clusters, and manage application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. +The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. 
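The DR trigger described above ultimately asks RamenDR to move the application by updating a DRPlacementControl. A hedged sketch, with all names and the namespace purely illustrative:

```yaml
# Sketch: requesting failover of a demo application to the passive
# cluster via the RamenDR DRPlacementControl API. Names are placeholders.
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: mongo-app-drpc
  namespace: mongo-app
spec:
  drPolicyRef:
    name: regional-dr-policy   # the DRPolicy pairing both clusters
  preferredCluster: ocp-primary
  failoverCluster: ocp-secondary
  pvcSelector:
    matchLabels:
      app: mongo-app           # selects the PVCs that need synchronization
  action: Failover             # switch back to Relocate after recovery
```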
Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters' installation details, as would be done with the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. 
Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/ci/internal/index.html b/ci/internal/index.html index fb0a6f2ac..593154491 100644 --- a/ci/internal/index.html +++ b/ci/internal/index.html @@ -3741,7 +3741,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." 
chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one confined to a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. +The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we work with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single availability zone within a region, while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. 
A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets over a cross-regional network might introduce unbearable latency for data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily incurs latency based on the distance between the regions (e.g. the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern's values files). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used to install the pattern, deploy the active and passive clusters, and manage application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters' installation details, as would be done with the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/contribute/background-on-pattern-development/index.html b/contribute/background-on-pattern-development/index.html index e80eaa3bf..3265c8ed7 100644 --- a/contribute/background-on-pattern-development/index.html +++ b/contribute/background-on-pattern-development/index.html @@ -3737,7 +3737,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one confined to a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple interconnected OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we work with a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which cover only a single availability zone within a region, while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets over a cross-regional network might introduce unbearable latency for data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. Replicating data across regions, however, necessarily incurs latency based on the distance between the regions (e.g. 
The latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern's values files). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used to install the pattern, deploy the active and passive clusters, and manage application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a cloud provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
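As noted, the synchronization parameters live in the pattern's values file. A purely hypothetical fragment to illustrate the idea — these key names are invented for illustration and are not the pattern's actual schema; consult the pattern's values.yaml for the real keys:

```yaml
# Hypothetical values fragment; key names are illustrative only.
regionalDR:
  schedulingInterval: 10m    # longer intervals suit large datasets
  clusters:
    active:
      region: eu-west-1
    passive:
      region: eu-central-1   # a nearby region keeps replication latency low
```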
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define both clusters' installation details, as would be done with the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/contribute/contribute-to-docs/index.html b/contribute/contribute-to-docs/index.html index b884731f9..84059ac41 100644 --- a/contribute/contribute-to-docs/index.html +++ b/contribute/contribute-to-docs/index.html @@ -3795,7 +3795,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area of a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. When data is replicated across regions, however, the process necessarily incurs latency proportional to the distance between the regions (e.g. 
the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/contribute/creating-a-pattern/index.html b/contribute/creating-a-pattern/index.html index f460495bf..d48174fc6 100644 --- a/contribute/creating-a-pattern/index.html +++ b/contribute/creating-a-pattern/index.html @@ -3824,7 +3824,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area of a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. When data is replicated across regions, however, the process necessarily incurs latency proportional to the distance between the regions (e.g. 
the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/contribute/extending-a-pattern/index.html b/contribute/extending-a-pattern/index.html index 1df968b72..249c9411f 100644 --- a/contribute/extending-a-pattern/index.html +++ b/contribute/extending-a-pattern/index.html @@ -3841,7 +3841,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area of a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. When data is replicated across regions, however, the process necessarily incurs latency proportional to the distance between the regions (e.g. 
the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/contribute/index.html b/contribute/index.html index 8b48b3cc1..f4f60c675 100644 --- a/contribute/index.html +++ b/contribute/index.html @@ -3736,7 +3736,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository containing a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add the repoURL, path, and chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found at HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to each other, proving multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: consider using longer synchronization intervals if you have a large dataset or very long distances between the clusters. +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR addresses metropolitan-area disasters, which affect only a single area of a region (an availability zone), while Regional DR addresses the failure of an entire region. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. When data is replicated across regions, however, the process necessarily incurs latency proportional to the distance between the regions (e.g. 
the latency between two regions in Europe will always be lower than between Europe and Asia, so consider this when designing your infrastructure deployment in the pattern’s values files). This is the main reason this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used for installing the pattern, deploying the active and passive clusters, and managing application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process. The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF. We have developed a DR trigger that is used to start the DR process. The end user needs to configure which PVs need synchronization and the latencies. ACS can be used for eventual policies. The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used. Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat Advanced Cluster Management Red Hat Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern from its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed in an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed there will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is going to be deployed. +As part of the pattern configuration, the administrator needs to define both clusters’ installation details, as would be done using the openshift-install binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/contribute/support-policies/index.html b/contribute/support-policies/index.html index b2b5cf419..1986400c0 100644 --- a/contribute/support-policies/index.html +++ b/contribute/support-policies/index.html @@ -3737,7 +3737,25 @@ clusterGroup: applications: test: project: hub chart: nginx chartVersion: 13.2.12 repoURL: https://charts.bitnami.com/bitnami How to develop a chart directly from git It is possible to point the framework directly to a git repository pointing to a helm chart. This is especially useful for developing a chart. There are two cases to distinguish here. The clustergroup chart. 
Tweak values-global.yaml as follows: spec: clusterGroupName: hub multiSourceConfig: enabled: true clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart clusterGroupChartGitRevision: dev-branch For all the other charts we just need to add repoURL, path and the chartVersion fields: -clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found HyperShift Upstream Project Docs +clusterGroup: applications: acm: name: acm namespace: open-cluster-management project: hub path: "." chartVersion: dev-branch repoURL: https://github.com/myorg/acm-chart `,url:"https://validatedpatterns.io/blog/2024-09-26-slimming-of-common/",breadcrumb:"/blog/2024-09-26-slimming-of-common/"},"https://validatedpatterns.io/patterns/regional-dr/":{title:"Regional Disaster Recovery",tags:[],content:`OpenShift Regional DR Context As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. 
+The Regional Disaster Recovery Pattern is designed to set up multiple connected instances of OpenShift Container Platform clusters to prove multi-region resiliency by keeping the application running in the event of a regional failure. +In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. +NOTE: please consider using longer synchronization times if you have a large dataset or very long distances between the clusters +Background The Regional DR Validated Pattern for Red Hat OpenShift increases the resiliency of your applications by connecting multiple clusters across different regions. This pattern uses Red Hat Advanced Cluster Management to offer a Red Hat OpenShift Data Foundation-based multi-region disaster recovery plan if an entire region fails. +Red Hat OpenShift Data Foundation offers two solutions for disaster recovery: Metro DR and Regional DR. As their names suggest, Metro DR refers to metropolitan-area disasters, which occur when the disaster covers only a single area in a region (an availability zone), and Regional DR refers to when the entire region fails. Currently, only active-passive mode is supported. +A word on synchronization. A metropolitan network generally offers lower latency; data can be written to multiple targets simultaneously, a feature required for active-active DR designs. On the other hand, writing to multiple targets in a cross-regional network might introduce unbearable latency to data synchronization and our applications. Therefore, Regional DR can only work with active-passive DR designs, where the targets are replicated asynchronously. +Synchronization between availability zones is faster and can be performed synchronously. However, when data is replicated across regions, the process necessarily incurs latency based on the distance between the two regions (e.g. 
The latency between two regions in Europe will always be less than between Europe and Asia, so consider this when designing your infrastructure deployment in the values files of the pattern). This is the main reason why this Regional DR pattern is configured in active-passive mode. +It requires an existing OpenShift cluster, which is used to install the pattern, deploy the active and passive clusters, and manage application scheduling. +Prerequisites Installing this pattern requires: +One online Red Hat OpenShift cluster to become the “Manager” cluster. This cluster will orchestrate application deployments and data synchronization. Connection to a Cloud Provider (AWS/Azure/GCP) configured in the Manager cluster. This is required for deploying the active and passive OCP clusters. Red Hat OpenShift CLI installed Solution elements The Regional DR Pattern leverages Red Hat OpenShift Data Foundation’s Regional DR solution, automating application failover between Red Hat Advanced Cluster Management managed clusters in different regions. 
+The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF We have developed a DR trigger that is used to start the DR process The end user needs to configure which PVs need synchronization and the latencies ACS can be used for additional policies The clusters are connected by Submariner and, for a faster recovery time, we suggest having hibernated clusters ready to be used Red Hat Technologies Red Hat OpenShift Container Platform Red Hat OpenShift Data Foundation Red Hat OpenShift GitOps Red Hat OpenShift Advanced Cluster Management Red Hat OpenShift Advanced Cluster Security Operators and Technologies this Pattern Uses Regional DR Trigger Operator Submariner Tested on Red Hat OpenShift Container Platform v4.13 Red Hat OpenShift Container Platform v4.14 Red Hat OpenShift Container Platform v4.15 Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. Logical architecture Installation This pattern is designed to be installed on an OpenShift cluster that works as the orchestrator for the other clusters involved. The Advanced Cluster Management instance installed will neither run the applications nor store any of their data, but it will take care of the plumbing of the various clusters involved, coordinating their communication and orchestrating when and where an application is deployed. +As part of the pattern configuration, the administrator needs to define the installation details of both clusters, as would be done using the OpenShift installer binary. +To install the pattern, follow these steps: +Fork the Pattern. Describe the instructions for creating the clusters and syncing data between them. Commit and push your changes (to your fork). Set your secret cloud provider credentials. Connect to your target Hub cluster. Install the Pattern. 
Start deploying resilient applications. Pattern Configuration For a full example, check the Pattern’s values.yaml. The install-config specifications are detailed here. +Detailed configuration instructions can be found here. +Owners For any request, bug report, or comment about this pattern, please forward it to: +Alejandro Villegas (avillega@redhat.com) Tomer Figenblat (tfigenbl@redhat.com) `,url:"https://validatedpatterns.io/patterns/regional-dr/",breadcrumb:"/patterns/regional-dr/"},"https://validatedpatterns.io/blog/2024-09-13-using-hypershift/":{title:"Using HyperShift",tags:[],content:`Getting Started Hosted Control Planes (aka: HyperShift) is a project that enables rapid provisioning and deprovisioning of OpenShift clusters. Use this guide to create and delete your hostedclusters and to interrogate the hostingcluster for compute resource information. Upstream documentation can be found in the HyperShift Upstream Project Docs PreReqs and Assumptions Deploying HyperShift clusters requires the following: Resource Default Path Description hcp /usr/local/bin diff --git a/images/logos/regional-dr.png b/images/logos/regional-dr.png new file mode 100644 index 000000000..b997ce7ba Binary files /dev/null and b/images/logos/regional-dr.png differ diff --git a/images/regional-resiliency-pattern/architecture-diagram-vp-regional-dr-v6.png b/images/regional-resiliency-pattern/architecture-diagram-vp-regional-dr-v6.png new file mode 100644 index 000000000..94e25501b Binary files /dev/null and b/images/regional-resiliency-pattern/architecture-diagram-vp-regional-dr-v6.png differ diff --git a/images/regional-resiliency-pattern/logical-architecture-diagram-vp-regional-dr-v6.png b/images/regional-resiliency-pattern/logical-architecture-diagram-vp-regional-dr-v6.png new file mode 100644 index 000000000..c48010f98 Binary files /dev/null and b/images/regional-resiliency-pattern/logical-architecture-diagram-vp-regional-dr-v6.png differ diff --git a/index.html b/index.html index 
0214672a2..2d3232f8b 100644 --- a/index.html +++ b/index.html @@ -2,8 +2,8 @@

Logo

Reference architectures with added value

Validated Patterns are an evolution of how you deploy applications in a hybrid cloud. With a pattern, you can automatically deploy a full application stack through a GitOps-based framework. With this framework, you can create business-centric solutions while maintaining a level of Continuous Integration (CI) over your application.

Learn more

Watch how validated patterns enhance your cloud-native applications

Latest Patterns

Latest Blog Post

by Martin Jackson

November 7, 2024

How to sequence subscriptions in the Validated Patterns framework

patterns +Sandbox

Latest Blog Post

by Martin Jackson

November 7, 2024

How to sequence subscriptions in the Validated Patterns framework

patterns how-to sequencing subscriptions
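For reference, the chart-from-git configuration that appears flattened inside the search-index text above reads more clearly when laid out as YAML. This is a sketch reassembled from that text only; the myorg repository URLs and the dev-branch revision are the placeholders used there, and the values-hub.yaml filename is an assumption about where the clusterGroup block lives:

```yaml
# values-global.yaml — point the clustergroup chart at a git repo and branch
spec:
  clusterGroupName: hub
  multiSourceConfig:
    enabled: true
    clusterGroupGitRepoUrl: https://github.com/myorg/clustergroup-chart  # placeholder repo
    clusterGroupChartGitRevision: dev-branch                             # placeholder branch
---
# values-hub.yaml (assumed filename) — any other chart only needs
# repoURL, path, and chartVersion to be served from git
clusterGroup:
  applications:
    acm:
      name: acm
      namespace: open-cluster-management
      project: hub
      path: "."
      chartVersion: dev-branch
      repoURL: https://github.com/myorg/acm-chart  # placeholder repo
```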