From a910163ed6d275049e5f49e0cd9a6f3aa1335858 Mon Sep 17 00:00:00 2001 From: Alex Buck Date: Thu, 13 Jun 2024 16:13:23 -0400 Subject: [PATCH] status --- .../ai-ml/architecture/batch-scoring-deep-learning-content.md | 2 +- docs/ai-ml/index.md | 2 +- docs/best-practices/data-partitioning-strategies-content.md | 2 +- docs/databases/architecture/n-tier-cassandra-content.md | 2 +- docs/example-scenario/aks-agic/aks-agic-content.md | 2 +- .../azure-virtual-desktop/azure-virtual-desktop-content.md | 2 +- docs/example-scenario/forensics/index-content.md | 2 +- .../devtest-labs-reference-architecture-content.md | 2 +- .../iaas-high-availability-disaster-recovery-content.md | 4 ++-- docs/guide/hadoop/apache-hbase-migration-content.md | 2 +- docs/guide/hadoop/apache-sqoop-migration-content.md | 2 +- docs/guide/iot-edge-vision/index.md | 2 +- docs/guide/multitenant/approaches/compute.md | 2 +- docs/guide/multitenant/considerations/pricing-models.md | 2 +- docs/guide/multitenant/overview.md | 2 +- docs/guide/multitenant/service/aks-content.md | 4 ++-- docs/guide/multitenant/service/key-vault.md | 2 +- docs/hybrid/hybrid-cross-cluster-scaling-content.md | 2 +- .../architecture/hub-spoke-vwan-architecture-content.md | 2 +- .../aks-microservices/aks-microservices-advanced-content.md | 2 +- .../containers/aks/baseline-aks-content.md | 2 +- .../containers/aks/windows-containers-on-aks-content.md | 2 +- .../run-sap-bw4hana-with-linux-virtual-machines-content.md | 2 +- .../articles/move-azure-resources-across-regions-content.md | 2 +- .../articles/sap-s4-hana-on-hli-with-ha-and-dr-content.md | 2 +- .../articles/sap-workload-automation-suse-content.md | 3 ++- .../app-service/architectures/multi-region-content.md | 2 +- docs/web-apps/serverless/architectures/web-app-content.md | 2 +- 28 files changed, 31 insertions(+), 30 deletions(-) diff --git a/docs/ai-ml/architecture/batch-scoring-deep-learning-content.md b/docs/ai-ml/architecture/batch-scoring-deep-learning-content.md index 
7dfecf5eb69..bcfdabb6ec2 100644 --- a/docs/ai-ml/architecture/batch-scoring-deep-learning-content.md +++ b/docs/ai-ml/architecture/batch-scoring-deep-learning-content.md @@ -51,7 +51,7 @@ Processing involves the following steps: ### Potential use cases -A media organization has a video whose style they want to change to look like a specific painting. The organization wants to apply this style to all frames of the video in a timely manner and in an automated fashion. For more background about neural style transfer algorithms, see [Image Style Transfer Using Convolutional Neural Networks][image-style-transfer] (PDF). +A media organization has a video whose style they want to change to look like a specific painting. The organization wants to apply this style to all frames of the video in a timely manner and in an automated fashion. For more background about neural style transfer algorithms, see [Image Style Transfer Using Convolutional Neural Networks (PDF)][image-style-transfer]. ## Considerations diff --git a/docs/ai-ml/index.md b/docs/ai-ml/index.md index a1f67406ab8..0f612e63210 100644 --- a/docs/ai-ml/index.md +++ b/docs/ai-ml/index.md @@ -20,7 +20,7 @@ categories: # Artificial intelligence (AI) architecture design -*Artificial intelligence* (AI) is the capability of a computer to imitate intelligent human behavior. Through AI, machines can analyze images, comprehend speech, interact in natural ways, and make predictions using data. +*Artificial intelligence (AI)* is the capability of a computer to imitate intelligent human behavior. Through AI, machines can analyze images, comprehend speech, interact in natural ways, and make predictions using data. ![Illustration depicting the relationship of artificial intelligence as a parent concept. Within AI is machine learning. 
Within machine learning is deep learning.](_images/ai-overview-img-001.png) diff --git a/docs/best-practices/data-partitioning-strategies-content.md b/docs/best-practices/data-partitioning-strategies-content.md index 940fafffc54..296ed18572f 100644 --- a/docs/best-practices/data-partitioning-strategies-content.md +++ b/docs/best-practices/data-partitioning-strategies-content.md @@ -196,7 +196,7 @@ The ability to search for data is often the primary method of navigation and exp Azure Search stores searchable content as JSON documents in a database. You define indexes that specify the searchable fields in these documents and provide these definitions to Azure Search. When a user submits a search request, Azure Search uses the appropriate indexes to find matching items. -To reduce contention, the storage that's used by Azure Search can be divided into 1, 2, 3, 4, 6, or 12 partitions, and each partition can be replicated up to 6 times. The product of the number of partitions multiplied by the number of replicas is called the *search unit* (SU). A single instance of Azure Search can contain a maximum of 36 SUs (a database with 12 partitions only supports a maximum of 3 replicas). +To reduce contention, the storage that's used by Azure Search can be divided into 1, 2, 3, 4, 6, or 12 partitions, and each partition can be replicated up to 6 times. The product of the number of partitions multiplied by the number of replicas is called the *search unit (SU)*. A single instance of Azure Search can contain a maximum of 36 SUs (a database with 12 partitions only supports a maximum of 3 replicas). You are billed for each SU that is allocated to your service. As the volume of searchable content increases or the rate of search requests grows, you can add SUs to an existing instance of Azure Search to handle the extra load. Azure Search itself distributes the documents evenly across the partitions. No manual partitioning strategies are currently supported. 
diff --git a/docs/databases/architecture/n-tier-cassandra-content.md b/docs/databases/architecture/n-tier-cassandra-content.md index 59f4aea647f..962e2b2ba3e 100644 --- a/docs/databases/architecture/n-tier-cassandra-content.md +++ b/docs/databases/architecture/n-tier-cassandra-content.md @@ -188,7 +188,7 @@ For more information, see the cost section in [Microsoft Azure Well-Architected ### Security -Virtual networks are a traffic isolation boundary in Azure. VMs in one VNet can't communicate directly with VMs in a different VNet. VMs within the same VNet can communicate, unless you create [network security groups][nsg] (NSGs) to restrict traffic. For more information, see [Microsoft cloud services and network security][network-security]. +Virtual networks are a traffic isolation boundary in Azure. VMs in one VNet can't communicate directly with VMs in a different VNet. VMs within the same VNet can communicate, unless you create [network security groups (NSGs)][nsg] to restrict traffic. For more information, see [Microsoft cloud services and network security][network-security]. For incoming Internet traffic, the load balancer rules define which traffic can reach the back end. However, load balancer rules don't support IP safe lists, so if you want to add certain public IP addresses to a safe list, add an NSG to the subnet. 
diff --git a/docs/example-scenario/aks-agic/aks-agic-content.md b/docs/example-scenario/aks-agic/aks-agic-content.md index b8aa11a3dc5..c58afed831f 100644 --- a/docs/example-scenario/aks-agic/aks-agic-content.md +++ b/docs/example-scenario/aks-agic/aks-agic-content.md @@ -134,7 +134,7 @@ Although Kubernetes cannot guarantee perfectly secure isolation between tenants, *Download a [Visio file](https://arch-center.azureedge.net/aks-agic.vsdx) of this architecture.* -The [Application Gateway Ingress Controller (AGIC)](/azure/application-gateway/ingress-controller-overview) is a Kubernetes application, which makes it possible for [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes) customers to use an [Azure Application Gateway](/azure/application-gateway/overview) to expose their containerized applications to the Internet. AGIC monitors the Kubernetes cluster that it is hosted on and continuously updates an Application Gateway, so that the selected services are exposed to the Internet. The Ingress Controller runs in its own pod on the customer's AKS instance. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to Application Gateway-specific configuration and applied to the [Azure Resource Manager (ARM)](/azure/azure-resource-manager/management/overview). This architecture sample shows proven practices to deploy a public or private [Azure Kubernetes Service (AKS) cluster](/azure/aks/intro-kubernetes), with an [Azure Application Gateway](/azure/application-gateway/overview) and an [Application Gateway Ingress Controller](/azure/application-gateway/ingress-controller-overview) add-on. 
+The [Application Gateway Ingress Controller (AGIC)](/azure/application-gateway/ingress-controller-overview) is a Kubernetes application, which makes it possible for [Azure Kubernetes Service (AKS)](/azure/aks/intro-kubernetes) customers to use an [Azure Application Gateway](/azure/application-gateway/overview) to expose their containerized applications to the Internet. AGIC monitors the Kubernetes cluster that it is hosted on and continuously updates an Application Gateway, so that the selected services are exposed to the Internet. The Ingress Controller runs in its own pod on the customer's AKS instance. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to Application Gateway-specific configuration and applied to the [Azure Resource Manager](/azure/azure-resource-manager/management/overview). This architecture sample shows proven practices to deploy a public or private [Azure Kubernetes Service (AKS) cluster](/azure/aks/intro-kubernetes), with an [Azure Application Gateway](/azure/application-gateway/overview) and an [Application Gateway Ingress Controller](/azure/application-gateway/ingress-controller-overview) add-on. A single instance of the [Azure Application Gateway Kubernetes Ingress Controller (AGIC)](/azure/application-gateway/ingress-controller-multiple-namespace-support) can ingest events from and observe multiple namespaces. Should the AKS administrator decide to use Application Gateway as an ingress, all namespaces will use the same instance of Application Gateway. A single installation of Ingress Controller will monitor accessible namespaces and will configure the Application Gateway that it is associated with. 
diff --git a/docs/example-scenario/azure-virtual-desktop/azure-virtual-desktop-content.md b/docs/example-scenario/azure-virtual-desktop/azure-virtual-desktop-content.md index 4b4abfb4966..928da6effd4 100644 --- a/docs/example-scenario/azure-virtual-desktop/azure-virtual-desktop-content.md +++ b/docs/example-scenario/azure-virtual-desktop/azure-virtual-desktop-content.md @@ -152,7 +152,7 @@ Azure Virtual Desktop, much like Azure, has certain service limitations that you - We recommend that you deploy no more than 5,000 VMs per Azure subscription per region. This recommendation applies to both personal and pooled host pools, based on Windows Enterprise single and multi-session. Most customers use Windows Enterprise multi-session, which allows multiple users to log in to each VM. You can increase the resources of individual session-host VMs to accommodate more user sessions. - For automated session-host scaling tools, the limits are around 2,500 VMs per Azure subscription per region, because VM status interaction consumes more resources. - To manage enterprise environments with more than 5,000 VMs per Azure subscription in the same region, you can create multiple Azure subscriptions in a hub-spoke architecture and connect them via virtual network peering (using one subscription per spoke). You could also deploy VMs in a different region in the same subscription to increase the number of VMs. -- Azure Resource Manager (ARM) subscription API throttling limits don't allow more than 600 Azure VM reboots per hour via the Azure portal. You can reboot all your machines at once via the operating system, which doesn't consume any Azure Resource Manager subscription API calls. For more information about counting and troubleshooting throttling limits based on your Azure subscription, see [Troubleshoot API throttling errors](/azure/virtual-machines/troubleshooting/troubleshooting-throttling-errors). 
+- Azure Resource Manager subscription API throttling limits don't allow more than 600 Azure VM reboots per hour via the Azure portal. You can reboot all your machines at once via the operating system, which doesn't consume any Azure Resource Manager subscription API calls. For more information about counting and troubleshooting throttling limits based on your Azure subscription, see [Troubleshoot API throttling errors](/azure/virtual-machines/troubleshooting/troubleshooting-throttling-errors). - You can currently deploy up to 132 VMs in a single ARM template deployment in the Azure Virtual Desktop portal. To create more than 132 VMs, run the ARM template deployment in the Azure Virtual Desktop portal multiple times. - Azure VM session-host name prefixes can't exceed 11 characters, due to auto-assigning of instance names and the NetBIOS limit of 15 characters per computer account. - By default, you can deploy up to 800 instances of most resource types in a resource group. Azure Compute doesn't have this limit. diff --git a/docs/example-scenario/forensics/index-content.md b/docs/example-scenario/forensics/index-content.md index 2dbecefeb6c..fd1f1a3bb72 100644 --- a/docs/example-scenario/forensics/index-content.md +++ b/docs/example-scenario/forensics/index-content.md @@ -83,7 +83,7 @@ Access to the target architecture includes the following roles: #### Azure Storage account -The [Azure Storage account](/azure/storage/common/storage-account-overview) in the SOC subscription hosts the disk snapshots in a container configured with a *legal hold* policy as Azure immutable blob storage. Immutable blob storage stores business-critical data objects in a *write once, read many* (WORM) state, which makes the data nonerasable and uneditable for a user-specified interval. 
+The [Azure Storage account](/azure/storage/common/storage-account-overview) in the SOC subscription hosts the disk snapshots in a container configured with a *legal hold* policy as Azure immutable blob storage. Immutable blob storage stores business-critical data objects in a *write once, read many (WORM)* state, which makes the data nonerasable and uneditable for a user-specified interval. Be sure to enable the [secure transfer](/azure/storage/common/storage-require-secure-transfer) and [storage firewall](/azure/storage/common/storage-network-security?tabs=azure-portal#grant-access-from-a-virtual-network) properties. The firewall grants access only from the SOC virtual network. diff --git a/docs/example-scenario/infrastructure/devtest-labs-reference-architecture-content.md b/docs/example-scenario/infrastructure/devtest-labs-reference-architecture-content.md index e3fc6c9c147..1fe14de92dd 100644 --- a/docs/example-scenario/infrastructure/devtest-labs-reference-architecture-content.md +++ b/docs/example-scenario/infrastructure/devtest-labs-reference-architecture-content.md @@ -101,7 +101,7 @@ The following links address governance and compliance for DTL: #### Identity and Access Management -Enterprise organizations typically follow a least-privileged approach to operational access designed through Microsoft Entra ID, [Azure role-based access control](/azure/role-based-access-control/overview) (RBAC), and custom role definitions. The RBAC roles enable management of DTL resources, such as create virtual machines, create environments, and start, stop, restart, delete, and apply artifacts. +Enterprise organizations typically follow a least-privileged approach to operational access designed through Microsoft Entra ID, [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview), and custom role definitions. 
The RBAC roles enable management of DTL resources, such as create virtual machines, create environments, and start, stop, restart, delete, and apply artifacts. - Access to labs can be configured to segregate duties within your team into different [roles](/azure/devtest-labs/devtest-lab-add-devtest-user). Three of these RBAC roles are Owner, DevTest Labs User, and Contributor. The DTL resource should be owned by those who understand the project and team requirements for budget, machines, and required software. A common model is the project-lead or the app-admin as the lab owner and the team members as lab users. The Contributor role can be assigned to app-infra members who need permissions to manage lab resources. Lab owner is responsible for configuring the policies and adding the required users to the lab. - For enterprises that require users to connect with domain-based identities, a domain controller added to the Platform subscription can be used to domain-join DTL VMs. [DTL artifacts](/azure/devtest-labs/devtest-lab-concepts#artifacts) provide a way to domain-join VMs automatically. By default, DTL virtual machines use a local admin account. diff --git a/docs/example-scenario/infrastructure/iaas-high-availability-disaster-recovery-content.md b/docs/example-scenario/infrastructure/iaas-high-availability-disaster-recovery-content.md index 92358e15411..7b9157bfe7d 100644 --- a/docs/example-scenario/infrastructure/iaas-high-availability-disaster-recovery-content.md +++ b/docs/example-scenario/infrastructure/iaas-high-availability-disaster-recovery-content.md @@ -12,7 +12,7 @@ This article presents a decision tree and examples of high-availability (HA) and The decision flowchart reflects the principle that HA apps should use availability zones if possible. Cross-zone, and therefore cross-datacenter, HA provides > 99.99% SLA because of resilience to datacenter failure. 
-Availability sets and availability zones for different app tiers aren't guaranteed to be within the same datacenters. If app latency is a primary concern, you should colocate services in a single datacenter by using [proximity placement groups](https://azure.microsoft.com/blog/introducing-proximity-placement-groups) (PPGs) with availability zones and availability sets. +Availability sets and availability zones for different app tiers aren't guaranteed to be within the same datacenters. If app latency is a primary concern, you should colocate services in a single datacenter by using [proximity placement groups (PPGs)](https://azure.microsoft.com/blog/introducing-proximity-placement-groups) with availability zones and availability sets. ### Components @@ -74,7 +74,7 @@ Availability zones are suitable for many clustered app scenarios, including [Alw If you want to use a VM-based *cluster arbiter*, for example a *file-share witness*, place it in the third availability zone, to ensure quorum isn't lost if any one zone fails. Alternatively, you might be able to use a cloud-based witness in another region. -All VMs in an availability zone are in a single *fault domain* (FD) and *update domain* (UD), meaning they share a common power source and network switch, and can all be rebooted at the same time. If you create VMs across different availability zones, your VMs are effectively distributed across different FDs and UDs, so they won't all fail or be rebooted at the same time. If you want to have redundant in-zone VMs as well as cross-zone VMs, you should place the in-zone VMs in availability sets in PPGs to ensure they won't all be rebooted at once. Even for single-instance VM workloads that aren't redundant today, you can still optionally use availability sets in the PPGs to allow for future growth and flexibility. 
+All VMs in an availability zone are in a single *fault domain (FD)* and *update domain (UD)*, meaning they share a common power source and network switch, and can all be rebooted at the same time. If you create VMs across different availability zones, your VMs are effectively distributed across different FDs and UDs, so they won't all fail or be rebooted at the same time. If you want to have redundant in-zone VMs as well as cross-zone VMs, you should place the in-zone VMs in availability sets in PPGs to ensure they won't all be rebooted at once. Even for single-instance VM workloads that aren't redundant today, you can still optionally use availability sets in the PPGs to allow for future growth and flexibility. For deploying virtual machine scale sets across availability zones, consider using [Orchestration mode](/azure/virtual-machine-scale-sets/orchestration-modes-api-comparison), currently in public preview, which allows combining FDs and availability zones. diff --git a/docs/guide/hadoop/apache-hbase-migration-content.md b/docs/guide/hadoop/apache-hbase-migration-content.md index 63183831ed2..d110ed9cb69 100644 --- a/docs/guide/hadoop/apache-hbase-migration-content.md +++ b/docs/guide/hadoop/apache-hbase-migration-content.md @@ -61,7 +61,7 @@ The following diagram illustrates these concepts. HBase uses a combination of data structures that reside in memory and in persistent storage to deliver fast writes. When a write occurs, the data is first written to a write-ahead log (WAL), which is a data structure that's stored on persistent storage. The role of the WAL is to track changes so that logs can be replayed in case of a server failure. The WAL is only used for resiliency. After data is committed to the WAL, it's written to MemStore, which is an in-memory data structure. At this stage, a write is complete. -For long-term data persistence, HBase uses a data structure called an *HBase file* (HFile). An HFile is stored on HDFS. 
Depending on MemStore size and the data flush interval, data from MemStore is written to an HFile. For information about the format of an HFile, see [Appendix G: HFile format](https://HBase.apache.org/book.html#_hfile_format_2). +For long-term data persistence, HBase uses a data structure called an *HBase file (HFile)*. An HFile is stored on HDFS. Depending on MemStore size and the data flush interval, data from MemStore is written to an HFile. For information about the format of an HFile, see [Appendix G: HFile format](https://HBase.apache.org/book.html#_hfile_format_2). The following diagram shows the steps of a write operation. diff --git a/docs/guide/hadoop/apache-sqoop-migration-content.md b/docs/guide/hadoop/apache-sqoop-migration-content.md index f85b419745f..fa3907453b6 100644 --- a/docs/guide/hadoop/apache-sqoop-migration-content.md +++ b/docs/guide/hadoop/apache-sqoop-migration-content.md @@ -236,7 +236,7 @@ For more information, see [What is a private endpoint?](/azure/private-link/priv Sqoop improves data transfer performance by using MapReduce for parallel processing. After you migrate Sqoop, Data Factory can adjust performance and scalability for scenarios that perform large-scale data migrations. - A *data integration unit* (DIU) is a Data Factory unit of performance. It's a combination of CPU, memory, and network resource allocation. Data Factory can adjust up to 256 DIUs for copy activities that use the Azure integration runtime. For more information, see [Data Integration Units](/azure/data-factory/copy-activity-performance#data-integration-units). + A *data integration unit (DIU)* is a Data Factory unit of performance. It's a combination of CPU, memory, and network resource allocation. Data Factory can adjust up to 256 DIUs for copy activities that use the Azure integration runtime. For more information, see [Data Integration Units](/azure/data-factory/copy-activity-performance#data-integration-units). 
If you use self-hosted integration runtime, you can improve performance by scaling the machine that hosts the self-hosted integration runtime. The maximum scale-out is four nodes. diff --git a/docs/guide/iot-edge-vision/index.md b/docs/guide/iot-edge-vision/index.md index 94966fbaef9..1eea1e1bf86 100644 --- a/docs/guide/iot-edge-vision/index.md +++ b/docs/guide/iot-edge-vision/index.md @@ -22,7 +22,7 @@ ms.custom: This series of articles describes how to plan and design a computer vision workload that uses [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge). You can run Azure IoT Edge on devices, and integrate with Azure Machine Learning, Azure Storage, Azure App Services, and Power BI for end-to-end vision AI solutions. -Visually inspecting products, resources, and environments is critical for many endeavors. Human visual inspection and analytics are subject to inefficiency and inaccuracy. Enterprises now use deep learning artificial neural networks called *convolutional neural networks* (CNNs) to emulate human vision. Using CNNs for automated image input and analysis is commonly called *computer vision* or *vision AI*. +Visually inspecting products, resources, and environments is critical for many endeavors. Human visual inspection and analytics are subject to inefficiency and inaccuracy. Enterprises now use deep learning artificial neural networks called *convolutional neural networks (CNNs)* to emulate human vision. Using CNNs for automated image input and analysis is commonly called *computer vision* or *vision AI*. Technologies like containerization support portability, which allows migrating vision AI models to the network edge. You can train vision inference models in the cloud, containerize the models, and use them to create custom modules for Azure IoT Edge runtime-enabled devices. Deploying vision AI solutions at the edge yields performance and cost benefits. 
diff --git a/docs/guide/multitenant/approaches/compute.md b/docs/guide/multitenant/approaches/compute.md index 296106e9cf2..3922ecd9d4a 100644 --- a/docs/guide/multitenant/approaches/compute.md +++ b/docs/guide/multitenant/approaches/compute.md @@ -142,7 +142,7 @@ Compute tiers can be subject to cross-tenant data leakage, if they are not prope To avoid the [Busy Front End antipattern](../../../antipatterns/busy-front-end/index.md), avoid your front end tier doing a lot of the work that could be handled by other components or tiers of your architecture. This antipattern is particularly important when you create shared front-ends for a multitenant solution, because a busy front end will degrade the experience for all tenants. -Instead, consider using asynchronous processing by making use of queues or other messaging services. This approach also enables you to apply *quality of service* (QoS) controls for different tenants, based on their requirements. For example, all tenants might share a common front end tier, but tenants who [pay for a higher service level](../considerations/pricing-models.md) might have a higher set of dedicated resources to process the work from their queue messages. +Instead, consider using asynchronous processing by making use of queues or other messaging services. This approach also enables you to apply *quality of service (QoS)* controls for different tenants, based on their requirements. For example, all tenants might share a common front end tier, but tenants who [pay for a higher service level](../considerations/pricing-models.md) might have a higher set of dedicated resources to process the work from their queue messages. 
### Inelastic or insufficient scaling diff --git a/docs/guide/multitenant/considerations/pricing-models.md b/docs/guide/multitenant/considerations/pricing-models.md index 5293f1efdee..090c0883f91 100644 --- a/docs/guide/multitenant/considerations/pricing-models.md +++ b/docs/guide/multitenant/considerations/pricing-models.md @@ -22,7 +22,7 @@ ms.custom: A good pricing model ensures that you remain profitable as the number of tenants grows and as you add new features. An important consideration when developing a commercial multitenant solution is how to design pricing models for your product. On this page, we provide guidance for technical decision-makers about the pricing models you can consider and the tradeoffs involved. -When you determine the pricing model for your product, you need to balance the *return on value* (ROV) for your customers with the *cost of goods sold* (COGS) to deliver the service. Offering more flexible commercial models (for a solution) might increase the ROV for customers, but it might also increase the architectural and commercial complexity of the solution (and therefore also increase your COGS). +When you determine the pricing model for your product, you need to balance the *return on value (ROV)* for your customers with the *cost of goods sold (COGS)* to deliver the service. Offering more flexible commercial models (for a solution) might increase the ROV for customers, but it might also increase the architectural and commercial complexity of the solution (and therefore also increase your COGS). 
Some important considerations that you should take into account, when developing pricing models for a solution, are as follows: diff --git a/docs/guide/multitenant/overview.md b/docs/guide/multitenant/overview.md index 10ef7005676..7ad396a238e 100644 --- a/docs/guide/multitenant/overview.md +++ b/docs/guide/multitenant/overview.md @@ -49,7 +49,7 @@ The guidance provided in this series is applicable to anyone building a multiten Some of the content in this series is designed to be useful for technical decision-makers, like chief technology officers (CTOs) and architects, and anyone designing or implementing a multitenant solution on Microsoft Azure. Other content is more technically focused and is targeted at solution architects and engineers who implement a multitenant solution. > [!NOTE] -> *Managed service providers* (MSPs) manage and operate Azure environments on behalf of their customers, and work with multiple Microsoft Entra tenants in the process. This is another form of multitenancy, but it's focused on managing Azure resources across multiple Microsoft Entra tenants. This series isn't intended to provide guidance on these matters. +> *Managed service providers (MSPs)* manage and operate Azure environments on behalf of their customers, and work with multiple Microsoft Entra tenants in the process. This is another form of multitenancy, but it's focused on managing Azure resources across multiple Microsoft Entra tenants. This series isn't intended to provide guidance on these matters. > > However, the series is likely to be helpful for ISVs who build software for MSPs, or for anyone else who builds and deploys multitenant software. 
diff --git a/docs/guide/multitenant/service/aks-content.md b/docs/guide/multitenant/service/aks-content.md index 42ec3041098..105166ad0c1 100644 --- a/docs/guide/multitenant/service/aks-content.md +++ b/docs/guide/multitenant/service/aks-content.md @@ -47,7 +47,7 @@ When you plan to build a multitenant [Azure Kubernetes Service (AKS)](/azure/aks In addition, you should consider the security implications of sharing different resources among multiple tenants. For example, scheduling pods from different tenants on the same node could reduce the number of machines needed in the cluster. On the other hand, you might need to prevent specific workloads from being collocated. For example, you might not allow untrusted code from outside your organization to run on the same node as containers that process sensitive information. -Although Kubernetes can't guarantee perfectly secure isolation between tenants, it does offer features that may be sufficient for specific use cases. As a best practice, you should separate each tenant and its Kubernetes resources into their namespaces. You can then use [Kubernetes role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac) (RBAC) and [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies) to enforce tenant isolation. For example, the following picture shows the typical SaaS provider model that hosts multiple instances of the same application on the same cluster, one for each tenant. Each application lives in a separate namespace. +Although Kubernetes can't guarantee perfectly secure isolation between tenants, it does offer features that may be sufficient for specific use cases. As a best practice, you should separate each tenant and its Kubernetes resources into their namespaces. 
You can then use [Kubernetes role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac) and [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies) to enforce tenant isolation. For example, the following picture shows the typical SaaS provider model that hosts multiple instances of the same application on the same cluster, one for each tenant. Each application lives in a separate namespace. ![Diagram that shows a SaaS provider model that hosts multiple instances of the same application on the same cluster.](./media/aks/namespaces.png) @@ -129,7 +129,7 @@ In an automated single-tenant deployment model, you deploy a dedicated set of re ![Diagram showing three tenants, each with separate deployments.](./media/aks/automated-single-tenant-deployments.png) -Each tenant workload runs in a dedicated AKS cluster and accesses a distinct set of Azure resources. Typically, multitenant solutions that are built using this model make extensive use of [infrastructure as code](/devops/deliver/what-is-infrastructure-as-code) (IaC). For example, [Bicep](/azure/azure-resource-manager/bicep/overview?tabs=bicep), [Azure Resource Manager (ARM)](/azure/azure-resource-manager/management/overview), [Terraform](/azure/developer/terraform/overview), or the [Azure Resource Manager REST APIs](/rest/api/resources) help initiate and coordinate the on-demand deployment of tenant-dedicated resources. You might use this approach when you need to provision an entirely separate infrastructure for each of your customers. When planning your deployment, consider using the [Deployment Stamps pattern](../../../patterns/deployment-stamp.yml). +Each tenant workload runs in a dedicated AKS cluster and accesses a distinct set of Azure resources. Typically, multitenant solutions that are built using this model make extensive use of [infrastructure as code (IaC)](/devops/deliver/what-is-infrastructure-as-code). 
For example, [Bicep](/azure/azure-resource-manager/bicep/overview?tabs=bicep), [Azure Resource Manager](/azure/azure-resource-manager/management/overview), [Terraform](/azure/developer/terraform/overview), or the [Azure Resource Manager REST APIs](/rest/api/resources) help initiate and coordinate the on-demand deployment of tenant-dedicated resources. You might use this approach when you need to provision an entirely separate infrastructure for each of your customers. When planning your deployment, consider using the [Deployment Stamps pattern](../../../patterns/deployment-stamp.yml). **Benefits:** diff --git a/docs/guide/multitenant/service/key-vault.md b/docs/guide/multitenant/service/key-vault.md index dba325d09f7..339de30f2e2 100644 --- a/docs/guide/multitenant/service/key-vault.md +++ b/docs/guide/multitenant/service/key-vault.md @@ -55,7 +55,7 @@ There's no limit to the number of vaults you can deploy into an Azure subscripti ### Vault per tenant, in the tenant's subscription -In some situations, your tenants might create vaults in their own Azure subscriptions, and they might want to grant your application access to work with secrets, certificates, or keys. This approach is appropriate when you allow *customer-managed keys* (CMKs) for encryption within your solution. +In some situations, your tenants might create vaults in their own Azure subscriptions, and they might want to grant your application access to work with secrets, certificates, or keys. This approach is appropriate when you allow *customer-managed keys (CMKs)* for encryption within your solution. In order to access the data in your tenant's vault, the tenant must provide your application with access to their vault. This process requires that your application authenticates through their Microsoft Entra instance. One approach is to publish a [multitenant Microsoft Entra application](/azure/active-directory/develop/single-and-multi-tenant-apps). Your tenants must perform a one-time consent process. 
They first register the multitenant Microsoft Entra application in their own Microsoft Entra tenant. Then, they grant your multitenant Microsoft Entra application the appropriate level of access to their vault. They also need to provide you with the full resource ID of the vault that they've created. Then, your application code can use a service principal that's associated with the multitenant Microsoft Entra application in your own Microsoft Entra ID, to access each tenant's vault. diff --git a/docs/hybrid/hybrid-cross-cluster-scaling-content.md b/docs/hybrid/hybrid-cross-cluster-scaling-content.md index 0a3575dddb9..da6ee42c728 100644 --- a/docs/hybrid/hybrid-cross-cluster-scaling-content.md +++ b/docs/hybrid/hybrid-cross-cluster-scaling-content.md @@ -90,7 +90,7 @@ Overall, this workflow involves building and deploying applications, load balanc - [Container insights](/azure/azure-monitor/containers/container-insights-overview) is a monitoring and observability solution provided by Azure Monitor that lets you gain insights into the performance and health of containers running in AKS clusters. With Azure Arc enabled for AKS, you can extend the capabilities of Container insights to monitor and manage your AKS clusters that are running outside of Azure, such as for on-premises or multicloud environments. - [Arc-Enabled SQL Managed Instances](https://azure.microsoft.com/products/azure-arc/hybrid-data-services/) is an Azure SQL data service that can be created on the Stack HCI infrastructure and managed by using Azure Arc. - [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) lets you securely store and manage cryptographic keys, secrets, and certificates. While Azure Key Vault is primarily a cloud service, it can also be used with Azure Stack HCI deployments to store and manage sensitive information securely on-premises. -- [SDN Infrastructure](/azure-stack/hci/concepts/plan-software-defined-networking-infrastructure). 
In an AKS hybrid deployment on Azure Stack HCI, load balancing is achieved through the Software Load Balancer SDN (SLB). SLB manages the AKS-HCI infrastructure and applications within the SDN (Software-Defined Networking) Virtual Network, including the necessary SDN network infrastructure resources like Mux load balancer VMs, Gateway VMs, and Network Controllers. +- [SDN Infrastructure](/azure-stack/hci/concepts/plan-software-defined-networking-infrastructure). In an AKS hybrid deployment on Azure Stack HCI, load balancing is achieved through the Software-Defined Networking (SDN) Software Load Balancer (SLB). SLB manages the AKS-HCI infrastructure and applications within the SDN Virtual Network, including the necessary SDN network infrastructure resources like Mux load balancer VMs, Gateway VMs, and Network Controllers. Here's a breakdown of the components involved: diff --git a/docs/networking/architecture/hub-spoke-vwan-architecture-content.md b/docs/networking/architecture/hub-spoke-vwan-architecture-content.md index 8fb48917642..5780594d2e7 100644 --- a/docs/networking/architecture/hub-spoke-vwan-architecture-content.md +++ b/docs/networking/architecture/hub-spoke-vwan-architecture-content.md @@ -2,7 +2,7 @@ This hub-spoke architecture provides an alternate solution to the reference architectures [hub-spoke network topology in Azure](../architecture/hub-spoke.yml) and [implement a secure hybrid network](../../reference-architectures/dmz/secure-vnet-dmz.yml?tabs=portal). The *hub* is a virtual network in Azure that acts as a central point of connectivity to your on-premises network. The *spokes* are virtual networks that peer with the hub and can be used to isolate workloads. Traffic flows between the on-premises data center(s) and the hub through an ExpressRoute or VPN gateway connection. The main differentiator of this approach is the use of -[Azure Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) (VWAN) to replace hubs as a managed service.
+[Azure Virtual WAN (VWAN)](https://azure.microsoft.com/services/virtual-wan/) to replace hubs as a managed service. This architecture includes the benefits of standard hub-spoke network topology and introduces new benefits: diff --git a/docs/reference-architectures/containers/aks-microservices/aks-microservices-advanced-content.md b/docs/reference-architectures/containers/aks-microservices/aks-microservices-advanced-content.md index 17e32ef123a..6ac81930bd1 100644 --- a/docs/reference-architectures/containers/aks-microservices/aks-microservices-advanced-content.md +++ b/docs/reference-architectures/containers/aks-microservices/aks-microservices-advanced-content.md @@ -184,7 +184,7 @@ Kubernetes supports *autoscaling* to increase the number of pods allocated to a #### Cluster autoscaling -The *cluster autoscaler* (CA) scales the number of nodes. Suppose pods can't be scheduled because of resource constraints; the cluster autoscaler provisions more nodes. You define a minimum number of nodes to keep the AKS cluster and your workloads operational and a maximum number of nodes for heavy traffic. The CA checks every few seconds for pending pods or empty nodes and scales the AKS cluster appropriately. +The *cluster autoscaler (CA)* scales the number of nodes. If pods can't be scheduled because of resource constraints, the CA provisions more nodes. You define a minimum number of nodes to keep the AKS cluster and your workloads operational and a maximum number of nodes for heavy traffic. The CA checks every few seconds for pending pods or empty nodes and scales the AKS cluster appropriately.
The following example shows the CA configuration from the ARM template: diff --git a/docs/reference-architectures/containers/aks/baseline-aks-content.md b/docs/reference-architectures/containers/aks/baseline-aks-content.md index bedc40b3d6d..1a52462f7ec 100644 --- a/docs/reference-architectures/containers/aks/baseline-aks-content.md +++ b/docs/reference-architectures/containers/aks/baseline-aks-content.md @@ -709,7 +709,7 @@ Another portion could be to integrate the basic workload with Microsoft Entra ID ### Use Infrastructure as Code (IaC) -Choose an idempotent declarative method over an imperative approach, where possible. Instead of writing a sequence of commands that specify configuration options, use declarative syntax that describes the resources and their properties. One option is an [Azure Resource Manager (ARM)](/azure/azure-resource-manager/templates/overview) templates. Another is Terraform. +Choose an idempotent declarative method over an imperative approach, where possible. Instead of writing a sequence of commands that specify configuration options, use declarative syntax that describes the resources and their properties. One option is using [Azure Resource Manager templates](/azure/azure-resource-manager/templates/overview). Another is Terraform. Make sure that you provision resources according to the governing policies. For example, when selecting VM sizes, stay within the cost constraints and choose availability zone options that match the requirements of your application.
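The cluster-autoscaler behavior documented in the hunks above (scale out on pending pods, scale in on empty nodes, always within the configured minimum and maximum) can be sketched in a few lines. This is an illustrative model only; the function name, the pods-per-node capacity, and the numbers are assumptions, not the real CA implementation:

```python
# Hedged sketch of the cluster autoscaler (CA) decision loop described above.
# Illustrative only: pods_per_node and all figures are assumed values.

def desired_node_count(current_nodes: int, pending_pods: int,
                       empty_nodes: int, min_nodes: int, max_nodes: int,
                       pods_per_node: int = 30) -> int:
    if pending_pods > 0:
        # Add just enough nodes to host the pending pods.
        extra = -(-pending_pods // pods_per_node)  # ceiling division
        target = current_nodes + extra
    elif empty_nodes > 0:
        # Remove nodes that host no pods.
        target = current_nodes - empty_nodes
    else:
        target = current_nodes
    # Never leave the operator-defined [min, max] band.
    return max(min_nodes, min(max_nodes, target))

print(desired_node_count(current_nodes=3, pending_pods=45, empty_nodes=0,
                         min_nodes=3, max_nodes=10))  # scales out to 5
```

The clamp on the last line is the point of the paragraph: the minimum keeps the cluster operational, and the maximum caps cost during heavy traffic.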
diff --git a/docs/reference-architectures/containers/aks/windows-containers-on-aks-content.md b/docs/reference-architectures/containers/aks/windows-containers-on-aks-content.md index 2949343dd0a..a68ca830121 100644 --- a/docs/reference-architectures/containers/aks/windows-containers-on-aks-content.md +++ b/docs/reference-architectures/containers/aks/windows-containers-on-aks-content.md @@ -79,7 +79,7 @@ The larger image sizes associated with Windows server-based images requires the ## Identity management -Windows containers cannot be domain joined, so if you require Active Directory authentication and authorization, you can use [Group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) (gMSA). In order to use gMSA, you must enable the gMSA profile on your AKS cluster running Windows nodes. Refer to the [gMSA AKS documentation](/virtualization/windowscontainers/manage-containers/manage-serviceaccounts) for a full review of the architecture and a guide on enabling the profile. +Windows containers cannot be domain joined, so if you require Active Directory authentication and authorization, you can use [Group Managed Service Accounts (gMSA)](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview). In order to use gMSA, you must enable the gMSA profile on your AKS cluster running Windows nodes. Refer to the [gMSA AKS documentation](/virtualization/windowscontainers/manage-containers/manage-serviceaccounts) for a full review of the architecture and a guide on enabling the profile. 
## Node and pod scaling diff --git a/docs/reference-architectures/sap/run-sap-bw4hana-with-linux-virtual-machines-content.md b/docs/reference-architectures/sap/run-sap-bw4hana-with-linux-virtual-machines-content.md index 3a55e6d585f..f8d994e4403 100644 --- a/docs/reference-architectures/sap/run-sap-bw4hana-with-linux-virtual-machines-content.md +++ b/docs/reference-architectures/sap/run-sap-bw4hana-with-linux-virtual-machines-content.md @@ -206,7 +206,7 @@ For internet-facing communications, a stand-alone solution in DMZ would be the r #### Central Services -To protect the [availability of SAP Central Services](/azure/virtual-machines/workloads/sap/sap-planning-supported-configurations#high-availability-for-sap-central-service) (ASCS) on Azure Linux virtual machines, you must use the appropriate high availability extension (HAE) for your selected Linux distribution. HAE delivers Linux clustering software and OS-specific integration components for implementation. +To protect the [availability of SAP Central Services (ASCS)](/azure/virtual-machines/workloads/sap/sap-planning-supported-configurations#high-availability-for-sap-central-service) on Azure Linux virtual machines, you must use the appropriate high availability extension (HAE) for your selected Linux distribution. HAE delivers Linux clustering software and OS-specific integration components for implementation. To avoid a cluster split-brain problem, you can set up cluster node fencing using an iSCSI STONITH Block Device (SBD), as this example shows. Or you can instead use the [Azure Fence Agent](/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-pacemaker). The improved Azure Fence Agent provides much faster service failover compared to the previous version of the agent for Red Hat and SUSE environments. 
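The SBD/Azure Fence Agent setup in the hunk above exists to prevent split-brain. The sketch below is a hedged illustration of the underlying quorum rule only; in a real Pacemaker/HAE cluster this is handled by corosync and the fence agents, and `has_quorum` is a hypothetical helper:

```python
# Illustrative quorum rule behind fencing: after a network partition, only a
# partition holding a strict majority of votes may run SAP Central Services;
# nodes in the minority are fenced before the service fails over.

def has_quorum(partition_votes: int, total_votes: int) -> bool:
    return 2 * partition_votes > total_votes

# Two-node cluster plus one SBD tiebreaker vote (3 votes total): after a
# split, only the node that wins the tiebreaker keeps quorum, so the two
# halves can never both run the workload (no split-brain).
print(has_quorum(partition_votes=2, total_votes=3))  # node + tiebreaker: True
print(has_quorum(partition_votes=1, total_votes=3))  # isolated node: False
```

The tiebreaker vote is why a plain two-node cluster (2 votes) needs SBD or an equivalent fencing device: with an even vote count, neither half of a split can reach a strict majority.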
diff --git a/docs/solution-ideas/articles/move-azure-resources-across-regions-content.md b/docs/solution-ideas/articles/move-azure-resources-across-regions-content.md index 297a500c98d..795e7217395 100644 --- a/docs/solution-ideas/articles/move-azure-resources-across-regions-content.md +++ b/docs/solution-ideas/articles/move-azure-resources-across-regions-content.md @@ -64,7 +64,7 @@ Since your requirements might differ from the example architecture, use the foll * Account permissions: If you created a free Azure account, you're the administrator of your subscription. If you're not the subscription administrator, work with the administrator to assign the permissions that you need to move the resources. Verify that your Azure subscription allows you to create the necessary resource in the target region. - * Resource identification: Identify and categorize your resources based on the type of resource needed to export an [Azure Resource Manager (ARM)](https://azure.microsoft.com/features/resource-manager) template or to start replication using various technologies. For each of the resource types you want to move, the steps may be different. Refer to [Moving Azure resources across regions](/azure/azure-resource-manager/management/move-region) to identify the corresponding steps for each of the resource types. + * Resource identification: Identify and categorize your resources based on the type of resource needed to export an [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager) template (ARM template) or to start replication using various technologies. For each of the resource types you want to move, the steps may be different. Refer to [Moving Azure resources across regions](/azure/azure-resource-manager/management/move-region) to identify the corresponding steps for each of the resource types. 1. Move the networking components. 
diff --git a/docs/solution-ideas/articles/sap-s4-hana-on-hli-with-ha-and-dr-content.md b/docs/solution-ideas/articles/sap-s4-hana-on-hli-with-ha-and-dr-content.md index 134b0c441c0..ce99bd5c3c2 100644 --- a/docs/solution-ideas/articles/sap-s4-hana-on-hli-with-ha-and-dr-content.md +++ b/docs/solution-ideas/articles/sap-s4-hana-on-hli-with-ha-and-dr-content.md @@ -18,7 +18,7 @@ This system takes advantage of OS clustering for database performance, high avai 1. Azure high-speed ExpressRoute gateway is used to connect to Azure Virtual Machines. 1. Request flows into highly available ABAP SAP Central Services (ASCS) and then through application servers, which run on Azure Virtual Machines. This availability set offers a 99.95 percent uptime SLA. 1. Request is sent from App Server to SAP HANA running on primary large instance blades. -1. Primary and secondary blades are clustered at OS level for 99.99 percent availability, and data replication is handled through HANA System Replication in synchronous mode (HSR) from primary to secondary enabling zero RPO. +1. Primary and secondary blades are clustered at the OS level for 99.99 percent availability, and data replication is handled through HANA System Replication (HSR) in synchronous mode from primary to secondary, enabling zero RPO. 1. In-memory data of SAP HANA is persisted to high-performance NFS storage. 1. Data from NFS storage is periodically backed up in seconds, using built-in storage snapshots on the local storage, with no impact to database performance. 1. Persistent data volume on secondary storage is replicated to dedicated DR system through a dedicated backbone network for HANA storage replication.
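Step 4 of the flow above relies on synchronous HSR for a zero RPO. The following sketch shows why synchronous replication can't lose acknowledged writes; it is illustrative only, and `SyncReplicatedLog` is a hypothetical class, not SAP HANA code:

```python
# Illustrative model of synchronous replication: a write is acknowledged to
# the caller only after the secondary has also persisted it, so a failover
# never loses acknowledged data (RPO = 0).

class SyncReplicatedLog:
    def __init__(self):
        self.primary = []    # records persisted on the primary blade
        self.secondary = []  # records persisted on the secondary blade

    def write(self, record):
        self.primary.append(record)
        self.secondary.append(record)  # replicate *before* acknowledging
        # only at this point is the write acknowledged to the application

    def failover_loss(self):
        # Records acknowledged on the primary but absent on the secondary.
        return len(self.primary) - len(self.secondary)

log = SyncReplicatedLog()
for r in ("order-1", "order-2", "order-3"):
    log.write(r)
print(log.failover_loss())  # 0: no acknowledged write is lost on failover
```

In an asynchronous mode, the acknowledgment would happen before replication, so in-flight records could be lost on failover; that trade-off is exactly what "synchronous mode ... enabling zero RPO" refers to.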
diff --git a/docs/solution-ideas/articles/sap-workload-automation-suse-content.md b/docs/solution-ideas/articles/sap-workload-automation-suse-content.md index 280f6319eab..7a6adeafdb6 100644 --- a/docs/solution-ideas/articles/sap-workload-automation-suse-content.md +++ b/docs/solution-ideas/articles/sap-workload-automation-suse-content.md @@ -95,6 +95,7 @@ Principal author: ### Solution templates SUSE SAP ARM template to create the SAP infrastructure: + - [Infrastructure for SAP NetWeaver and SAP HANA](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/suse.suse-sap-infra?tab=Overview) (Azure Marketplace) - [SUSE and Microsoft Solution Templates for SAP Applications](https://documentation.suse.com/sbp/all/single-html/SBP-SAP-AzureSolutionTemplates) (SUSE) -- [SUSE and Microsoft Solution templates for SAP Applications](https://github.com/SUSE/azure-resource-manager-sap-solution-templates) (GitHub) \ No newline at end of file +- [SUSE and Microsoft Solution templates for SAP Applications](https://github.com/SUSE/azure-resource-manager-sap-solution-templates) (GitHub) diff --git a/docs/web-apps/app-service/architectures/multi-region-content.md b/docs/web-apps/app-service/architectures/multi-region-content.md index 50132769001..dd70ea051a6 100644 --- a/docs/web-apps/app-service/architectures/multi-region-content.md +++ b/docs/web-apps/app-service/architectures/multi-region-content.md @@ -112,7 +112,7 @@ Azure Cosmos DB supports geo-replication across regions in active-active pattern ### Storage -For Azure Storage, use [read-access geo-redundant storage][ra-grs] (RA-GRS). With RA-GRS storage, the data is replicated to a secondary region. You have read-only access to the data in the secondary region through a separate endpoint. [User-initiated failover](/azure/storage/common/storage-initiate-account-failover?tabs=azure-portal) to the secondary region is supported for geo-replicated storage accounts. 
Initiating a storage account failover automatically updates DNS records to make the secondary storage account the new primary storage account. Failovers should only be undertaken when you deem it's necessary. This requirement is defined by your organization's disaster recovery plan, and you should consider the implications as described in the Considerations section below. +For Azure Storage, use [read-access geo-redundant storage (RA-GRS)][ra-grs]. With RA-GRS storage, the data is replicated to a secondary region. You have read-only access to the data in the secondary region through a separate endpoint. [User-initiated failover](/azure/storage/common/storage-initiate-account-failover?tabs=azure-portal) to the secondary region is supported for geo-replicated storage accounts. Initiating a storage account failover automatically updates DNS records to make the secondary storage account the new primary storage account. Failovers should only be undertaken when you deem it necessary. This requirement is defined by your organization's disaster recovery plan, and you should consider the implications as described in the Considerations section below. If there's a regional outage or disaster, the Azure Storage team might decide to perform a geo-failover to the secondary region. For these types of failovers, there's no customer action required. Fail back to the primary region is also managed by the Azure storage team in these cases. diff --git a/docs/web-apps/serverless/architectures/web-app-content.md b/docs/web-apps/serverless/architectures/web-app-content.md index 53837ca7df6..d3993a11d64 100644 --- a/docs/web-apps/serverless/architectures/web-app-content.md +++ b/docs/web-apps/serverless/architectures/web-app-content.md @@ -115,7 +115,7 @@ These considerations implement the pillars of the Azure Well-Architected Framewo **Functions**. For the consumption plan, the HTTP trigger scales based on the traffic.
There's a limit to the number of concurrent function instances, but each instance can process more than one request at a time. For an App Service plan, the HTTP trigger scales according to the number of VM instances, which can be a fixed value or can autoscale based on a set of autoscaling rules. For information, see [Azure Functions scale and hosting][functions-scale]. -**Azure Cosmos DB**. Throughput capacity for Azure Cosmos DB is measured in [Request Units][ru] (RU). A 1-RU throughput corresponds to the throughput need to GET a 1KB document. In order to scale an Azure Cosmos DB container past 10,000 RU, you must specify a [partition key][partition-key] when you create the container and include the partition key in every document that you create. For more information about partition keys, see [Partition and scale in Azure Cosmos DB][cosmosdb-scale]. +**Azure Cosmos DB**. Throughput capacity for Azure Cosmos DB is measured in [Request Units (RUs)][ru]. A 1-RU throughput corresponds to the throughput needed to GET a 1-KB document. In order to scale an Azure Cosmos DB container past 10,000 RU, you must specify a [partition key][partition-key] when you create the container and include the partition key in every document that you create. For more information about partition keys, see [Partition and scale in Azure Cosmos DB][cosmosdb-scale]. **API Management**. API Management can scale out and supports rule-based autoscaling. The scaling process takes at least 20 minutes. If your traffic is bursty, you should provision for the maximum burst traffic that you expect. However, autoscaling is useful for handling hourly or daily variations in traffic. For more information, see [Automatically scale an Azure API Management instance][apim-scale].
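The RU rule of thumb in the hunk above (a 1-KB GET costs about 1 RU) lends itself to a quick capacity estimate. The sketch below is a back-of-the-envelope model only: the `estimated_rus` helper and the 5x write multiplier are assumptions for illustration, not official Cosmos DB figures; use the Cosmos DB capacity calculator for real sizing.

```python
# Rough Cosmos DB throughput estimate from the 1 RU per 1-KB GET rule of
# thumb. The write_multiplier is an assumed value, not an official figure.

def estimated_rus(reads_per_sec: float, writes_per_sec: float,
                  doc_size_kb: float, write_multiplier: float = 5.0) -> float:
    read_cost = reads_per_sec * doc_size_kb                    # ~1 RU per 1-KB read
    write_cost = writes_per_sec * doc_size_kb * write_multiplier
    return read_cost + write_cost

demand = estimated_rus(reads_per_sec=1500, writes_per_sec=100, doc_size_kb=1)
print(demand)           # 2000.0 RUs
print(demand > 10_000)  # False: below the scale where a partition key is required
```

An estimate like this tells you early whether a container will cross the 10,000-RU threshold, in which case a partition key must be chosen up front, since it can't be added to an existing container later.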