- Low-latency Remote Direct Memory Access (RDMA) networking
- Low-latency network connection between processes running on two servers or virtual machines in Azure.
- Copies data from the network adapter directly to memory, without spending CPU cycles on the transfer.
- To developers, the cluster looks like a single machine with shared memory.
- Usage: installed on a head node (must be Windows).
- The head node then manages the compute nodes (which can be Linux or Windows).
- How it works:
- Uses the Intel MPI library (see the MPI sketch after this section).
- Some VM sizes (A8, A9, or sizes whose name ends with 'r') have a network interface for RDMA.
- Use-cases:
- Burst to Azure from on-prem
- molecular modeling and computational fluid dynamics
- In Azure:
- H-series VMs
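A minimal sketch of the MPI programming model that RDMA accelerates. Using mpi4py here is my own choice for brevity (not part of the notes); on a real cluster it would run on top of an MPI implementation such as Intel MPI, e.g. `mpiexec -n 2 python mpi_demo.py`.

```python
# Rank 0 sends a payload to rank 1. On RDMA-capable VM sizes this transfer
# goes adapter-to-memory without burning CPU cycles on extra copies.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({"step": 1, "payload": [1.0, 2.0, 3.0]}, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(source=0, tag=11)
    print(f"rank 1 received: {data}")
```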
- Azure Batch: PaaS for HPC
- Manages VMs/compute nodes, with job scheduling (via an internal queue), autoscaling, and data persistence.
- The Batch API lets you wrap existing (on-prem or cloud) application code so it runs as a service on a compute node pool that Batch manages.
- Good for intrinsically parallel (sometimes called embarrassingly parallel or perfectly parallel) workloads
- Little to no effort is needed to parallelize the application because there are little to no dependencies between the tasks.
- Image processing
- Integrates with Azure Data Factory (pipeline solution in Azure)
- Usage (a minimal SDK sketch follows this list):
  - Create a Batch account.
  - In the Batch account, create pools of compute nodes (VMs).
  - In the Batch account, create a job that runs basic tasks on a pool.
    - A task is a command-line script.
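The same usage steps, sketched against the `azure-batch` Python SDK. The account name, key, URL, pool/job IDs, VM size, and image reference are all placeholder assumptions, not values from the notes:

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://mybatchaccount.westeurope.batch.azure.com")

# 1. Create a pool of compute nodes (VMs).
client.pool.add(batchmodels.PoolAddParameter(
    id="demo-pool",
    vm_size="STANDARD_D2_V3",
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical",
            offer="0001-com-ubuntu-server-focal",
            sku="20_04-lts",
            version="latest"),
        node_agent_sku_id="batch.node.ubuntu 20.04"),
    target_dedicated_nodes=2))

# 2. Create a job bound to that pool.
client.job.add(batchmodels.JobAddParameter(
    id="demo-job",
    pool_info=batchmodels.PoolInformation(pool_id="demo-pool")))

# 3. Add tasks: each task is just a command line run on some node in the
#    pool, which is why embarrassingly parallel work (e.g. per-image
#    processing) fits so well.
for i in range(10):
    client.task.add(
        job_id="demo-job",
        task=batchmodels.TaskAddParameter(
            id=f"task-{i}",
            command_line=f"/bin/bash -c 'echo processing item {i}'"))
```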
- Split application into smaller modules with asynchronous messaging
- Easy to manage in the long term: modules can be swapped/modified/updated without having to update the entire application.
- See also Azure Queues
- Queue: Buffer that supports send & receive operations.
- Supports the following cases:
  - A sender sends a message to the queue.
  - A receiver receives a message from the queue.
    - The message is removed.
  - A receiver examines (peeks at) the next available message.
    - The message is not removed from the queue.
- Many queues also support returning the queue length, or whether any messages are left, so that the receiver can avoid blocking.
- Many support transactions, leasing, and visibility on top of that.
- Leasing = a concurrency lock.
  - The message is unavailable to other receivers while one receiver handles it (see the queue sketch below).
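The operations above, sketched with the `azure-storage-queue` SDK. The connection string, queue name, and `process` handler are placeholders:

```python
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<connection-string>", "orders")
queue.create_queue()

def process(body: str) -> None:
    # Hypothetical business logic.
    print("processing", body)

# Sender: put a message on the queue.
queue.send_message("order-42")

# Peek: examine the next message without removing or locking it.
print([m.content for m in queue.peek_messages(max_messages=1)])

# Receive with a lease: the message stays in the queue but is invisible to
# other receivers for visibility_timeout seconds (the concurrency lock).
for msg in queue.receive_messages(visibility_timeout=30):
    process(msg.content)
    queue.delete_message(msg)  # remove only after successful processing
```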
- Does not block the business logic while requests are processed.
- Too many requests => the consumer service is overwhelmed.
  - Solution: scale the consumers out horizontally.
- Only one instance should get each message, so load balancing is needed to keep a single receiver from becoming a bottleneck (sketch below).
  - Solution in Azure: Azure Storage queues, Azure Service Bus queues.
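A sketch of that competing-consumers setup: several workers share one queue, and the lease guarantees each message is handled by only one of them, so the queue itself does the load balancing. Threads stand in for what would be separate processes, VMs, or containers in production; names are placeholders:

```python
import threading

from azure.storage.queue import QueueClient

def worker(name: str) -> None:
    queue = QueueClient.from_connection_string("<connection-string>", "orders")
    # Each received message is invisible to the other workers while this
    # worker holds the lease, so no message is handled twice.
    for msg in queue.receive_messages(visibility_timeout=30):
        print(f"{name} handling {msg.content}")
        queue.delete_message(msg)

# Scaling out horizontally = starting more workers.
threads = [threading.Thread(target=worker, args=(f"worker-{i}",))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```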
- Polling => over-allocating resources just to check whether new data has arrived.
- Use user-defined HTTP callbacks (webhooks) instead.
- In Azure, you implement them with Azure Functions.
- `function.json` defines when the function is triggered.
- Example for a webhook:

```json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "methods": [ "post" ]
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}
```
- Example for a GitHub trigger:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "webHookType": "github",
      "name": "req"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ],
  "disabled": false
}
```