Commit 97c3ac7

Updated Mobile ML section and added Hybrid as well.

profvjreddi committed Dec 4, 2024
1 parent fa30852 commit 97c3ac7
Showing 2 changed files with 51 additions and 4 deletions.
Binary file added contents/core/ml_systems/images/png/hybrid.png
55 changes: 51 additions & 4 deletions contents/core/ml_systems/ml_systems.qmd
@@ -400,13 +400,60 @@ In environmental monitoring, TinyML enables real-time data analysis from various

In summary, TinyML serves as a trailblazer in the evolution of machine learning, fostering innovation across various fields by bringing intelligence directly to the edge. Its potential to transform our interaction with technology and the world is immense, promising a future where devices are connected, intelligent, and capable of making real-time decisions and responses.

## Hybrid ML

While we've examined Cloud ML, Edge ML, Mobile ML, and TinyML as distinct approaches, the reality of modern ML deployments is more nuanced. Systems architects often combine these paradigms to create solutions that leverage the strengths of each approach while mitigating their individual limitations. Understanding how these systems can work together opens up new possibilities for building more efficient and effective ML applications.

### Train-Serve Split

One of the most common hybrid patterns is the train-serve split, where model training occurs in the cloud but inference happens on edge, mobile, or tiny devices. This pattern takes advantage of the cloud's vast computational resources for the training phase while benefiting from the low latency and privacy advantages of on-device inference. For example, smart home devices often use models trained on large datasets in the cloud but run inference locally to ensure quick response times and protect user privacy. In practice, this might involve training models on powerful systems like the NVIDIA DGX A100, leveraging its 8 A100 GPUs and terabyte-scale memory, before deploying optimized versions to edge devices like the NVIDIA Jetson AGX Orin for efficient inference. Similarly, mobile vision models for computational photography are typically trained on powerful cloud infrastructure but deployed to run efficiently on phone hardware.
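The train-serve split can be made concrete with a small sketch. The snippet below mimics the deployment step in plain Python: full-precision weights produced "in the cloud" are compressed to int8 for on-device inference. All names here are illustrative; a real pipeline would use a framework's exporter (for example, a TFLite or ONNX toolchain) rather than a hand-rolled quantizer.

```python
# Sketch of the train-serve split: weights come from cloud training,
# then are quantized to int8 for deployment to a constrained device.

def quantize_int8(weights):
    """Map float weights to int8 values plus a scale (symmetric quantization)."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on the device at inference time."""
    return [x * scale for x in q]

# "Cloud": full-precision weights produced by training.
cloud_weights = [0.82, -1.27, 0.05, 0.63]

# "Device": store 1 byte per weight instead of 4, at a small accuracy cost.
q, scale = quantize_int8(cloud_weights)
restored = dequantize(q, scale)
```

The compressed model trades a small amount of precision for a 4x reduction in weight storage, which is often what makes on-device inference feasible at all.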

### Hierarchical Processing

Hierarchical processing creates a multi-tier system where data and intelligence flow between different levels of the ML stack. In industrial IoT applications, tiny sensors might perform basic anomaly detection, edge devices aggregate and analyze data from multiple sensors, and cloud systems handle complex analytics and model updates. For instance, we might see ESP32-CAM devices performing basic image classification at the sensor level with their minimal 520KB RAM, feeding data up to Jetson AGX Orin devices for more sophisticated computer vision tasks, and ultimately connecting to cloud infrastructure for complex analytics and model updates.
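A minimal sketch of this three-tier flow, with invented thresholds and readings, might look like the following. Each function stands in for a whole device class; the point is that every tier reduces the data before passing it upward.

```python
# Sketch of hierarchical processing: sensor -> edge -> cloud.
# Thresholds and readings are made up for illustration.

def sensor_tier(reading, threshold=30.0):
    """TinyML tier: flag a single anomalous reading immediately."""
    return {"reading": reading, "anomaly": reading > threshold}

def edge_tier(events):
    """Edge tier: aggregate events from many sensors at one site."""
    anomalies = [e for e in events if e["anomaly"]]
    return {"count": len(events), "anomalies": len(anomalies)}

def cloud_tier(site_summaries):
    """Cloud tier: analytics across all sites."""
    total = sum(s["anomalies"] for s in site_summaries)
    return {"sites": len(site_summaries), "total_anomalies": total}

readings = [21.5, 35.2, 19.8, 40.1]
events = [sensor_tier(r) for r in readings]
site = edge_tier(events)
report = cloud_tier([site])
```

Note how raw readings never reach the cloud: only per-site summaries do, which is what keeps bandwidth and cloud storage costs manageable at scale.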

This hierarchy allows each tier to handle tasks appropriate to its capabilities---TinyML devices handle immediate, simple decisions; edge devices manage local coordination; and cloud systems tackle complex analytics and learning tasks. Smart city installations often use this pattern, with street-level sensors feeding data to neighborhood-level edge processors, which in turn connect to city-wide cloud analytics.

### Federated Learning

Federated learning represents a sophisticated hybrid approach where model training is distributed across many edge or mobile devices while maintaining privacy. Devices learn from local data and share model updates, rather than raw data, with cloud servers that aggregate these updates into an improved global model. This pattern is particularly powerful for applications like keyboard prediction on mobile devices or healthcare analytics, where privacy is paramount but benefits from collective learning are valuable. The cloud coordinates the learning process without directly accessing sensitive data, while devices benefit from the collective intelligence of the network.
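The core server-side step, federated averaging, can be sketched in a few lines: each client sends its locally trained weights and a sample count, and the server forms a weighted average. This omits much of what real deployments add (secure aggregation, client sampling, many communication rounds), but it shows why raw data never needs to leave the device.

```python
# Sketch of federated averaging (FedAvg-style aggregation).

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client weight vectors, by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients: weights trained locally; only weights and counts are shared.
global_weights = federated_average(
    client_weights=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[10, 30],
)
```

The client with more local data pulls the global model further toward its weights, which is the sense in which the network "learns collectively" without pooling raw data.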

### Progressive Deployment

Progressive deployment strategies adapt models for different computational tiers, creating a cascade of increasingly lightweight versions. A model might start as a large, complex version in the cloud, then be progressively compressed and optimized for edge servers, mobile devices, and finally tiny sensors. Voice assistant systems often employ this pattern---full natural language processing runs in the cloud, while simplified wake-word detection runs on-device. This allows the system to balance capability and resource constraints across the ML stack.
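The voice assistant cascade can be sketched as a two-stage gate: a cheap on-device check decides whether anything is sent to the expensive cloud stage at all. The models here are stand-in functions with invented scores and thresholds, not real detectors.

```python
# Sketch of a progressive-deployment cascade: tiny on-device stage
# gates the expensive cloud stage.

def tiny_wake_model(audio_energy):
    """Cheap on-device stage: a stand-in for wake-word detection."""
    return audio_energy > 0.5

def cloud_nlp_model(utterance):
    """Expensive stage, only reached when the tiny stage fires."""
    return f"parsed: {utterance}"

def assistant(audio_energy, utterance):
    if not tiny_wake_model(audio_energy):
        return None  # nothing leaves the device
    return cloud_nlp_model(utterance)
```

Because most audio never triggers the wake stage, the system spends almost all of its time in the cheapest tier, which is what makes always-on listening practical on battery power.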

### Collaborative Learning

Collaborative learning enables peer-to-peer learning between devices at the same tier, often complementing hierarchical structures. Autonomous vehicle fleets, for example, might share learning about road conditions or traffic patterns directly between vehicles while also communicating with cloud infrastructure. This horizontal collaboration allows systems to share time-sensitive information and learn from each other's experiences without always routing through central servers.
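One simple form of peer-to-peer sharing is gossip-style averaging, sketched below for two vehicles merging local road-condition estimates without a central server. Pairwise averaging is just one of several gossip variants; the scores here are invented.

```python
# Sketch of peer-to-peer (gossip) collaboration between two devices
# at the same tier, with no cloud round-trip.

def gossip_step(estimate_a, estimate_b):
    """Both peers move to the midpoint of their current estimates."""
    merged = [(a + b) / 2 for a, b in zip(estimate_a, estimate_b)]
    return merged, merged

vehicle_a = [0.9, 0.1]   # e.g., per-road-segment hazard scores
vehicle_b = [0.5, 0.3]
vehicle_a, vehicle_b = gossip_step(vehicle_a, vehicle_b)
```

Repeated over many random pairs, steps like this drive a whole fleet toward a shared estimate while keeping every exchange local and low-latency.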

These hybrid patterns demonstrate how modern ML systems are evolving beyond simple client-server architectures into rich, multi-tier systems that combine the strengths of different approaches. By understanding these patterns, system architects can design solutions that effectively balance competing demands for computation, latency, privacy, and power efficiency. The future of ML systems likely lies not in choosing between cloud, edge, mobile, or tiny approaches, but in creatively combining them to build more capable and efficient systems.

Let's bring together the different ML variants we've explored individually for a comprehensive view. @fig-venn-diagram illustrates the relationships and overlaps between Cloud ML, Edge ML, and TinyML using a Venn diagram. This visual representation highlights the unique characteristics of each approach while also showing areas of commonality. Each ML paradigm has its own distinct features, but there are also intersections where these approaches share certain attributes or capabilities. This diagram helps us understand how these variants relate to each other in the broader landscape of machine learning implementations.

![ML Venn diagram. Source: [arXiv](https://arxiv.org/html/2403.19076v1)](images/png/venndiagram.png){#fig-venn-diagram}

### Real-World Integration Patterns

In practice, ML systems rarely operate in isolation. Instead, they form interconnected networks where each paradigm (Cloud, Edge, Mobile, and TinyML) plays a specific role while communicating with other parts of the system. These interactions follow distinct patterns that emerge from the inherent strengths and limitations of each approach. Cloud systems excel at training and analytics but require significant infrastructure. Edge systems provide local processing power and reduced latency. Mobile devices offer personal computing capabilities and user interaction. TinyML enables intelligence in the smallest devices and sensors.

@fig-hybrid illustrates the key interactions between these different ML paradigms. Notice how data flows upward from sensors through processing layers to cloud analytics, while model deployments flow downward from cloud training to various inference points. The interactions aren't strictly hierarchical---mobile devices might communicate directly with both cloud services and tiny sensors, while edge systems can assist mobile devices with complex processing tasks.

![Example interaction patterns between ML paradigms, showing data flows, model deployment, and processing relationships across Cloud, Edge, Mobile, and TinyML systems.](./images/png/hybrid.png){#fig-hybrid}

To understand how these interactions manifest in real applications, let's explore several common scenarios using @fig-hybrid:

- **Model Deployment Scenario:** A company develops a computer vision model for defect detection. After training in the cloud, optimized versions are deployed to edge servers in factories, quality control tablets on the production floor, and tiny cameras embedded in the production line. This showcases how a single ML solution can be distributed across different computational tiers for optimal performance.

- **Data Flow and Analysis Scenario:** In a smart agriculture system, soil sensors (TinyML) collect moisture and nutrient data, sending results to edge processors in local stations. These process the data and forward insights to the cloud for farm-wide analytics, while also sharing results with farmers' mobile apps. This demonstrates the hierarchical flow of data from sensors to cloud analytics.

- **Edge-Mobile Assistance Scenario:** When a mobile app needs to perform complex image processing that exceeds the phone's capabilities, it connects to a nearby edge server. The edge system helps process the heavier computational tasks, sending back results to enhance the mobile app's performance. This shows how different ML tiers can cooperate to handle demanding tasks.

- **TinyML-Mobile Integration Scenario:** A fitness tracker uses TinyML to continuously monitor activity patterns and vital signs. It synchronizes this processed data with the user's smartphone, which combines it with other health data before sending consolidated updates to the cloud for long-term health analysis. This illustrates the common pattern of tiny devices using mobile devices as gateways to larger networks.

- **Multi-Layer Processing Scenario:** In a smart retail environment, tiny sensors monitor inventory levels, sending inference results to both edge systems for immediate stock management and mobile devices for staff notifications. The edge systems process this data alongside other store metrics, while the cloud analyzes trends across all store locations. This shows how multiple ML tiers can work together in a complete solution.

These real-world patterns demonstrate how different ML paradigms naturally complement each other in practice. While each approach has its own strengths, their true power emerges when they work together as an integrated system. By understanding these patterns, system architects can better design solutions that effectively leverage the capabilities of each ML tier while managing their respective constraints.


## Comparison

For a more detailed comparison of these ML variants, we can refer to @tbl-big_vs_tiny. This table offers a comprehensive analysis of Cloud ML, Edge ML, Mobile ML, and TinyML across a range of features and aspects. By examining these characteristics side by side, we gain a clearer perspective on the unique advantages and distinguishing factors of each approach. This detailed comparison, combined with the visual overview provided by the Venn diagram, aids in making informed decisions based on the specific needs and constraints of a given application or project.

+--------------------------+----------------------------------------------------------+----------------------------------------------------------+-----------------------------------------------------------+----------------------------------------------------------+
| Aspect | Cloud ML | Edge ML | Mobile ML | TinyML |
