fix path
profvjreddi committed Sep 17, 2024
1 parent a28d5be commit 27ac2d2
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions contents/ondevice_learning/ondevice_learning.qmd
@@ -241,15 +241,15 @@ Let's take the example of a smart sensor application that uses on-device AI to r
 
 If we want to customize the model for the on-device characteristics, training a neural network model from scratch on the device would be impractical due to the limited computational resources and battery life. This is where transfer learning comes in. Instead of training a model from scratch, we can take a pre-trained model, such as a convolutional neural network (CNN) or a transformer network trained on a large dataset of images, and finetune it for our specific object recognition task. This finetuning can be done directly on the device using a smaller dataset of images relevant to the task. By leveraging the pre-trained model, we can reduce the computational resources and time required for training while still achieving high accuracy for the object recognition task. @fig-t-learning further illustrates the benefits of transfer learning over training from scratch.
 
-![Training from scratch vs. transfer learning.](contents/ondevice_learning/images/png/transfer_learning.png){#fig-t-learning}
+![Training from scratch vs. transfer learning.](images/png/transfer_learning.png){#fig-t-learning}
 
 Transfer learning is important in making on-device AI practical by allowing us to leverage pre-trained models and finetune them for specific tasks, thereby reducing the computational resources and time required for training. The combination of on-device AI and transfer learning opens up new possibilities for AI applications that are more privacy-conscious and responsive to user needs.
 
 Transfer learning has revolutionized the way models are developed and deployed, both in the cloud and at the edge. Transfer learning is being used in the real world. One such example is the use of transfer learning to develop AI models that can detect and diagnose diseases from medical images, such as X-rays, MRI scans, and CT scans. For example, researchers at Stanford University developed a transfer learning model that can detect cancer in skin images with an accuracy of 97% [@esteva2017dermatologist]. This model was pre-trained on 1.28 million images to classify a broad range of objects and then specialized for cancer detection by training on a dermatologist-curated dataset of skin images.
 
 Implementation in production scenarios can be broadly categorized into two stages: pre-deployment and post-deployment.
 
-![Training from scratch vs. transfer learning.](contents/ondevice_learning/images/png/transfer_learning.jpeg){#transfer}
+![Training from scratch vs. transfer learning.](images/png/transfer_learning.jpeg){#transfer}
 
 
 ### Pre-Deployment Specialization
@@ -445,7 +445,7 @@ With this proposed structure, there are a few key vectors for further optimizing
 
 @fig-federated-learning outlines the transformative impact of federated learning on on-device learning.
 
-![Federated learning is revolutionizing on-device learning.](contents/ondevice_learning/images/png/federatedvsoil.png){#fig-federated-learning}
+![Federated learning is revolutionizing on-device learning.](images/png/federatedvsoil.png){#fig-federated-learning}
 
 
 ### Communication Efficiency
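
For context on the finetuning workflow the changed file describes (freeze a pre-trained backbone, train only a small task-specific head on a small dataset), here is a minimal, hypothetical NumPy sketch. It is not the chapter's code: the "pre-trained" feature extractor is a stand-in random projection, and the data is toy data, not on-device imagery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained" backbone: in practice these weights would come
# from a model trained elsewhere on a large dataset. They stay frozen.
W_backbone = rng.normal(size=(16, 8))

def features(x):
    # Frozen feature extractor with a ReLU nonlinearity; no updates
    # are ever applied to W_backbone during finetuning.
    return np.maximum(x @ W_backbone, 0.0)

# Small "on-device" dataset for the new task (toy, linearly separable
# in feature space so the example converges quickly).
X = rng.normal(size=(64, 16))
true_w = rng.normal(size=(8,))
y = (features(X) @ true_w > 0).astype(float)

# Finetune only a lightweight logistic-regression head on top of the
# frozen features -- far cheaper than training the backbone on-device.
w_head = np.zeros(8)
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head)))   # sigmoid
    grad = features(X).T @ (p - y) / len(y)             # logistic-loss gradient
    w_head -= lr * grad

acc = np.mean((features(X) @ w_head > 0) == (y == 1))
```

Only the 8 head weights are updated, which is the cost saving the diffed text attributes to transfer learning on resource-constrained devices.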
