diff --git a/README.md b/README.md
deleted file mode 100644
index 1ede175..0000000
--- a/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Enhancing Split Computing and Early Exit Applications through Predefined Sparsity #
-
-Code available soon.
diff --git a/index.html b/index.html
new file mode 100644
index 0000000..5eb528f
--- /dev/null
+++ b/index.html
@@ -0,0 +1,228 @@
+ In the past decade, Deep Neural Networks (DNNs) have achieved state-of-the-art performance in a broad range of problems, spanning from object classification and action recognition to smart buildings and healthcare.
+ The flexibility that makes DNNs such a pervasive technology comes at a price: their computational requirements preclude deployment on most of the resource-constrained edge devices available today for real-time, real-world tasks.
+ This paper introduces a novel approach to address this challenge by combining the concept of predefined sparsity with Split Computing (SC) and Early Exit (EE).
+ In particular, SC splits a DNN so that one part is deployed on an edge device and the rest on a remote server.
+ EE, instead, allows the system to forgo the remote server and rely solely on the edge device's computation whenever the intermediate answer is already good enough.
+ Specifically, how to apply such predefined sparsity to the SC and EE paradigms has never been studied.
+ This paper studies this problem and shows how predefined sparsity significantly reduces the computational, storage, and energy burdens during both the training and inference phases, regardless of the hardware platform.
+ This makes it a valuable approach for enhancing the performance of SC and EE applications.
+ Experimental results showcase reductions exceeding $4\times{}$ in storage and computational complexity without compromising performance.
+TBA.
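To make the combination described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code, which is not yet available): a linear layer with a predefined, untrained sparsity mask, an edge-side head with an early-exit classifier, and a server-side tail used only when the edge prediction is not confident enough. Layer sizes, the density value, and the confidence threshold are illustrative assumptions.

```python
# Minimal sketch (assumptions labeled above): predefined sparsity + split computing + early exit.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PredefinedSparseLinear(nn.Linear):
    """Linear layer whose sparsity pattern is fixed before training and never learned."""

    def __init__(self, in_features, out_features, density=0.25):
        super().__init__(in_features, out_features)
        # Predefined sparsity: choose a binary mask once; only masked-in weights are used.
        mask = (torch.rand(out_features, in_features) < density).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.linear(x, self.weight * self.mask, self.bias)


class EdgeHead(nn.Module):
    """Edge-device part: produces intermediate features and an early-exit prediction."""

    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.backbone = PredefinedSparseLinear(in_dim, hidden)
        self.exit_classifier = PredefinedSparseLinear(hidden, num_classes)

    def forward(self, x):
        feats = F.relu(self.backbone(x))
        return feats, self.exit_classifier(feats)


class ServerTail(nn.Module):
    """Remote-server part: refines the edge features when the early exit is not taken."""

    def __init__(self, hidden=256, num_classes=10):
        super().__init__()
        self.classifier = PredefinedSparseLinear(hidden, num_classes)

    def forward(self, feats):
        return self.classifier(feats)


def split_inference(x, edge, server, threshold=0.9):
    feats, early_logits = edge(x)
    confidence = F.softmax(early_logits, dim=-1).max().item()
    if confidence >= threshold:   # early exit: the edge answer is already good enough
        return early_logits
    return server(feats)          # otherwise offload the features to the remote server


if __name__ == "__main__":
    edge, server = EdgeHead(), ServerTail()
    logits = split_inference(torch.randn(1, 784), edge, server)
    print(logits.shape)  # torch.Size([1, 10])
```

In this sketch the mask is stored as a buffer, so the pruned weights are fixed from the first training step onward; any storage or compute savings in practice depend on how the masked weights are actually stored and executed on the target hardware.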