Merge pull request #315 from zachhorn/q4-update-blog-post
Add Q4 update blog post
Showing 2 changed files with 51 additions and 0 deletions.
---
title: 'Akash Network: Q4 2023 Recap'
date: '2023-12-21'
lastmod:
draft: false
weight: 50
categories:
- Updates
tags:
- Updates
contributors:
- Akash
pinned: false
homepage: false
images:
- q4-2023-header.png
---
The demand for compute to power large language models (LLMs) and other applications has surged, resulting in record [new](https://x.com/akashnet_/status/1727369913994739853?s=20) and [total GPU](https://twitter.com/akashnet_/status/1732053939971879015?s=20) leases on the [Akash Supercloud](https://akash.network/blog/the-supercloud-for-ai-is-live/), with [CPU capacity](https://x.com/gregosuri/status/1729134681617248367?s=20) reaching an all-time high and CPU utilization hitting [56%](https://twitter.com/gregosuri/status/1730630993818644669?s=20) at the end of November.

With demand showing no sign of slowing, we’re focused on adding new capacity and expanding our offerings as our network and community grow and diversify. And with 2024 upon us, we’re offering a recap of the past quarter for Overclock Labs and Akash Network.
## Completed Mainnet Upgrades 7, 8, and 9
Also in November, Akash launched several upgrades to its Supercloud, enabling developers to more easily access high-end chips such as NVIDIA A100s and H100s.

- [Mainnet 7](https://twitter.com/akashnet_/status/1722603953131524222?s=20): Akash completed a network configuration upgrade that unlocked massively increased deployment sizes. This allows Akash to coordinate resource-intensive workloads – most importantly, the training of AI models on distributed compute.
- [Mainnet 8](https://twitter.com/akashnet_/status/1726630103302492283?s=20): With this upgrade, developers can come to the network with a budget in mind and secure access to GPUs faster than before, eliminating the back-and-forth associated with bidding.
- [Mainnet 9](https://twitter.com/akashnet_/status/1729555310073766043?s=20): In less than 5 minutes and with coordination from network validators, Akash upgraded to Mainnet 9, which included a minor change to ensure properly validated bids on multi-service deployments.
## Kicked Off Thumper.ai Foundation Model Training
In October, we published the proposal to begin [re-training](https://twitter.com/akashnet_/status/1710041499671687259?s=20) a Stable Diffusion model using a Creative Commons dataset in partnership with generative AI platform Thumper.ai. The proposal passed a few days later with resounding support from the community for this [first-ever initiative](https://www.semafor.com/article/10/25/2023/the-ai-booms-chip-shortage-has-an-unlikely-hero-the-blockchain) on a permissionless network.

While the experiment is still underway, Akash has provided 24,000 NVIDIA A100 (80GB) hours to Thumper to code and train the model, and we’ll be publishing the model and code to Hugging Face soon. The result will be an image-generation AI model that can be used without the risk of copyright infringement, and it will round out Akash’s capabilities to support the three most popular AI tasks: training, fine-tuning, and inference.
## SDXL on Akash & Akash Chat
In December, the network demonstrated that developers can tap a variety of GPUs – not just the fastest and highest-performing (which are hard for many companies to secure) – for AI imaging and chat models, paving the way for a new wave of open-source and permissionless AI applications.

- **Stable Diffusion XL (SDXL) for imaging:** This image-generation app has generated over [46,000 images](https://twitter.com/akashnet_/status/1734247634506838372) at no cost to developers, and is hosted on a wide range of NVIDIA GPUs, including L40s, A100s, V100s, RTX-8000s, and 3090s.
- **Akash Chat:** A zero-cost, permissionless application for easily chatting with leading open-source models is now available, hosted on a variety of GPUs on the network, including A100s, V100s, and RTX 3090s.
## NVIDIA L40s Live on the Network
Akash is constantly sourcing new GPU capacity from providers with idle supply – including [Foundry](https://foundrydigital.com/) and others – to give people the compute power to train, fine-tune, and run inference for AI applications.

In November, we [announced](https://twitter.com/akashnet_/status/1722673189237490094?s=20) that the NVIDIA L40, one of the highest-performing GPUs in the world, was live on the Akash Supercloud, which has enabled greater availability of next-generation chips to drive compute-intensive workloads.
## Additional Highlights
- **The market has taken notice:** Messari featured Akash in two infrastructure reports ([1](https://twitter.com/akashnet_/status/1708869832043631087?s=20), [2](https://x.com/akashnet_/status/1719432757896421695?s=20)) and Reflexivity in [another](https://twitter.com/MessariCrypto/status/1708866621173801238) in October, and in December Blockworks [detailed](https://twitter.com/blockworksres/status/1732401228687315366?s=20) the network’s multi-phased token economics plan to scale and further lower the barriers to high-performance compute. Additionally, Coinage [nominated](https://twitter.com/akashnet_/status/1725614932400419126?s=20) us as Crypto Project of the Year, and Decrypt [spotlighted](https://twitter.com/decryptmedia/status/1717224504525561868) our work for Achievement in AI.
- **Enabling more efficient product development:** With our [latest integration](https://twitter.com/akashnet_/status/1730664115528442303?s=20) with SubQuery, which supports rapid data indexing, builders can easily manage and query on-chain data for their protocols and applications.
- **Connecting with the AI community:** Earlier this month, Greg Osuri, founder of Akash Network, [spoke at](https://twitter.com/gregosuri/status/1729571486447411315?s=20) the Decentralized AI Summit alongside Logan Cerkovnik, founder, CEO, and CTO of Thumper.ai, to discuss how the decentralized, permissionless network can support AI builders.
## Looking Ahead
The GPU squeeze is real and is being felt by companies large and small. The most popular solution is to boost the output of the most expensive, advanced GPUs – particularly A100s and H100s – and for the big tech giants to manufacture those chips. While that’s great news for the largest AI companies, it does not reduce price or market concentration. Next year, less powerful GPUs available on the Akash Supercloud will help sustain the AI boom and mitigate concerns that larger players will continue to dominate the next era of tech transformation.

We’re grateful for your support and involvement in our community – especially in our [Hackathon](https://x.com/akashnet_/status/1715435561392222384?s=20) earlier this fall – and are excited to work together to build from here. Follow [@akashnet_](https://twitter.com/akashnet_) and [@gregosuri](https://twitter.com/gregosuri) on X to stay up-to-date on the latest from Overclock Labs and Akash Network.