The execution of AI tasks, such as image generation using DALL-E, prompt processing with ChatGPT, or more intricate operations involving on-chain transactions, poses a number of challenges, including:
- Access to proprietary APIs, which may come with associated fees/subscriptions.
- Proficiency in the usage of the related open-source technologies, which may entail facing their inherent complexities.

AI Mechs run on the Gnosis chain and enable you to post AI task requests on-chain and get their results delivered back to you efficiently. An AI Mech will execute these tasks for you. All you need is some xDAI in your wallet to reward the worker service executing your task. AI Mechs are hassle-free, crypto-native, and infinitely composable.
💡 These are just a few ideas on what capabilities can be brought on-chain with AI Mechs:
- fetch real-time web search results
- integrate multi-sig wallets
- simulate chain transactions
- execute a variety of AI models:
  - generative (e.g., Stability AI, Midjourney)
  - action-based AI agents (e.g., AutoGPT, LangChain)
AI Mechs is a project born at ETHGlobal Lisbon.
The project consists of three components:
- Off-chain AI workers, each of which controls a Mech. Each AI worker is implemented as an autonomous service on the Autonolas stack.
- An on-chain protocol, which is used to generate a registry of AI Mechs, represented as NFTs on-chain.
- Mech Hub, a frontend which allows users to interact with the protocol:
- Gives an overview of the AI workers in the registry.
- Allows Mech owners to create new workers.
- Allows users to request work from an existing worker.
- Write request metadata: the application writes the request metadata to IPFS. The request metadata must contain the attributes `nonce`, `tool`, and `prompt`. Additional attributes can be passed depending on the specific tool:
```json
{
  "nonce": 15,
  "tool": "prediction_request",
  "prompt": "Will my favourite football team win this week's match?"
}
```
- The application gets the metadata's IPFS hash.
- The application writes the request's IPFS hash to the Mech contract, which includes a small payment (currently $0.01 on the Gnosis chain deployment). Alternatively, the payment can be made separately through a Nevermined subscription.
- The Mech service constantly monitors Mech contract events, and therefore picks up the request hash.
- The Mech reads the request metadata from IPFS using its hash.
- The Mech selects the appropriate tool to handle the request from the `tool` entry in the metadata, and runs the tool with the given arguments, usually a prompt. In this example, the Mech has been requested to interact with OpenAI's API, so it forwards the prompt to it, but a tool can implement any other desired behavior.
- The Mech gets a response from the tool.
- The Mech writes the response to IPFS.
- The Mech gets the response's IPFS hash.
- The Mech writes the response hash to the Mech contract.
- The application monitors for contract Deliver events and reads the response hash from the associated transaction (a sketch of this monitoring follows the walkthrough).
- The application gets the response metadata from IPFS:
```json
{
  "requestId": 68039248068127180134548324138158983719531519331279563637951550269130775,
  "result": "{\"p_yes\": 0.35, \"p_no\": 0.65, \"confidence\": 0.85, \"info_utility\": 0.75}"
}
```
See some examples of requests and responses on the Mech Hub.
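For illustration, here is a minimal sketch of the application side of this flow using web3.py. The contract address, the `Deliver` ABI fragment, and the event field names are placeholders and assumptions for this sketch; consult the deployed Mech contract for the actual interface.

```python
# Hypothetical sketch: watch a Mech contract for Deliver events with
# web3.py. The address and ABI fragment are placeholders, and argument
# casing (fromBlock vs from_block) varies across web3.py versions.
from web3 import Web3

DELIVER_ABI = [{
    "anonymous": False,
    "inputs": [
        {"indexed": False, "name": "requestId", "type": "uint256"},
        {"indexed": False, "name": "data", "type": "bytes"},
    ],
    "name": "Deliver",
    "type": "event",
}]

w3 = Web3(Web3.HTTPProvider("https://rpc.gnosischain.com"))
mech = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder Mech address
    abi=DELIVER_ABI,
)

event_filter = mech.events.Deliver.create_filter(fromBlock="latest")
while True:
    for event in event_filter.get_new_entries():
        # The "data" field carries the IPFS hash of the response metadata.
        print(event["args"]["requestId"], event["args"]["data"].hex())
```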
This repository contains a demo AI Mech. You can clone and extend the codebase to create your own AI Mech. You need the following requirements installed in your system:

- Python (recommended `3.10`)
- Poetry
- Docker Engine
- Docker Compose
- Tendermint `==0.34.19`
Follow these instructions to have your local environment prepared to run the demo below, as well as to build your own AI Mech.
- Create a Poetry virtual environment and install the dependencies:

  ```bash
  poetry install && poetry shell
  ```

- Fetch the software packages using the Open Autonomy CLI:

  ```bash
  autonomy packages sync --update-packages
  ```

  This will populate the Open Autonomy local registry (folder `./packages`) with the required components to run the worker services.
Follow the instructions below to run the AI Mech demo executing the tool in `./packages/valory/customs/openai_request.py`. Note that AI Mechs can be configured to work in two modes: polling mode, which periodically reads the chain, and websocket mode, which receives event updates from the chain. The default mode used by the demo is polling.
First, you need to configure the worker service. You need to create a `.1env` file which contains the service configuration parameters. We provide a prefilled template (`.example.env`). You will need to provide or create an OpenAI API key.
```bash
# Copy the prefilled template
cp .example.env .1env

# Edit ".1env" and replace "dummy_api_key" with your OpenAI API key.

# Source the env file
source .1env
```
Warning
The demo service is configured to match a specific on-chain agent (ID 3 on Mech Hub). Since you will not have access to its private key, your local instance will not be able to transact. However, it will be able to receive Requests for AI tasks sent from Mech Hub. These Requests will be executed by your local instance, but you will notice that a failure occurs when it tries to submit the transaction (Deliver type) on-chain.
Now, you have two options to run the worker: as a standalone agent or as a service.
To run it as a standalone agent:

- Ensure you have a file with a private key (`ethereum_private_key.txt`). You can generate a new private key file using the Open Autonomy CLI:

  ```bash
  autonomy generate-key ethereum
  ```

- From one terminal, run the agent:

  ```bash
  bash run_agent.sh
  ```

- From another terminal, run the Tendermint node:

  ```bash
  bash run_tm.sh
  ```
To run it as a service:

- Ensure you have a file with the agent address and private key (`keys.json`). You can generate a new private key file using the Open Autonomy CLI:

  ```bash
  autonomy generate-key ethereum -n 1
  ```

- Ensure that the variable `ALL_PARTICIPANTS` in the file `.1env` contains the agent address from `keys.json`:

  ```bash
  ALL_PARTICIPANTS='["your_agent_address"]'
  ```

- Run the service:

  ```bash
  bash run_service.sh
  ```
To send requests to a deployed AI Mech, use the mech-client, which can be used either as a CLI or directly from a Python script.
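To give a feel for the CLI, a sketch is shown below; the exact command layout differs between mech-client versions, so treat the request line as hypothetical and check the mech-client README for the current interface.

```bash
pip install mech-client
mechx --help  # lists the available commands

# A request then looks roughly like this (hypothetical invocation):
# mechx interact "Will my favourite football team win this week's match?" --tool prediction-online
```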
To perform mech requests from your service, use the `mech_interact_abci` skill. This skill abstracts away all the IPFS and contract interactions so you only need to care about the following:

- Add the `mech_interact_abci` skill to your dependency list, in `packages.json`, `aea-config.yaml`, and any composed `skill.yaml`.
- Import `MechInteractParams` and `MechResponseSpecs` in your `models.py` file. You will also need to copy some dataclasses to your `rounds.py`.
- Add `mech_requests` and `mech_responses` to your skill's `SynchronizedData` class (see here).
- To send a request, prepare the request metadata, write it to `synchronized_data.mech_requests`, and transition into `mech_interact` (see the sketch after this list).
- You will need to appropriately chain the `mech_interact_abci` skill with your other skills (see here) and `transaction_settlement_abci`.
- After the interaction finishes, the responses will be inside `synchronized_data.mech_responses`.

For a complete list of required changes, use this PR as reference.
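For orientation, here is a minimal sketch of building a request entry before handing it to the interaction skill. The helper name and the plain-dict shape are assumptions for illustration only; the skill defines its own dataclasses, so mirror those in your code.

```python
# Hypothetical sketch: build the request metadata to store in
# synchronized_data.mech_requests. The field names mirror the request
# metadata shown earlier; the skill's own dataclasses may differ.
import json
import uuid
from typing import Any, Dict


def build_mech_request(prompt: str, tool: str) -> Dict[str, Any]:
    """Build one request entry with the attributes a Mech expects."""
    return {
        "nonce": str(uuid.uuid4()),  # unique identifier for this request
        "tool": tool,                # e.g. "prediction-online"
        "prompt": prompt,
    }


requests = [build_mech_request("Will it rain tomorrow?", "prediction-online")]
serialized = json.dumps(requests)  # value to write into mech_requests
```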
You can create and mint your own AI Mech that handles requests for tasks that you can define.
- Create a new tool. Tools are the components that execute the Requests for AI tasks submitted on Mech Hub. Tools are custom components and should be under the `customs` packages (e.g., the valory tools). The tool file must contain a `run` function that accepts `kwargs` and always returns a tuple (`Tuple[Optional[str], Optional[Dict[str, Any]], Any, Any]`). That is, the `run` function must not raise any exception. If exceptions occur inside the function, they must be processed, and the return value must be set accordingly, for example, returning an error code.

  ```python
  def run(**kwargs) -> Tuple[Optional[str], Optional[Dict[str, Any]], Any, Any]:
      """Run the task"""
      # Your code here
      return result_str, prompt_used, generated_tx, counter_callback
  ```
  The `kwargs` are guaranteed to contain:

  - `api_keys` (`kwargs["api_keys"]`): the required API keys. This is a dictionary containing the API keys required by your Mech: `<api_key> = kwargs["api_keys"][<api_key_id>]`.
  - `prompt` (`kwargs["prompt"]`): a string containing the user prompt.
  - `tool` (`kwargs["tool"]`): a string specifying the (sub-)tool to be used. The `run` function must parse this input and execute the task corresponding to the particular sub-tool referenced. These sub-tools allow the user to fine-tune the use of your tool. A complete tool sketch follows below.
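  To make this concrete, here is a minimal, self-contained tool sketch that dispatches on the sub-tool name and catches its own exceptions. The sub-tool names and the error-string convention are illustrative assumptions; only the `run(**kwargs)` entry point and the four-element return tuple are part of the required interface.

  ```python
  # Hypothetical tool sketch. Only run(**kwargs) and the four-element
  # return tuple are required; the sub-tools here are illustrative.
  from typing import Any, Dict, Optional, Tuple

  ALLOWED_TOOLS = ("echo-upper", "echo-lower")  # illustrative sub-tools


  def run(**kwargs: Any) -> Tuple[Optional[str], Optional[Dict[str, Any]], Any, Any]:
      """Run the task without raising: errors are reported in the result."""
      try:
          tool = kwargs["tool"]
          prompt = kwargs["prompt"]
          if tool not in ALLOWED_TOOLS:
              # Return an error code instead of raising.
              return f"Unsupported tool: {tool}", None, None, None
          result = prompt.upper() if tool == "echo-upper" else prompt.lower()
          # result_str, prompt_used, generated_tx, counter_callback
          return result, None, None, None
      except Exception as exc:  # the run function must never raise
          return f"Error: {exc}", None, None, None
  ```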
- Upload the tool file to IPFS. You can push your tool to IPFS like the other packages:

  ```bash
  autonomy push-all
  ```

  You should see an output similar to this:

  ```
  Pushing: /home/ardian/vlr/mech/packages/valory/customs/openai_request
  Pushed component with:
      PublicId: valory/openai_request:0.1.0
      Package hash: bafybeibdcttrlgp5udygntka5fofi566pitkxhquke37ng7csvndhy4s2i
  ```

  Your tool will be available in `packages.json`.
- Configure your service. Edit the `.1env` file. The demo service has this configuration:

  ```
  FILE_HASH_TO_TOOLS=[["bafybeiaodddyn4eruafqg5vldkkjfglj7jg76uvyi5xhi2cysktlu4w6r4",["openai-gpt-3.5-turbo-instruct","openai-gpt-3.5-turbo","openai-gpt-4"]],["bafybeiepc5v4ixwuu5m6p5stck5kf2ecgkydf6crj52i5umnl2qm5swb4i",["stabilityai-stable-diffusion-v1-5","stabilityai-stable-diffusion-xl-beta-v2-2-2","stabilityai-stable-diffusion-512-v2-1","stabilityai-stable-diffusion-768-v2-1"]]]
  API_KEYS=[["openai","dummy_api_key"],["stabilityai","dummy_api_key"]]
  ```

  To add your new tool with hash `<your_tool_hash>`, sub-tool list `[a, b, c]`, and API key `<your_api_key>`, simply update the variables above to:

  ```
  FILE_HASH_TO_TOOLS=[[<your_tool_hash>,[a, b, c]],["bafybeiaodddyn4eruafqg5vldkkjfglj7jg76uvyi5xhi2cysktlu4w6r4",["openai-gpt-3.5-turbo-instruct","openai-gpt-3.5-turbo","openai-gpt-4"]],["bafybeiepc5v4ixwuu5m6p5stck5kf2ecgkydf6crj52i5umnl2qm5swb4i",["stabilityai-stable-diffusion-v1-5","stabilityai-stable-diffusion-xl-beta-v2-2-2","stabilityai-stable-diffusion-512-v2-1","stabilityai-stable-diffusion-768-v2-1"]]]
  API_KEYS=[[openai, dummy_api_key],[<your_api_key_id>, <your_api_key>]]
  ```
- Mint your agent service in the Autonolas Protocol, and create a Mech for it in Mech Hub. This will allow you to set the `SAFE_CONTRACT_ADDRESS` and `AGENT_MECH_CONTRACT_ADDRESS` in the `.1env` file.

  Warning
  AI Mechs run on the Gnosis chain. You must ensure that your wallet is connected to the Gnosis chain before using the Autonolas Protocol and Mech Hub.

  Here is an example of the agent NFT metadata once you create the Mech:

  ```json
  {
    "name": "Autonolas Mech III",
    "description": "The mech executes AI tasks requested on-chain and delivers the results to the requester.",
    "inputFormat": "ipfs-v0.1",
    "outputFormat": "ipfs-v0.1",
    "image": "tbd",
    "tools": ["openai-gpt-3.5-turbo-instruct", "openai-gpt-3.5-turbo", "openai-gpt-4"]
  }
  ```
- Run your service. You can take a look at the `run_service.sh` script and execute your service locally as above. Once your service works locally, you have the option to run it on a hosted service like Propel.
| Tools |
|---|
| packages/jhehemann/customs/prediction_sum_url_content |
| packages/napthaai/customs/prediction_request_rag |
| packages/napthaai/customs/resolve_market_reasoning |
| packages/nickcom007/customs/prediction_request_sme |
| packages/nickcom007/customs/sme_generation_request |
| packages/polywrap/customs/prediction_with_research_report |
| packages/psouranis/customs/optimization_by_prompting |
| packages/valory/customs/native_transfer_request |
| packages/valory/customs/openai_request |
| packages/valory/customs/prediction_request |
| packages/valory/customs/prediction_request_claude |
| packages/valory/customs/prediction_request_embedding |
| packages/valory/customs/resolve_market |
| packages/valory/customs/stability_ai_request |
- OpenAI request (`openai_request.py`): Executes requests to the OpenAI API through the engine associated to the specific tool. It receives an arbitrary prompt as input and outputs the response returned by the OpenAI API. Sub-tools:
  - `openai-gpt-3.5-turbo`
  - `openai-gpt-4`
  - `openai-gpt-3.5-turbo-instruct`
- Stability AI request (`stabilityai_request.py`): Executes requests to the Stability AI API through the engine associated to the specific tool. It receives an arbitrary prompt as input and outputs the image data corresponding to the output of Stability AI. Sub-tools:
  - `stabilityai-stable-diffusion-v1-5`
  - `stabilityai-stable-diffusion-xl-beta-v2-2-2`
  - `stabilityai-stable-diffusion-512-v2-1`
  - `stabilityai-stable-diffusion-768-v2-1`
- Native transfer request (`native_transfer_request.py`): Parses a user prompt in natural language into an Ethereum transaction. Sub-tools:
  - `transfer_native`
- Prediction request (`prediction_request.py`): Outputs the estimated probability of occurrence (`p_yes`) or non-occurrence (`p_no`) of a certain event specified as the input prompt in natural language. Sub-tools:
  - `prediction-offline`: Uses only the training data of the model to make the prediction.
  - `prediction-online`: In addition to the training data, it also uses online information to improve the prediction.
A keyfile is just a file with your Ethereum private key as a hex-string, for example:

```
0x0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcd
```

Make sure you don't have any extra characters in the file, like newlines or spaces.
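One way to write the file without a trailing newline (a sketch; any equivalent method works):

```bash
# printf, unlike echo, does not append a trailing newline to the key.
printf %s "0x0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcd" > ethereum_private_key.txt
```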
| Network | Service |
|---|---|
| Ethereum | https://registry.olas.network/ethereum/services/21 |
| Gnosis | https://registry.olas.network/gnosis/services/3 |
| Arbitrum | https://registry.olas.network/arbitrum/services/1 |
| Polygon | https://registry.olas.network/polygon/services/3 |