Custom Component Handbook
Before reading this guide, follow the Oracle component tutorial to learn the basics of building a WAVS service.
Use the info in this guide to customize the template to create your own custom service. Check out the WAVS design considerations page to learn which use-cases WAVS is best suited for.
Foundry Template structure
The foundry template is made up of the following main files:
```
wavs-foundry-template/
├── README.md
├── makefile            # Commands, variables, and configs
├── components/         # WASI components
│   └── eth-price-oracle/
│       ├── Cargo.toml  # Component dependencies
│       ├── lib.rs      # Main component logic
│       ├── trigger.rs  # Trigger handling
│       └── bindings.rs # Bindings generated by `make build`
├── compiled/           # WASM files compiled by `make build`
├── src/
│   ├── contracts/      # Trigger and submission contracts
│   └── interfaces/     # Solidity interfaces
├── script/             # Scripts used in makefile commands
├── cli.toml            # CLI configuration
├── wavs.toml           # WAVS service configuration
├── docs/               # Documentation
└── .env                # Private environment variables
```
- The `README` file contains the tutorial commands.
- The `makefile` contains commands for building and deploying the service, as well as the service's variables and configs.
- The `components` directory contains the component logic for your service. Running `make wasi-build` automatically generates bindings and compiles components into the `compiled` directory.
- The `src` directory contains the Solidity contracts and interfaces.
- The `script` directory contains the scripts used in the makefile commands to deploy, trigger, and test the service.
- The `.env` file contains private environment variables and keys. Use `cp .env.example .env` to copy the example `.env` file.
WAVS services
The basic service is made up of a trigger, a component, and submission logic (optional).
Trigger: any onchain event emitted from a contract.
Component: the main logic of a WAVS service. Components are responsible for processing the trigger data and executing the business logic.
Submission: handles the logic for submitting a component's output to the blockchain.
Triggers
A trigger prompts a WAVS service to run. Operators listen for the trigger event specified by the service and execute the corresponding component off-chain. Triggers can be any onchain event emitted from any contract.
Trigger lifecycle
- When a service is deployed, it is configured with a trigger address and event, a WASI component, and an optional submission contract.
- Registered operators listen to chain logs. Each operator maintains lookup maps and verifies events against registered triggers.
- When a trigger event is emitted, operators pick up the event and verify that it matches a registered trigger.
- If a match is found, WAVS creates a `TriggerAction` that wraps the trigger event data:

```rust
TriggerAction {
    // Service and workflow identification
    config: TriggerConfig {
        service_id: ServiceID,        // Generated during deployment
        workflow_id: WorkflowID,      // Default or specified
        trigger: Trigger::EthContractEvent {
            address: Address,         // Contract address
            chain_name: ChainName,    // Chain identifier
            event_hash: ByteArray<32> // Event signature
        }
    },
    // The actual event data
    data: TriggerData::EthContractEvent {
        contract_address: Address,    // Emitting contract
        chain_name: ChainName,        // Source chain
        log: LogData {                // Raw event data
            topics: Vec<Vec<u8>>,     // Event signature + indexed params
            data: Vec<u8>             // ABI-encoded event data
        },
        block_height: u64             // Block number
    }
}
```

- The `TriggerAction` is converted to a WASI-compatible format and passed to the component, where it is decoded and processed.
Developing triggers
WAVS doesn't interpret the contents of event triggers. Instead, it passes the raw log data to components, which can decode and process the data according to their specific needs.
To configure a trigger for a service, you'll need to specify:
- The event signature/name that identifies which specific event should trigger the service. This can either be a hex-encoded event signature or an event name.
- The contract address where the event will be emitted from.
In the template, the trigger event is set in the `Makefile` as `TRIGGER_EVENT ?= NewTrigger(bytes)`, and the trigger address of the example trigger contract is automatically populated during deployment. To change the trigger event or address, manually update the `Makefile` variables and redeploy the service.
When a WAVS component receives this trigger, it uses the `decode_event_log_data!` macro from the `wavs-wasi-chain` crate to decode the event data for processing.
The trigger contract in the WAVS foundry template is a simple example that takes generic bytes and passes them to the component. The flow for triggers is located in several places in the template:
- The trigger contract in `src/WavsTrigger.sol` defines how triggers are created and emitted on-chain.
- The trigger script in `/script/Trigger.s.sol` calls the `addTrigger` function with the `coinMarketCapID`.
- The `decode_trigger_event` function in `/components/eth-price-oracle/src/trigger.rs` processes the trigger data and extracts the `trigger_id` and `data`.
- The `run` function in `/components/eth-price-oracle/src/lib.rs` calls `decode_trigger_event`, processes the extracted trigger data, and determines how to handle it.
- When testing, the `wasi-exec` command in the `Makefile` passes input data to the component via ``--input `cast format-bytes32-string $(COIN_MARKET_CAP_ID)` ``. This uses `cast` to format the `COIN_MARKET_CAP_ID` as a `bytes32` string and simulates an Ethereum event during local execution.
Components
WASI components contain the main logic of a WAVS service. They are responsible for processing the trigger data and executing the business logic of a service.
A basic component has three main parts:
- Decoding incoming trigger data.
- Processing the data (this is the custom logic of your component).
- Encoding and returning the result for submission (if applicable).
After being passed the `TriggerAction`, the component decodes it using the `decode_event_log_data!` macro from the `wavs-wasi-chain` crate.
```rust
#[allow(warnings)]
mod bindings;
use alloy_sol_types::{sol, SolValue};
use bindings::{
    export,
    wavs::worker::layer_types::{TriggerData, TriggerDataEthContractEvent},
    Guest, TriggerAction,
};
use wavs_wasi_chain::decode_event_log_data;

// Solidity types for the incoming trigger event using the `sol!` macro
sol! {
    event MyEvent(uint64 indexed triggerId, bytes data);

    struct MyResult {
        uint64 triggerId;
        bool success;
    }
}

// Define the component
struct Component;
export!(Component with_types_in bindings);

impl Guest for Component {
    fn run(action: TriggerAction) -> Result<Option<Vec<u8>>, String> {
        match action.data {
            TriggerData::EthContractEvent(TriggerDataEthContractEvent { log, .. }) => {
                // 1. Decode the event
                let event: MyEvent = decode_event_log_data!(log)
                    .map_err(|e| format!("Failed to decode event: {}", e))?;

                // 2. Process data (your business logic goes here)
                let result = MyResult {
                    triggerId: event.triggerId,
                    success: true,
                };

                // 3. Return the encoded result
                Ok(Some(result.abi_encode()))
            }
            _ => Err("Unsupported trigger type".to_string()),
        }
    }
}
```
Components must implement the `Guest` trait, which is the main interface between your component and the WAVS runtime. The `run` function is the entry point for processing triggers: it receives the trigger data, decodes it, processes it according to your component's logic, and returns the results. If you need to submit results to the blockchain, they must be encoded using `abi_encode()`.
The `sol!` macro from `alloy_sol_types` is used to define Solidity types in Rust. It generates Rust structs and implementations that match your Solidity types, including ABI encoding/decoding methods.
Bindings are automatically generated for any files in the `/components` and `/src` directories when the `make build` command is run.
Submission
A service handler or submission contract handles the logic for submitting a component's output to the blockchain. A submission contract must implement the `handleSignedData()` function from the `IWavsServiceHandler` interface. This interface is defined in the `@wavs` package: https://www.npmjs.com/package/@wavs/solidity?activeTab=code
In the template, the submission contract uses the `handleSignedData()` function to validate the operator's signature and store the processed data from the component. The `DataWithId` struct must match the output format from the component. Each trigger has a unique ID that links the data to its source.
Template submission example:
```solidity
function handleSignedData(bytes calldata _data, bytes calldata _signature) external {
    // 1. Validate the operator's signature by calling the `validate` function on the `_serviceManager` contract
    _serviceManager.validate(_data, _signature);

    // 2. Decode the data into a DataWithId struct defined in the `ITypes` interface
    DataWithId memory dataWithId = abi.decode(_data, (DataWithId));

    // 3. Store the result in state
    _signatures[dataWithId.triggerId] = _signature;  // Store the operator signature
    _datas[dataWithId.triggerId] = dataWithId.data;  // Store the data
    _validTriggers[dataWithId.triggerId] = true;     // Mark the trigger as valid
}
```
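On the component side, the submitted bytes must decode into this same struct. The following is a minimal, hypothetical sketch (not the template's exact code) of how a component could ABI-encode a `DataWithId` payload, assuming the `DataWithId` Rust type is generated by the `sol!` macro from `ITypes.sol` (as described later in this guide) and exposed through a `solidity` module:

```rust
use alloy_sol_types::SolValue;

// Hypothetical helper: build the bytes that `handleSignedData()` will later decode.
// `solidity::DataWithId` is assumed to be generated via `sol!("../../src/interfaces/ITypes.sol")`.
fn encode_for_submission(trigger_id: u64, output: Vec<u8>) -> Vec<u8> {
    let payload = solidity::DataWithId {
        triggerId: trigger_id, // Links the result back to the trigger that produced it
        data: output.into(),   // The component's processed result as bytes
    };
    // ABI-encode so the contract's `abi.decode(_data, (DataWithId))` call can read it
    payload.abi_encode()
}
```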
Note: submission contracts are not required for a WAVS service. If you don't need to submit data back to the blockchain, you can modify the makefile's `deploy-service` command to use the `--submit none` flag when deploying the service:
```makefile
deploy-service:
	@$(WAVS_CMD) deploy-service --log-level=info --data /data/.docker --home /data \
	--component "/data/compiled/${COMPONENT_FILENAME}" \
	--trigger-event-name "${TRIGGER_EVENT}" \
	--trigger-address "${SERVICE_TRIGGER_ADDR}" \
	--service-config ${SERVICE_CONFIG} \
	--submit none
```
Makefile commands
The makefile contains several commands for building, testing, and deploying WAVS components. Here's a detailed explanation of the most commonly used commands:
Building and Testing Components
Build your WASI components:

```
make wasi-build
```

Under the hood, this command:

- Iterates over all components in the `components` directory.
- Automatically generates WASI bindings for each component.
- Runs `cargo component build --release` to compile the components.
- Formats the code using `cargo fmt`.
- Copies the compiled `.wasm` files to the `compiled` directory.
Test your WASI components directly without deploying to a chain:

```
COIN_MARKET_CAP_ID=1 make wasi-exec
```

Under the hood, this command:

- Uses the `wavs-cli` Docker image to run the component specified by `COMPONENT_FILENAME`.
- Simulates the trigger event using the `COIN_MARKET_CAP_ID` as the input and the `SERVICE_CONFIG` to configure the service.
- Executes the component with the input data.
- Can handle input data in three formats:
  - `@file`: reads input from a file.
  - `0x`: treats input as hex-encoded bytes.
  - Raw string: treats input as raw bytes (you may need to format the input data appropriately before passing it to the component).
- For the `ETH_PRICE_ORACLE` component, the input data must be formatted as a `bytes32` string. This is done in the makefile's `wasi-exec` command using ``--input `cast format-bytes32-string $(COIN_MARKET_CAP_ID)` ``. When creating your own components, update the makefile to use the appropriate format for your use case.
Variables:

- `COMPONENT_FILENAME`: the path of the compiled WASM file to execute.
- `COIN_MARKET_CAP_ID`: the input data used to simulate the trigger event.
- `SERVICE_CONFIG`: the service configuration for the component, containing the `host_envs` and `kv` variables.
Setup
```
make setup
```

- Purpose: installs the initial dependencies required for the project.
- Under the hood:
  - Checks for system requirements like Node.js, jq, and cargo.
  - Installs dependencies using `forge install` and `npm install`.
```
forge build
```

- Purpose: builds the Solidity contracts.
- Under the hood: compiles the Solidity contracts using Foundry's `forge` tool.
```
forge test
```

- Purpose: runs tests for the Solidity contracts.
- Under the hood: executes the test suite using Foundry's `forge test` command.
Starting Services
```
make start-all
```

- Starts the Anvil Ethereum node and WAVS using Docker Compose. Keep this running and open another terminal to execute other commands.
- Under the hood:
  - Cleans up any existing Docker containers.
  - Starts the Anvil Ethereum node directly on the host.
  - Runs `docker compose up`, which:
    - Starts the main `wavs` service and the `aggregator` service.
    - Deploys the EigenLayer core contracts for local development and your Service Manager contract, which manages your AVS.
Deployment and Execution
```
export SERVICE_MANAGER_ADDR=`make get-eigen-service-manager-from-deploy`
forge script ./script/Deploy.s.sol ${SERVICE_MANAGER_ADDR} --sig "run(string)" --rpc-url http://localhost:8545 --broadcast
```
- Under the hood, this command:
  - Retrieves the deployed service manager address from `.docker/deployments.json`.
  - Deploys the on-chain trigger and submission contracts.
  - Links the submission contract to the Service Manager by passing the `_serviceManagerAddr` to its constructor.
  - Saves the deployed contract addresses in `.docker/script_deploy.json`.
  - Uses the specified RPC URL to interact with the Ethereum node.
  - Broadcasts the transaction to the network.
```
TRIGGER_EVENT="NewTrigger(bytes)" COMPONENT_FILENAME=usdt_balance.wasm make deploy-service
```
- Purpose: registers the WASI component as a service with the WAVS network.
- Under the hood, the service is registered with the following configuration:
  - The compiled component to run (`--component`).
  - The trigger event to watch for (`--trigger-event-name`).
  - The trigger contract address (`--trigger-address`).
  - The submission contract address (`--submit-address`).
  - The service configuration, including fuel limits, gas limits, and environment variables (`--service-config`).
- The service configuration is stored off-chain and used by the WAVS operator to run the component.
```
export COIN_MARKET_CAP_ID=1
export SERVICE_TRIGGER_ADDR=`make get-trigger-from-deploy`
forge script ./script/Trigger.s.sol ${SERVICE_TRIGGER_ADDR} ${COIN_MARKET_CAP_ID} --sig "run(string,string)" --rpc-url http://localhost:8545 --broadcast -v 4
```
- Under the hood, this command:
  - Exports the `COIN_MARKET_CAP_ID` environment variable for use in subsequent commands.
  - Uses `jq` to extract the trigger address from `.docker/script_deploy.json`.
  - Executes the `Trigger.s.sol` script with the trigger address and `COIN_MARKET_CAP_ID`.
  - Uses the specified RPC URL to interact with the local Anvil node.
  - Broadcasts the transaction to the network.
Viewing Results
```
make show-result
```

- Uses the `ShowResult.s.sol` script to retrieve and display the result from the service.
Makefile variables
The Makefile contains several important variables that control the behavior of the WAVS service.
Component variable
```
COMPONENT_FILENAME ?= eth_price_oracle.wasm
```

- Used by the `wasi-exec` and `deploy-service` commands to identify which component to run or deploy.
- Change this filename to run a different service.
Service config
```
SERVICE_CONFIG ?= '{"fuel_limit":100000000,"max_gas":5000000,"host_envs":[],"kv":[],"workflow_id":"default","component_id":"default"}'
```
- Configures the WAVS service:
  - `fuel_limit`: maximum computational resources the service can use.
  - `max_gas`: maximum gas limit for blockchain transactions.
  - `host_envs`: list of private environment variables to expose to the component (names must be prefixed with `WAVS_ENV_`).
  - `kv`: key-value pairs for public configuration.
  - `workflow_id` and `component_id`: set to `default` in the template for simple services.
Network configuration
```
RPC_URL ?= http://localhost:8545
```
- Specifies the Ethereum RPC endpoint URL.
Trigger event
```
TRIGGER_EVENT ?= NewTrigger(bytes)
```

- Defines the event signature that WAVS will watch for on the blockchain.
- With WAVS, this can either be a hex-encoded event signature or an event name. `NewTrigger(bytes)` in this example is the trigger event from the template's trigger contract.
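As an aside, the hex-encoded form of an event signature is the keccak256 hash of its canonical declaration (the value that appears as `topic0` in the emitted log). The sketch below illustrates this using `alloy_primitives`, which is already among the template's dependencies; the helper function name is made up for illustration:

```rust
use alloy_primitives::keccak256;

// Hypothetical helper: compute the hex-encoded signature for the template's trigger event.
fn print_event_signature_hash() {
    let hash = keccak256("NewTrigger(bytes)"); // keccak256 of the canonical event declaration
    println!("{:?}", hash); // 0x-prefixed 32-byte hash usable in place of the event name
}
```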
Trigger data
```
COIN_MARKET_CAP_ID ?= 1
```

- Specifies the `COIN_MARKET_CAP_ID` used to test the price oracle in `wasi-exec` and the trigger scripts (`1` is the ID of Bitcoin in the `eth-price-oracle` example).
- In the `ETH_PRICE_ORACLE` component, the input data needs to be formatted as a `bytes32` string in the `make wasi-exec` makefile command using `cast format-bytes32-string`. When creating your own components, update the makefile to use an appropriate format for your use case.
Contract addresses
```
SERVICE_MANAGER_ADDR ?= `jq -r '.eigen_service_managers.local | .[-1]' .docker/deployments.json`
SERVICE_TRIGGER_ADDR ?= `jq -r '.trigger' "./.docker/script_deploy.json"`
SERVICE_SUBMISSION_ADDR ?= `jq -r '.service_handler' "./.docker/script_deploy.json"`
```

- Automatically populated from the deployment JSON files. Used by the deployment and interaction commands.
- You can view the addresses of your deployed contracts using these commands:

```
# View the trigger contract address
make get-trigger-from-deploy

# View the submission contract address
make get-service-handler-from-deploy

# View the service manager address
make get-eigen-service-manager-from-deploy
```
Customizing Makefile variables
Makefile variables can be overridden when running make commands. For example, running the following in your terminal will use a different component when testing:
```
COMPONENT_FILENAME=my_component.wasm COIN_MARKET_CAP_ID=`cast format-bytes32-string 1` make wasi-exec
```
To trigger the component from an external contract, you can set the trigger address and trigger event manually in the makefile:
```
TRIGGER_ADDRESS ?= 0x1234567890123456789012345678901234567890
TRIGGER_EVENT ?= MyCustomEvent(bytes)
```
You can also add variables to the makefile, such as public variables to be referenced in your component or reference private variables like API keys. Find out more in the Environment Variables section.
TOML files
There are several TOML files in the template that are used to configure the service:
- `wavs.toml` configures the WAVS service itself, including chain configurations (local, testnets, mainnet) and maximum WASM fuel limits.
- `cli.toml` configures the WAVS CLI tool, and also includes chain configurations (local, testnets, mainnet), maximum WASM fuel limits, and log levels.
- `Cargo.toml` in the root directory configures the workspace and includes dependencies, build settings, and component metadata.
- `/components/*/Cargo.toml` in each component directory configures the Rust component and includes dependencies, build settings, and component metadata. It can inherit dependencies from the root `Cargo.toml` file using `workspace = true`.
These files can be customized to suit your specific needs, and many settings can be overridden using environment variables.
The following is an example of a component's `Cargo.toml` file structure:
```toml
# Package metadata - inherits most values from workspace configuration
[package]
name = "eth-price-oracle"     # Name of the component
edition.workspace = true      # Rust edition (inherited from workspace)
version.workspace = true      # Version (inherited from workspace)
authors.workspace = true      # Authors (inherited from workspace)
rust-version.workspace = true # Minimum Rust version (inherited from workspace)
repository.workspace = true   # Repository URL (inherited from workspace)

# Component dependencies
[dependencies]
# Core dependencies
wit-bindgen-rt = { workspace = true }  # Required for WASI bindings and Guest trait
wavs-wasi-chain = { workspace = true } # Required for core WAVS functionality

# Helpful dependencies
serde = { workspace = true }           # For serialization (if working with JSON)
serde_json = { workspace = true }      # For JSON handling
alloy-sol-macro = { workspace = true } # For Ethereum contract interactions
wstd = { workspace = true }            # For WASI standard library features
alloy-sol-types = { workspace = true } # For Ethereum ABI handling
anyhow = { workspace = true }          # For enhanced error handling

# Library configuration
[lib]
crate-type = ["cdylib"] # Specifies this is a dynamic library crate

# Release build optimization settings
[profile.release]
codegen-units = 1 # Single codegen unit for better optimization
opt-level = "s"   # Optimize for size
debug = false     # Disable debug information
strip = true      # Strip symbols from binary
lto = true        # Enable link-time optimization

# WAVS component metadata
[package.metadata.component]
package = "component:eth-price-oracle" # Component package name
```
Input and Output
When building WASI components, keep in mind that the component can receive the trigger data in two ways:
- Triggered by an onchain event from a contract after service deployment. Components receive a `TriggerAction` containing event data, which is then decoded.
- Manually via the `wasi-exec` command, which simulates an onchain event and passes the trigger data directly to the component as `trigger::raw`. No ABI decoding is required, and the output is returned as raw bytes.
  - In the `ETH_PRICE_ORACLE` component, the input data needs to be formatted as a `bytes32` string using `cast format-bytes32-string` when using the `make wasi-exec` command. When creating your own components, use an appropriate input format for your use case with the `wasi-exec` command.
Data Processing Pattern
The example below shows a basic, generic pattern for processing input data and returning output. In the example, the `sol!` macro generates Rust types from Solidity definitions, adds ABI encoding/decoding methods, and handles type conversions (e.g., `uint64` → `u64`). ABI encoding/decoding converts Rust structs to bytes and vice versa. The `decode_event_log_data!` macro decodes the raw event log data and returns a Rust struct matching your Solidity event; it is used for on-chain events.
```rust
// 1. Define your Solidity types using the `sol!` macro
sol! {
    event MyEvent(uint64 indexed triggerId, bytes data);

    struct MyResult {
        uint64 triggerId;
        bytes processedData;
    }
}

// 2. Handle both on-chain event triggers and raw trigger types
impl Guest for Component {
    fn run(action: TriggerAction) -> Result<Option<Vec<u8>>, String> {
        match action.data {
            // On-chain event handling
            TriggerData::EthContractEvent(TriggerDataEthContractEvent { log, .. }) => {
                // Decode the event
                let event: MyEvent = decode_event_log_data!(log)?;

                // Process the data
                let result = MyResult {
                    triggerId: event.triggerId,
                    processedData: process_data(&event.data)?,
                };

                // Encode for submission
                Ok(Some(result.abi_encode()))
            }
            // Manual trigger handling for testing
            TriggerData::Raw(data) => {
                // Process raw data directly
                let result = process_data(&data)?;
                Ok(Some(result))
            }
            _ => Err("Unsupported trigger type".to_string())
        }
    }
}
```
In the template, encoding and decoding are handled in the `trigger.rs` file using a `Destination` enum to determine how to process and return data based on the trigger source. The `decode_trigger_event` function in `trigger.rs` determines the destination:

- For `TriggerData::EthContractEvent`, it returns `Destination::Ethereum`.
- For `TriggerData::Raw` (used in testing), it returns `Destination::CliOutput`.
This allows the component to handle both production and testing scenarios appropriately.
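The following is a rough sketch of this pattern, not the template's exact code. It reuses the generic `MyEvent` type from the example above, and the tuple return shape and the `0` placeholder ID for raw input are assumptions made for illustration:

```rust
pub enum Destination {
    Ethereum,  // Result will be ABI-encoded and submitted on-chain
    CliOutput, // Result is returned as raw bytes for local `wasi-exec` testing
}

pub fn decode_trigger_event(trigger_data: TriggerData) -> Result<(u64, Vec<u8>, Destination), String> {
    match trigger_data {
        // On-chain event: decode the log and mark the result for Ethereum submission
        TriggerData::EthContractEvent(TriggerDataEthContractEvent { log, .. }) => {
            let event: MyEvent = decode_event_log_data!(log).map_err(|e| e.to_string())?;
            Ok((event.triggerId, event.data.to_vec(), Destination::Ethereum))
        }
        // Raw input from `wasi-exec`: no ABI decoding needed, return it for CLI output
        TriggerData::Raw(data) => Ok((0, data, Destination::CliOutput)),
        _ => Err("Unsupported trigger data".to_string()),
    }
}
```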
Logging
Components can use logging to debug and track the execution of the component.
Logging in development:
Use `println!()` to write to stdout/stderr. This output is visible when running `wasi-exec` locally.

```rust
println!("Debug message: {:?}", data);
```
Logging in production
For production, you can use the `host::log()` function, which takes a `LogLevel` and writes its output via the tracing mechanism. Along with the string provided by the developer, it attaches additional context such as the `ServiceID`, `WorkflowID`, and component `Digest`.

```rust
host::log(LogLevel::Info, "Production logging message");
```
Helpers and utilities
`wavs-wasi-chain` crate
The `wavs-wasi-chain` crate provides a set of helpful functions for making HTTP requests and interacting with the blockchain. It also provides a macro for decoding trigger data for use in the component.
Learn more in the crate documentation.
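For orientation, these are the items from the crate that this guide's examples rely on (import paths as they appear elsewhere in this guide):

```rust
use wavs_wasi_chain::decode_event_log_data;      // Macro for decoding trigger event logs
use wavs_wasi_chain::ethereum::new_eth_provider; // Creates an Alloy RPC provider from a chain config
// HTTP helpers such as `http_request_get`, `http_request_post_json`, and `fetch_json`
// (used in the Network requests section below) also come from this crate.
```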
Sol! macro
The `sol!` macro from `alloy-sol-macro` allows you to generate Rust types from Solidity interface files. This is useful for handling blockchain events and data structures in components.

You can write Solidity definitions (interfaces, structs, enums, custom errors, events, and function signatures) directly inside the `sol! { ... }` macro invocation in your Rust code.

At compile time, the `sol!` macro parses that Solidity syntax and automatically generates the equivalent Rust types, structs, enums, and associated functions (like `abi_encode()` for calls or `abi_decode()` for return data/events) needed to interact with smart contracts based on those definitions.
Required Dependencies:
```toml
[dependencies]
alloy-sol-macro = { workspace = true } # For Solidity type generation
alloy-sol-types = { workspace = true } # For ABI handling
```
Basic Pattern:
```rust
mod solidity {
    use alloy_sol_macro::sol;

    // Generate types from a Solidity file
    sol!("../../src/interfaces/ITypes.sol");

    // Or define types inline
    sol! {
        struct TriggerInfo {
            uint64 triggerId;
            bytes data;
        }

        event NewTrigger(TriggerInfo _triggerInfo);
    }
}
```
In the template, the `sol!` macro is used in the `trigger.rs` component file to generate Rust types from the `ITypes.sol` file.
```rust
mod solidity {
    use alloy_sol_macro::sol;
    pub use ITypes::*;

    // The objects here will be generated automatically into Rust types.
    // If you update the .sol file, you must re-run `cargo build` to see the changes.
    sol!("../../src/interfaces/ITypes.sol");
}
```
The macro reads a Solidity interface file and generates the corresponding Rust types and encoding/decoding functions. In the example above, it reads `ITypes.sol`, which defines:

- the `NewTrigger` event
- the `TriggerInfo` struct
- the `DataWithId` struct
More documentation on the `sol!` macro can be found at https://docs.rs/alloy-sol-macro/latest/alloy_sol_macro/macro.sol.html.
Environment Variables
Components can be configured with two types of variables:
Public variables: `kv`
These variables can be used for non-sensitive information that can be viewed publicly. They are configured in the makefile and set during service deployment, and are accessed using `std::env::var` in the component.
To add public variables, modify the `"kv"` section of the `SERVICE_CONFIG` in your `Makefile`. The following example adds `max_retries`, `timeout_seconds`, and `api_endpoint` variables with values:
```
# makefile
SERVICE_CONFIG ?= '{"fuel_limit":100000000,"max_gas":5000000,"host_envs":[],"kv":[["max_retries","3"],["timeout_seconds","30"],["api_endpoint","https://api.example.com"]],"workflow_id":"default","component_id":"default"}'
```
Then use these variables in your component:
```rust
let max_retries = std::env::var("max_retries")?;
let timeout = std::env::var("timeout_seconds")?;
let endpoint = std::env::var("api_endpoint")?;
```
Private variables: `host_envs`
Private environment variables (`host_envs`) can be used for sensitive data like API keys. These variables are set by operators in their environment and are not viewable by anyone. They must be prefixed with `WAVS_ENV_`. Each operator must set these variables in their environment before deploying the service, and only variables listed in `host_envs` will be available to the component.
To add private variables to your .env file, copy the `.env.example` file to `.env`:
```
# copy the example file
cp .env.example .env
```
Then set the environment variable in your `.env` file:
```
# .env file
WAVS_ENV_MY_API_KEY=your_secret_key_here
```
Variables can also be set in your `~/.bashrc`, `~/.zshrc`, or `~/.profile` files.
Then modify `"host_envs"` in the `SERVICE_CONFIG` section of your `Makefile`. The following example adds `WAVS_ENV_MY_API_KEY` to the `host_envs` array. Remember to add the `WAVS_ENV_` prefix to the variable name:
```
# makefile
SERVICE_CONFIG ?= '{"fuel_limit":100000000,"max_gas":5000000,"host_envs":["WAVS_ENV_MY_API_KEY"],"kv":[],"workflow_id":"default","component_id":"default"}'
```
This configuration is used during local testing with `make wasi-exec` and will also be applied when your service is deployed.
The following example shows how to access a private environment variable in a component:
```rust
let api_key = std::env::var("WAVS_ENV_MY_API_KEY")?;
```
Network requests
Components can make network requests to external APIs using the `wavs-wasi-chain` crate. Since WASI components run in a synchronous environment but network requests are asynchronous, you can use `block_on` from the `wstd` crate to bridge this gap. The `block_on` function allows you to run async code within a synchronous context, which is essential for making HTTP requests in WAVS components.
To learn how to use private environment variables like API keys in a component, see the Private Variables section.
The following dependencies are useful for making HTTP requests from a component. They are added to the component's `Cargo.toml` file:
```toml
[dependencies]
wavs-wasi-chain = { workspace = true } # HTTP utilities
wstd = { workspace = true }            # Runtime utilities (includes block_on)
serde = { workspace = true }           # Serialization
serde_json = { workspace = true }      # JSON handling
```
The following example shows how to make a basic HTTP GET request from a component:
```rust
use wstd::runtime::block_on; // Required for running async code

// Async function for the HTTP request
async fn make_request() -> Result<YourResponseType, String> {
    // Create the request
    let url = "https://api.example.com/endpoint";
    let mut req = http_request_get(&url).map_err(|e| e.to_string())?;

    // Add headers
    req.headers_mut().insert(
        "Accept",
        HeaderValue::from_static("application/json"),
    );

    // Make the request and parse the JSON response
    let json: YourResponseType = fetch_json(req).await.map_err(|e| e.to_string())?;
    Ok(json)
}

// Main component logic that uses block_on
fn process_data() -> Result<YourResponseType, String> {
    // Use block_on to run the async function
    block_on(async move { make_request().await })
}
```
For making POST requests with JSON data, you can use the `http_request_post_json` helper function, which automatically handles JSON serialization and sets the `Content-Type` header to `application/json`:
```rust
async fn make_post_request() -> Result<PostResponse, String> {
    let url = "https://api.example.com/endpoint"; // The URL of the endpoint to make the request to
    let post_data = ("key1", "value1"); // Any serializable data can be passed in

    // Make the POST request and parse the JSON response
    let response: PostResponse = fetch_json(
        http_request_post_json(&url, &post_data).map_err(|e| e.to_string())?,
    )
    .await
    .map_err(|e| e.to_string())?;
    Ok(response)
}

// Main component logic that uses block_on
fn process_data() -> Result<PostResponse, String> {
    // Use block_on to run the async function
    block_on(async move { make_post_request().await })
}
```
Other functions are available in the crate documentation.
Blockchain interactions
Interacting with blockchains like Ethereum requires specific dependencies and setup within your component.
Dependencies
The following dependencies are commonly required in your component's `Cargo.toml` for Ethereum interactions:
```toml
[dependencies]
# Core WAVS blockchain functionality
wit-bindgen-rt = { workspace = true }   # Required for WASI bindings and Guest trait
wavs-wasi-chain = { workspace = true }  # HTTP utilities

# Alloy crates for Ethereum interaction
alloy-sol-types = { workspace = true }  # ABI handling & type generation
alloy-sol-macro = { workspace = true }  # sol! macro for interfaces
alloy-primitives = { workspace = true } # Core primitive types (Address, U256, etc.)
alloy-network = "0.11.1"                # Network trait and Ethereum network type
alloy-provider = { version = "0.11.1", default-features = false, features = ["rpc-api"] } # RPC provider
alloy-rpc-types = "0.11.1"              # RPC type definitions (TransactionRequest, etc.)

# Other useful crates
anyhow = { workspace = true }           # Error handling
serde = { workspace = true }            # Serialization/deserialization
serde_json = { workspace = true }       # JSON handling
```
Chain Configuration
Chain configurations are defined in the root `wavs.toml` file. This allows components to access RPC endpoints and chain IDs without hardcoding them.
```toml
[chains.eth.local]
chain_id = "31337"
ws_endpoint = "ws://localhost:8545"
http_endpoint = "http://localhost:8545"

[chains.eth.mainnet]
chain_id = "1"
ws_endpoint = "wss://mainnet.infura.io/ws/v3/YOUR_INFURA_ID"
http_endpoint = "https://mainnet.infura.io/v3/YOUR_INFURA_ID"
```
Accessing Configuration and Provider
WAVS provides host bindings to get the chain config for a given chain name in the wavs.toml file:
```rust
// Get the chain config for an Ethereum chain
let chain_config = host::get_eth_chain_config(&chain_name)?;

// Get the chain config for a Cosmos chain
let chain_config = host::get_cosmos_chain_config(&chain_name)?;
```
You can then use `wavs-wasi-chain` to create an RPC provider using the `new_eth_provider` function:
```rust
use crate::bindings::host::{get_eth_chain_config, get_cosmos_chain_config}; // Import host functions
use wavs_wasi_chain::ethereum::new_eth_provider;
use alloy_provider::{Provider, RootProvider};
use alloy_network::Ethereum;
use anyhow::Context; // For context() error handling

// Get the chain config for a specific chain defined in wavs.toml
let chain_config = get_eth_chain_config("eth.local") // Use the key from wavs.toml (e.g., "eth.local" or "eth.mainnet")
    .map_err(|e| format!("Failed to get chain config: {}", e))?;

// Create an Alloy provider instance using the HTTP endpoint
let provider: RootProvider<Ethereum> = new_eth_provider::<Ethereum>(
    chain_config
        .http_endpoint
        .context("http_endpoint is missing in chain config")?, // Ensure the endpoint exists
)?;
```
Example: Querying NFT Balance
Here's an example demonstrating how to query the balance of an ERC721 NFT contract for a given owner address.
```rust
use crate::bindings::host::get_eth_chain_config;
use alloy_network::{Ethereum, Network};
use alloy_primitives::{Address, Bytes, TxKind, U256};
use alloy_provider::{Provider, RootProvider};
use alloy_rpc_types::{TransactionInput, eth::TransactionRequest}; // Note: use eth::TransactionRequest
use alloy_sol_types::{sol, SolCall};
use wavs_wasi_chain::ethereum::new_eth_provider;
use anyhow::Context;
use wstd::runtime::block_on; // Required to run async code

// Define the ERC721 interface subset needed
sol! {
    interface IERC721 {
        function balanceOf(address owner) external view returns (uint256);
    }
}

// Function to query NFT ownership (must be async)
pub async fn query_nft_ownership(owner_address: Address, nft_contract: Address) -> Result<bool, String> {
    // 1. Get the chain configuration (using "eth.local" as an example)
    let chain_config = get_eth_chain_config("eth.local")
        .map_err(|e| format!("Failed to get eth.local chain config: {}", e))?;

    // 2. Create the Ethereum provider
    let provider: RootProvider<Ethereum> = new_eth_provider::<Ethereum>(
        chain_config.http_endpoint.context("http_endpoint missing for eth.local")?,
    )
    .map_err(|e| format!("Failed to create provider: {}", e))?; // Handle provider creation error

    // 3. Prepare the contract call using the generated interface
    let balance_call = IERC721::balanceOfCall { owner: owner_address };

    // 4. Construct the transaction request for a read-only call
    let tx = TransactionRequest {
        to: Some(TxKind::Call(nft_contract)), // Specify the contract to call
        input: TransactionInput {
            input: Some(balance_call.abi_encode().into()), // ABI-encoded call data
            data: None, // `data` is deprecated; use `input`
        },
        // Other fields like nonce, gas, and value are not needed for eth_call
        ..Default::default()
    };

    // 5. Execute the read-only call using the provider
    // Note: provider.call() returns the raw bytes result
    let result_bytes = provider.call(&tx).await.map_err(|e| format!("Provider call failed: {}", e))?;

    // 6. Decode the result (balanceOf returns uint256)
    // Ensure the result is exactly 32 bytes for U256::from_be_slice
    if result_bytes.len() != 32 {
        return Err(format!("Unexpected result length: {}", result_bytes.len()));
    }
    let balance = U256::from_be_slice(&result_bytes);

    // 7. Determine ownership based on the balance
    Ok(balance > U256::ZERO)
}

// Example of how to call the async function from the main sync component logic
fn main_logic(owner: Address, contract: Address) -> Result<bool, String> {
    // Use block_on to run the async function
    let is_owner = block_on(async move { query_nft_ownership(owner, contract).await })?;
    Ok(is_owner)
}
```
This example covers:
- Defining the interface: using `sol!` to create Rust bindings for the `balanceOf` function.
- Provider setup: getting the configuration and creating an `alloy` provider.
- Call preparation: encoding the function call data using the generated types.
- Transaction request: building the request for an `eth_call`.
- Execution: using `provider.call()` to interact with the node.
- Decoding: parsing the returned bytes into the expected `U256` type.
- Async handling: using `async fn` and `block_on` for asynchronous network operations within the synchronous component environment.
Visit the wavs-wasi-chain documentation and the Alloy documentation for more detailed information.