# Admin API
URL: /docs/api-reference/admin-api
This page is an overview of the Admin API associated with AvalancheGo.
The Admin API can be used for measuring node health and debugging.
The Admin API is disabled by default for security reasons. To run a node with the Admin API enabled, use the config flag [`--api-admin-enabled=true`](https://build.avax.network/docs/nodes/configure/configs-flags#--api-admin-enabled-boolean).
This API set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).
## Format
This API uses the `json 2.0` RPC format. For details, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls).
## Endpoint
```
/ext/admin
```
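All Admin API methods share the same JSON-RPC 2.0 envelope, POSTed to this endpoint. As a minimal sketch (assuming a local node on the default port, as in the curl examples below; the helper names are illustrative), a small Python helper can build and send these calls:

```python
import json
import urllib.request

# Local node default, as used in the curl examples; adjust host/port as needed.
ADMIN_URL = "http://127.0.0.1:9650/ext/admin"

def make_rpc_payload(method, params=None, request_id=1):
    """Build the JSON-RPC 2.0 request body shared by all Admin API methods."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params or {},
    }).encode()

def call_admin(method, params=None):
    """POST a JSON-RPC call to the Admin API and return the parsed response."""
    req = urllib.request.Request(
        ADMIN_URL,
        data=make_rpc_payload(method, params),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

For example, `call_admin("admin.alias", {"alias": "myAlias", "endpoint": "bc/X"})` mirrors the first curl example below.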
## Methods
### `admin.alias`
Assign an API endpoint an alias, a different endpoint for the API. The original endpoint will still work. This change only affects this node; other nodes will not know about this alias.
**Signature**:
```
admin.alias({endpoint:string, alias:string}) -> {}
```
* `endpoint` is the original endpoint of the API. `endpoint` should only include the part of the endpoint after `/ext/`.
* The API being aliased can now be called at `/ext/alias`.
* `alias` can be at most 512 characters.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin.alias",
"params": {
"alias":"myAlias",
"endpoint":"bc/X"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {}
}
```
Now, calls to the X-Chain can be made to either `/ext/bc/X` or, equivalently, to `/ext/myAlias`.
### `admin.aliasChain`
Give a blockchain an alias, a different name that can be used any place the blockchain's ID is used.
Aliasing a chain can also be done via the [`--chain-aliases-file`](https://build.avax.network/docs/nodes/configure/configs-flags#--chain-aliases-file-string) config flag.
Note that the alias is set for each chain on each node individually. In a multi-node Avalanche L1, the same alias should be configured on each node to use an alias across an Avalanche L1 successfully. Setting an alias for a chain on one node does not register that alias with other nodes automatically.
**Signature**:
```
admin.aliasChain(
{
chain:string,
alias:string
}
) -> {}
```
* `chain` is the blockchain's ID.
* `alias` can now be used in place of the blockchain's ID (in API endpoints, for example.)
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin.aliasChain",
"params": {
"chain":"sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM",
"alias":"myBlockchainAlias"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {}
}
```
Now, instead of interacting with the blockchain whose ID is `sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM` by making API calls to `/ext/bc/sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM`, one can also make calls to `/ext/bc/myBlockchainAlias`.
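Because aliases are set per node, the same `admin.aliasChain` call must be repeated on every node in a multi-node Avalanche L1. A hedged sketch (the node URLs are hypothetical placeholders) that prepares one identical payload per node:

```python
import json

# Hypothetical admin endpoints for each node in the Avalanche L1.
NODE_ADMIN_URLS = [
    "http://node1.internal:9650/ext/admin",
    "http://node2.internal:9650/ext/admin",
]

def alias_chain_payloads(chain_id, alias):
    """Return (url, body) pairs so the identical alias can be POSTed to every node."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "admin.aliasChain",
        "params": {"chain": chain_id, "alias": alias},
    }).encode()
    return [(url, body) for url in NODE_ADMIN_URLS]
```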
### `admin.getChainAliases`
Returns the aliases of the chain.
**Signature**:
```
admin.getChainAliases(
{
chain:string
}
) -> {aliases:string[]}
```
* `chain` is the blockchain's ID.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin.getChainAliases",
"params": {
"chain":"sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"aliases": [
"X",
"avm",
"2eNy1mUFdmaxXNj1eQHUe7Np4gju9sJsEtWQ4MX3ToiNKuADed"
]
},
"id": 1
}
```
### `admin.getLoggerLevel`
Returns log and display levels of loggers.
**Signature**:
```
admin.getLoggerLevel(
{
loggerName:string // optional
}
) -> {
loggerLevels: {
loggerName: {
logLevel: string,
displayLevel: string
}
}
}
```
* `loggerName` is the name of the logger to be returned. This is an optional argument. If not specified, it returns all possible loggers.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin.getLoggerLevel",
"params": {
"loggerName": "C"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"loggerLevels": {
"C": {
"logLevel": "DEBUG",
"displayLevel": "INFO"
}
}
},
"id": 1
}
```
### `admin.loadVMs`
Dynamically loads any virtual machines installed on the node as plugins. See [here](https://build.avax.network/docs/virtual-machines#installing-a-vm) for more information on how to install a virtual machine on a node.
**Signature**:
```
admin.loadVMs() -> {
newVMs: map[string][]string
failedVMs: map[string]string,
}
```
* `failedVMs` is only included in the response if at least one virtual machine fails to be loaded.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin.loadVMs",
"params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"newVMs": {
"tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH": ["foovm"]
},
"failedVMs": {
"rXJsCSEYXg2TehWxCEEGj6JU2PWKTkd6cBdNLjoe2SpsKD9cy": "error message"
}
},
"id": 1
}
```
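When scripting `admin.loadVMs`, remember that `failedVMs` may be absent entirely. A small sketch (the function name is illustrative) that summarizes a result like the one above:

```python
def summarize_load_vms(result):
    """Flatten an admin.loadVMs result into readable lines.

    `failedVMs` is only present when at least one VM failed to load,
    so both keys are read defensively.
    """
    lines = []
    for vm_id, aliases in result.get("newVMs", {}).items():
        lines.append(f"loaded {vm_id} (aliases: {', '.join(aliases) or 'none'})")
    for vm_id, err in result.get("failedVMs", {}).items():
        lines.append(f"FAILED {vm_id}: {err}")
    return lines
```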
### `admin.lockProfile`
Writes a profile of mutex statistics to `lock.profile`.
**Signature**:
```
admin.lockProfile() -> {}
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin.lockProfile",
"params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {}
}
```
### `admin.memoryProfile`
Writes a memory profile of the node to `mem.profile`.
**Signature**:
```
admin.memoryProfile() -> {}
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin.memoryProfile",
"params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {}
}
```
### `admin.setLoggerLevel`
Sets log and display levels of loggers.
**Signature**:
```
admin.setLoggerLevel(
{
loggerName: string, // optional
logLevel: string, // optional
displayLevel: string, // optional
}
) -> {}
```
* `loggerName` is the name of the logger to be changed. This is an optional parameter. If not specified, it changes all possible loggers.
* `logLevel` is the log level of written logs; it can be omitted.
* `displayLevel` is the log level of displayed logs; it can be omitted.
`logLevel` and `displayLevel` cannot both be omitted at the same time.
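The constraint that `logLevel` and `displayLevel` cannot both be omitted can be enforced client-side before the call is sent. A minimal sketch (the helper name is illustrative):

```python
def set_logger_level_params(logger_name=None, log_level=None, display_level=None):
    """Build the params object for admin.setLoggerLevel.

    Rejects calls that omit both logLevel and displayLevel, mirroring the
    API's own constraint; omitted fields are simply left out of the params.
    """
    if log_level is None and display_level is None:
        raise ValueError("logLevel and displayLevel cannot both be omitted")
    params = {}
    if logger_name is not None:
        params["loggerName"] = logger_name
    if log_level is not None:
        params["logLevel"] = log_level
    if display_level is not None:
        params["displayLevel"] = display_level
    return params
```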
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin.setLoggerLevel",
"params": {
"loggerName": "C",
"logLevel": "DEBUG",
"displayLevel": "INFO"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {}
}
```
### `admin.startCPUProfiler`
Start profiling the CPU utilization of the node. To stop, call `admin.stopCPUProfiler`. On stop, writes the profile to `cpu.profile`.
**Signature**:
```
admin.startCPUProfiler() -> {}
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin.startCPUProfiler",
"params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {}
}
```
### `admin.stopCPUProfiler`
Stop the CPU profiler that was previously started.
**Signature**:
```
admin.stopCPUProfiler() -> {}
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin.stopCPUProfiler"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {}
}
```
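Since `admin.startCPUProfiler` and `admin.stopCPUProfiler` must be paired around the workload being measured, it can help to bracket them explicitly. A hedged sketch, assuming a `call_admin(method)` function that POSTs the named JSON-RPC method to `/ext/admin`:

```python
from contextlib import contextmanager

@contextmanager
def cpu_profile(call_admin):
    """Bracket a workload with startCPUProfiler/stopCPUProfiler.

    `call_admin(method)` is assumed to POST the named JSON-RPC method to
    /ext/admin; on exit the node writes the profile to cpu.profile.
    """
    call_admin("admin.startCPUProfiler")
    try:
        yield
    finally:
        call_admin("admin.stopCPUProfiler")
```

For example, `with cpu_profile(my_rpc): run_load_test()` guarantees the stop call is issued even if the workload raises.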
# Health API
URL: /docs/api-reference/health-api
This page is an overview of the Health API associated with AvalancheGo.
The Health API can be used for measuring node health.
This API set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).
## Health Checks
The node periodically runs all health checks, including health checks for each chain.
The frequency at which health checks are run can be specified with the [--health-check-frequency](https://build.avax.network/docs/nodes/configure/configs-flags) flag.
## Filterable Health Checks
The health checks that are run by the node are filterable. You can specify which health checks you want to see by using `tags` filters. Returned results will only include health checks that match the specified tags, along with global health checks like `network`, `database`, etc. When filtered, the returned results do not show the full node health, but only a subset of filtered health checks. This means the node can still be unhealthy in unfiltered checks, even if the returned results show that the node is healthy. AvalancheGo supports using subnetIDs as tags.
## GET Request
To get an HTTP status code response that indicates the node's health, make a `GET` request. If the node is healthy, it will return a `200` status code. If the node is unhealthy, it will return a `503` status code. In-depth information about the node's health is included in the response body.
### Filtering
To filter GET health checks, add a `tag` query parameter to the request. The `tag` parameter is a string. For example, to filter health results by subnetID `29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL`, use the following query:
```sh
curl 'http://localhost:9650/ext/health?tag=29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL'
```
In this example returned results will contain global health checks and health checks that are related to subnetID `29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL`.
**Note**: This filtering can show healthy results even if the node is unhealthy in other Chains/Avalanche L1s.
In order to filter results by multiple tags, use multiple `tag` query parameters. For example, to filter health results by subnetID `29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL` and `28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY` use the following query:
```sh
curl 'http://localhost:9650/ext/health?tag=29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL&tag=28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY'
```
The returned results will include health checks for both subnetIDs as well as global health checks.
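Building these filtered URLs by hand gets error-prone once several tags are involved; repeating the `tag` parameter once per filter is what the API expects. A minimal Python sketch:

```python
from urllib.parse import urlencode

def health_url(base, tags):
    """Build a GET /ext/health URL with one `tag` query parameter per filter tag."""
    query = urlencode([("tag", t) for t in tags])
    return f"{base}/ext/health?{query}" if query else f"{base}/ext/health"
```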
### Endpoints
The available endpoints for GET requests are:
* `/ext/health` returns a holistic report of the status of the node. **Most operators should monitor this status.**
* `/ext/health/health` is the same as `/ext/health`.
* `/ext/health/readiness` returns healthy once the node has finished initializing.
* `/ext/health/liveness` returns healthy once the endpoint is available.
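For monitoring, the status code alone is enough: `200` means healthy, `503` means unhealthy. A hedged sketch (assuming the default local port) that maps the code and performs the GET:

```python
import urllib.error
import urllib.request

def status_is_healthy(code):
    """200 means every check passed; 503 means at least one check is failing."""
    return code == 200

def node_is_healthy(base="http://localhost:9650"):
    """GET /ext/health; urllib raises HTTPError for the unhealthy 503 response."""
    try:
        with urllib.request.urlopen(f"{base}/ext/health") as resp:
            return status_is_healthy(resp.status)
    except urllib.error.HTTPError as e:
        if e.code == 503:
            return False
        raise
```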
## JSON RPC Request
### Format
This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls).
### Endpoint
The endpoint for JSON RPC requests is `/ext/health`.
### Methods
#### `health.health`
This method returns the last set of health check results.
**Example Call**:
```sh
curl -H 'Content-Type: application/json' --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"health.health",
"params": {
"tags": ["11111111111111111111111111111111LpoYY", "29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL"]
}
}' 'http://localhost:9650/ext/health'
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"checks": {
"C": {
"message": {
"engine": {
"consensus": {
"lastAcceptedHeight": 31273749,
"lastAcceptedID": "2Y4gZGzQnu8UjnHod8j1BLewHFVEbzhULPNzqrSWEHkHNqDrYL",
"longestProcessingBlock": "0s",
"processingBlocks": 0
},
"vm": null
},
"networking": {
"percentConnected": 0.9999592612587486
}
},
"timestamp": "2024-03-26T19:44:45.2931-04:00",
"duration": 20375
},
"P": {
"message": {
"engine": {
"consensus": {
"lastAcceptedHeight": 142517,
"lastAcceptedID": "2e1FEPCBEkG2Q7WgyZh1v4nt3DXj1HDbDthyhxdq2Ltg3shSYq",
"longestProcessingBlock": "0s",
"processingBlocks": 0
},
"vm": null
},
"networking": {
"percentConnected": 0.9999592612587486
}
},
"timestamp": "2024-03-26T19:44:45.293115-04:00",
"duration": 8750
},
"X": {
"message": {
"engine": {
"consensus": {
"lastAcceptedHeight": 24464,
"lastAcceptedID": "XuFCsGaSw9cn7Vuz5e2fip4KvP46Xu53S8uDRxaC2QJmyYc3w",
"longestProcessingBlock": "0s",
"processingBlocks": 0
},
"vm": null
},
"networking": {
"percentConnected": 0.9999592612587486
}
},
"timestamp": "2024-03-26T19:44:45.29312-04:00",
"duration": 23291
},
"bootstrapped": {
"message": [],
"timestamp": "2024-03-26T19:44:45.293078-04:00",
"duration": 3375
},
"database": {
"timestamp": "2024-03-26T19:44:45.293102-04:00",
"duration": 1959
},
"diskspace": {
"message": {
"availableDiskBytes": 227332591616
},
"timestamp": "2024-03-26T19:44:45.293106-04:00",
"duration": 3042
},
"network": {
"message": {
"connectedPeers": 284,
"sendFailRate": 0,
"timeSinceLastMsgReceived": "293.098ms",
"timeSinceLastMsgSent": "293.098ms"
},
"timestamp": "2024-03-26T19:44:45.2931-04:00",
"duration": 2333
},
"router": {
"message": {
"longestRunningRequest": "66.90725ms",
"outstandingRequests": 3
},
"timestamp": "2024-03-26T19:44:45.293097-04:00",
"duration": 3542
}
},
"healthy": true
},
"id": 1
}
```
In this example response, every check has passed. So, the node is healthy.
**Response Explanation**:
* `checks` is a list of health check responses.
* A check response may include a `message` with additional context.
* A check response may include an `error` describing why the check failed.
* `timestamp` is the timestamp of the last health check.
* `duration` is the execution duration of the last health check, in nanoseconds.
* `contiguousFailures` is the number of times in a row this check failed.
* `timeOfFirstFailure` is the time this check first failed.
* `healthy` is true if all the health checks are passing.
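When `healthy` is false, the failing checks are the ones carrying an `error` field. A minimal sketch (the function name is illustrative) for extracting them from a `health.health` result:

```python
def failing_checks(result):
    """Return only the entries in `checks` that carry an `error` field.

    On a healthy node this is empty; otherwise these entries (with their
    `contiguousFailures` and `timeOfFirstFailure`) explain the failure.
    """
    return {name: check
            for name, check in result.get("checks", {}).items()
            if "error" in check}
```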
#### `health.readiness`
This method returns the last evaluation of the startup health check results.
**Example Call**:
```sh
curl -H 'Content-Type: application/json' --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"health.readiness",
"params": {
"tags": ["11111111111111111111111111111111LpoYY", "29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL"]
}
}' 'http://localhost:9650/ext/health'
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"checks": {
"bootstrapped": {
"message": [],
"timestamp": "2024-03-26T20:02:45.299114-04:00",
"duration": 2834
}
},
"healthy": true
},
"id": 1
}
```
In this example response, every check has passed. So, the node has finished the startup process.
**Response Explanation**:
* `checks` is a list of health check responses.
* A check response may include a `message` with additional context.
* A check response may include an `error` describing why the check failed.
* `timestamp` is the timestamp of the last health check.
* `duration` is the execution duration of the last health check, in nanoseconds.
* `contiguousFailures` is the number of times in a row this check failed.
* `timeOfFirstFailure` is the time this check first failed.
* `healthy` is true if all the health checks are passing.
#### `health.liveness`
This method returns healthy once the endpoint is available.
**Example Call**:
```sh
curl -H 'Content-Type: application/json' --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"health.liveness"
}' 'http://localhost:9650/ext/health'
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"checks": {},
"healthy": true
},
"id": 1
}
```
In this example response, the node was able to handle the request and mark the service as healthy.
**Response Explanation**:
* `checks` is an empty list.
* `healthy` is true.
# Index API
URL: /docs/api-reference/index-api
This page is an overview of the Index API associated with AvalancheGo.
AvalancheGo can be configured to run with an indexer. That is, it saves (indexes) every container (a block, vertex or transaction) it accepts on the X-Chain, P-Chain and C-Chain. To run AvalancheGo with indexing enabled, set command line flag [--index-enabled](https://build.avax.network/docs/nodes/configure/configs-flags#--index-enabled-boolean) to true.
**AvalancheGo will only index containers that are accepted when running with `--index-enabled` set to true.** To ensure your node has a complete index, run a node with a fresh database and `--index-enabled` set to true. The node will accept every block, vertex and transaction in the network history during bootstrapping, ensuring your index is complete.
It is OK to turn off your node if it is running with indexing enabled. If it restarts with indexing still enabled, it will accept all containers that were accepted while it was offline. The indexer should never fail to index an accepted block, vertex or transaction.
Indexed containers (that is, accepted blocks, vertices and transactions) are timestamped with the time at which the node accepted that container. Note that if the container was indexed during bootstrapping, other nodes may have accepted the container much earlier. Every container indexed during bootstrapping will be timestamped with the time at which the node bootstrapped, not when it was first accepted by the network.
If `--index-enabled` is changed from `true` to `false`, AvalancheGo won't start, as doing so would cause a previously complete index to become incomplete, unless the user explicitly allows it with `--index-allow-incomplete`. This protects you from accidentally running with indexing disabled after previously running with it enabled, which would result in an incomplete index.
This document shows how to query data from AvalancheGo's Index API. The Index API is only available when running with `--index-enabled`.
## Go Client
There is a Go implementation of an Index API client. See documentation [here](https://pkg.go.dev/github.com/ava-labs/avalanchego/indexer#Client). This client can be used inside a Go program to connect to an AvalancheGo node that is running with the Index API enabled and make calls to the Index API.
## Format
This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls).
## Endpoints
Each chain has one or more indices. To see if a C-Chain block is accepted, for example, send an API call to the C-Chain block index; to see if an X-Chain vertex is accepted, send an API call to the X-Chain vertex index.
### C-Chain Blocks
```
/ext/index/C/block
```
### P-Chain Blocks
```
/ext/index/P/block
```
### X-Chain Transactions
```
/ext/index/X/tx
```
### X-Chain Blocks
```
/ext/index/X/block
```
To ensure historical data can be accessed, the `/ext/index/X/vtx` endpoint is still accessible, even though it is no longer populated with chain data since the Cortina activation. If you are using `v1.10.0` or higher, you should migrate to the `/ext/index/X/block` endpoint.
## Methods
### `index.getContainerByID`
Get container by ID.
**Signature**:
```
index.getContainerByID({
id: string,
encoding: string
}) -> {
id: string,
bytes: string,
timestamp: string,
encoding: string,
index: string
}
```
**Request**:
* `id` is the container's ID
* `encoding` is `"hex"` only.
**Response**:
* `id` is the container's ID
* `bytes` is the byte representation of the container
* `timestamp` is the time at which this node accepted the container
* `encoding` is `"hex"` only.
* `index` is how many containers were accepted in this index before this one
**Example Call**:
```sh
curl --location --request POST 'localhost:9650/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"method": "index.getContainerByID",
"params": {
"id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY",
"encoding":"hex"
},
"id": 1
}'
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY",
"bytes": "0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108",
"timestamp": "2021-04-02T15:34:00.262979-07:00",
"encoding": "hex",
"index": "0"
}
}
```
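The `bytes` field is a `0x`-prefixed hex string; to work with the raw container, decode it. A minimal sketch (plain hex decoding only; chain-specific deserialization of the resulting bytes is out of scope here):

```python
def decode_container_bytes(container):
    """Decode the 0x-prefixed hex `bytes` field of an Index API container."""
    hex_str = container["bytes"]
    if hex_str.startswith("0x"):
        hex_str = hex_str[2:]
    return bytes.fromhex(hex_str)
```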
### `index.getContainerByIndex`
Get container by index. The first container accepted is at index 0, the second is at index 1, etc.
**Signature**:
```
index.getContainerByIndex({
index: uint64,
encoding: string
}) -> {
id: string,
bytes: string,
timestamp: string,
encoding: string,
index: string
}
```
**Request**:
* `index` is how many containers were accepted in this index before this one
* `encoding` is `"hex"` only.
**Response**:
* `id` is the container's ID
* `bytes` is the byte representation of the container
* `timestamp` is the time at which this node accepted the container
* `index` is how many containers were accepted in this index before this one
* `encoding` is `"hex"` only.
**Example Call**:
```sh
curl --location --request POST 'localhost:9650/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"method": "index.getContainerByIndex",
"params": {
"index":0,
"encoding": "hex"
},
"id": 1
}'
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY",
"bytes": "0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108",
"timestamp": "2021-04-02T15:34:00.262979-07:00",
"encoding": "hex",
"index": "0"
}
}
```
### `index.getContainerRange`
Returns the transactions at index \[`startIndex`], \[`startIndex+1`], ... , \[`startIndex+n-1`], where `n` is `numToFetch`:
* If `numToFetch` == 0, returns an empty response (that is, null).
* If `startIndex` is greater than the last accepted index, returns an error (unless one of the other cases applies).
* If `numToFetch` > `MaxFetchedByRange`, returns an error.
* If the range extends past the last accepted container, returns the containers fetched before running out.
* `numToFetch` must be in `[0, 1024]`.
**Signature**:
```
index.getContainerRange({
startIndex: uint64,
numToFetch: uint64,
encoding: string
}) -> []{
id: string,
bytes: string,
timestamp: string,
encoding: string,
index: string
}
```
**Request**:
* `startIndex` is the beginning index
* `numToFetch` is the number of containers to fetch
* `encoding` is `"hex"` only.
**Response**:
* `id` is the container's ID
* `bytes` is the byte representation of the container
* `timestamp` is the time at which this node accepted the container
* `encoding` is `"hex"` only.
* `index` is how many containers were accepted in this index before this one
**Example Call**:
```sh
curl --location --request POST 'localhost:9650/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"method": "index.getContainerRange",
"params": {
"startIndex":0,
"numToFetch":100,
"encoding": "hex"
},
"id": 1
}'
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": [
{
"id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY",
"bytes": "0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108",
"timestamp": "2021-04-02T15:34:00.262979-07:00",
"encoding": "hex",
"index": "0"
}
]
}
```
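Because `numToFetch` is capped at 1024 and a short batch signals the end of the index, walking the whole index means paging. A hedged sketch, assuming a `fetch_range(start_index, num_to_fetch)` function that returns the decoded `result` list from `index.getContainerRange`:

```python
MAX_FETCHED_BY_RANGE = 1024  # numToFetch must be in [0, 1024]

def iter_containers(fetch_range, batch_size=MAX_FETCHED_BY_RANGE):
    """Yield every accepted container by paging index.getContainerRange.

    `fetch_range(start_index, num_to_fetch)` is assumed to return the list
    of containers; a batch shorter than requested means the index is done.
    """
    start = 0
    while True:
        batch = fetch_range(start, batch_size) or []
        yield from batch
        if len(batch) < batch_size:
            return
        start += len(batch)
```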
### `index.getIndex`
Get a container's index.
**Signature**:
```
index.getIndex({
id: string,
encoding: string
}) -> {
index: string
}
```
**Request**:
* `id` is the ID of the container to fetch
* `encoding` is `"hex"` only.
**Response**:
* `index` is how many containers were accepted in this index before this one
**Example Call**:
```sh
curl --location --request POST 'localhost:9650/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"method": "index.getIndex",
"params": {
"id":"6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY",
"encoding": "hex"
},
"id": 1
}'
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"index": "0"
},
"id": 1
}
```
### `index.getLastAccepted`
Get the most recently accepted container.
**Signature**:
```
index.getLastAccepted({
encoding:string
}) -> {
id: string,
bytes: string,
timestamp: string,
encoding: string,
index: string
}
```
**Request**:
* `encoding` is `"hex"` only.
**Response**:
* `id` is the container's ID
* `bytes` is the byte representation of the container
* `timestamp` is the time at which this node accepted the container
* `encoding` is `"hex"` only.
**Example Call**:
```sh
curl --location --request POST 'localhost:9650/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"method": "index.getLastAccepted",
"params": {
"encoding": "hex"
},
"id": 1
}'
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY",
"bytes": "0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108",
"timestamp": "2021-04-02T15:34:00.262979-07:00",
"encoding": "hex",
"index": "0"
}
}
```
### `index.isAccepted`
Returns true if the container is in this index.
**Signature**:
```
index.isAccepted({
id: string,
encoding: string
}) -> {
isAccepted: bool
}
```
**Request**:
* `id` is the ID of the container to fetch
* `encoding` is `"hex"` only.
**Response**:
* `isAccepted` indicates whether the container has been accepted
**Example Call**:
```sh
curl --location --request POST 'localhost:9650/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"method": "index.isAccepted",
"params": {
"id":"6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY",
"encoding": "hex"
},
"id": 1
}'
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"isAccepted": true
},
"id": 1
}
```
## Example: Iterating Through X-Chain Transactions
Here is an example of how to iterate through all transactions on the X-Chain.
You can use the Index API to get the ID of every transaction that has been accepted on the X-Chain, and use the X-Chain API method `avm.getTx` to get a human-readable representation of the transaction.
To get an X-Chain transaction by its index (the order in which it was accepted), use the Index API method [index.getContainerByIndex](#indexgetcontainerbyindex).
For example, to get the second transaction (note that `"index":1`) accepted on the X-Chain, do:
```sh
curl --location --request POST 'https://indexer-demo.avax.network/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"method": "index.getContainerByIndex",
"params": {
"encoding":"hex",
"index":1
},
"id": 1
}'
```
This returns the second transaction accepted in the X-Chain's history, including its ID. To get the third transaction on the X-Chain, use `"index":2`, and so on.
The above API call gives the response below:
```json
{
"jsonrpc": "2.0",
"result": {
"id": "ZGYTSU8w3zUP6VFseGC798vA2Vnxnfj6fz1QPfA9N93bhjJvo",
"bytes": "0x00000000000000000001ed5f38341e436e5d46e2bb00b45d62ae97d1b050c64bc634ae10626739e35c4b0000000221e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000000129f6afc0000000000000000000000001000000017416792e228a765c65e2d76d28ab5a16d18c342f21e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff0000000700000222afa575c00000000000000000000000010000000187d6a6dd3cd7740c8b13a410bea39b01fa83bb3e000000016f375c785edb28d52edb59b54035c96c198e9d80f5f5f5eee070592fe9465b8d0000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff0000000500000223d9ab67c0000000010000000000000000000000010000000900000001beb83d3d29f1247efb4a3a1141ab5c966f46f946f9c943b9bc19f858bd416d10060c23d5d9c7db3a0da23446b97cd9cf9f8e61df98e1b1692d764c84a686f5f801a8da6e40",
"timestamp": "2021-11-04T00:42:55.01643414Z",
"encoding": "hex",
"index": "1"
},
"id": 1
}
```
The ID of this transaction is `ZGYTSU8w3zUP6VFseGC798vA2Vnxnfj6fz1QPfA9N93bhjJvo`.
To get the transaction by its ID, use API method `avm.getTx`:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avm.getTx",
"params" :{
"txID":"ZGYTSU8w3zUP6VFseGC798vA2Vnxnfj6fz1QPfA9N93bhjJvo",
"encoding": "json"
}
}' -H 'content-type:application/json;' https://api.avax.network/ext/bc/X
```
**Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"tx": {
"unsignedTx": {
"networkID": 1,
"blockchainID": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM",
"outputs": [
{
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"output": {
"addresses": ["X-avax1wst8jt3z3fm9ce0z6akj3266zmgccdp03hjlaj"],
"amount": 4999000000,
"locktime": 0,
"threshold": 1
}
},
{
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"output": {
"addresses": ["X-avax1slt2dhfu6a6qezcn5sgtagumq8ag8we75f84sw"],
"amount": 2347999000000,
"locktime": 0,
"threshold": 1
}
}
],
"inputs": [
{
"txID": "qysTYUMCWdsR3MctzyfXiSvoSf6evbeFGRLLzA4j2BjNXTknh",
"outputIndex": 0,
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"input": {
"amount": 2352999000000,
"signatureIndices": [0]
}
}
],
"memo": "0x"
},
"credentials": [
{
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"credential": {
"signatures": [
"0xbeb83d3d29f1247efb4a3a1141ab5c966f46f946f9c943b9bc19f858bd416d10060c23d5d9c7db3a0da23446b97cd9cf9f8e61df98e1b1692d764c84a686f5f801"
]
}
}
]
},
"encoding": "json"
},
"id": 1
}
```
# Introduction
URL: /docs/api-reference
Comprehensive reference documentation for Avalanche APIs.
# Info API
URL: /docs/api-reference/info-api
This page is an overview of the Info API associated with AvalancheGo.
The Info API can be used to access basic information about an Avalanche node.
## Format
This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls).
## Endpoint
```
/ext/info
```
## Methods
### `info.acps`
Returns peer preferences for Avalanche Community Proposals (ACPs).
**Signature**:
```
info.acps() -> {
acps: map[uint32]{
supportWeight: uint64
supporters: set[string]
objectWeight: uint64
objectors: set[string]
abstainWeight: uint64
}
}
```
**Example Call**:
```sh
curl -sX POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.acps",
"params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"acps": {
"23": {
"supportWeight": "0",
"supporters": [],
"objectWeight": "0",
"objectors": [],
"abstainWeight": "161147778098286584"
},
"24": {
"supportWeight": "0",
"supporters": [],
"objectWeight": "0",
"objectors": [],
"abstainWeight": "161147778098286584"
},
"25": {
"supportWeight": "0",
"supporters": [],
"objectWeight": "0",
"objectors": [],
"abstainWeight": "161147778098286584"
},
"30": {
"supportWeight": "0",
"supporters": [],
"objectWeight": "0",
"objectors": [],
"abstainWeight": "161147778098286584"
},
"31": {
"supportWeight": "0",
"supporters": [],
"objectWeight": "0",
"objectors": [],
"abstainWeight": "161147778098286584"
},
"41": {
"supportWeight": "0",
"supporters": [],
"objectWeight": "0",
"objectors": [],
"abstainWeight": "161147778098286584"
},
"62": {
"supportWeight": "0",
"supporters": [],
"objectWeight": "0",
"objectors": [],
"abstainWeight": "161147778098286584"
}
}
},
"id": 1
}
```
### `info.isBootstrapped`
Check whether a given chain has finished bootstrapping.
**Signature**:
```
info.isBootstrapped({chain: string}) -> {isBootstrapped: bool}
```
`chain` is the ID or alias of a chain.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.isBootstrapped",
"params": {
"chain":"X"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"isBootstrapped": true
},
"id": 1
}
```
### `info.getBlockchainID`
Given a blockchain's alias, get its ID. (See [`admin.aliasChain`](https://build.avax.network/docs/api-reference/admin-api#adminaliaschain).)
**Signature**:
```
info.getBlockchainID({alias:string}) -> {blockchainID:string}
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getBlockchainID",
"params": {
"alias":"X"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"blockchainID": "sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM"
}
}
```
### `info.getNetworkID`
Get the ID of the network this node is participating in.
**Signature**:
```
info.getNetworkID() -> { networkID: int }
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNetworkID"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"networkID": "2"
}
}
```
Network ID values:
* `1` = Mainnet
* `5` = Fuji (testnet)
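These well-known IDs can be mapped to names in scripts; a minimal sketch (the helper function itself is illustrative, using only the IDs documented above):

```shell
# Map well-known Avalanche network IDs to network names.
network_name() {
  case "$1" in
    1) echo "Mainnet" ;;
    5) echo "Fuji" ;;
    *) echo "other/local" ;;
  esac
}
network_name 5   # prints: Fuji
```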
### `info.getNetworkName`
Get the name of the network this node is participating in.
**Signature**:
```
info.getNetworkName() -> { networkName:string }
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNetworkName"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"networkName": "local"
}
}
```
### `info.getNodeID`
Get the ID, the BLS key, and the proof of possession (BLS signature) of this node.
This endpoint is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).
**Signature**:
```
info.getNodeID() -> {
nodeID: string,
nodePOP: {
publicKey: string,
proofOfPossession: string
}
}
```
* `nodeID` is this node's unique identifier, used when registering the node as a validator on the Primary Network.
* `nodePOP` is this node's BLS public key and proof of possession. Nodes must register a BLS key to act as a validator on the Primary Network. Your node's proof of possession is logged on startup and is accessible over this endpoint.
* `publicKey` is the 48 byte hex representation of the BLS key.
* `proofOfPossession` is the 96 byte hex representation of the BLS signature.
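As a quick sanity check, these byte lengths can be verified directly from the hex strings in the example response below (each byte is two hex characters); a minimal shell sketch using the example `publicKey`:

```shell
# Verify the documented length of the BLS public key: 48 bytes
# should be 96 hex characters after stripping the "0x" prefix.
pk="0x8f95423f7142d00a48e1014a3de8d28907d420dc33b3052a6dee03a3f2941a393c2351e354704ca66a3fc29870282e15"
hex=${pk#0x}            # drop the "0x" prefix
echo $(( ${#hex} / 2 )) # prints: 48
```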
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNodeID"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD",
"nodePOP": {
"publicKey": "0x8f95423f7142d00a48e1014a3de8d28907d420dc33b3052a6dee03a3f2941a393c2351e354704ca66a3fc29870282e15",
"proofOfPossession": "0x86a3ab4c45cfe31cae34c1d06f212434ac71b1be6cfe046c80c162e057614a94a5bc9f1ded1a7029deb0ba4ca7c9b71411e293438691be79c2dbf19d1ca7c3eadb9c756246fc5de5b7b89511c7d7302ae051d9e03d7991138299b5ed6a570a98"
}
},
"id": 1
}
```
### `info.getNodeIP`
Get the IP of this node.
This endpoint is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).
**Signature**:
```
info.getNodeIP() -> {ip: string}
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNodeIP"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"ip": "192.168.1.1:9651"
},
"id": 1
}
```
### `info.getNodeVersion`
Get the version of this node.
**Signature**:
```
info.getNodeVersion() -> {
version: string,
databaseVersion: string,
gitCommit: string,
vmVersions: map[string]string,
rpcProtocolVersion: string,
}
```
where:
* `version` is this node's version
* `databaseVersion` is the version of the database this node is using
* `gitCommit` is the Git commit that this node was built from
* `vmVersions` is a map where each key is the name of a VM and each value is the version of that VM that this node runs
* `rpcProtocolVersion` is the RPCChainVM protocol version
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getNodeVersion"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"version": "avalanche/1.9.1",
"databaseVersion": "v1.4.5",
"rpcProtocolVersion": "18",
"gitCommit": "79cd09ba728e1cecef40acd60702f0a2d41ea404",
"vmVersions": {
"avm": "v1.9.1",
"evm": "v0.11.1",
"platform": "v1.9.1"
}
},
"id": 1
}
```
### `info.getTxFee`
Deprecated as of [v1.12.2](https://github.com/ava-labs/avalanchego/releases/tag/v1.12.2).
Get the fees of the network.
**Signature**:
```
info.getTxFee() ->
{
txFee: uint64,
createAssetTxFee: uint64,
createSubnetTxFee: uint64,
transformSubnetTxFee: uint64,
createBlockchainTxFee: uint64,
addPrimaryNetworkValidatorFee: uint64,
addPrimaryNetworkDelegatorFee: uint64,
addSubnetValidatorFee: uint64,
addSubnetDelegatorFee: uint64
}
```
* `txFee` is the default fee for issuing X-Chain transactions.
* `createAssetTxFee` is the fee for issuing a `CreateAssetTx` on the X-Chain.
* `createSubnetTxFee` is no longer used.
* `transformSubnetTxFee` is no longer used.
* `createBlockchainTxFee` is no longer used.
* `addPrimaryNetworkValidatorFee` is no longer used.
* `addPrimaryNetworkDelegatorFee` is no longer used.
* `addSubnetValidatorFee` is no longer used.
* `addSubnetDelegatorFee` is no longer used.
All fees are denominated in nAVAX.
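For reference, 1 AVAX equals 10^9 nAVAX, so the default `txFee` of 1,000,000 nAVAX is 0.001 AVAX; a quick shell conversion:

```shell
# Convert a fee from nAVAX to AVAX (1 AVAX = 10^9 nAVAX).
TX_FEE_NAVAX=1000000
awk -v n="$TX_FEE_NAVAX" 'BEGIN { printf "%.3f AVAX\n", n / 1e9 }'
# prints: 0.001 AVAX
```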
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getTxFee"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"txFee": "1000000",
"createAssetTxFee": "10000000",
"createSubnetTxFee": "1000000000",
"transformSubnetTxFee": "10000000000",
"createBlockchainTxFee": "1000000000",
"addPrimaryNetworkValidatorFee": "0",
"addPrimaryNetworkDelegatorFee": "0",
"addSubnetValidatorFee": "1000000",
"addSubnetDelegatorFee": "1000000"
}
}
```
### `info.getVMs`
Get the virtual machines installed on this node.
This endpoint is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).
**Signature**:
```
info.getVMs() -> {
vms: map[string][]string
}
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.getVMs",
"params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"vms": {
"jvYyfQTxGMJLuGWa55kdP2p2zSUYsQ5Raupu4TW34ZAUBAbtq": ["avm"],
"mgj786NP7uDwBCcq6YwThhaN8FLyybkCa4zBWTQbNgmK6k9A6": ["evm"],
"qd2U4HDWUvMrVUeTcCHp6xH3Qpnn1XbU5MDdnBoiifFqvgXwT": ["nftfx"],
"rWhpuQPF1kb72esV2momhMuTYGkEb1oL29pt2EBXWmSy4kxnT": ["platform"],
"rXJsCSEYXg2TehWxCEEGj6JU2PWKTkd6cBdNLjoe2SpsKD9cy": ["propertyfx"],
"spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ": ["secp256k1fx"]
}
},
"id": 1
}
```
### `info.peers`
Get a description of peer connections.
**Signature**:
```
info.peers({
nodeIDs: string[] // optional
}) ->
{
numPeers: int,
peers:[]{
ip: string,
publicIP: string,
nodeID: string,
version: string,
lastSent: string,
lastReceived: string,
benched: string[],
observedUptime: int,
}
}
```
* `nodeIDs` is an optional parameter specifying which nodes' descriptions should be returned. If this parameter is left empty, descriptions for all active connections are returned. If the node is not connected to a specified NodeID, that NodeID is omitted from the response.
* `ip` is the remote IP of the peer.
* `publicIP` is the public IP of the peer.
* `nodeID` is the prefixed Node ID of the peer.
* `version` is the version of AvalancheGo the peer is running.
* `lastSent` is the timestamp of the last message sent to the peer.
* `lastReceived` is the timestamp of the last message received from the peer.
* `benched` lists the chain IDs on which the peer is currently benched.
* `observedUptime` is this node's primary network uptime, observed by the peer.
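With a saved response, per-peer fields can be summarized with standard tools; a minimal sketch counting peers by version (the sample JSON below is an abbreviated, illustrative response body, not real node output):

```shell
# Count connected peers per reported version from a saved info.peers response.
# peers.json holds an abbreviated, illustrative sample of the response shape.
cat > peers.json <<'EOF'
{"result":{"peers":[{"version":"avalanche/1.9.4"},{"version":"avalanche/1.9.4"},{"version":"avalanche/1.9.3"}]}}
EOF
# Extract each "version" field, then tally duplicates.
grep -o '"version":"[^"]*"' peers.json | sort | uniq -c
```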
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.peers",
"params": {
"nodeIDs": []
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"numPeers": 3,
"peers": [
{
"ip": "206.189.137.87:9651",
"publicIP": "206.189.137.87:9651",
"nodeID": "NodeID-8PYXX47kqLDe2wD4oPbvRRchcnSzMA4J4",
"version": "avalanche/1.9.4",
"lastSent": "2020-06-01T15:23:02Z",
"lastReceived": "2020-06-01T15:22:57Z",
"benched": [],
"observedUptime": "99",
"trackedSubnets": [],
"benched": []
},
{
"ip": "158.255.67.151:9651",
"publicIP": "158.255.67.151:9651",
"nodeID": "NodeID-C14fr1n8EYNKyDfYixJ3rxSAVqTY3a8BP",
"version": "avalanche/1.9.4",
"lastSent": "2020-06-01T15:23:02Z",
"lastReceived": "2020-06-01T15:22:34Z",
"benched": [],
"observedUptime": "75",
"trackedSubnets": [
"29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL"
],
"benched": []
},
{
"ip": "83.42.13.44:9651",
"publicIP": "83.42.13.44:9651",
"nodeID": "NodeID-LPbcSMGJ4yocxYxvS2kBJ6umWeeFbctYZ",
"version": "avalanche/1.9.3",
"lastSent": "2020-06-01T15:23:02Z",
"lastReceived": "2020-06-01T15:22:55Z",
"benched": [],
"observedUptime": "95",
"trackedSubnets": [],
"benched": []
}
]
}
}
```
### `info.uptime`
Returns the network's observed uptime of this node. This is the only reliable source of data for your node's uptime; other sources may rely on incomplete information.
**Signature**:
```
info.uptime() ->
{
rewardingStakePercentage: float64,
weightedAveragePercentage: float64
}
```
* `rewardingStakePercentage` is the percent of stake which thinks this node is above the uptime requirement.
* `weightedAveragePercentage` is the stake-weighted average of all observed uptimes for this node.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.uptime"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"rewardingStakePercentage": "100.0000",
"weightedAveragePercentage": "99.0000"
}
}
```
#### Example Avalanche L1 Call
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.uptime",
"params" :{
"subnetID":"29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
#### Example Avalanche L1 Response
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"rewardingStakePercentage": "74.0741",
"weightedAveragePercentage": "72.4074"
}
}
```
### `info.upgrades`
Returns the upgrade history and configuration of the network.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"info.upgrades"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"apricotPhase1Time": "2020-12-05T05:00:00Z",
"apricotPhase2Time": "2020-12-05T05:00:00Z",
"apricotPhase3Time": "2020-12-05T05:00:00Z",
"apricotPhase4Time": "2020-12-05T05:00:00Z",
"apricotPhase4MinPChainHeight": 0,
"apricotPhase5Time": "2020-12-05T05:00:00Z",
"apricotPhasePre6Time": "2020-12-05T05:00:00Z",
"apricotPhase6Time": "2020-12-05T05:00:00Z",
"apricotPhasePost6Time": "2020-12-05T05:00:00Z",
"banffTime": "2020-12-05T05:00:00Z",
"cortinaTime": "2020-12-05T05:00:00Z",
"cortinaXChainStopVertexID": "11111111111111111111111111111111LpoYY",
"durangoTime": "2020-12-05T05:00:00Z",
"etnaTime": "2024-10-09T20:00:00Z",
"fortunaTime": "9999-12-01T05:00:00Z",
"graniteTime": "9999-12-01T05:00:00Z"
},
"id": 1
}
```
# Keystore API [Deprecated]
URL: /docs/api-reference/keystore-api
This page is an overview of the Keystore API associated with AvalancheGo.
Because the node operator has access to your plain-text password, you should only create a keystore user on a node that you operate. If that node is breached, you could lose all your tokens. Keystore APIs are not recommended for use on Mainnet.
Every node has a built-in keystore. Clients create users on the keystore, which act as identities to be used when interacting with blockchains. A keystore exists at the node level, so if you create a user on a node it exists *only* on that node. However, users may be imported and exported using this API.
For validation and cross-chain transfer on the Mainnet, you should issue transactions through [AvalancheJS](https://github.com/ava-labs/avalanchejs). That way, control keys for your funds won't be stored on the node, which significantly lowers the risk should a computer running a node be compromised. See the following docs for details:
1. Transfer AVAX Tokens Between Chains:
* C-Chain: [export](https://github.com/ava-labs/avalanchejs/blob/master/examples/c-chain/export.ts) and [import](https://github.com/ava-labs/avalanchejs/blob/master/examples/c-chain/import.ts)
* P-Chain: [export](https://github.com/ava-labs/avalanchejs/blob/master/examples/p-chain/export.ts) and [import](https://github.com/ava-labs/avalanchejs/blob/master/examples/p-chain/import.ts)
* X-Chain: [export](https://github.com/ava-labs/avalanchejs/blob/master/examples/x-chain/export.ts) and [import](https://github.com/ava-labs/avalanchejs/blob/master/examples/x-chain/import.ts)
2. [Add a Node to the Validator Set](https://build.avax.network/docs/nodes/validate/node-validator)
This API set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).
## Format
This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls).
## Endpoint
```
/ext/keystore
```
## Methods
### `keystore.createUser`
Create a new user with the specified username and password.
**Signature**:
```
keystore.createUser(
{
username:string,
password:string
}
) -> {}
```
* `username` and `password` can be at most 1024 characters.
* Your request will be rejected if `password` is too weak. `password` should be at least 8 characters and contain upper and lower case letters as well as numbers and symbols.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"keystore.createUser",
"params" :{
"username":"myUsername",
"password":"myPassword"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/keystore
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {}
}
```
### `keystore.deleteUser`
Deprecated as of [v1.9.12](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
Delete a user.
**Signature**:
```
keystore.deleteUser({ username: string, password:string }) -> {}
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"keystore.deleteUser",
"params" : {
"username":"myUsername",
"password":"myPassword"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/keystore
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {}
}
```
### `keystore.exportUser`
Deprecated as of [v1.9.12](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
Export a user. The user can be imported to another node with [`keystore.importUser`](https://build.avax.network/docs/api-reference/keystore-api#keystoreimportuser). The user's password remains encrypted.
**Signature**:
```
keystore.exportUser(
{
username:string,
password:string,
encoding:string //optional
}
) -> {
user:string,
encoding:string
}
```
`encoding` specifies the encoding of the returned `user` string. `hex` is the only valid value.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"keystore.exportUser",
"params" :{
"username":"myUsername",
"password":"myPassword"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/keystore
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"user": "7655a29df6fc2747b0874e1148b423b954a25fcdb1f170d0ec8eb196430f7001942ce55b02a83b1faf50a674b1e55bfc00000000",
"encoding": "hex"
}
}
```
### `keystore.importUser`
Deprecated as of [v1.9.12](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
Import a user. `password` must match the user's password. `username` doesn't have to match the username `user` had when it was exported.
**Signature**:
```
keystore.importUser(
{
username:string,
password:string,
user:string,
encoding:string //optional
}
) -> {}
```
`encoding` specifies the encoding of the `user` string. `hex` is the only valid value.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"keystore.importUser",
"params" :{
"username":"myUsername",
"password":"myPassword",
"user" :"0x7655a29df6fc2747b0874e1148b423b954a25fcdb1f170d0ec8eb196430f7001942ce55b02a83b1faf50a674b1e55bfc000000008cf2d869"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/keystore
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {}
}
```
### `keystore.listUsers`
Deprecated as of [v1.9.12](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
List the users in this keystore.
**Signature**:
```
keystore.listUsers() -> { users: []string }
```
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"keystore.listUsers"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/keystore
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"users": ["myUsername"]
}
}
```
# Metrics API
URL: /docs/api-reference/metrics-api
This page is an overview of the Metrics API associated with AvalancheGo.
The Metrics API allows clients to get statistics about a node's health and performance.
This API set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).
## Endpoint
```
/ext/metrics
```
## Usage
To get the node metrics:
```sh
curl -X POST 127.0.0.1:9650/ext/metrics
```
## Format
This API produces Prometheus compatible metrics. See [here](https://prometheus.io/docs/instrumenting/exposition_formats) for information on Prometheus' formatting.
[Here](https://build.avax.network/docs/nodes/maintain/monitoring) is a tutorial that shows how to set up Prometheus and Grafana to monitor AvalancheGo node using the Metrics API.
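Because the output is plain Prometheus text, it can be sliced with standard shell tools; a minimal sketch filtering a dump by metric-name prefix (the sample lines and prefix below are illustrative, not actual AvalancheGo metric names):

```shell
# Filter a Prometheus text-format dump by metric prefix.
# metrics.txt holds illustrative sample lines, not real node output.
cat > metrics.txt <<'EOF'
# HELP avalanche_network_peers Number of connected peers
avalanche_network_peers 25
avalanche_db_reads 1024
EOF
# Keep only sample lines whose metric name starts with the prefix;
# comment lines begin with '#' and are excluded by the anchor.
grep '^avalanche_network' metrics.txt
```

Against a live node, the same filter would be piped from `curl -s 127.0.0.1:9650/ext/metrics`.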
# ProposerVM API
URL: /docs/api-reference/proposervm-api
This page is an overview of the ProposerVM API associated with AvalancheGo.
The ProposerVM API allows clients to fetch information about a chain's Snowman++ wrapper information.
## Endpoint
```text
/ext/bc/{blockchainID}/proposervm
```
## Format
This API uses the `JSON-RPC 2.0` RPC format.
## Methods
### `proposervm.getProposedHeight`
Returns this node's current proposer VM height.
**Signature:**
```
proposervm.getProposedHeight() ->
{
height: int,
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "proposervm.getProposedHeight",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P/proposervm
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"height": "56"
},
"id": 1
}
```
### `proposervm.getCurrentEpoch`
Returns the current epoch information.
**Signature:**
```
proposervm.getCurrentEpoch() ->
{
number: int,
startTime: int,
pChainHeight: int
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "proposervm.getCurrentEpoch",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P/proposervm
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"number": "56",
"startTime":"1755802182",
"pChainHeight": "21857141"
},
"id": 1
}
```
# Subnet-EVM API
URL: /docs/api-reference/subnet-evm-api
This page describes the API endpoints available for Subnet-EVM based blockchains.
[Subnet-EVM](https://github.com/ava-labs/subnet-evm) APIs are identical to
[Coreth](https://build.avax.network/docs/api-reference/c-chain/api) C-Chain APIs, except for the Avalanche-specific APIs
that start with `avax`. Subnet-EVM also supports standard Ethereum APIs. For more
information about Coreth APIs, see [GitHub](https://github.com/ava-labs/coreth).
Subnet-EVM has some additional APIs that are not available in Coreth.
## `eth_feeConfig`
Subnet-EVM comes with an API request for getting fee config at a specific block. You can use this
API to check your activated fee config.
**Signature:**
```bash
eth_feeConfig([blk BlkNrOrHash]) -> {feeConfig: json}
```
* `blk` is the block number or hash at which to retrieve the fee config. Defaults to the latest block if omitted.
**Example Call:**
```bash
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "eth_feeConfig",
"params": [
"latest"
],
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/rpc
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"feeConfig": {
"gasLimit": 15000000,
"targetBlockRate": 2,
"minBaseFee": 33000000000,
"targetGas": 15000000,
"baseFeeChangeDenominator": 36,
"minBlockGasCost": 0,
"maxBlockGasCost": 1000000,
"blockGasCostStep": 200000
},
"lastChangedAt": 0
}
}
```
## `eth_getChainConfig`
`eth_getChainConfig` returns the Chain Config of the blockchain. This API is enabled by default with the
`internal-blockchain` namespace.
This API exists on the C-Chain as well, but in addition to the normal Chain Config returned by the
C-Chain `eth_getChainConfig`, on `subnet-evm` it additionally returns the upgrade config, which specifies
network upgrades activated after genesis.
**Signature:**
```bash
eth_getChainConfig({}) -> {chainConfig: json}
```
**Example Call:**
```bash
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"eth_getChainConfig",
"params" :[]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/Nvqcm33CX2XABS62iZsAcVUkavfnzp1Sc5k413wn5Nrf7Qjt7/rpc
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"chainId": 43214,
"feeConfig": {
"gasLimit": 8000000,
"targetBlockRate": 2,
"minBaseFee": 33000000000,
"targetGas": 15000000,
"baseFeeChangeDenominator": 36,
"minBlockGasCost": 0,
"maxBlockGasCost": 1000000,
"blockGasCostStep": 200000
},
"allowFeeRecipients": true,
"homesteadBlock": 0,
"eip150Block": 0,
"eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0",
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"muirGlacierBlock": 0,
"subnetEVMTimestamp": 0,
"contractDeployerAllowListConfig": {
"adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"],
"blockTimestamp": 0
},
"contractNativeMinterConfig": {
"adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"],
"blockTimestamp": 0
},
"feeManagerConfig": {
"adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"],
"blockTimestamp": 0
},
"upgrades": {
"precompileUpgrades": [
{
"feeManagerConfig": {
"adminAddresses": null,
"blockTimestamp": 1661541259,
"disable": true
}
},
{
"feeManagerConfig": {
"adminAddresses": null,
"blockTimestamp": 1661541269
}
}
]
}
}
}
```
## `eth_getActivePrecompilesAt`
**Deprecated.** Use [`eth_getActiveRulesAt`](#eth_getactiverulesat) instead.
`eth_getActivePrecompilesAt` returns the precompiles active at a specific timestamp. If no
timestamp is provided, it defaults to the latest block timestamp. This API is enabled by default with the
`internal-blockchain` namespace.
**Signature:**
```bash
eth_getActivePrecompilesAt([timestamp uint]) -> {precompiles: []Precompile}
```
* `timestamp` specifies the timestamp to show the precompiles active at this time. If omitted it shows precompiles activated at the latest block timestamp.
**Example Call:**
```bash
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "eth_getActivePrecompilesAt",
"params": [],
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/Nvqcm33CX2XABS62iZsAcVUkavfnzp1Sc5k413wn5Nrf7Qjt7/rpc
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"contractDeployerAllowListConfig": {
"adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"],
"blockTimestamp": 0
},
"contractNativeMinterConfig": {
"adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"],
"blockTimestamp": 0
},
"feeManagerConfig": {
"adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"],
"blockTimestamp": 0
}
}
}
```
## `eth_getActiveRulesAt`
`eth_getActiveRulesAt` returns the rules (precompiles, upgrades) active at a specific timestamp. If no
timestamp is provided, it defaults to the latest block timestamp. This API is enabled by default with the
`internal-blockchain` namespace.
**Signature:**
```bash
eth_getActiveRulesAt([timestamp uint]) -> {rules: json}
```
* `timestamp` specifies the timestamp to show the rules active at this time. If omitted it shows rules activated at the latest block timestamp.
**Example Call:**
```bash
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "eth_getActiveRulesAt",
"params": [],
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/Nvqcm33CX2XABS62iZsAcVUkavfnzp1Sc5k413wn5Nrf7Qjt7/rpc
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"ethRules": {
"IsHomestead": true,
"IsEIP150": true,
"IsEIP155": true,
"IsEIP158": true,
"IsByzantium": true,
"IsConstantinople": true,
"IsPetersburg": true,
"IsIstanbul": true,
"IsCancun": true
},
"avalancheRules": {
"IsSubnetEVM": true,
"IsDurango": true,
"IsEtna": true
},
"precompiles": {
"contractNativeMinterConfig": {
"timestamp": 0
},
"rewardManagerConfig": {
"timestamp": 1712918700
},
"warpConfig": {
"timestamp": 1714158045
}
}
}
}
```
## `validators.getCurrentValidators`
This API retrieves the list of current validators for the Subnet/L1. It provides detailed information about each validator, including their ID, status, weight, connection, and uptime.
URL: `/ext/bc/{blockchainID}/validators`
**Signature:**
```bash
validators.getCurrentValidators({nodeIDs: []string}) -> {validators: []Validator}
```
* `nodeIDs` is an optional parameter that specifies the node IDs of the validators to retrieve. If omitted, all validators are returned.
**Example Call:**
```bash
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "validators.getCurrentValidators",
"params": {
"nodeIDs": []
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C49rHzk3vLr1w9Z8sY7scrZ69TU4WcD2pRS6ZyzaSn9xA2U9F/validators
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"validators": [
{
"validationID": "nESqWkcNXihfdZESS2idWbFETMzatmkoTCktjxG1qryaQXfS6",
"nodeID": "NodeID-P7oB2McjBGgW2NXXWVYjV8JEDFoW9xDE5",
"weight": 20,
"startTimestamp": 1732025492,
"isActive": true,
"isL1Validator": false,
"isConnected": true,
"uptimeSeconds": 36,
"uptimePercentage": 100
}
]
},
"id": 1
}
```
**Response Fields:**
* `validationID`: (string) Unique identifier for the validation. For L1s this is the validation ID; for Subnets it is the `AddSubnetValidator` transaction ID.
* `nodeID`: (string) Node identifier for the validator.
* `weight`: (integer) The weight of the validator, often representing stake.
* `startTimestamp`: (integer) UNIX timestamp for when validation started.
* `isActive`: (boolean) Indicates whether the validator is active. For L1 validators this is true only if the validator has a sufficient P-Chain balance to pay the continuous staking fee; for subnet validators it is always true.
* `isL1Validator`: (boolean) Indicates whether the validator is an L1 validator or a subnet validator.
* `isConnected`: (boolean) Indicates if the validator node is currently connected to the callee node.
* `uptimeSeconds`: (integer) The number of seconds the validator has been online.
* `uptimePercentage`: (float) The percentage of time the validator has been online.
# Getting Started
URL: /docs/avalanche-l1s
As you begin your Avalanche L1 journey, it's useful to look at the lifecycle of taking an Avalanche L1 from idea to production.
## Figure Out Your Needs
The first step of planning your Avalanche L1 is determining your application's needs. What features do you need that the Avalanche C-Chain doesn't provide?
### When to Choose an Avalanche L1
Building your own Avalanche L1 is a great choice when your project demands capabilities beyond those offered by the C-Chain. For instance, if you need the flexibility to use a custom gas token, require strict access control (for example, by only permitting users who are KYC-verified), or wish to implement a unique transaction fee model, then an Avalanche L1 can provide the necessary options. In addition, if having a completely sovereign network with its own governance and consensus mechanisms is central to your vision, an Avalanche L1 is likely the best path forward.
### Decide What Type of Avalanche L1 You Want
After confirming that an Avalanche L1 suits your project's requirements, the next step is to select the type of virtual machine (VM) that will power your blockchain. Broadly, you can choose among three options.
#### EVM-Based Avalanche L1s
The majority of Avalanche L1s are utilizing the Ethereum Virtual Machine. They support Solidity smart contracts and standard [Ethereum APIs](/docs/api-reference/c-chain/api#ethereum-apis). Ava Labs' implementation, [Subnet-EVM](https://github.com/ava-labs/subnet-evm), is the most mature option available. It is recognized for its robust developer tooling and regular updates, making it the safest and most popular choice for building your blockchain.
#### Custom Avalanche L1s
Custom Avalanche L1s offer an open-ended interface that enables you to build any virtual machine you envision. Whether you fork an existing VM such as Subnet-EVM, integrate a non-Avalanche-native VM like Solana's, or build a completely new VM using any programming language you prefer, the choice is yours. For guidance on how to get started with VM development, see [Introduction to VMs](/docs/virtual-machines).
### Determine Tokenomics
Avalanche L1s are powered by gas tokens, and building your own blockchain gives you the flexibility to determine which token to use and how to distribute it. Whether you decide to leverage AVAX, adapt an existing C-Chain token, or launch a new token entirely, you'll need to plan the allocation of tokens for validator rewards, establish an emission schedule, and decide whether transaction fees should be burned or redistributed as block rewards.
### Decide How to Customize Your Avalanche L1
Once you have selected your virtual machine, further customization may be necessary to align the blockchain with your specific needs. This might involve configuring the token allocation in the genesis block, setting gas fee rates, or making changes to the VM's behavior through precompiles. Such customizations often require careful iterative testing to perfect. For detailed instructions, refer to [Customize Your EVM-Powered Avalanche L1](/docs/avalanche-l1s/upgrade/customize-avalanche-l1).
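As a sketch of what such customization looks like, a Subnet-EVM-style genesis fragment can pre-allocate tokens and set fee parameters. In the example below, the chain ID, address, and all values are illustrative placeholders rather than a recommended configuration; the `balance` field is hex-encoded wei (here roughly 100 million tokens at 18 decimals):

```json
{
  "config": {
    "chainId": 99999,
    "feeConfig": {
      "gasLimit": 15000000,
      "targetBlockRate": 2,
      "minBaseFee": 25000000000
    }
  },
  "alloc": {
    "1234567890123456789012345678901234567890": {
      "balance": "0x52B7D2DCC80CD2E4000000"
    }
  }
}
```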
### Available Subnet-EVM Precompiles
The Subnet-EVM provides several precompiled contracts that you can use in your Avalanche L1 blockchain:
* [AllowList Interface](/docs/avalanche-l1s/evm-configuration/allowlist) - A reusable interface for permission management
* [Permissions](/docs/avalanche-l1s/evm-configuration/permissions) - Control contract deployment and transaction submission
* [Tokenomics](/docs/avalanche-l1s/evm-configuration/tokenomics) - Manage native token supply and minting
* [Transaction Fees & Validator Rewards](/docs/avalanche-l1s/evm-configuration/transaction-fees) - Configure fee parameters and reward mechanisms
* [Warp Messenger](/docs/avalanche-l1s/evm-configuration/warpmessenger) - Perform cross-chain operations
# WAGMI Avalanche L1
URL: /docs/avalanche-l1s/wagmi-avalanche-l1
Learn about the WAGMI Avalanche L1 in this detailed case study.
This is one of the first cases of using Avalanche L1s as a proving ground for changes in a production VM (Coreth). Many underestimate how useful the isolation of Avalanche L1s is for performing complex VM testing on a live network (without impacting the stability of the primary network).
We created a basic WAGMI Explorer [https://subnets-test.avax.network/wagmi](https://subnets-test.avax.network/wagmi) that surfaces aggregated usage statistics about the Avalanche L1.
* SubnetID: [28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY](https://explorer-xp.avax-test.network/avalanche-l1/28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY?tab=validators)
* ChainID: [2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt](https://testnet.avascan.info/blockchain/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt)
### Network Parameters
* NetworkID: 11111
* ChainID: 11111
* Block Gas Limit: 20,000,000 (2.5x C-Chain)
* 10s Gas Target: 100,000,000 (~6.67x C-Chain)
* Min Fee: 1 Gwei (4% of C-Chain)
* Target Block Rate: 2s (Same as C-Chain)
The genesis file of WAGMI can be found [here](https://github.com/ava-labs/public-chain-assets/blob/1951594346dcc91682bdd8929bcf8c1bf6a04c33/chains/11111/genesis.json).
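In a Subnet-EVM genesis, parameters like these are expressed in a `feeConfig` section. The sketch below uses the four WAGMI values listed above; the remaining fields (`baseFeeChangeDenominator` and the block-gas-cost settings) are illustrative placeholders, and the actual values live in the genesis file linked above:

```json
{
  "feeConfig": {
    "gasLimit": 20000000,
    "targetBlockRate": 2,
    "minBaseFee": 1000000000,
    "targetGas": 100000000,
    "baseFeeChangeDenominator": 36,
    "minBlockGasCost": 0,
    "maxBlockGasCost": 1000000,
    "blockGasCostStep": 200000
  }
}
```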
### Adding WAGMI to Core
* Network Name: WAGMI
* RPC URL: [https://subnets.avax.network/wagmi/wagmi-chain-testnet/rpc](https://subnets.avax.network/wagmi/wagmi-chain-testnet/rpc)
* WS URL: wss://avalanche-l1s.avax.network/wagmi/wagmi-chain-testnet/ws
* Chain ID: 11111
* Symbol: WGM
* Explorer: [https://subnets.avax.network/wagmi/wagmi-chain-testnet/explorer](https://subnets.avax.network/wagmi/wagmi-chain-testnet/explorer)
This can be used with other wallets too, such as MetaMask.
## Case Study: WAGMI Upgrades
This case study uses the [WAGMI](https://subnets-test.avax.network/wagmi) Avalanche L1 upgrade to show how simply a network upgrade can be performed on an EVM-based (Ethereum Virtual Machine) Avalanche L1, and how the resulting upgrade can be used to dynamically control the fee structure on the Avalanche L1.
### Introduction
[Subnet-EVM](https://github.com/ava-labs/subnet-evm) aims to provide an easy-to-use toolbox for customizing the EVM for your blockchain. It is meant to run out of the box for many Avalanche L1s without any modification. But what happens when you want to add a new feature that updates the rules of your EVM?
Instead of hardcoding the timing of network upgrades in client code like most EVM chains, which requires coordinated deployments of new code, [Subnet-EVM v0.2.8](https://github.com/ava-labs/subnet-evm/releases/tag/v0.2.8) introduces the long-awaited ability to perform network upgrades with just a few lines of JSON in a configuration file.
### Network Upgrades: Enable/Disable Precompiles
A detailed description of how to do this can be found in the [Customize an Avalanche L1](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#network-upgrades-enabledisable-precompiles) tutorial. Here's a summary:
1. Network upgrades utilize existing precompiles on the Subnet-EVM:
* ContractDeployerAllowList, for restricting smart contract deployers
* TransactionAllowList, for restricting who can submit transactions
* NativeMinter, for minting native coins
* FeeManager, for configuring dynamic fees
* RewardManager, for enabling block rewards
2. Each of these precompiles can be individually enabled or disabled at a given timestamp as a network upgrade, and any of the parameters governing its behavior can be changed.
3. These upgrades must be specified in a file named `upgrade.json` placed in the same directory where [`config.json`](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#avalanchego-chain-configs) resides: `{chain-config-dir}/{blockchainID}/upgrade.json`.
### Preparation
To prepare for the first WAGMI network upgrade, we announced it on August 15, 2022 on [X](https://x.com/AaronBuchwald/status/1559249414102720512) and shared it on other social media such as Discord.
For the second upgrade, we made another announcement on [X](https://x.com/jceyonur/status/1760777031858745701?s=20) on February 24, 2024.
### Deploying upgrade.json
The content of the `upgrade.json` is:
```json
{
  "precompileUpgrades": [
    {
      "feeManagerConfig": {
        "adminAddresses": ["0x6f0f6DA1852857d7789f68a28bba866671f3880D"],
        "blockTimestamp": 1660658400
      }
    },
    {
      "contractNativeMinterConfig": {
        "blockTimestamp": 1708696800,
        "adminAddresses": ["0x6f0f6DA1852857d7789f68a28bba866671f3880D"],
        "managerAddresses": ["0xadFA2910DC148674910c07d18DF966A28CD21331"]
      }
    }
  ]
}
```
With the above `upgrade.json`, we intend to perform two network upgrades:
1. The first upgrade is to activate the FeeManager precompile:
* `0x6f0f6DA1852857d7789f68a28bba866671f3880D` is named as the new Admin of the FeeManager precompile.
* `1660658400` is the [Unix timestamp](https://www.unixtimestamp.com/) for Tue Aug 16 2022 14:00:00 GMT+0000 (a time in the future when the announcement was made), when the FeeManager change would take effect.
2. The second upgrade is to activate the NativeMinter precompile:
* `0x6f0f6DA1852857d7789f68a28bba866671f3880D` is named as the new Admin of the NativeMinter precompile.
* `0xadFA2910DC148674910c07d18DF966A28CD21331` is named as the new Manager of the NativeMinter precompile. Manager addresses are enabled after the Durango upgrade, which occurred on February 13, 2024.
* `1708696800` is the [Unix timestamp](https://www.unixtimestamp.com/) for Fri Feb 23 2024 14:00:00 GMT+0000 (a time in the future when the announcement was made), when the NativeMinter change would take effect.
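The activation timestamps are easy to sanity-check from the announced UTC dates; for example, in TypeScript:

```typescript
// Recompute the upgrade.json activation timestamps from the announced UTC dates.
// Date.parse returns milliseconds since the Unix epoch, so divide by 1000.
const feeManagerActivation = Date.parse('2022-08-16T14:00:00Z') / 1000;
const nativeMinterActivation = Date.parse('2024-02-23T14:00:00Z') / 1000;

console.log(feeManagerActivation);   // 1660658400
console.log(nativeMinterActivation); // 1708696800
```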
Detailed explanations of `feeManagerConfig` can be found [here](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#configuring-dynamic-fees), and of `contractNativeMinterConfig` [here](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#minting-native-coins).
We place the `upgrade.json` file in the chain config directory, which in our case is `~/.avalanchego/configs/chains/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/`. After that, we restart the node so the upgrade file is loaded.
When the node restarts, AvalancheGo reads the contents of the JSON file and passes it into Subnet-EVM. We see a log of the chain configuration that includes the updated precompile upgrade. It looks like this:
```bash
INFO [02-22|18:27:06.473] <2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt Chain> github.com/ava-labs/subnet-evm/core/blockchain.go:335: Upgrade Config: {"precompileUpgrades":[{"feeManagerConfig":{"adminAddresses":["0x6f0f6da1852857d7789f68a28bba866671f3880d"],"blockTimestamp":1660658400}},{"contractNativeMinterConfig":{"adminAddresses":["0x6f0f6da1852857d7789f68a28bba866671f3880d"],"managerAddresses":["0xadfa2910dc148674910c07d18df966a28cd21331"],"blockTimestamp":1708696800}}]}
```
We note that `precompileUpgrades` correctly shows the upcoming precompile upgrades. The upgrade is locked in and ready.
### Activations
When the time passed 10:00 AM EDT August 16, 2022 (Unix timestamp 1660658400), the upgrade specified in `upgrade.json` executed as planned, and the new FeeManager admin address was activated. From then on, we don't need to issue any new code or deploy anything on the WAGMI nodes to change the fee structure. Let's see how it works in practice!
For the second upgrade on February 23, 2024, the same process was followed. The `upgrade.json` executed after Durango, as planned, and the new NativeMinter admin and manager addresses were activated.
### Using Fee Manager
The owner `0x6f0f6DA1852857d7789f68a28bba866671f3880D` can now configure the fees on the Avalanche L1 as they see fit. To do that, all that's needed is access to the network, the private key for the newly set admin address, and calls to the precompiled contract.
We will use [Remix](https://remix.ethereum.org/) online Solidity IDE and the [Core Browser Extension](https://support.avax.network/en/articles/6066879-core-extension-how-do-i-add-the-core-extension). Core comes with WAGMI network built-in. MetaMask will do as well but you will need to [add WAGMI](/docs/avalanche-l1s/wagmi-avalanche-l1) yourself.
First using Core, we open the account as the owner `0x6f0f6DA1852857d7789f68a28bba866671f3880D`.
Then we connect Core to WAGMI. Switch on `Testnet Mode` on the `Advanced` page in the hamburger menu:

And then open the `Manage Networks` menu in the networks dropdown. Select WAGMI there by clicking the star icon:

We then switch to WAGMI in the networks dropdown. We are ready to move on to Remix now, so we open it in the browser. First, we check that Remix sees the extension and correctly talks to it. We select the `Deploy & run transactions` icon on the left edge, and in the Environment dropdown, select `Injected Provider`. We need to approve Remix's network access in the Core browser extension. When that is done, `Custom (11111) network` is shown:

Good, we're talking to the WAGMI Avalanche L1. Next we need to load the contracts into Remix. Using the 'load from GitHub' option from the Remix home screen, we load two contracts:
* [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IAllowList.sol)
* and [IFeeManager.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IFeeManager.sol).
IFeeManager is our precompile, but it references IAllowList, so we need that one as well. We compile IFeeManager.sol and use the deployed contract at the precompile address `0x0200000000000000000000000000000000000003` used on the [Avalanche L1](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/feemanager/module.go#L21).

Now we can interact with the FeeManager precompile from within Remix via Core. For example, we can use the `getFeeConfig` method to check the current fee configuration. This action can be performed by anyone as it is just a read operation.
Once we have the new desired configuration for the fees on the Avalanche L1, we can use the `setFeeConfig` to change the parameters. This action can **only** be performed by the owner `0x6f0f6DA1852857d7789f68a28bba866671f3880D` as the `adminAddress` specified in the [`upgrade.json` above](#deploying-upgradejson).

When we call that method by pressing the `transact` button, a new transaction is posted to the Avalanche L1, and we can see it on [the explorer](https://subnets-test.avax.network/wagmi/block/0xad95ccf04f6a8e018ece7912939860553363cc23151a0a31ea429ba6e60ad5a3):

Immediately after the transaction is accepted, the new fee config takes effect. We can check with `getFeeConfig` that the values are reflected in the active fee config (again, this action can be performed by anyone):

That's it, fees changed! No network upgrades, no complex and risky deployments, just making a simple contract call and the new fee configuration is in place!
### Using NativeMinter
For the NativeMinter, we can use the same process to connect to the Avalanche L1 and interact with the precompile. We can load the INativeMinter interface using the 'load from GitHub' option from the Remix home screen with the following contracts:
* [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IAllowList.sol)
* and [INativeMinter.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/INativeMinter.sol).
We can compile them and interact with the deployed contract at the precompile address `0x0200000000000000000000000000000000000001` used on the [Avalanche L1](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/nativeminter/module.go#L22).

The native minter precompile is used to mint native coins to specified addresses. The minted coins are added to the current supply and can be used by the recipients to pay for gas fees. For more information about the native minter precompile, see [here](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#minting-native-coins).
The `mintNativeCoin` method can only be called by enabled, manager, and admin addresses. For this upgrade we added both an admin and a manager address in the [`upgrade.json` above](#deploying-upgradejson). Manager addresses became available with the Durango upgrade, which occurred on February 13, 2024. We will use the manager address `0xadfa2910dc148674910c07d18df966a28cd21331` to mint native coins.

When we call that method by pressing the `transact` button, a new transaction is posted to the Avalanche L1, and we can see it on [the explorer](https://subnets-test.avax.network/wagmi/tx/0xc4aaba7b5863c1b8f6664ac1d483e2d7d392ab58d1a8feb0b6c318cbae7f1e93):

As a result of this transaction, the native minter precompile minted a new native coin (1 WGM) to the recipient address `0xB78cbAa319ffBD899951AA30D4320f5818938310`. The address page on the explorer [here](https://subnets-test.avax.network/wagmi/address/0xB78cbAa319ffBD899951AA30D4320f5818938310) shows no incoming transaction; this is because the 1 WGM was directly minted by the EVM itself, without any sender.
### Conclusion
Network upgrades can be complex and perilous procedures to carry out safely. Our continuing effort with Avalanche L1s is to make upgrades as painless and simple as possible. With the powerful combination of stateful precompiles and network upgrades via upgrade configuration files, we have greatly simplified both network upgrades and network parameter changes. This in turn enables much safer experimentation and many new use cases that were too risky and complex to carry out with the high-coordination efforts required by traditional network upgrade mechanisms.
We hope this case study helps spark ideas for new things you may try on your own. We look forward to seeing what you build and how easy upgrades help you manage your Avalanche L1s! If you have any questions or issues, feel free to contact us on our [Discord](https://chat.avalabs.org/). Or just reach out to tell us what exciting new things you have built!
# Why Build Avalanche L1s
URL: /docs/avalanche-l1s/when-to-build-avalanche-l1
Learn key concepts to decide when to build your own Avalanche L1.
## Why Build Your Own Avalanche L1
There are many advantages to running your own Avalanche L1. If you find one or more of these a good match for your project then an Avalanche L1 might be a good solution for you.
### We Want Our Own Gas Token
The C-Chain is an Ethereum Virtual Machine (EVM) chain; it requires gas fees to be paid in its native token. That is, an application may create its own utility tokens (ERC-20) on the C-Chain, but gas must be paid in AVAX. In contrast, [Subnet-EVM](https://github.com/ava-labs/subnet-evm) effectively creates an application-specific EVM chain with full control over the native (gas) coin. The operator can pre-allocate the native token in the chain genesis and mint more using the [Subnet-EVM](https://github.com/ava-labs/subnet-evm) precompile contract. These fees can either be burned (as AVAX is burned on the C-Chain) or configured to be sent to an address, which can be a smart contract.
Note that the Avalanche L1 gas token is specific to the application on the chain and thus unknown to external parties. Moving assets to other chains requires trusted bridge contracts (or the upcoming cross-Avalanche L1 communication feature).
### We Want Higher Throughput
The primary goal of the gas limit on the C-Chain is to restrict the block size and therefore prevent network saturation. If a block can be arbitrarily large, it takes longer to propagate, potentially degrading network performance. The C-Chain gas limit acts as a deterrent against system abuse but can be quite limiting for high-throughput applications. Unlike the C-Chain, an Avalanche L1 can be single-tenant, dedicated to a specific application, and can thus host its own set of validators with higher bandwidth requirements, which allows for a higher gas limit and thus higher transaction throughput. Plus, [Subnet-EVM](https://github.com/ava-labs/subnet-evm) supports fee configuration upgrades that can adapt to surges in application traffic.
Avalanche L1 workloads are isolated from the Primary Network, which means the noisy-neighbor effect of one workload (for example, an NFT mint on the C-Chain) cannot destabilize an Avalanche L1 or cause its gas price to surge. This failure-isolation model can provide higher application reliability.
### We Want Strict Access Control
The C-Chain is open and permissionless: anyone can deploy and interact with contracts. However, for regulatory reasons, some applications may need a consistent access control mechanism for all on-chain transactions. With [Subnet-EVM](https://github.com/ava-labs/subnet-evm), an application can require that "only authorized users may deploy contracts or make transactions." Allow-lists are only updated by the administrators, and the allow list itself is implemented within the precompile contract, making it more transparent and auditable for compliance purposes.
### We Need EVM Customization
If your project is deployed on the C-Chain, your execution environment is dictated by the setup of the C-Chain. Changing any of the execution parameters means changing the configuration of the C-Chain itself, which is expensive, complex, and slow. So if your project needs other capabilities, different execution parameters, or precompiles that the C-Chain does not provide, then an Avalanche L1 is the solution you need. You can configure the EVM in an Avalanche L1 to run however you want, adding precompiles and setting runtime parameters to whatever your project needs.
### We Need Custom Validator Management
With the Etna upgrade, L1s can implement their own validator management logic through a *ValidatorManager* smart contract. This gives you complete control over your validator set, allowing you to define custom staking rules, implement permissionless proof-of-stake with your own token, or create permissioned proof-of-authority networks. The validator management can be handled directly through smart contracts, giving you programmatic control over validator selection and rewards distribution.
### We Want to Build a Sovereign Network
L1s on Avalanche are truly sovereign networks that operate independently without relying on other systems. You have complete control over your network's consensus mechanisms, transaction processing, and security protocols. This independence allows you to scale horizontally without dependencies on other networks while maintaining full control over your network parameters and upgrades. This sovereignty is particularly important for projects that need complete autonomy over their blockchain's operation and evolution.
## Conclusion
Here we presented some considerations in favor of running your own Avalanche L1 vs. deploying on the C-Chain.
If an application has a relatively low transaction rate and no special circumstances that would make the C-Chain a non-starter, you can begin with a C-Chain deployment to leverage existing technical infrastructure, and later expand to an Avalanche L1. That way you can focus on the core of your project, and once you have a solid product/market fit and enough traction that the C-Chain is constricting you, plan a move to your own Avalanche L1.
Of course, we're happy to talk to you about your architecture and help you choose the best path forward. Feel free to reach out to us on [Discord](https://chat.avalabs.org/) or other [community channels](https://www.avax.network/community) we run.
# Asset Requirements
URL: /docs/builderkit/asset-requirements
Required assets and file structure for chain and token logos.
BuilderKit requires specific asset files for displaying chain and token logos. These assets should follow a standardized file structure and naming convention.
## Chain Logos
Chain logos are used by components like `ChainIcon`, `ChainDropdown`, and `TokenIconWithChain`.
### File Structure
Chain logos should be placed at:
```
/chains/logo/{chain_id}.png
```
### Examples
```
/chains/logo/43114.png // Avalanche C-Chain
/chains/logo/43113.png // Fuji Testnet
/chains/logo/173750.png // Echo L1
```
### Requirements
* Format: PNG with transparency
* Dimensions: 32x32px (minimum)
* Background: Transparent
* Shape: Circular or square with rounded corners
* File size: < 100KB
## Token Logos
Token logos are used by components like `TokenIcon`, `TokenChip`, and `TokenRow`.
### File Structure
Token logos should be placed at:
```
/tokens/logo/{chain_id}/{address}.png
```
### Examples
```
/tokens/logo/43114/0x1234567890123456789012345678901234567890.png // Token on C-Chain
/tokens/logo/43113/0x5678901234567890123456789012345678901234.png // Token on Fuji
```
### Requirements
* Format: PNG with transparency
* Dimensions: 32x32px (minimum)
* Background: Transparent
* Shape: Circular or square with rounded corners
* File size: < 100KB
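Since components resolve logos purely by these path conventions, the lookup logic can be captured in two small helpers. These are hypothetical illustrations of the convention, not part of BuilderKit:

```typescript
// Hypothetical helpers mirroring the documented asset path conventions
function chainLogoPath(chainId: number): string {
  return `/chains/logo/${chainId}.png`;
}

function tokenLogoPath(chainId: number, address: string): string {
  return `/tokens/logo/${chainId}/${address}.png`;
}

console.log(chainLogoPath(43114)); // "/chains/logo/43114.png"
```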
## Directory Structure
Your public assets directory should look like this:
```
public/
├── chains/
│   └── logo/
│       ├── 43114.png
│       ├── 43113.png
│       └── 173750.png
└── tokens/
    └── logo/
        ├── 43114/
        │   ├── 0x1234....png
        │   └── 0x5678....png
        └── 43113/
            ├── 0x9012....png
            └── 0xabcd....png
```
# Custom Chain Setup
URL: /docs/builderkit/chains
Configure custom Avalanche L1 chains in your application.
Learn how to configure custom Avalanche L1 chains in your BuilderKit application.
## Chain Definition
Define your custom L1 chain using `viem`'s `defineChain`:
```tsx
import { defineChain } from "viem";
export const myL1 = defineChain({
  id: 173750,        // Your L1 chain ID
  name: 'My L1',     // Display name
  network: 'my-l1',  // Network identifier
  nativeCurrency: {
    decimals: 18,
    name: 'Token',
    symbol: 'TKN',
  },
  rpcUrls: {
    default: {
      http: ['https://api.avax.network/ext/L1/rpc']
    },
  },
  blockExplorers: {
    default: {
      name: 'Explorer',
      url: 'https://explorer.avax.network/my-l1'
    },
  },
  // Optional: Custom metadata
  iconUrl: "/chains/logo/my-l1.png",
  icm_registry: "0x..." // ICM registry contract
});
```
## Provider Configuration
Add your custom L1 chain to the Web3Provider:
```tsx
import { Web3Provider } from '@avalabs/builderkit';
import { avalanche } from '@wagmi/core/chains';
import { myL1 } from './chains/definitions/my-l1';
function App() {
  return (
    <Web3Provider chains={[avalanche, myL1]}>
      {/* ... */}
    </Web3Provider>
  );
}
```
## Required Properties
| Property | Type | Description |
| ---------------- | -------- | ---------------------------- |
| `id` | `number` | Unique L1 chain identifier |
| `name` | `string` | Human-readable chain name |
| `network` | `string` | Network identifier |
| `nativeCurrency` | `object` | Chain's native token details |
| `rpcUrls` | `object` | RPC endpoint configuration |
| `blockExplorers` | `object` | Block explorer URLs |
## Optional Properties
| Property | Type | Description |
| -------------- | --------- | ------------------------------ |
| `iconUrl` | `string` | Chain logo URL |
| `icm_registry` | `string` | ICM registry contract address |
| `testnet` | `boolean` | Whether the chain is a testnet |
## Example: Echo L1
Here's a complete example using the Echo L1:
```tsx
import { defineChain } from "viem";
export const echo = defineChain({
  id: 173750,
  name: 'Echo L1',
  network: 'echo',
  nativeCurrency: {
    decimals: 18,
    name: 'Ech',
    symbol: 'ECH',
  },
  rpcUrls: {
    default: {
      http: ['https://subnets.avax.network/echo/testnet/rpc']
    },
  },
  blockExplorers: {
    default: {
      name: 'Explorer',
      url: 'https://subnets-test.avax.network/echo'
    },
  },
  iconUrl: "/chains/logo/173750.png",
  icm_registry: "0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228"
});
```
# Contribute
URL: /docs/builderkit/contribute
Guide for contributing to BuilderKit by building hooks, components, and flows.
# Contributing to BuilderKit
We welcome contributions to BuilderKit! Whether you're fixing bugs, adding new features, or improving documentation, your help makes BuilderKit better for everyone.
## What You Can Contribute
### Hooks
Build reusable hooks that handle common Web3 functionality:
* Chain data management
* Token interactions
* Contract integrations
* State management
* API integrations
### Components
Create new UI components or enhance existing ones:
* Form elements
* Display components
* Interactive elements
* Layout components
* Utility components
### Flows
Design complete user journeys by combining components:
* Token swaps
* NFT minting
* Governance voting
* Staking interfaces
* Custom protocols
# Getting Started
URL: /docs/builderkit/getting-started
Quick setup guide for BuilderKit in your React application.
Get started with BuilderKit in your React application.
## Installation
```bash
npm install @avalabs/builderkit
# or
yarn add @avalabs/builderkit
```
## Provider Setup
Wrap your application with the Web3Provider to enable wallet connections and chain management:
```tsx
import { Web3Provider } from '@avalabs/builderkit';
import { avalanche, avalancheFuji } from '@wagmi/core/chains';
import { echo } from './chains/definitions/echo';
import { dispatch } from './chains/definitions/dispatch';
// Configure chains
const chains = [avalanche, avalancheFuji, echo, dispatch];
function App() {
  return (
    <Web3Provider chains={chains}>
      {/* ... */}
    </Web3Provider>
  );
}
```
## Next Steps
* Learn about [Token Configuration](/docs/builderkit/tokens)
* Explore [Core Components](/docs/builderkit/components/control)
* Check out [Pre-built Flows](/docs/builderkit/flows/ictt)
# Introduction
URL: /docs/builderkit
A comprehensive React component library for building Web3 applications on Avalanche.
BuilderKit is a powerful collection of React components and hooks designed specifically for building Web3 applications on Avalanche. It provides everything you need to create modern, user-friendly blockchain applications with minimal effort.
## Ready to Use Components
BuilderKit offers a comprehensive set of components that handle common Web3 functionalities:
* **Control Components**: Buttons, forms, and wallet connection interfaces
* **Identity Components**: Address displays and domain name resolution
* **Token Components**: Balance displays, inputs, and price conversions
* **Input Components**: Specialized form inputs for Web3 data types
* **Chain Components**: Network selection and chain information displays
* **Transaction Components**: Transaction submission and status tracking
* **Collectibles Components**: NFT displays and collection management
## Powerful Hooks
BuilderKit provides hooks for seamless integration with Avalanche's ecosystem:
### Blockchain Interaction
Access and manage blockchain data, tokens, and cross-chain operations with hooks for chains, tokens, DEX interactions, and inter-chain transfers.
### Precompile Integration
Easily integrate with Avalanche's precompiled contracts for access control, fee management, native minting, rewards, and cross-chain messaging.
## Getting Started
Get started quickly by installing BuilderKit in your React application:
```bash
npm install @avalabs/builderkit
# or
yarn add @avalabs/builderkit
```
Check out our [Getting Started](/docs/builderkit/getting-started) guide to begin building your Web3 application.
# Token Configuration
URL: /docs/builderkit/tokens
Guide for configuring tokens in BuilderKit flows.
BuilderKit flows require proper token configuration to function correctly. This guide explains the required fields for different token configurations.
## Basic Token Structure
All tokens in BuilderKit share a common base structure with these required fields:
```typescript
interface BaseToken {
  // Contract address of the token, use "native" for native chain token
  address: string;
  // Human-readable name of the token
  name: string;
  // Token symbol/ticker
  symbol: string;
  // Number of decimal places the token uses
  decimals: number;
  // ID of the chain where this token exists
  chain_id: number;
}
```
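For example, a `BaseToken`-shaped entry for the C-Chain's native token could look like this (the `name` value here is illustrative):

```typescript
// A BaseToken-shaped entry for the native token on the C-Chain (chain ID 43114)
const avax = {
  address: 'native', // sentinel value for the chain's native token
  name: 'Avalanche',
  symbol: 'AVAX',
  decimals: 18,
  chain_id: 43114,
};
```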
## ICTT Token Fields
ICTT tokens extend the base structure with additional fields for cross-chain functionality:
```typescript
interface ICTTToken extends BaseToken {
  // Whether this token can be used with ICTT
  supports_ictt: boolean;
  // Address of the contract that handles transfers
  transferer?: string;
  // Whether this token instance is a transferer
  is_transferer?: boolean;
  // Information about corresponding tokens on other chains
  mirrors: {
    // Contract address of the mirrored token
    address: string;
    // Transferer contract on the mirror chain
    transferer: string;
    // Chain ID where the mirror exists
    chain_id: number;
    // Decimal places of the mirrored token
    decimals: number;
    // Whether this is the home/original chain
    home?: boolean;
  }[];
}
```
## Field Requirements
### Base Token Fields
* `address`: Must be a valid contract address or "native"
* `name`: Should be human-readable
* `symbol`: Should match the token's trading symbol
* `decimals`: Must match the token's contract configuration
* `chain_id`: Must be a valid chain ID
### ICTT-Specific Fields
* `supports_ictt`: Required for ICTT functionality
* `transferer`: Required if token supports ICTT
* `is_transferer`: Optional, indicates if token is a transferer
* `mirrors`: Required for ICTT, must contain at least one mirror configuration
### Mirror Configuration Fields
* `address`: Required, contract address on mirror chain
* `transferer`: Required, transferer contract on mirror chain
* `chain_id`: Required, must be different from the token's `chain_id`
* `decimals`: Required, must match token contract
* `home`: Optional, indicates original/home chain
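Putting these requirements together, a hypothetical ICTT token entry might look like the following. All addresses here are placeholders, with a home token on Fuji (43113) mirrored on Echo L1 (173750):

```typescript
// Hypothetical ICTTToken-shaped entry; all addresses are placeholders
const myIcttToken = {
  address: '0x1234567890123456789012345678901234567890',
  name: 'My Token',
  symbol: 'MYT',
  decimals: 18,
  chain_id: 43113,
  supports_ictt: true,
  transferer: '0x1111111111111111111111111111111111111111',
  mirrors: [
    {
      address: '0x2222222222222222222222222222222222222222',
      transferer: '0x3333333333333333333333333333333333333333',
      chain_id: 173750, // must differ from the token's own chain_id
      decimals: 18,
    },
  ],
};
```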
# ACP-103: Dynamic Fees
URL: /docs/acps/103-dynamic-fees
Details for Avalanche Community Proposal 103: Dynamic Fees
| ACP | 103 |
| :------------ | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Title** | Add Dynamic Fees to the P-Chain |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)), Alberto Benegiamo ([@abi87](https://github.com/abi87)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/104)) |
| **Track** | Standards |
## Abstract
Introduce a dynamic fee mechanism to the P-Chain. Preview a future transition to a multidimensional fee mechanism.
## Motivation
Blockchains are resource-constrained environments. Users are charged for the execution and inclusion of their transactions based on the blockchain's transaction fee mechanism. The mechanism should fluctuate based on the supply of and demand for said resources to serve as a deterrent against spam and denial-of-service attacks.
With a fixed fee mechanism, users are provided with simplicity and predictability but network congestion and resource constraints are not taken into account. There is no incentive for users to withhold transactions since the cost is fixed regardless of the demand. The fee does not adjust the execution and inclusion fee of transactions to the market clearing price.
The C-Chain, in [Apricot Phase 3](https://medium.com/avalancheavax/apricot-phase-three-c-chain-dynamic-fees-432d32d67b60), employs a dynamic fee mechanism to raise the price during periods of high demand and lower it during periods of low demand. As the price gets too expensive, network utilization decreases, which drops the price. This ensures the execution and inclusion fee of transactions closely matches the market clearing price.
The P-Chain currently operates under a fixed fee mechanism. To more robustly handle spikes in load expected from introducing the improvements in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md), it should be migrated to a dynamic fee mechanism.
The X-Chain also currently operates under a fixed fee mechanism. However, due to the current lower usage and lack of new feature introduction, the migration of the X-Chain to a dynamic fee mechanism is deferred to a later ACP to reduce unnecessary additional technical complexity.
## Specification
### Dimensions
There are four dimensions that will be used to approximate the computational cost of, or "gas" consumed in, a transaction:
1. Bandwidth $B$ is the amount of network bandwidth used for transaction broadcast. This is set to the size of the transaction in bytes.
2. Reads $R$ is the number of state/database reads used in transaction execution.
3. Writes $W$ is the number of state/database writes used in transaction execution.
4. Compute $C$ is the total amount of compute used to verify and execute a transaction, measured in microseconds.
The gas consumed $G$ in a transaction is:
$G = B + 1000R + 1000W + 4C$
A future ACP could remove the merging of these dimensions to granularly meter usage of each resource in a multidimensional scheme.
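The gas formula can be written directly as a helper; the dimension weights below are taken verbatim from the equation above:

```python
def gas_consumed(bandwidth: int, reads: int, writes: int, compute_us: int) -> int:
    """G = B + 1000R + 1000W + 4C, with compute measured in microseconds."""
    return bandwidth + 1000 * reads + 1000 * writes + 4 * compute_us

# A hypothetical 300-byte transaction with 2 reads, 1 write, and 50 us of compute
g = gas_consumed(300, 2, 1, 50)  # 300 + 2000 + 1000 + 200 = 3500
```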
### Mechanism
This mechanism aims to maintain a target gas consumption $T$ per second and adjusts the fee based on the excess gas consumption $x$, defined as the difference between the current gas consumption and $T$.
Prior to the activation of this mechanism, $x$ is initialized:
$x = 0$
At the start of building/executing block $b$, $x$ is updated:
$x = \max(x - T \cdot \Delta{t}, 0)$
Where $\Delta{t}$ is the number of seconds between $b$'s block timestamp and $b$'s parent's block timestamp.
The gas price for block $b$ is:
$M \cdot \exp\left(\frac{x}{K}\right)$
Where:
* $M$ is the minimum gas price
* $\exp\left(x\right)$ is an approximation of $e^x$ following the EIP-4844 specification
```python
# Approximates factor * e ** (numerator / denominator) using Taylor expansion
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator
```
* $K$ is a constant to control the rate of change of the gas price
After processing block $b$, $x$ is updated with the total gas consumed in the block $G$:
$x = x + G$
Whenever $x$ increases by $K$, the gas price increases by a factor of `~2.7`. If the gas price gets too expensive, average gas consumption drops, and $x$ starts decreasing, dropping the price. The gas price constantly adjusts to make sure that, on average, the blockchain consumes $T$ gas per second.
A [token bucket](https://en.wikipedia.org/wiki/Token_bucket) is employed to meter the maximum rate of gas consumption. Define $C$ as the capacity of the bucket, $R$ as the amount of gas to add to the bucket per second, and $r$ as the amount of gas currently in the bucket.
Prior to the activation of this mechanism, $r$ is initialized:
$r = 0$
At the beginning of processing block $b$, $r$ is set:
$r = \min\left(r + R \cdot \Delta{t}, C\right)$
Where $\Delta{t}$ is the number of seconds between $b$'s block timestamp and $b$'s parent's block timestamp. The maximum gas consumed in a given $\Delta{t}$ is $r + R \cdot \Delta{t}$. The upper bound across all $\Delta{t}$ is $C + R \cdot \Delta{t}$.
After processing block $b$, the total gas consumed in $b$, or $G$, will be known. If $G \gt r$, $b$ is considered an invalid block. If $b$ is a valid block, $r$ is updated:
$r = r - G$
A block gas limit does not need to be set as it is implicitly derived from $r$.
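The token-bucket rules above can be sketched as follows (an illustration only; these function names are not from AvalancheGo):

```python
def refill(r: int, R: int, C: int, dt: int) -> int:
    """At the start of processing a block: r = min(r + R * dt, C)."""
    return min(r + R * dt, C)

def consume(r: int, G: int) -> int:
    """After processing a block consuming G gas; the block is invalid if G > r."""
    if G > r:
        raise ValueError("block exceeds available gas capacity")
    return r - G

# With C = 1,000,000 and R = 100,000: two seconds of refill, then a 150,000-gas block
r = refill(0, 100_000, 1_000_000, 2)  # 200,000 gas available
r = consume(r, 150_000)               # 50,000 gas remaining
```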
The parameters at activation are:
| Parameter | P-Chain Configuration |
| ------------------------------------ | --------------------- |
| $T$ - target gas consumed per second | 50,000 |
| $M$ - minimum gas price | 1 nAVAX |
| $K$ - gas price update constant | 2\_164\_043 |
| $C$ - maximum gas capacity | 1,000,000 |
| $R$ - gas capacity added per second | 100,000 |
$K$ was chosen such that at sustained maximum capacity ($R=100,000$ gas/second), the fee rate will double every \~30 seconds.
As the network gains capacity to handle additional load, this algorithm can be tuned to increase the gas consumption rate.
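Using the `fake_exponential` helper from the specification above, the ~30-second doubling at sustained maximum capacity can be checked numerically. The `SCALE` factor here is only to keep integer precision in this sketch and is not part of the specification:

```python
# Copied from the EIP-4844-style approximation in the specification above
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

T = 50_000       # target gas per second
R = 100_000      # maximum sustained gas per second
K = 2_164_043    # gas price update constant
SCALE = 1_000_000  # scales M = 1 nAVAX up for integer precision (sketch only)

# 30 seconds at maximum capacity adds (R - T) * 30 = 1,500,000 excess gas
x = (R - T) * 30
price = fake_exponential(SCALE, x, K)
# price / SCALE is ~2, i.e. the fee roughly doubles every ~30 seconds
```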
#### A note on $e^x$
There is a subtle reason why an exponential adjustment function was chosen: The adjustment function should be *equally* reactive irrespective of the actual fee.
Define $b_n$ as the current block's gas fee, $b_{n+1}$ as the next block's gas fee, and $x$ as the excess gas consumption.
Let's use a linear adjustment function:
$b_{n+1} = b_n + 10x$
Assume $b_n = 100$ and the current block is 1 unit above target utilization, or $x = 1$. Then, $b_{n+1} = 100 + 10 \cdot 1 = 110$, an increase of `10%`. If instead $b_n = 10,000$, $b_{n+1} = 10,000 + 10 \cdot 1 = 10,010$, an increase of `0.1%`. The fee is *less* reactive as the fee increases. This is because the rate of change *does not scale* with $x$.
Now, let's use an exponential adjustment function:
$b_{n+1} = b_n \cdot e^x$
Assume $b_n = 100$ and the current block is 1 unit above target utilization, or $x = 1$. Then, $b_{n+1} = 100 \cdot e^1 \approx 271.828$, an increase of `171%`. If instead $b_n = 10,000$, $b_{n+1} = 10,000 \cdot e^1 \approx 27,182.8$, an increase of `171%` again. The fee is *equally* reactive as the fee increases. This is because the rate of change *scales* with $x$.
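The comparison above can be verified numerically (`math.exp` stands in for the integer approximation used on-chain):

```python
import math

def linear_next(b: float, x: float) -> float:
    """Linear adjustment: b_{n+1} = b_n + 10x."""
    return b + 10 * x

def exponential_next(b: float, x: float) -> float:
    """Exponential adjustment: b_{n+1} = b_n * e^x."""
    return b * math.exp(x)

# Relative increase under the linear rule shrinks as the fee grows...
lin_small = linear_next(100, 1) / 100 - 1          # 10%
lin_large = linear_next(10_000, 1) / 10_000 - 1    # 0.1%
# ...while under the exponential rule it is constant (e - 1, about 171%)
exp_small = exponential_next(100, 1) / 100 - 1
exp_large = exponential_next(10_000, 1) / 10_000 - 1
```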
### Block Building Procedure
When a transaction is constructed on the P-Chain, the amount of $AVAX burned is given by `sum($AVAX outputs) - sum($AVAX inputs)`. The amount of gas consumed by the transaction can be deterministically calculated after construction. Dividing the amount of $AVAX burned by the amount of gas consumed yields the maximum gas price that the transaction can pay.
Instead of using a FIFO queue for the mempool (like the P-Chain does now), the mempool should use a priority queue ordered by the maximum gas price of each transaction. This ensures that higher paying transactions are included first.
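A minimal sketch of such a priority-ordered mempool, using Python's `heapq` and hypothetical transaction identifiers:

```python
import heapq

class PriorityMempool:
    """Orders transactions by maximum gas price (AVAX burned / gas consumed)."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # insertion order breaks ties between equal prices

    def add(self, tx_id: str, avax_burned: int, gas_consumed: int) -> None:
        max_gas_price = avax_burned / gas_consumed
        # Negate the price so the highest-paying transaction pops first
        heapq.heappush(self._heap, (-max_gas_price, self._counter, tx_id))
        self._counter += 1

    def pop_best(self) -> str:
        return heapq.heappop(self._heap)[2]

mempool = PriorityMempool()
mempool.add("tx_a", avax_burned=100_000, gas_consumed=3_500)  # ~28.6 nAVAX/gas
mempool.add("tx_b", avax_burned=50_000, gas_consumed=1_000)   # 50 nAVAX/gas
best = mempool.pop_best()  # "tx_b" pays the higher gas price
```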
## Backwards Compatibility
Modification of a fee mechanism is an execution change and requires a mandatory upgrade for activation. Implementers must take care to not alter the execution behavior prior to activation.
After this ACP is activated, any transaction issued on the P-Chain must account for the fee mechanism defined above. Users are responsible for reconstructing their transactions to include a larger fee for quicker inclusion when the fee increases.
## Reference Implementation
ACP-103 was implemented into AvalancheGo behind the `Etna` upgrade flag. The full body of work can be found tagged with the `acp103` label [here](https://github.com/ava-labs/avalanchego/pulls?q=is%3Apr+label%3Aacp103).
## Security Considerations
The current fixed fee mechanism on the X-Chain and P-Chain does not robustly handle spikes in load. Migrating the P-Chain to a dynamic fee mechanism will ensure that any additional load caused by demand for new P-Chain features (such as those introduced in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md)) is properly priced given allotted processing capacity. The X-Chain, in comparison, currently has significantly lower usage, making it less likely for the demand for blockspace on it to exceed the current static fee rates. If necessary or desired, a future ACP can reuse the mechanism introduced here to add dynamic fee rates to the X-Chain.
## Acknowledgements
Thank you to [@aaronbuchwald](https://github.com/aaronbuchwald) and [@patrick-ogrady](https://github.com/patrick-ogrady) for providing feedback prior to publication.
Thank you to the authors of [EIP-4844](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4844.md) for creating the fee design that inspired the above mechanism.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-108: Evm Event Importing
URL: /docs/acps/108-evm-event-importing
Details for Avalanche Community Proposal 108: Evm Event Importing
| ACP | 108 |
| :------------ | :------------------------------------------------------------------------------------ |
| **Title** | EVM Event Importing Standard |
| **Author(s)** | Michael Kaplan ([@mkaplan13](https://github.com/mkaplan13)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/114)) |
| **Track** | Best Practices Track |
## Abstract
Defines a standard smart contract interface and abstract implementation for importing EVM events from any blockchain within Avalanche using [Avalanche Warp Messaging](https://docs.avax.network/build/cross-chain/awm/overview).
## Motivation
The implementation of Avalanche Warp Messaging within `coreth` and `subnet-evm` exposes a [mechanism for getting authenticated hashes of blocks](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IWarpMessenger.sol#L43) that have been accepted on blockchains within Avalanche. Proofs of acceptance of blocks, such as those introduced in [ACP-75](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/75-acceptance-proofs), can be used to prove arbitrary events and state changes that occurred in those blocks. However, there is currently no clear standard for using authenticated block hashes in smart contracts within Avalanche, making it difficult to build applications that leverage this mechanism. In order to make effective use of authenticated block hashes, contracts must be provided with encoded block headers that match the authenticated block hashes, as well as Merkle proofs that are verified against the state or receipts root contained in the block header.
With a standard interface and abstract contract implementation that handles the authentication of block hashes and verification of Merkle proofs, smart contract developers on Avalanche will be able to much more easily create applications that leverage data from other Avalanche blockchains. These types of cross-chain applications do not require any direct interaction on the source chain.
## Specification
### Event Importing Interface
We propose that smart contracts importing EVM events emitted by other blockchains within Avalanche implement the following interface.
#### Methods
Imports the EVM event uniquely identified by the source blockchain ID, block header, transaction index, and log index.
The `blockHeader` must be validated to match the authenticated block hash from the `sourceBlockchainID`. The specification for EVM block headers can be found [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/block.go#L73).
The `txIndex` identifies the key in the receipts trie of the given block header for which the `receiptProof` must prove inclusion. The value obtained by verifying the `receiptProof` for that key is the encoded transaction receipt. The specification for EVM transaction receipts can be found [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/receipt.go#L62).
The `logIndex` identifies which event log from the given transaction receipt is to be imported.
Must emit an `EventImported` event upon success.
```solidity
function importEvent(
    bytes32 sourceBlockchainID,
    bytes calldata blockHeader,
    uint256 txIndex,
    bytes[] calldata receiptProof,
    uint256 logIndex
) external;
```
This interface does not require that the Warp precompile is used to authenticate block hashes. Implementations could:
* Use the Warp precompile to authenticate block hashes provided directly in the transaction calling `importEvent`.
* Check previously authenticated block hashes using an external contract.
* Allows for a block hash to be authenticated once and used in arbitrarily many transactions afterwards.
* Allows for alternative authentication mechanisms to be used, such as trusted oracles.
#### Events
Must trigger when an EVM event is imported.
```solidity
event EventImported(
    bytes32 indexed sourceBlockchainID,
    bytes32 indexed sourceBlockHash,
    address indexed loggerAddress,
    uint256 txIndex,
    uint256 logIndex
);
```
### Event Importing Abstract Contract
Applications importing EVM events emitted by other blockchains within Avalanche should be able to use a standard abstract implementation of the `importEvent` interface. This abstract implementation must handle:
* Authenticating block hashes from other chains.
* Verifying that the encoded `blockHeader` matches the imported block hash.
* Verifying the Merkle `receiptProof` for the given `txIndex` against the receipt root of the provided `blockHeader`.
* Decoding the event log identified by `logIndex` from the receipt obtained from verifying the `receiptProof`.
As noted above, implementations could directly use the Warp precompile's `getVerifiedWarpBlockHash` interface method for authenticating block hashes, as is done in the reference implementation [here](https://github.com/ava-labs/event-importer-poc/blob/main/contracts/src/EventImporter.sol#L51). Alternatively, implementations could use the `sourceBlockchainID` and `blockHeader` provided in the parameters to check with an external contract that the block has been accepted on the given chain. The specifics of such an external contract are outside the scope of this ACP, but for illustrative purposes, this could look along the lines of:
```solidity
bool valid = blockHashRegistry.checkAuthenticatedBlockHash(
    sourceBlockchainID,
    keccak256(blockHeader)
);
require(valid, "Invalid block header");
```
Inheriting contracts should only need to define the logic to be executed when an event is imported. This is done by providing an implementation of the following internal function, called by `importEvent`.
```solidity
function _onEventImport(EVMEventInfo memory eventInfo) internal virtual;
```
Where the `EVMEventInfo` struct is defined as:
```solidity
struct EVMLog {
    address loggerAddress;
    bytes32[] topics;
    bytes data;
}

struct EVMEventInfo {
    bytes32 blockchainID;
    uint256 blockNumber;
    uint256 txIndex;
    uint256 logIndex;
    EVMLog log;
}
```
The `EVMLog` struct is meant to match the `Log` type definition in the EVM [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/log.go#L39).
## Reference Implementation
See reference implementation on [Github here](https://github.com/ava-labs/event-importer-poc).
In addition to implementing the interface and abstract contract described above, the reference implementation shows how transactions can be constructed to import events using Warp block hash signatures.
## Open Questions
See [here](https://github.com/ava-labs/event-importer-poc?tab=readme-ov-file#open-questions-and-considerations).
## Security Considerations
The correctness of a contract using block hashes to prove that a specific event was emitted within that block depends on the correctness of:
1. The mechanism for authenticating that a block hash was finalized on another blockchain.
2. The Merkle proof validation library used to prove that a specific transaction receipt was included in the given block.
For considerations on using Avalanche Warp Messaging to authenticate block hashes, see [here](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/30-avalanche-warp-x-evm#security-considerations).
To improve confidence in the correctness of the Merkle proof validation used in implementations, well-audited and widely used libraries should be used.
## Acknowledgements
Using Merkle proofs to verify events/state against root hashes is not a new idea. Protocols such as [IBC](https://ibc.cosmos.network/v8/), [Rainbow Bridge](https://github.com/Near-One/rainbow-bridge), and [LayerZero](https://layerzero.network/publications/LayerZero_Whitepaper_V1.1.0.pdf), among others, have previously suggested using Merkle proofs in a similar manner.
Thanks to [@aaronbuchwald](https://github.com/aaronbuchwald) for proposing the `getVerifiedWarpBlockHash` interface be included in the AWM implementation within Avalanche EVMs, which enables this type of use case.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-113: Provable Randomness
URL: /docs/acps/113-provable-randomness
Details for Avalanche Community Proposal 113: Provable Randomness
| ACP | 113 |
| :------------ | :--------------------------------------------------------------------------------- |
| **Title** | Provable Virtual Machine Randomness |
| **Author(s)** | Tsachi Herman [http://github.com/tsachiherman](http://github.com/tsachiherman) |
| **Status** | Stale ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/142)) |
| **Track** | Standards |
## Future Work
This ACP was marked as stale due to its documented security concerns.
In order to safely utilize randomness produced by this mechanism, the consumer of the randomness must:
1. Define a security threshold `x` which is the maximum number of consecutive blocks which can be proposed by a malicious entity.
2. After committing to a request for randomness, the consumer must wait for `x` blocks.
3. After waiting for `x` blocks, the consumer must verify that the randomness was not biased during the `x` blocks.
4. If the randomness was biased, it would be insufficient to request randomness again, as this would allow the malicious block producer to discard any randomness that it did not like. If using the randomness mechanism proposed in this ACP, the consumer of the randomness must be able to terminate the request for randomness in such a way that no participant would desire the outcome. Griefing attacks would likely result from such a construction.
### Alternative Mechanisms
There are alternative mechanisms that would not result in such security concerns, such as:
* Utilizing a deterministic threshold signature scheme to finalize a block in consensus would allow the threshold signature to be used during the execution of the block.
* Utilizing threshold commit-reveal schemes that guarantee that committed values will always be revealed in a timely manner.
However, these mechanisms are likely too costly to be introduced into the Avalanche Primary Network due to its validator set size.
It is left to a future ACP to specify the implementation of one of these alternative schemes for L1 networks with smaller sized validator sets.
## Abstract
Avalanche offers developers flexibility through subnets and EVM-compatible smart contracts. However, the platform's deterministic block execution limits the use of traditional random number generators within these contracts.
To address this, a mechanism is proposed to generate verifiable, non-cryptographic random number seeds on the Avalanche platform. This method ensures uniformity while allowing developers to build more versatile applications.
## Motivation
Reliable randomness is essential for building exciting applications on Avalanche. Games, participant selection, dynamic content, supply chain management, and decentralized services all rely on unpredictable outcomes to function fairly. Randomness also fuels functionalities like unique identifiers and simulations. Without a secure way to generate random numbers within smart contracts, Avalanche applications become limited.
Avalanche's traditional reliance on external oracles for randomness creates complexity and bottlenecks. These oracles inflate costs, hinder transaction speed, and are cumbersome to integrate. As Avalanche scales to more Subnets, this dependence on external systems becomes increasingly unsustainable.
A solution for verifiable random number generation within Avalanche solves these problems. It provides fair randomness functionality across the chains, at no additional cost. This paves the way for a more efficient Avalanche ecosystem.
## Specification
### Changes Summary
The existing Avalanche protocol breaks block building into two parts: external and internal. The external block is the Snowman++ block, whereas the internal block is the actual virtual machine block.
To support randomness, a BLS-based VRF is used that recursively signs its own previous signature as its message. Since BLS signatures are deterministic, they provide a reliable way to construct a VRF.
For proposers that do not have a BLS key associated with their node, the hash of the signature from the previous round is used in place of their signature.
In order to bootstrap the signature chain, a missing signature would be replaced with a byte slice that is the hash of a verifiable and trusted seed.
The changes proposed here affect the way blocks are validated. Therefore, this change must be deployed as a mandatory upgrade.
```
+-----------------------+ +-----------------------+
| Block n | <-------- | Block n+1 |
+-----------------------+ +-----------------------+
| VRF-Sig(n) | | VRF-Sig(n+1) |
| ... | | ... |
+-----------------------+ +-----------------------+
+-----------------------+ +-----------------------+
| VM n | | VM n+1 |
+-----------------------+ +-----------------------+
| VRF-Out(n) | | VRF-Out(n+1) |
+-----------------------+ +-----------------------+
VRF-Sig(n+1) = Sign(VRF-Sig(n), Block n+1 proposer's BLS key)
VRF-Out(n) = Hash(VRF-Sig(n))
```
### Changes Details
#### Step 1. Adding BLS signature to proposed blocks
```go
type statelessUnsignedBlock struct {
    …
    vrfSig []byte `serialize:"true"`
}
```
#### Step 2. Populate signature
When a block proposer attempts to build a new block, it would need to use the parent block as a reference.
The `vrfSig` field within each block is daisy-chained to the `vrfSig` field from its parent block.
Populating the `vrfSig` follows this logic:
1. The current proposer has a BLS key
a. If the parent block has an empty `vrfSig` signature, the proposer would sign the bootStrappingBlockSignature with its BLS key. See the bootStrappingBlockSignature details below. This is the base case.
b. If the parent block does not have an empty `vrfSig` signature, that signature would be signed using the proposer’s BLS key.
2. The current proposer does not have a BLS key
a. If the parent block has a non-empty `vrfSig` signature, the proposer would set the proposed block `vrfSig` to the 32 byte hash result of the following preimage:
```
+-------------------------+----------+------------+
| prefix : | [8]byte | "rng-derv" |
+-------------------------+----------+------------+
| vrfSig : | [96]byte | 96 bytes |
+-------------------------+----------+------------+
```
b. If the parent block has an empty `vrfSig` signature, the proposer would leave the `vrfSig` on the new block empty.
The bootStrappingBlockSignature that would be used above is the hash of the following preimage:
```
+-----------------------+----------+------------+
| prefix : | [8]byte | "rng-root" |
+-----------------------+----------+------------+
| networkID: | uint32 | 4 bytes |
+-----------------------+----------+------------+
| chainID : | [32]byte | 32 bytes |
+-----------------------+----------+------------+
```
#### Step 3. Signature Verification
This signature verification would perform the exact opposite of what was done in step 2, and would verify the cryptographic correctness of the operation.
Validating the `vrfSig` follows this logic:
1. The proposer has a BLS key
a. If the parent block's `vrfSig` was non-empty, then the `vrfSig` in the proposed block is verified to be a valid BLS signature of the parent block's `vrfSig` value for the proposer's BLS public key.
b. If the parent block's `vrfSig` was empty, then a BLS signature verification of the proposed block `vrfSig` against the proposer’s BLS public key and bootStrappingBlockSignature would take place.
2. The proposer does not have a BLS key
a. If the parent block had a non-empty `vrfSig`, then the hash of the preimage (as described above) would be compared against the proposed `vrfSig`.
b. If the parent block had an empty `vrfSig`, then the proposed block's `vrfSig` would be validated to be empty.
#### Step 4. Extract the VRF Out and pass to block builders
Calculating the VRF Out would be done by hashing the preimage of the following struct:
```
+-----------------------+----------+------------+
| prefix : | [8]byte | "vrfout " |
+-----------------------+----------+------------+
| vrfout: | [96]byte | 96 bytes |
+-----------------------+----------+------------+
```
Before calculating the VRF Out, the method needs to explicitly check the case where the `vrfSig` is empty. In that case, the output of the VRF Out needs to be empty as well.
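A sketch of the VRF-Out computation, including the explicit empty-signature case. The hash function and the exact space-padding of the 8-byte prefix are assumptions of this illustration:

```python
import hashlib

def vrf_out(vrf_sig: bytes) -> bytes:
    """Hash of the 'vrfout' preimage; an empty signature yields an empty output."""
    if not vrf_sig:
        return b""
    assert len(vrf_sig) == 96, "BLS signature must be 96 bytes"
    # "vrfout" padded with spaces to 8 bytes (padding assumed), then the signature
    preimage = b"vrfout  " + vrf_sig
    assert len(preimage) == 8 + 96
    return hashlib.sha256(preimage).digest()
```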
## Backwards Compatibility
The above design takes backward compatibility into consideration. The chain keeps working as before and, at some point, has the newly added `vrfSig` populated.
From a usage perspective, each VM needs to decide whether to use the newly provided random seed. Initially, this seed would be all zeros, and it would become populated once the feature has rolled out to a sufficient number of nodes.
Also, as mentioned in the summary, these changes would necessitate a network upgrade.
## Reference Implementation
A full reference implementation has not been provided yet. It will be provided once this ACP is considered `Implementable`.
## Security Considerations
Virtual machine random seeds, while appearing to offer a source of randomness within smart contracts, fall short when it comes to cryptographic security. Here's a breakdown of the critical issues:
* Limited Permutation Space: The number of possible random values is derived from the number of validators. While no validator, nor a validator set, would be able to manipulate the randomness into any single value, nefarious actors might be able to exclude specific numbers.
* Predictability Window: The seed value might be accessible to other parties before the smart contract can benefit from its uniqueness. This predictability window creates a vulnerability. An attacker could potentially observe the seed generation process and predict the sequence of "random" numbers it will produce, compromising the entire cryptographic foundation of your smart contract.
Despite these limitations appearing severe, attackers face significant hurdles to exploit them. First, the attacker can't control the random number, limiting the attack's effectiveness to how that number is used. Second, a substantial amount of AVAX is needed. And last, such an attack would likely decrease AVAX's value, hurting the attacker financially.
One potential attack vector involves collusion among multiple proposers to manipulate the random number selection. These attackers could strategically choose to propose or abstain from proposing blocks, effectively introducing a bias into the system. By working together, they could potentially increase their chances of generating a random number favorable to their goals.
However, the effectiveness of this attack is significantly limited for the following reasons:
* Limited options: While colluding attackers expand their potential random number choices, the overall pool remains immense (2^256 possibilities). This drastically reduces their ability to target a specific value.
* Protocol's countermeasure: The protocol automatically eliminates any bias introduced by previous proposals once an honest proposer submits their block.
* Detectability: Exploitation of this attack vector is readily identifiable. A successful attack necessitates coordinated collusion among multiple nodes to synchronize their proposer slots for a specific block height (the proposer slot order is known in advance). Subsequent to this alignment, a designated node constructs the block proposal. The network maintains a record of the proposer slot utilized for each block. A value of zero for the proposer slot unequivocally indicates the absence of an exploit. Increasing values correlate with a heightened risk of exploitation. It is important to note that non-zero slot numbers may also arise from transient network disturbances.
While this attack is theoretically possible, its practical impact is negligible due to the vast number of potential outcomes and the protocol's inherent safeguards.
## Open Questions
### How would the proposed changes impact proposer selection and its inherent bias?
The proposed modifications will not influence the selection process for block proposers.
Proposers retain the ability to determine which transactions are included in a block.
This inherent proposer bias remains unchanged and is unaffected by the proposed changes.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-118: Warp Signature Request
URL: /docs/acps/118-warp-signature-request
Details for Avalanche Community Proposal 118: Warp Signature Request
| ACP | 118 |
| :------------ | :------------------------------------------------------------------------------------- |
| **Title** | Warp Signature Interface Standard |
| **Author(s)** | Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/123)) |
| **Track** | Best Practices Track |
## Abstract
Proposes a standard [AppRequest](https://github.com/ava-labs/avalanchego/blob/master/proto/p2p/p2p.proto#L385) payload format type for requesting Warp signatures for the provided bytes, such that signatures may be requested in a VM-agnostic manner. To make this concrete, this standard type should be defined in AvalancheGo such that VMs can import it at the source code level. This will simplify signature aggregator implementations by allowing them to depend only on AvalancheGo for message construction, rather than individual VM codecs.
## Motivation
Warp message signatures consist of an aggregate BLS signature composed of the individual signatures of a subnet's validators. Individual signatures need to be retrievable by the party that wishes to construct an aggregate signature. At present, this is left to VMs to implement, as is the case with [Subnet EVM](https://github.com/ava-labs/subnet-evm/blob/v0.6.7/plugin/evm/message/signature_request.go#20) and [Coreth](https://github.com/ava-labs/coreth/blob/v0.13.6-rc.0/plugin/evm/message/signature_request.go#L20).
This creates friction in applications that are intended to operate across many VMs (or distinct implementations of the same VM). As an example, the reference Warp message relayer implementation, [awm-relayer](https://github.com/ava-labs/awm-relayer), fetches individual signatures from validators and aggregates them before sending the Warp message to its destination chain for verification. However, Subnet EVM and Coreth have distinct codecs, requiring the relayer to [switch](https://github.com/ava-labs/awm-relayer/blob/v1.4.0-rc.0/relayer/application_relayer.go#L372) according to the target codebase.
Another example is ACP-75, which aims to implement acceptance proofs using Warp. The signature aggregation mechanism is not [specified](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/75-acceptance-proofs/README.md#signature-aggregation), which is a blocker for that ACP to be marked implementable.
Standardizing the Warp Signature Request interface by defining it as a format for `AppRequest` message payloads in AvalancheGo would simplify the implementation of ACP-75, and streamline signature aggregation for out-of-protocol services such as Warp message relayers.
## Specification
We propose the following types, implemented as Protobuf types that may be decoded from the `AppRequest`/`AppResponse` `app_bytes` field. By way of example, this approach is currently used to [implement](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/proto/sdk/sdk.proto#7) and [parse](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/network/p2p/gossip/message.go#22) gossip `AppRequest` types.
* `SignatureRequest` includes two fields. `message` specifies the payload that the returned signature should correspond to, namely a serialized unsigned Warp message. `justification` specifies arbitrary data that the requested node may use to decide whether or not it is willing to sign `message`. `justification` may not be required by every VM implementation, but `message` should always contain the bytes to be signed. It is up to the VM to define the validity requirements for the `message` and `justification` payloads.
```protobuf
message SignatureRequest {
bytes message = 1;
bytes justification = 2;
}
```
* `SignatureResponse` is the corresponding `AppResponse` type that returns the requested signature.
```protobuf
message SignatureResponse {
bytes signature = 1;
}
```
### Handlers
For each of the above types, VMs must implement corresponding `AppRequest` and `AppResponse` handlers. The `AppRequest` handler should be [registered](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/network/p2p/network.go#L173) using the canonical handler ID, defined as `2`.
## Use Cases
Generally speaking, `SignatureRequest` can be used to request a signature over a Warp message by serializing the unsigned Warp message into `message`, and populating `justification` as needed.
### Sign a known Warp Message
Subnet EVM and Coreth store messages that have been seen (i.e. on-chain messages sent through the [Warp Precompile](https://github.com/ava-labs/subnet-evm/tree/v0.6.7/precompile/contracts/warp) and [off-chain](https://github.com/ava-labs/subnet-evm/blob/v0.6.7/plugin/evm/config.go#L226) Warp messages) such that a signature over any such message can be provided on request. `SignatureRequest` can be used for this case by specifying the Warp message in `message`. The queried node may then look up the Warp message in its database and return the signature. In this case, `justification` is not needed.
### Attest to an on-chain event
Subnet EVM and Coreth also support attesting to block hashes via Warp, by serving signature requests made using the following `AppRequest` type:
```text
type BlockSignatureRequest struct {
BlockID ids.ID
}
```
`SignatureRequest` can achieve this by specifying an unsigned Warp message with the `BlockID` as the payload, and serializing that message into `message`. `justification` may optionally be used to provide additional context, such as the block height of the given block ID.
### Confirm that an event did not occur
With [ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets), Subnets will have the ability to manage their own validator sets. The Warp message payload contained in a `RegisterSubnetValidatorTx` includes an `expiry`, after which the specified validation ID (i.e. a unique hash over the Subnet ID, node ID, stake weight, and expiry) becomes invalid. The Subnet needs to know that this validation ID is expired so that it can keep its locally tracked validator set in sync with the P-Chain. We also assume that the P-Chain will not persist expired or invalid validation IDs.
We can use `SignatureRequest` to construct a Warp message attesting that the validation ID expired. We do so by serializing an unsigned Warp message containing the validation ID into `message`, and providing the validation ID hash preimage in `justification` for the P-Chain to reconstruct the expired validation ID.
## Security Considerations
VMs have full latitude when implementing `SignatureRequest` handlers, and should carefully consider which `message` payloads their implementation is willing to sign, given a `justification`. Some considerations include, but are not limited to:
* Input validation. Handlers should validate `message` and `justification` payloads to ensure that they decode to coherent types, and that they contain only expected data.
* Signature DoS. AvalancheGo's peer-to-peer networking stack implements message rate limiting to mitigate the risk of DoS, but VMs should also consider the cost of parsing and signing a `message` payload.
* Payload collision. `message` payloads should be implemented as distinct types that do not overlap with one another within the context of signed Warp messages from the VM. For instance, absent distinct typing, a `message` payload containing a 32-byte hash could be interpreted as a transaction hash, a block hash, or a blockchain ID.
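One common mitigation for payload collisions is to prefix each payload with a type ID so that identically sized payloads can never be confused. The sketch below is illustrative only; the `payloadBlockHash`/`payloadTxHash` IDs are hypothetical, and real VMs define their own payload registries and codecs.

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// Hypothetical payload type IDs; a real VM defines its own registry.
const (
	payloadBlockHash uint32 = iota + 1
	payloadTxHash
)

// wrapPayload prefixes a raw 32-byte hash with a type ID so that a block
// hash and a transaction hash can never be mistaken for one another.
func wrapPayload(typeID uint32, hash [32]byte) []byte {
	out := make([]byte, 4+32)
	binary.BigEndian.PutUint32(out, typeID)
	copy(out[4:], hash[:])
	return out
}

// parsePayload recovers the type ID and hash, rejecting malformed input.
func parsePayload(b []byte) (uint32, [32]byte, error) {
	var hash [32]byte
	if len(b) != 4+32 {
		return 0, hash, errors.New("unexpected payload length")
	}
	copy(hash[:], b[4:])
	return binary.BigEndian.Uint32(b), hash, nil
}

func main() {
	var h [32]byte
	h[0] = 0xab
	typeID, _, _ := parsePayload(wrapPayload(payloadBlockHash, h))
	fmt.Println(typeID == payloadBlockHash)
}
```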
## Backwards Compatibility
This change is backwards compatible for VMs, as nodes running older versions that do not support the new message types will simply drop incoming messages.
## Reference Implementation
A reference implementation containing the Protobuf types and the canonical handler ID can be found [here](https://github.com/ava-labs/avalanchego/pull/3218).
## Acknowledgements
Thanks to @joshua-kim, @iansuvak, @aaronbuchwald, @michaelkaplan13, and @StephenButtolph for discussion and feedback on this ACP.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-125: Basefee Reduction
URL: /docs/acps/125-basefee-reduction
Details for Avalanche Community Proposal 125: Basefee Reduction
| ACP | 125 |
| :------------ | :------------------------------------------------------------------------------------------------------------------------------------ |
| **Title** | Reduce C-Chain minimum base fee from 25 nAVAX to 1 nAVAX |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Darioush Jalali ([@darioush](https://github.com/darioush)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/127)) |
| **Track** | Standards |
## Abstract
Reduce the minimum base fee on the Avalanche C-Chain from 25 nAVAX to 1 nAVAX.
## Motivation
With dynamic fees, the gas price is supposed to be a result of a continuous auction such that the consumed gas per second converges to the target gas usage per second.
When dynamic fees were first introduced, safeguards were added to ensure the mechanism worked as intended, such as a relatively high minimum gas price and a maximum gas price.
The maximum gas price has since been entirely removed. The minimum gas price has been reduced significantly. However, the base fee is often observed pinned to this minimum. This shows that it is higher than what the market demands, and therefore it is artificially reducing network usage.
## Specification
The dynamic fee calculation currently enforces a minimum base fee of 25 nAVAX.
This change proposes reducing the minimum base fee to 1 nAVAX upon the next network upgrade activation.
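The mechanics of the minimum can be sketched as a simple clamp on the computed base fee. The snippet below is illustrative, not coreth's actual implementation; it assumes the C-Chain convention that 1 nAVAX equals 10^9 wei.

```go
package main

import "fmt"

const nAVAX = 1_000_000_000 // 1 nAVAX = 10^9 wei on the C-Chain

// clampBaseFee applies the protocol's minimum base fee to a computed value.
// The minimum is a parameter: 25 nAVAX before this ACP, 1 nAVAX after.
func clampBaseFee(computed, min uint64) uint64 {
	if computed < min {
		return min
	}
	return computed
}

func main() {
	computed := uint64(3 * nAVAX) // market-clearing fee below the old floor
	fmt.Println(clampBaseFee(computed, 25*nAVAX) / nAVAX) // pinned to 25 before
	fmt.Println(clampBaseFee(computed, 1*nAVAX) / nAVAX)  // market price of 3 after
}
```

This illustrates the motivation above: whenever the market-clearing fee is below the floor, the observed base fee pins to the floor rather than reflecting demand.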
## Backwards Compatibility
This ACP modifies the consensus rules for the C-Chain and therefore requires a network upgrade.
## Reference Implementation
A draft implementation of this ACP for the coreth VM can be found [here](https://github.com/ava-labs/coreth/pull/604/files).
## Security Considerations
Lower gas costs may increase state bloat. However, we note that the dynamic fee algorithm responded appropriately during periods of high use (such as Dec. 2023), which gives reasonable confidence that enforcing a 25 nAVAX minimum fee is no longer necessary.
## Open Questions
N/A
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-13: Subnet Only Validators
URL: /docs/acps/13-subnet-only-validators
Details for Avalanche Community Proposal 13: Subnet Only Validators
| ACP | 13 |
| :---------------- | :----------------------------------------------------------------------------------------------------- |
| **Title** | Subnet-Only Validators (SOVs) |
| **Author(s)** | Patrick O'Grady ([contact@patrickogrady.xyz](mailto:contact@patrickogrady.xyz)) |
| **Status** | Stale |
| **Track** | Standards |
| **Superseded-By** | [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) |
## Abstract
Introduce a new type of staker, Subnet-Only Validators (SOVs), that can validate an Avalanche Subnet and participate in Avalanche Warp Messaging (AWM) without syncing or becoming a Validator on the Primary Network. Require SOVs to pay a refundable fee of 500 $AVAX on the P-Chain to register as a Subnet Validator instead of staking at least 2000 $AVAX, the minimum requirement to become a Primary Network Validator. Preview a future transition to Pay-As-You-Go Subnet Validation and \$AVAX-Augmented Subnet Security.
*This ACP does not modify/deprecate the existing Subnet Validation semantics for Primary Network Validators.*
## Motivation
Each node operator must stake at least 2000 $AVAX ($20k at the time of writing) to first become a Primary Network Validator before they qualify to become a Subnet Validator. Most Subnets aim to launch with at least 8 Subnet Validators, which requires staking 16000 $AVAX ($160k at time of writing). All Subnet Validators, to satisfy their role as Primary Network Validators, must also [allocate 8 AWS vCPU, 16 GB RAM, and 1 TB storage](https://github.com/ava-labs/avalanchego/blob/master/README.md#installation) to sync the entire Primary Network (X-Chain, P-Chain, and C-Chain) and participate in its consensus, in addition to whatever resources are required for each Subnet they are validating.
Avalanche Warp Messaging (AWM), the native interoperability mechanism for the Avalanche Network, provides a way for Subnets to communicate with each other/C-Chain without a trusted intermediary. Any Subnet Validator must be able to register a BLS key and participate in AWM, otherwise a Subnet may not be able to generate a BLS Multi-Signature with sufficient participating stake.
Regulated entities that are prohibited from validating permissionless, smart contract-enabled blockchains (like the C-Chain) can’t launch a Subnet because they can’t opt-out of Primary Network Validation. This deployment blocker prevents a large cohort of Real World Asset (RWA) issuers from bringing unique, valuable tokens to the Avalanche Ecosystem (that could move between C-Chain \<-> Subnets using AWM/Teleporter).
A widely validated Subnet that is not properly metered could destabilize the Primary Network if usage spikes unexpectedly. Underprovisioned Primary Network Validators running such a Subnet may exit with an OOM exception, see degraded disk performance, or find it difficult to allocate CPU time to P/X/C-Chain validation. The inverse also holds for Subnets with the Primary Network (where some undefined behavior could bring a Subnet offline).
Although the fee paid to the Primary Network to operate a Subnet does not go up with the amount of activity on the Subnet, the fixed, upfront cost of setting up a Subnet Validator on the Primary Network deters new projects that prefer smaller, even variable, costs until demand is observed. *Unlike L2s that pay some increasing fee (usually denominated in units per transaction byte) to an external chain for data availability and security as activity scales, Subnets provide their own security/data availability and the only cost operators must pay from processing more activity is the hardware cost of supporting additional load.*
Elastic Subnets allow any community to weight Subnet Validation based on some staking token and reward Subnet Validators with high uptime with said staking token. However, there is no way for \$AVAX holders on the Primary Network to augment the security of such Subnets.
## Specification
### Required Changes
1. Introduce a new type of staker, Subnet-Only Validators (SOVs), that can validate an Avalanche Subnet and participate in Avalanche Warp Messaging (AWM) without syncing or becoming a Validator on the Primary Network
2. Introduce a refundable fee (called a "lock") of 500 \$AVAX that nodes must pay to become an SOV
3. Introduce a non-refundable fee of 0.1 \$AVAX that SOVs must pay to become an SOV
4. Introduce a new transaction type on the P-Chain to register as an SOV (i.e. `AddSubnetOnlyValidatorTx`)
5. Add a mode to ANCs that allows SOVs to optionally disable full Primary Network verification (only need to verify P-Chain)
6. ANCs track IPs for SOVs to ensure Subnet Validators can find peers whether or not they are Primary Network Validators
7. Provide a guaranteed rate limiting allowance for SOVs like Primary Network Validators
Because SOVs do not validate the Primary Network, they will not be rewarded with $AVAX for "locking" the 500 $AVAX required to become an SOV. This enables people interested in validating Subnets to opt for a lower upfront $AVAX commitment and lower infrastructure costs instead of $AVAX rewards. Additionally, SOVs will only be required to sync the P-Chain (not X/C-Chain) to track any validator set changes in their Subnet and to support Cross-Subnet communication via AWM (see "Primary Network Partial Sync" mode introduced in [Cortina 8](https://github.com/ava-labs/avalanchego/releases/tag/v1.10.8)). The lower resource requirement in this "minimal mode" will provide Subnets with greater flexibility of validation hardware requirements as operators are not required to reserve any resources for C-Chain/X-Chain operation. If an SOV wishes to sync the entire Primary Network, they still can.
### Future Work
The previously described specification is a minimal, additive change to Subnet Validation semantics that prepares the Avalanche Network for a more flexible Subnet model. It alone, however, fails to communicate this flexibility nor provides an alternative use of \$AVAX that would have otherwise been used to create Subnet Validators.
Below are two high-level ideas (Pay-As-You-Go Subnet Validation Registration Fees and \$AVAX-Augmented Security) that highlight how this initial change could be extended in the future. If the Avalanche Community is interested in their adoption, they should each be proposed as a unique ACP where they can be properly specified. **These ideas are only suggestions for how the Avalanche Network could be modified in the future if this ACP is adopted. Supporting this ACP does not require supporting these ideas or committing to their rollout.**
#### Pay-As-You-Go Subnet Validation Registration Fees
*Transition Subnet Validator registration to a dynamically priced, continuously charged fee (that doesn't require locking large amounts of \$AVAX upfront).*
While it would be possible to just transition to a lower required "lock" amount, many think that it would be more competitive to transition to a dynamically priced, continuous payment mechanism to register as a Subnet Validator. This new mechanism would target some $Y nAVAX fee that would be paid by each Subnet Validator per Subnet per second (pulling from a "Subnet Validator's Account") instead of requiring a large upfront lockup of $AVAX.
The rate of nAVAX/second should be set by the demand for validating Subnets on Avalanche compared to some usage target per Subnet and across all Subnets. This rate should be locked for each Subnet Validation period to ensure operators are not subject to surprise costs if demand rises significantly over time. The optimization work outlined in [BLS Multi-Signature Voting](https://hackmd.io/@patrickogrady/100k-subnets#How-will-BLS-Multi-Signature-uptime-voting-work) should allow the min rate to be set as low as \~512-4096 nAVAX/second (or 1.3-10.6 \$AVAX/month).
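The quoted monthly figures follow directly from the per-second rates, assuming a 30-day month. The helper below is only a back-of-the-envelope check of that arithmetic:

```go
package main

import "fmt"

// monthlyAVAX converts a per-second rate in nAVAX to an approximate
// monthly cost in AVAX, assuming a 30-day month (2,592,000 seconds).
func monthlyAVAX(nAVAXPerSecond float64) float64 {
	const secondsPerMonth = 30 * 24 * 60 * 60
	return nAVAXPerSecond * secondsPerMonth / 1e9 // 1e9 nAVAX per AVAX
}

func main() {
	fmt.Printf("%.1f\n", monthlyAVAX(512))  // low end of the quoted range
	fmt.Printf("%.1f\n", monthlyAVAX(4096)) // high end of the quoted range
}
```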
Fees paid to the Avalanche Network for PAYG could be burned, like all other P-Chain, X-Chain, and C-Chain transactions, or they could be partially rewarded to Primary Network Validators as a "boost" over the existing staking rewards. The nice byproduct of the latter approach is that it better aligns Primary Network Validators with the growth of Subnets.
#### \$AVAX-Augmented Subnet Security
*Allow pledging unstaked $AVAX to Subnet Validators on Elastic Subnets that can be slashed if said Subnet Validator commits an attributable fault (i.e. proposes/signs conflicting blocks/AWM payloads). Reward locked $AVAX associated with Subnet Validators that were not slashed with Elastic Subnet staking rewards.*
Currently, the only way to secure an Elastic Subnet is to stake its custom staking token (defined in the `TransformSubnetTx`). Many have requested the option to use $AVAX for this token, however, this could easily allow an adversary to take over small Elastic Subnets (where the amount of $AVAX staked may be much less than the circulating supply).
$AVAX-Augmented Subnet Security would allow anyone holding $AVAX to lock it to specific Subnet Validators and earn Elastic Subnet reward tokens for supporting honest participants. Recall, all stake management on the Avalanche Network (even for Subnets) occurs on the P-Chain. Thus, staked tokens ($AVAX and/or custom staking tokens used in Elastic Subnets) and stake weights (used for AWM verification) are secured by the full $AVAX stake of the Primary Network. $AVAX-Augmented Subnet Security, like staking, would be implemented on the P-Chain and enjoy the full security of the Primary Network. This approach means locking $AVAX occurs on the Primary Network (no need to transfer \$AVAX to a Subnet, which may not be secured by meaningful value yet) and proofs of malicious behavior are processed on the Primary Network (a colluding Subnet could otherwise choose not to process a proof that would lead to their "lockers" being slashed).
*This native approach is comparable to the idea of using $ETH to secure DA on [EigenLayer](https://www.eigenlayer.xyz/) (without reusing stake) or $BTC to secure Cosmos Zones on [Babylon](https://babylonchain.io/) (but not using an external ecosystem).*
## Backwards Compatibility
* Existing Subnet Validation semantics for Primary Network Validators are not modified by this ACP. This means that all existing Subnet Validators can continue validating both the Primary Network and whatever Subnets they are validating. This change would just provide a new option for Subnet Validators that allows them to sacrifice their staking rewards for a smaller upfront \$AVAX commitment and lower infrastructure costs.
* Support for this ACP would require adding a new transaction type to the P-Chain (i.e. `AddSubnetOnlyValidatorTx`). This new transaction is an execution-breaking change that would require a mandatory Avalanche Network upgrade to activate.
## Reference Implementation
A full implementation will be provided once this ACP is considered `Implementable`. However, some initial ideas are presented below.
### `AddSubnetOnlyValidatorTx`
```text
type AddSubnetOnlyValidatorTx struct {
// Metadata, inputs and outputs
BaseTx `serialize:"true"`
// Describes the validator
// The NodeID included in [Validator] must be the Ed25519 public key.
Validator `serialize:"true" json:"validator"`
// ID of the subnet this validator is validating
Subnet ids.ID `serialize:"true" json:"subnetID"`
// [Signer] is the BLS key for this validator.
// Note: We do not enforce that the BLS key is unique across all validators.
// This means that validators can share a key if they so choose.
// However, a NodeID does uniquely map to a BLS key
Signer signer.Signer `serialize:"true" json:"signer"`
// Where to send locked tokens when done validating
LockOuts []*avax.TransferableOutput `serialize:"true" json:"lock"`
// Where to send validation rewards when done validating
ValidatorRewardsOwner fx.Owner `serialize:"true" json:"validationRewardsOwner"`
// Where to send delegation rewards when done validating
DelegatorRewardsOwner fx.Owner `serialize:"true" json:"delegationRewardsOwner"`
// Fee this validator charges delegators as a percentage, times 10,000
// For example, if this validator has DelegationShares=300,000 then they
// take 30% of rewards from delegators
DelegationShares uint32 `serialize:"true" json:"shares"`
}
```
*`AddSubnetOnlyValidatorTx` is almost the same as [`AddPermissionlessValidatorTx`](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/vms/platformvm/txs/add_permissionless_validator_tx.go#L33-L58), the only exception being that `StakeOuts` are now `LockOuts`.*
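The `DelegationShares` encoding in the struct above stores a percentage times 10,000. A toy illustration of splitting a reward under that encoding (the `delegationFee` helper is hypothetical, not part of the proposed implementation):

```go
package main

import "fmt"

// delegationFee splits a reward between validator and delegator given
// DelegationShares, a percentage times 10,000 (so 300,000 means 30%).
func delegationFee(reward uint64, delegationShares uint32) (validatorCut, delegatorCut uint64) {
	validatorCut = reward * uint64(delegationShares) / 1_000_000
	return validatorCut, reward - validatorCut
}

func main() {
	v, d := delegationFee(1000, 300_000) // validator takes a 30% fee
	fmt.Println(v, d)
}
```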
### `GetSubnetPeers`
To support tracking SOV IPs, a new message should be added to the P2P specification that allows Subnet Validators to request the IP of all peers a node knows about on a Subnet (these Signed IPs won't be gossiped like they are for Primary Network Validators because they don't need to be known by the entire Avalanche Network):
```protobuf
message GetSubnetPeers {
bytes subnet_id = 1;
}
```
*It would be a nice addition if a bloom filter could also be provided here so that an ANC only sends IPs of peers that the original sender does not know.*
ANCs should respond to this incoming message with a [`PeerList` message](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/proto/p2p/p2p.proto#L135-L148).
## Security Considerations
* Any Subnet Validator running in "Partial Sync Mode" will not be able to verify Atomic Imports on the P-Chain and will rely entirely on Primary Network consensus to only accept valid P-Chain blocks.
* High-throughput Subnets will be better isolated from the Primary Network and should improve its resilience (i.e. surges of traffic on some Subnet cannot destabilize a Primary Network Validator).
* Avalanche Network Clients (ANCs) must track IPs and provide allocated bandwidth for SOVs even though they are not Primary Network Validators.
## Open Questions
* To help orient the Avalanche Community around this wide-ranging, and likely long-running, conversation about the relationship between the Primary Network and Subnets, should we come up with a project name to describe the effort? I've been casually referring to all of these things as the *Astra Upgrade Track*, but this is definitely up for discussion (it may be more confusing than it is worth).
## Appendix
A draft of this ACP was posted in the ["Ideas" Discussion Board](https://github.com/avalanche-foundation/ACPs/discussions/10#discussioncomment-7373486), as suggested by the [ACP README](https://github.com/avalanche-foundation/ACPs#step-1-post-your-idea-to-github-discussions). Feedback on this draft was collected and addressed on both the "Ideas" Discussion Board and on [HackMD](https://hackmd.io/@patrickogrady/100k-subnets#Feedback-to-Draft-Proposal).
## Acknowledgements
Thanks to @luigidemeo1, @stephenbuttolph, @aaronbuchwald, @dhrubabasu, and @abi87 for their feedback on these ideas.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-131: Cancun Eips
URL: /docs/acps/131-cancun-eips
Details for Avalanche Community Proposal 131: Cancun Eips
| ACP | 131 |
| :------------ | :--------------------------------------------------------------------------------------------------------------- |
| **Title** | Activate Cancun EIPs on C-Chain and Subnet-EVM chains |
| **Author(s)** | Darioush Jalali ([@darioush](https://github.com/darioush)), Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/139)) |
| **Track** | Standards, Subnet |
## Abstract
Enable new EVM opcodes and opcode changes in accordance with the following EIPs on the Avalanche C-Chain and Subnet-EVM chains:
* [EIP-4844: BLOBHASH opcode](https://eips.ethereum.org/EIPS/eip-4844)
* [EIP-7516: BLOBBASEFEE opcode](https://eips.ethereum.org/EIPS/eip-7516)
* [EIP-1153: Transient storage](https://eips.ethereum.org/EIPS/eip-1153)
* [EIP-5656: MCOPY opcode](https://eips.ethereum.org/EIPS/eip-5656)
* [EIP-6780: SELFDESTRUCT only in same transaction](https://eips.ethereum.org/EIPS/eip-6780)
Note that blob transactions from EIP-4844 are excluded, and blocks containing them will still be considered invalid.
## Motivation
The listed EIPs were activated on Ethereum mainnet as part of the [Cancun upgrade](https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/cancun.md#included-eips). This proposal is to activate them on the Avalanche C-Chain in the next network upgrade, to maintain compatibility with upstream EVM tooling, infrastructure, and developer experience (e.g., Solidity compiler defaults >= [0.8.25](https://github.com/ethereum/solidity/releases/tag/v0.8.25)). Additionally, it recommends the activation of the same EIPs on Subnet-EVM chains.
## Specification & Reference Implementation
The opcodes (EVM execution modifications) and block header modifications should be adopted as specified in the EIPs themselves. Other changes such as enabling new transaction types or mempool modifications are not in scope (specifically, blob transactions from EIP-4844 are excluded and blocks containing them are considered invalid). ANCs (Avalanche Network Clients) can adopt the implementation as specified in the [coreth](https://github.com/ava-labs/coreth) repository, which was adopted from the [go-ethereum v1.13.8](https://github.com/ethereum/go-ethereum/releases/tag/v1.13.8) release in this [PR](https://github.com/ava-labs/coreth/pull/550). In particular, note the following code:
* [Activation of new opcodes](https://github.com/ava-labs/coreth/blob/7b875dc21772c1bb9e9de5bc2b31e88c53055e26/core/vm/jump_table.go#L93)
* Activation of Cancun in next Avalanche upgrade:
* [C-Chain](https://github.com/ava-labs/coreth/pull/610)
* [Subnet-EVM chains](https://github.com/ava-labs/subnet-evm/blob/fa909031ed148484c5072d949c5ed73d915ce1ed/params/config_extra.go#L186)
* `ParentBeaconRoot` is enforced to be included and the zero value [here](https://github.com/ava-labs/coreth/blob/7b875dc21772c1bb9e9de5bc2b31e88c53055e26/plugin/evm/block_verification.go#L287-L288). This field is retained for future use and compatibility with upstream tooling.
* Forbids blob transactions by enforcing `BlobGasUsed` to be 0 [here](https://github.com/ava-labs/coreth/pull/611/files#diff-532a2c6a5365d863807de5b435d8d6475552904679fd611b1b4b10d3bf4f5010R267).
*Note:* Subnets are sovereign with regard to their validator set and state transition rules, and can choose to opt out of this proposal by making a code change in their respective Subnet-EVM client.
## Backwards Compatibility
The original EIP authors highlighted the following considerations. For full details, refer to the original EIPs:
* [EIP-4844](https://eips.ethereum.org/EIPS/eip-4844#backwards-compatibility): Blob transactions are not proposed to be enabled on Avalanche, so concerns related to mempool or transaction data availability are not applicable.
* [EIP-6780](https://eips.ethereum.org/EIPS/eip-6780#backwards-compatibility) "Contracts that depended on re-deploying contracts at the same address using CREATE2 (after a SELFDESTRUCT) will no longer function properly if the created contract does not call SELFDESTRUCT within the same transaction."
Adoption of this ACP modifies consensus rules for the C-Chain, therefore it requires a network upgrade. It is recommended that Subnet-EVM chains also adopt this ACP and follow the same upgrade time as Avalanche's next network upgrade.
## Security Considerations
Refer to the original EIPs for security considerations:
* [EIP 1153](https://eips.ethereum.org/EIPS/eip-1153#security-considerations)
* [EIP 4788](https://eips.ethereum.org/EIPS/eip-4788#security-considerations)
* [EIP 4844](https://eips.ethereum.org/EIPS/eip-4844#security-considerations)
* [EIP 5656](https://eips.ethereum.org/EIPS/eip-5656#security-considerations)
* [EIP 6780](https://eips.ethereum.org/EIPS/eip-6780#security-considerations)
* [EIP 7516](https://eips.ethereum.org/EIPS/eip-7516#security-considerations)
## Open Questions
No open questions.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-151: Use Current Block Pchain Height As Context
URL: /docs/acps/151-use-current-block-pchain-height-as-context
Details for Avalanche Community Proposal 151: Use Current Block Pchain Height As Context
| ACP | 151 |
| :------------ | :------------------------------------------------------------------------------------- |
| **Title** | Use current block P-Chain height as context for state verification |
| **Author(s)** | Ian Suvak ([@iansuvak](https://github.com/iansuvak)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/152)) |
| **Track** | Standards |
## Abstract
Proposes that the ProposerVM passes inner VMs the P-Chain block height of the current block being built rather than the P-Chain block height of the parent block. Inner VMs use this P-Chain height for verifying aggregated signatures of Avalanche Interchain Messages (ICM). This will allow for a more reliable way to determine which validators should participate in signing the message, and remove unnecessary waiting periods.
## Motivation
Currently the ProposerVM passes the P-Chain height of the parent block to inner VMs, which use the value to verify ICM messages in the current block. Using the parent block's P-Chain height is necessary for verifying the proposer and reaching consensus on the current block, but it is not necessary for verifying ICM messages within the block.
Using the P-Chain height of the current block being built would make operations that use ICM messages to modify the validator set, such as those specified in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md), verifiable sooner and more reliably. Currently, at least two new P-Chain blocks need to be produced after the relevant state change for it to be reflected for purposes of ICM aggregate signature verification.
## Specification
The [block context](https://github.com/ava-labs/avalanchego/blob/d2e9d12ed2a1b6581b8fd414cbfb89a6cfa64551/snow/engine/snowman/block/block_context_vm.go#L14) contains a `PChainHeight` field that is passed from the ProposerVM to the inner VMs building the block. It is later used by the inner VMs to fetch the canonical validator set for verification of ICM aggregated signatures.
The `PChainHeight` currently passed in by the ProposerVM is the P-Chain height of the parent block. The proposed change is to instead have the ProposerVM pass in the P-Chain height of the current block.
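The proposed change amounts to swapping which height is placed into the block context. The sketch below is purely illustrative; `contextFor` and the `activated` flag are hypothetical stand-ins for the ProposerVM's actual upgrade gating.

```go
package main

import "fmt"

// BlockContext mirrors the field the ProposerVM hands to inner VMs for
// ICM aggregate signature verification.
type BlockContext struct {
	PChainHeight uint64
}

// contextFor sketches the proposed change: before activation the ProposerVM
// passes the parent block's P-Chain height; after activation, it passes the
// P-Chain height of the current block being built.
func contextFor(parentHeight, currentHeight uint64, activated bool) BlockContext {
	if activated {
		return BlockContext{PChainHeight: currentHeight}
	}
	return BlockContext{PChainHeight: parentHeight}
}

func main() {
	fmt.Println(contextFor(100, 102, false).PChainHeight)
	fmt.Println(contextFor(100, 102, true).PChainHeight)
}
```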
## Backwards Compatibility
This change requires an upgrade to make sure that all validators verifying the validity of the ICM messages use the same P-Chain height and therefore the same validator set. Prior to activation, nodes should continue to use the P-Chain height of the parent block.
## Reference Implementation
An implementation of this ACP for avalanchego can be found [here](https://github.com/ava-labs/avalanchego/pull/3459).
## Security Considerations
The ProposerVM needs to use the parent block's P-Chain height to verify proposers for security reasons, but no such restriction applies to verifying ICM message validity in the current block being built. Therefore, this should be a safe change.
## Acknowledgments
Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@michaelkaplan13](https://github.com/michaelkaplan13) for discussion and feedback on this ACP.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-176: Dynamic EVM Gas Limit and Price Discovery Updates
URL: /docs/acps/176-dynamic-evm-gas-limit-and-price-discovery-updates
Details for Avalanche Community Proposal 176: Dynamic EVM Gas Limit and Price Discovery Updates
| ACP | 176 |
| :------------ | :------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Title** | Dynamic EVM Gas Limits and Price Discovery Updates |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/178)) |
| **Track** | Standards |
## Abstract
Proposes that the C-Chain and Subnet-EVM chains adopt a dynamic fee mechanism similar to the one [introduced on the P-Chain as part of ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md), with modifications to allow for block proposers (i.e. validators) to dynamically adjust the target gas consumption per unit time.
## Motivation
Currently, the C-Chain has a static gas target of [15,000,000 gas](https://github.com/ava-labs/coreth/blob/39ec874505b42a44e452b8809a2cc6d09098e84e/params/avalanche_params.go#L32) per [10 second rolling window](https://github.com/ava-labs/coreth/blob/39ec874505b42a44e452b8809a2cc6d09098e84e/params/avalanche_params.go#L36), and uses a modified version of the [EIP-1559](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1559.md) dynamic fee mechanism to adjust the base fee of blocks based on the gas consumed in the previous 10 second window. This has two notable drawbacks:
1. The windower mechanism used to determine the base fee of blocks can lead to outsized spikes in the gas price when there is a large block. This is because after a large block that uses all of its gas limit, blocks that follow in the same window continue to result in increased gas prices even if they are relatively small blocks that are under the target gas consumption.
2. The static gas target necessitates a required network upgrade in order to modify. This is cumbersome and makes it difficult for the network to adjust its capacity in response to performance optimizations or hardware requirement increases.
To better position Avalanche EVM chains, including the C-Chain, to be able to handle future increases in load, we propose replacing the above mechanism with one that better handles blocks that consume a large amount of gas, and that allows for validators to dynamically adjust the target rate of consumption.
## Specification
### Gas Price Determination
The mechanism to determine the base fee of a block is the same as the one used in [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) to determine the gas price of a block on the P-Chain. This mechanism calculates the gas price for a given block $b$ based on the following parameters:
| Symbol | Definition |
| --- | ---------------------------------- |
| $T$ | the target gas consumed per second |
| $M$ | minimum gas price |
| $K$ | gas price update constant |
| $C$ | maximum gas capacity |
| $R$ | gas capacity added per second |
### Making $T$ Dynamic
As noted above, the gas price determination mechanism relies on a target gas consumption per second, $T$, in order to calculate the gas price for a given block. $T$ will be adjusted dynamically according to the following specification.
Let $q$ be a non-negative integer that is initialized to 0 upon activation of this mechanism. Let the target gas consumption per second be expressed as:
$T = P \cdot e^{\frac{q}{D}}$
where $P$ is the global minimum allowed target gas consumption rate for the network, and $D$ is a constant that helps control the rate of change of the target gas consumption.
After the execution of transactions in block $b$, the value of $q$ can be increased or decreased by up to $Q$. That is, it must be the case that $\left|\Delta q\right| \leq Q$, or block $b$ is considered invalid. The amount by which $q$ changes after executing block $b$ is specified by the block builder.
Block builders (i.e. validators), may set their desired value for $T$ (i.e. their desired gas consumption rate) in their configuration, and their desired value for $q$ can then be calculated as:
$q_{desired} = D \cdot \ln\left(\frac{T_{desired}}{P}\right)$
Note that since $q_{desired}$ is only used locally and can be different for each node, it is safe for implementations to approximate the value of $\ln\left(\frac{T_{desired}}{P}\right)$ and round the resulting value to the nearest integer.
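The desired-$q$ calculation can be sketched in Python. The values of $P$ and $D$ below are placeholders for illustration, not the ratified C-Chain constants:

```python
import math

P = 1_000_000  # placeholder minimum target gas/sec (illustrative only)
D = 2**25      # placeholder rate-of-change constant (illustrative only)

def desired_q(t_desired: float) -> int:
    # q_desired = D * ln(T_desired / P), rounded to the nearest integer
    return round(D * math.log(t_desired / P))

# A node targeting twice the minimum rate round-trips back to roughly 2 * P:
q = desired_q(2 * P)
t = P * math.exp(q / D)
```

Because $D$ is large, the error introduced by rounding $q$ to an integer has a negligible effect on the recovered target rate.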
When building a block, builders can calculate their next preferred value for $q$ based on the network's current value (`q_current`) according to:
```python
# Calculates a node's next preferred value for q for a given block
def calc_next_q(q_current: int, q_desired: int, max_change: int) -> int:
if q_desired > q_current:
return q_current + min(q_desired - q_current, max_change)
else:
return q_current - min(q_current - q_desired, max_change)
```
As $q$ is updated after the execution of transactions within the block, $T$ is also updated such that $T = P \cdot e^{\frac{q}{D}}$ at all times. As the value of $T$ adjusts, the value of $R$ (capacity added per second) is also updated such that:
$R = 2 \cdot T$
This ensures that the gas price can increase and decrease at the same rate.
The value of $C$ must also adjust proportionately, so we set:
$C = 10 \cdot T$
This means that the maximum stored gas capacity would be reached after 5 seconds in which no blocks have been accepted (since $C / R = \frac{10 \cdot T}{2 \cdot T} = 5$).
In order to keep roughly constant the time it takes for the gas price to double at sustained maximum network capacity usage, the value of $K$ used in the gas price determination mechanism must be updated proportionally to $T$ such that:
$K = 87 \cdot T$
In order to have the gas price not be directly impacted by the change in $K$, we also update $x$ (excess gas consumption) proportionally. When updating $x$ after executing a block, instead of setting $x = x + G$ as specified in ACP-103, we set:
$x_{n+1} = (x + G) \cdot \frac{K_{n+1}}{K_{n}}$
Note that the values of $q$ (and thus also $T$, $R$, $C$, $K$, and $x$) are updated **after** the execution of block $b$, which means they only take effect in determining the gas price of block $b+1$. The change to each of these values in block $b$ does not affect the gas price for transactions included in block $b$ itself.
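The post-execution update of $q$ and the derived parameters can be sketched as follows. This is a floating-point illustration only; production implementations approximate the exponential with fixed-point integer arithmetic, and the parameter values passed in are up to the caller:

```python
import math

def advance_state(q, x, gas_used, dq, P, D, Q):
    """Sketch: update (q, T, R, C, K, x) after executing a block.

    dq is the builder-chosen change to q; the results take effect
    when determining the gas price of the *next* block.
    """
    if abs(dq) > Q:
        raise ValueError("block invalid: |dq| exceeds Q")
    K_old = 87 * P * math.exp(q / D)   # K before the update
    q += dq
    T = P * math.exp(q / D)            # new target gas per second
    R = 2 * T                          # capacity added per second
    C = 10 * T                         # maximum stored gas capacity
    K = 87 * T                         # price update constant
    x = (x + gas_used) * K / K_old     # rescale excess so price is unaffected by K
    return q, T, R, C, K, x
```

With `dq = 0` the rescaling factor is 1, so $x$ simply grows by the gas used, matching the original ACP-103 update rule.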
Allowing block builders to adjust the target gas consumption rate in blocks that they produce makes it such that the effective target gas consumption rate should converge over time to the point where 50% of the voting stake weight wants it increased and 50% of the voting stake weight wants it decreased. This is because the number of blocks each validator produces is proportional to their stake weight.
As noted in ACP-103, the maximum gas consumed in a given period of time $\Delta{t}$, is $r + R \cdot \Delta{t}$, where $r$ is the remaining gas capacity at the end of previous block execution. The upper bound across all $\Delta{t}$ is $C + R \cdot \Delta{t}$. Phrased differently, the maximum amount of gas that can be consumed by any given block $b$ is:
$gasLimit_{b} = min(r + R \cdot \Delta{t}, C)$
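Expressed as a one-line helper, the block gas limit is:

```python
def block_gas_limit(r: int, R: int, C: int, dt: int) -> int:
    # Capacity accrues at rate R per second since the parent block,
    # on top of the remaining capacity r, and is capped at C.
    return min(r + R * dt, C)
```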
### Configuration Parameters
As noted above, the gas price determination mechanism depends on the values of $T$, $M$, $K$, $C$, and $R$ to be set as parameters. $T$ is adjusted dynamically from its initial value based on $D$ and $P$, and the values of $R$ and $C$ are derived from $T$.
Parameters at activation on the C-Chain are:
$P$ was chosen as a safe bound on the minimum target gas usage on the C-Chain. The current gas target of the C-Chain is $1,500,000$ per second. The target gas consumption rate will only stay at $P$ if the majority of stake weight of the network specifies $P$ as their desired gas consumption rate target.
$D$ and $Q$ were chosen to give each block builder the ability to adjust the value of $T$ by roughly $\frac{1}{1024}$ of its current value, which matches the [gas limit bound divisor that Ethereum currently uses](https://github.com/ethereum/go-ethereum/blob/52766bedb9316cd6cddacbb282809e3bdfba143e/params/protocol_params.go#L26) to limit the amount that validators can change the execution layer gas limit in a single block. $D$ and $Q$ were scaled up by a factor of $2^{15}$ to provide block builders more granularity in the adjustments to $T$ that they can make.
$M$ was chosen as the minimum possible denomination of the native EVM asset, such that the gas price will be more likely to consistently be in a range of price discovery. The price discovery mechanism has already been battle tested on the P-Chain (and prior to that on Ethereum for blob gas prices as defined by EIP-4844), giving confidence that it will correctly react to any increase in network usage in order to prevent a DOS attack.
$K$ was chosen such that at sustained maximum capacity ($2 \cdot T$ gas/second), the fee rate will double every \~60.3 seconds. For comparison, under EIP-1559 the fee can double about every \~70 seconds, and under the C-Chain's current implementation about every \~50 seconds, depending on the time between blocks.
The maximum instantaneous price multiplier is:
$e^\frac{C}{K} = e^\frac{10 \cdot T}{87 \cdot T} = e^\frac{10}{87} \simeq 1.12$
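A quick numeric check of this bound using Python's `math.exp`:

```python
import math

# e^(C/K) = e^(10/87): the most the price can jump between consecutive blocks
max_multiplier = math.exp(10 / 87)  # about 1.12
```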
### Choosing $T_{desired}$
As mentioned above, this new mechanism allows for validators to specify their desired target gas consumption rate ($T_{desired}$) in their configuration, and the value that they set impacts the effective target gas consumption rate of the network over time. The higher the value of $T$, the more resources (storage, compute, etc) that are able to be used by the network. When choosing what value makes sense for them, validators should consider the resources that are required to properly support that level of gas consumption, the utility the network provides by having higher transaction per second throughput, and the stability of network should it reach that level of utilization.
While Avalanche Network Clients can set default configuration values for the desired target gas consumption rate, each validator can choose to set this value independently based on their own considerations.
## Backwards Compatibility
The changes proposed in this ACP require a network upgrade in order to take effect. Prior to its activation, the current gas limit and price discovery mechanisms will continue to be used. Its activation should have relatively minor compatibility effects on developer tooling; notably, transaction formats, and thus wallets, are not impacted. After its activation, given that the value of $C$ is dynamically adjusted, the maximum possible gas consumed by an individual block, and thus by an individual transaction, will also dynamically adjust. Because the upper bound on the gas consumed by a single transaction fluctuates, transactions that are considered invalid at one time may be considered valid at another, and vice versa. While potentially unintuitive, as long as the minimum gas consumption rate is set sufficiently high this should not have a significant practical impact, and the same is currently the case on Ethereum mainnet.
> \[!NOTE]
> After the activation of this ACP, concerns were raised around the latency of inclusion for large transactions when the fee is increasing. To address these concerns, block producers SHOULD only produce blocks when there is sufficient capacity to include large transactions. Prior to this ACP, the maximum size of a transaction was $15$ million gas. Therefore, the recommended heuristic is to only produce blocks when there is at least $\min(8 \cdot T, 15 \text{ million})$ capacity. *At the time of writing, this ensures transactions with up to 12.8 million gas will be able to bid for block space.*
## Reference Implementation
This ACP was implemented and merged into Coreth behind the `Fortuna` upgrade flag. The full implementation can be found in [coreth@v0.14.1-acp-176.1](https://github.com/ava-labs/coreth/releases/tag/v0.14.1-acp-176.1).
## Security Considerations
This ACP changes the mechanism for determining the gas price on Avalanche EVM chains. The gas price is meant to adapt dynamically to changes in demand for using the chain. If it does not react as expected, the chain could be at risk of a DOS attack (if the usage price is too low), or could overcharge users during periods of low activity. This price discovery mechanism has already been employed on the P-Chain, but should again be thoroughly tested for use on the C-Chain prior to activation on the Avalanche Mainnet.
Further, this ACP also introduces a mechanism for validators to change the gas limit of the C-Chain. If this limit is set too high, it is possible that validator nodes will not be able to keep up in the processing of blocks. An upper bound on the maximum possible gas limit could be considered to try to mitigate this risk, though it would then take further required network upgrades to scale the network past that limit.
## Acknowledgments
Thanks to the following non-exhaustive list of individuals for input, discussion, and feedback on this ACP.
* [Emin Gün Sirer](https://x.com/el33th4xor)
* [Luigi D'Onorio DeMeo](https://x.com/luigidemeo)
* [Darioush Jalali](https://github.com/darioush)
* [Aaron Buchwald](https://github.com/aaronbuchwald)
* [Geoff Stuart](https://github.com/geoff-vball)
* [Meag FitzGerald](https://github.com/meaghanfitzgerald)
* [Austin Larson](https://github.com/alarso16)
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-181: P-Chain Epoched Views
URL: /docs/acps/181-p-chain-epoched-views
Details for Avalanche Community Proposal 181: P-Chain Epoched Views
| ACP | 181 |
| :------------ | :----------------------------------------------------------------------------------------- |
| **Title** | P-Chain Epoched Views |
| **Author(s)** | Cam Schultz [@cam-schultz](https://github.com/cam-schultz) |
| **Status** | Implementable ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/211)) |
| **Track** | Standards |
## Abstract
Proposes a standard P-Chain epoching scheme such that any VM that implements it uses a P-Chain block height known prior to the generation of its next block. This would enable VMs to optimize validator set retrievals, which currently must be done during block execution. This standard does *not* introduce epochs to the P-Chain's VM directly. Instead, it provides a standard that may be implemented by layers that inject P-Chain state into VMs, such as the ProposerVM.
## Motivation
The P-Chain maintains a registry of L1 and Subnet validators (including Primary Network validators). Validators are added, removed, or their weights changed by issuing P-Chain transactions that are included in P-Chain blocks. When describing an L1 or Subnet's validator set, what is really being described are the weights, BLS keys, and Node IDs of the active validators at a particular P-Chain height. Use cases that require on-demand views of L1 or Subnet validator sets need to fetch validator sets at arbitrary P-Chain heights, while use cases that require up-to-date views need to fetch them as often as every P-Chain block.
Epochs during which the P-Chain height is fixed would widen this window to a predictable epoch duration, allowing these use cases to implement optimizations such as pre-fetching validator sets once per epoch, or allowing more efficient backwards traversal of the P-Chain to fetch historical validator sets.
## Specification
### Assumptions
In the following specification, we assume that a block $b_m$ has timestamp $t_m$ and P-Chain height $p_m$.
### Epoch Definition
An epoch is defined as a contiguous range of blocks that share the same three values:
* An Epoch Number
* An Epoch P-Chain Height
* An Epoch Start Time
Let $E_N$ denote an epoch with epoch number $N$. $E_N$'s start time is denoted as $T_{start}^N$, and its P-Chain height as $P_N$.
Let block $b_a$ be the block that activates this ACP. The first epoch ($E_0$) has $T_{start}^0 = t_{a-1}$ and $P_0 = p_{a-1}$. In other words, the first epoch's start time is the timestamp of the last block prior to the activation of this ACP, and similarly, the first epoch's P-Chain height is the P-Chain height of the last block prior to the activation of this ACP.
### Epoch Sealing
An epoch $E_N$ is *sealed* by the first block with a timestamp greater than or equal to $T_{start}^N + D$, where $D$ is a constant defined in the network upgrade that activates this ACP. Let $B_{S_N}$ denote the block that sealed $E_N$.
The sealing block is defined to be a member of the epoch it seals. This guarantees that every epoch will contain at least one block.
### Advancing an Epoch
We advance from the current epoch $E_N$ to the next epoch $E_{N+1}$ when the next block after $B_{S_N}$ is produced. This block will be a member of $E_{N+1}$, and will have the values:
* $P_{N+1}$ equal to the P-Chain height of $B_{S_N}$
* $T_{start}^{N+1}$ equal to $B_{S_N}$'s timestamp
* An epoch number of $N+1$, incrementing the previous epoch's epoch number by exactly $1$
## Properties
### Epoch Duration Bounds
Since an epoch's start time is set to the [timestamp of the sealing block of the previous epoch](#advancing-an-epoch), all epochs are guaranteed to have a duration of at least $D$, as measured from the epoch's starting time to the timestamp of the epoch's sealing block. However, since a sealing block is [defined](#epoch-sealing) to be a member of the epoch it seals, there is no upper bound on an epoch's duration, since that sealing block may be produced at any point in the future beyond $T_{start}^N + D$.
### Fixing the P-Chain Height
When building a block, Avalanche blockchains use the P-Chain height [embedded in the block](#assumptions) to determine the validator set. If instead the epoch P-Chain height is used, then we can ensure that when a block is built, the validator set to be used for the next block is known. To see this, suppose block $b_m$ seals epoch $E_N$. Then the next block, $b_{m+1}$ will begin a new epoch, $E_{N+1}$ with $P_{N+1}$ equal to $b_m$'s P-Chain height, $p_m$. If instead $b_m$ does not seal $E_N$, then $b_{m+1}$ will continue to use $P_{N}$. Both candidates for $b_{m+1}$'s P-Chain height ($p_m$ and $P_N$) are known at $b_m$ build time.
## Use Cases
### ICM Verification Optimization
For a validator to verify an ICM message, the signing L1/Subnet's validator set must be retrieved during block verification by traversing backward from the current P-Chain height to the P-Chain height provided by the ProposerVM. The traversal depth is highly variable, so to account for the worst case, VM implementations charge a large amount of gas to perform this verification.
With epochs, validator set retrieval occurs at fixed P-Chain heights that increment at regular intervals, which provides opportunities to optimize this retrieval. For instance, validator set retrieval may be done asynchronously from block verification as soon as an epoch has been sealed. Further, validator sets at a given height can be more effectively cached or otherwise kept in memory, because the same height will be used to verify all ICM messages for the remainder of an epoch. Each of these VM optimizations allows ICM verification costs to be safely reduced by a significant amount within VM implementations.
### Improved Relayer Reliability
Current ICM VM implementations verify ICM messages against the local P-Chain state, as determined by the P-Chain height set by the ProposerVM. Off-chain relayers perform the following steps to deliver ICM messages:
1. Fetch the sending chain's validator set at the verifying chain's current proposed height
2. Collect BLS signatures from that validator set to construct the signed ICM message
3. Submit the transaction containing the signed message to the verifying chain
If the validator set changes between steps 1 and 3, the ICM message will fail verification.
Epochs improve upon this by fixing the P-Chain height used to verify ICM messages for a duration of time that is predictable to off-chain relayers. A relayer should be able to derive the epoch boundaries based on the specification above, or they could retrieve that information via a node API. Relayers could use that information to decide the validator set to query, knowing that it will be stable for the duration of the epoch. Further, VMs could relax the verification rules to allow ICM messages to be verified against the previous epoch as a fallback, eliminating edge cases around the epoch boundary.
## EVM ICM Verification Gas Cost Updates
Since the activation of [ACP-30](https://github.com/avalanche-foundation/ACPs/tree/60cbfc32e7ee2cffed33d8daee980d7a85dded48/ACPs/30-avalanche-warp-x-evm#gas-costs), the cost to verify ICM messages in the Avalanche EVM implementations (i.e. `coreth` and `subnet-evm`) using the `WarpPrecompile` has been based on the worst-case verification flow, including the relatively expensive lookup of the source chain's validator set at an arbitrary P-Chain height used by each new block. This ACP allows for optimizing this verification, as described above.
Prior to this ACP, the gas costs of relevant `WarpPrecompile` functions were:
```go
const (
GetVerifiedWarpMessageBaseCost = 2
GetBlockchainIDGasCost = 2
GasCostPerWarpSigner = 500
GasCostPerWarpMessageChunk = 3_200
GasCostPerSignatureVerification = 200_000
)
```
With optimizations implemented, based on the results of [new benchmarks](https://github.com/ava-labs/coreth/pull/1331) of the `WarpPrecompile` and roughly targeting processing 150 million gas per second, Avalanche EVM chains with this ACP activated use the following gas costs for the `WarpPrecompile`.
```go
const (
GetVerifiedWarpMessageBaseCost = 750
GetBlockchainIDGasCost = 200
GasCostPerWarpSigner = 250
GasCostPerWarpMessageChunk = 512
GasCostPerSignatureVerification = 125_000
)
```
While `GetVerifiedWarpMessageBaseCost`, `GetBlockchainIDGasCost`, and `GasCostPerWarpMessageChunk` are not directly impacted by this ACP, updated benchmark numbers show the new gas costs to be better aligned with the relative time the operations take to perform.
## Backwards Compatibility
This change requires a network upgrade and is therefore not backwards compatible.
Any downstream entities that depend on a VM's view of the P-Chain will also need to account for epoched P-Chain views. For instance, ICM messages are signed by an L1's validator set at a specific P-Chain height. Currently, the constructor of the signed message can in practice use the validator set at the P-Chain tip, since all deployed Avalanche VMs are at most behind the P-Chain by a fixed number of blocks. With epoching, however, the ICM message constructor must take into account the epoch P-Chain height of the verifying chain, which may be arbitrarily far behind the P-Chain tip.
## Reference Implementation
The following pseudocode illustrates how an epoch may be calculated for a block:
```go
// D is the network-wide epoch duration (e.g. 5 minutes on Mainnet).
var D time.Duration
type Epoch struct {
PChainHeight uint64
Number uint64
StartTime time.Time
}
type Block interface {
Timestamp() time.Time
PChainHeight() uint64
Epoch() Epoch
}
func GetPChainEpoch(parent Block) Epoch {
parentTimestamp := parent.Timestamp()
parentEpoch := parent.Epoch()
epochEndTime := parentEpoch.StartTime.Add(D)
if parentTimestamp.Before(epochEndTime) {
// If the parent was issued before the end of its epoch, then it did not
// seal the epoch.
return parentEpoch
}
// The parent sealed the epoch, so the child is the first block of the new
// epoch.
return Epoch{
PChainHeight: parent.PChainHeight(),
Number: parentEpoch.Number + 1,
StartTime: parentTimestamp,
}
}
```
* If the parent sealed its epoch, the current block [advances the epoch](#advancing-an-epoch), refreshing the epoch height, incrementing the epoch number, and setting the epoch starting time.
* Otherwise, the current block uses the current epoch height, number, and starting time, regardless of whether it seals the epoch.
A full reference implementation of this ACP for avalanchego can be found [here](https://github.com/ava-labs/avalanchego/pull/4238).
### Setting the Epoch Duration
The epoch duration $D$ is set on a network-wide level. For both Fuji (network ID 5) and Mainnet (network ID 1), $D$ will be set to 5 minutes upon activation of this ACP. Any changes to $D$ in the future would require another network upgrade.
#### Changing the Epoch Duration
Future network upgrades may change the value of $D$ to some new duration $D'$. $D'$ should not take effect until the end of the current epoch, rather than at the activation time of the network upgrade that defines $D'$. This ensures that an in-progress epoch at the upgrade activation time cannot have a realized duration less than both $D$ and $D'$.
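This activation rule can be illustrated with a small sketch (the function name and the use of plain integer timestamps are hypothetical):

```python
def effective_duration(epoch_start_time: int, upgrade_time: int,
                       d_old: int, d_new: int) -> int:
    # An epoch already in progress at the upgrade activation time keeps D;
    # epochs starting at or after activation use the new duration D'.
    return d_new if epoch_start_time >= upgrade_time else d_old
```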
## Security Considerations
### Epoch P-Chain Height Skew
Because epochs may have [unbounded duration](#epoch-duration-bounds), it is possible for a block's `PChainEpochHeight` to be arbitrarily far behind the tip of the P-Chain. This does not affect the *validity* of ICM verification within a VM that implements P-Chain epoched views, since the validator set at `PChainEpochHeight` is always known. However, the following considerations should be made under this scenario:
1. As validators exit the validator set, their physical nodes may be unavailable to serve BLS signature requests, making it more difficult to construct a valid ICM message
2. A valid ICM message may represent an attestation by a stale validator set. Signatures from validators that have exited the validator set between `PChainEpochHeight` and the current P-Chain tip will not represent active stake.
Both of these scenarios may be mitigated by having shorter epoch lengths, which limit the delay in time between when the P-Chain is updated and when those updates are taken into account for ICM verification on a given L1, and by ensuring consistent block production, so that epochs always advance soon after $D$ time has passed.
### Excessive Validator Churn
If an epoched view of the P-Chain is used by the consensus engine, then validator set changes over an epoch's duration will be concentrated into a single block at the epoch's boundary. Excessive validator churn can cause consensus failures and other dangerous behavior, so it is imperative that the amount of validator weight change at the epoch boundary is limited. One strategy to accomplish this is to queue validator set changes and spread them out over multiple epochs. Another strategy is to batch updates to the same validator together such that increases and decreases to that validator's weight cancel each other out. Given the primary use case of ICM verification improvements, which occur at the VM level, mechanisms to mitigate against this are omitted from this ACP.
## Open Questions
* What should the epoch duration $D$ be set to?
* Is it safe for `PChainEpochHeight` and `PChainHeight` to differ significantly within a block, due to [unbounded epoch duration](#epoch-duration-bounds)?
## Acknowledgements
Thanks to [@iansuvak](https://github.com/iansuvak), [@geoff-vball](https://github.com/geoff-vball), [@yacovm](https://github.com/yacovm), [@michaelkaplan13](https://github.com/michaelkaplan13), [@StephenButtolph](https://github.com/StephenButtolph), and [@aaronbuchwald](https://github.com/aaronbuchwald) for discussion and feedback on this ACP.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-191: Seamless L1 Creation
URL: /docs/acps/191-seamless-l1-creation
Details for Avalanche Community Proposal 191: Seamless L1 Creation
| ACP | 191 |
| :------------ | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Title** | Seamless L1 Creations (CreateL1Tx) |
| **Author(s)** | Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)), Meaghan FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/197)) |
| **Track** | Standards |
## Abstract
This ACP introduces a new P-Chain transaction type called `CreateL1Tx` that simplifies the creation of Avalanche L1s. It consolidates three existing transaction types (`CreateSubnetTx`, `CreateChainTx`, and `ConvertSubnetToL1Tx`) into a single atomic operation. This streamlines the L1 creation process, removes the need for the intermediary Subnet creation step, and eliminates the management of temporary `SubnetAuth` credentials.
## Motivation
[ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) introduced Avalanche L1s, providing greater sovereignty and flexibility compared to Subnets. However, creating an L1 currently requires a three-step process:
1. `CreateSubnetTx`: Create the Subnet record on the P-Chain and specify the `SubnetAuth`
2. `CreateChainTx`: Add a blockchain to the Subnet (can be called multiple times)
3. `ConvertSubnetToL1Tx`: Convert the Subnet to an L1, specifying the initial validator set and the validator manager location
This process has several drawbacks:
* It requires orchestrating three separate transactions that could be handled in one.
* The `SubnetAuth` must be managed during creation but becomes irrelevant after conversion.
* The multi-step process increases complexity and potential for errors.
* It introduces unnecessary state transitions and storage overhead on the P-Chain.
By introducing a single `CreateL1Tx` transaction, we can simplify the process, reduce overhead, and improve the developer experience for creating L1s.
## Specification
### New Transaction Type
The following new transaction type is introduced:
```go
// ChainConfig represents the configuration for a chain to be created
type ChainConfig struct {
	// A human readable name for the chain; need not be unique
	ChainName string `serialize:"true" json:"chainName"`
	// ID of the VM running on the chain
	VMID ids.ID `serialize:"true" json:"vmID"`
	// IDs of the feature extensions running on the chain
	FxIDs []ids.ID `serialize:"true" json:"fxIDs"`
	// Byte representation of genesis state of the chain
	GenesisData []byte `serialize:"true" json:"genesisData"`
}

// CreateL1Tx is an unsigned transaction to create a new L1 with one or more chains
type CreateL1Tx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`
	// Chain configurations for the L1 (can be multiple)
	Chains []ChainConfig `serialize:"true" json:"chains"`
	// Chain where the L1 validator manager lives
	ManagerChainID ids.ID `serialize:"true" json:"managerChainID"`
	// Address of the L1 validator manager
	ManagerAddress types.JSONByteSlice `serialize:"true" json:"managerAddress"`
	// Initial pay-as-you-go validators for the L1
	Validators []*L1Validator `serialize:"true" json:"validators"`
}
```
The `L1Validator` structure follows the same definition as in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md#convertsubnettol1tx).
### Transaction Processing
When a `CreateL1Tx` transaction is processed, the P-Chain performs the following operations atomically:
1. Create a new L1.
2. Create chain records for each chain configuration in the `Chains` array.
3. Set up the L1 validator manager with the specified `ManagerChainID` and `ManagerAddress`.
4. Register the initial validators specified in the `Validators` array.
### IDs
* `subnetID`: The `subnetID` of the L1 is the transaction hash.
* `blockchainID`: The `blockchainID` for each blockchain is defined as the SHA256 hash of the 37 bytes resulting from concatenating the 32 byte `subnetID` with the `0x00` byte and the 4 byte `chainIndex` (index in the `Chains` array within the transaction).
* `validationID`: The `validationID` for the initial validators added through `CreateL1Tx` is defined as the SHA256 hash of the 36 bytes resulting from concatenating the 32 byte `subnetID` with the 4 byte `validatorIndex` (index in the `Validators` array within the transaction).
Note: Even with this updated definition of the `blockchainID`s for chains created using this new flow, the `validationID`s of the L1's initial set of validators are still compatible with the existing reference validator manager contracts as defined [here](https://github.com/ava-labs/icm-contracts/blob/4a897ba913958def3f09504338a1b9cd48fe5b2d/contracts/validator-manager/ValidatorManager.sol#L247).
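The ID derivations above can be sketched directly. The helper names below are illustrative, and the 4-byte indices are assumed to be big-endian, since the ACP text does not spell out endianness:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// deriveBlockchainID hashes the 37-byte preimage subnetID || 0x00 || chainIndex.
func deriveBlockchainID(subnetID [32]byte, chainIndex uint32) [32]byte {
	preimage := make([]byte, 0, 37)
	preimage = append(preimage, subnetID[:]...)
	preimage = append(preimage, 0x00)
	preimage = binary.BigEndian.AppendUint32(preimage, chainIndex)
	return sha256.Sum256(preimage)
}

// deriveValidationID hashes the 36-byte preimage subnetID || validatorIndex.
func deriveValidationID(subnetID [32]byte, validatorIndex uint32) [32]byte {
	preimage := make([]byte, 0, 36)
	preimage = append(preimage, subnetID[:]...)
	preimage = binary.BigEndian.AppendUint32(preimage, validatorIndex)
	return sha256.Sum256(preimage)
}

func main() {
	var subnetID [32]byte // in practice, the CreateL1Tx transaction hash
	fmt.Printf("chain 0:     %x\n", deriveBlockchainID(subnetID, 0))
	fmt.Printf("validator 0: %x\n", deriveValidationID(subnetID, 0))
}
```

Because the `blockchainID` preimage interposes a `0x00` byte, chain and validator IDs can never collide even for equal indices.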
### Restrictions and Validation
The `CreateL1Tx` transaction has the following restrictions and validation criteria:
1. The `Chains` array must contain at least one chain configuration
2. The `ManagerChainID` must be a valid blockchain ID, but cannot be the P-Chain blockchain ID
3. Validator nodes must have unique NodeIDs within the transaction
4. Each validator must have a non-zero weight and a non-zero balance
5. The transaction inputs must provide sufficient AVAX to cover the transaction fee and all validator balances
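The structural rules above can be sketched as a single verification pass. The types and names here are simplified, hypothetical stand-ins (the real `L1Validator` is defined in ACP-77), and the fee-sufficiency check (rule 5) is omitted since it depends on the fee calculator:

```go
package main

import (
	"errors"
	"fmt"
)

// L1Validator carries only the fields needed for these checks.
type L1Validator struct {
	NodeID  string
	Weight  uint64
	Balance uint64
}

// verifyCreateL1 applies restrictions 1-4 listed above.
func verifyCreateL1(numChains int, managerChainID, pChainID string, validators []*L1Validator) error {
	if numChains == 0 {
		return errors.New("Chains must contain at least one chain configuration")
	}
	if managerChainID == pChainID {
		return errors.New("ManagerChainID cannot be the P-Chain")
	}
	seen := make(map[string]bool, len(validators))
	for _, v := range validators {
		if seen[v.NodeID] {
			return errors.New("duplicate NodeID in Validators")
		}
		seen[v.NodeID] = true
		if v.Weight == 0 || v.Balance == 0 {
			return errors.New("validator weight and balance must be non-zero")
		}
	}
	return nil
}

func main() {
	vals := []*L1Validator{{NodeID: "NodeID-A", Weight: 100, Balance: 1}}
	fmt.Println(verifyCreateL1(1, "managerChain", "P-Chain", vals)) // <nil>
}
```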
### Warp Message
After the transaction is accepted, the P-Chain must be willing to sign a `SubnetToL1ConversionMessage` with a `conversionID` corresponding to the new L1, similar to what would happen after a `ConvertSubnetToL1Tx`. This ensures compatibility with existing systems that expect this message, such as the validator manager contracts.
## Backwards Compatibility
This ACP introduces a new transaction type and does not modify the behavior of existing transaction types. Existing Subnets and L1s created through the three-step process will continue to function as before. This change is purely additive and does not require any changes to existing L1s or Subnets.
The existing transactions `CreateSubnetTx`, `CreateChainTx` and `ConvertSubnetToL1Tx` remain unchanged for now, but may be removed in a future ACP to ensure systems have sufficient time to update to the new process.
## Reference Implementation
A reference implementation must be provided in order for this ACP to be considered implementable.
## Security Considerations
The `CreateL1Tx` transaction follows the same security model as the existing three-step process. By making the L1 creation atomic, it reduces the risk of partial state transitions that could occur if one of the transactions in the three-step process fails.
The same continuous fee mechanism introduced in ACP-77 applies to L1s created through this new transaction type, ensuring proper metering of validator resources.
The transaction verification process must ensure that all validator properties are properly validated, including unique NodeIDs, valid BLS signatures, and sufficient balances.
## Rationale and Alternatives
The primary alternative is to maintain the status quo - requiring three separate transactions to create an L1. However, this approach has clear disadvantages in terms of complexity, transaction overhead, and user experience.
Another alternative would be to modify the existing `ConvertSubnetToL1Tx` to allow specifying chain configurations directly. However, this would complicate the conversion process for existing Subnets and would not fully address the desire to eliminate the Subnet intermediary step for new L1 creation.
The chosen approach of introducing a new transaction type provides a clean solution that addresses all identified issues while maintaining backward compatibility.
## Acknowledgements
The idea for this PR was originally formulated by Aaron Buchwald in our discussion about the creation of L1s. Special thanks to the authors of ACP-77 for their groundbreaking work on Avalanche L1s, and to the projects that have shared their experiences and challenges with the current validator manager framework.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-194: Streaming Asynchronous Execution
URL: /docs/acps/194-streaming-asynchronous-execution
Details for Avalanche Community Proposal 194: Streaming Asynchronous Execution
| ACP | 194 |
| :------------ | :------------------------------------------------------------------------------------------------------------------------------- |
| **Title** | Streaming Asynchronous Execution |
| **Author(s)** | Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/196)) |
| **Track** | Standards |
## Abstract
Streaming Asynchronous Execution (SAE) decouples consensus and execution by introducing a queue upon which consensus is performed.
A concurrent execution stream is responsible for clearing the queue and reporting a delayed state root for recording by later rounds of consensus.
Validation of transactions to be pushed to the queue is lightweight but guarantees eventual execution.
## Motivation
### Performance improvements
1. Concurrent consensus and execution streams eliminate node context switching, reducing latency caused by each waiting on the other.
In particular, "VM time" (akin to CPU time) more closely aligns with wall time since it is no longer eroded by consensus.
This increases gas per wall-second even without an increase in gas per VM-second.
2. Lean, execution-only clients can rapidly execute the queue agreed upon by consensus, providing accelerated receipt issuance and state computation.
Without the need to compute state *roots*, such clients can eschew expensive Merkle data structures.
End users see expedited but identical transaction results.
3. Irregular stop-the-world events like database compaction are amortised over multiple blocks.
4. Introduces additional bursty throughput by eagerly accepting transactions, without a reduction in security guarantees.
5. Third-party accounting of non-data-dependent transactions, such as EOA-to-EOA transfers of value, can be performed prior to execution.
### Future features
Performing transaction execution after consensus sequencing allows the usage of consensus artifacts in execution. This unblocks some additional future improvements:
1. Exposing a real-time VRF during transaction execution.
2. Using an encrypted mempool to reduce front-running.
This ACP does not introduce these, but some form of asynchronous execution is required to correctly implement them.
### User stories
1. A sophisticated DeFi trader runs a highly optimised execution client, locally clearing the transaction queue well in advance of the network—setting the stage for HFT DeFi.
2. A custodial platform filters the queue for only those transactions sent to one of their EOAs, immediately crediting user balances.
## Description
In all execution models, a block is *proposed* and then verified by validators before being *accepted*. To assess a block's validity in *synchronous* execution, its transactions are first *executed* and only then *accepted* by consensus. This immediately and implicitly *settles* all of the block's transactions by including their execution results at the time of *acceptance*.
```mermaid
graph LR
    E[Executed] --> A[Accepted/Settled]
```
Under SAE, a block is considered valid if all of its transactions can be paid for when eventually *executed*, after which the block is *accepted* by consensus. The act of *acceptance* enqueues the block to be *executed* asynchronously. In the future, some as-yet-unknown later block will reference the execution results and *settle* all transactions from the *executed* block.
```mermaid
graph LR
    A[Accepted]
    A -->|variable delay| E[Executed]
    E -->|τ seconds| S[Settled]
    A -. guarantees .-> S
```
### Block lifecycle
#### Proposing blocks
The validator selection mechanism for block production is unchanged. However, block builders are no longer expected to execute transactions during block building.
The block builder is expected to include transactions by building upon the most recently settled state and to apply worst-case bounds on the execution of the ancestor blocks prior to the most recently settled block.
The worst-case bounds enforce minimum balances of sender accounts and the maximum required base fee. The worst-case bounds are described [below](#block-validity-and-building).
Prior to adding a proposed block to consensus, all validators MUST verify that the block builder correctly enforced the worst-case bounds while building the block. This guarantees that the block can be executed successfully if it is accepted.
> \[!NOTE]
> The worst-case bounds guarantee does not provide assurance about whether or not a transaction will revert nor whether its computation will run out of gas by reaching the specified limit. The verification only ensures the transaction is capable of paying for the accrued fees.
#### Accepting blocks
Once a block is marked as accepted by consensus, the block is put in a FIFO execution queue.
#### Executing blocks
Each client runs a block executor in parallel, which constantly executes the blocks from the FIFO queue.
In addition to executing the blocks, the executor provides deterministic timestamps for the beginning and end of each block's execution.
Time is measured two ways by the block executor:
1. The timestamp included in the block header.
2. The amount of gas charged during the execution of blocks.
> \[!NOTE]
> Execution timestamps are more granular than block header timestamps to allow sub-second block execution times.
As soon as there is a block available in the execution queue, the block executor starts processing the block.
If the executor's current timestamp is prior to the current block's timestamp, the executor's timestamp is advanced to match the block's.
Advancing the timestamp in this scenario results in unused gas capacity, reducing the gas *excess* from which the price is determined.
The block is then executed on top of the last executed (not settled) state.
After executing the block, the executor advances its timestamp based on the gas usage of the block, also increasing the gas *excess* for the pricing algorithm.
The block's execution time is now timestamped and the block is available to be settled.
#### Settling blocks
Already-executed blocks are settled once a following block that includes the results of the executed block is accepted.
The results are included by setting the state root to that of the last executed block and the receipt root to that of an MPT of all receipts since the last settlement, possibly from more than one block.
The following block's timestamp is used to determine which blocks to settle—blocks are settled if said timestamp is greater than or equal to the execution time of the executed block plus a constant delay.
The additional delay amortises any sporadic slowdowns the block executor may have encountered.
## Specification
### Background
ACP-103 introduced the following variables for calculating the gas price:
| | |
| --- | ---------------------------------- |
| $T$ | the target gas consumed per second |
| $M$ | minimum gas price |
| $K$ | gas price update constant |
| $R$ | gas capacity added per second |
ACP-176 provided a mechanism to make $T$ dynamic and set:
$$
\begin{align}
R &= 2 \cdot T \\
K &= 87 \cdot T
\end{align}
$$
The *excess* $x \ge 0$ of actual consumption beyond the target $T$ is tracked via numerical integration and used to calculate the gas price as:
$M \cdot \exp\left(\frac{x}{K}\right)$
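The price curve can be sketched in floating point; a consensus implementation would instead use a deterministic fixed-point approximation of $e^x$, and the function name here is illustrative:

```go
package main

import (
	"fmt"
	"math"
)

// gasPrice computes M * exp(x / K) for excess x.
func gasPrice(m, x, k float64) float64 {
	return m * math.Exp(x/k)
}

func main() {
	// With zero excess the price sits at the minimum M;
	// each additional K units of excess multiply it by e.
	fmt.Println(gasPrice(1, 0, 87))  // = M
	fmt.Println(gasPrice(1, 87, 87)) // = M * e
}
```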
### Gas charged
We introduce $g_L$, $g_U$, and $g_C$ as the gas *limit*, *used*, and *charged* per transaction, respectively. We define
$$
g_C := \max\left(g_U, \frac{g_L}{\lambda}\right)
$$
where $\lambda$ enforces a lower bound on the gas charged based on the gas limit.
> \[!NOTE]
> $\dfrac{g_L}{\lambda}$ is rounded up by actually calculating $\dfrac{g_L + \lambda - 1}{\lambda}$
In all previous instances where execution referenced gas used, from now on, we will reference gas charged. For example, the gas excess $x$ will be modified by $g_C$ rather than $g_U$.
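A minimal sketch of the gas-charged rule, using the rounded-up division from the note above (function name illustrative):

```go
package main

import "fmt"

// gasCharged returns g_C = max(g_U, ceil(g_L / lambda)).
func gasCharged(gasUsed, gasLimit, lambda uint64) uint64 {
	lowerBound := (gasLimit + lambda - 1) / lambda // ceil(g_L / lambda)
	if gasUsed > lowerBound {
		return gasUsed
	}
	return lowerBound
}

func main() {
	// A 21k-gas transfer with a padded 100k limit is charged 50k at lambda = 2.
	fmt.Println(gasCharged(21_000, 100_000, 2)) // 50000
}
```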
### Block size
The constant time delay between block execution and settlement is defined as $\tau$ seconds.
The maximum allowed size of a block is defined as:
$$
\omega_B ~:= R \cdot \tau \cdot \lambda
$$
Any block whose total sum of transaction gas limits exceeds $\omega_B$ MUST be considered invalid.
### Queue size
The maximum allowed size of the execution queue *prior* to adding a new block is defined as:
$$
\omega_Q ~:= 2 \cdot \omega_B
$$
Any block enqueued while the current size of the queue is larger than $\omega_Q$ MUST be considered invalid.
> \[!NOTE]
> By restricting the size of the queue *prior* to enqueueing the new block, $\omega_B$ is guaranteed to be the only limitation on block size.
### Block executor
During the activation of SAE, the block executor's timestamp $t_e$ is initialised to the timestamp of the last accepted block.
Prior to executing a block with timestamp $t_b$, the executor's timestamp and excess is updated:
$$
\begin{align}
\Delta{t} &~:= \max\left(0, t_b - t_e\right) \\
t_e &~:= t_e + \Delta{t} \\
x &~:= \max\left(x - T \cdot \Delta{t}, 0\right) \\
\end{align}
$$
The block is then executed with the gas price calculated from the current value of $x$.
After executing a block that charged $g_C$ gas in total, the executor's timestamp and excess is updated:
$$
\begin{align}
\Delta{t} &~:= \frac{g_C}{R} \\
t_e &~:= t_e + \Delta{t} \\
x &~:= x + \Delta{t} \cdot (R - T) \\
\end{align}
$$
> \[!NOTE]
> The update rule here assumes that $t_e$ is a timestamp that tracks the passage of time both by gas and by wall-clock time. $\frac{g_C}{R}$ MUST NOT be simply rounded. Rather, the gas accumulation MUST be left as a fraction.
$t_e$ is now this block's execution timestamp.
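One way to honor the no-rounding requirement is to track both $t_e$ and $x$ scaled by $R$, so the fractional second $\frac{g_C}{R}$ is carried exactly as a gas remainder. A minimal sketch under that assumption, with illustrative names and $R$, $T$ held fixed:

```go
package main

import "fmt"

// executor tracks t_e and x scaled by R (1 second == R gas units),
// so fractional seconds are represented exactly.
type executor struct {
	r, t  uint64 // gas capacity (R) and target (T) per second
	teGas uint64 // t_e * R
	xGas  uint64 // x * R
}

// beforeBlock advances the clock to block timestamp tb (whole seconds),
// burning unused capacity out of the excess: x := max(x - T*Δt, 0).
func (e *executor) beforeBlock(tb uint64) {
	if tbGas := tb * e.r; tbGas > e.teGas {
		dtGas := tbGas - e.teGas
		e.teGas = tbGas
		if dec := e.t * dtGas; dec > e.xGas { // (T * Δt) scaled by R
			e.xGas = 0
		} else {
			e.xGas -= dec
		}
	}
}

// afterBlock charges gC gas: Δt = gC/R, t_e += Δt, x += Δt*(R-T).
func (e *executor) afterBlock(gC uint64) {
	e.teGas += gC
	e.xGas += gC * (e.r - e.t)
}

func main() {
	e := &executor{r: 2, t: 1} // toy units with R = 2T
	e.beforeBlock(5)           // idle until t=5: excess stays at 0
	e.afterBlock(4)            // 4 gas charged: t_e = 7s, x = 2
	fmt.Println(e.teGas, e.xGas)
}
```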
### Handling gas target changes
When a block is produced that modifies $T$, both the consensus thread and the execution thread will update to the modified $T$ after their own handling of the block.
For example, restrictions of the queue size MUST be calculated based on the parent block's $T$.
Similarly, the time spent executing a block MUST be calculated based on the parent block's $T$.
### Block settlement
For a *proposed* block that includes timestamp $t_b$, all ancestors whose execution timestamp $t_e$ is $t_e \leq t_b - \tau$ are considered settled.
Note that $t_e$ is not an integer as it tracks fractional seconds with gas consumption, which is not the case for $t_b$.
The *proposed* block MUST include the `stateRoot` produced by the execution of the most recently settled block.
For any *newly* settled blocks, the *proposed* block MUST include all execution artifacts:
* `receiptsRoot`
* `logsBloom`
* `gasUsed`
The receipts root MUST be computed as defined in [EIP-2718](https://eips.ethereum.org/EIPS/eip-2718) except that the tree MUST be built from the concatenation of receipts from all blocks being settled.
> \[!NOTE]
> If the block executor has fallen behind, the node may not be able to determine precisely which ancestors should be considered settled. If this occurs, validators MUST allow the block executor to catch up prior to deciding the block's validity.
### Block validity and building
After determining which blocks to settle, all remaining ancestors of the new block must be inspected to determine the worst-case bounds on $x$ and account balances. Account nonces are known immediately.
The worst-case bound on $x$ can be calculated by following the block executor update rules using $g_L$ rather than $g_C$.
The worst-case bound on account balances can be calculated by charging the worst-case gas cost to the sender of a transaction along with deducting the value of the transaction from the sender's account balance.
The `baseFeePerGas` field MUST be populated with the gas price based on the worst-case bound on $x$ at the start of block execution.
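The worst-case balance check can be sketched as follows. The types and names are simplified, hypothetical stand-ins for whatever the real transaction representation is:

```go
package main

import "fmt"

// tx holds only the fields needed for the worst-case balance check.
type tx struct {
	sender   string
	gasLimit uint64
	value    uint64
}

// coversWorstCase reports whether every sender can pay for its transactions
// even if each consumed its full gas limit at the worst-case gas price.
func coversWorstCase(txs []tx, balances map[string]uint64, worstPrice uint64) bool {
	need := make(map[string]uint64)
	for _, t := range txs {
		need[t.sender] += t.gasLimit*worstPrice + t.value
	}
	for sender, n := range need {
		if balances[sender] < n {
			return false
		}
	}
	return true
}

func main() {
	balances := map[string]uint64{"alice": 100}
	txs := []tx{{sender: "alice", gasLimit: 10, value: 50}}
	fmt.Println(coversWorstCase(txs, balances, 4)) // 10*4 + 50 = 90 <= 100: true
	fmt.Println(coversWorstCase(txs, balances, 6)) // 10*6 + 50 = 110 > 100: false
}
```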
### Configuration Parameters
As noted above, SAE requires $\tau$ and $\lambda$ to be set as parameters; the values of $\omega_B$ and $\omega_Q$ are then derived from them and $T$.
Parameters to specify for the C-Chain are:
| Parameter | Description | C-Chain Configuration |
| --------- | ------------------------------------------------ | --------------------- |
| $\tau$ | duration between execution and settlement | $5s$ |
| $\lambda$ | minimum conversion from gas limit to gas charged | $2$ |
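With these parameters, $\omega_B$ and $\omega_Q$ follow mechanically from the current target $T$. The target used below is a hypothetical figure, chosen only to show the magnitudes involved:

```go
package main

import "fmt"

const (
	tau    = 5 // seconds between execution and settlement
	lambda = 2 // minimum conversion from gas limit to gas charged
)

// limits derives the block and queue gas bounds from the target T,
// using R = 2T from ACP-176.
func limits(T uint64) (omegaB, omegaQ uint64) {
	r := 2 * T
	omegaB = r * tau * lambda // R * tau * lambda
	omegaQ = 2 * omegaB
	return omegaB, omegaQ
}

func main() {
	b, q := limits(1_500_000) // hypothetical target of 1.5M gas/s
	fmt.Println(b, q)         // 30000000 60000000
}
```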
## Backwards Compatibility
This ACP modifies the meaning of multiple fields in the block. A comprehensive list of changes will be produced once a reference implementation is available.
Likely fields to change include:
* `stateRoot`
* `receiptsRoot`
* `logsBloom`
* `gasUsed`
* `extraData`
## Reference Implementation
A reference implementation is still a work-in-progress. This ACP will be updated to include a reference implementation once one is available.
## Security Considerations
### Worst-case transaction validity
To avoid a DoS vulnerability on execution, we require an upper bound on transaction gas cost (i.e. amount $\times$ price) beyond the regular requirements for transaction validity (e.g. nonce, signature, etc.). We therefore introduced "worst-case cost" validity.
We can prove that if every transaction were to use its full gas limit this would result in the greatest possible:
1. Consumption of gas units (by definition of the gas limit); and
2. Gas excess $x$ (and therefore gas price) at the time of execution.
For a queue of blocks $Q = \\{i\\}_{i \ge 0}$, the gas excess $x_j$ immediately prior to execution of block $j \in Q$ is a monotonic, non-decreasing function of the gas usage of all preceding blocks in the queue; i.e. $x_j := f(\\{g_i\\}_{i<j})$ with $\frac{\partial f}{\partial g_i} > 0$.
Hence, if any block $k$ consumes less gas than its limit, the resulting decrease of $x$ is at least as large as predicted under the worst case.
The excess, and hence gas price, for every later block $x_{i>k}$ is therefore reduced:
$$
\downarrow g_k \implies
\begin{cases}
\downarrow \Delta^+x \propto g_k \\
\uparrow \Delta^-x \propto R-g_k
\end{cases}
\implies \downarrow \Delta x_k
\implies \downarrow M \cdot \exp\left(\frac{x_{i>k}}{K}\right)
$$
Given maximal gas consumption under (1), the monotonicity of $f$ implies (2).
Since we are working with non-negative integers, it follows that multiplying a transaction's gas limit by the hypothetical gas price of (2) results in its worst-case gas cost.
Any sender able to pay for this upper bound (in addition to value transfers) is guaranteed to be able to pay for the actual execution cost.
Transaction *acceptance* under worst-case cost validity is therefore a guarantee of *settlement*.
### Queue DoS protection
Worst-case cost validity only protects against DoS at the point of execution but leaves the queue vulnerable to high-limit, low-usage transactions.
For example, a malicious user could send a transfer-only transaction (21k gas) with a limit set to consume the block's full gas limit.
Although they would have to have sufficient funds to theoretically pay for all the reserved gas, they would never actually be charged this amount. Pushing a sufficient number of such transactions to the queue would artificially inflate the worst-case cost of other users.
Therefore, the gas charged was modified from being equal to the gas used to the definition above: $g_C := \max\left(g_U, \frac{g_L}{\lambda}\right)$.
The gas limit is typically set higher than the predicted gas consumption to allow for a buffer should the prediction be imprecise.
This precludes setting $\lambda := 1$.
Conversely, setting $\lambda := \infty$ would allow users to attack the queue with high-limit, low-consumption transactions.
Setting $\lambda ~:= 2$ allows for a 100% buffer on gas-usage estimates without penalising the sender, while still disincentivising falsely high limits.
#### Upper bound on queue DoS
Recall $R$ (gas capacity added per second) and $g_C$ (gas charged) as already defined.
The actual gas excess $x_A$ has an upper bound of the worst-case excess $x_W$, both of which can be used to calculate respective base fees $f_A$ and $f_W$ (the variable element of gas prices) from the existing exponential function:
$$
f := M \cdot \exp\left( \frac{x}{K} \right).
$$
Mallory is attempting to maximize the DoS ratio
$$
D := \frac{f_W}{f_A}
$$
by maximizing $\Sigma_{\forall i} (g_L - g_U)_i$ to maximize $x_W - x_A$.
> \[!TIP]
> Although $D$ shadows a variable in ACP-176, that variable is unrelated to anything defined here, so there is no risk of confusion.
Recall that the excess increases according to
$$
x := x + g \cdot \frac{(R - T)}{R}
$$
Since the largest allowed size of the queue when enqueuing a new block is $\omega_Q$, we can derive an upper bound on the difference in the changes to worst-case and actual gas excess caused by the transactions in the queue before the new block is added:
$$
\begin{align}
\Delta x_A &\ge \frac{\omega_Q}{\lambda} \cdot \frac{(R - T)}{R} \\
\Delta x_W &= \omega_Q \cdot \frac{(R - T)}{R} \\
\Delta x_W - \Delta x_A &\le \omega_Q \cdot \frac{(R - T)}{R} - \frac{\omega_Q}{\lambda} \cdot \frac{(R - T)}{R} \\
&= \omega_Q \cdot \frac{(R - T)}{R} \cdot \left(1-\frac{1}{\lambda}\right) \\
&= \omega_Q \cdot \frac{(2 \cdot T - T)}{2 \cdot T} \cdot \left(1-\frac{1}{\lambda}\right) \\
&= \omega_Q \cdot \frac{T}{2 \cdot T} \cdot \left(1-\frac{1}{\lambda}\right) \\
&= \frac{\omega_Q}{2} \cdot \left(1-\frac{1}{\lambda}\right) \\
&= \frac{2 \cdot \omega_B}{2} \cdot \left(1-\frac{1}{\lambda}\right) \\
&= \omega_B \cdot \left(1-\frac{1}{\lambda}\right) \\
&= R \cdot \tau \cdot \lambda \cdot \left(1-\frac{1}{\lambda}\right) \\
&= R \cdot \tau \cdot (\lambda-1) \\
&= 2 \cdot T \cdot \tau \cdot (\lambda-1)
\end{align}
$$
Note that we can express Mallory's DoS quotient as:
$$
\begin{align}
D &= \frac{f_W}{f_A} \\
&= \frac{ M \cdot \exp \left( \frac{x_W}{K} \right)}{ M \cdot \exp \left( \frac{x_A}{K} \right)} \\
& = \exp \left( \frac{x_W - x_A}{K} \right).
\end{align}
$$
When the queue is empty (i.e. the execution stream has caught up with accepted transactions), the worst-case fee estimate $f_W$ is known to be the actual base fee $f_A$; i.e. $Q = \emptyset \implies D=1$. The previous bound on $\Delta x_W - \Delta x_A$ also bounds Mallory's ability such that:
$$
\begin{align}
D &\le \exp \left( \frac{2 \cdot T \cdot \tau \cdot (\lambda-1)}{K} \right)\\
&= \exp \left( \frac{2 \cdot T \cdot \tau \cdot (\lambda-1)}{87 \cdot T} \right)\\
&= \exp \left( \frac{2 \cdot \tau \cdot (\lambda-1)}{87} \right)\\
\end{align}
$$
Therefore, for the values suggested by this ACP:
$$
\begin{align}
D &\le \exp \left( \frac{2 \cdot 5 \cdot (2 - 1)}{87} \right)\\
&= \exp \left( \frac{10}{87} \right)\\
&\simeq 1.12\\
\end{align}
$$
In summary, Mallory can require users to increase their gas price by at most \~12%. In practice, the gas price often fluctuates more than 12% on a regular basis. Therefore, this does not appear to be a significant attack vector.
However, any deviation that dislodges the gas price bidding mechanism from a true bidding mechanism is of note.
## Appendix
### JSON RPC methods
Although asynchronous execution decouples the transactions and receipts recorded by a specific block, APIs MUST NOT alter their behavior to mirror this.
In particular, the API method `eth_getBlockReceipts` MUST return the receipts corresponding to the block's transactions, not the receipts settled in the block.
#### Named blocks
The Ethereum Mainnet APIs allow for retrieving blocks by named parameters that the API server resolves based on their consensus mechanism.
Other than the *earliest* (genesis) named block, which MUST be interpreted in the same manner, all other named blocks are mapped to SAE in terms of the *execution* status of blocks and MUST be interpreted as follows:
* *pending*: the most recently *accepted* block;
* *latest*: the block that was most recently *executed*;
* *safe* and *finalized*: the block that was most recently *settled*.
> \[!NOTE]
> The finality guarantees of Snowman consensus remove any distinction between *safe* and *finalized*.
> Furthermore, the *latest* block is not at risk of re-org, only of a negligible risk of data corruption local to the API node.
### Observations around transaction prioritisation
As EOA-to-EOA transfers of value are entirely guaranteed upon *acceptance*, block builders MAY choose to prioritise other transactions for earlier execution.
A reliable marker of such transactions is a gas limit of 21,000 as this is an indication from the sender that they do not intend to execute bytecode.
However, this could delay the ability to issue transactions that depend on these EOA-to-EOA transfers.
Block builders are free to make their own decisions around which transactions to include.
## Acknowledgments
Thank you to the following non-exhaustive list of individuals for input, discussion, and feedback on this ACP.
* [Aaron Buchwald](https://github.com/aaronbuchwald)
* [Angharad Thomas](https://x.com/divergenceharri)
* [Martin Eckardt](https://github.com/martineckardt)
* [Meaghan FitzGerald](https://github.com/meaghanfitzgerald)
* [Michael Kaplan](https://github.com/michaelkaplan13)
* [Yacov Manevich](https://github.com/yacovm)
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-20: Ed25519 P2p
URL: /docs/acps/20-ed25519-p2p
Details for Avalanche Community Proposal 20: Ed25519 P2p
| ACP | 20 |
| :------------ | :----------------------------------------------------------------------------------- |
| **Title** | Ed25519 p2p |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/21)) |
| **Track** | Standards |
## Abstract
Support Ed25519 TLS certificates for p2p communications on the Avalanche network. Permit usage of Ed25519 public keys for Avalanche Network Client (ANC) NodeIDs. Support Ed25519 signatures in the ProposerVM.
## Motivation
Avalanche Network Clients (ANCs) rely on TLS handshakes to facilitate p2p communications. AvalancheGo (and by extension, the Avalanche Network) only supports TLS certificates that use RSA or ECDSA as the signing algorithm and explicitly prohibits any other signing algorithms.
If a TLS certificate is not present, AvalancheGo will generate and persist to disk a 4096 bit RSA private key on start-up. This key is subsequently used to generate the TLS certificate which is also persisted to disk. Finally, the TLS certificate is hashed to generate a 20 byte NodeID. Authenticated p2p messaging was required when the network started and it was sufficient to simply use a hash of the TLS certificate. With the introduction of Snowman++, validators were then required to produce shareable message signatures. The Snowman++ block headers (specified [here](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/proposervm/README.md#snowman-block-extension)) were then required to include the full TLS `Certificate` along with the `Signature`.
However, TLS certificates support Ed25519 as their signing algorithm. Ed25519 is an IETF recommendation ([RFC8032](https://datatracker.ietf.org/doc/html/rfc8032)) with some very nice properties, a large one being size:
* 32 byte public key
* 64 byte private key
* 64 byte signature
Because of the small size of the public key, it can be used for the NodeID directly with a marginal hit to size (an additional 12 bytes). Additionally, the brittle reliance on static TLS certificates can be removed. Using the Ed25519 private key, a TLS certificate can be generated in-memory on node startup and used for p2p communications. This reduces the maintenance burden on node operators as they will only need to backup the Ed25519 private key instead of the TLS certificate and the RSA private key.
Ed25519 has wide adoption, including in the crypto industry. A non-exhaustive list of things that use Ed25519 can be found [here](https://ianix.com/pub/ed25519-deployment.html). More information about the Ed25519 protocol itself can be found [here](https://ed25519.cr.yp.to).
## Specification
### Required Changes
1. Support registration of 32-byte NodeIDs on the P-chain
2. Generate an Ed25519 key by default (`staker.key`) on node startup
3. Use the Ed25519 key to generate a TLS certificate on node startup
4. Add support for Ed25519 keys + signatures to the proposervm
5. Remove the TLS certificate embedding in proposervm blocks when an Ed25519 NodeID is the proposer
6. Add support for Ed25519 in `PeerList` messages
Changes to the p2p layer will be minimal as TLS handshakes are used to do p2p communication. Ed25519 will need to be added as a supported algorithm.
The P-chain will also need to be modified to support registration of 32-byte NodeIDs. During serialization, the length of the NodeID is not serialized and was assumed to always be 20 bytes. Implementers of this ACP must take care to continue parsing old transactions correctly.
This ACP could be implemented by adding a new tx type that requires Ed25519 NodeIDs only. If the implementer chooses to do this, a separate follow-up ACP must be submitted detailing the format of that transaction.
### Future Work
In the future, usage of non-Ed25519 TLS certificates should be prohibited to remove any dependency on them. This will further secure the Avalanche network by reducing complexity. The path to doing so is not outlined in this ACP.
## Backwards Compatibility
An implementation of this proposal should not introduce any backwards compatibility issues. NodeIDs that are 20 bytes should continue to be treated as hashes of TLS certificates. NodeIDs of 32 bytes (size of Ed25519 public key) should be supported following implementation of this proposal.
## Reference Implementation
TLS certificate generation using an Ed25519 private key is standard. The golang standard library has a reference [implementation](https://github.com/golang/go/blob/go1.20.10/src/crypto/tls/generate_cert.go).
Parsing TLS certificates and extracting the public key is also standard. AvalancheGo already contains [code](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/staking/verify.go#L55-L65) to verify the public key from a TLS certificate.
## Security Considerations
### Validation Criteria
Although Ed25519 is standardized in [RFC8032](https://datatracker.ietf.org/doc/html/rfc8032), it does not define strict validation criteria. This has led to inconsistencies in the validation criteria across implementations of the signature scheme. This is unacceptable for any protocol that requires participants to reach consensus on signature validity. Henry de Valance highlights the complexity of this issue [here](https://hdevalence.ca/blog/2020-10-04-its-25519am).
From [Chalkias et al. 2020](https://eprint.iacr.org/2020/1244.pdf):
* RFC 8032 and the NIST FIPS 186-5 draft both require rejecting non-canonically encoded points, but not all implementations follow those guidelines.
* RFC 8032 allows a choice between a permissive verification equation and a stricter one. Different implementations use different equations, meaning validation results can vary even across implementations that follow RFC 8032.
Zcash adopted [ZIP-215](https://zips.z.cash/zip-0215) (proposed by Henry de Valance) to explicitly define the Ed25519 validation criteria. Implementers of this ACP **must** use the ZIP-215 validation criteria.
The [`ed25519consensus`](https://github.com/hdevalence/ed25519consensus) golang library is a minimal fork of golang's `crypto/ed25519` package with support for ZIP-215 verification. It is maintained by [Filippo Valsorda](https://github.com/FiloSottile) who also maintains many golang stdlib cryptography packages. It is strongly recommended to use this library for golang implementations.
## Open Questions
*Can this Ed25519 key be used in alternative communication protocols?*
Yes. Ed25519 can be used for alternative communication protocols like [QUIC](https://datatracker.ietf.org/group/quic/about) or [NOISE](http://www.noiseprotocol.org/noise.html). This ACP removes the reliance on TLS certificates and associates an Ed25519 public key with each NodeID. This allows for experimentation with different communication protocols that may be better suited for a high-throughput blockchain like Avalanche.
*Can this Ed25519 key be used for Verifiable Random Functions?*
Yes. VRFs, as specified in [RFC9381](https://datatracker.ietf.org/doc/html/rfc9381), can be constructed using elliptic curves that are secure in the cryptographic random oracle model. Ed25519 test vectors are provided in the RFC for implementers of an Elliptic Curve VRF (ECVRF). This allows for Avalanche validators to generate a VRF per block using their associated Ed25519 keys, including for Subnets.
## Acknowledgements
Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-204: Precompile Secp256r1
URL: /docs/acps/204-precompile-secp256r1
Details for Avalanche Community Proposal 204: Precompile Secp256r1
# ACP-204: Precompile for secp256r1 Curve Support
| ACP | 204 |
| :------------ | :----------------------------------------------------------------------------------------- |
| **Title** | Precompile for secp256r1 Curve Support |
| **Author(s)** | [Santiago Cammi](https://github.com/scammi), [Arran Schlosberg](https://github.com/ARR4N) |
| **Status** | Implementable ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/212)) |
| **Track** | Standards |
## Abstract
This proposal introduces a precompiled contract that performs signature verifications for the secp256r1 elliptic curve on Avalanche's C-Chain. The precompile will be implemented at address `0x0000000000000000000000000000000000000100` and will enable native verification of P-256 signatures, significantly improving gas efficiency for biometric authentication systems, WebAuthn, and modern device-based signing mechanisms.
## Motivation
The secp256r1 (P-256) elliptic curve is the standard cryptographic curve used by modern device security systems, including Apple's Secure Enclave, Android Keystore, WebAuthn, and Passkeys. However, Avalanche currently only supports secp256k1 natively, forcing developers to use expensive Solidity-based verification that costs [200k-330k gas per signature verification](https://hackmd.io/@1ofB8klpQky-YoR5pmPXFQ/SJ0nuzD1T#Smart-Contract-Based-Verifiers).
This ACP proposes implementing EIP-7951's secp256r1 precompiled contract to unlock significant ecosystem benefits:
### Enterprise & Institutional Adoption
* Reduced onboarding friction: Enterprises can leverage existing biometric authentication infrastructure instead of managing seed phrases or hardware wallets
* Regulatory compliance: Institutions can utilize their approved device security standards and identity management systems
* Cost optimization: \~50x gas reduction (from 200k-330k to 6,900 gas) makes enterprise-scale applications economically viable
This roughly 50x gas cost reduction makes these use cases economically viable while maintaining the security properties institutions and users expect from their existing devices.
Adding the precompiled contract at the same address as used in [RIP-7212](https://github.com/ethereum/RIPs/blob/master/RIPS/rip-7212.md) provides consistency across ecosystems, and allows for any libraries that have been developed to interact with the precompile to be used unmodified across ecosystems.
## Specification
This ACP implements [EIP-7951](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7951.md) for secp256r1 signature verification on Avalanche. The specification follows EIP-7951 exactly, with the precompiled contract deployed at address `0x0000000000000000000000000000000000000100`.
### Core Functionality
* Input: 160 bytes (message hash + signature components r,s + public key coordinates x,y)
* Output: success: 32 bytes `0x...01`; failure: no data returned
* Gas Cost: 6,900 gas (based on EIP-7951 benchmarking)
* Validation: Full compliance with NIST FIPS 186-3 specification
### Activation
This precompile may be activated as part of Avalanche's next network upgrade. Individual Avalanche L1s and subnets could adopt this enhancement independently through their respective client software updates.
For complete technical specifications, validation requirements, and implementation details, refer to [EIP-7951](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7951.md).
## Backwards Compatibility
This ACP introduces a new precompiled contract and does not modify existing functionality. No backwards compatibility issues are expected since:
1. The precompile uses a previously unused address
2. No existing opcodes or consensus rules are modified
3. The change is additive and opt-in for applications
Adoption requires a coordinated network upgrade for the C-Chain. Other EVM L1s can adopt this enhancement independently by upgrading their client software.
## Security Considerations
### Cryptographic Security
* The secp256r1 curve is standardized by NIST and widely vetted
* Security properties are comparable to secp256k1 (used by ECRECOVER)
* Implementation follows NIST FIPS 186-3 specification exactly
### Implementation Security
* Signature verification (vs public-key recovery) approach maximizes compatibility with existing P-256 ecosystem
* No malleability check is included, matching the NIST specification; wrapper libraries may choose to add one
* Input validation prevents invalid curve points and out-of-range signature components
### Network Security
* Gas cost prevents potential DoS attacks through expensive computation
* No consensus-level security implications beyond standard precompile considerations
## Reference Implementation
The implementation builds upon existing work:
1. EIP-7951 Reference: The [go-ethereum implementation](https://github.com/ethereum/go-ethereum/pull/31991) of EIP-7951 provides the foundation
2. Coreth Implementation: Integration with Avalanche's C-Chain (Avalanche's fork of go-ethereum)
3. Cryptographic Library: Implementation utilizes Go's standard library `crypto/ecdsa` and `crypto/elliptic` packages, which implement NIST P-256 per FIPS 186-3 ([Go documentation](https://pkg.go.dev/crypto/elliptic#P256))
The implementation follows established patterns for precompile integration, adding the contract to the precompile registry and implementing the verification logic using established cryptographic libraries.
This ACP was implemented and merged into Coreth and Subnet-EVM behind the `Granite` upgrade flag. The full implementation can be found in [coreth@v0.15.4-rc.4](https://github.com/ava-labs/coreth/releases/tag/v0.15.4-rc.4), [subnet-evm@v0.8.0-fuji-rc.2](https://github.com/ava-labs/subnet-evm/releases/tag/v0.8.0-fuji-rc.2) and [libevm@v1.13.14-0.3.0.release](https://github.com/ava-labs/libevm/releases/tag/v1.13.14-0.3.0.release).
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-209: Eip7702 Style Account Abstraction
URL: /docs/acps/209-eip7702-style-account-abstraction
Details for Avalanche Community Proposal 209: Eip7702 Style Account Abstraction
| ACP | 209 |
| :------------ | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Title** | EIP-7702-style Set Code for EOAs |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/216)) |
| **Track** | Standards |
## Abstract
[EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md) was activated on the Ethereum mainnet in May 2025 as part of the Pectra upgrade, and introduced a new "set code transaction" type that allows Externally Owned Accounts (EOAs) to set the code in their account. This enabled several UX improvements, including batching multiple operations into a single atomic transaction, sponsoring transactions on behalf of another account, and privilege de-escalation for EOAs.
This ACP proposes adding a similar transaction type and functionality to Avalanche EVM implementations in order to have them support the same style of UX available on Ethereum. Modifications to the handling of account nonce and balances are required in order for it to be safe when used in conjunction with the streaming asynchronous execution (SAE) mechanism proposed in [ACP-194](https://github.com/avalanche-foundation/ACPs/tree/4a9408346ee408d0ab81050f42b9ac5ccae328bb/ACPs/194-streaming-asynchronous-execution).
## Motivation
The motivation for this ACP is the same as the motivation described in [EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#motivation). However, EIP-7702 as implemented for Ethereum breaks invariants required for EVM chains that use the ACP-194 SAE mechanism.
There has been strong community feedback in support of ACP-194 for its potential to:
* Allow for increasing the target gas rate of Avalanche EVM chains, including the C-Chain
* Enable the use of an encrypted mempool to prevent front-running
* Enable the use of real time VRF during transaction execution
Given the strong support for ACP-194, bringing EIP-7702-style functionality to Avalanche EVMs requires modifications to preserve its necessary invariants, described below.
### Invariants needed for ACP-194
There are [two invariants explicitly broken by EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#backwards-compatibility) that are required for SAE. They are:
1. An account balance can only decrease as a result of a transaction originating from that account.
2. An EOA nonce may not increase after transaction execution has begun.
These invariants are required for SAE in order to be able to statically analyze (i.e. determine without executing the transaction) that a transaction:
* Has the proper nonce
* Will have sufficient balance to pay for its worst case transaction fee plus the balance it sends
As described in the ACP-194, this lightweight analysis of transactions in blocks allows blocks to be accepted by consensus with the guarantee that they can be executed successfully. Only after block acceptance are the transactions within the block then put into a queue to be executed asynchronously. If the execution of transactions in the queue can decrease an EOA's account balance or change an EOA's current nonce, then block verification is unable to ensure that transactions in the block will be valid when executed. If transactions accepted into blocks can be invalidated prior to their execution, this poses DOS vulnerabilities because the invalidated transactions use up space in the pending execution queue according to their gas limits, but they do not pay any fees.
Notably, EIP-7702's violation of these invariants already presents challenges for mempool verification on Ethereum. As [noted in the security considerations section](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#transaction-propagation), EIP-7702 makes it "possible to cause transactions from other accounts to become stale" and this "poses some challenges for transaction propagation" because nodes now cannot "statically determine the validity of transactions for that account". In synchronous execution environments such as Ethereum, these issues only pose potential DOS risks to the public transaction mempool. Under an asynchronous execution scheme, the issues pose DOS risks to the chain itself since the invalidated transactions can be included in blocks prior to their execution.
## Specification
The same [set code transaction as specified in EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md?ref=blockhead.co#set-code-transaction) will be added to Avalanche EVM implementations. The behavior of the transaction is the same as specified in EIP-7702. However, in order to keep the guarantee of transaction validity upon inclusion in an accepted block, two modifications are made to the transaction verification and execution rules.
1. Delegated accounts must maintain a "reserved balance" to ensure they can always pay for the transaction fees and transferred balance of transactions sent from the account. The reserved balances are managed via a new `ReservedBalanceManager` precompile, as specified below.
2. The handling of account nonces during execution is separated from the verification of nonces during block verification, as specified below.
### Reserved balances
To ensure that all transactions can cover their worst case transaction fees and transferred balances upon inclusion in an accepted block, a "reserved balance" mechanism is introduced for accounts. Reserved balances are required for delegated accounts to guarantee that subsequent transactions they send after setting code for their account can still cover their fees and transfer amounts, even if transactions from other accounts reduce the account's balance prior to their execution.
To allow for managing reserved balances, a new `ReservedBalanceManager` stateful precompile will be added at address `0x0200000000000000000000000000000000000006`. The `ReservedBalanceManager` precompile will have the following interface:
```solidity
interface IReservedBalanceManager {
/// @dev Emitted whenever an account's reserved balance is modified.
event ReservedBalanceUpdated(address indexed account, uint256 newBalance);
/// @dev Called to deposit the native token balance provided into the account's
/// reserved balance.
function depositReservedBalance(address account) external payable;
/// @dev Returns the current reserved balance for the given account.
function getReservedBalance(address account) external view returns (uint256 balance);
}
```
The precompile will maintain a mapping of accounts to their current reserved balances. The precompile itself intentionally only allows for *increasing* an account's reserved balance. Reducing an account's reserved balance is only ever done by the EVM when a transaction is sent from the account, as specified below.
During transaction verification, the following rules are applied:
* If the sender EOA account has not set code via an EIP-7702 transaction, no reserved balance is required.
* The transaction is confirmed to be able to pay for its worst case transaction fee and transferred balance by looking at the sender account's regular balance and accounting for prior transactions it has sent that are still in the pending execution queue, as specified in ACP-194.
* Otherwise, if the sender EOA account has previously been delegated via an EIP-7702 transaction (even if that transaction is still in the pending execution queue), then the account's current "[settled](https://github.com/avalanche-foundation/ACPs/tree/4a9408346ee408d0ab81050f42b9ac5ccae328bb/ACPs/194-streaming-asynchronous-execution#settling-blocks)" reserved balance must be sufficient to cover the sum of the worst case transaction fees and balances sent for all of the transactions in the pending execution queue after the set code transaction.
During transaction execution, the following rules are applied:
* When initially deducting balance from the sender EOA account for the maximum transaction fee and balance sent with the transaction, the account's regular balance is used first. The account's reserved balance is only reduced if the regular balance is insufficient.
* In the execution of code as part of a transaction, only regular account balances are available. The only possible modification to reserved balances during code execution is increases via calls to the `ReservedBalanceManager` precompile `depositReservedBalance` function.
* If there is a gas refund at the end of the transaction execution, the balance is first credited to the sender account's reserved balance, up to a maximum of the account's reserved balance prior to the transaction. Any remaining refund is credited to the account's regular balance.
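A minimal sketch of the deduction and refund ordering described above, using hypothetical names (`account`, `charge`, `refund`) chosen for illustration only:

```go
package main

import "fmt"

// account models the two balance buckets the proposal introduces.
type account struct {
	regular, reserved uint64
}

// charge deducts amount using the regular balance first, touching the
// reserved balance only if the regular balance is insufficient.
func (a *account) charge(amount uint64) bool {
	if a.regular+a.reserved < amount {
		return false
	}
	if a.regular >= amount {
		a.regular -= amount
		return true
	}
	a.reserved -= amount - a.regular
	a.regular = 0
	return true
}

// refund credits the reserved balance first, up to its pre-transaction
// level (reservedBefore); any remainder goes to the regular balance.
func (a *account) refund(amount, reservedBefore uint64) {
	toReserved := amount
	room := uint64(0)
	if a.reserved < reservedBefore {
		room = reservedBefore - a.reserved
	}
	if toReserved > room {
		toReserved = room
	}
	a.reserved += toReserved
	a.regular += amount - toReserved
}

func main() {
	acct := account{regular: 30, reserved: 100}
	before := acct.reserved
	acct.charge(50)         // 30 from regular, 20 from reserved
	fmt.Println(acct)       // {0 80}
	acct.refund(25, before) // 20 restores reserved to 100, 5 goes to regular
	fmt.Println(acct)       // {5 100}
}
```

Charging the regular balance first preserves the reserved balance as the guarantee that the account's later transactions in the pending execution queue remain payable.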
### Handling of nonces
To account for EOA account nonces being incremented during contract execution and potentially invalidating transactions from that EOA that have already been accepted, we separate the rules for how nonces are verified during block verification and how they are handled during execution.
During block verification, all transactions must be verified to have a correct nonce value based on the latest "settled" state root, as defined in ACP-194, and the number of transactions from the sender account in the pending execution queue. Specifically, the required nonce is derived from the settled state root and incremented by one for each of the sender’s transactions already accepted into the pending execution queue or current block.
During execution, the nonce used must be one greater than the latest nonce used by the account, accounting for both all transactions from the account and all contracts created by the account. This means that the actual nonce used by a transaction may differ from the nonce assigned in the raw transaction itself and used in verification.
Separating the nonce values used for block verification and execution ensures that transactions accepted in blocks cannot be invalidated by the execution of transactions before them in the pending execution queue. It still provides the same level of replay protection to transactions, as a transaction with a given nonce from an EOA can be accepted at most once. However, this separation has a subtle potential impact on contract creation. Previously, the resulting address of a contract could be deterministically derived from a contract creation transaction based on its sender address and the nonce set in the transaction. Now, since the nonce used in execution is separate from that set in the transaction, this is no longer guaranteed.
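The split between the two nonce rules can be sketched as follows; both helpers and their names are illustrative, not part of the specification:

```go
package main

import "fmt"

// verificationNonce is the nonce a new transaction from a sender must carry
// to pass block verification: the nonce at the settled state root plus one
// per transaction from that sender already in the pending execution queue.
func verificationNonce(settledNonce, pendingFromSender uint64) uint64 {
	return settledNonce + pendingFromSender
}

// executionNonce is the nonce actually consumed at execution time: one past
// the latest nonce used, which also counts contract creations performed by
// the account's delegated code.
func executionNonce(latestUsedNonce uint64) uint64 {
	return latestUsedNonce + 1
}

func main() {
	// Settled nonce 5 with two pending txs: the next tx must carry nonce 7.
	fmt.Println(verificationNonce(5, 2)) // 7
	// If delegated code later performed a CREATE, bumping the latest used
	// nonce to 8, that tx executes with nonce 9 rather than the 7 it carried,
	// which is why CREATE addresses are no longer guaranteed to match.
	fmt.Println(executionNonce(8)) // 9
}
```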
## Backwards Compatibility
The introduction of EIP-7702 transactions will require a network upgrade to be scheduled.
Upon activation, a few invariants will be broken:
* (From EIP-7702) `tx.origin == msg.sender` can only be true in the topmost frame of execution.
* Once an account has been delegated, it can invoke multiple calls per transaction.
* (From EIP-7702) An EOA nonce may not increase after transaction execution has begun.
* Once an account has been delegated, the account may call a create operation during execution, causing the nonce to increase.
* The contract address of a contract deployed by an EOA (via transaction with an empty "to" address) can be derived from the sender address and the transaction's nonce.
* If earlier transactions cause the nonce to increase before execution, the actual nonce used in a contract creation transaction may differ from the one in the transaction payload, altering the resulting contract address.
* Note that this can only occur for accounts that have been delegated, and whose delegated code involves contract creation.
Additionally, at all points after the acceptance of a set code transaction, an EOA must have sufficient reserved balance to cover the sum of the worst case transaction fees and balances sent for all transactions in the pending execution queue after the set code transaction. Notably, this means that:
* If a delegated account has zero reserved balance at any point, it will be unable to send any further transactions until a different account provides it with reserved balance via the `ReservedBalanceManager` precompile.
* In order to initially "self-fund" its own reserved balance, an account must deposit reserved balance via the `ReservedBalanceManager` precompile prior to sending a set code transaction.
* In order to transfer its full (regular + reserved) account balance, a delegated account must first deposit all of its regular balance into reserved balance.
In order to support wallets as seamlessly as possible, `eth_getBalance` RPC implementations should be updated to return the sum of an account's regular and reserved balances. Additionally, clients should provide a new `eth_getReservedBalance` RPC method to allow querying the reserved balance of a given account.
## Reference Implementation
A reference implementation is not yet available and must be provided for this ACP to be considered implementable.
## Security Considerations
All of the [security considerations from the EIP-7702 specification](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md?ref=blockhead.co#security-considerations) apply here as well, except for the considerations regarding "sponsored transaction relayers" and "transaction propagation". Those two considerations do not apply here, as they are accounted for by the modifications made to introduce reserved balances and separate the handling of nonces in execution from verification.
Additionally, given that an account's reserved balance may need to be updated in state when a transfer is sent from the account, it must be confirmed that 21,000 gas is still a sufficiently high cost for this potentially more expensive operation. Charging more gas for basic transfer transactions in this case could otherwise be an option, but would likely cause further backwards compatibility issues for smart contracts and off-chain services.
## Open Questions
1. Are the implementation and UX complexities regarding the `ReservedBalanceManager` precompile worth the UX improvements introduced by the new set code transaction type?
* Except for having a contract spend an account's native token balance, most, if not all, of the UX improvements associated with the new transaction type could theoretically be implemented at the contract layer rather than the protocol layer. However, not all contracts provide support for account abstraction functionality via standards such as [ERC-2771](https://eips.ethereum.org/EIPS/eip-2771).
2. Are the implementation and UX complexities regarding the `ReservedBalanceManager` precompile worth giving delegate contracts the ability to spend native token balances?
* An alternative may be to disallow delegate contracts from spending native token balances at all, and revert if they attempt to. They could use "wrapped native token" ERC20 implementations (i.e. WAVAX) to achieve the same effect. However, this may be equally or more complex at the implementation level, and would cause incompatibilities in delegate contract implementations for Ethereum.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-224: Dynamic Gas Limit In Subnet Evm
URL: /docs/acps/224-dynamic-gas-limit-in-subnet-evm
Details for Avalanche Community Proposal 224: Dynamic Gas Limit In Subnet Evm
| ACP | 224 |
| :------------ | :---------------------------------------------------------------------------------------------------------------------------- |
| **Title** | Introduce ACP-176-Based Dynamic Gas Limits and Fee Manager Precompile in Subnet-EVM |
| **Author(s)** | Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/230)) |
| **Track** | Standards |
## Abstract
Proposes implementing [ACP-176](https://github.com/avalanche-foundation/ACPs/blob/aa3bea24431b2fdf1c79f35a3fd7cc57eeb33108/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md) in Subnet-EVM, along with the addition of a new optional `ACP224FeeManagerPrecompile` that can be used to configure fee parameters on-chain dynamically after activation, in the same way that the existing `FeeManagerPrecompile` can be used today prior to ACP-176.
## Motivation
ACP-176 updated the EVM dynamic fee mechanism to more accurately achieve the target gas consumption on-chain. It also added a mechanism for the target gas consumption rate to be dynamically updated. Until now, ACP-176 was only added to Coreth (C-Chain), primarily because most L1s prefer to control their fees and gas targets through the `FeeManagerPrecompile` and `FeeConfig` in genesis chain configuration, and the existing `FeeManagerPrecompile` is not compatible with the ACP-176 fee mechanism.
[ACP-194](https://github.com/avalanche-foundation/ACPs/blob/aa3bea24431b2fdf1c79f35a3fd7cc57eeb33108/ACPs/194-streaming-asynchronous-execution/README.md) (SAE) depends on having a gas target and capacity mechanism aligned with ACP-176. Specifically, there must be a known gas capacity added per second and a known maximum gas capacity. The existing rolling-window fee mechanism employed by Subnet-EVM does not provide these properties because it does not have a fixed capacity rate, making it difficult to calculate worst-case bounds for gas prices. As such, adding ACP-176 to Subnet-EVM is a functional requirement for L1s to be able to use SAE in the future. Adding ACP-176 fee dynamics to Subnet-EVM also has the added benefit of aligning with Coreth, such that only a single mechanism needs to be maintained going forward.
While both ACP-176 and ACP-194 will be required upgrades for L1s, this ACP aims to provide similar controls for chains with a new precompile. A new dynamic fee configuration and fee manager precompile that maps well into the ACP-176 mechanism will be added, optionally allowing admins to adjust fee parameters dynamically.
## Specification
### ACP-176 Parameters
This ACP uses the same parameters as in the [ACP-176 specification](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md#configuration-parameters), and allows their values to be configured on a chain-by-chain basis. The parameters and their current values used by the C-Chain are as follows:
| Parameter | Description | C-Chain Configuration |
| :-------- | :----------------------------------------------------- | :-------------------- |
| $T$ | target gas consumed per second | dynamic |
| $R$ | gas capacity added per second | 2\*T |
| $C$ | maximum gas capacity | 10\*T |
| $P$ | minimum target gas consumption per second | 1,000,000 |
| $D$ | target gas consumption rate update constant | 2^25 |
| $Q$ | target gas consumption rate update factor change limit | 2^15 |
| $M$ | minimum gas price | 1x10^-18 AVAX |
| $K$ | initial gas price update factor | 87\*T |
### Prior Subnet-EVM Fee Configuration Parameters
Prior to this ACP, the Subnet-EVM fee configuration and fee manager precompile used the following parameters to control the fee mechanism:
**GasLimit**:
Sets the max amount of gas consumed per block.
**TargetBlockRate**:
Sets the target rate of block production in seconds used for fee adjustments. If the actual block rate is faster than this target, block gas cost will be increased, and vice versa.
**MinBaseFee**:
The minimum base fee sets a lower bound on the EIP-1559 base fee of a block. Since the block's base fee sets the minimum gas price for any transaction included in that block, this effectively sets a minimum gas price for any transaction.
**TargetGas**:
Specifies the targeted amount of gas (including block gas cost) to consume within a rolling 10s window. When the dynamic fee algorithm observes that network activity is above/below the `TargetGas`, it increases/decreases the base fee proportionally to how far above/below the target actual network activity is.
**BaseFeeChangeDenominator**:
Divides the difference between actual and target utilization to determine how much to increase/decrease the base fee. A larger denominator indicates a slower changing, stickier base fee, while a lower denominator allows the base fee to adjust more quickly.
**MinBlockGasCost**:
Sets the minimum amount of gas to charge for the production of a block.
**MaxBlockGasCost**:
Sets the maximum amount of gas to charge for the production of a block.
**BlockGasCostStep**:
Determines how much to increase/decrease the block gas cost depending on the amount of time elapsed since the previous block. If the block is produced at the target rate, the block gas cost will stay the same as the block gas cost for the parent block. If it is produced faster/slower, the block gas cost will be increased/decreased by the step value for each second faster/slower than the target block rate accordingly.
Note: if the `BlockGasCostStep` is set to a very large number, it effectively requires block production to go no faster than the `TargetBlockRate`.
Ex: if a block is produced two seconds faster than the target block rate, the block gas cost will increase by `2 * BlockGasCostStep`.
### ACP-176 Parameters in Subnet-EVM
ACP-176 will make `GasLimit` and `BaseFeeChangeDenominator` configurations obsolete in Subnet-EVM.
`TargetBlockRate`, `MinBlockGasCost`, `MaxBlockGasCost`, and `BlockGasCostStep` will be also removed by [ACP-226](https://github.com/avalanche-foundation/ACPs/tree/ce51dfab/ACPs/226-dynamic-minimum-block-times).
`MinGasPrice` is equivalent to `M` in ACP-176 and will be used to set the minimum gas price. This is similar to `MinBaseFee` in the old Subnet-EVM fee configuration and gives roughly the same effect. Currently the default value is `25 * 10^-9` AVAX (25 nAVAX, i.e. 25 Gwei). This default will be changed to the minimum possible denomination of the native EVM asset (1 Wei), which is aligned with the C-Chain.
`TargetGas` is equivalent to `T` (target gas consumed per second) in ACP-176 and will be used to set the target gas consumed per second for ACP-176.
`MaxCapacityFactor` is equivalent to the factor in `C` in ACP-176 and controls the maximum gas capacity (i.e. block gas limit). This determines the `C` as `C = MaxCapacityFactor * T`. The default value will be 10, which is aligned with the C-Chain.
`TimeToDouble` will be used to control the speed of the fee adjustment (`K`). Since gas capacity is added at rate `R = 2*T`, excess gas can accumulate at a rate of at most `T`, so the gas price doubles after `TimeToDouble` seconds of full blocks when `K = (T * TimeToDouble) / ln(2)`. The default value for `TimeToDouble` will be 60 (seconds), making `K≈87*T`, which is aligned with the C-Chain.
As a result, the parameters will be set as follows:
| Parameter | Description | Default Value | Is Configurable |
| :-------- | :----------------------------------------------------- | :------------ | :------------------------------------------------------------ |
| $T$ | target gas consumed per second | 1,000,000 | :white\_check\_mark: |
| $R$ | gas capacity added per second | 2\*T | :x: |
| $C$ | maximum gas capacity | 10\*T | :white\_check\_mark: Through `MaxCapacityFactor` (default 10) |
| $P$ | minimum target gas consumption per second | 1,000,000 | :x: |
| $D$ | target gas consumption rate update constant | 2^25 | :x: |
| $Q$ | target gas consumption rate update factor change limit | 2^15 | :x: |
| $M$ | minimum gas price | 1 Wei | :white\_check\_mark: |
| $K$ | gas price update constant | \~87\*T | :white\_check\_mark: Through `TimeToDouble` (default 60s) |
Keeping the gas capacity added per second (`R`) always equal to `2*T` ensures that the gas price can increase and decrease at the same rate. The values of `Q` and `D` affect the magnitude of change to `T` that each block can have, and the granularity at which the target gas consumption rate can be updated. The proposed values match the C-Chain, allowing each block to modify the current gas target by roughly $\frac{1}{1024}$ of its current value. This has provided sufficient responsiveness and granularity as-is, removing the need to make `D` and `Q` dynamic or configurable. Similarly, 1,000,000 gas/second should be a low enough minimum target gas consumption for any EVM L1. The target gas for a given L1 can be increased from this value dynamically and has no maximum.
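As a concrete check on these relations, the following sketch (illustrative only, not part of any client implementation) derives `R`, `C`, and `K` from the configurable values, using the defaults from the table above:

```python
import math

# Illustrative sketch (not part of any client): derive R, C, and K from
# the configurable Subnet-EVM values, using the defaults in the table above.
def derived_params(target_gas: int, max_capacity_factor: int, time_to_double: int):
    r = 2 * target_gas                    # gas capacity added per second
    c = max_capacity_factor * target_gas  # maximum gas capacity
    # At sustained maximum load the excess grows at R - T = T gas/sec, so
    # the gas price doubles after K*ln(2)/T seconds; solving for K:
    k = target_gas * time_to_double / math.log(2)
    return r, c, k

r, c, k = derived_params(1_000_000, 10, 60)  # T = 1M, factor 10, 60 s
# r == 2_000_000, c == 10_000_000, k ≈ 86.56 * 10^6 (i.e., K ≈ 87*T)
```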
### Genesis Configuration
There will be a new genesis chain configuration to set the parameters for the chain without requiring the ACP224FeeManager precompile to be activated. This will be similar to the existing fee configuration parameters in the chain configuration. If there is no genesis configuration for the new fee parameters, the C-Chain default values will be used. This will look like the following:
```json
{
  ...
  "acp224Timestamp": uint64,
  "acp224FeeConfig": {
    "minGasPrice": uint64,
    "maxCapacityFactor": uint64,
    "timeToDouble": uint64
  }
}
```
### Dynamic Gas Target Via Validator Preference
For L1s that want their gas target to be dynamically adjusted based on the preferences of their validator sets, the same mechanism introduced on the C-Chain in ACP-176 will be employed. Validators will be able to set their `gas-target` preference in their node's configuration, and block builders can then adjust the target excess in blocks that they propose based on their preference.
### Dynamic Gas Target & Fee Configuration Via `ACP224FeeManagerPrecompile`
For L1s that want an "admin" account to be able to dynamically configure their gas target and other fee parameters, a new optional `ACP224FeeManagerPrecompile` will be introduced that can be activated. The precompile will offer similar controls to the existing `FeeManagerPrecompile` implemented in Subnet-EVM [here](https://github.com/ava-labs/subnet-evm/tree/53f5305/precompile/contracts/feemanager). The Solidity interface will be as follows:
```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import "./IAllowList.sol";

/// @title ACP-224 Fee Manager Interface
/// @notice Interface for managing dynamic gas limit and fee parameters
/// @dev Inherits from IAllowList for access control
interface IACP224FeeManager is IAllowList {
    /// @notice Configuration parameters for the dynamic fee mechanism
    struct FeeConfig {
        uint256 targetGas;         // Target gas consumption per second
        uint256 minGasPrice;       // Minimum gas price in wei
        uint256 maxCapacityFactor; // Maximum capacity factor (C = factor * T)
        uint256 timeToDouble;      // Time in seconds for gas price to double at max capacity
    }

    /// @notice Emitted when fee configuration is updated
    /// @param sender Address that triggered the update
    /// @param oldFeeConfig Previous configuration
    /// @param newFeeConfig New configuration
    event FeeConfigUpdated(address indexed sender, FeeConfig oldFeeConfig, FeeConfig newFeeConfig);

    /// @notice Set the fee configuration
    /// @param config New fee configuration parameters
    function setFeeConfig(FeeConfig calldata config) external;

    /// @notice Get the current fee configuration
    /// @return config Current fee configuration
    function getFeeConfig() external view returns (FeeConfig memory config);

    /// @notice Get the block number when fee config was last changed
    /// @return blockNumber Block number of last configuration change
    function getFeeConfigLastChangedAt() external view returns (uint256 blockNumber);
}
```
For chains with the precompile activated, `setFeeConfig` can be used to dynamically change each of the values in the fee configurations. Importantly, any updates made via calls to `setFeeConfig` in a transaction will take effect only as of *settlement* of the transaction, not as of *acceptance* or *execution* (for transaction life cycles/status, refer to ACP-194 [here](https://github.com/avalanche-foundation/ACPs/tree/61d2a2a/ACPs/194-streaming-asynchronous-execution#description)). This ensures that all nodes apply the same worst-case bounds validation on transactions being accepted into the queue, since the worst-case bounds are affected by changes to the fee configuration.
In addition to storing the latest fee configuration to be returned by `getFeeConfig`, the precompile will also maintain state storing the latest values of $q$ and $K$. These values can be derived from the `targetGas` and `timeToDouble` values given to the precompile, respectively. The value of $q$ can be deterministically calculated using the same method as Coreth currently employs to calculate a node's desired target excess [here](https://github.com/ava-labs/coreth/blob/b4c8300490afb7f234df704fdcc446f227e4ec2f/plugin/evm/upgrade/acp176/acp176.go#L170). Similarly, the value of $K$ could be computed directly according to:
$K = \frac{targetGas \cdot timeToDouble}{ln(2)}$
However, floating point math may introduce inaccuracies. Instead, a similar approach will be employed using binary search to determine the closest integer solution for $K$.
Similar to the [desired target excess calculation in Coreth](https://github.com/ava-labs/coreth/blob/0255516f25964cf4a15668946f28b12935a50e0c/plugin/evm/upgrade/acp176/acp176.go#L170), which takes a node's desired gas target and calculates its desired target excess value, the `ACP224FeeManagerPrecompile` will use binary search to determine the resulting dynamic target excess value given the `targetGas` value passed to `setFeeConfig`. All blocks accepted after the settlement of such a call must have the correct target excess value as derived from the binary search result.
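The binary search over $K$ can be sketched as follows. This is illustrative only: the predicate here uses floating point for brevity, whereas a real implementation would evaluate it with fixed-point arithmetic, as Coreth does for the desired target excess.

```python
import math

# Illustrative sketch: binary search for the smallest integer K such that
# the doubling time K*ln(2)/T is at least `time_to_double` seconds. The
# floating-point predicate stands in for a fixed-point evaluation.
def find_k(target_gas: int, time_to_double: int) -> int:
    lo, hi = 0, 2 * target_gas * time_to_double  # K* < 2*T*ttd since ln(2) > 1/2
    while lo < hi:
        mid = (lo + hi) // 2
        if mid * math.log(2) >= target_gas * time_to_double:
            hi = mid  # doubling time already long enough; try smaller K
        else:
            lo = mid + 1
    return lo

k = find_k(1_000_000, 60)  # close to T * 60 / ln(2), i.e. ≈ 87*T
```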
Block building logic can follow the below diagram for determining the target excess of blocks.
```mermaid
flowchart TD
    B{Is ACP224FeeManager precompile active?}
    B -- Yes --> C[Use targetExcess from precompile storage at latest settled root]
    B -- No --> D{Is gas-target set in node chain config file?}
    D -- Yes --> E[Calculate targetExcess from configured preference and allowed update bounds]
    D -- No --> F{Does parent block have ACP176 fields?}
    F -- Yes --> G[Use parent block ACP176 gas target]
    F -- No --> H[Use MinTargetPerSecond]
```
#### Adjustment to ACP-176 calculations for price discovery
ACP-176 defines the gas price for a block as:
$gas\_price = M \cdot e^{\frac{x}{K}}$
Now, whenever $M$ (`minGasPrice`) or $K$ (derived from `timeToDouble`) are changed via the `ACP224FeeManagerPrecompile`, $x$ must also be updated.
Specifically, when $M$ is updated from $M_0$ to $M_1$, $x$ must also be updated from $x_0$ (the current excess) to $x_1$. $x_1$ theoretically could be calculated directly as:
$x_1 = ln(\frac{M_0}{M_1}) \cdot K + x_0$
However, this would introduce floating point inaccuracies. Instead, $x_1$ can be approximated using binary search to find the minimum non-negative integer such that the resulting gas price calculated using $M_1$ is greater than or equal to the current gas price prior to the change in $M$. In effect, this means that both reducing the minimum gas price and increasing the minimum gas price to a value less than the current gas price have no immediate effect on the current gas price. However, increasing the minimum gas price to a value greater than the current gas price will cause the gas price to immediately step up to the new minimum value.
Similarly, when $K$ is updated from $K_0$ to $K_1$, $x$ must also be updated from $x_0$ (the current excess) to $x_1$, where $x_1$ is calculated as:
$x_1 = x_0 \cdot \frac{K_1}{K_0}$
This makes it such that the current gas price stays the same when $K$ is changed. Changes to $K$ only impact how quickly or slowly the gas price can change going forward based on usage.
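The two excess adjustments can be sketched as follows. This is illustrative only; the floating-point predicate stands in for the fixed-point evaluation a real client would use.

```python
import math

# Illustrative sketch of the two adjustments to the excess x described
# above when M or K change via the precompile.
def adjust_excess_for_min_price(x0: int, k: int, m0: int, m1: int) -> int:
    """Smallest non-negative integer x1 with M1*e^(x1/K) >= M0*e^(x0/K)."""
    current_price = m0 * math.exp(x0 / k)
    lo, hi = 0, x0 + 64 * k  # headroom for any M0/M1 ratio below e^64
    while lo < hi:
        mid = (lo + hi) // 2
        if m1 * math.exp(mid / k) >= current_price:
            hi = mid
        else:
            lo = mid + 1
    return lo

def adjust_excess_for_k(x0: int, k0: int, k1: int) -> int:
    """x1 = x0 * K1 / K0 keeps the gas price unchanged when K changes."""
    return x0 * k1 // k0

# Halving M raises x1 just enough to leave the price (almost) unchanged:
x1 = adjust_excess_for_min_price(5000, 1000, 4, 2)   # 5694 = 5000 + ceil(1000*ln 2)
# Raising M above the current price snaps x1 to 0 (price steps up to M1):
assert adjust_excess_for_min_price(5000, 1000, 4, 1000) == 0
assert adjust_excess_for_k(5000, 1000, 2000) == 10000
```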
## Backwards Compatibility
ACP-224 will require a network upgrade in order to activate the new fee mechanism. Another activation will also be required to activate the new fee manager precompile. The activation of the precompile must never occur before the activation of ACP-224 (the fee mechanism), since the precompile depends on ACP-224's fee update logic to function correctly.
Activation of the ACP-224 mechanism will deactivate the prior fee mechanism and the prior fee manager precompile. This ensures that there is no ambiguity or overlap between legacy and new pricing logic. In order to provide a configuration path for existing networks, a network upgrade override for both the activation time and the ACP-176 configuration parameters will be introduced.
These upgrades are optional for now. However, with the introduction of ACP-194 (SAE), activating this ACP will become required; otherwise the network will not be able to use ACP-194.
## Reference Implementation
A reference implementation is not yet available and must be provided for this ACP to be considered implementable.
## Security Considerations
Generally, this has the same security considerations as ACP-176. However, due to the dynamic nature of parameters exposed in the `ACP224FeeManagerPrecompile` there is an additional risk of misconfiguration. Misconfiguration of parameters could leave the network vulnerable to a DoS attack or result in higher transaction fees than necessary.
## Open Questions
* Should activation of the `ACP224FeeManager` precompile disable the old precompile itself or should we require it to be manually disabled as a separate upgrade?
* Should we use `targetGas` in the genesis/chain config as an optional field signaling whether the chain config should take precedence over validator preferences?
* Similarly to the above, should we have a toggle in the `ACP224FeeManager` precompile to give validators control over `targetGas`?
## Acknowledgements
* [Stephen Buttolph](https://github.com/StephenButtolph)
* [Arran Schlosberg](https://github.com/ARR4N)
* [Austin Larson](https://github.com/alarso16)
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-226: Dynamic Minimum Block Times
URL: /docs/acps/226-dynamic-minimum-block-times
Details for Avalanche Community Proposal 226: Dynamic Minimum Block Times
| ACP | 226 |
| :------------ | :------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Title** | Dynamic Minimum Block Times |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Implementable ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/228)) |
| **Track** | Standards |
## Abstract
Proposes replacing the current block production rate limiting mechanism on Avalanche EVM chains with a new mechanism where validators collectively and dynamically determine the minimum time between blocks.
## Motivation
Currently, Avalanche EVM chains employ a mechanism to limit the rate of block production by increasing the "block gas cost" that must be burned if blocks are produced more frequently than the target block rate specified for the chain. The block gas cost is paid by summing the "priority fee" amounts that all transactions included in the block collectively burn. This mechanism has a few notable suboptimal aspects:
1. There is no explicit minimum block delay time. Validators are capable of producing blocks as frequently as they would like by paying the additional fee, and too rapid block production could cause network stability issues.
2. The target block rate can only be changed in a required network upgrade, which makes updates difficult to coordinate and operationalize.
3. The target block rate can only be specified with 1-second granularity, which does not allow for configuring sub-second block times as performance improvements are made to make them feasible.
With the prospect of ACP-194 removing block execution from consensus and allowing for increases to the gas target through the dynamic ACP-176 mechanism, Avalanche EVM chains would be better suited by having a dynamic minimum block delay time denominated in milliseconds. This allows networks to ensure that blocks are never produced more frequently than the minimum block delay, and allows validators to dynamically influence the minimum block delay value by setting their preference.
## Specification
### Block Header Changes
Upon activation of this ACP, the `blockGasCost` field in block headers will be required to be set to 0. This means that no validation of the cumulative priority fee amounts of transactions within the block exceeding the block gas cost is required. Additionally, two new fields are added to EVM block headers: `timestampMilliseconds` and `minimumBlockDelayExcess`.
#### `timestampMilliseconds`
The canonical serialization and interpretation of EVM blocks already contains a block timestamp specified in seconds. Altering this would require deep changes to the EVM codebase, as well as cause breaking changes to tooling such as indexers and block explorers. Instead, a new field is added representing the Unix timestamp in milliseconds. Header verification should verify that `block.timestamp` (in seconds) is aligned with `block.timestampMilliseconds`; more precisely: `block.timestampMilliseconds / 1000 == block.timestamp`.
Existing tools that do not need millisecond granularity do not need to parse the new field, which limits the amount of breaking changes.
The `timestampMilliseconds` field is represented in block headers as a `uint64`.
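The alignment rule can be expressed directly; a minimal sketch (field access is illustrative):

```python
# Sketch of the header alignment rule described above: the legacy
# seconds-denominated timestamp must equal the millisecond timestamp
# truncated to seconds (integer division).
def timestamps_aligned(timestamp_seconds: int, timestamp_milliseconds: int) -> bool:
    return timestamp_milliseconds // 1000 == timestamp_seconds

assert timestamps_aligned(1_700_000_000, 1_700_000_000_123)
```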
#### `minimumBlockDelayExcess`
The new `minimumBlockDelayExcess` field in the block header is used to derive the minimum number of milliseconds that must pass before the next block is allowed to be accepted. Specifically, if block $B$ has a `minimumBlockDelayExcess` of $q$, then the effective timestamp of block $B+1$ in milliseconds must be at least $M * e^{\frac{q}{D}}$ greater than the effective timestamp of block $B$ in milliseconds. $M$, $q$, and $D$ are defined below in the mechanism specification.
The `minimumBlockDelayExcess` field is represented in block headers as a `uint64`.
The value of `minimumBlockDelayExcess` can be updated in each block, similar to the gas target excess field introduced in ACP-176. The mechanism is specified below.
### Dynamic `minimumBlockDelay` mechanism
The `minimumBlockDelay` can be defined as:
$m = M * e^{\frac{q}{D}}$
Where:
* $M$ is the global minimum `minimumBlockDelay` value in milliseconds
* $q$ is a non-negative integer that is initialized upon the activation of this mechanism, referred to as the `minimumBlockDelayExcess`
* $D$ is a constant that helps control the rate of change of `minimumBlockDelay`
After the execution of transactions in block $b$, the value of $q$ can be increased or decreased by up to $Q$. It must be the case that $\left|\Delta q\right| \leq Q$, or block $b$ is considered invalid. The amount by which $q$ changes after executing block $b$ is specified by the block builder.
Block builders (i.e., validators) may set their desired value for $M$ (i.e., their desired `minimumBlockDelay`) in their configuration, and their desired value for $q$ can then be calculated as:
$q_{desired} = D \cdot ln\left(\frac{M_{desired}}{M}\right)$
Note that since $q_{desired}$ is only used locally and can be different for each node, it is safe for implementations to approximate the value of $ln\left(\frac{M_{desired}}{M}\right)$ and round the resulting value to the nearest integer. Alternatively, client implementations can choose to use binary search to find the closest integer solution, as `coreth` [does to calculate a node's desired target excess](https://github.com/ava-labs/coreth/blob/ebaa8e028a3a8747d11e6822088b4af7863451d8/plugin/evm/upgrade/acp176/acp176.go#L170).
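For illustration, the local preference calculation can be sketched with simple rounding; the `D` value below is hypothetical and used purely for the example, and exact integer math is unnecessary since $q_{desired}$ is only a local preference:

```python
import math

# Illustrative sketch of a node's locally-computed desired excess, per
# the formula above. Rounding is safe because the value is only a local
# preference, not consensus-critical.
def desired_q(m_desired_ms: int, m_global_ms: int, d: int) -> int:
    return round(d * math.log(m_desired_ms / m_global_ms))

# A validator preferring 2000 ms blocks, with a global minimum M of 1 ms
# and a hypothetical D of 100_000:
q = desired_q(2000, 1, 100_000)  # approximately 760_090
```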
When building a block, builders can calculate their next preferred value for $q$ based on the network's current value (`q_current`) according to:
```python
# Calculates a node's new desired value for q for a given block
def calc_next_q(q_current: int, q_desired: int, max_change: int) -> int:
    if q_desired > q_current:
        return q_current + min(q_desired - q_current, max_change)
    else:
        return q_current - min(q_current - q_desired, max_change)
```
As $q$ is updated after the execution of transactions within the block, $m$ is also updated such that $m = M \cdot e^{\frac{q}{D}}$ at all times. As noted above, the change to $m$ only takes effect for subsequent block production, and cannot change the time at which block $b$ can be produced itself.
### Gas Accounting Updates
Currently, the amount of gas capacity available is only incremented on a per-second basis, as defined by ACP-176. With this ACP, it is expected that chains will be able to have sub-second block times. However, when a chain's gas capacity is fully consumed (i.e. during periods of heavy transaction load), blocks would not be able to be produced at sub-second intervals because at least one second would need to elapse for new gas capacity to be added. To correct this, upon activation of this ACP, gas capacity is added on a per-millisecond basis.
The ACP-176 mechanism for determining the target gas consumption per second remains unchanged, but its result is now used to derive the target gas consumption per millisecond by dividing by 1000, and gas capacity is added at that rate as each block advances time by some number of milliseconds.
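A minimal sketch of the per-millisecond accrual, assuming a rate expressed per second by the unchanged ACP-176 mechanism:

```python
# Minimal sketch: gas capacity accrual after this ACP. The per-second
# rate produced by the ACP-176 mechanism is converted to a per-millisecond
# rate, and capacity accrues as each block advances time.
def capacity_added(rate_per_second: int, elapsed_ms: int) -> int:
    return rate_per_second * elapsed_ms // 1000

# A 250 ms block on a chain adding capacity at R = 2,000,000 gas/s:
assert capacity_added(2_000_000, 250) == 500_000
```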
### Activation Parameters for the C-Chain
The parameters at activation on the C-Chain were chosen as follows:
$M$ was chosen as a lower bound for `minimumBlockDelay` values to allow high-performance Avalanche L1s to be able to realize maximum performance and minimal transaction latency.
Based on the 1 millisecond value for $M$, $q$ was chosen such that the effective `minimumBlockDelay` value at time of activation is as close as possible to the current target block rate of the C-Chain, which is 2 seconds.
$D$ and $Q$ were chosen such that it takes approximately 3,600 consecutive blocks of the maximum allowed change in $q$ for the effective `minimumBlockDelay` value to either halve or double.
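The halving/doubling horizon follows directly from the exponential form of $m$: since $m = M \cdot e^{\frac{q}{D}}$ doubles exactly when $q$ increases by $D \ln(2)$, taking the maximum step $Q$ in every consecutive block gives

$n_{double} = \frac{D \cdot ln(2)}{Q} \approx 3600 \text{ blocks}$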
### ProposerVM `MinBlkDelay`
The ProposerVM currently offers a static, configurable `MinBlkDelay` (in seconds) between consecutive blocks. With this ACP enforcing a dynamic minimum block delay time, any EVM instance adopting this ACP that also leverages the ProposerVM should ensure that the ProposerVM `MinBlkDelay` is set to 0.
### Note on Block Building
While there is no longer a requirement for blocks to burn a minimum block gas cost after the activation of this ACP, block builders should still take priority fees into account when building blocks to allow for transaction prioritization and to maximize the amount of native token (AVAX) burned in the block.
From a user (transaction issuer) perspective, this means that a non-zero priority fee would only ever need to be set to ensure inclusion during periods of maximum gas utilization.
## Backwards Compatibility
While this proposal requires a network upgrade and updates the EVM block header format, it does so in a way that tries to maintain as much backwards compatibility as possible. Specifically, applications that currently parse and use the existing timestamp field that is denominated in seconds can continue to do so. The `timestampMilliseconds` header value only needs to be used in cases where more granular timestamps are required.
## Reference Implementation
This ACP was implemented and merged into Coreth and Subnet-EVM behind the `Granite` upgrade flag. The full implementation can be found in [coreth@v0.15.4-rc.4](https://github.com/ava-labs/coreth/releases/tag/v0.15.4-rc.4) and [subnet-evm@v0.8.0-fuji-rc.0](https://github.com/ava-labs/subnet-evm/releases/tag/v0.8.0-fuji-rc.0).
## Security Considerations
Too rapid block production may cause availability issues if validators of the given blockchain are not able to keep up with blocks being proposed to consensus. This new mechanism allows validators to help influence the maximum frequency at which blocks are allowed to be produced, but potential misconfiguration or overly aggressive settings may cause problems for some validators.
The mechanism for the minimum block delay time to adapt based on validator preference has already been used previously to allow for dynamic gas targets based on validator preference on the C-Chain, providing more confidence that it is suitable for controlling this network parameter as well. However, because each block is capable of changing the value of the minimum block delay by a certain amount, the lower the minimum block delay is, the more blocks that can be produced in a given time, and the faster the minimum block delay value will be able to change. This creates a dynamic where the mechanism for controlling `minimumBlockDelay` is more reactive at lower values, and less reactive at higher values. The global minimum `minimumBlockDelay` ($M$) provides a lower bound of how quickly blocks can ever be produced, but it is left to validators to ensure that the effective value does not exceed their collective preference.
## Acknowledgments
Thanks to [Luigi D'Onorio DeMeo](https://x.com/luigidemeo) for continually bringing up the idea of reducing block times to provide better UX for users of Avalanche blockchains.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-23: P Chain Native Transfers
URL: /docs/acps/23-p-chain-native-transfers
Details for Avalanche Community Proposal 23: P Chain Native Transfers
| ACP | 23 |
| :------------ | :--------------------------------------------------------- |
| **Title** | P-Chain Native Transfers |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) |
| **Status** | Activated |
| **Track** | Standards |
## Abstract
Support native transfers on P-chain. This enables users to transfer P-chain assets without leaving the P-chain or using a transaction type that's not meant for native transfers.
## Motivation
Currently, the P-chain has no simple transfer transaction type. The X-chain supports this functionality through a `BaseTx`. Although the P-chain contains transaction types that extend `BaseTx`, the `BaseTx` transaction type itself is not a valid transaction. This leads to abnormal implementations of P-chain native transfers like in the AvalancheGo wallet which abuses [`CreateSubnetTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.15/wallet/chain/p/builder.go#L54-L63) to replicate the functionality contained in `BaseTx`.
With the growing number of subnets slated for launch on the Avalanche network, simple transfers will be demanded more by users. While there are work-arounds as mentioned before, the network should support it natively to provide a cheaper option for both validators and end-users.
## Specification
To support `BaseTx`, Avalanche Network Clients (like AvalancheGo) must register `BaseTx` with the type ID `0x22` in codec version `0x00`.
For the specification of the transaction itself, see [here](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/platformvm/txs/base_tx.go#L29). Note that most other P-chain transactions extend this type; the only change in this ACP is to register it as a valid transaction itself.
## Backwards Compatibility
Adding a new transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to reject this transaction prior to activation. This ACP only details the specification of the added `BaseTx` transaction type.
## Reference Implementation
An implementation of `BaseTx` support was created [here](https://github.com/ava-labs/avalanchego/pull/2232) and subsequently merged into AvalancheGo. Since the "D" Upgrade is not activated, this transaction will be rejected by AvalancheGo.
If modifications are made to the specification of the transaction as part of the ACP process, the code must be updated prior to activation.
## Security Considerations
The P-chain has fixed fees which does not place any limits on chain throughput. A potentially popular transaction type like `BaseTx` may cause periods of high usage. The reference implementation in AvalancheGo sets the transaction fee to 0.001 AVAX as a deterrent (equivalent to `ImportTx` and `ExportTx`). This should be sufficient for the time being but a dynamic fee mechanism will need to be added to the P-chain in the future to mitigate this security concern. This is not addressed in this ACP as it requires a larger change to the fee dynamics on the P-chain as a whole.
## Open Questions
No open questions.
## Acknowledgements
Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on the reference implementation.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-24: Shanghai Eips
URL: /docs/acps/24-shanghai-eips
Details for Avalanche Community Proposal 24: Shanghai Eips
| ACP | 24 |
| :------------ | :--------------------------------------------------------- |
| **Title** | Activate Shanghai EIPs on C-Chain |
| **Author(s)** | Darioush Jalali ([@darioush](https://github.com/darioush)) |
| **Status** | Activated |
| **Track** | Standards |
## Abstract
This ACP proposes the adoption of the following EIPs on the Avalanche C-Chain network:
* [EIP-3651: Warm COINBASE](https://eips.ethereum.org/EIPS/eip-3651)
* [EIP-3855: PUSH0 instruction](https://eips.ethereum.org/EIPS/eip-3855)
* [EIP-3860: Limit and meter initcode](https://eips.ethereum.org/EIPS/eip-3860)
* [EIP-6049: Deprecate SELFDESTRUCT](https://eips.ethereum.org/EIPS/eip-6049)
## Motivation
The listed EIPs were activated on Ethereum mainnet as part of the [Shanghai upgrade](https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/shanghai.md#included-eips). This ACP proposes their activation on the Avalanche C-Chain in the next network upgrade. This maintains compatibility with upstream EVM tooling, infrastructure, and developer experience (e.g., Solidity compiler >= [0.8.20](https://github.com/ethereum/solidity/releases/tag/v0.8.20)).
## Specification & Reference Implementation
This ACP proposes the EIPs be adopted as specified in the EIPs themselves. ANCs (Avalanche Network Clients) can adopt the implementation as specified in the [coreth](https://github.com/ava-labs/coreth) repository, which was adopted from the [go-ethereum v1.12.0](https://github.com/ethereum/go-ethereum/releases/tag/v1.12.0) release in this [PR](https://github.com/ava-labs/coreth/pull/277). In particular, note the following code:
* [Activation of new opcode and dynamic gas calculations](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/vm/jump_table.go#L92)
* [EIP-3860 intrinsic gas calculations](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/state_transition.go#L112-L113)
* [EIP-3651 warm coinbase](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/state/statedb.go#L1197-L1199)
* Note EIP-6049 marks SELFDESTRUCT as deprecated, but does not remove it. The implementation in coreth is unchanged.
## Backwards Compatibility
The following backward compatibility considerations were highlighted by the original EIP authors:
* [EIP-3855](https://eips.ethereum.org/EIPS/eip-3855#backwards-compatibility): "... introduces a new opcode which did not exist previously. Already deployed contracts using this opcode could change their behaviour after this EIP".
* [EIP-3860](https://eips.ethereum.org/EIPS/eip-3860#backwards-compatibility): "Already deployed contracts should not be effected, but certain transactions (with initcode beyond the proposed limit) would still be includable in a block, but result in an exceptional abort."
Adoption of this ACP modifies consensus rules for the C-Chain, therefore it requires a network upgrade.
## Security Considerations
Refer to the original EIPs for security considerations:
* [EIP 3855](https://eips.ethereum.org/EIPS/eip-3855#security-considerations)
* [EIP 3860](https://eips.ethereum.org/EIPS/eip-3860#security-considerations)
## Open Questions
No open questions.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-25: Vm Application Errors
URL: /docs/acps/25-vm-application-errors
Details for Avalanche Community Proposal 25: Vm Application Errors
| ACP | 25 |
| :------------ | :-------------------------------------------------------- |
| **Title** | Virtual Machine Application Errors |
| **Author(s)** | Joshua Kim ([@joshua-kim](https://github.com/joshua-kim)) |
| **Status** | Activated |
| **Track** | Standards |
## Abstract
Support a way for a Virtual Machine (VM) to signal application-defined error conditions to another VM.
## Motivation
VMs are able to build their own peer-to-peer application protocols using the `AppRequest`, `AppResponse`, and `AppGossip` primitives.
`AppRequest` is a message type that requires a corresponding `AppResponse` to indicate a successful response. In the unhappy path where an `AppRequest` is unable to be served, there is currently no native way for a peer to signal an error condition. VMs currently resort to timeouts in failure cases, where a client making a request will fall back to marking its request as failed after some timeout period has expired.
Having a native application error type would offer a more powerful abstraction where Avalanche nodes would be able to score peers based on perceived errors. This is not currently possible because Avalanche networking isn't aware of the specific implementation details of the messages being delivered to VMs. A native application error type would also guarantee that all clients can potentially expect an `AppError` message to unblock an unsuccessful `AppRequest` and only rely on a timeout when absolutely necessary, significantly decreasing the latency for a client to unblock its request in the unhappy path.
## Specification
### Message
This modifies the p2p specification by introducing a new [protobuf](https://protobuf.dev/) message type:
```
message AppError {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint32 error_code = 3;
  string error_message = 4;
}
```
1. `chain_id`: Reserves field 1. Senders **must** use the same chain id from the original `AppRequest` this `AppError` message is being sent in response to.
2. `request_id`: Reserves field 2. Senders **must** use the same request id from the original `AppRequest` this `AppError` message is being sent in response to.
3. `error_code`: Reserves field 3. Application defined error code. Implementations *should* use the same error codes for the same conditions to allow clients to error match. Negative error codes are reserved for protocol defined errors. VMs may reserve any error code greater than zero.
4. `error_message`: Reserves field 4. Application defined human-readable error message that *should not* be used for error matching. For error matching, use `error_code`.
### Reserved Errors
The following error codes are currently reserved by the Avalanche protocol:
| Error Code | Description |
| ---------- | --------------- |
| 0 | undefined |
| -1 | network timeout |
### Handling
Clients **must** respond to an inbound `AppRequest` message with either a corresponding `AppResponse` to indicate a successful response, or an `AppError` to indicate an error condition by the requested `deadline` in the original `AppRequest`.
## Backwards Compatibility
This new message type requires a network upgrade to activate, after which either an `AppResponse` or an `AppError` is a required response to an `AppRequest`.
## Reference Implementation
* Message definition: [https://github.com/ava-labs/avalanchego/pull/2111](https://github.com/ava-labs/avalanchego/pull/2111)
* Handling: [https://github.com/ava-labs/avalanchego/pull/2248](https://github.com/ava-labs/avalanchego/pull/2248)
## Security Considerations
Clients should be aware that peers can arbitrarily send `AppError` messages to invoke error handling logic in a VM.
## Open Questions
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-30: Avalanche Warp X Evm
URL: /docs/acps/30-avalanche-warp-x-evm
Details for Avalanche Community Proposal 30: Avalanche Warp X Evm
| ACP | 30 |
| :------------ | :------------------------------------------------------------------------------- |
| **Title** | Integrate Avalanche Warp Messaging into the EVM |
| **Author(s)** | Aaron Buchwald ([aaron.buchwald56@gmail.com](mailto:aaron.buchwald56@gmail.com)) |
| **Status** | Activated |
| **Track** | Standards |
## Abstract
Integrate Avalanche Warp Messaging into the C-Chain and Subnet-EVM in order to bring Cross-Subnet Communication to the EVM on Avalanche.
## Motivation
Avalanche Subnets enable the creation of independent blockchains within the Avalanche Network. Each Avalanche Subnet registers its validator set on the Avalanche P-Chain, which serves as an effective "membership chain" for the entire Avalanche Ecosystem.
By providing read access to the validator set of every Subnet on the Avalanche Network, any Subnet can look up the validator set of any other Subnet within the Avalanche Ecosystem to verify an Avalanche Warp Message, which replaces the need for point-to-point exchange of validator set info between Subnets. This enables a lightweight protocol that allows seamless, on-demand communication between Subnets.
For more information on the Avalanche Warp Messaging message and payload formats see here:
* [AWM Message Format](https://github.com/ava-labs/avalanchego/tree/v1.10.15/vms/platformvm/warp/README.md)
* [Payload Format](https://github.com/ava-labs/avalanchego/tree/v1.10.15/vms/platformvm/warp/payload/README.md)
This ACP proposes to activate Avalanche Warp Messaging on the C-Chain and offer compatible support in Subnet-EVM to provide the first standard implementation of AWM in production on the Avalanche Network.
## Specification
The specification will be broken down into the Solidity interface of the Warp Precompile, a Golang example implementation, the predicate verification, and the proposed gas costs for the Warp Precompile.
The Warp Precompile address is `0x0200000000000000000000000000000000000005`.
### Precompile Solidity Interface
```solidity
// (c) 2022-2023, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
struct WarpMessage {
    bytes32 sourceChainID;
    address originSenderAddress;
    bytes payload;
}

struct WarpBlockHash {
    bytes32 sourceChainID;
    bytes32 blockHash;
}

interface IWarpMessenger {
    event SendWarpMessage(address indexed sender, bytes32 indexed messageID, bytes message);

    // sendWarpMessage emits a request for the subnet to send a warp message from [msg.sender]
    // with the specified parameters.
    // This emits a SendWarpMessage log from the precompile. When the corresponding block is accepted
    // the Accept hook of the Warp precompile is invoked with all accepted logs emitted by the Warp
    // precompile.
    // Each validator then adds the UnsignedWarpMessage encoded in the log to the set of messages
    // it is willing to sign for an off-chain relayer to aggregate Warp signatures.
    function sendWarpMessage(bytes calldata payload) external returns (bytes32 messageID);

    // getVerifiedWarpMessage parses the pre-verified warp message in the
    // predicate storage slots as a WarpMessage and returns it to the caller.
    // If the message exists and passes verification, returns the verified message
    // and true.
    // Otherwise, returns false and the empty value for the message.
    function getVerifiedWarpMessage(uint32 index) external view returns (WarpMessage calldata message, bool valid);

    // getVerifiedWarpBlockHash parses the pre-verified WarpBlockHash message in the
    // predicate storage slots as a WarpBlockHash message and returns it to the caller.
    // If the message exists and passes verification, returns the verified message
    // and true.
    // Otherwise, returns false and the empty value for the message.
    function getVerifiedWarpBlockHash(
        uint32 index
    ) external view returns (WarpBlockHash calldata warpBlockHash, bool valid);

    // getBlockchainID returns the snow.Context BlockchainID of this chain.
    // This blockchainID is the hash of the transaction that created this blockchain on the P-Chain
    // and is not related to the Ethereum ChainID.
    function getBlockchainID() external view returns (bytes32 blockchainID);
}
```
### Warp Predicates and Pre-Verification
Signed Avalanche Warp Messages are encoded in the [EIP-2930 Access List](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2930.md) of a transaction, so that they can be pre-verified before executing the transactions in the block.
The access list can specify any number of access tuples: a pair of an address and an array of storage slots in EIP-2930. Warp Predicate verification borrows this functionality to encode signed warp messages according to the serialization format defined [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/predicate/Predicate.md).
Each Warp-specific access tuple included in the access list specifies the Warp Precompile address as its address. The first tuple that specifies the Warp Precompile address is considered to be at index 0, and each subsequent access tuple that specifies the Warp Precompile address increments the Warp Message index by 1. Access tuples that specify any other address are not counted when calculating the index of a given Warp Message.
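The index assignment described above can be sketched as follows. The `accessTuple` type here is illustrative, not the Subnet-EVM or go-ethereum type:

```go
package main

import "fmt"

// Illustrative sketch (not the Subnet-EVM implementation): warp message
// indices are assigned by walking the access list in order and counting
// only the tuples addressed to the Warp Precompile.
const warpPrecompileAddr = "0x0200000000000000000000000000000000000005"

type accessTuple struct {
	Address     string
	StorageKeys []string // predicate bytes are packed into these slots
}

// warpMessageIndices returns, for each access-list position, the warp
// message index it carries, or -1 if the tuple targets another address.
func warpMessageIndices(list []accessTuple) []int {
	out := make([]int, len(list))
	next := 0
	for i, t := range list {
		if t.Address == warpPrecompileAddr {
			out[i] = next // first warp tuple is index 0, then 1, 2, ...
			next++
		} else {
			out[i] = -1 // non-warp tuples do not advance the index
		}
	}
	return out
}

func main() {
	list := []accessTuple{
		{Address: warpPrecompileAddr},
		{Address: "0x00000000000000000000000000000000000000aa"},
		{Address: warpPrecompileAddr},
	}
	fmt.Println(warpMessageIndices(list)) // the two warp tuples get indices 0 and 1
}
```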
Avalanche Warp Messages are pre-verified (prior to block execution), and pre-verification outputs a bitset for each transaction in which a 1 at a given index indicates that the Avalanche Warp Message at that index failed verification. Throughout EVM execution, the Warp Precompile checks the resulting bitset to determine whether a pre-verified message is considered valid. This has the additional benefit of encoding the Warp pre-verification results in the block, so that verifying a historical block can use the encoded results instead of needing to access potentially old P-Chain state. The result bitset is encoded in the block according to the predicate result specification [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/predicate/Results.md).
Each Warp Message in the access list is charged gas to pay for verifying the Warp Message (gas costs are covered below) and is verified with the following steps (see [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/config.go#L218) for reference implementation):
1. Unpack the predicate bytes
2. Parse the signed Avalanche Warp Message
3. Verify the signature according to the AWM spec in AvalancheGo [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/config.go#L218) (the quorum numerator/denominator for the C-Chain is 67/100 and is configurable in Subnet-EVM)
### Precompile Implementation
All types, events, and function arguments/outputs are encoded using the ABI package according to the official [Solidity ABI Specification](https://docs.soliditylang.org/en/latest/abi-spec.html).
When the precompile is invoked with a given `calldata` argument, the first four bytes (`calldata[0:4]`) are read as the [function selector](https://docs.soliditylang.org/en/latest/abi-spec.html#function-selector). If the function selector matches that of one of the functions defined by the Solidity interface, the contract invokes the corresponding execution function with the remaining calldata, i.e. `calldata[4:]`.
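The dispatch step can be sketched as follows. The selector value and function table here are placeholders, not the real Warp Precompile wiring:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// Sketch of the dispatch step described above: the first four bytes of
// calldata select the execution function, which receives calldata[4:].
func dispatch(calldata []byte, funcs map[string]func([]byte) ([]byte, error)) ([]byte, error) {
	if len(calldata) < 4 {
		return nil, fmt.Errorf("calldata too short for a function selector")
	}
	selector := hex.EncodeToString(calldata[:4])
	fn, ok := funcs[selector]
	if !ok {
		return nil, fmt.Errorf("unknown selector %s", selector)
	}
	return fn(calldata[4:]) // remaining calldata is the ABI-encoded arguments
}

func main() {
	// "deadbeef" is a placeholder selector, not a real Warp function selector.
	funcs := map[string]func([]byte) ([]byte, error){
		"deadbeef": func(args []byte) ([]byte, error) { return args, nil },
	}
	out, err := dispatch([]byte{0xde, 0xad, 0xbe, 0xef, 0x01}, funcs)
	fmt.Println(out, err)
}
```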
For the full specification of the execution functions defined in the Solidity interface, see the reference implementation here:
* [sendWarpMessage](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L226)
* [getVerifiedWarpMessage](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L187)
* [getVerifiedWarpBlockHash](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L145)
* [getBlockchainID](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L96)
### Gas Costs
The Warp Precompile charges gas during the verification of included Avalanche Warp Messages, which is included in the intrinsic gas cost of the transaction, and during the execution of the precompile.
#### Verification Gas Costs
Pre-verification charges the following costs for each Avalanche Warp Message:
* GasCostPerSignatureVerification: 20000
* GasCostPerWarpMessageBytes: 100
* GasCostPerWarpSigner: 500
These numbers were determined experimentally using the benchmarks available [here](https://github.com/ava-labs/subnet-evm/blob/master/x/warp/predicate_test.go#L687) to target approximately the same mgas/s as existing precompile benchmarks in the EVM, which range between 50 and 200 mgas/s.
In addition to the benchmarks, the following assumptions and goals were taken into account:
* BLS Public Key Aggregation is extremely fast, resulting in charging more for the base cost of a single BLS Multi-Signature Verification than for adding an additional public key
* The cost per byte included in the transaction should be strictly higher for including Avalanche Warp Messages than via transaction calldata, so that the Warp Precompile does not change the worst case maximum block size
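Putting the verification constants above together, the per-message charge might be computed along these lines. This is a sketch with illustrative names; the authoritative charging code lives in Subnet-EVM:

```go
package main

import "fmt"

// Per-message pre-verification charge, built from the constants listed above.
const (
	gasCostPerSignatureVerification = 20_000 // base cost of one BLS multi-signature verification
	gasCostPerWarpMessageBytes      = 100    // per byte of the signed warp message
	gasCostPerWarpSigner            = 500    // per participating signer (public key aggregation)
)

func warpVerificationGas(messageBytes, numSigners uint64) uint64 {
	return gasCostPerSignatureVerification +
		gasCostPerWarpMessageBytes*messageBytes +
		gasCostPerWarpSigner*numSigners
}

func main() {
	// e.g. a 500-byte signed message with 30 participating signers
	fmt.Println(warpVerificationGas(500, 30)) // 20000 + 50000 + 15000 = 85000
}
```

Note how the base signature-verification cost dominates the per-signer cost, reflecting the assumption that adding a public key to an aggregate is far cheaper than the verification itself.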
#### Execution Gas Costs
The execution gas costs were determined by summing the cost of the EVM operations that are performed throughout the execution of the precompile with special consideration for added functionality that does not have an existing corollary within the EVM.
##### sendWarpMessage
`sendWarpMessage` charges a base cost of 41,500 gas + 8 gas / payload byte
This comprises the following components:
* 375 gas / log operation
* 3 topics \* 375 gas / topic
* 20k gas to produce and serve a BLS Signature
* 20k gas to store the Unsigned Warp Message
* 8 gas / payload byte
This charges 20k gas for storing an Unsigned Warp Message although the message is stored in an independent key-value database instead of the active state. This makes it less expensive to store, so 20k gas is a conservative estimate.
Additionally, the cost of serving valid signatures is significantly cheaper than serving state sync and bootstrapping requests, so the cost to validators of serving signatures over time is not considered a significant concern.
`sendWarpMessage` also charges for the log operation it includes commensurate with the gas cost of a standard log operation in the EVM.
A single `SendWarpMessage` log is charged:
* 375 gas base cost
* 375 gas per topic (`eventID`, `sender`, `messageID`)
* 8 gas per payload byte encoded in the `message` field
Topics are indexed fields encoded as 32 byte values to support querying based on given specified topic values.
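The components above sum to the 41,500 gas base cost, which can be checked with a short sketch (constant names are illustrative):

```go
package main

import "fmt"

// Reconstructing the 41,500 gas base cost of sendWarpMessage from the
// components listed above.
const (
	logBaseGas      = 375    // base cost of a single LOG operation
	topicGas        = 375    // per topic: eventID, sender, messageID
	numTopics       = 3      // the three topics of the SendWarpMessage log
	blsSignatureGas = 20_000 // produce and serve a BLS signature
	storeMessageGas = 20_000 // store the unsigned warp message
	payloadByteGas  = 8      // per payload byte
)

func sendWarpMessageGas(payloadBytes uint64) uint64 {
	return logBaseGas + numTopics*topicGas + blsSignatureGas + storeMessageGas +
		payloadByteGas*payloadBytes
}

func main() {
	fmt.Println(sendWarpMessageGas(0)) // base cost with an empty payload: 41500
}
```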
##### getBlockchainID
`getBlockchainID` charges 2 gas to serve an already in-memory 32-byte value, commensurate with existing in-memory operations.
##### getVerifiedWarpBlockHash / getVerifiedWarpMessage
`GetVerifiedWarpMessageBaseCost` charges 2 gas for serving a Warp Message (either payload type). Warp messages are already in memory, so only 2 gas is charged for access.
`GasCostPerWarpMessageBytes` charges 100 gas per byte of the Avalanche Warp Message that is unpacked into a Solidity struct.
## Backwards Compatibility
Existing EVM opcodes and precompiles are not modified by activating Avalanche Warp Messaging in the EVM. This is an additive change to activate a Warp Precompile on the Avalanche C-Chain and can be scheduled for activation in any VM running on Avalanche Subnets that are capable of sending / verifying the specified payload types.
## Reference Implementation
A full reference implementation can be found in Subnet-EVM v0.5.9 [here](https://github.com/ava-labs/subnet-evm/tree/v0.5.9/x/warp).
## Security Considerations
Verifying an Avalanche Warp Message requires reading the source subnet's validator set at the P-Chain height specified in the [Snowman++ Block Extension](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/proposervm/README.md#snowman-block-extension). The Avalanche PlatformVM provides the current state of the Avalanche P-Chain and maintains reverse diff-layers in order to compute Subnets' validator sets at historical points in time.
As a result, verifying a historical Avalanche Warp Message that references an old P-Chain height requires applying diff-layers from the current state back to the referenced P-Chain height. As Subnets and the P-Chain continue to produce and accept new blocks, verifying the Warp Messages in historical blocks becomes increasingly expensive.
To efficiently handle historical blocks containing Avalanche Warp Messages, the EVM uses the result bitset encoded in the block to determine the validity of Avalanche Warp Messages without requiring a historical P-Chain state lookup. This is considered secure because the network already verified the Avalanche Warp Messages when the block was originally verified and accepted.
## Open Questions
*How should validator set lookups in Warp Message verification be effectively charged for gas?*
The verification cost of performing a validator set lookup on the P-Chain is currently excluded from the implementation. The cost of this lookup is variable depending on how old the referenced P-Chain height is from the perspective of each validator.
[Ongoing work](https://github.com/ava-labs/avalanchego/pull/1611) can parallelize P-Chain validator set lookups and message verification to reduce the impact on block verification latency to be negligible and reduce costs to reflect the additional bandwidth of encoding Avalanche Warp Messages in the transaction.
## Acknowledgements
Integrating Avalanche Warp Messaging into the EVM has been a monumental effort. Thanks to all of the contributors for their ideas, feedback, and development.
@stephenbuttolph
@patrick-ogrady
@michaelkaplan13
@minghinmatthewlam
@cam-schultz
@xanderdunn
@darioush
@ceyonur
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-31: Enable Subnet Ownership Transfer
URL: /docs/acps/31-enable-subnet-ownership-transfer
Details for Avalanche Community Proposal 31: Enable Subnet Ownership Transfer
| ACP | 31 |
| :------------ | :--------------------------------------------------------- |
| **Title** | Enable Subnet Ownership Transfer |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) |
| **Status** | Activated |
| **Track** | Standards |
## Abstract
Allow the current owner of a Subnet to transfer ownership to a new owner.
## Motivation
Once a Subnet is created on the P-chain through a [CreateSubnetTx](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/platformvm/txs/create_subnet_tx.go#L14-L19), the `Owner` of the subnet is currently immutable. Subnet operators may want to transition ownership of the Subnet to a new owner for a number of reasons, not least of all being rotating their control key(s) periodically.
## Specification
Implement a new transaction type (`TransferSubnetOwnershipTx`) that:
1. Takes in a `Subnet`
2. Verifies that the `SubnetAuth` is authorized to modify the `Subnet` by checking it against the `Owner` field in the `CreateSubnetTx` that created the `Subnet`
3. Takes in a new `Owner` and assigns it as the new owner of `Subnet`
This transaction type should have the following format (code below is presented in Golang):
```go
type TransferSubnetOwnershipTx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`
	// ID of the subnet this tx is modifying
	Subnet ids.ID `serialize:"true" json:"subnetID"`
	// Proves that the issuer has the right to remove the node from the subnet.
	SubnetAuth verify.Verifiable `serialize:"true" json:"subnetAuthorization"`
	// Who is now authorized to manage this subnet
	Owner fx.Owner `serialize:"true" json:"newOwner"`
}
```
This transaction type should have type ID `0x21` in codec version `0x00`.
This transaction type should have a fee of `0.001 AVAX`, equivalent to adding a subnet validator/delegator.
## Backwards Compatibility
Adding a new transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to reject this transaction prior to activation. This ACP only details the specification of the `TransferSubnetOwnershipTx` type.
## Reference Implementation
An implementation of `TransferSubnetOwnershipTx` was created [here](https://github.com/ava-labs/avalanchego/pull/2178) and subsequently merged into AvalancheGo. Since the "D" Upgrade is not activated, this transaction will be rejected by AvalancheGo.
If modifications are made to the specification of the transaction as part of the ACP process, the code must be updated prior to activation.
## Security Considerations
No security considerations.
## Open Questions
No open questions.
## Acknowledgements
Thank you [@friskyfoxdk](https://github.com/friskyfoxdk) for filing an [issue](https://github.com/ava-labs/avalanchego/issues/1946) requesting this feature. Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on the reference implementation.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-41: Remove Pending Stakers
URL: /docs/acps/41-remove-pending-stakers
Details for Avalanche Community Proposal 41: Remove Pending Stakers
| ACP | 41 |
| :------------ | :--------------------------------------------------------- |
| **Title** | Remove Pending Stakers |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) |
| **Status** | Activated |
| **Track** | Standards |
## Abstract
Remove user-specified `StartTime` for stakers. Start the staking period for a staker as soon as their staking transaction is accepted. This greatly reduces the computational load on the P-chain, increasing the efficiency of all Avalanche Network validators.
## Motivation
Stakers currently set a `StartTime` for their staking period. This means that Avalanche Network Clients, like AvalancheGo, need to maintain a pending set of all stakers that have not yet started. This places a nontrivial amount of work on the P-chain:
* When a new delegator transaction is verified, the pending set needs to be checked to ensure that the validator they are delegating to will not exceed `MaxValidatorStake` while they are active
* When a new staker transaction is accepted, it gets added to the pending set
* When time is advanced on the P-chain, any stakers in the pending set whose `StartTime <= CurrentTime` need to be moved to the current set
By immediately starting every staker on acceptance, the validators do not have to do the above work when validating the P-chain. `MaxValidatorStake` will become an `O(1)` operation as only the current stake of the validator needs to be checked. The pending set can be fully removed.
## Specification
1. When adding a new staker, the current on-chain time should be used for the staker's start time.
2. When determining when to remove the staker from the staker set, the `EndTime` specified in the transaction should continue to be used. Staking transactions should now be rejected if they do not satisfy `MinStakeDuration <= EndTime - CurrentTime <= MaxStakeDuration`. `StartTime` will no longer be validated.
## Backwards Compatibility
Modifying the state transition of a transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to not alter the execution behavior prior to activation. This ACP only details the new state transition.
Current wallet implementations will continue to work as-is post-activation of this ACP since no transaction formats are modified or added. Wallet implementations may run into issues with their txs being rejected as a result of this ACP if `EndTime >= CurrentChainTime + MaxStakeDuration`. `CurrentChainTime` is guaranteed to be >= the latest block timestamp on the P-chain.
## Reference Implementation
A reference implementation has not been created for this ACP since it deals with state management. Each ANC will need to adjust their execution step to follow the Specification detailed above. For AvalancheGo, this work is tracked in this PR: [https://github.com/ava-labs/avalanchego/pull/2175](https://github.com/ava-labs/avalanchego/pull/2175)
If modifications are made to the specification of the new execution behavior as part of the ACP process, the code must be updated prior to activation.
## Security Considerations
No security considerations.
## Open Questions
*How will stakers stake for `MaxStakeDuration` if they cannot determine their `StartTime`?*
As mentioned above, the beginning of your staking period is the block acceptance timestamp. Unless you can accurately predict the block timestamp, you will *not* be able to fully stake for `MaxStakeDuration`. This is an explicit trade-off to guarantee that stakers will receive their original stake + any staking rewards at `EndTime`.
Delegators can maximize their staking period by setting the same `EndTime` as the Validator they are delegating to.
## Acknowledgements
Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on these ideas.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-62: Disable Addvalidatortx And Adddelegatortx
URL: /docs/acps/62-disable-addvalidatortx-and-adddelegatortx
Details for Avalanche Community Proposal 62: Disable Addvalidatortx And Adddelegatortx
| ACP | 62 |
| :------------ | :------------------------------------------------------------------------------------------------------------------------- |
| **Title** | Disable `AddValidatorTx` and `AddDelegatorTx` |
| **Author(s)** | Jacob Everly ([@JacobEv3rly](https://twitter.com/JacobEv3rly)), Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) |
| **Status** | Activated |
| **Track** | Standards |
## Abstract
Disable `AddValidatorTx` and `AddDelegatorTx` to push all new stakers to use `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx`. `AddPermissionlessValidatorTx` requires validators to register a BLS key. Wide adoption of registered BLS keys accelerates the timeline for future P-Chain upgrades. Additionally, this reduces the number of ways to participate in Primary Network validation from two to one.
## Motivation
`AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx` were activated on the Avalanche Network in October 2022 with Banff (v1.9.0). This unlocked the ability for Subnet creators to activate Proof-of-Stake validation using their own token on their own Subnet. See more details about Banff [here](https://medium.com/avalancheavax/banff-elastic-subnets-44042f41e34c).
These new transaction types can also be used to register a Primary Network validator, leaving two redundant transactions: `AddValidatorTx` and `AddDelegatorTx`.
[`AddPermissionlessDelegatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_permissionless_delegator_tx.go#L25-L37) contains the same fields as [`AddDelegatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_delegator_tx.go#L29-L39) with an additional `Subnet` field.
[`AddPermissionlessValidatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_permissionless_validator_tx.go#L35-L59) contains the same fields as [`AddValidatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_validator_tx.go#L29-L42) with additional `Subnet` and `Signer` fields. `RewardsOwner` was also split into `ValidationRewardsOwner` and `DelegationRewardsOwner` letting validators divert rewards they receive from delegators into a separate rewards owner.
By disabling support of `AddValidatorTx`, all new validators on the Primary Network must use `AddPermissionlessValidatorTx` and register a BLS key with their NodeID. As more validators attach BLS keys to their nodes, future upgrades using these BLS keys can be activated through the ACP process. BLS keys can be used to efficiently sign a common message via [Public Key Aggregation](https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html). Applications of this include, but are not limited to:
* **Arbitrary Subnet Rewards**: The P-Chain currently restricts Elastic Subnets to follow the reward curve defined in a `TransformSubnetTx`. With sufficient BLS key adoption, Elastic Subnets can define their own reward curve and reward conditions. The P-Chain can be modified to accept a message, signed with a BLS Multi-Signature, indicating whether a Subnet validator should be rewarded and with how many tokens.
* **Subnet Attestations**: Elastic Subnets can attest to the state of their Subnet with a BLS Multi-Signature. This can enable clients to fetch the current state of the Subnet without syncing the entire Subnet. `StateSync` enables clients to download chain state from peers up to a recent block near tip. However, it is up to the client to query these peers and resolve any potential conflicts in the responses. With Subnet Attestations, clients can query an API node to prove information about a Subnet without querying the Subnet's validators. This can especially be useful for [Subnet-Only Validators](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/62-disable-addvalidatortx-and-adddelegatortx/13-subnet-only-validators.md) to prove information about the C-Chain.
To accelerate future BLS-powered advancements in the Avalanche Network, this ACP aims to disable `AddValidatorTx` and `AddDelegatorTx` in Durango.
## Specification
`AddValidatorTx` and `AddDelegatorTx` should be marked as dropped when added to the mempool after activation. Any blocks including these transactions should be considered invalid.
## Backwards Compatibility
Disabling a transaction type is an execution change and requires a mandatory upgrade for activation. Implementers must take care to not alter the execution behavior prior to activation.
After this ACP is activated, any new issuance of `AddValidatorTx` or `AddDelegatorTx` will be considered invalid and dropped by the network. Any consumers of these transactions must transition to using `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx` to participate in Primary Network validation. The [Avalanche Ledger App](https://github.com/LedgerHQ/app-avalanche) supports both of these transaction types.
Note that `AddSubnetValidatorTx` and `RemoveSubnetValidatorTx` are unchanged by this ACP.
## Reference Implementation
An implementation disabling `AddValidatorTx` and `AddDelegatorTx` was created [here](https://github.com/ava-labs/avalanchego/pull/2662). Until activation, these transactions will continue to be accepted by AvalancheGo.
If modifications are made to the specification as part of the ACP process, the code must be updated prior to activation.
## Security Considerations
No security considerations.
## Open Questions
## Acknowledgements
Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-75: Acceptance Proofs
URL: /docs/acps/75-acceptance-proofs
Details for Avalanche Community Proposal 75: Acceptance Proofs
| ACP | 75 |
| :------------ | :----------------------------------------------------------------------------------- |
| **Title** | Acceptance Proofs |
| **Author(s)** | Joshua Kim |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/82)) |
| **Track** | Standards |
## Abstract
Introduces support for a proof of a block’s acceptance in consensus.
## Motivation
Subnets are able to prove arbitrary events using warp messaging, but native support at the protocol layer for proving that a block was accepted enables additional use cases. Acceptance proofs are introduced to prove that a block has been accepted by a subnet. One example use case for acceptance proofs is to provide stronger fault isolation guarantees from the primary network to subnets.
Subnets use the [ProposerVM](https://github.com/ava-labs/avalanchego/blob/416fbdf1f783c40f21e7009a9f06d192e69ba9b5/vms/proposervm/README.md) to implement soft leader election for block proposal. The ProposerVM determines the block producer schedule from a randomly shuffled validator set at a specified P-Chain block height. Validators are therefore required to have the P-Chain block referenced in a block's header to verify the block producer against the expected block producer schedule. If a block's header specifies a P-Chain height that has not been accepted yet, the block is treated as invalid. If a block referencing an unknown P-Chain height was produced virtuously, it is expected that the validator will eventually discover the block as its P-Chain height advances and accept the block.
If many validators disagree about the current tip of the P-Chain, it can lead to a liveness concern on the subnet where block production entirely stalls. In practice, this almost never occurs because nodes produce blocks with a lagging P-Chain height, making it likely that most nodes have already accepted the referenced block. This, however, relies on the assumption that validators are constantly making progress in P-Chain consensus. It leaves an open concern: a stalled P-Chain on a node prevents it from verifying any blocks, so a subnet can become unable to produce blocks if many validators stall at different P-Chain heights.
***
Figure 1: A Validator that has synced P-Chain blocks `A` and `B` fails verification of a block proposed at block `C`.
***
We introduce "acceptance proofs" so that a peer can verify any block accepted by consensus. In the aforementioned use case, if a P-Chain block is unknown to a node, it can request the block and its proof at the provided height from a peer. If the block's proof is valid, the block can be executed to advance the local P-Chain and verify the proposed subnet block. Nodes can request blocks from any peer without requiring consensus locally or communication with a validator. This has the added benefit of reducing the number of required connections and the p2p message load served by P-Chain validators.
***
Figure 2: A Validator is verifying a subnet’s block `Z` which references an unknown P-Chain block `C` in its block header
Figure 3: A Validator requests the blocks and proofs for `B` and `C` from a peer
Figure 4: The Validator accepts the P-Chain blocks and is now able to verify `Z`
***
## Specification
Note: The following is pseudocode.
### P2P
#### Aggregation
```diff
+ message GetAcceptanceSignatureRequest {
+   bytes chain_id = 1;
+   uint32 request_id = 2;
+   bytes block_id = 3;
+ }
```
The `GetAcceptanceSignatureRequest` message is sent to a peer to request their signature for a given block id.
```diff
+ message GetAcceptanceSignatureResponse {
+ bytes chain_id = 1;
+ uint32 request_id = 2;
+ bytes bls_signature = 3;
+ }
```
`GetAcceptanceSignatureResponse` is sent to a peer as a response for `GetAcceptanceSignatureRequest`. `bls_signature` is the peer’s signature using their registered primary network BLS staking key over the requested `block_id`. An empty `bls_signature` field indicates that the block was not accepted yet.
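As an illustrative Go sketch (not the AvalancheGo implementation), the responder logic reduces to: sign the requested `block_id` with the node's BLS staking key if the block is accepted, and return an empty signature otherwise. The `sign` callback and the map-based accepted-block index are hypothetical stand-ins:

```go
package main

import "fmt"

// handleGetAcceptanceSignature sketches the responder behavior described
// above: if the requested block is accepted, return the node's BLS
// signature over block_id; otherwise return an empty signature to signal
// that the block is not accepted yet. `sign` stands in for signing with
// the node's registered primary network BLS staking key.
func handleGetAcceptanceSignature(accepted map[string]bool, sign func([]byte) []byte, blockID []byte) []byte {
	if !accepted[string(blockID)] {
		return nil // empty bls_signature: block not accepted yet
	}
	return sign(blockID)
}

func main() {
	accepted := map[string]bool{"blkA": true}
	// A fake signer for demonstration; a real node would use BLS.
	fakeSign := func(msg []byte) []byte { return append([]byte("sig:"), msg...) }
	fmt.Println(string(handleGetAcceptanceSignature(accepted, fakeSign, []byte("blkA")))) // signed
	fmt.Println(len(handleGetAcceptanceSignature(accepted, fakeSign, []byte("blkB"))))    // 0: empty signature
}
```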
## Security Considerations
Nodes that bootstrap using state sync may not have the entire history of the P-Chain and therefore cannot provide the full ancestry for a block referenced in a block that they propose. This ancestry is needed to unblock a node that is attempting to fast-forward its P-Chain, since the requester requires every block between its current accepted tip and the block it is attempting to forward to. It is assumed that nodes retain some minimum amount of recent state, so that the requester can eventually be unblocked by retrying; only one node with the requested ancestry is required to unblock the requester.
An alternative is to make a churn assumption and validate the proposed block's proof with a stale validator set to avoid complexity, but this introduces more security concerns.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-77: Reinventing Subnets
URL: /docs/acps/77-reinventing-subnets
Details for Avalanche Community Proposal 77: Reinventing Subnets
| ACP | 77 |
| :------------ | :-------------------------------------------------------------------------------------------------------- |
| **Title** | Reinventing Subnets |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/78)) |
| **Track** | Standards |
| **Replaces** | [ACP-13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md) |
## Abstract
Overhaul Subnet creation and management to unlock increased flexibility for Subnet creators by:
* Separating Subnet validators from Primary Network validators (Primary Network Partial Sync, Removal of 2000 \$AVAX requirement)
* Moving ownership of Subnet validator set management from P-Chain to Subnets (ERC-20/ERC-721/Arbitrary Staking, Staking Reward Management)
* Introducing a continuous P-Chain fee mechanism for Subnet validators (Continuous Subnet Staking)
This ACP supersedes [ACP-13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md) and borrows some of its language.
## Motivation
Each node operator must stake at least 2000 $AVAX ($70k at time of writing) to first become a Primary Network validator before they qualify to become a Subnet validator. Most Subnets aim to launch with at least 8 Subnet validators, which requires staking 16000 $AVAX ($560k at time of writing). All Subnet validators, to satisfy their role as Primary Network validators, must also [allocate 8 AWS vCPU, 16 GB RAM, and 1 TB storage](https://github.com/ava-labs/avalanchego/blob/master/README.md#installation) to sync the entire Primary Network (X-Chain, P-Chain, and C-Chain) and participate in its consensus, in addition to whatever resources are required for each Subnet they are validating.
Regulated entities that are prohibited from validating permissionless, smart contract-enabled blockchains (like the C-Chain) cannot launch a Subnet because they cannot opt-out of Primary Network Validation. This deployment blocker prevents a large cohort of Real World Asset (RWA) issuers from bringing unique, valuable tokens to the Avalanche Ecosystem (that could move between C-Chain \<-> Subnets using Avalanche Warp Messaging/Teleporter).
A widely validated Subnet that is not properly metered could destabilize the Primary Network if usage spikes unexpectedly. Underprovisioned Primary Network validators running such a Subnet may exit with an OOM exception, see degraded disk performance, or find it difficult to allocate CPU time to P/X/C-Chain validation. The inverse also holds for Subnets with the Primary Network (where some undefined behavior could bring a Subnet offline).
Although the fee paid to the Primary Network to operate a Subnet does not go up with the amount of activity on the Subnet, the fixed, upfront cost of setting up a Subnet validator on the Primary Network deters new projects that prefer smaller, even variable, costs until demand is observed. *Unlike L2s that pay some increasing fee (usually denominated in units per transaction byte) to an external chain for data availability and security as activity scales, Subnets provide their own security/data availability and the only cost operators must pay from processing more activity is the hardware cost of supporting additional load.*
Elastic Subnets, introduced in [Banff](https://medium.com/avalancheavax/banff-elastic-subnets-44042f41e34c), enabled Subnet creators to activate Proof-of-Stake validation and uptime-based rewards using their own token. However, this token is required to be an ANT (created on the X-Chain) and locked on the P-Chain. All staking rewards were distributed on the P-Chain with the reward curve being defined in the `TransformSubnetTx` and, once set, was unable to be modified.
With no Elastic Subnets live on Mainnet, it is clear that Permissionless Subnets as they stand today could be more desirable. There are many successful Permissioned Subnets in production but many Subnet creators have raised the above as points of concern. In summary, the Avalanche community could benefit from a more flexible and affordable mechanism to launch Permissionless Subnets.
### A Note on Nomenclature
Avalanche Subnets are subnetworks validated by a subset of the Primary Network validator set. The new network creation flow outlined in this ACP does not require any intersection between the new network's validator set and the Primary Network's validator set. Moreover, the new networks have greater functionality and sovereignty than Subnets. To distinguish between these two kinds of networks, the community has been referring to these new networks as *Avalanche Layer 1s*, or L1s for short.
All networks created through the old network creation flow will continue to be referred to as Avalanche Subnets.
## Specification
At a high-level, L1s can manage their validator sets externally to the P-Chain by setting the blockchain ID and address of their *validator manager*. The P-Chain will consume Warp messages that modify the L1's validator set. To confirm modification of the L1's validator set, the P-Chain will also produce Warp messages. L1 validators are not required to validate the Primary Network, and do not have the same 2000 $AVAX stake requirement that Subnet validators have. To maintain an active L1 validator, a continuous fee denominated in $AVAX is assessed. L1 validators are only required to sync the P-Chain (not X/C-Chain) in order to track validator set changes and support cross-L1 communication.
### P-Chain Warp Message Payloads
To enable management of an L1's validator set externally to the P-Chain, Warp message verification will be added to the [`PlatformVM`](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm). For a Warp message to be considered valid by the P-Chain, at least 67% of the `sourceChainID`'s weight must have participated in the aggregate BLS signature. This is equivalent to the threshold set for the C-Chain. A future ACP may be proposed to support modification of this threshold on a per-L1 basis.
The following Warp message payloads are introduced on the P-Chain:
* `SubnetToL1ConversionMessage`
* `RegisterL1ValidatorMessage`
* `L1ValidatorRegistrationMessage`
* `L1ValidatorWeightMessage`
The method of requesting signatures for these messages is left unspecified. A viable option for supporting this functionality is laid out in [ACP-118](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/118-warp-signature-request/README.md) with the `SignatureRequest` message.
All node IDs contained within the message specifications are represented as variable length arrays such that they can support new node IDs types should the P-Chain add support for them in the future.
The serialization of each of these messages is as follows.
#### `SubnetToL1ConversionMessage`
The P-Chain can produce a `SubnetToL1ConversionMessage` for consumers (i.e. validator managers) to be aware of the initial validator set.
The following serialization is defined as the `ValidatorData`:
| Field | Type | Size |
| -------------: | ---------: | -----------------------: |
| `nodeID` | `[]byte` | 4 + len(`nodeID`) bytes |
| `blsPublicKey` | `[48]byte` | 48 bytes |
| `weight` | `uint64` | 8 bytes |
| | | 60 + len(`nodeID`) bytes |
The following serialization is defined as the `ConversionData`:
| Field | Type | Size |
| ---------------: | ----------------: | ---------------------------------------------------------: |
| `codecID` | `uint16` | 2 bytes |
| `subnetID` | `[32]byte` | 32 bytes |
| `managerChainID` | `[32]byte` | 32 bytes |
| `managerAddress` | `[]byte` | 4 + len(`managerAddress`) bytes |
| `validators` | `[]ValidatorData` | 4 + sum(`validatorLengths`) bytes |
| | | 74 + len(`managerAddress`) + sum(`validatorLengths`) bytes |
* `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
* `sum(validatorLengths)` is the sum of the lengths of `ValidatorData` serializations included in `validators`.
* `subnetID` identifies the Subnet that is being converted to an L1 (described further below).
* `managerChainID` and `managerAddress` identify the validator manager for the newly created L1. This is the (blockchain ID, address) tuple allowed to send Warp messages to modify the L1's validator set.
* `validators` are the initial continuous-fee-paying validators for the given L1.
The `SubnetToL1ConversionMessage` is specified as an `AddressedCall` with `sourceChainID` set to the P-Chain ID, the `sourceAddress` set to an empty byte array, and a payload of:
| Field | Type | Size |
| -------------: | ---------: | -------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `conversionID` | `[32]byte` | 32 bytes |
| | | 38 bytes |
* `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
* `typeID` is the payload type identifier and is `0x00000000` for this message
* `conversionID` is the SHA256 hash of the `ConversionData` from a given `ConvertSubnetToL1Tx`
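To make the serialization concrete, the following is a rough Go sketch of packing `ConversionData` and hashing it into a `conversionID`. It assumes big-endian integers and 4-byte length/count prefixes, as implied by the size columns above; the function and type names are illustrative, not the AvalancheGo API:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

type validatorData struct {
	nodeID       []byte
	blsPublicKey [48]byte
	weight       uint64
}

// packConversionData serializes the ConversionData fields in the order
// given by the tables above: codecID, subnetID, managerChainID,
// length-prefixed managerAddress, then a count-prefixed list of
// ValidatorData entries (each with a length-prefixed nodeID).
func packConversionData(subnetID, managerChainID [32]byte, managerAddress []byte, validators []validatorData) []byte {
	buf := binary.BigEndian.AppendUint16(nil, 0x0000) // codecID
	buf = append(buf, subnetID[:]...)
	buf = append(buf, managerChainID[:]...)
	buf = binary.BigEndian.AppendUint32(buf, uint32(len(managerAddress)))
	buf = append(buf, managerAddress...)
	buf = binary.BigEndian.AppendUint32(buf, uint32(len(validators)))
	for _, v := range validators {
		buf = binary.BigEndian.AppendUint32(buf, uint32(len(v.nodeID)))
		buf = append(buf, v.nodeID...)
		buf = append(buf, v.blsPublicKey[:]...)
		buf = binary.BigEndian.AppendUint64(buf, v.weight)
	}
	return buf
}

func main() {
	var subnetID, chainID [32]byte
	addr := make([]byte, 20) // 20-byte manager address
	vdrs := []validatorData{{nodeID: make([]byte, 20), weight: 100}}
	data := packConversionData(subnetID, chainID, addr, vdrs)
	conversionID := sha256.Sum256(data)
	// Matches the table: 74 + len(managerAddress) + sum(validatorLengths)
	// = 74 + 20 + (60 + 20) = 174 bytes.
	fmt.Println(len(data), len(conversionID))
}
```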
#### `RegisterL1ValidatorMessage`
The P-Chain can consume a `RegisterL1ValidatorMessage` from validator managers through a `RegisterL1ValidatorTx` to register an addition to the L1's validator set.
The following is the serialization of a `PChainOwner`:
| Field | Type | Size |
| ----------: | -----------: | ---------------------------------: |
| `threshold` | `uint32` | 4 bytes |
| `addresses` | `[][20]byte` | 4 + len(`addresses`) \\\* 20 bytes |
| | | 8 + len(`addresses`) \\\* 20 bytes |
* `threshold` is the number of `addresses` that must provide a signature for the `PChainOwner` to authorize an action.
* Validation criteria:
* If `threshold` is `0`, `addresses` must be empty
* `threshold` \<= len(`addresses`)
* Entries of `addresses` must be unique and sorted in ascending order
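A minimal Go sketch of these validation criteria (names are illustrative, not the AvalancheGo API):

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
)

// verifyPChainOwner checks the criteria listed above: a zero threshold
// requires an empty address list, the threshold may not exceed the
// number of addresses, and addresses must be unique and sorted in
// ascending order.
func verifyPChainOwner(threshold uint32, addresses [][20]byte) error {
	if threshold == 0 && len(addresses) != 0 {
		return errors.New("addresses must be empty when threshold is 0")
	}
	if int(threshold) > len(addresses) {
		return errors.New("threshold exceeds number of addresses")
	}
	for i := 1; i < len(addresses); i++ {
		// Strictly increasing order implies both sortedness and uniqueness.
		if bytes.Compare(addresses[i-1][:], addresses[i][:]) >= 0 {
			return errors.New("addresses must be unique and sorted ascending")
		}
	}
	return nil
}

func main() {
	fmt.Println(verifyPChainOwner(1, [][20]byte{{0x01}, {0x02}}) == nil) // valid
	fmt.Println(verifyPChainOwner(3, [][20]byte{{0x01}}) != nil)        // threshold too high
}
```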
The `RegisterL1ValidatorMessage` is specified as an `AddressedCall` with a payload of:
| Field | Type | Size |
| ----------------------: | ------------: | --------------------------------------------------------------------------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `subnetID` | `[32]byte` | 32 bytes |
| `nodeID` | `[]byte` | 4 + len(`nodeID`) bytes |
| `blsPublicKey` | `[48]byte` | 48 bytes |
| `expiry` | `uint64` | 8 bytes |
| `remainingBalanceOwner` | `PChainOwner` | 8 + len(`addresses`) \\\* 20 bytes |
| `disableOwner` | `PChainOwner` | 8 + len(`addresses`) \\\* 20 bytes |
| `weight` | `uint64` | 8 bytes |
| | | 122 + len(`nodeID`) + (len(`addresses1`) + len(`addresses2`)) \\\* 20 bytes |
* `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
* `typeID` is the payload type identifier and is `0x00000001` for this payload
* `subnetID`, `nodeID`, `weight`, and `blsPublicKey` are for the validator being added
* `expiry` is the time at which this message becomes invalid. As of a P-Chain timestamp `>= expiry`, this Avalanche Warp Message can no longer be used to add the `nodeID` to the validator set of `subnetID`
* `remainingBalanceOwner` is the P-Chain owner to which leftover \$AVAX from the validator's Balance will be issued when this validator is removed from the validator set.
* `disableOwner` is the only P-Chain owner allowed to disable the validator using `DisableL1ValidatorTx`, specified below.
#### `L1ValidatorRegistrationMessage`
The P-Chain can produce an `L1ValidatorRegistrationMessage` for consumers to verify that a validation period has either begun or has been invalidated.
The `L1ValidatorRegistrationMessage` is specified as an `AddressedCall` with `sourceChainID` set to the P-Chain ID, the `sourceAddress` set to an empty byte array, and a payload of:
| Field | Type | Size |
| -------------: | ---------: | -------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `validationID` | `[32]byte` | 32 bytes |
| `registered` | `bool` | 1 byte |
| | | 39 bytes |
* `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
* `typeID` is the payload type identifier and is `0x00000002` for this message
* `validationID` identifies the validator for the message
* `registered` is a boolean representing the status of the `validationID`. If true, the `validationID` corresponds to a validator in the current validator set. If false, the `validationID` does not correspond to a validator in the current validator set, and never will in the future.
#### `L1ValidatorWeightMessage`
The P-Chain can consume an `L1ValidatorWeightMessage` through a `SetL1ValidatorWeightTx` to update the weight of an existing validator. The P-Chain can also produce an `L1ValidatorWeightMessage` for consumers to verify that the validator weight update has been effectuated.
The `L1ValidatorWeightMessage` is specified as an `AddressedCall` with the following payload. When sent from the P-Chain, the `sourceChainID` is set to the P-Chain ID, and the `sourceAddress` is set to an empty byte array.
| Field | Type | Size |
| -------------: | ---------: | -------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `validationID` | `[32]byte` | 32 bytes |
| `nonce` | `uint64` | 8 bytes |
| `weight` | `uint64` | 8 bytes |
| | | 54 bytes |
* `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
* `typeID` is the payload type identifier and is `0x00000003` for this message
* `validationID` identifies the validator for the message
* `nonce` is a strictly increasing number that denotes the latest validator weight update and provides replay protection for this transaction
* `weight` is the new `weight` of the validator
### New P-Chain Transaction Types
Both before and after this ACP, to create a Subnet, a `CreateSubnetTx` must be issued on the P-Chain. This transaction includes an `Owner` field which defines the key that today can be used to authorize any validator set additions (`AddSubnetValidatorTx`) or removals (`RemoveSubnetValidatorTx`).
To be considered a permissionless network, or Avalanche Layer 1:
* This `Owner` key must no longer have the ability to modify the validator set.
* New transaction types must support modification of the validator set via Warp messages.
The following new transaction types are introduced on the P-Chain to support this functionality:
* `ConvertSubnetToL1Tx`
* `RegisterL1ValidatorTx`
* `SetL1ValidatorWeightTx`
* `DisableL1ValidatorTx`
* `IncreaseL1ValidatorBalanceTx`
#### `ConvertSubnetToL1Tx`
To convert a Subnet into an L1, a `ConvertSubnetToL1Tx` must be issued to set the `(chainID, address)` pair that will manage the L1's validator set. The `Owner` key defined in `CreateSubnetTx` must provide a signature to authorize this conversion.
The `ConvertSubnetToL1Tx` specification is:
```go
type PChainOwner struct {
// The threshold number of `Addresses` that must provide a signature in order for
// the `PChainOwner` to be considered valid.
Threshold uint32 `json:"threshold"`
// The 20-byte addresses that are allowed to sign to authenticate a `PChainOwner`.
// Note: It is required for:
// - len(Addresses) == 0 if `Threshold` is 0.
// - len(Addresses) >= `Threshold`
// - The values in Addresses to be sorted in ascending order.
Addresses []ids.ShortID `json:"addresses"`
}
type L1Validator struct {
// NodeID of this validator
NodeID []byte `json:"nodeID"`
// Weight of this validator used when sampling
Weight uint64 `json:"weight"`
// Initial balance for this validator
Balance uint64 `json:"balance"`
// [Signer] is the BLS public key and proof-of-possession for this validator.
// Note: We do not enforce that the BLS key is unique across all validators.
// This means that validators can share a key if they so choose.
// However, a NodeID + L1 does uniquely map to a BLS key
Signer signer.ProofOfPossession `json:"signer"`
// Leftover $AVAX from the [Balance] will be issued to this
// owner once it is removed from the validator set.
RemainingBalanceOwner PChainOwner `json:"remainingBalanceOwner"`
// The only owner allowed to disable this validator on the P-Chain.
DisableOwner PChainOwner `json:"disableOwner"`
}
type ConvertSubnetToL1Tx struct {
// Metadata, inputs and outputs
BaseTx
// ID of the Subnet to transform
// Restrictions:
// - Must not be the Primary Network ID
Subnet ids.ID `json:"subnetID"`
// BlockchainID where the validator manager lives
ChainID ids.ID `json:"chainID"`
// Address of the validator manager
Address []byte `json:"address"`
// Initial continuous-fee-paying validators for the L1
Validators []L1Validator `json:"validators"`
// Authorizes this conversion
SubnetAuth verify.Verifiable `json:"subnetAuthorization"`
}
```
After this transaction is accepted, `CreateChainTx` and `AddSubnetValidatorTx` are disabled on the Subnet. The only action that the `Owner` key is able to take is removing Subnet validators with `RemoveSubnetValidatorTx` that had been added using `AddSubnetValidatorTx`. Unless removed by the `Owner` key, any Subnet validators added previously with an `AddSubnetValidatorTx` will continue to validate the Subnet until their [`End`](https://github.com/ava-labs/avalanchego/blob/a1721541754f8ee23502b456af86fea8c766352a/vms/platformvm/txs/validator.go#L27) time is reached. Once all Subnet validators added with `AddSubnetValidatorTx` are no longer in the validator set, the `Owner` key is powerless. `RegisterL1ValidatorTx` and `SetL1ValidatorWeightTx` must be used to manage the L1's validator set.
The `validationID` for validators added through `ConvertSubnetToL1Tx` is defined as the SHA256 hash of the 36 bytes resulting from concatenating the 32 byte `subnetID` with the 4 byte `validatorIndex` (index in the `Validators` array within the transaction).
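A short Go sketch of this derivation, assuming the 4-byte `validatorIndex` is encoded big-endian (the function name is illustrative):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// conversionValidationID computes the validationID for a validator added
// in a ConvertSubnetToL1Tx: SHA256 over the 32-byte subnetID concatenated
// with the 4-byte index of the validator in the tx's Validators array.
func conversionValidationID(subnetID [32]byte, validatorIndex uint32) [32]byte {
	preimage := make([]byte, 0, 36)
	preimage = append(preimage, subnetID[:]...)
	preimage = binary.BigEndian.AppendUint32(preimage, validatorIndex)
	return sha256.Sum256(preimage)
}

func main() {
	var subnetID [32]byte
	a := conversionValidationID(subnetID, 0)
	b := conversionValidationID(subnetID, 1)
	fmt.Println(a != b) // distinct indices yield distinct validationIDs
}
```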
Once this transaction is accepted, the P-Chain must be willing to sign a `SubnetToL1ConversionMessage` with a `conversionID` corresponding to the `ConversionData` populated with the values from this transaction.
#### `RegisterL1ValidatorTx`
After a `ConvertSubnetToL1Tx` has been accepted, new validators can only be added by using a `RegisterL1ValidatorTx`. The specification of this transaction is:
```go
type RegisterL1ValidatorTx struct {
// Metadata, inputs and outputs
BaseTx
// Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee.
Balance uint64 `json:"balance"`
// [Signer] is a BLS signature proving ownership of the BLS public key specified
// below in `Message` for this validator.
// Note: We do not enforce that the BLS key is unique across all validators.
// This means that validators can share a key if they so choose.
// However, a NodeID + L1 does uniquely map to a BLS key
Signer [96]byte `json:"signer"`
// A RegisterL1ValidatorMessage payload
Message warp.Message `json:"message"`
}
```
The `validationID` of validators added via `RegisterL1ValidatorTx` is defined as the SHA256 hash of the `Payload` of the `AddressedCall` in `Message`.
When a `RegisterL1ValidatorTx` is accepted on the P-Chain, the validator is added to the L1's validator set. A `minNonce` field corresponding to the `validationID` will be stored on addition to the validator set (initially set to `0`). This field will be used when validating the `SetL1ValidatorWeightTx` defined below.
This `validationID` will be used for replay protection. Used `validationID`s will be stored on the P-Chain. If a `RegisterL1ValidatorTx`'s `validationID` has already been used, the transaction will be considered invalid. To prevent storing an unbounded number of `validationID`s, the `expiry` of the `RegisterL1ValidatorMessage` is required to be no more than 24 hours after the time the transaction is issued on the P-Chain. Any `validationID`s corresponding to an expired timestamp can be flushed from the P-Chain's state.
L1s are responsible for defining the procedure on how to retrieve the above information from prospective validators.
An EVM-compatible L1 may choose to implement this step like so:
* Use the number of tokens the user has staked into a smart contract on the L1 to determine the weight of their validator
* Require the user to submit an on-chain transaction with their validator information
* Generate the Warp message
For a `RegisterL1ValidatorTx` to be valid, `Signer` must be a valid proof-of-possession of the `blsPublicKey` defined in the `RegisterL1ValidatorMessage` contained in the transaction.
After a `RegisterL1ValidatorTx` is accepted, the P-Chain must be willing to sign an `L1ValidatorRegistrationMessage` for the given `validationID` with `registered` set to `true`. This remains the case until the time at which the validator is removed from the validator set using a `SetL1ValidatorWeightTx`, as described below.
When it is known that a given `validationID` *is not and never will be* registered, the P-Chain must be willing to sign an `L1ValidatorRegistrationMessage` for the `validationID` with `registered` set to `false`. This could be the case if the `expiry` time of the message has passed prior to the message being delivered in a `RegisterL1ValidatorTx`, or if the validator was successfully registered and then later removed. This enables the P-Chain to prove to validator managers that a validator has been removed or never added. The P-Chain must refuse to sign any `L1ValidatorRegistrationMessage` where the `validationID` does not correspond to an active validator and the `expiry` is in the future.
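The signing rule above can be condensed into a small predicate; this Go sketch is a simplification with illustrative names:

```go
package main

import "fmt"

// canSign sketches the P-Chain's signing rule for
// L1ValidatorRegistrationMessage described above: `registered = true`
// may be signed while the validationID is in the current validator set;
// `registered = false` may only be signed once the ID can never become
// registered (its expiry passed without registration, or the validator
// was registered and later removed).
func canSign(registered, isActive, expiryPassed, wasRemoved bool) bool {
	if registered {
		return isActive
	}
	return !isActive && (expiryPassed || wasRemoved)
}

func main() {
	fmt.Println(canSign(true, true, false, false))   // current validator: sign registered=true
	fmt.Println(canSign(false, false, false, false)) // expiry still in future: refuse
	fmt.Println(canSign(false, false, true, false))  // expired, never registered: sign registered=false
}
```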
#### `SetL1ValidatorWeightTx`
`SetL1ValidatorWeightTx` is used to modify the voting weight of a validator. The specification of this transaction is:
```go
type SetL1ValidatorWeightTx struct {
// Metadata, inputs and outputs
BaseTx
// An L1ValidatorWeightMessage payload
Message warp.Message `json:"message"`
}
```
Applications of this transaction could include:
* Increase the voting weight of a validator if a delegation is made on the L1
* Increase the voting weight of a validator if the stake amount is increased (by staking rewards for example)
* Decrease the voting weight of a misbehaving validator
* Remove an inactive validator
The validation criteria for `L1ValidatorWeightMessage` are:
* `nonce >= minNonce`. Note that `nonce` is not required to be incremented by `1` with each successive validator weight update.
* When `minNonce == MaxUint64`, `nonce` must be `MaxUint64` and `weight` must be `0`. This prevents L1s from being unable to remove `nodeID` in a subsequent transaction.
* If `weight == 0`, the validator being removed must not be the last one in the set. If all validators are removed, there are no valid Warp messages that can be produced to register new validators through `RegisterL1ValidatorMessage`. With no validators, block production will halt and the L1 is unrecoverable. This validation criterion serves as a guardrail against this situation. A future ACP can remove this guardrail as users become more familiar with the new L1 mechanics and tooling matures to fork an L1.
When `weight != 0`, the weight of the validator is updated to `weight` and `minNonce` is updated to `nonce + 1`.
When `weight == 0`, the validator is removed from the validator set. All state related to the validator, including the `minNonce` and `validationID`, are reaped from the P-Chain state. Tracking these post-removal is not required since `validationID` can never be re-initialized due to the replay protection provided by `expiry` in `RegisterL1ValidatorTx`. Any unspent \$AVAX in the validator's `Balance` will be issued in a single UTXO to the `RemainingBalanceOwner` for this validator. Recall that `RemainingBalanceOwner` is specified when the validator is first added to the L1's validator set (in either `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`).
Note: There is no explicit `EndTime` for L1 validators added in a `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`. The only time when L1 validators are removed from the L1's validator set is through this transaction when `weight == 0`.
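Putting the criteria together, a simplified Go sketch of applying an `L1ValidatorWeightMessage` to an in-memory validator set (illustrative only; Warp verification and the Balance refund to `RemainingBalanceOwner` are elided):

```go
package main

import (
	"errors"
	"fmt"
	"math"
)

type l1Validator struct {
	weight   uint64
	minNonce uint64
}

// applyWeightMessage enforces the validation criteria above:
//   - nonce must be >= minNonce (nonces need not increase by exactly 1)
//   - once minNonce == MaxUint64, only a removal (weight == 0) is valid,
//     since minNonce = nonce + 1 would otherwise overflow
//   - the last validator in the set may not be removed
// On an update, weight is replaced and minNonce advances to nonce + 1.
// On a removal, the validator's state is reaped from the set.
func applyWeightMessage(set map[string]*l1Validator, validationID string, nonce, weight uint64) error {
	vdr, ok := set[validationID]
	if !ok {
		return errors.New("unknown validationID")
	}
	if nonce < vdr.minNonce {
		return errors.New("stale nonce")
	}
	if vdr.minNonce == math.MaxUint64 && weight != 0 {
		return errors.New("only removal is valid once minNonce is MaxUint64")
	}
	if weight == 0 {
		if len(set) == 1 {
			return errors.New("cannot remove the last validator")
		}
		delete(set, validationID)
		return nil
	}
	vdr.weight = weight
	vdr.minNonce = nonce + 1
	return nil
}

func main() {
	set := map[string]*l1Validator{"a": {weight: 100}, "b": {weight: 50}}
	fmt.Println(applyWeightMessage(set, "a", 0, 200)) // valid update: minNonce becomes 1
	fmt.Println(applyWeightMessage(set, "a", 0, 300)) // rejected: stale nonce
	fmt.Println(applyWeightMessage(set, "a", 5, 0))   // valid removal
}
```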
#### `DisableL1ValidatorTx`
L1 validators can use `DisableL1ValidatorTx` to mark their validator as inactive. The specification of this transaction is:
```go
type DisableL1ValidatorTx struct {
// Metadata, inputs and outputs
BaseTx
// ID corresponding to the validator
ValidationID ids.ID `json:"validationID"`
// Authorizes this validator to be disabled
DisableAuth verify.Verifiable `json:"disableAuthorization"`
}
```
The `DisableOwner` specified for this validator must sign the transaction. Any unspent \$AVAX in the validator's `Balance` will be issued in a single UTXO to the `RemainingBalanceOwner` for this validator. Recall that both `DisableOwner` and `RemainingBalanceOwner` are specified when the validator is first added to the L1's validator set (in either `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`).
For full removal from an L1's validator set, a `SetL1ValidatorWeightTx` must be issued with weight `0`, which requires a Warp message from the L1's validator manager. However, the ability to claim a validator's unspent `Balance` without the validator manager's authorization is critical for failed L1s.
Note that this does not modify an L1's total staking weight. This transaction marks the validator as inactive, but does not remove it from the L1's validator set. Inactive validators can re-activate at any time by increasing their balance with an `IncreaseL1ValidatorBalanceTx`.
L1 creators should be aware that there is no notion of `MinStakeDuration` that is enforced by the P-Chain. It is expected that L1s who choose to enforce a `MinStakeDuration` will lock the validator's Stake for the L1's desired `MinStakeDuration`.
#### `IncreaseL1ValidatorBalanceTx`
L1 validators are required to maintain a non-zero balance used to pay the continuous fee on the P-Chain in order to be considered active. The `IncreaseL1ValidatorBalanceTx` can be used by anybody to add additional \$AVAX to the `Balance` of a validator. The specification of this transaction is:
```go
type IncreaseL1ValidatorBalanceTx struct {
// Metadata, inputs and outputs
BaseTx
// ID corresponding to the validator
ValidationID ids.ID `json:"validationID"`
// Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee
Balance uint64 `json:"balance"`
}
```
If the validator corresponding to `ValidationID` is currently inactive (`Balance` was exhausted or `DisableL1ValidatorTx` was issued), this transaction will move them back to the active validator set.
Note: The \$AVAX added to `Balance` can be claimed at any time by the validator using `DisableL1ValidatorTx`.
### Bootstrapping L1 Nodes
Bootstrapping a node/validator is the process of securely recreating the latest state of the blockchain locally. At the end of this process, the local state of a node/validator must be in sync with the local state of other virtuous nodes/validators. The node/validator can then verify new incoming transactions and reach consensus with other nodes/validators.
To bootstrap a node/validator, a few critical questions must be answered: How does one discover peers in the network? How does one determine that a discovered peer is honestly participating in the network?
For standalone networks like the Avalanche Primary Network, this is done by connecting to a hardcoded [set](https://github.com/ava-labs/avalanchego/blob/master/genesis/bootstrappers.json) of trusted bootstrappers to then discover new peers. Ethereum calls their set [bootnodes](https://ethereum.org/developers/docs/nodes-and-clients/bootnodes).
Since L1 validators are not required to be Primary Network validators, a list of validator IPs to connect to (the functional bootstrappers of the L1) cannot be provided by simply connecting to the Primary Network validators. However, the Primary Network can enable nodes tracking an L1 to seamlessly connect to the validators by tracking and gossiping L1 validator IPs. L1s will not need to operate and maintain a set of bootstrappers and can rely on the Primary Network for peer discovery.
### Sidebar: L1 Sovereignty
After this ACP is activated, the P-Chain will no longer support staking of any assets other than $AVAX for the Primary Network. The P-Chain will not support the distribution of staking rewards for L1s. All staking-related operations for L1 validation must be managed by the L1's validator manager. The P-Chain simply requires a continuous fee per validator. If an L1 would like to manage their validator's balances on the P-Chain, it can cover the cost for all L1 validators by posting the $AVAX balance on the P-Chain. L1s can implement any mechanism they want to pay the continuous fee charged by the P-Chain for its participants.
The L1 has full ownership over its validator set, not the P-Chain. There are no restrictions on what requirements an L1 can have for validators to join. Any stake that is required to join the L1's validator set is not locked on the P-Chain. If a validator is removed from the L1's validator set via a `SetL1ValidatorWeightTx` with weight `0`, the stake will continue to be locked outside of the P-Chain. How each L1 handles stake associated with the validator is entirely left up to the L1 and can be treated independently to what happens on the P-Chain.
The relationship between the P-Chain and L1s provides a dynamic where L1s can use the P-Chain as an impartial judge to modify parameters (in addition to its existing role of helping to validate incoming Avalanche Warp Messages). If a validator is misbehaving, the L1 validators can collectively generate a BLS multisig to reduce the voting weight of a misbehaving validator. This operation is fully secured by the Avalanche Primary Network (225M $AVAX or $8.325B at the time of writing).
Follow-up ACPs could extend the P-Chain \<-> L1 relationship to include parametrization of the 67% threshold to enable L1s to choose a different threshold based on their security model (e.g. a simple majority of 51%).
### Continuous Fee Mechanism
Every additional validator on the P-Chain adds persistent load to the Avalanche Network. When a validator transaction is issued on the P-Chain, it is charged for the computational cost of the transaction itself but is not charged for the cost of an active validator over the time they are validating on the network (which may be indefinitely). This is a common problem in blockchains, spawning many state rent proposals in the broader blockchain space to address it. The following fee mechanism takes advantage of the fact that each L1 validator uses the same amount of computation and charges each L1 validator the dynamic base fee for every discrete unit of time it is active.
To charge each L1 validator, the notion of a `Balance` is introduced. The `Balance` of a validator will be continuously charged during the time they are active to cover the cost of storing the associated validator properties (BLS key, weight, nonce) in memory and to track IPs (in addition to other services provided by the Primary Network). This `Balance` is initialized with the `RegisterL1ValidatorTx` that added them to the active validator set. `Balance` can be increased at any time using the `IncreaseL1ValidatorBalanceTx`. When this `Balance` reaches `0`, the validator will be considered "inactive" and will no longer participate in validating the L1. Inactive validators can be moved back to the active validator set at any time using the same `IncreaseL1ValidatorBalanceTx`. Once a validator is considered inactive, the P-Chain will remove these properties from memory and only retain them on disk. All messages from that validator will be considered invalid until it is revived using the `IncreaseL1ValidatorBalanceTx`. L1s can reduce the amount of inactive weight by removing inactive validators with the `SetL1ValidatorWeightTx` (`Weight` = 0).
Since each L1 validator is charged the same amount at each point in time, tracking the fees for the entire validator set is straightforward. The accumulated dynamic base fee for the entire network is tracked in a single uint. This accumulated value should be equal to the fee charged if a validator was active from the time the accumulator was instantiated. The validator set is maintained in a priority queue. A pseudocode implementation of the continuous fee mechanism is provided below.
```python
# Pseudocode
class ValidatorQueue:
    def __init__(self, fee_getter):
        self.acc = 0
        self.queue = PriorityQueue()
        self.fee_getter = fee_getter

    # At each time period, increment the accumulator and
    # pop all validators from the top of the queue that
    # ran out of funds.
    # Note: The amount of work done in a single block
    # should be bounded to prevent a large number of
    # validator operations from happening at the same
    # time.
    def time_elapse(self, t):
        self.acc = self.acc + self.fee_getter(t)
        while True:
            vdr = self.queue.peek()
            if vdr.balance < self.acc:
                self.queue.pop()
                continue
            return

    # Validator was added
    def validator_enter(self, vdr):
        vdr.balance = vdr.balance + self.acc
        self.queue.add(vdr)

    # Validator was removed
    def validator_remove(self, vdrNodeID):
        vdr = find_and_remove(self.queue, vdrNodeID)
        vdr.balance = vdr.balance - self.acc
        vdr.refund()  # Refund [vdr.balance] to [RemainingBalanceOwner]

    # Validator's balance was topped up
    def validator_increase(self, vdrNodeID, balance):
        vdr = find_and_remove(self.queue, vdrNodeID)
        vdr.balance = vdr.balance + balance
        self.queue.add(vdr)
```
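To make the mechanics concrete, here is a toy, runnable version of the queue using Python's `heapq`. All names are illustrative, the fee rate is held constant rather than derived from the dynamic base fee, and removal/top-up are omitted for brevity; this is a sketch of the idea, not the AvalancheGo implementation.

```python
import heapq

# Toy sketch: validators are kept in a min-heap keyed by their balance
# normalized against the accumulator, so entries are comparable no
# matter when the validator joined.
class ToyValidatorQueue:
    def __init__(self, fee_rate):
        self.acc = 0          # accumulated fee per validator since genesis
        self.heap = []        # min-heap of (normalized balance, node ID)
        self.fee_rate = fee_rate
        self.inactive = set()

    def validator_enter(self, node_id, balance):
        # Store balance + acc so older and newer entries share one scale.
        heapq.heappush(self.heap, (balance + self.acc, node_id))

    def time_elapse(self, seconds):
        self.acc += self.fee_rate * seconds
        # Deactivate every validator whose funds have run out.
        while self.heap and self.heap[0][0] < self.acc:
            _, node_id = heapq.heappop(self.heap)
            self.inactive.add(node_id)

queue = ToyValidatorQueue(fee_rate=512)            # 512 nAVAX/s
queue.validator_enter("nodeA", balance=1_000_000)  # funds ~1953s
queue.validator_enter("nodeB", balance=100_000)    # funds ~195s
queue.time_elapse(600)                             # nodeB runs dry
```

After 600 seconds the accumulator reaches 307,200 nAVAX, so `nodeB` (100,000 nAVAX) is deactivated while `nodeA` remains active.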
#### Fee Algorithm
[ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) proposes a dynamic fee mechanism for transactions on the P-Chain. This mechanism is repurposed with minor modifications for the active L1 validator continuous fee.
At activation, the number of excess active L1 validators $x$ is set to `0`.
The fee rate per second for an active L1 validator is:
$M \cdot \exp\left(\frac{x}{K}\right)$
Where:
* $M$ is the minimum price for an active L1 validator
* $\exp\left(x\right)$ is an approximation of $e^x$ following the EIP-4844 specification
```python
# Approximates factor * e ** (numerator / denominator) using Taylor expansion
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator
```
* $K$ is a constant to control the rate of change for the L1 validator price
After every second, $x$ will be updated:
$x = \max(x + (V - T), 0)$
Where:
* $V$ is the number of active L1 validators
* $T$ is the target number of active L1 validators
Whenever $x$ increases by $K$, the price per active L1 validator increases by a factor of `~2.7`. If the price per active L1 validator gets too expensive, some active L1 validators will exit the active validator set, decreasing $x$, dropping the price. The price per active L1 validator constantly adjusts to make sure that, on average, the P-Chain has no more than $T$ active L1 validators.
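The `~2.7` factor can be checked numerically with the `fake_exponential` approximation above. The values `M = 1_000` and `K = 10_000` here are made up purely for illustration, not network parameters.

```python
# Illustrative check: raising x from 0 to K multiplies the fee rate
# M * exp(x / K) by ~e (~2.718), using the integer approximation.
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

M, K = 1_000, 10_000
rate_at_target = fake_exponential(M, 0, K)   # x = 0: the minimum rate M
rate_one_k_over = fake_exponential(M, K, K)  # x = K: ~e times the minimum
```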
#### Block Processing
Before processing the transactions inside a block, all validators that no longer have a sufficient (non-zero) balance are deactivated.
After processing the transactions inside a block, all validators that do not have a sufficient balance for the next second are deactivated.
##### Block Timestamp Validity Change
To ensure that validators are charged accurately, blocks are only considered valid if advancing the chain time would not cause a validator to have a negative balance.
This upholds the expectation that the number of L1 validators remains constant between blocks.
The block building protocol is modified to account for this change by first checking if the wall clock time removes any validator due to a lack of funds. If the wall clock time does not remove any L1 validators, the wall clock time is used to build the block. If it does, the time at which the first validator gets removed is used.
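The timestamp selection described above can be sketched as follows; the function and argument names are illustrative, not part of the specification.

```python
# Sketch of the block-building timestamp rule: use the wall clock
# unless advancing to it would remove a validator for lack of funds,
# in which case build at the first removal time instead.
def block_build_time(wall_clock: int, first_removal_time) -> int:
    # first_removal_time is None when no validator runs out of funds
    # before wall_clock.
    if first_removal_time is not None and first_removal_time < wall_clock:
        return first_removal_time
    return wall_clock
```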
##### Fee Calculation
The total validator fee assessed in $\Delta t$ is:
```python
# Calculate the fee to charge over Δt
def cost_over_time(V: int, T: int, x: int, Δt: int) -> int:
    cost = 0
    for _ in range(Δt):
        x = max(x + V - T, 0)
        cost += fake_exponential(M, x, K)
    return cost
```
#### Parameters
The parameters at activation are:
| Parameter | Definition | Value |
| --------- | ------------------------------------------- | ---------------- |
| $T$ | target number of validators | 10\_000 |
| $C$ | capacity number of validators | 20\_000 |
| $M$ | minimum fee rate | 512 nAVAX/s |
| $K$ | constant to control the rate of fee changes | 1\_246\_488\_515 |
An $M$ of 512 nAVAX/s equates to \~1.33 AVAX/month to run an L1 validator, so long as the total number of continuous-fee-paying L1 validators stays at or below $T$.
$K$ was chosen to set the maximum fee doubling rate to \~24 hours. This is in the extreme case that the network has $C$ validators for prolonged periods of time; if the network has $T$+1 validators for example, the fee rate would double every \~27 years.
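These doubling rates can be sanity-checked directly: since $x$ grows by $V - T$ each second, the fee rate doubles after $K \ln 2 / (V - T)$ seconds. A quick check with the activation parameters:

```python
import math

# Sanity check of K: time for the fee rate to double is
# K * ln(2) / (V - T) seconds.
K = 1_246_488_515
T = 10_000
C = 20_000

hours_at_capacity = K * math.log(2) / (C - T) / 3600             # V = C
years_one_over_target = K * math.log(2) / 1 / (3600 * 24 * 365)  # V = T + 1
```

The first value comes out to roughly 24 hours and the second to roughly 27 years, matching the figures above.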
A future ACP can adjust the parameters to increase $T$, reduce $M$, and/or modify $K$.
#### User Experience
L1 validators are continuously charged a fee, albeit a small one. This poses a challenge for L1 validators: How do they maintain the balance over time?
Node clients should expose an API to track how much balance is remaining in the validator's account. This gives L1 validators a way to track how quickly the balance is decreasing and to top up when needed. A nice byproduct of the above design is that the balance in the validator's account is claimable. This means users can top up as much \$AVAX as they want and rest assured knowing they can always retrieve it if there is an excessive amount.
The expectation is that most users will not interact with node clients or track when or how much they need to top up their validator account. Wallet providers will abstract away most of this process. For users who desire more convenience, L1-as-a-Service providers will abstract away all of it.
## Backwards Compatibility
This new design for Subnets proposes a large rework to all L1-related mechanics. Rollout should be done on a going-forward basis to not cause any service disruption for live Subnets. All current Subnet validators will be able to continue validating both the Primary Network and whatever Subnets they are validating.
Any state execution changes must be coordinated through a mandatory upgrade. Implementors must take care to continue to verify the existing ruleset until the upgrade is activated. After activation, nodes should verify the new ruleset. Implementors must take care to only verify the presence of 2000 \$AVAX prior to activation.
### Deactivated Transactions
* P-Chain
* `TransformSubnetTx`
After this ACP is activated, Elastic Subnets will be disabled. `TransformSubnetTx` will not be accepted post-activation. As there are no Mainnet Elastic Subnets, there should be no production impact with this deactivation.
### New Transactions
* P-Chain
* `ConvertSubnetToL1Tx`
* `RegisterL1ValidatorTx`
* `SetL1ValidatorWeightTx`
* `DisableL1ValidatorTx`
* `IncreaseL1ValidatorBalanceTx`
## Reference Implementation
ACP-77 was implemented and will be merged into AvalancheGo behind the `Etna` upgrade flag. The full body of work can be found tagged with the `acp77` label [here](https://github.com/ava-labs/avalanchego/issues?q=sort%3Aupdated-desc+label%3Aacp77).
Since Etna is not yet activated, all new transactions introduced in ACP-77 will be rejected by AvalancheGo. If any modifications are made to ACP-77 as part of the ACP process, the implementation must be updated prior to activation.
## Security Considerations
This ACP introduces Avalanche Layer 1s, a new network type that costs significantly less than Avalanche Subnets. This can lead to a large increase in the number of networks and, by extension, the number of validators. Each additional validator adds consistent RAM usage to the P-Chain. However, this should be appropriately metered by the continuous fee mechanism outlined above.
With the sovereignty L1s have from the P-Chain, L1 staking tokens are not locked on the P-Chain. This poses a security consideration for L1 validators: malicious chains can choose to remove validators at will and take any funds that the validator has locked on the L1. The P-Chain only provides the guarantee that L1 validators can retrieve the remaining \$AVAX Balance for their validator via a `DisableL1ValidatorTx`. Any assets on the L1 are entirely under the purview of the L1. The onus is on L1 validators to vet the L1's security for any assets transferred onto the L1.
With a long window of expiry (24 hours) for the Warp message in `RegisterL1ValidatorTx`, spam of validator registration could lead to high memory pressure on the P-Chain. A future ACP can reduce the window of expiry if 24 hours proves to be a problem.
NodeIDs can be added to an L1's validator set involuntarily. However, it is important to note that any stake/rewards are *not* at risk. For a node operator who was added to a validator set involuntarily, they would only need to generate a new NodeID via key rotation as there is no lock-up of any stake to create a NodeID. This is an explicit tradeoff for easier on-boarding of NodeIDs. This mirrors the Primary Network validators guarantee of no stake/rewards at risk.
The continuous fee mechanism outlined above does not apply to inactive L1 validators since they are not stored in memory. However, inactive L1 validators are persisted on disk which can lead to persistent P-Chain state growth. A future ACP can introduce a mechanism to decrease the rate of P-Chain state growth or provide a state expiry path to reduce the amount of P-Chain state.
## Acknowledgements
Special thanks to [@StephenButtolph](https://github.com/StephenButtolph), [@aaronbuchwald](https://github.com/aaronbuchwald), and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas. Thank you to the broader Ava Labs Platform Engineering Group for their feedback on this ACP prior to publication.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-83: Dynamic Multidimensional Fees
URL: /docs/acps/83-dynamic-multidimensional-fees
Details for Avalanche Community Proposal 83: Dynamic Multidimensional Fees
| ACP | 83 |
| :---------------- | :------------------------------------------------------------------------------------------------ |
| **Title** | Dynamic multidimensional fees for P-chain and X-chain |
| **Author(s)** | Alberto Benegiamo ([@abi87](https://github.com/abi87)) |
| **Status** | Stale |
| **Track** | Standards |
| **Superseded-By** | [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) |
## Abstract
Introduce a dynamic and multidimensional fees scheme for the P-chain and X-chain.
Dynamic fees help preserve the stability of the chain by providing a feedback mechanism that increases the cost of resources when the network operates above its target utilization.
Multidimensional fees ensure that high demand for orthogonal resources does not drive up the price of underutilized ones. For example, networks provide and consume orthogonal resources including, but not limited to, bandwidth, chain state, read/write throughput, and CPU. By independently metering each resource, each can be granularly priced, keeping the network closer to optimal resource utilization.
## Motivation
The P-Chain and X-Chain currently have fixed fees and in some cases those fees are fixed to zero.
This makes transaction issuance predictable, but does not provide a feedback mechanism to preserve chain stability under high load. In contrast, the C-Chain, which has the highest and most regular load among the chains on the Primary Network, already supports dynamic fees. This ACP proposes to introduce a similar dynamic fee mechanism for the P-Chain and X-Chain to further improve the Primary Network's stability and resilience under load.
However, unlike the C-Chain, we propose a multidimensional fee scheme with an exponential update rule for each fee dimension. The [HyperSDK](https://github.com/ava-labs/hypersdk) already utilizes a multidimensional fee scheme with optional priority fees and its efficiency is backed by [academic research](https://arxiv.org/abs/2208.07919).
Finally, we split the fee into two parts: a `base fee` and a `priority fee`. The `base fee` is calculated by the network each block to accurately price each resource at a given point in time. Any amount burned beyond the base fee is treated as the `priority fee`, which buys faster transaction inclusion.
## Specification
We first introduce the multidimensional scheme, then show how to apply the dynamic fee update rule to each fee dimension. Finally, we list the new block verification rules, valid once the new fee scheme activates.
### Multidimensional scheme components
We define four fee dimensions, `Bandwidth`, `Reads`, `Writes`, and `Compute`, to describe transaction complexity. In more detail:
* `Bandwidth` measures the transaction size in bytes, as encoded by the AvalancheGo codec. Byte length is a proxy for the network resources needed to disseminate the transaction.
* `Reads` measures the number of DB reads needed to verify the transaction. DB reads include UTXO reads and any other state quantity relevant for the specific transaction.
* `Writes` measures the number of DB writes following transaction verification. DB writes include UTXOs generated as outputs of the transaction and any other state quantity relevant for the specific transaction.
* `Compute` measures the number of signatures to be verified, including UTXO signatures and those related to the authorization of specific operations.
For each fee dimension $i$, we define:
* *fee rate* $r_i$ as the price, denominated in AVAX, to be paid for a transaction with complexity $u_i$ along the fee dimension $i$.
* *base fee* as the minimal fee needed to accept a transaction. The base fee is given by the formula
$base \ fee = \sum_{i=0}^3 r_i \times u_i$
* *priority fee* as an optional fee paid on top of the base fee to speed up the transaction inclusion in a block.
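As a sketch, the base fee is simply the dot product of the per-dimension fee rates $r_i$ and the transaction's complexity vector $u_i$; all numbers below are made up for illustration.

```python
# Illustrative sketch of the base fee computation:
# base fee = sum over dimensions i of r_i * u_i
DIMENSIONS = ("bandwidth", "reads", "writes", "compute")

def base_fee(rates: dict, complexity: dict) -> int:
    return sum(rates[d] * complexity[d] for d in DIMENSIONS)

rates = {"bandwidth": 10, "reads": 100, "writes": 200, "compute": 50}
tx = {"bandwidth": 300, "reads": 4, "writes": 2, "compute": 3}
fee = base_fee(rates, tx)  # 3_000 + 400 + 400 + 150 = 3_950
```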
### Dynamic scheme components
Fee rates are updated over time, allowing fees to increase when the network is getting congested. Each new block is a potential source of congestion, as its transactions carry complexity that each validator must process to verify and eventually accept the block. The more complexity a block carries, and the more rapidly blocks are produced, the higher the congestion.
We seek a scheme that rapidly increases the fees when block complexity goes above a defined threshold and that equally rapidly decreases the fees once complexity goes down (because blocks carry fewer/simpler transactions, or because they are produced more slowly). We define the desired threshold as a *target complexity rate* $T$: we would like to process, every second, a block whose complexity is $T$. Any complexity beyond that causes congestion that we want to penalize via fees.
To update fee rates we track, for each block and each fee dimension, a parameter called the cumulative excess complexity. Fee rates applied to a block will be defined in terms of the cumulative excess complexity, as we show in the following.
Suppose that a block $B_t$ is the current chain tip. $B_t$ has the following features:
* $t$ is its timestamp.
* $\Delta C_t$ is the cumulative excess complexity along fee dimension $i$.
Say a new block $B_{t + \Delta T}$ is built on top of $B_t$, with the following features:
* $t + \Delta T$ is its timestamp
* $C_{t + \Delta T}$ is its complexity along fee dimension $i$.
Then the fee rate $r_{t + \Delta T}$ applied for the block $B_{t + \Delta T}$ along dimension $i$ will be:
$r_{t + \Delta T} = r^{min} \times e^{\frac{max(0, \Delta C_t - T \times \Delta T)}{Denom}}$
where
* $r^{min}$ is the minimal fee rate along fee dimension $i$
* $T$ is the target complexity rate along fee dimension $i$
* $Denom$ is a normalization constant for the fee dimension $i$
Moreover, once the block $B_{t + \Delta T}$ is accepted, the cumulative excess complexity is updated as follows:
$\Delta C_{t + \Delta T} = max\large(0, \Delta C_{t} - T \times \Delta T\large) + C_{t + \Delta T}$
The fee rate update formula guarantees that fee rates increase if incoming blocks are complex (large $C_{t + \Delta T}$) and if blocks are emitted rapidly (small $\Delta T$). Symmetrically, fee rates decrease to the minimum if incoming blocks are less complex and if blocks are produced less frequently.\
The update formula has a few parameters to be tuned, independently, for each fee dimension. We defer discussion about tuning to the [implementation section](#tuning-the-update-formula).
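The two update rules above can be sketched directly in Python. The parameters here are placeholders, and the real implementation would use an integer exponential approximation rather than `math.exp`:

```python
import math

# Sketch of the per-dimension dynamic fee update rules.
def fee_rate(r_min: float, denom: float, excess: float, T: float, dt: float) -> float:
    # r = r_min * exp(max(0, ΔC - T*ΔT) / Denom)
    return r_min * math.exp(max(0.0, excess - T * dt) / denom)

def update_excess(excess: float, T: float, dt: float, block_complexity: float) -> float:
    # ΔC' = max(0, ΔC - T*ΔT) + C_block
    return max(0.0, excess - T * dt) + block_complexity

# Illustrative values: excess 500, target rate 100/s, 2s since the tip.
r = fee_rate(r_min=1.0, denom=1000.0, excess=500.0, T=100.0, dt=2.0)
new_excess = update_excess(500.0, 100.0, 2.0, 250.0)
```

With these values the decayed excess is 300, so the rate is $e^{0.3} \approx 1.35$ times the minimum, and the new cumulative excess is 550.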
## Block verification rules
Upon activation of the dynamic multidimensional fees scheme we modify block processing as follows:
* **Bound block complexity**. For each fee dimension $i$, we define a *maximal block complexity* $Max$. A block is only valid if its complexity $C$ along each dimension is at most the maximum: $C \leq Max$.
* **Verify transaction fee**. When verifying each transaction in a block, we confirm that it can cover its own base fee. Note that both base fee and optional priority fees are burned.
## User Experience
### How will the wallets estimate the fees?
AvalancheGo nodes will provide new APIs exposing the current and expected fee rates, as they are likely to change block by block. Wallets can then use the fee rates to select UTXOs to pay the transaction fees. Moreover, the AvalancheGo implementation proposed above offers a `fees.Calculator` struct that can be reused by wallets and downstream projects to calculate fees.
### How will wallets be able to re-issue Txs at a higher fee?
Wallets should be able to simply re-issue the transaction, since the current AvalancheGo implementation drops mempool transactions whose fee rate is lower than the current one. More specifically, a transaction may be valid the moment it enters the mempool and it won’t be re-verified as long as it stays there. However, as soon as the transaction is selected for inclusion in the next block, it is re-verified against the latest preferred tip. If its fees are not sufficient by this time, the transaction is dropped and the wallet can simply re-issue it at a higher fee, or wait for the fee rate to go down. Note that priority fees offer some buffer against an increase in the fee rate: a transaction paying just the base fee will be evicted from the mempool in the face of a fee rate increase, while a transaction paying some extra priority fee may have enough buffer room to stay valid after some amount of fee increase.
### How do priority fees guarantee faster block inclusion?
The AvalancheGo mempool will be restructured to order transactions by priority fee. Transactions paying priority fees will be selected for block inclusion first, without violating any spend dependency.
## Backwards Compatibility
Modifying the fee scheme for P-Chain and X-Chain requires a mandatory upgrade for activation. Moreover, wallets must be modified to properly handle the new fee scheme once activated.
## Reference Implementation
The implementation is split across multiple PRs:
* P-Chain work is tracked in this issue: [https://github.com/ava-labs/avalanchego/issues/2707](https://github.com/ava-labs/avalanchego/issues/2707)
* X-Chain work is tracked in this issue: [https://github.com/ava-labs/avalanchego/issues/2708](https://github.com/ava-labs/avalanchego/issues/2708)
A very important implementation step is tuning the update formula parameters for each chain and each fee dimension. We show here the principles we followed for tuning and a simulation based on historical data.
### Tuning the update formula
The basic idea is to measure the complexity of blocks already accepted and derive the parameters from it. You can find the historical data in [this repo](https://github.com/abi87/complexities).\
To simplify the exposition I am purposefully ignoring chain specifics (like P-chain proposal blocks). We can account for chain specifics while processing the historical data. Here are the principles:
* **Target block complexity rate $T$**: calculate the distribution of block complexity and pick a high enough quantile.
* **Max block complexity $Max$**: this is probably the trickiest parameter to set.
Historically we had [pretty big transactions](https://subnets.avax.network/p-chain/tx/27pjHPRCvd3zaoQUYMesqtkVfZ188uP93zetNSqk3kSH1WjED1) (more than 1,000 referenced UTXOs). Setting a max block complexity so high that these big transactions are allowed is akin to setting no complexity cap.
On the other side, we still want to allow, even encourage, UTXO consolidation, so we may want to allow transactions [like this](https://subnets.avax.network/p-chain/tx/2LxyHzbi2AGJ4GAcHXth6pj5DwVLWeVmog2SAfh4WrqSBdENhV).
A principled way to set max block complexity may be the following:
* calculate the target block complexity rate (see previous point)
* calculate the median time elapsed among consecutive blocks
* The product of these two quantities should give us something like a target block complexity.
* Set the max block complexity to, say, $50\times$ the target value.
* **Normalization coefficient $Denom$**: I suggest we size it as follows:
* Find the largest historical peak, i.e. the sequence of consecutive blocks which contained the most complexity in the shortest period of time
* Tune $Denom$ so that it would cause a $10000\times$ increase in the fee rate for such a peak. This increase would push fees from the milliAVAX we normally pay under stable network conditions up to tens of AVAX.
* **Minimal fee rates $r^{min}$**: we could size them so that transactions fees do not change very much with respect to the currently fixed values.
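The tuning recipe above can be sketched on hypothetical historical data; the complexities, inter-block gaps, quantile choice, and $50\times$ multiplier below are all purely illustrative.

```python
import statistics

# Hypothetical per-block complexities and inter-block gaps (seconds).
complexities = [120, 80, 200, 150, 90, 400, 60, 130, 110, 95]
gaps = [3, 5, 2, 8, 4, 1, 6, 3, 7, 2]

# Target complexity rate T: a high quantile of observed complexity.
T = sorted(complexities)[int(0.8 * len(complexities))]  # 80th percentile

# Target block complexity ~ T * median inter-block time; cap the max
# block complexity at ~50x the target.
target_block_complexity = T * statistics.median(gaps)
max_block_complexity = 50 * target_block_complexity
```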
We simulate below how the update formula would behave on a peak period from Avalanche Mainnet.
Figure 1 shows a peak period, starting with block [wqKJcvEv86TBpmJY2pAY7X65hzqJr3VnHriGh4oiAktWx5qT1](https://subnets.avax.network/p-chain/block/wqKJcvEv86TBpmJY2pAY7X65hzqJr3VnHriGh4oiAktWx5qT1) and going for roughly 30 blocks. We only show `Bandwidth` for clarity, but other fees dimensions have similar behaviour.
The network load is much larger than target and sustained.\
Figure 2 shows the fee dynamics in response to the peak: fees scale up from a few milliAVAX to around 25 AVAX. Moreover, as soon as the peak is over and complexity goes back to the target value, fees are reduced very rapidly.
## Security Considerations
The new fee scheme is expected to help network stability as it offers economic incentives for users to hold back transaction issuance in times of high load. While fees are expected to remain generally low when the system is not loaded, a sudden load increase, with fuller blocks, would push the dynamic fee algorithm to increase fee rates. The increase is expected to continue until the load is reduced. Load reduction happens both by dropping unconfirmed transactions whose fee rate is no longer sufficient and by pushing users who optimize their transaction costs to delay issuance until the fee rate returns to an acceptable level.\
Note finally that the exponential fee update mechanism detailed above is [proven](https://ethresear.ch/t/multidimensional-eip-1559/11651) to be robust against strategic behavior by users who delay transaction issuance and then suddenly push a bulk of transactions once the fee rate is low enough.
## Acknowledgements
Thanks to @StephenButtolph, @patrick-ogrady, and @dhrubabasu for their feedback on these ideas.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-84: Table Preamble
URL: /docs/acps/84-table-preamble
Details for Avalanche Community Proposal 84: Table Preamble
| ACP | 84 |
| :------------ | :------------------------------------------------------------ |
| **Title** | Table Preamble for ACPs |
| **Author(s)** | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)) |
| **Status** | Activated |
| **Track** | Meta |
## Abstract
The current ACP template features a plain-text code block containing "RFC 822 style headers" as `Preamble` (see [What belongs in a successful ACP?](https://github.com/avalanche-foundation/ACPs?tab=readme-ov-file#what-belongs-in-a-successful-acp)). This header includes multiple links to discussions, authors, and other ACPs.
This ACP proposes to replace the `Preamble` code block with a Markdown table format (similar to what is used in [Ethereum EIPs](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md)).
## Motivation
The current ACPs `Preamble` is (i) not very readable and (ii) not user-friendly as links are not clickable. The proposed table format aims to fix these issues.
## Specification
The following Markdown table format is proposed:
| ACP | PR Number |
| :------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Title** | ACP title |
| **Author(s)** | A list of the author's name(s) and optionally contact info: FirstName LastName ([@GitHubUsername](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) or [email@address.com](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md)) |
| **Status** | Proposed, Implementable, Activated, Stale ([Discussion](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md)) |
| **Track** | Standards, Best Practices, Meta, Subnet |
| \**Replaces (\\*optional)** | [ACP-XX](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) |
| \**Superseded-By (\\*optional)** | [ACP-XX](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) |
It features all the existing fields of the current ACP template, and would replace the current `Preamble` code block in [ACPs/TEMPLATE.md](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/TEMPLATE.md).
## Backwards Compatibility
Existing ACPs could be updated to use the new table format, but it is not mandatory.
## Reference Implementation
For this ACP, the table would look like this:
| ACP | 84 |
| :------------ | :----------------------------------------------------------------------------------- |
| **Title** | Table Preamble for ACPs |
| **Author(s)** | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/86)) |
| **Track** | Meta |
## Security Considerations
NA
## Open Questions
NA
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-99: Validatorsetmanager Contract
URL: /docs/acps/99-validatorsetmanager-contract
Details for Avalanche Community Proposal 99: Validatorsetmanager Contract
| ACP | 99 |
| :----------- | :-------------------------------------------------------------------------------------------------------------------------- |
| Title | Validator Manager Solidity Standard |
| Author(s) | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)), Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) |
| Status | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/165)) |
| Track | Best Practices |
| Dependencies | [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) |
## Abstract
Define a standard Validator Manager Solidity smart contract to be deployed on any Avalanche EVM chain.
This ACP relies on concepts introduced in [ACP-77 (Reinventing Subnets)](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets). It depends on ACP-77 being marked as `Implementable`.
## Motivation
[ACP-77 (Reinventing Subnets)](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets) opens the door to managing an L1 validator set (stored on the P-Chain) from any chain on the Avalanche Network. The P-Chain allows a Subnet to specify a "validator manager" if it is converted to an L1 using `ConvertSubnetToL1Tx`. This `(blockchainID, address)` pair is responsible for sending ICM messages contained within `RegisterL1ValidatorTx` and `SetL1ValidatorWeightTx` on the P-Chain. This enables an on-chain program to add, modify the weight of, and remove validators.
On each validator set change, the P-Chain is willing to sign an `AddressedCall` to notify any on-chain program tracking the validator set. On-chain programs must be able to interpret this message, so they can trigger the appropriate action. The two kinds of `AddressedCall`s [defined in ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#p-chain-warp-message-payloads) are `L1ValidatorRegistrationMessage` and `L1ValidatorWeightMessage`.
Given these assumptions and the fact that most of the active blockchains on Avalanche Mainnet are EVM-based, we propose `ACP99Manager` as the standard Solidity contract specification that can:
1. Hold relevant information about the current L1 validator set
2. Send validator set updates to the P-Chain by generating `AddressedCall`s defined in ACP-77
3. Correctly update the validator set by interpreting notification messages received from the P-Chain
4. Be easily integrated into validator manager implementations that utilize various security models (e.g. Proof-of-Stake).
Having an audited and open-source reference implementation freely available will contribute to lowering the cost of launching L1s on Avalanche.
Once deployed, the `ACP99Manager` implementation contract can be used as the `Address` in the [`ConvertSubnetToL1Tx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#convertsubnettol1tx).
## Specification
> **Note:** The naming convention for the interfaces and contracts is inspired by the way [OpenZeppelin Contracts](https://docs.openzeppelin.com/contracts/5.x/) are named after ERC standards, using `ACP` instead of `ERC`.
### Type Definitions
The following type definitions are used in the function signatures described in [Contract Specification](#contract-specification).
```solidity
/**
* @notice Description of the conversion data used to convert
* a subnet to an L1 on the P-Chain.
* This data is the pre-image of a hash that is authenticated by the P-Chain
* and verified by the Validator Manager.
*/
struct ConversionData {
bytes32 subnetID;
bytes32 validatorManagerBlockchainID;
address validatorManagerAddress;
InitialValidator[] initialValidators;
}
/// @notice Specifies an initial validator, used in the conversion data.
struct InitialValidator {
bytes nodeID;
bytes blsPublicKey;
uint64 weight;
}
/// @notice L1 validator status.
enum ValidatorStatus {
Unknown,
PendingAdded,
Active,
PendingRemoved,
Completed,
Invalidated
}
/**
* @notice Specifies the owner of a validator's remaining balance or disable owner on the P-Chain.
* P-Chain addresses are also 20 bytes, so we use the address type to represent them.
*/
struct PChainOwner {
uint32 threshold;
address[] addresses;
}
/**
* @notice Contains the active state of a Validator.
* @param status The validator status.
* @param nodeID The NodeID of the validator.
* @param startingWeight The weight of the validator at the time of registration.
* @param sentNonce The current weight update nonce sent by the manager.
* @param receivedNonce The highest nonce received from the P-Chain.
* @param weight The current weight of the validator.
* @param startTime The start time of the validator.
* @param endTime The end time of the validator.
*/
struct Validator {
ValidatorStatus status;
bytes nodeID;
uint64 startingWeight;
uint64 sentNonce;
uint64 receivedNonce;
uint64 weight;
uint64 startTime;
uint64 endTime;
}
```
#### About `Validator`s
A `Validator` represents the continuous time frame during which a node is part of the validator set.
Each `Validator` is identified by its `validationID`. If a validator was added as part of the initial set of continuous, dynamic-fee-paying validators, its `validationID` is the SHA256 hash of the 36 bytes formed by concatenating the 32-byte `ConvertSubnetToL1Tx` transaction ID and the 4-byte index of the initial validator within the transaction. If a validator was added to the L1's validator set post-conversion, its `validationID` is the SHA256 hash of the payload of the `AddressedCall` in the `RegisterL1ValidatorTx` used to add it, as defined in ACP-77.
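The two derivations above can be sketched in Solidity. This is a hypothetical helper for illustration only (library and function names are not part of the specification); it assumes the caller already has the `ConvertSubnetToL1Tx` transaction ID or the raw `AddressedCall` payload bytes.

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.25;

/// @dev Hypothetical helper illustrating validationID derivation;
/// not part of the ACP99Manager specification.
library ValidationIDSketch {
    /// @notice validationID of an initial validator: SHA256 over the 36-byte
    /// concatenation of the 32-byte ConvertSubnetToL1Tx transaction ID and
    /// the 4-byte index of the validator within that transaction.
    function initialValidatorValidationID(
        bytes32 conversionTxID,
        uint32 index
    ) internal pure returns (bytes32) {
        // abi.encodePacked(bytes32, uint32) yields exactly 36 bytes.
        return sha256(abi.encodePacked(conversionTxID, index));
    }

    /// @notice validationID of a post-conversion validator: SHA256 of the
    /// AddressedCall payload carried by the RegisterL1ValidatorTx.
    function registeredValidatorValidationID(
        bytes memory addressedCallPayload
    ) internal pure returns (bytes32) {
        return sha256(addressedCallPayload);
    }
}
```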
### Contract Specification
The standard `ACP99Manager` functionality is defined by a set of events, public methods, and private methods that must be included by a compliant implementation.
For a full implementation, please see the [Reference Implementation](#reference-implementation).
#### Events
```solidity
/**
* @notice Emitted when an initial validator is registered.
* @notice The field index is the index of the initial validator in the conversion data.
* This is used along with the subnetID as the ACP-118 justification in
* signature requests to P-Chain validators over a L1ValidatorRegistrationMessage
when removing the validator.
*/
event RegisteredInitialValidator(
bytes32 indexed validationID,
bytes20 indexed nodeID,
bytes32 indexed subnetID,
uint64 weight,
uint32 index
);
/// @notice Emitted when a validator registration to the L1 is initiated.
event InitiatedValidatorRegistration(
bytes32 indexed validationID,
bytes20 indexed nodeID,
bytes32 registrationMessageID,
uint64 registrationExpiry,
uint64 weight
);
/// @notice Emitted when a validator registration to the L1 is completed.
event CompletedValidatorRegistration(bytes32 indexed validationID, uint64 weight);
/// @notice Emitted when removal of an L1 validator is initiated.
event InitiatedValidatorRemoval(
bytes32 indexed validationID,
bytes32 validatorWeightMessageID,
uint64 weight,
uint64 endTime
);
/// @notice Emitted when removal of an L1 validator is completed.
event CompletedValidatorRemoval(bytes32 indexed validationID);
/// @notice Emitted when a validator weight update is initiated.
event InitiatedValidatorWeightUpdate(
bytes32 indexed validationID, uint64 nonce, bytes32 weightUpdateMessageID, uint64 weight
);
/// @notice Emitted when a validator weight update is completed.
event CompletedValidatorWeightUpdate(bytes32 indexed validationID, uint64 nonce, uint64 weight);
```
#### Public Methods
```solidity
/// @notice Returns the SubnetID of the L1 tied to this manager
function subnetID() public view returns (bytes32 id);
/// @notice Returns the validator details for a given validation ID.
function getValidator(bytes32 validationID)
public
view
returns (Validator memory validator);
/// @notice Returns the total weight of the current L1 validator set.
function l1TotalWeight() public view returns (uint64 weight);
/**
* @notice Verifies and sets the initial validator set for the chain by consuming a
* SubnetToL1ConversionMessage from the P-Chain.
*
* Emits a {RegisteredInitialValidator} event for each initial validator in {conversionData}.
*
* @param conversionData The Subnet conversion message data used to recompute and verify against the ConversionID.
* @param messageIndex The index of the SubnetToL1ConversionMessage ICM message containing the
* ConversionID to be verified against the provided {conversionData}.
*/
function initializeValidatorSet(
ConversionData calldata conversionData,
uint32 messageIndex
) public;
/**
* @notice Completes the validator registration process by consuming an acknowledgement of the registration of a
* validationID from the P-Chain. The validator should not be considered active until this method is successfully called.
*
* Emits a {CompletedValidatorRegistration} event on success.
*
* @param messageIndex The index of the L1ValidatorRegistrationMessage to be received providing the acknowledgement.
* @return validationID The ID of the registered validator.
*/
function completeValidatorRegistration(uint32 messageIndex)
public
returns (bytes32 validationID);
/**
* @notice Completes validator removal by consuming an L1ValidatorRegistrationMessage from the P-Chain acknowledging
* that the validator has been removed.
*
* Emits a {CompletedValidatorRemoval} on success.
*
* @param messageIndex The index of the L1ValidatorRegistrationMessage.
*/
function completeValidatorRemoval(uint32 messageIndex)
public
returns (bytes32 validationID);
/**
* @notice Completes the validator weight update process by consuming an L1ValidatorWeightMessage from the P-Chain
* acknowledging the weight update. The validator weight change should not have any effect until this method is successfully called.
*
* Emits a {CompletedValidatorWeightUpdate} event on success.
*
* @param messageIndex The index of the L1ValidatorWeightMessage message to be received providing the acknowledgement.
* @return validationID The ID of the validator, retrieved from the L1ValidatorWeightMessage.
* @return nonce The nonce of the validator, retrieved from the L1ValidatorWeightMessage.
*/
function completeValidatorWeightUpdate(uint32 messageIndex)
public
returns (bytes32 validationID, uint64 nonce);
```
> Note: While `getValidator` provides a way to fetch a `Validator` based on its `validationID`, no method that returns all active validators is specified. This is because a `mapping` is a reasonable way to store active validators internally, and Solidity `mapping`s are not iterable. This can be worked around by storing additional indexing metadata in the contract, but not all applications may wish to incur that added complexity.
#### Private Methods
The following methods are specified as `internal` to account for different semantics of initiating validator set changes, such as checking uptime attested to via ICM message, or transferring funds to be locked as stake. Rather than broaden the definitions of these functions to cover all use cases, we leave it to the implementer to define a suitable external interface and call the appropriate `ACP99Manager` function internally.
```solidity
/**
* @notice Initiates validator registration by issuing a RegisterL1ValidatorMessage. The validator should
* not be considered active until completeValidatorRegistration is called.
*
* Emits an {InitiatedValidatorRegistration} event on success.
*
* @param nodeID The ID of the node to add to the L1.
* @param blsPublicKey The BLS public key of the validator.
* @param remainingBalanceOwner The remaining balance owner of the validator.
* @param disableOwner The disable owner of the validator.
* @param weight The weight of the node on the L1.
* @return validationID The ID of the registered validator.
*/
function _initiateValidatorRegistration(
bytes memory nodeID,
bytes memory blsPublicKey,
PChainOwner memory remainingBalanceOwner,
PChainOwner memory disableOwner,
uint64 weight
) internal returns (bytes32 validationID);
/**
* @notice Initiates validator removal by issuing an L1ValidatorWeightMessage with the weight set to zero.
* The validator should be considered inactive as soon as this function is called.
*
* Emits an {InitiatedValidatorRemoval} on success.
*
* @param validationID The ID of the validator to remove.
*/
function _initiateValidatorRemoval(bytes32 validationID) internal;
/**
* @notice Initiates a validator weight update by issuing an L1ValidatorWeightMessage with a nonzero weight.
* The validator weight change should not have any effect until completeValidatorWeightUpdate is successfully called.
*
* Emits an {InitiatedValidatorWeightUpdate} event on success.
*
* @param validationID The ID of the validator to modify.
* @param weight The new weight of the validator.
* @return nonce The validator nonce associated with the weight change.
* @return messageID The ID of the L1ValidatorWeightMessage used to update the validator's weight.
*/
function _initiateValidatorWeightUpdate(
bytes32 validationID,
uint64 weight
) internal returns (uint64 nonce, bytes32 messageID);
```
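As an illustration of this pattern, a PoA-style implementation might expose an access-controlled external entry point that performs its own checks and then delegates to the internal initiation function. This is a hypothetical sketch: `PoAManagerSketch` and `addValidator` are illustrative names, and the internal function is assumed to be supplied by an `ACP99Manager`-compliant base contract.

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.25;

// PChainOwner as defined in the Type Definitions above.
struct PChainOwner {
    uint32 threshold;
    address[] addresses;
}

/// @dev Hypothetical sketch of a PoA external wrapper around the internal
/// ACP99Manager initiation function. Not part of the specification.
abstract contract PoAManagerSketch {
    address public immutable owner;

    constructor(address owner_) {
        owner = owner_;
    }

    /// @dev Assumed to be implemented by an ACP99Manager-compliant base contract.
    function _initiateValidatorRegistration(
        bytes memory nodeID,
        bytes memory blsPublicKey,
        PChainOwner memory remainingBalanceOwner,
        PChainOwner memory disableOwner,
        uint64 weight
    ) internal virtual returns (bytes32 validationID);

    /// @notice PoA semantics: only the owner may register validators.
    function addValidator(
        bytes memory nodeID,
        bytes memory blsPublicKey,
        PChainOwner memory remainingBalanceOwner,
        PChainOwner memory disableOwner,
        uint64 weight
    ) external returns (bytes32 validationID) {
        require(msg.sender == owner, "PoA: sender is not the owner");
        return _initiateValidatorRegistration(
            nodeID, blsPublicKey, remainingBalanceOwner, disableOwner, weight
        );
    }
}
```

A PoS implementation would follow the same shape, but its external entry point would additionally lock stake before delegating to the internal function.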
##### About `DisableL1ValidatorTx`
In addition to calling `_initiateValidatorRemoval`, a validator may be disabled by issuing a `DisableL1ValidatorTx` on the P-Chain. This transaction allows the `DisableOwner` of a validator to disable it directly on the P-Chain in order to claim the unspent `Balance` linked to the validator of a failed L1. It is therefore not meant to be invoked from the `Manager` contract.
## Backwards Compatibility
`ACP99Manager` is a reference specification. As such, it doesn't have any impact on the current behavior of the Avalanche protocol.
## Reference Implementation
A reference implementation will be provided in Ava Labs' [ICM Contracts](https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager) repository. This reference implementation will need to be updated to conform to `ACP99Manager` before this ACP may be marked `Implementable`.
### Example Integrations
`ACP99Manager` is designed to be easily incorporated into any architecture. Two example integrations are included in this ACP, each of which uses a different architecture.
#### Multi-contract Design
The multi-contract design consists of a contract that implements `ACP99Manager`, and separate "security module" contracts that implement security models, such as PoS or PoA. Each `ACP99Manager` implementation contract is associated with one or more "security modules" that are the only contracts allowed to call the `ACP99Manager` functions that initiate validator set changes (`initiateValidatorRegistration` and `initiateValidatorWeightUpdate`). Every time a validator is added/removed or a weight change is initiated, the `ACP99Manager` implementation will, in turn, call the corresponding function of the "security module" (`handleValidatorRegistration` or `handleValidatorWeightChange`). We recommend that the "security modules" reference an immutable `ACP99Manager` contract address for security reasons.
It is up to the "security module" to decide what action to take when a validator is added/removed or a weight change is confirmed by the P-Chain. Such actions could be starting the withdrawal period and allocating rewards in a PoS L1.
```mermaid
flowchart TD
Safe -.->|Own| SecurityModule
Safe -.->|Own| Manager
SecurityModule <-.->|Reference| Manager
Safe -->|addValidator| SecurityModule
SecurityModule -->|initiateValidatorRegistration| Manager
Manager -->|sendWarpMessage| P
P -->|completeValidatorRegistration| Manager
Manager -->|handleValidatorRegistration| SecurityModule
```
"Security modules" could implement PoS, Liquid PoS, etc. The specification of such smart contracts is out of the scope of this ACP.
A work in progress implementation is available in the [Suzaku Contracts Library](https://github.com/suzaku-network/suzaku-contracts-library/blob/main/README.md#acp99-contracts-library) repository. It will be updated until this ACP is considered `Implementable` based on the outcome of the discussion.
Ava Labs' V2 Validator Manager also implements this architecture for a Proof-of-Stake security module, and is available in their [ICM Contracts Repository](https://github.com/ava-labs/icm-contracts/tree/validator-manager-v2.0.0/contracts/validator-manager/StakingManager.sol).
#### Single-contract Design
The single-contract design consists of a class hierarchy with the base class implementing `ACP99Manager`. The `PoAValidatorManager` child class in the below diagram may be swapped out for another class implementing a different security model, such as PoS.
```mermaid
classDiagram
class ACP99Manager
<<interface>> ACP99Manager
class ValidatorManager {
completeValidatorRegistration
}
<<abstract>> ValidatorManager
class PoAValidatorManager {
initiateValidatorRegistration
initiateEndValidation
completeEndValidation
}
ACP99Manager <|-- ValidatorManager
ValidatorManager <|-- PoAValidatorManager
```
No reference implementation is provided for this architecture in particular, but Ava Labs' V1 [Validator Manager](https://github.com/ava-labs/icm-contracts/tree/validator-manager-v1.0.0/contracts/validator-manager) implements much of the functional behavior described by the specification. It predates the specification, however, so there are some deviations. It should at most be treated as a model of an approximate implementation of this standard.
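The hierarchy in the diagram can be sketched as the following skeleton. All names suffixed with `Sketch` are hypothetical, and the bodies are illustrative placeholders rather than working logic.

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.25;

/// @dev Hypothetical skeleton of the single-contract hierarchy shown above.
abstract contract ACP99ManagerSketch {
    // The ACP99Manager events, public methods, and internal
    // _initiate* methods would be declared here.
    function completeValidatorRegistration(
        uint32 messageIndex
    ) public virtual returns (bytes32 validationID);
}

// Plumbing shared by every security model lives in the middle class.
abstract contract ValidatorManagerSketch is ACP99ManagerSketch {
    function completeValidatorRegistration(
        uint32 messageIndex
    ) public virtual override returns (bytes32 validationID) {
        // Consume the L1ValidatorRegistrationMessage at messageIndex,
        // update local validator state, and emit
        // CompletedValidatorRegistration.
    }
}

// The leaf class encodes the security model: PoA here, but a PoS class
// could be swapped in without changing the layers above.
contract PoAValidatorManagerSketch is ValidatorManagerSketch {
    // initiateValidatorRegistration, initiateEndValidation, and
    // completeEndValidation would be implemented here with PoA
    // access control.
}
```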
## Security Considerations
The audit process for `ACP99Manager` and its reference implementations is of the utmost importance for the future of the Avalanche ecosystem, as most L1s would rely upon it for the security of their validator sets.
## Open Questions
### Is there an interest to keep historical information about the validator set on the manager chain?
It is left to the implementer to decide if `getValidator` should return information about historical validators. Information about past validator performance may not be relevant for all applications (e.g. PoA has no need to know about past validators' uptimes). This information will still be available in archive nodes and offchain tools (e.g. explorers), but it is not enforced at the contract level.
### Should `ACP99Manager` include a churn control mechanism?
The Ava Labs [implementation](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/ValidatorManager.sol) of the `ValidatorManager` contract includes a churn control mechanism that prevents too much weight from being added or removed from the validator set in a short amount of time. Excessive churn can cause consensus failures, so it may be appropriate to require that churn tracking is implemented in some capacity.
## Acknowledgments
Special thanks to [@leopaul36](https://github.com/leopaul36), [@aaronbuchwald](https://github.com/aaronbuchwald), [@dhrubabasu](https://github.com/dhrubabasu), [@minghinmatthewlam](https://github.com/minghinmatthewlam) and [@michaelkaplan13](https://github.com/michaelkaplan13) for their reviews of previous versions of this ACP!
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# Avalanche Community Proposals (ACPs)
URL: /docs/acps
Official framework for proposing improvements and gathering consensus around changes to the Avalanche Network
## What is an Avalanche Community Proposal (ACP)?
An Avalanche Community Proposal is a concise document that introduces a change or best practice for adoption on the [Avalanche Network](https://www.avax.com). ACPs should provide clear technical specifications of any proposals and a compelling rationale for their adoption.
ACPs are an open framework for proposing improvements and gathering consensus around changes to the Avalanche Network. ACPs can be proposed by anyone and will be merged into this repository as long as they are well-formatted and coherent. Once an overwhelming majority of the Avalanche Network/Community have [signaled their support for an ACP](https://docs.avax.network/nodes/configure/avalanchego-config-flags#avalanche-community-proposals), it may be scheduled for activation on the Avalanche Network by Avalanche Network Clients (ANCs). It is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible ANC, such as [AvalancheGo](https://github.com/ava-labs/avalanchego).
## ACP Tracks
There are four kinds of ACP:
* A `Standards Track` ACP describes a change to the design or function of the Avalanche Network, such as a change to the P2P networking protocol, P-Chain design, Subnet architecture, or any change/addition that affects the interoperability of Avalanche Network Clients (ANCs).
* A `Best Practices Track` ACP describes a design pattern or common interface that should be used across the Avalanche Network to make it easier to integrate with Avalanche or for Subnets to interoperate with each other. This would include things like proposing a smart contract interface, not proposing a change to how smart contracts are executed.
* A `Meta Track` ACP describes a change to the ACP process or suggests a new way for the Avalanche Community to collaborate.
* A `Subnet Track` ACP describes a change to a particular Subnet. This would include things like configuration changes or coordinated Subnet upgrades.
## ACP Statuses
There are four statuses of an ACP:
* A `Proposed` ACP has been merged into the main branch of the ACP repository. It is actively being discussed by the Avalanche Community and may be modified based on feedback.
* An `Implementable` ACP is considered "ready for implementation" by the author(s) and will no longer change meaningfully from its current form (which would require a new ACP).
* An `Activated` ACP has been activated on the Avalanche Network via a coordinated upgrade by the Avalanche Community. Once an ACP is `Activated`, it is locked.
* A `Stale` ACP has been abandoned by its author(s) because it is not supported by the Avalanche Community or has been replaced with another ACP.
## ACP Workflow
### Step 0: Think of a Novel Improvement to Avalanche
The ACP process begins with a new idea for Avalanche. Each potential ACP must have an author(s): someone who writes the ACP using the style and format described below, shepherds the associated GitHub Discussion, and attempts to build consensus around the idea. Note that ideas and any resulting ACPs are public. Authors should not post any ideas or anything in an ACP that the Author wants to keep confidential or to keep ownership rights in (such as intellectual property rights).
### Step 1: Post Your Idea to [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/ideas)
The author(s) should first attempt to ascertain whether there is support for their idea by posting in the "Ideas" category of GitHub Discussions. Vetting an idea publicly before going as far as writing an ACP is meant to save both the potential author(s) and the wider Avalanche Community time. Asking the Avalanche Community first if an idea is original helps prevent too much time being spent on something that is guaranteed to be rejected based on prior discussions (searching the Internet does not always do the trick). It also helps to make sure the idea is applicable to the entire community and not just the author(s). Small enhancements or patches often don't need standardization between multiple projects; these don't need an ACP and should be injected into the relevant development workflow with a patch submission to the applicable ANC issue tracker.
### Step 2: Propose an ACP via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls)
Once the author(s) feels confident that an idea has a decent chance of acceptance, an ACP should be drafted and submitted as a pull request (PR). This draft must be written in ACP style as described below. It is highly recommended that a single ACP contain a single key proposal or new idea. The more focused the ACP, the more successful it tends to be. If in doubt, split your ACP into several well-focused ones. The PR number of the ACP will become its assigned number.
### Step 3: Build Consensus on [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/discussion) and Provide an Implementation (if Applicable)
ACPs will be merged by ACP maintainers if the proposal is generally well-formatted and coherent. ACP editors will attempt to merge anything worthy of discussion, regardless of feasibility or complexity, that is not a duplicate or incomplete. After an ACP is merged, an official GitHub Discussion will be opened for the ACP and linked to the proposal for community discussion. It is recommended for author(s) or supportive Avalanche Community members to post an accompanying non-technical overview of their ACP for general consumption in this GitHub Discussion. The ACP should be reviewed and broadly supported before a reference implementation is started, again to avoid wasting the author(s) and the Avalanche Community's time, unless a reference implementation will aid people in studying the ACP.
### Step 4: Mark ACP as `Implementable` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls)
Once an ACP is considered complete by the author(s), it should be marked as `Implementable`. At this point, all open questions should be addressed and an associated reference implementation should be provided (if applicable). As mentioned earlier, the Avalanche Foundation meets periodically to recommend the ratification of specific ACPs but it is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible Avalanche Network Client (ANC), such as [AvalancheGo](https://github.com/ava-labs/avalanchego).
### \[Optional] Step 5: Mark ACP as `Stale` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls)
An ACP can be superseded by a different ACP, rendering the original obsolete. If this occurs, the original ACP will be marked as `Stale`. ACPs may also be marked as `Stale` if the author(s) abandon work on it for a prolonged period of time (12+ months). ACPs may be reopened and moved back to `Proposed` if the author(s) restart work.
### Maintenance
ACP maintainers will only merge PRs updating an ACP if it is created or approved by at least one of the author(s). ACP maintainers are not responsible for ensuring ACP author(s) approve the PR. ACP author(s) are expected to review PRs that target their unlocked ACP (`Proposed` or `Implementable`). Any PRs opened against a locked ACP (`Activated` or `Stale`) will not be merged by ACP maintainers.
## What belongs in a successful ACP?
Each ACP must have the following parts:
* `Preamble`: Markdown table containing metadata about the ACP, including the ACP number, a short descriptive title, the author(s), and optionally the contact info for each author, etc.
* `Abstract`: Concise (\~200 word) description of the ACP
* `Motivation`: Rationale for adopting the ACP and the specific issue/challenge/opportunity it addresses
* `Specification`: Complete description of the semantics of any change should allow any ANC/Avalanche Community member to implement the ACP
* `Security Considerations`: Security implications of the proposed ACP
Each ACP can have the following parts:
* `Open Questions`: Questions that should be resolved before implementation
Each `Standards Track` ACP must have the following parts:
* `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community
* `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change
Each `Best Practices Track` ACP can have the following parts:
* `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community
* `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change
### ACP Formats and Templates
Each ACP is allocated a unique subdirectory in the `ACPs` directory. The name of this subdirectory must be of the form `N-T` where `N` is the ACP number and `T` is the ACP title with any spaces replaced by hyphens. ACPs must be written in [markdown](https://daringfireball.net/projects/markdown/syntax) format and stored at `ACPs/N-T/README.md`. Please see the [ACP template](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/TEMPLATE.md) for an example of the correct layout.
### Auxiliary Files
ACPs may include auxiliary files such as diagrams or code snippets. Such files should be stored in the ACP's subdirectory (`ACPs/N-T/*`). There is no required naming convention for auxiliary files.
### Waived Copyright
ACP authors must waive any copyright claims before an ACP will be merged into the repository. This can be done by including the following text in an ACP:
```text
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
```
## Proposals
*You can view the status of each ACP on the [ACP Tracker](https://github.com/orgs/avalanche-foundation/projects/1/views/1).*
| Number | Title | Author(s) | Type |
| :------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- |
| [13](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/13-subnet-only-validators/README.md) | Subnet-Only Validators (SOVs) | Patrick O'Grady ([contact@patrickogrady.xyz](mailto:contact@patrickogrady.xyz)) | Standards |
| [20](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/20-ed25519-p2p/README.md) | Ed25519 p2p | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards |
| [23](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/23-p-chain-native-transfers/README.md) | P-Chain Native Transfers | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards |
| [24](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/24-shanghai-eips/README.md) | Activate Shanghai EIPs on C-Chain | Darioush Jalali ([@darioush](https://github.com/darioush)) | Standards |
| [25](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/25-vm-application-errors/README.md) | Virtual Machine Application Errors | Joshua Kim ([@joshua-kim](https://github.com/joshua-kim)) | Standards |
| [30](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/30-avalanche-warp-x-evm/README.md) | Integrate Avalanche Warp Messaging into the EVM | Aaron Buchwald ([aaron.buchwald56@gmail.com](mailto:aaron.buchwald56@gmail.com)) | Standards |
| [31](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/31-enable-subnet-ownership-transfer/README.md) | Enable Subnet Ownership Transfer | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards |
| [41](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/41-remove-pending-stakers/README.md) | Remove Pending Stakers | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards |
| [62](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/62-disable-addvalidatortx-and-adddelegatortx/README.md) | Disable `AddValidatorTx` and `AddDelegatorTx` | Jacob Everly ([https://twitter.com/JacobEv3rly](https://twitter.com/JacobEv3rly)), Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards |
| [75](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/75-acceptance-proofs/README.md) | Acceptance Proofs | Joshua Kim ([@joshua-kim](https://github.com/joshua-kim)) | Standards |
| [77](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/77-reinventing-subnets/README.md) | Reinventing Subnets | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards |
| [83](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/83-dynamic-multidimensional-fees/README.md) | Dynamic Multidimensional Fees for P-Chain and X-Chain | Alberto Benegiamo ([@abi87](https://github.com/abi87)) | Standards |
| [84](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/84-table-preamble/README.md) | Table Preamble for ACPs | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)) | Meta |
| [99](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/99-validatorsetmanager-contract/README.md) | Validator Manager Solidity Standard | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)), Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) | Best Practices |
| [103](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/103-dynamic-fees/README.md) | Add Dynamic Fees to the X-Chain and P-Chain | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)), Alberto Benegiamo ([@abi87](https://github.com/abi87)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) | Standards |
| [108](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/108-evm-event-importing/README.md) | EVM Event Importing | Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | Best Practices |
| [113](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/113-provable-randomness/README.md) | Provable Virtual Machine Randomness | Tsachi Herman ([@tsachiherman](https://github.com/tsachiherman)) | Standards |
| [118](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/118-warp-signature-request/README.md) | Standardized P2P Warp Signature Request Interface | Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) | Best Practices |
| [125](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/125-basefee-reduction/README.md) | Reduce C-Chain minimum base fee from 25 nAVAX to 1 nAVAX | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Darioush Jalali ([@darioush](https://github.com/darioush)) | Standards |
| [131](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/131-cancun-eips/README.md) | Activate Cancun EIPs on C-Chain and Subnet-EVM chains | Darioush Jalali ([@darioush](https://github.com/darioush)), Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)) | Standards |
| [151](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/151-use-current-block-pchain-height-as-context/README.md) | Use current block P-Chain height as context for state verification | Ian Suvak ([@iansuvak](https://github.com/iansuvak)) | Standards |
| [176](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md) | Dynamic EVM Gas Limits and Price Discovery Updates | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | Standards |
| [181](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/181-p-chain-epoched-views/README.md) | P-Chain Epoched Views | Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) | Standards |
| [191](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/191-seamless-l1-creation/README.md) | Seamless L1 Creations (CreateL1Tx) | Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Aaron Buchwald ([aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)), Meag FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald)) | Standards |
| [194](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/194-streaming-asynchronous-execution/README.md) | Streaming Asynchronous Execution | Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) | Standards |
| [204](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/204-precompile-secp256r1/README.md) | Precompile for secp256r1 Curve Support | Santiago Cammi ([@scammi](https://github.com/scammi)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)) | Standards |
| [209](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/209-eip7702-style-account-abstraction/README.md) | EIP-7702-style Set Code for EOAs | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Aaron Buchwald ([aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | Standards |
| [224](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/224-dynamic-gas-limit-in-subnet-evm/README.md) | Introduce ACP-176-Based Dynamic Gas Limits and Fee Manager Precompile in Subnet-EVM | Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | Standards |
| [226](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/226-dynamic-minimum-block-times/README.md) | Dynamic Minimum Block Times | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | Standards |
## Contributing
Before contributing to ACPs, please read the [ACP Terms of Contribution](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/CONTRIBUTING.md).
# When to Build on C-Chain
URL: /docs/dapps/c-chain-or-avalanche-l1
Learn key concepts to decide when to build on the Avalanche C-Chain.
Here are some advantages of the Avalanche C-Chain that you should take into account.
## High Composability with C-Chain Assets
C-Chain is a better option for seamless integration with existing C-Chain assets and contracts. It is easier to build a DeFi application on C-Chain, as it provides larger liquidity pools and thus allows for efficient exchange between popular assets.
## Low Initial Cost
C-Chain has the economic advantages of low-cost deployment and cheap transactions. The Etna upgrade reduced the minimum base fee of the Avalanche C-Chain 25-fold (from 25 nAVAX to 1 nAVAX), resulting in much lower transaction costs.
## Low Operational Costs
C-Chain is run and operated by thousands of nodes, making it highly decentralized and reliable. All the supporting infrastructure (explorers, indexers, exchanges, bridges) has already been built out by dedicated teams that maintain it at no extra charge. Projects deployed on the C-Chain can leverage all of that essentially for free.
## High Security
The security of the Avalanche Primary Network is a function of the security of its underlying validators and stake delegators. You can choose the C-Chain to achieve maximum security by utilizing the thousands of Avalanche Primary Network validators.
## Conclusion
If an application has a relatively low transaction rate and no special circumstances that would make the C-Chain a non-starter, you can begin with a C-Chain deployment to leverage existing technical infrastructure, and later expand to an Avalanche L1. That way you can focus on the core of your project, and once you have a solid product/market fit and enough traction that the C-Chain is constraining you, plan a move to your own Avalanche L1.
Of course, we're happy to talk to you about your architecture and help you choose the best path forward. Feel free to reach out to us on [Discord](https://chat.avalabs.org/) or other [community channels](https://www.avax.network/community) we run.
# Introduction
URL: /docs/dapps
Learn about the Avalanche C-Chain.
Avalanche is a [network of networks](/docs/quick-start/primary-network). One of the chains running on Avalanche Primary Network is an EVM fork called the C-Chain (contract chain).
C-Chain runs a fork of [`go-ethereum`](https://geth.ethereum.org/docs/rpc/server) called [`coreth`](https://github.com/ava-labs/coreth) that has the networking and consensus portions replaced with Avalanche equivalents. What's left is the Ethereum VM, which runs Solidity smart contracts and manages data structures and blocks on the chain.
As a result, you get a blockchain that can run all the Solidity smart contracts from Ethereum, but with much greater transaction bandwidth and instant finality that [Avalanche's revolutionary consensus](/docs/quick-start/avalanche-consensus) enables.
Coreth is loaded as a plugin into [AvalancheGo](https://github.com/ava-labs/avalanchego), the client node application used to run the Avalanche network. Any dApp deployed to the Avalanche C-Chain will function the same as on Ethereum, but much faster and cheaper.
## Add C-Chain to Wallet
### Avalanche C-Chain Mainnet
* **Network Name**: Avalanche Mainnet C-Chain
* **RPC URL**: [https://api.avax.network/ext/bc/C/rpc](https://api.avax.network/ext/bc/C/rpc)
* **WebSocket URL**: wss://api.avax.network/ext/bc/C/ws
* **ChainID**: `43114`
* **Symbol**: `AVAX`
* **Explorer**: [https://subnets.avax.network/c-chain](https://subnets.avax.network/c-chain)
### Avalanche Fuji Testnet
* **Network Name**: Avalanche Fuji C-Chain
* **RPC URL**: [https://api.avax-test.network/ext/bc/C/rpc](https://api.avax-test.network/ext/bc/C/rpc)
* **WebSocket URL**: wss://api.avax-test.network/ext/bc/C/ws
* **ChainID**: `43113`
* **Symbol**: `AVAX`
* **Explorer**: [https://subnets-test.avax.network/c-chain](https://subnets-test.avax.network/c-chain)
### Via Block Explorers
Head to either explorer linked above and select "Add Avalanche C-Chain to Wallet" under "Chain Info" to automatically add the network.
Alternatively, visit [chainlist.org](https://chainlist.org/?search=Avalanche\&testnets=true) and connect your wallet.
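Programmatically, a dApp can prompt a browser wallet to add the network with an EIP-3085 `wallet_addEthereumChain` request. Below is a minimal sketch of the request payload for Fuji, built from the parameters above (the wallet-facing `window.ethereum.request` call itself is omitted):

```python
def add_chain_request(name, rpc_url, chain_id, symbol, explorer):
    """Build an EIP-3085 `wallet_addEthereumChain` request body.

    EIP-3085 requires the chainId as a hex string; the remaining fields
    mirror the network parameters listed above.
    """
    return {
        "method": "wallet_addEthereumChain",
        "params": [{
            "chainId": hex(chain_id),  # 43113 -> "0xa869"
            "chainName": name,
            "nativeCurrency": {"name": symbol, "symbol": symbol, "decimals": 18},
            "rpcUrls": [rpc_url],
            "blockExplorerUrls": [explorer],
        }],
    }

req = add_chain_request(
    "Avalanche Fuji C-Chain",
    "https://api.avax-test.network/ext/bc/C/rpc",
    43113,
    "AVAX",
    "https://subnets-test.avax.network/c-chain",
)
```

The same helper works for Mainnet by swapping in chain ID `43114` and the Mainnet URLs.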
# Avalanche Community Proposals
URL: /docs/quick-start/avalanche-community-proposals
Learn about community proposals and how to create them.
An Avalanche Community Proposal is a concise document that introduces a change or best practice for adoption on the [Avalanche Network](https://www.avax.network/). ACPs should provide clear technical specifications of any proposals and a compelling rationale for their adoption.
ACPs are an open framework for proposing improvements and gathering consensus around changes to the Avalanche Network. ACPs can be proposed by anyone and will be merged into this repository as long as they are well-formatted and coherent. Once an overwhelming majority of the Avalanche Network/Community have [signaled their support for an ACP](/docs/nodes/configure/configs-flags#avalanche-community-proposals), it may be scheduled for activation on the Avalanche Network by Avalanche Network Clients (ANCs). It is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible ANC, such as [AvalancheGo](https://github.com/ava-labs/avalanchego).
## ACP Tracks
There are four kinds of ACP:
* A `Standards Track` ACP describes a change to the design or function of the Avalanche Network, such as a change to the P2P networking protocol, P-Chain design, Avalanche L1 architecture, or any change/addition that affects the interoperability of Avalanche Network Clients (ANCs).
* A `Best Practices Track` ACP describes a design pattern or common interface that should be used across the Avalanche Network to make it easier to integrate with Avalanche or for Avalanche L1s to interoperate with each other. This would include things like proposing a smart contract interface, not proposing a change to how smart contracts are executed.
* A `Meta Track` ACP describes a change to the ACP process or suggests a new way for the Avalanche Community to collaborate.
* A `Subnet Track` ACP describes a change to a particular Avalanche L1. This would include things like configuration changes or coordinated Layer 1 upgrades.
## ACP Statuses
There are four statuses of an ACP:
* A `Proposed` ACP has been merged into the main branch of the ACP repository. It is actively being discussed by the Avalanche Community and may be modified based on feedback.
* An `Implementable` ACP is considered "ready for implementation" by the author and will no longer change meaningfully from its current form (which would require a new ACP).
* An `Activated` ACP has been activated on the Avalanche Network via a coordinated upgrade by the Avalanche Community. Once an ACP is `Activated`, it is locked.
* A `Stale` ACP has been abandoned by its author because it is not supported by the Avalanche Community or has been replaced with another ACP.
## ACP Workflow
### Step 0: Think of a Novel Improvement to Avalanche
The ACP process begins with a new idea for Avalanche. Each potential ACP must have an author: someone who writes the ACP using the style and format described below, shepherds the associated GitHub Discussion, and attempts to build consensus around the idea. Note that ideas and any resulting ACPs are public. Authors should not post any ideas, or include anything in an ACP, that they want to keep confidential or retain ownership rights in (such as intellectual property rights).
### Step 1: Post Your Idea to [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/ideas)
The author should first attempt to ascertain whether there is support for their idea by posting in the "Ideas" category of GitHub Discussions. Vetting an idea publicly before going as far as writing an ACP is meant to save both the potential author and the wider Avalanche Community time. Asking the Avalanche Community first if an idea is original helps prevent too much time being spent on something that is guaranteed to be rejected based on prior discussions (searching the Internet does not always do the trick). It also helps to make sure the idea is applicable to the entire community and not just the author. Small enhancements or patches often don't need standardization between multiple projects; these don't need an ACP and should be injected into the relevant development workflow with a patch submission to the applicable ANC issue tracker.
### Step 2: Propose an ACP via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls)
Once the author feels confident that an idea has a decent chance of acceptance, an ACP should be drafted and submitted as a pull request (PR). This draft must be written in ACP style as described below. It is highly recommended that a single ACP contain a single key proposal or new idea. The more focused the ACP, the more successful it tends to be. If in doubt, split your ACP into several well-focused ones. The PR number of the ACP will become its assigned number.
### Step 3: Build Consensus on [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/discussion) and Provide an Implementation (if Applicable)
ACPs will be merged by ACP maintainers if the proposal is generally well-formatted and coherent. ACP editors will attempt to merge anything worthy of discussion, regardless of feasibility or complexity, that is not a duplicate or incomplete. After an ACP is merged, an official GitHub Discussion will be opened for the ACP and linked to the proposal for community discussion. It is recommended that the author or supportive Avalanche Community members post an accompanying non-technical overview of their ACP for general consumption in this GitHub Discussion. The ACP should be reviewed and broadly supported before a reference implementation is started, again to avoid wasting the author's and the Avalanche Community's time, unless a reference implementation will aid people in studying the ACP.
### Step 4: Mark ACP as `Implementable` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls)
Once an ACP is considered complete by the author, it should be marked as `Implementable`. At this point, all open questions should be addressed and an associated reference implementation should be provided (if applicable). As mentioned earlier, the Avalanche Foundation meets periodically to recommend the ratification of specific ACPs but it is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible Avalanche Network Client (ANC), such as [AvalancheGo](https://github.com/ava-labs/avalanchego).
### \[Optional] Step 5: Mark ACP as `Stale` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls)
An ACP can be superseded by a different ACP, rendering the original obsolete. If this occurs, the original ACP will be marked as `Stale`. An ACP may also be marked as `Stale` if the author abandons work on it for a prolonged period of time (12+ months). ACPs may be reopened and moved back to `Proposed` if the author restarts work.
## What Belongs in a Successful ACP?
Each ACP must have the following parts:
* `Preamble`: Markdown table containing metadata about the ACP, including the ACP number, a short descriptive title, the author, and optionally the contact info for each author, etc.
* `Abstract`: Concise (\~200 word) description of the ACP
* `Motivation`: Rationale for adopting the ACP and the specific issue/challenge/opportunity it addresses
* `Specification`: Complete description of the semantics of any change, detailed enough for any ANC/Avalanche Community member to implement the ACP
* `Security Considerations`: Security implications of the proposed ACP
Each ACP can have the following parts:
* `Open Questions`: Questions that should be resolved before implementation
Each `Standards Track` ACP must have the following parts:
* `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community
* `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change
Each `Best Practices Track` ACP can have the following parts:
* `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community
* `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change
### ACP Formats and Templates
Each ACP is allocated a unique subdirectory in the `ACPs` directory. The name of this subdirectory must be of the form `N-T` where `N` is the ACP number and `T` is the ACP title with any spaces replaced by hyphens. ACPs must be written in [markdown](https://daringfireball.net/projects/markdown/syntax) format and stored at `ACPs/N-T/README.md`. Please see the [ACP template](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/TEMPLATE.md) for an example of the correct layout.
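As a small illustration of the `N-T` naming rule, the expected path can be built like so (the lowercasing here is an assumption matching the directory names in the table above; the rule itself only mentions replacing spaces with hyphens, and a hypothetical ACP number and title are used):

```python
def acp_path(number: int, title: str) -> str:
    """Build the expected README path for an ACP from its number and title.

    Follows the `ACPs/N-T/README.md` layout: N is the ACP number, T is the
    title with spaces replaced by hyphens (lowercased, per existing entries).
    """
    slug = title.lower().replace(" ", "-")
    return f"ACPs/{number}-{slug}/README.md"

acp_path(42, "Example Proposal")  # → "ACPs/42-example-proposal/README.md"
```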
### Auxiliary Files
ACPs may include auxiliary files such as diagrams or code snippets. Such files should be stored in the ACP's subdirectory (`ACPs/N-T/*`). There is no required naming convention for auxiliary files.
### Waived Copyright
ACP authors must waive any copyright claims before an ACP will be merged into the repository. This can be done by including the following text in an ACP:
```
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
```
## Contributing
Before contributing to ACPs, please read the [ACP Terms of Contribution](https://github.com/avalanche-foundation/ACPs/blob/main/CONTRIBUTING.md).
# Avalanche Consensus
URL: /docs/quick-start/avalanche-consensus
Learn about the groundbreaking Avalanche Consensus algorithms.
Consensus is the task of getting a group of computers (a.k.a. nodes) to come to an agreement on a decision. In blockchain, this means that all the participants in a network have to agree on the changes made to the shared ledger.
This agreement is reached through a specific process, a consensus protocol, that ensures that everyone sees the same information and that the information is accurate and trustworthy.
## Avalanche Consensus
Avalanche Consensus is a consensus protocol that is scalable, robust, and decentralized. It combines features of both classical and Nakamoto consensus mechanisms to achieve high throughput, fast finality, and energy efficiency. For the whitepaper, see [here](https://www.avalabs.org/whitepapers).
Key Features Include:
* Speed: Avalanche consensus provides sub-second, immutable finality, ensuring that transactions are quickly confirmed and irreversible.
* Scalability: Avalanche consensus enables high network throughput while ensuring low latency.
* Energy Efficiency: Unlike other popular consensus protocols, participation in Avalanche consensus is neither computationally intensive nor expensive.
* Adaptive Security: Avalanche consensus is designed to resist various attacks, including sybil attacks, distributed denial-of-service (DDoS) attacks, and collusion attacks. Its probabilistic nature ensures that the consensus outcome converges to the desired state, even when the network is under attack.
## Conceptual Overview
Consensus protocols in the Avalanche family operate through repeated sub-sampled voting. When a node is determining whether a [transaction](http://support.avalabs.org/en/articles/4587384-what-is-a-transaction) should be accepted, it asks a small, random subset of [validator nodes](http://support.avalabs.org/en/articles/4064704-what-is-a-blockchain-validator) for their preference. Each queried validator replies with the transaction that it prefers, or thinks should be accepted.
Consensus will never include a transaction that is determined to be **invalid**. For example, if you were to submit a transaction to send 100 AVAX to a friend, but your wallet only has 2 AVAX, this transaction is considered **invalid** and will not participate in consensus.
If a sufficient majority of the validators sampled reply with the same preferred transaction, this becomes the preferred choice of the validator that inquired.
In the future, this node will reply with the transaction preferred by the majority.
The node repeats this sampling process until the validators queried reply with the same answer for a sufficient number of consecutive rounds.
* The number of validators required to be considered a "sufficient majority" is referred to as "α" (*alpha*).
* The number of consecutive rounds required to reach consensus, a.k.a. the "Confidence Threshold," is referred to as "β" (*beta*).
* Both α and β are configurable.
When a transaction has no conflicts, finalization happens very quickly. When conflicts exist, honest validators quickly cluster around one of the conflicting transactions, entering a positive feedback loop until all correct validators prefer that transaction. This leads to the acceptance of non-conflicting transactions and the rejection of conflicting transactions.

Avalanche Consensus guarantees that if any honest validator accepts a transaction, all honest validators will come to the same conclusion.
For a great visualization, check out [this demo](https://tedyin.com/archive/snow-bft-demo/#/snow).
## Deep Dive Into Avalanche Consensus
### Intuition
First, let's develop some intuition about the protocol. Imagine a room full of people trying to agree on what to get for lunch. Suppose it's a binary choice between pizza and barbecue. Some people might initially prefer pizza while others initially prefer barbecue. Ultimately, though, everyone's goal is to achieve **consensus**.
Everyone asks a random subset of the people in the room what their lunch preference is. If more than half say pizza, the person thinks, "OK, looks like things are leaning toward pizza. I prefer pizza now." That is, they adopt the *preference* of the majority. Similarly, if a majority say barbecue, the person adopts barbecue as their preference.
Everyone repeats this process. Each round, more and more people have the same preference. This is because the more people that prefer an option, the more likely someone is to receive a majority reply and adopt that option as their preference. After enough rounds, they reach consensus and decide on one option, which everyone prefers.
### Snowball
The intuition above outlines the Snowball Algorithm, which is a building block of Avalanche consensus. Let's review the Snowball algorithm.
#### Parameters
* *n*: number of participants
* *k* (sample size): between 1 and *n*
* α (quorum size): between 1 and *k*
* β (decision threshold): >= 1
#### Algorithm
```
preference := pizza
consecutiveSuccesses := 0
while not decided:
  ask k random people their preference
  if >= α give the same response:
    preference := response with >= α
    if preference == old preference:
      consecutiveSuccesses++
    else:
      consecutiveSuccesses = 1
  else:
    consecutiveSuccesses = 0
  if consecutiveSuccesses >= β:
    decide(preference)
```
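The loop above can be simulated for a whole network of participants. The following is a toy sketch, not the AvalancheGo implementation: it uses synchronous rounds, uniform sampling, and small hypothetical parameter values, whereas real nodes sample by stake weight and run asynchronously.

```python
import random

def snowball_network(initial_prefs, k=10, alpha=7, beta=10, seed=0, max_rounds=1000):
    """Simulate every node running Snowball in lockstep rounds (toy model)."""
    rng = random.Random(seed)
    prefs = list(initial_prefs)
    n = len(prefs)
    successes = [0] * n      # consecutiveSuccesses per node
    decided = [None] * n     # final choice per node, or None
    for _ in range(max_rounds):
        if all(d is not None for d in decided):
            break
        new_prefs = prefs[:]
        for i in range(n):
            if decided[i] is not None:
                continue
            # Query k random participants for their preference.
            votes = {}
            for j in rng.sample(range(n), k):
                v = decided[j] if decided[j] is not None else prefs[j]
                votes[v] = votes.get(v, 0) + 1
            choice, count = max(votes.items(), key=lambda kv: kv[1])
            if count >= alpha:           # an alpha-majority quorum
                if choice == prefs[i]:
                    successes[i] += 1
                else:
                    successes[i] = 1
                new_prefs[i] = choice
            else:                        # no quorum: reset the counter
                successes[i] = 0
            if successes[i] >= beta:     # beta quorums in a row: decide
                decided[i] = new_prefs[i]
        prefs = new_prefs
    return decided

decided = snowball_network(["pizza"] * 45 + ["barbecue"] * 5, seed=1)
```

Starting from a 90/10 split, every simulated node settles on the same choice within a few dozen rounds.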
#### Algorithm Explained
Everyone has an initial preference for pizza or barbecue. Until someone has *decided*, they query *k* people (the sample size) and ask them what they prefer. If α or more people give the same response, that response is adopted as the new preference. α is called the *quorum size*. If the new preference is the same as the old preference, the `consecutiveSuccesses` counter is incremented. If the new preference is different from the old preference, the `consecutiveSuccesses` counter is set to `1`. If no response gets a quorum (an α majority of the same response), the `consecutiveSuccesses` counter is set to `0`.
Everyone repeats this until they get a quorum for the same response β times in a row. If one person decides pizza, then every other person following the protocol will eventually also decide on pizza.
Random fluctuations in preference, caused by random sampling, create a network-wide lean toward one choice; that lean begets more preference for the same choice until it becomes irreversible, at which point the nodes can decide.
In our example, there is a binary choice between pizza or barbecue, but Snowball can be adapted to achieve consensus on decisions with many possible choices.
The liveness and safety thresholds are parameterizable. As the quorum size, α, increases, the safety threshold increases, and the liveness threshold decreases. This means the network can tolerate more byzantine (deliberately incorrect, malicious) nodes and remain safe, meaning all nodes will eventually agree whether something is accepted or rejected. The liveness threshold is the number of malicious participants that can be tolerated before the protocol is unable to make progress.
These values, which are constants, are quite small on the Avalanche Network. The sample size, *k*, is `20`. So when a node asks a group of nodes their opinion, it only queries `20` nodes out of the whole network. The quorum size, α, is `14`. So if `14` or more nodes give the same response, that response is adopted as the querying node's preference. The decision threshold, β, is `20`. A node decides on choice after receiving `20` consecutive quorum (α majority) responses.
Snowball is very scalable as the number of nodes on the network, *n*, increases. Regardless of the number of participants in the network, the number of consensus messages sent remains the same because in a given query, a node only queries `20` nodes, even if there are thousands of nodes in the network.
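To get a feel for these constants, one can compute the probability that a single query yields an α-majority, under the idealized assumption of independent, uniform sampling from a network in which a fraction `p` prefers the option:

```python
from math import comb

def quorum_prob(p, k=20, alpha=14):
    """P(at least alpha of the k sampled validators prefer the option).

    Idealized binomial model: samples are independent and uniform, with a
    fraction p of the network preferring the option.
    """
    return sum(comb(k, i) * p ** i * (1 - p) ** (k - i)
               for i in range(alpha, k + 1))
```

With an even 50/50 split, a quorum is rare (roughly 6% per query); once about 90% of the network prefers an option, nearly every query succeeds. This is the positive feedback loop that drives convergence.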
Everything discussed to this point is how Avalanche is described in [the Avalanche white-paper](https://assets-global.website-files.com/5d80307810123f5ffbb34d6e/6009805681b416f34dcae012_Avalanche%20Consensus%20Whitepaper.pdf). The implementation of the Avalanche consensus protocol by Ava Labs (namely in AvalancheGo) has some optimizations for latency and throughput.
### Blocks
A block is a fundamental component that forms the structure of a blockchain. It serves as a container or data structure that holds a collection of transactions or other relevant information. Each block is cryptographically linked to the previous block, creating a chain of blocks, hence the term "blockchain."
In addition to storing a reference of its parent, a block contains a set of transactions. These transactions can represent various types of information, such as financial transactions, smart contract operations, or data storage requests.
If a node receives a vote for a block, it also counts as a vote for all of the block's ancestors (its parent, the parents' parent, etc.).
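The transitive rule can be sketched as follows (hypothetical block IDs; actual vote bookkeeping in AvalancheGo is more involved):

```python
def apply_vote(parents, counts, block):
    """Record a vote for `block` and, transitively, for every ancestor."""
    while block is not None:
        counts[block] = counts.get(block, 0) + 1
        block = parents.get(block)  # walk up to the parent, if any

# A chain genesis <- A <- B: one vote for B is three votes in effect.
parents = {"A": "genesis", "B": "A", "genesis": None}
counts = {}
apply_vote(parents, counts, "B")
# counts == {"B": 1, "A": 1, "genesis": 1}
```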
### Finality
Avalanche consensus is probabilistically safe up to a safety threshold. That is, the probability that a correct node accepts a transaction that another correct node rejects can be made arbitrarily low by adjusting system parameters. In Nakamoto consensus protocols (as used in Bitcoin and Ethereum, for example), a block may be included in the chain but then be removed, never ending up in the canonical chain. This can mean waiting an hour or more for transaction settlement. In Avalanche, acceptance and rejection are **final and irreversible** and take only a few seconds.
### Optimizations
It's not safe for nodes to just ask, "Do you prefer this block?" when they query validators. In Ava Labs' implementation, during a query a node asks, "Given that this block exists, which block do you prefer?" Instead of getting back a binary yes/no, the node receives the other node's preferred block.
Nodes don't only query upon hearing of a new block; they repeatedly query other nodes until there are no blocks processing.
Nodes may not need to wait until they get all *k* query responses before registering the outcome of a poll. If a block has already received α votes, then there's no need to wait for the rest of the responses.
### Validators
If it were free to become a validator on the Avalanche network, that would be problematic because a malicious actor could start many, many nodes which would get queried very frequently. The malicious actor could make the node act badly and cause a safety or liveness failure. The validators, the nodes which are queried as part of consensus, have influence over the network. They have to pay for that influence with real-world value in order to prevent this kind of ballot stuffing. This idea of using real-world value to buy influence over the network is called Proof of Stake.
To become a validator, a node must **bond** (stake) something valuable (**AVAX**). The more AVAX a node bonds, the more often that node is queried by other nodes. When a node samples the network it's not uniformly random. Rather, it's weighted by stake amount. Nodes are incentivized to be validators because they get a reward if, while they validate, they're sufficiently correct and responsive.
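Stake-weighted sampling can be illustrated as follows. This is a toy sketch with hypothetical stake amounts: it samples with replacement via `random.choices`, whereas a real implementation samples validators without replacement, weighted by stake.

```python
import random
from collections import Counter

def sample_validators(stakes, k, rng):
    """Pick k validators to query, weighted by their staked amount."""
    nodes = list(stakes)
    weights = [stakes[n] for n in nodes]
    return rng.choices(nodes, weights=weights, k=k)

rng = random.Random(0)
stakes = {"heavy": 900, "light": 100}  # hypothetical stake, e.g. in AVAX
tally = Counter(sample_validators(stakes, 1000, rng))
# "heavy" is queried roughly nine times as often as "light"
```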
Avalanche doesn't have slashing. If a node doesn't behave well while validating, such as giving incorrect responses or perhaps not responding at all, its stake is still returned in whole, but with no reward. As long as a sufficient portion of the bonded AVAX is held by correct nodes, then the network is safe, and is live for virtuous transactions.
### Big Ideas
Two big ideas in Avalanche are **subsampling** and **transitive voting**.
Subsampling has low message overhead. It doesn't matter if there are twenty validators or two thousand validators; the number of consensus messages a node sends during a query remains constant.
Transitive voting, where a vote for a block is a vote for all its ancestors, helps with transaction throughput. Each vote is actually many votes in one.
### Loose Ends
Transactions are created by users which call an API on an [AvalancheGo](https://github.com/ava-labs/avalanchego) full node or create them using a library such as [AvalancheJS](https://github.com/ava-labs/avalanchejs).
### Other Observations
Conflicting transactions are not guaranteed to be live. That's not really a problem because if you want your transaction to be live then you should not issue a conflicting transaction.
Snowman is the name of Ava Labs' implementation of the Avalanche consensus protocol for linear chains.
If there are no undecided transactions, the Avalanche consensus protocol *quiesces*. That is, it does nothing if there is no work to be done. This makes Avalanche more sustainable than Proof-of-work where nodes need to constantly do work.
Avalanche has no leader. Any node can propose a transaction and any node that has staked AVAX can vote on every transaction, which makes the network more robust and decentralized.
## Why Do We Care?
Avalanche is a general consensus engine. It doesn't matter what type of application is put on top of it. The protocol allows the decoupling of the application layer from the consensus layer. If you're building a dApp on Avalanche, you just need to define a few things, like how conflicts are defined and what is in a transaction. You don't need to worry about how nodes come to an agreement. The consensus protocol is a black box: you put something into it, and it comes back as accepted or rejected.
Avalanche can be used for all kinds of applications, not just P2P payment networks. Avalanche's Primary Network has an instance of the Ethereum Virtual Machine, which is backward compatible with existing Ethereum Dapps and dev tooling. The Ethereum consensus protocol has been replaced with Avalanche consensus to enable lower block latency and higher throughput.
Avalanche is very performant. It can process thousands of transactions per second with one to two second acceptance latency.
## Summary
Avalanche consensus is a radical breakthrough in distributed systems. It represents as large a leap forward as the classical and Nakamoto consensus protocols that came before it. Now that you have a better understanding of how it works, check out the rest of the documentation to build game-changing dApps and financial instruments on Avalanche.
# Avalanche L1s
URL: /docs/quick-start/avalanche-l1s
Explore the multi-chain architecture of Avalanche ecosystem.
An Avalanche L1 is a sovereign network which defines its own rules regarding its membership and token economics. It is composed of a dynamic subset of Avalanche validators working together to achieve consensus on the state of one or more blockchains. Each blockchain is validated by exactly one Avalanche L1, while an Avalanche L1 can validate many blockchains.
Avalanche's [Primary Network](/docs/quick-start/primary-network) is a special Avalanche L1 running three blockchains:
* The Platform Chain [(P-Chain)](/docs/quick-start/primary-network#p-chain)
* The Contract Chain [(C-Chain)](/docs/quick-start/primary-network#c-chain)
* The Exchange Chain [(X-Chain)](/docs/quick-start/primary-network#x-chain)

Every validator of an Avalanche L1 **must** sync the P-Chain of the Primary Network for interoperability.
Node operators that validate an Avalanche L1 with multiple chains do not need to run multiple machines for validation. For example, the Primary Network is an Avalanche L1 with three coexisting chains, all of which can be validated by a single node, or a single machine.
## Advantages
### Independent Networks
* Avalanche L1s use virtual machines to specify their own execution logic, determine their own fee regime, maintain their own state, facilitate their own networking, and provide their own security.
* Each Avalanche L1's performance is isolated from other Avalanche L1s in the ecosystem, so increased usage on one Avalanche L1 won't affect another.
* Avalanche L1s can have their own token economics with their own native tokens, fee markets, and incentives determined by the Avalanche L1 deployer.
* One Avalanche L1 can host multiple blockchains with customized [virtual machines](/docs/quick-start/virtual-machines).
### Native Interoperability
Avalanche Warp Messaging enables native cross-Avalanche L1 communication and allows Virtual Machine (VM) developers to implement arbitrary communication protocols between any two Avalanche L1s.
### Accommodate App-Specific Requirements
Different blockchain-based applications may require validators to have certain properties such as large amounts of RAM or CPU power.
An Avalanche L1 could require that validators meet certain [hardware requirements](/docs/nodes/system-requirements#hardware-and-operating-systems) so that the application doesn't suffer from low performance due to slow validators.
### Launch Networks Designed With Compliance
Avalanche's L1 architecture makes regulatory compliance manageable. As mentioned above, an Avalanche L1 may require validators to meet a set of requirements.
Some examples of requirements the creators of an Avalanche L1 may choose include:
* Validators must be located in a given country.
* Validators must pass KYC/AML checks.
* Validators must hold a certain license.
### Control Privacy of On-Chain Data
Avalanche L1s are ideal for organizations interested in keeping their information private.
Institutions conscious of their stakeholders' privacy can create a private Avalanche L1 where the contents of the blockchains would be visible only to a set of pre-approved validators.
Define this at creation with a [single parameter](/docs/nodes/configure/avalanche-l1-configs#private-avalanche-l1).
### Validator Sovereignty
In a heterogeneous network of blockchains, some validators will not want to validate certain blockchains because they simply have no interest in those blockchains.
The Avalanche L1 model enables validators to concern themselves only with blockchain networks they choose to participate in. This greatly reduces the computational burden on validators.
## Develop Your Own Avalanche L1
Avalanche L1s on Avalanche are deployed by default with [Subnet-EVM](https://github.com/ava-labs/subnet-evm#subnet-evm), a fork of go-ethereum. It implements the Ethereum Virtual Machine and supports Solidity smart contracts as well as most other Ethereum client functionality.
To get started, check out our [L1 Toolbox](/tools/l1-toolbox) or the tutorials in the [Avalanche CLI](/docs/tooling/create-avalanche-l1) section.
# AVAX Token
URL: /docs/quick-start/avax-token
Learn about the native token of Avalanche Primary Network.
AVAX is the native utility token of Avalanche. It's a hard-capped, scarce asset that is used to pay for fees, secure the platform through staking, and provide a basic unit of account between the multiple Avalanche L1s created on Avalanche.
`1 nAVAX` is equal to `0.000000001 AVAX`.
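In code, the conversion is a straightforward power-of-ten scaling. A minimal sketch (the constant and function names here are illustrative, not from any client library):

```python
NANO_AVAX_PER_AVAX = 10**9  # 1 AVAX = 1_000_000_000 nAVAX

def navax_to_avax(navax: int) -> float:
    """Convert an amount denominated in nAVAX into AVAX."""
    return navax / NANO_AVAX_PER_AVAX

print(navax_to_avax(1_000_000_000))  # 1.0
print(navax_to_avax(1))              # 1e-09
```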
## Utility
AVAX is a capped-supply (up to 720M) resource in the Avalanche ecosystem that's used to power the
network. AVAX is used to secure the ecosystem through staking and for day-to-day operations like
issuing transactions.
AVAX represents the weight that each node has in network decisions. No single actor owns
the Avalanche Network, so each validator in the network is given a proportional weight in the
network's decisions corresponding to the proportion of total stake that they own through proof
of stake (PoS).
Any entity trying to execute a transaction on Avalanche pays a corresponding fee (commonly known as
"gas") to run it on the network. The fees used to execute transactions on Avalanche are burned,
or permanently removed from circulating supply.
## Tokenomics
A fixed amount of 360M AVAX was minted at genesis, but a small amount of AVAX is constantly minted
as a reward to validators. The protocol rewards validators for good behavior by minting them AVAX
rewards at the end of their staking period. The minting process offsets the AVAX burned by
transaction fees. While AVAX is still far from its supply cap, it will almost always remain an
inflationary asset.
Avalanche does not take away any portion of a validator's already staked tokens (commonly known as
"slashing") for negligent/malicious staking periods, however this behavior is disincentivized as
validators who attempt to do harm to the network would expend their node's computing resources
for no reward.
AVAX is minted according to the following formula, where $R_j$ is the total number of tokens at
year $j$, with $R_1 = 360M$, and $R_l$ representing the last year that the values of
$\gamma,\lambda \in \mathbb{R}$ were changed; $c_j$ is the yet-unminted supply of coins to reach $720M$ at
year $j$ such that $c_j \leq 360M$; $u$ represents a staker, with $u.s_{amount}$ representing the
total amount of stake that $u$ possesses, and $u.s_{time}$ the length of staking for $u$.
$$
R_j = R_l + \sum_{\forall u} \rho(u.s_{amount}, u.s_{time}) \times \frac{c_j}{L} \times \left( \sum_{i=0}^{j}\frac{1}{\left(\gamma + \frac{1}{1 + i^\lambda}\right)^i} \right)
$$
where,
$$
L = \left(\sum_{i=0}^{\infty} \frac{1}{\left(\gamma + \frac{1}{1 + i^\lambda} \right)^i} \right)
$$
At genesis, $c_1 = 360M$. The values of $\gamma$ and $\lambda$ are governable, and if changed,
the function is recomputed with the new value of $c_*$. We have that $\sum_{*}\rho(*) \le 1$.
$\rho(*)$ is a linear function that can be computed as follows ($u.s_{time}$ is measured in weeks,
and $u.s_{amount}$ is measured in AVAX tokens):
$$
\rho(u.s_{amount}, u.s_{time}) = (0.002 \times u.s_{time} + 0.896) \times \frac{u.s_{amount}}{R_j}
$$
If the entire supply of tokens at year $j$ is staked for the maximum amount of staking time (one
year, or 52 weeks), then $\sum_{\forall u}\rho(u.s_{amount}, u.s_{time}) = 1$. If, instead,
every token is staked continuously for the minimal stake duration of two weeks, then
$\sum_{\forall u}\rho(u.s_{amount}, u.s_{time}) = 0.9$. Therefore, staking for the maximum
amount of time yields an additional 11.11% of tokens minted, incentivizing stakers to stake
for longer periods.
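The bounds above can be checked numerically. The sketch below implements the linear $\rho(*)$ function from this section; the names are illustrative and not taken from any client implementation.

```python
def rho(stake_amount: float, stake_weeks: float, total_supply: float) -> float:
    """Linear staking-rate function rho(u.s_amount, u.s_time) from the text.

    stake_weeks is the staking duration in weeks; amounts are in AVAX.
    """
    return (0.002 * stake_weeks + 0.896) * (stake_amount / total_supply)

R_j = 360_000_000  # genesis supply, in AVAX

# Entire supply staked for the maximum duration (52 weeks): sum of rho ~= 1.0
max_rate = rho(R_j, 52, R_j)
# Entire supply staked for the minimum duration (2 weeks): sum of rho ~= 0.9
min_rate = rho(R_j, 2, R_j)

print(max_rate, min_rate)
# Staking for the full year mints ~11.11% more than the two-week minimum:
print((max_rate - min_rate) / min_rate * 100)
```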
Due to the capped supply, the above function guarantees that
AVAX will never exceed a total of $720M$ tokens: $\lim_{j \to \infty} R_j = 720M$.
# Disclaimer
URL: /docs/quick-start/disclaimer
The Knowledge Base, including all the Help articles on this site, is provided for technical support purposes only, without representation, warranty or guarantee of any kind.
Not an offer to sell or solicitation of an offer to buy any security or other regulated financial instrument. Not technical, investment, financial, accounting, tax, legal or other advice; please consult your own professionals.
Please conduct your own research before connecting to or interacting with any dapp or third party or making any investment or financial decisions.
MoonPay, ParaSwap and any other third party services or dapps you access are offered by third parties unaffiliated with us.
Please review this [Notice](https://assets.website-files.com/602e8e4411398ca20cfcafd3/60ec9607c853cd466383f1ad_Important%20Notice%20-%20avalabs.org.pdf) and the [Terms of Use](https://core.app/terms/core).
# Introduction
URL: /docs/quick-start
Learn about Avalanche Protocol and its unique features.
Avalanche is an open-source platform for building decentralized applications in one interoperable, decentralized, and highly scalable ecosystem.
Powered by a uniquely powerful [consensus mechanism](/docs/quick-start/avalanche-consensus), Avalanche is the first ecosystem designed to accommodate the scale of global finance, with near-instant transaction finality.
## Blazingly Fast
Avalanche employs the fastest consensus mechanism of any Layer 1 blockchain. The unique consensus mechanism enables quick finality and low latency: in less than 2 seconds, your transaction is effectively processed and verified.
## Built to Scale
Developers who build on Avalanche can build application-specific blockchains with complex rulesets or build on existing private or public Avalanche L1s in any language.
Avalanche is incredibly energy-efficient and can run easily on consumer-grade hardware. The entire Avalanche network consumes the same amount of energy as 46 US households, equivalent to 0.0005% of the amount of energy consumed by Bitcoin.
Solidity developers can build on Avalanche's implementation of the EVM straight out-of-the box, or build their own custom Virtual Machine (VM) for advanced use cases.
## Advanced Security
Avalanche consensus scales to thousands of concurrent validators without suffering performance degradation, making it one of the most secure protocols for internet-scale systems.
Permissionless and permissioned custom blockchains deployed as Avalanche L1s can include custom rulesets designed to be compliant with legal and jurisdictional considerations.
# Primary Network
URL: /docs/quick-start/primary-network
Learn about the Avalanche Primary Network and its three blockchains.
Avalanche is a heterogeneous network of blockchains. As opposed to homogeneous networks, where all applications reside in the same chain, heterogeneous networks allow separate chains to be created for different applications.
The Primary Network is a special [Avalanche L1](/docs/quick-start/avalanche-l1s) that runs three blockchains:
* The Platform Chain [(P-Chain)](/docs/quick-start/primary-network#p-chain)
* The Contract Chain [(C-Chain)](/docs/quick-start/primary-network#c-chain)
* The Exchange Chain [(X-Chain)](/docs/quick-start/primary-network#x-chain)
[Avalanche Mainnet](/docs/quick-start/networks/mainnet) comprises the Primary Network and all deployed Avalanche L1s.
A node can become a validator for the Primary Network by staking at least **2,000 AVAX**.

## The Chains
All validators of the Primary Network are required to validate and secure the following:
### C-Chain
The **C-Chain** is an implementation of the Ethereum Virtual Machine (EVM). The [C-Chain's API](/docs/api-reference/c-chain/api) is compatible with Geth's API and supports the deployment and execution of smart contracts written in Solidity.
The C-Chain is an instance of the [Coreth](https://github.com/ava-labs/coreth) Virtual Machine.
### P-Chain
The **P-Chain** is responsible for all validator and Avalanche L1-level operations. The [P-Chain API](/docs/api-reference/p-chain/api) supports the creation of new blockchains and Avalanche L1s, the addition of validators to Avalanche L1s, staking operations, and other platform-level operations.
The P-Chain is an instance of the Platform Virtual Machine.
### X-Chain
The **X-Chain** is responsible for operations on digital smart assets known as **Avalanche Native Tokens**. A smart asset is a representation of a real-world resource (for example, equity, or a bond) with sets of rules that govern its behavior, like "can't be traded until tomorrow." The [X-Chain API](/docs/api-reference/x-chain/api) supports the creation and trade of Avalanche Native Tokens.
One asset traded on the X-Chain is AVAX. When you issue a transaction to a blockchain on Avalanche, you pay a fee denominated in AVAX.
The X-Chain is an instance of the Avalanche Virtual Machine (AVM).
# Rewards Formula
URL: /docs/quick-start/rewards-formula
Learn about the rewards formula for the Avalanche Primary Network validator
## Primary Network Validator Rewards
Consider a Primary Network validator which stakes a $Stake$ amount of `AVAX` for $StakingPeriod$ seconds.
The potential reward is calculated **at the beginning of the staking period**. At the beginning of the staking period there is a $Supply$ amount of `AVAX` in the network. The maximum amount of `AVAX` is $MaximumSupply$. At the end of its staking period, a responsive Primary Network validator receives a reward.
$$
PotentialReward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{StakingPeriod}{MintingPeriod} \times EffectiveConsumptionRate
$$
where,
$$
MaximumSupply - Supply = \text{the number of AVAX tokens left to emit in the network}
$$
$$
\frac{Stake}{Supply} = \text{the individual's stake as a percentage of all available AVAX tokens in the network}
$$
$$
\frac{StakingPeriod}{MintingPeriod} = \text{time tokens are locked up divided by the $MintingPeriod$}
$$
($MintingPeriod$ is one year, as configured by the network.)
$$
EffectiveConsumptionRate = \frac{MinConsumptionRate}{PercentDenominator} \times \left(1 - \frac{StakingPeriod}{MintingPeriod}\right) + \frac{MaxConsumptionRate}{PercentDenominator} \times \frac{StakingPeriod}{MintingPeriod}
$$
Note that $StakingPeriod$ is the staker's entire staking period, not just the staker's uptime (the aggregated time during which the staker has been responsive). The uptime comes into play only to decide whether a staker should be rewarded; to calculate the actual reward, only the staking period duration is taken into account.
$EffectiveConsumptionRate$ is the rate at which the Primary Network validator is rewarded based on $StakingPeriod$ selection.
$MinConsumptionRate$ and $MaxConsumptionRate$ bound $EffectiveConsumptionRate$:
$$
MinConsumptionRate \leq EffectiveConsumptionRate \leq MaxConsumptionRate
$$
The larger $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MaxConsumptionRate$. The smaller $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MinConsumptionRate$.
A staker achieves the maximum reward for its stake if $StakingPeriod$ = $Minting Period$. The reward is:
$$
MaxReward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{MaxConsumptionRate}{PercentDenominator}
$$
Note that this formula is the same as the reward formula at the top of this section because $EffectiveConsumptionRate$ = $MaxConsumptionRate$.
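As an illustration, the potential-reward formula can be evaluated directly. The sketch below is a minimal Python rendering using the Mainnet parameter values listed on this page; the function name and the example supply figure are illustrative, not from any client API.

```python
PERCENT_DENOMINATOR = 1_000_000
MIN_CONSUMPTION_RATE = 0.10 * PERCENT_DENOMINATOR  # Mainnet value
MAX_CONSUMPTION_RATE = 0.12 * PERCENT_DENOMINATOR  # Mainnet value
MINTING_PERIOD = 365 * 24 * 60 * 60                # one year, in seconds
MAXIMUM_SUPPLY = 720_000_000                       # AVAX

def potential_reward(stake: float, staking_period: float, supply: float) -> float:
    """PotentialReward for a responsive Primary Network validator, per the formula above."""
    ratio = staking_period / MINTING_PERIOD
    effective_rate = (
        (MIN_CONSUMPTION_RATE / PERCENT_DENOMINATOR) * (1 - ratio)
        + (MAX_CONSUMPTION_RATE / PERCENT_DENOMINATOR) * ratio
    )
    return (MAXIMUM_SUPPLY - supply) * (stake / supply) * ratio * effective_rate

# 2,000 AVAX staked for a full year, with an assumed supply of 450M AVAX:
# reward = 270M * (2,000 / 450M) * 1 * 0.12, which is ~144 AVAX
print(potential_reward(2_000, MINTING_PERIOD, 450_000_000))
```

With `staking_period = MINTING_PERIOD`, the effective rate collapses to `MAX_CONSUMPTION_RATE`, matching the maximum-reward formula above.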
For reference, you can find all the Primary Network parameters in [the section below](#primary-network-parameters-on-mainnet).
## Delegators Weight Checks
There are bounds on the maximum amount of delegated stake that a validator can receive.
The maximum weight $MaxWeight$ a validator $Validator$ can have is:
$$
MaxWeight = \min(Validator.Weight \times MaxValidatorWeightFactor, MaxValidatorStake)
$$
where $MaxValidatorWeightFactor$ and $MaxValidatorStake$ are the Primary Network parameters listed below.
A delegator won't be added to a validator if the sum of the delegator's weight, the validator's weight, and the weights of all of the validator's other delegators would exceed $MaxWeight$. Note that this must hold at every point in time.
Note that setting $MaxValidatorWeightFactor$ to 1 disables delegation, since then $MaxWeight = Validator.Weight$.
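The weight check can be sketched in a few lines, assuming Mainnet's $MaxValidatorWeightFactor = 5$ and $MaxValidatorStake = 3M$ AVAX; the function names are illustrative.

```python
MAX_VALIDATOR_WEIGHT_FACTOR = 5
MAX_VALIDATOR_STAKE = 3_000_000  # AVAX, Mainnet parameter

def max_weight(validator_weight: float) -> float:
    """MaxWeight = min(Validator.Weight * MaxValidatorWeightFactor, MaxValidatorStake)."""
    return min(validator_weight * MAX_VALIDATOR_WEIGHT_FACTOR, MAX_VALIDATOR_STAKE)

def can_add_delegator(validator_weight: float,
                      current_delegated: float,
                      new_delegation: float) -> bool:
    """A delegator is accepted only if the validator's total weight stays within MaxWeight."""
    total = validator_weight + current_delegated + new_delegation
    return total <= max_weight(validator_weight)

# A 2,000 AVAX validator can carry at most 10,000 AVAX of total weight:
print(can_add_delegator(2_000, 0, 8_000))  # True
print(can_add_delegator(2_000, 8_000, 1))  # False
```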
## Notes on Percentages
`PercentDenominator = 1_000_000` is the denominator used to calculate percentages.
It allows you to specify percentages up to 4 decimal places. To denominate your percentage in `PercentDenominator` units, multiply it by `10_000`. For example:
* `100%` corresponds to `100 * 10_000 = 1_000_000`
* `1%` corresponds to `1 * 10_000 = 10_000`
* `0.02%` corresponds to `0.02 * 10_000 = 200`
* `0.0007%` corresponds to `0.0007 * 10_000 = 7`
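The conversion above can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
PERCENT_DENOMINATOR = 1_000_000

def percent_to_denominated(percent: float) -> int:
    """Convert a percentage (e.g. 2 for 2%) into PercentDenominator units."""
    return round(percent / 100 * PERCENT_DENOMINATOR)  # equivalent to percent * 10_000

print(percent_to_denominated(100))     # prints 1000000
print(percent_to_denominated(2))       # prints 20000 (the Mainnet MinDelegationFee)
print(percent_to_denominated(0.02))    # prints 200
print(percent_to_denominated(0.0007))  # prints 7
```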
## Primary Network Parameters on Mainnet
For reference, the Primary Network parameters on Mainnet are listed below:
* `AssetID = Avax`
* `InitialSupply = 240_000_000 Avax`
* `MaximumSupply = 720_000_000 Avax`.
* `MinConsumptionRate = 0.10 * reward.PercentDenominator`.
* `MaxConsumptionRate = 0.12 * reward.PercentDenominator`.
* `Minting Period = 365 * 24 * time.Hour`.
* `MinValidatorStake = 2_000 Avax`.
* `MaxValidatorStake = 3_000_000 Avax`.
* `MinStakeDuration = 2 * 7 * 24 * time.Hour`.
* `MaxStakeDuration = 365 * 24 * time.Hour`.
* `MinDelegationFee = 20000`, that is `2%`.
* `MinDelegatorStake = 25 Avax`.
* `MaxValidatorWeightFactor = 5`. This is a platformVM parameter rather than a genesis one, so it's shared across networks.
* `UptimeRequirement = 0.8`, that is `80%`.
### Interactive Graph
The graph below demonstrates the reward as a function of the length of time
staked. The x-axis depicts $\frac{StakingPeriod}{MintingPeriod}$ as a percentage
while the y-axis depicts $Reward$ as a percentage of $MaximumSupply - Supply$,
the amount of tokens left to be emitted.
Graph variables correspond to those defined above:
* `h` (high) = $MaxConsumptionRate$
* `l` (low) = $MinConsumptionRate$
* `s` = $\frac{Stake}{Supply}$
# Validator Management
URL: /docs/quick-start/validator-manager
Learn about the Validator Manager contract suite for Avalanche L1s
The Validator Manager contract suite allows Avalanche Layer 1s (L1s) to manage and enforce custom logic for validator sets through smart contracts.
### Choosing Between Proof of Authority and Proof of Stake Chains
Organizations may opt to run a Proof of Authority (PoA) or a Proof of Stake (PoS) chain based on their specific needs and objectives.
#### Proof of Authority
In a PoA chain, a limited number of validators are pre-approved and recognized entities. This model is ideal for organizations that require:
* **Control and Compliance**: Regulatory compliance or the need for trusted validators.
* **Simplified Governance**: Easier coordination among validators.
PoA is often used by private enterprises, consortiums, or government agencies where validator identity is crucial, and a controlled environment is preferred.
#### Proof of Stake
In a PoS chain, validators are selected based on the amount of stake (tokens) they hold and are willing to lock up. This model is suitable for organizations aiming for:
* **Decentralization**: Encouraging a wide distribution of validators.
* **Security**: Economic incentives align validator behavior with network health.
* **Community Participation**: Allowing token holders to participate in network validation.
PoS chains are ideal for public networks or organizations that wish to build an open ecosystem with active community involvement.
### Enforcing Custom Validation Logic via Smart Contracts
Avalanche L1s have the unique capability to enforce any validation logic that can be encoded via smart contracts. This flexibility allows developers and organizations to define custom rules and conditions for validator participation in their networks. By leveraging smart contracts, L1s can implement complex validation mechanisms, such as dynamic validator sets, customized staking requirements, or hybrid consensus models.
Smart contracts act as the governing code that dictates how validators are selected, how they behave, and under what conditions they can participate in the network. This programmable approach ensures that the validation logic is transparent, auditable, and can be updated or modified as needed to adapt to changing requirements or threats.
***
[Learn more about the Validator Manager contract suite](/docs/avalanche-l1s/validator-manager/contract)
[Build your first Avalanche L1](/docs/tooling/create-avalanche-l1)
# Virtual Machines
URL: /docs/quick-start/virtual-machines
Learn about blockchain VMs and how you can build a custom VM-enabled blockchain in Avalanche.
A **Virtual Machine** (VM) is the blueprint for a blockchain, meaning it defines a blockchain's complete application logic by specifying the blockchain's state, state transitions, transaction rules, and API interface.
Developers can use the same VM to create multiple blockchains, each of which follows identical rules but is independent of all others.
All Avalanche validators of the **Avalanche Primary Network** are required to run three VMs:
* **Coreth**: Defines the Contract Chain (C-Chain); supports smart contract functionality and is EVM-compatible.
* **Platform VM**: Defines the Platform Chain (P-Chain); supports operations on staking and Avalanche L1s.
* **Avalanche VM**: Defines the Exchange Chain (X-Chain); supports operations on Avalanche Native Tokens.
All three can easily be run on any computer with [AvalancheGo](/docs/nodes).
## Custom VMs on Avalanche
Developers with advanced use-cases for utilizing distributed ledger technology are often forced to build everything from scratch - networking, consensus, and core infrastructure - before even starting on the actual application.
Avalanche eliminates this complexity by:
* Providing VMs as simple blueprints for defining blockchain behavior
* Supporting development in any programming language with familiar tools
* Handling all low-level infrastructure automatically
This lets developers focus purely on building their dApps, ecosystems, and communities, rather than wrestling with blockchain fundamentals.
### How Custom VMs Work
Customized VMs can communicate with Avalanche over a language agnostic request-response protocol known as [RPC](https://en.wikipedia.org/wiki/Remote_procedure_call). This allows the VM framework to open a world of endless possibilities, as developers can implement their dApps using the languages, frameworks, and libraries of their choice.
Validators can install additional VMs on their node to validate additional [Avalanche L1s](/docs/quick-start/avalanche-l1s) in the Avalanche ecosystem. In exchange, validators receive staking rewards in the form of a reward token determined by the Avalanche L1s.
## Building a Custom VM
You can start building your first custom virtual machine in two ways:
1. Use the ready-to-deploy Subnet-EVM for Solidity-based development
2. Create a custom VM in Golang, Rust, or your preferred language
The choice depends on your needs. Subnet-EVM provides a quick start with Ethereum compatibility, while custom VMs offer maximum flexibility.
### Golang Examples
See here for a tutorial on [How to Build a Simple Golang VM](/docs/virtual-machines/golang-vms/simple-golang-vm).
### Rust Examples
See here for a tutorial on [How to Build a Simple Rust VM](/docs/virtual-machines/rust-vms/setting-up-environment).
# Introduction
URL: /docs/nodes
A brief introduction to the concepts of nodes and validators within the Avalanche ecosystem.
The Avalanche network is a decentralized platform designed for high throughput and low latency, enabling a wide range of applications. At the core of the network are nodes and validators, which play vital roles in maintaining the network's security, reliability, and performance.
## What is a Node?
A node in the Avalanche network is any computer that participates in the network by maintaining a copy of the blockchain, relaying information, and validating transactions. Nodes can be of different types depending on their role and level of participation in the network’s operations.
### Types of Nodes
* **Full Node**: Stores the entire blockchain data and helps propagate transactions and blocks across the network. It does not participate directly in consensus but is crucial for the network's health and decentralization. **Archival full nodes** store the entire blockchain ledger, including all transactions from the beginning to the most recent. **Pruned full nodes** download the blockchain ledger, then delete blocks starting with the oldest to save memory.
* **Validator Node**: A specialized type of full node that actively participates in the consensus process by validating transactions, producing blocks, and securing the network. Validator nodes are required to stake AVAX tokens as collateral to participate in the consensus mechanism.
* **RPC (Remote Procedure Call) Node**: These nodes act as an interface, enabling third-party applications to query and interact with the blockchain.
## More About Validator Nodes
A validator node participates in the network's consensus protocol by validating transactions and creating new blocks. Validators play a critical role in ensuring the integrity, security, and decentralization of the network.
#### Key Functions of Validators:
* **Transaction Validation**: Validators verify the legitimacy of transactions before they are added to the blockchain.
* **Block Production**: Validators produce and propose new blocks to the network. This involves reaching consensus with other validators to agree on which transactions should be included in the next block.
* **Security and Consensus**: Validators work together to secure the network and ensure that only valid transactions are confirmed. This is done through the Avalanche Consensus protocol, which allows validators to achieve agreement quickly and with high security.
### Primary Network Validators
To become a validator on the Primary Network, you must stake **2,000 AVAX**. This will grant you the ability to validate transactions across all three chains in the Primary Network: the P-Chain, C-Chain, and X-Chain.
### Avalanche L1 Validator
To become a validator on an Avalanche L1, you must meet the specific validator management criteria for that network. If the L1 operates on a Proof-of-Stake (PoS) model, you will need to stake the required amount of tokens to be eligible.
In addition to meeting these criteria, there is a monthly fee of **1.33 AVAX** per validator.
# System Requirements
URL: /docs/nodes/system-requirements
This document provides information about the system and networking requirements for running an AvalancheGo node.
## Hardware and Operating Systems
Avalanche is an incredibly lightweight protocol, so nodes can run on commodity hardware. Note that as network usage increases, hardware requirements may change.
* **CPU**: Equivalent of 8 AWS vCPU
* **RAM**: 8 GiB (16 GiB recommended)
* **Storage**: 1 TiB SSD
* **OS**: Ubuntu 22.04 or macOS >= 12
Nodes that use an HDD may experience poor and unpredictable read/write latencies, reducing performance and reliability. An SSD is strongly suggested.
## Networking
To run successfully, AvalancheGo needs to accept connections from the Internet on the network port `9651`. Before you proceed with the installation, you need to determine the networking environment your node will run in.
### On a Cloud Provider
If your node is running on a cloud provider computer instance, it will have a static IP. Find out what that static IP is, or set it up if you didn't already.
### On a Home Connection
If you're running a node on a computer that is on a residential internet connection, you have a dynamic IP; that is, your IP will change periodically. You will need to set up inbound port forwarding of port `9651` from the internet to the computer the node is installed on.
There are too many router models and configurations for us to provide exact instructions, but online guides are available (like [this](https://www.noip.com/support/knowledgebase/general-port-forwarding-guide/) or [this](https://www.howtogeek.com/66214/how-to-forward-ports-on-your-router/)), and your service provider's support might help too.
Please note that a fully connected Avalanche node maintains and communicates over a couple of thousand live TCP connections.
For some under-powered and older home routers, that might be too much to handle. If that is the case, you may experience lag on other computers connected to the same router, the node getting benched, failure to sync, and similar issues.
# Introduction
URL: /docs/cross-chain
Learn about different interoperability protocols in the Avalanche ecosystem.
# Introduction
URL: /docs/virtual-machines/evm-l1-customization
Learn how to customize the Ethereum Virtual Machine with EVM and Precompiles.
Welcome to the EVM customization guide. This documentation provides an overview of **EVM**, the purpose of **Validator Manager Contracts**, the capabilities of **precompiles**, and how you can create custom precompiles to extend the functionality of the Ethereum Virtual Machine (EVM).
## Overview of EVM
EVM is Avalanche's customized version of the Ethereum Virtual Machine, tailored to run on Avalanche L1s. It allows developers to deploy Solidity smart contracts with enhanced capabilities, benefiting from Avalanche's high throughput and low latency. EVM enables more flexibility and performance optimizations compared to the standard EVM.
## Validator Manager Contracts
Validator Manager Contracts (VMCs) are smart contracts that manage the validators of an L1. They allow you to define rules and criteria for validator participation directly within smart contracts. VMCs enable dynamic validator sets, making it easier to add or remove validators without requiring a network restart. This provides greater control over the L1's validator management and enhances network governance.
## Precompiles
Precompiles are specialized smart contracts that execute native Go code within the EVM context. They act as a bridge between Solidity and lower-level functionalities, allowing for performance optimizations and access to features not available in Solidity alone.
### Default Precompiles in EVM
EVM comes with a set of default precompiles that extend the EVM's functionality. For detailed documentation on each precompile, visit the [Avalanche L1s Precompiles](/docs/avalanche-l1s/evm-configuration/evm-l1-customization#precompiles) section:
* [AllowList](/docs/avalanche-l1s/evm-configuration/allowlist): A reusable interface for permission management
* [Permissions](/docs/avalanche-l1s/evm-configuration/permissions): Control contract deployment and transaction submission
* [Tokenomics](/docs/avalanche-l1s/evm-configuration/tokenomics): Manage native token supply and minting
* [Transaction Fees](/docs/avalanche-l1s/evm-configuration/transaction-fees): Configure fee parameters and reward mechanisms
* [Warp Messenger](/docs/avalanche-l1s/evm-configuration/warpmessenger): Perform cross-chain operations
## Custom Precompiles
One of the powerful features of EVM is the ability to create custom precompiles. By writing Go code and integrating it as a precompile, you can extend the EVM's functionality to suit specific use cases. Custom precompiles allow you to:
* Achieve higher performance for computationally intensive tasks.
* Access lower-level system functions not available in Solidity.
* Implement custom cryptographic functions or algorithms.
* Interact with external systems or data sources.
Creating custom precompiles opens up a wide range of possibilities for developers to optimize and expand their decentralized applications on Avalanche L1s.
By leveraging EVM, Validator Manager Contracts, and precompiles, you can build customized and efficient decentralized applications with greater control and enhanced functionality. Explore the following sections to learn how to implement and utilize these powerful features.
# Introduction
URL: /docs/virtual-machines
Learn about the execution layer of a blockchain network.
A Virtual Machine (VM) is a blueprint for a blockchain. Blockchains are instantiated from a VM, similar to how objects are instantiated from a class definition. VMs can define anything you want, but will generally define transactions that are executed and how blocks are created.
## Blocks and State
Virtual Machines deal with blocks and state. The functionality provided by VMs is to:
* Define the representation of a blockchain's state
* Represent the operations in that state
* Apply the operations in that state
Each block in the blockchain contains a set of state transitions. Each block is applied in order from the blockchain's initial genesis block to its last accepted block to reach the latest state of the blockchain.
## Blockchain
A blockchain relies on two major components: The **Consensus Engine** and the **VM**. The VM defines application specific behavior and how blocks are built and parsed to create the blockchain. All VMs run on top of the Avalanche Consensus Engine, which allows nodes in the network to agree on the state of the blockchain. Here's a quick example of how VMs interact with consensus:
1. A node wants to update the blockchain's state
2. The node's VM will notify the consensus engine that it wants to update the state
3. The consensus engine will request the block from the VM
4. The consensus engine will verify the returned block using the VM's implementation of `Verify()`
5. The consensus engine will get the network to reach consensus on whether to accept or reject the newly verified block. Every virtuous (well-behaved) node on the network will have the same preference for a particular block
6. Depending upon the consensus results, the engine will either accept or reject the block. What happens when a block is accepted or rejected is specific to the implementation of the VM
AvalancheGo provides the consensus engine for every blockchain on the Avalanche Network. The consensus engine relies on the VM interface to handle building, parsing, and storing blocks as well as verifying and executing on behalf of the consensus engine.
This decoupling between the application and consensus layer allows developers to build their applications quickly by implementing virtual machines, without having to worry about the consensus layer managed by Avalanche which deals with how nodes agree on whether or not to accept a block.
## Installing a VM
VMs are supplied as binaries to a node running `AvalancheGo`. These binaries must be named the VM's assigned **VMID**. A VMID is a 32-byte hash encoded in CB58 that is generated when you build your VM.
In order to install a VM, its binary must be installed in the `AvalancheGo` plugin path. See [here](/docs/nodes/configure/configs-flags#--plugin-dir-string) for more details. Multiple VMs can be installed in this location.
Each VM runs as a separate process from AvalancheGo and communicates with `AvalancheGo` using gRPC calls. This functionality is enabled by **RPCChainVM**, a special VM which wraps around other VM implementations and bridges the VM and AvalancheGo, establishing a standardized communication protocol between them.
During VM creation, handshake messages are exchanged via **RPCChainVM** between AvalancheGo and the VM. Ensure the **RPCChainVM** protocol versions match to avoid errors, either by updating your VM or by using a [different version of AvalancheGo](https://github.com/ava-labs/AvalancheGo/releases).
Note that some VMs may not support the latest protocol version.
### API Handlers
Users can interact with a blockchain and its VM through handlers exposed by the VM's API.
VMs expose two types of handlers to serve responses for incoming requests:
* **Blockchain Handlers**: Referred to as handlers, these expose APIs to interact with a blockchain instantiated by a VM. The API endpoint will be different for each chain. The endpoint for a handler is `/ext/bc/[chainID]`.
* **VM Handlers**: Referred to as static handlers, these expose APIs to interact with the VM directly. One example API would be to parse genesis data to instantiate a new blockchain. The endpoint for a static handler is `/ext/vm/[vmID]`.
For any readers familiar with object-oriented programming, static and non-static handlers on a VM are analogous to static and non-static methods on a class. Blockchain handlers can be thought of as methods on an object, whereas VM handlers can be thought of as static methods on a class.
### Instantiate a VM
The `vm.Factory` interface is implemented to create new VM instances from which a blockchain can be initialized. The factory's `New` method shown below provides `AvalancheGo` with an instance of the VM. It's defined in the [`factory.go`](https://github.com/ava-labs/timestampvm/blob/main/timestampvm/factory.go) file of the `timestampvm` repository.
```go
// Returning a new VM instance from VM's factory
func (f *Factory) New(*snow.Context) (interface{}, error) { return &vm.VM{}, nil }
```
### Initializing a VM to Create a Blockchain
Before a VM can run, AvalancheGo will initialize it by invoking its `Initialize` method. Here, the VM will bootstrap itself and set up anything it requires before it starts running.
This might involve setting up its database, mempool, genesis state, or anything else the VM requires to run.
```go
if err := vm.Initialize(
	ctx.Context,
	vmDBManager,
	genesisData,
	chainConfig.Upgrade,
	chainConfig.Config,
	msgChan,
	fxs,
	sender,
); err != nil {
	return err
}
```
You can refer to the [implementation](https://github.com/ava-labs/timestampvm/blob/main/timestampvm/vm.go#L75) of `vm.Initialize` in the TimestampVM repository.
## Interfaces
Every VM should implement the following interfaces:
### `block.ChainVM`
To reach a consensus on linear blockchains, Avalanche uses the Snowman consensus engine. To be compatible with Snowman, a VM must implement the `block.ChainVM` interface.
For more information, see [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/vm.go).
```go title="snow/engine/snowman/block/vm.go"
// ChainVM defines the required functionality of a Snowman VM.
//
// A Snowman VM is responsible for defining the representation of the state,
// the representation of operations in that state, the application of operations
// on that state, and the creation of the operations. Consensus will decide on
// if the operation is executed and the order operations are executed.
//
// For example, suppose we have a VM that tracks an increasing number that
// is agreed upon by the network.
// The state is a single number.
// The operation is setting the number to a new, larger value.
// Applying the operation will save to the database the new value.
// The VM can attempt to issue a new number, of larger value, at any time.
// Consensus will ensure the network agrees on the number at every block height.
type ChainVM interface {
common.VM
Getter
Parser
// Attempt to create a new block from data contained in the VM.
//
// If the VM doesn't want to issue a new block, an error should be
// returned.
BuildBlock() (snowman.Block, error)
// Notify the VM of the currently preferred block.
//
// This should always be a block that has no children known to consensus.
SetPreference(ids.ID) error
// LastAccepted returns the ID of the last accepted block.
//
// If no blocks have been accepted by consensus yet, it is assumed there is
// a definitionally accepted block, the Genesis block, that will be
// returned.
LastAccepted() (ids.ID, error)
}
// Getter defines the functionality for fetching a block by its ID.
type Getter interface {
// Attempt to load a block.
//
// If the block does not exist, an error should be returned.
//
GetBlock(ids.ID) (snowman.Block, error)
}
// Parser defines the functionality for fetching a block by its bytes.
type Parser interface {
// Attempt to create a block from a stream of bytes.
//
// The block should be represented by the full byte array, without extra
// bytes.
ParseBlock([]byte) (snowman.Block, error)
}
```
### `common.VM`
`common.VM` is a type that every `VM` must implement. For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/common/vm.go).
```go title="snow/engine/common/vm.go"
// VM describes the interface that all consensus VMs must implement
type VM interface {
// Contains handlers for VM-to-VM specific messages
AppHandler
// Returns nil if the VM is healthy.
// Periodically called and reported via the node's Health API.
health.Checkable
// Connector represents a handler that is called on connection connect/disconnect
validators.Connector
// Initialize this VM.
// [ctx]: Metadata about this VM.
// [ctx.networkID]: The ID of the network this VM's chain is running on.
// [ctx.chainID]: The unique ID of the chain this VM is running on.
// [ctx.Log]: Used to log messages
// [ctx.NodeID]: The unique staker ID of this node.
// [ctx.Lock]: A Read/Write lock shared by this VM and the consensus
// engine that manages this VM. The write lock is held
// whenever code in the consensus engine calls the VM.
// [dbManager]: The manager of the database this VM will persist data to.
// [genesisBytes]: The byte-encoding of the genesis information of this
// VM. The VM uses it to initialize its state. For
// example, if this VM were an account-based payments
// system, `genesisBytes` would probably contain a genesis
// transaction that gives coins to some accounts, and this
// transaction would be in the genesis block.
// [toEngine]: The channel used to send messages to the consensus engine.
// [fxs]: Feature extensions that attach to this VM.
Initialize(
ctx *snow.Context,
dbManager manager.Manager,
genesisBytes []byte,
upgradeBytes []byte,
configBytes []byte,
toEngine chan<- Message,
fxs []*Fx,
appSender AppSender,
) error
// Bootstrapping is called when the node is starting to bootstrap this chain.
Bootstrapping() error
// Bootstrapped is called when the node is done bootstrapping this chain.
Bootstrapped() error
// Shutdown is called when the node is shutting down.
Shutdown() error
// Version returns the version of the VM this node is running.
Version() (string, error)
// Creates the HTTP handlers for custom VM network calls.
//
// This exposes handlers that the outside world can use to communicate with
// a static reference to the VM. Each handler has the path:
// [Address of node]/ext/VM/[VM ID]/[extension]
//
// Returns a mapping from [extension]s to HTTP handlers.
//
// Each extension can specify how locking is managed for convenience.
//
// For example, it might make sense to have an extension for creating
// genesis bytes this VM can interpret.
CreateStaticHandlers() (map[string]*HTTPHandler, error)
// Creates the HTTP handlers for custom chain network calls.
//
// This exposes handlers that the outside world can use to communicate with
// the chain. Each handler has the path:
// [Address of node]/ext/bc/[chain ID]/[extension]
//
// Returns a mapping from [extension]s to HTTP handlers.
//
// Each extension can specify how locking is managed for convenience.
//
// For example, if this VM implements an account-based payments system,
// it might have an extension called `accounts`, where clients could get
// information about their accounts.
CreateHandlers() (map[string]*HTTPHandler, error)
}
```
### `snowman.Block`
The `snowman.Block` interface defines the functionality a block must implement to be a block in a linear Snowman chain. For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/consensus/snowman/block.go).
```go title="snow/consensus/snowman/block.go"
// Block is a possible decision that dictates the next canonical block.
//
// Blocks are guaranteed to be Verified, Accepted, and Rejected in topological
// order. Specifically, if Verify is called, then the parent has already been
// verified. If Accept is called, then the parent has already been accepted. If
// Reject is called, the parent has already been accepted or rejected.
//
// If the status of the block is Unknown, ID is assumed to be able to be called.
// If the status of the block is Accepted or Rejected; Parent, Verify, Accept,
// and Reject will never be called.
type Block interface {
choices.Decidable
// Parent returns the ID of this block's parent.
Parent() ids.ID
// Verify that the state transition this block would make if accepted is
// valid. If the state transition is invalid, a non-nil error should be
// returned.
//
// It is guaranteed that the Parent has been successfully verified.
Verify() error
// Bytes returns the binary representation of this block.
//
// This is used for sending blocks to peers. The bytes should be able to be
// parsed into the same block on another node.
Bytes() []byte
// Height returns the height of this block in the chain.
Height() uint64
}
```
### `choices.Decidable`
This interface is a superset of every decidable object, such as transactions, blocks, and vertices. For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/choices/decidable.go).
```go title="snow/choices/decidable.go"
// Decidable represents element that can be decided.
//
// Decidable objects are typically thought of as either transactions, blocks, or
// vertices.
type Decidable interface {
// ID returns a unique ID for this element.
//
// Typically, this is implemented by using a cryptographic hash of a
// binary representation of this element. An element should return the same
// IDs upon repeated calls.
ID() ids.ID
// Accept this element.
//
// This element will be accepted by every correct node in the network.
Accept() error
// Reject this element.
//
// This element will not be accepted by any correct node in the network.
Reject() error
// Status returns this element's current status.
//
// If Accept has been called on an element with this ID, Accepted should be
// returned. Similarly, if Reject has been called on an element with this
// ID, Rejected should be returned. If the contents of this element are
// unknown, then Unknown should be returned. Otherwise, Processing should be
// returned.
Status() Status
}
```
# Manage VM Binaries
URL: /docs/virtual-machines/manage-vm-binaries
Learn about Avalanche Plugin Manager (APM) and how to use it to manage virtual machines binaries on existing AvalancheGo instances.
Avalanche Plugin Manager (APM) is a command-line tool for managing virtual machine binaries on existing AvalancheGo instances. It lets you add or remove the VM binaries nodes need to join Avalanche L1s, and upgrade VM plugin binaries as new versions are released to the plugin repository.
GitHub: [https://github.com/ava-labs/apm](https://github.com/ava-labs/apm)
## `avalanche-plugins-core`
`avalanche-plugins-core` is a plugin repository that ships with the `apm`. A plugin repository consists of a set of virtual machine and Avalanche L1 definitions that the `apm` consumes, allowing users to quickly and easily download and manage VM binaries.
GitHub: [https://github.com/ava-labs/avalanche-plugins-core](https://github.com/ava-labs/avalanche-plugins-core)
# Simple VM in Any Language
URL: /docs/virtual-machines/simple-vm-any-language
Learn how to implement a simple virtual machine in any language.
This is language-agnostic, high-level documentation explaining the basics of implementing your own virtual machine from scratch.
Avalanche virtual machines are gRPC servers implementing Avalanche's [Proto interfaces](https://buf.build/ava-labs/avalanche). This means a VM can be written in [any language that has a gRPC implementation](https://grpc.io/docs/languages/).
## Minimal Implementation
To get started, you will need to implement, at a minimum, the following interfaces:
* [`vm.Runtime`](https://buf.build/ava-labs/avalanche/docs/main:vm.runtime) (Client)
* [`vm.VM`](https://buf.build/ava-labs/avalanche/docs/main:vm) (Server)
To build a blockchain taking advantage of AvalancheGo's consensus to build blocks, you will need to implement:
* [AppSender](https://buf.build/ava-labs/avalanche/docs/main:appsender) (Client)
* [Messenger](https://buf.build/ava-labs/avalanche/docs/main:messenger) (Client)
To have a JSON-RPC endpoint `/ext/bc/subnetId/rpc` exposed by AvalancheGo, you will need to implement:
* [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) (Server)
You can and should use a tool like `buf` to generate the client/server code from these interfaces, as described on the [Avalanche module](https://buf.build/ava-labs/avalanche)'s page.
There are *server* and *client* interfaces to implement. AvalancheGo calls the *server* interfaces exposed by your VM and your VM calls the *client* interfaces exposed by AvalancheGo.
## Starting Process
Your VM is started by AvalancheGo launching your binary as a sub-process. While launching the binary, AvalancheGo passes an environment variable `AVALANCHE_VM_RUNTIME_ENGINE_ADDR` containing a URL. Your VM must use this URL to initialize a `vm.Runtime` client.
After starting a gRPC server implementing the VM interface, your VM must call [`vm.Runtime.InitializeRequest`](https://buf.build/ava-labs/avalanche/docs/main:vm.runtime#vm.runtime.InitializeRequest) with the following parameters:
* `protocolVersion`: It must match the `supported plugin version` of the [AvalancheGo release](https://github.com/ava-labs/AvalancheGo/releases) you are using. It is always part of the release notes.
* `addr`: Your gRPC server's address. It must be in the format `host:port` (for example `localhost:12345`).
## VM Initialization
The service methods are described in the same order as they are called. You will need to implement these methods in your server.
### Pre-Initialization Sequence
AvalancheGo starts/stops your process multiple times before launching the real initialization sequence.
1. [VM.Version](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Version)
* Return: your VM's version.
2. [VM.CreateStaticHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateStaticHandlers)
* Return: an empty array (not strictly required).
3. [VM.Shutdown](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Shutdown)
* You should gracefully stop your process.
* Return: Empty
### Initialization Sequence
1. [VM.CreateStaticHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateStaticHandlers)
* Return: an empty array (not strictly required).
2. [VM.Initialize](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Initialize)
* Param: an [InitializeRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.InitializeRequest).
* You must use this data to initialize your VM.
* You should add the genesis block to your blockchain and set it as the last accepted block.
* Return: an [InitializeResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.InitializeResponse) containing data about the genesis extracted from the `genesis_bytes` that was sent in the request.
3. [VM.VerifyHeightIndex](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.VerifyHeightIndex)
* Return: a [VerifyHeightIndexResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VerifyHeightIndexResponse) with the code `ERROR_UNSPECIFIED` to indicate that no error has occurred.
4. [VM.CreateHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateHandlers)
* To serve json-RPC endpoint, `/ext/bc/subnetId/rpc` exposed by AvalancheGo
* See [json-RPC](#json-rpc) for more detail
* Create a [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) server and get its url.
* Return: a `CreateHandlersResponse` containing a single item with the server's url. (or an empty array if not implementing the json-RPC endpoint)
5. [VM.StateSyncEnabled](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.StateSyncEnabled)
* Return: `true` if you want to enable StateSync, `false` otherwise.
6. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState) *If you had specified `true` in the `StateSyncEnabled` result*
* Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `StateSyncing` value
* Set your blockchain's state to `StateSyncing`
* Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block.
7. [VM.GetOngoingSyncStateSummary](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.GetOngoingSyncStateSummary) *If you had specified `true` in the `StateSyncEnabled` result*
* Return: a [GetOngoingSyncStateSummaryResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.GetOngoingSyncStateSummaryResponse) built from the genesis block.
8. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState)
* Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `Bootstrapping` value
* Set your blockchain's state to `Bootstrapping`
* Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block.
9. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference)
* Param: `SetPreferenceRequest` containing the preferred block ID
* Return: Empty
10. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState)
* Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `NormalOp` value
* Set your blockchain's state to `NormalOp`
* Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block.
11. [VM.Connected](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Connected) (for every other node validating this Avalanche L1 in the network)
* Param: a [ConnectedRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ConnectedRequest) with the NodeID and the version of AvalancheGo.
* Return: Empty
12. [VM.Health](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Health)
* Param: Empty
* Return: a [HealthResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.HealthResponse) with an empty `details` property.
13. [VM.ParseBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.ParseBlock)
* Param: A byte array containing a Block (the genesis block in this case)
* Return: a [ParseBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ParseBlockResponse) built from the last accepted block.
At this point, your VM is fully started and initialized.
### Building Blocks
#### Transaction Gossiping Sequence
When your VM receives transactions (for example using the [json-RPC](#json-rpc) endpoints), it can gossip them to the other nodes by using the [AppSender](https://buf.build/ava-labs/avalanche/docs/main:appsender) service.
Suppose we have a 3-node network with nodeX, nodeY, and nodeZ, and that nodeX has received a new transaction on its JSON-RPC endpoint.
[`AppSender.SendAppGossip`](https://buf.build/ava-labs/avalanche/docs/main:appsender#appsender.AppSender.SendAppGossip) (*client*): You must serialize your transaction data into a byte array and call the `SendAppGossip` to propagate the transaction.
AvalancheGo then propagates this to the other nodes.
[VM.AppGossip](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.AppGossip): You must deserialize the transaction and store it for the next block.
* Param: A byte array containing your transaction data, and the NodeID of the node which sent the gossip message.
* Return: Empty
#### Block Building Sequence
Whenever your VM is ready to build a new block, it initiates the block building process using the [Messenger](https://buf.build/ava-labs/avalanche/docs/main:messenger) service. Suppose nodeY wants to build the block. You will probably implement some kind of background worker that checks every second whether there are any pending transactions:
*client* [`Messenger.Notify`](https://buf.build/ava-labs/avalanche/docs/main:messenger#messenger.Messenger.Notify): You must issue a notify request to AvalancheGo by calling the method with the `MESSAGE_BUILD_BLOCK` value.
1. [VM.BuildBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BuildBlock)
* Param: Empty
* You must build a block with your pending transactions. Serialize it to a byte array.
* Store this block in memory as a pending block
* Return: a [BuildBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.BuildBlockResponse) built from the newly built block and its associated data (`id`, `parent_id`, `height`, `timestamp`).
2. [VM.BlockVerify](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockVerify)
* Param: The byte array containing the block data
* Return: the block's timestamp
3. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference)
* Param: The block's ID
* You must mark this block as the next preferred block.
* Return: Empty
On the other nodes (nodeX and nodeZ in this example), AvalancheGo then calls:
1. [VM.ParseBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.ParseBlock)
* Param: A byte array containing the newly built block's data
* Store this block in memory as a pending block
* Return: a [ParseBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ParseBlockResponse) built from the last accepted block.
2. [VM.BlockVerify](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockVerify)
* Param: The byte array containing the block data
* Return: the block's timestamp
3. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference)
* Param: The block's ID
* You must mark this block as the next preferred block.
* Return: Empty
[VM.BlockAccept](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockAccept): You must accept this block as your last accepted block.
* Param: The block's ID
* Return: Empty
#### Managing Conflicts
Conflicts happen when two or more nodes propose the next block at the same time. AvalancheGo takes care of this and decides which block should be considered final, and which blocks should be rejected using Snowman consensus. On the VM side, all there is to do is implement the `VM.BlockAccept` and `VM.BlockReject` methods.
*nodeX proposes block `0x123...`, nodeY proposes block `0x321...`, and nodeZ proposes block `0x456...`*
There are three conflicting blocks (different hashes), and if we look at our VM's log files, we can see that AvalancheGo uses Snowman to decide which block must be accepted.
```bash
... snowman/voter.go:58 filtering poll results ...
... snowman/voter.go:65 finishing poll ...
... snowman/voter.go:87 Snowman engine can't quiesce
...
... snowman/voter.go:58 filtering poll results ...
... snowman/voter.go:65 finishing poll ...
... snowman/topological.go:600 accepting block
```
Suppose AvalancheGo accepts block `0x123...`. The following RPC methods are then called on all nodes:
1. [VM.BlockAccept](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockAccept): You must accept this block as your last accepted block.
* Param: The block's ID (`0x123...`)
* Return: Empty
2. [VM.BlockReject](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockReject): You must mark this block as rejected.
* Param: The block's ID (`0x321...`)
* Return: Empty
3. [VM.BlockReject](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockReject): You must mark this block as rejected.
* Param: The block's ID (`0x456...`)
* Return: Empty
### JSON-RPC
To enable your json-RPC endpoint, you must implement the [HandleSimple](https://buf.build/ava-labs/avalanche/docs/main:http#http.HTTP.HandleSimple) method of the [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) interface.
* Param: a [HandleSimpleHTTPRequest](https://buf.build/ava-labs/avalanche/docs/main:http#http.HandleSimpleHTTPRequest) containing the original request's method, url, headers, and body.
* Analyze, deserialize, and handle the request. For example: if the request represents a transaction, deserialize it, check the signature, store it, and gossip it to the other nodes using the [messenger client](#block-building-sequence).
* Return the [HandleSimpleHTTPResponse](https://buf.build/ava-labs/avalanche/docs/main:http#http.HandleSimpleHTTPResponse) response that will be sent back to the original sender.
This server is registered with AvalancheGo during the [initialization process](#initialization-sequence) when the `VM.CreateHandlers` method is called. You must simply respond with the server's url in the `CreateHandlersResponse` result.
# AvalancheGo Installer
URL: /docs/tooling/avalanche-go-installer
Script to install AvalancheGo on any Linux computer.
AvalancheGo Installer is a shell (bash) script that installs AvalancheGo on any Linux computer. This script sets up a full, running node in a matter of minutes with minimal user input. This is convenient if you want to run the node as a service on a standalone Linux installation, for example to set up an Avalanche L1 validator or to use the node as a private RPC server. It also makes upgrading or reinstalling nodes easy.
GitHub: [https://github.com/ava-labs/builders-hub/blob/master/scripts/avalanchego-installer.sh](https://github.com/ava-labs/builders-hub/blob/master/scripts/avalanchego-installer.sh)
How-to: [Run an Avalanche Node Using the Install Script](/docs/nodes/using-install-script/installing-avalanche-go)
If you want to run a node in a more complex environment, like in a docker or Kubernetes container, or as a part of an installation orchestrated using a tool like Terraform, this installer probably won't fit your purposes. See here for how to run AvalancheGo in a [Docker container](/docs/tooling/guides/run-with-docker).
# AvalancheJS
URL: /docs/tooling/avalanche-js
JavaScript library for Avalanche.
AvalancheJS is a JavaScript Library for interfacing with the [Avalanche](/docs/quick-start) Platform. It is built using TypeScript and intended to support both browser and Node.js. The AvalancheJS library allows you to issue commands to the Avalanche node APIs.
The APIs currently supported by default are:
* Admin API
* Auth API
* AVM API (X-Chain)
* EVM API (C-Chain)
* Health API
* Index API
* Info API
* Keystore API
* Metrics API
* PlatformVM API
* Socket API
We built AvalancheJS with ease of use in mind. With this library, any JavaScript developer can interact with a node on the Avalanche Platform that has exposed its API endpoints. We keep the library up to date with the latest changes in the Avalanche Platform Specification, found in the [Platform Chain Specification](/docs/api-reference/p-chain/api), [Exchange Chain (X-Chain) Specification](/docs/api-reference/x-chain/api), and [Contract Chain (C-Chain) Specification](/docs/api-reference/c-chain/api).
Using AvalancheJS, developers can:
* Retrieve balances on addresses
* Get UTXOs for addresses
* Build and sign transactions
* Issue signed transactions to the X-Chain, P-Chain, and C-Chain
* Perform cross-chain swaps between the X, P and C chains
* Add Validators and Delegators
* Create Avalanche L1s and Blockchains
## Requirements
AvalancheJS requires Node.js LTS version 20.11.1 or higher to compile.
## Installation
### Using the NPM Package
Add AvalancheJS to your project via `npm` or `yarn`.
For installing via `npm`:
```bash
npm install --save @avalabs/avalanchejs
```
For installing via `yarn`:
```bash
yarn add @avalabs/avalanchejs
```
### Build from Repository
You can also pull the repo down directly and build it from scratch.
Clone the AvalancheJS repository:
```bash
git clone https://github.com/ava-labs/avalanchejs.git
```
Then build it:
```bash
npm run build
```
or
```bash
yarn build
```
This generates two builds, one CommonJS and one ESM. The resulting bundle can then be dropped into any project as a pure JavaScript implementation of Avalanche; depending on the project, either the ESM or the CommonJS build will be used.
## Use AvalancheJS in Projects
The AvalancheJS library can be imported into your existing project as follows:
```js
import { avm, pvm, evm } from '@avalabs/avalanchejs';
```
## Importing Essentials
```js
import { avm /** X-chain */, pvm /** P-chain */, evm /** C-chain */, utils } from "@avalabs/avalanchejs"
// example calls
const exportTx = avm.newExportTx(...) // constructs a new export tx from X
const addValidatorTx = pvm.newAddPermissionlessValidatorTx(...) // constructs a new add validator tx on P
const importTx = evm.newImportTx(...) // constructs a new import tx to C
const publicKeyBytes = utils.hexToBuffer(publicKeyHex)
const signature = utils.signHash(bytes, privateKeyBytes)
```
## Run Scripts
When cloning the AvalancheJS repository, there are several handy examples and utils. Because it uses ECMAScript Modules (ESM), and not CommonJS, the following command needs to be run:
```bash
node --loader ts-node/esm path/script_name.ts
```
This command tells Node.js to use the ts-node/esm loader when running a TypeScript script.
Assuming the AvalancheJS repository has been cloned, suppose we want to run `examples/c-chain/export.ts`, which creates an export transaction from the C-Chain to the X-Chain.
First, make sure you add the environment variables in a `.env` file at the root of the project. Fill in the private key for your account, and the C-Chain and X-Chain addresses.
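As an illustration only, the `.env` file has this general shape; the variable names below are hypothetical placeholders, so check the example code in the cloned repository for the exact names it reads:

```
# Hypothetical names -- confirm against the repo's example code
PRIVATE_KEY=<your account's private key>
C_CHAIN_ADDRESS=<your C-Chain address>
X_CHAIN_ADDRESS=<your X-Chain address>
```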
To execute the script, we use:
```bash
node --loader ts-node/esm examples/c-chain/export.ts
```
When run successfully, it produces output like the following:
```bash
laviniatalpas@Lavinias-MacBook-Pro avalanchejs % node --loader ts-node/esm examples/c-chain/export.ts
(node:53180) ExperimentalWarning: `--experimental-loader` may be removed in the future; instead use `register()`:
--import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register("ts-node/esm", pathToFileURL("./"));'
(Use `node --trace-warnings ...` to show where the warning was created)
{ txID: 'QKiNPBoLjAzbVwEoXsmx3XGWuMGZ2Nmt6e4CWvFC41iMEyu6P' }
```
# Avalanche Ops
URL: /docs/tooling/avalanche-ops
Avalanche Ops is a suite of commands that enables you to launch and configure network infrastructure (virtual machines or cloud instances) and install Avalanche nodes from scratch, allowing for various configuration requirements. It provisions all resources required to run a node or network with recommended (and configurable) setups.
This tool is intended for quickly creating, testing, and iterating over various Avalanche network infrastructure configurations for testing and simulation purposes. Use it to experiment with different setups and to reproduce potential problems and issues with particular configurations.
GitHub: [https://github.com/ava-labs/avalanche-ops](https://github.com/ava-labs/avalanche-ops)
# Avalanche Plugin Manager
URL: /docs/tooling/avalanche-plugin-manager
Avalanche Plugin Manager (APM) is a command-line tool to manage virtual machine binaries on existing AvalancheGo instances. It enables you to add or remove nodes on Avalanche L1s and to upgrade VM plugin binaries as new versions are released to the plugin repository.
GitHub: [https://github.com/ava-labs/apm](https://github.com/ava-labs/apm)
## `avalanche-plugins-core`[](#avalanche-plugins-core "Direct link to heading")
`avalanche-plugins-core` is a plugin repository that ships with the `apm`. A plugin repository consists of a set of virtual machine and Avalanche L1 definitions that the `apm` consumes to allow users to quickly and easily download and manage VM binaries.
GitHub: [https://github.com/ava-labs/avalanche-plugins-core](https://github.com/ava-labs/avalanche-plugins-core)
# CLI Commands
URL: /docs/tooling/cli-commands
Complete list of Avalanche CLI commands and their usage.
## avalanche blockchain
The blockchain command suite provides a collection of tools for developing
and deploying Blockchains.
To get started, use the blockchain create command wizard to walk through the
configuration of your very first Blockchain. Then, go ahead and deploy it
with the blockchain deploy command. You can use the rest of the commands to
manage your Blockchain configurations and live deployments.
**Usage:**
```bash
avalanche blockchain [subcommand] [flags]
```
**Subcommands:**
* [`addValidator`](#avalanche-blockchain-addvalidator): The blockchain addValidator command adds a node as a validator to
an L1 of the user provided deployed network. If the network is proof of
authority, the owner of the validator manager contract must sign the
transaction. If the network is proof of stake, the node must stake the L1's
staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain.
This command currently only works on Blockchains deployed to either the Fuji
Testnet or Mainnet.
* [`changeOwner`](#avalanche-blockchain-changeowner): The blockchain changeOwner changes the owner of the deployed Blockchain.
* [`changeWeight`](#avalanche-blockchain-changeweight): The blockchain changeWeight command changes the weight of an L1 validator.
The L1 has to be a Proof of Authority L1.
* [`configure`](#avalanche-blockchain-configure): AvalancheGo nodes support several different configuration files.
Each network (a Subnet or an L1) has their own config which applies to all blockchains/VMs in the network (see [https://build.avax.network/docs/nodes/configure/avalanche-l1-configs](https://build.avax.network/docs/nodes/configure/avalanche-l1-configs))
Each blockchain within the network can have its own chain config (see [https://build.avax.network/docs/nodes/chain-configs/c-chain](https://build.avax.network/docs/nodes/chain-configs/c-chain) [https://github.com/ava-labs/subnet-evm/blob/master/plugin/evm/config/config.go](https://github.com/ava-labs/subnet-evm/blob/master/plugin/evm/config/config.go) for subnet-evm options).
A chain can also have special requirements for the AvalancheGo node configuration itself (see [https://build.avax.network/docs/nodes/configure/configs-flags](https://build.avax.network/docs/nodes/configure/configs-flags)).
This command allows you to set all those files.
* [`create`](#avalanche-blockchain-create): The blockchain create command builds a new genesis file to configure your Blockchain.
By default, the command runs an interactive wizard. It walks you through
all the steps you need to create your first Blockchain.
The tool supports deploying Subnet-EVM, and custom VMs. You
can create a custom, user-generated genesis with a custom VM by providing
the path to your genesis and VM binaries with the --genesis and --vm flags.
By default, running the command with a blockchainName that already exists
causes the command to fail. If you'd like to overwrite an existing
configuration, pass the -f flag.
* [`delete`](#avalanche-blockchain-delete): The blockchain delete command deletes an existing blockchain configuration.
* [`deploy`](#avalanche-blockchain-deploy): The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet.
At the end of the call, the command prints the RPC URL you can use to interact with the Subnet.
Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent
attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't
allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call
avalanche network clean to reset all deployed chain state. Subsequent local deploys
redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks,
so you can take your locally tested Blockchain and deploy it on Fuji or Mainnet.
* [`describe`](#avalanche-blockchain-describe): The blockchain describe command prints the details of a Blockchain configuration to the console.
By default, the command prints a summary of the configuration. By providing the --genesis
flag, the command instead prints out the raw genesis file.
* [`export`](#avalanche-blockchain-export): The blockchain export command writes the details of an existing Blockchain deploy to a file.
The command prompts for an output path. You can also provide one with
the --output flag.
* [`import`](#avalanche-blockchain-import): Import blockchain configurations into avalanche-cli.
This command suite supports importing from a file created on another computer,
or importing from blockchains running public networks
(e.g. created manually or with the deprecated subnet-cli)
* [`join`](#avalanche-blockchain-join): The blockchain join command configures your validator node to begin validating a new Blockchain.
To complete this process, you must have access to the machine running your validator. If the
CLI is running on the same machine as your validator, it can generate or update your node's
config file automatically. Alternatively, the command can print the necessary instructions
to update your node manually. To complete the validation process, the Blockchain's admins must add
the NodeID of your validator to the Blockchain's allow list by calling addValidator with your
NodeID.
After you update your validator's config, you need to restart your validator manually. If
you provide the --avalanchego-config flag, this command attempts to edit the config file
at that path.
This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
* [`list`](#avalanche-blockchain-list): The blockchain list command prints the names of all created Blockchain configurations. Without any flags,
it prints some general, static information about the Blockchain. With the --deployed flag, the command
shows additional information including the VMID, BlockchainID and SubnetID.
* [`publish`](#avalanche-blockchain-publish): The blockchain publish command publishes the Blockchain's VM to a repository.
* [`removeValidator`](#avalanche-blockchain-removevalidator): The blockchain removeValidator command stops a whitelisted blockchain network validator from
validating your deployed Blockchain.
To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass
these prompts by providing the values with flags.
* [`stats`](#avalanche-blockchain-stats): The blockchain stats command prints validator statistics for the given Blockchain.
* [`upgrade`](#avalanche-blockchain-upgrade): The blockchain upgrade command suite provides a collection of tools for
updating your developmental and deployed Blockchains.
* [`validators`](#avalanche-blockchain-validators): The blockchain validators command lists the validators of a blockchain and provides
several statistics about them.
* [`vmid`](#avalanche-blockchain-vmid): The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain.
**Flags:**
```bash
-h, --help help for blockchain
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### addValidator
The blockchain addValidator command adds a node as a validator to
an L1 of the user provided deployed network. If the network is proof of
authority, the owner of the validator manager contract must sign the
transaction. If the network is proof of stake, the node must stake the L1's
staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain.
This command currently only works on Blockchains deployed to either the Fuji
Testnet or Mainnet.
**Usage:**
```bash
avalanche blockchain addValidator [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string log level to use with signature aggregator (default "Debug")
--aggregator-log-to-stdout use stdout for signature aggregator logs
--balance float set the AVAX balance of the validator that will be used for continuous fee on P-Chain
--blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's registration (blockchain gas token)
--blockchain-key string CLI stored key to use to pay fees for completing the validator's registration (blockchain gas token)
--blockchain-private-key string private key to use to pay fees for completing the validator's registration (blockchain gas token)
--bls-proof-of-possession string set the BLS proof of possession of the validator to add
--bls-public-key string set the BLS public key of the validator to add
--cluster string operate on the given cluster
--create-local-validator create additional local validator and add it to existing running local node
--default-duration (for Subnets, not L1s) set duration so as to validate until primary validator ends its period
--default-start-time (for Subnets, not L1s) use default start time for subnet validator (5 minutes later for fuji & mainnet, 30 seconds later for devnet)
--default-validator-params (for Subnets, not L1s) use default weight/start/duration params for subnet validator
--delegation-fee uint16 (PoS only) delegation fee (in bips) (default 100)
--devnet operate on a devnet network
--disable-owner string P-Chain address that will be able to disable the validator with a P-Chain transaction
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet only]
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for addValidator
-k, --key string select the key to use [fuji/devnet only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-endpoint string gather node id/bls from publicly available avalanchego apis on the given endpoint
--node-id string node-id of the validator to add
--output-tx-path string (for Subnets, not L1s) file path of the add validator tx
--partial-sync set primary network partial sync for new validators (default true)
--remaining-balance-owner string P-Chain address that will receive any leftover AVAX from the validator when it is removed from the Subnet
--rpc string connect to validator manager at the given rpc endpoint
--stake-amount uint (PoS only) amount of tokens to stake
--staking-period duration how long this validator will be staking
--start-time string (for Subnets, not L1s) UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
--subnet-auth-keys strings (for Subnets, not L1s) control keys that will be used to authenticate add validator tx
-t, --testnet fuji operate on testnet (alias to fuji)
--wait-for-tx-acceptance (for Subnets, not L1s) just issue the add validator tx, without waiting for its acceptance (default true)
--weight uint set the staking weight of the validator to add (default 20)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### changeOwner
The blockchain changeOwner changes the owner of the deployed Blockchain.
**Usage:**
```bash
avalanche blockchain changeOwner [subcommand] [flags]
```
**Flags:**
```bash
--auth-keys strings control keys that will be used to authenticate transfer blockchain ownership tx
--cluster string operate on the given cluster
--control-keys strings addresses that may make blockchain changes
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet]
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for changeOwner
-k, --key string select the key to use [fuji/devnet]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--output-tx-path string file path of the transfer blockchain ownership tx
-s, --same-control-key use the fee-paying key as control key
-t, --testnet fuji operate on testnet (alias to fuji)
--threshold uint32 required number of control key signatures to make blockchain changes
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### changeWeight
The blockchain changeWeight command changes the weight of an L1 validator.
The L1 has to be a Proof of Authority L1.
**Usage:**
```bash
avalanche blockchain changeWeight [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet only]
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for changeWeight
-k, --key string select the key to use [fuji/devnet only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-endpoint string gather node id/bls from publicly available avalanchego apis on the given endpoint
--node-id string node-id of the validator
-t, --testnet fuji operate on testnet (alias to fuji)
--weight uint set the new staking weight of the validator
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### configure
AvalancheGo nodes support several different configuration files.
Each network (a Subnet or an L1) has their own config which applies to all blockchains/VMs in the network (see [https://build.avax.network/docs/nodes/configure/avalanche-l1-configs](https://build.avax.network/docs/nodes/configure/avalanche-l1-configs))
Each blockchain within the network can have its own chain config (see [https://build.avax.network/docs/nodes/chain-configs/c-chain](https://build.avax.network/docs/nodes/chain-configs/c-chain) [https://github.com/ava-labs/subnet-evm/blob/master/plugin/evm/config/config.go](https://github.com/ava-labs/subnet-evm/blob/master/plugin/evm/config/config.go) for subnet-evm options).
A chain can also have special requirements for the AvalancheGo node configuration itself (see [https://build.avax.network/docs/nodes/configure/configs-flags](https://build.avax.network/docs/nodes/configure/configs-flags)).
This command allows you to set all those files.
**Usage:**
```bash
avalanche blockchain configure [subcommand] [flags]
```
**Flags:**
```bash
--chain-config string path to the chain configuration
-h, --help help for configure
--node-config string path to avalanchego node configuration
--per-node-chain-config string path to per node chain configuration for local network
--subnet-config string path to the subnet configuration
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### create
The blockchain create command builds a new genesis file to configure your Blockchain.
By default, the command runs an interactive wizard. It walks you through
all the steps you need to create your first Blockchain.
The tool supports deploying Subnet-EVM, and custom VMs. You
can create a custom, user-generated genesis with a custom VM by providing
the path to your genesis and VM binaries with the --genesis and --vm flags.
By default, running the command with a blockchainName that already exists
causes the command to fail. If you'd like to overwrite an existing
configuration, pass the -f flag.
**Usage:**
```bash
avalanche blockchain create [subcommand] [flags]
```
**Flags:**
```bash
--custom use a custom VM template
--custom-vm-branch string custom vm branch or commit
--custom-vm-build-script string custom vm build-script
--custom-vm-path string file path of custom vm to use
--custom-vm-repo-url string custom vm repository url
--debug enable blockchain debugging (default true)
--evm use the Subnet-EVM as the base template
--evm-chain-id uint chain ID to use with Subnet-EVM
--evm-defaults deprecation notice: use '--production-defaults'
--evm-token string token symbol to use with Subnet-EVM
--external-gas-token use a gas token from another blockchain
-f, --force overwrite the existing configuration if one exists
--from-github-repo generate custom VM binary from github repository
--genesis string file path of genesis to use
-h, --help help for create
--icm interoperate with other blockchains using ICM
--icm-registry-at-genesis setup ICM registry smart contract on genesis [experimental]
--latest use latest Subnet-EVM released version, takes precedence over --vm-version
--pre-release use latest Subnet-EVM pre-released version, takes precedence over --vm-version
--production-defaults use default production settings for your blockchain
--proof-of-authority use proof of authority (PoA) for validator management
--proof-of-stake use proof of stake (PoS) for validator management
--proxy-contract-owner string EVM address that controls ProxyAdmin for TransparentProxy of ValidatorManager contract
--reward-basis-points uint (PoS only) reward basis points for PoS Reward Calculator (default 100)
--sovereign set to false if creating non-sovereign blockchain (default true)
--teleporter interoperate with other blockchains using ICM
--test-defaults use default test settings for your blockchain
--validator-manager-owner string EVM address that controls Validator Manager Owner
--vm string file path of custom vm to use. alias to custom-vm-path
--vm-version string version of Subnet-EVM template to use
--warp generate a vm with warp support (needed for ICM) (default true)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### delete
The blockchain delete command deletes an existing blockchain configuration.
**Usage:**
```bash
avalanche blockchain delete [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for delete
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### deploy
The blockchain deploy command deploys your Blockchain configuration to a Local Network, Fuji Testnet, a DevNet, or Mainnet.
At the end of the call, the command prints the RPC URL you can use to interact with the L1 / Subnet.
When deploying an L1, Avalanche-CLI lets you use your local machine as a bootstrap validator, so you don't need to run separate Avalanche nodes.
This is controlled by the --use-local-machine flag (enabled by default on Local Network).
If --use-local-machine is set to true:
* Avalanche-CLI will call CreateSubnetTx, CreateChainTx, and ConvertSubnetToL1Tx, then sync the local machine bootstrap validator to the L1 and initialize the Validator Manager contract on the L1
If using your own Avalanche Nodes as bootstrap validators:
* Avalanche-CLI will call CreateSubnetTx, CreateChainTx, ConvertSubnetToL1Tx
* You will have to sync your bootstrap validators to the L1
* Next, Initialize Validator Manager contract on the L1 using avalanche contract initValidatorManager \[L1\_Name]
Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent
attempts to deploy the same Blockchain to the same network (Local Network, Fuji, Mainnet) aren't
allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call
avalanche network clean to reset all deployed chain state. Subsequent local deploys
redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks,
so you can take your locally tested Blockchain and deploy it on Fuji or Mainnet.
**Usage:**
```bash
avalanche blockchain deploy [subcommand] [flags]
```
**Flags:**
```bash
--convert-only avoid node track, restart and poa manager setup
-e, --ewoq use ewoq key [local/devnet deploy only]
-h, --help help for deploy
-k, --key string select the key to use [fuji/devnet deploy only]
-g, --ledger use ledger instead of key
--ledger-addrs strings use the given ledger addresses
--mainnet-chain-id uint32 use different ChainID for mainnet deployment
--output-tx-path string file path of the blockchain creation tx (for multi-sig signing)
-u, --subnet-id string do not create a subnet, deploy the blockchain into the given subnet id
--subnet-only command stops after CreateSubnetTx and returns SubnetID
Network Flags (Select One):
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--fuji operate on fuji (alias to `testnet`)
--local operate on a local network
--mainnet operate on mainnet
--testnet operate on testnet (alias to `fuji`)
Bootstrap Validators Flags:
--balance float64 set the AVAX balance of each bootstrap validator that will be used for continuous fee on P-Chain (setting balance=1 means 1 AVAX for each bootstrap validator)
--bootstrap-endpoints stringSlice take validator node info from the given endpoints
--bootstrap-filepath string JSON file path that provides details about bootstrap validators
--change-owner-address string address that will receive change if node is no longer L1 validator
--generate-node-id set to true to generate Node IDs for bootstrap validators when none are set up. Use these Node IDs to set up your Avalanche Nodes.
--num-bootstrap-validators int number of bootstrap validators to set up in a sovereign L1
Local Machine Flags (Use Local Machine as Bootstrap Validator):
--avalanchego-path string use this avalanchego binary path
--avalanchego-version string use this version of avalanchego (ex: v1.17.12)
--http-port uintSlice http port for node(s)
--partial-sync set primary network partial sync for new validators
--staking-cert-key-path stringSlice path to provided staking cert key for node(s)
--staking-port uintSlice staking port for node(s)
--staking-signer-key-path stringSlice path to provided staking signer key for node(s)
--staking-tls-key-path stringSlice path to provided staking TLS key for node(s)
--use-local-machine use local machine as a blockchain validator
Local Network Flags:
--avalanchego-path string use this avalanchego binary path
--avalanchego-version string use this version of avalanchego (ex: v1.17.12)
--num-nodes uint32 number of nodes to be created on local network deploy
Non Subnet-Only-Validators (Non-SOV) Flags:
--auth-keys stringSlice control keys that will be used to authenticate chain creation
--control-keys stringSlice addresses that may make blockchain changes
--same-control-key use the fee-paying key as control key
--threshold uint32 required number of control key signatures to make blockchain changes
ICM Flags:
--cchain-funding-key string key to be used to fund relayer account on cchain
--cchain-icm-key string key to be used to pay for ICM deploys on C-Chain
--icm-key string key to be used to pay for ICM deploys
--icm-version string ICM version to deploy
--relay-cchain relay C-Chain as source and destination
--relayer-allow-private-ips allow relayer to connect to private IPs
--relayer-amount float64 automatically fund relayer fee payments with the given amount
--relayer-key string key to be used by default both for rewards and to pay fees
--relayer-log-level string log level to be used for relayer logs
--relayer-path string relayer binary to use
--relayer-version string relayer version to deploy
--skip-icm-deploy Skip automatic ICM deploy
--skip-relayer skip relayer deploy
--teleporter-messenger-contract-address-path string path to an ICM Messenger contract address file
--teleporter-messenger-deployer-address-path string path to an ICM Messenger deployer address file
--teleporter-messenger-deployer-tx-path string path to an ICM Messenger deployer tx file
--teleporter-registry-bytecode-path string path to an ICM Registry bytecode file
Proof Of Stake Flags:
--pos-maximum-stake-amount uint64 maximum stake amount
--pos-maximum-stake-multiplier uint8 maximum stake multiplier
--pos-minimum-delegation-fee uint16 minimum delegation fee
--pos-minimum-stake-amount uint64 minimum stake amount
--pos-minimum-stake-duration uint64 minimum stake duration (in seconds)
--pos-weight-to-value-factor uint64 weight to value factor
Signature Aggregator Flags:
--aggregator-log-level string log level to use with signature aggregator
--aggregator-log-to-stdout use stdout for signature aggregator logs
```
### describe
The blockchain describe command prints the details of a Blockchain configuration to the console.
By default, the command prints a summary of the configuration. By providing the --genesis
flag, the command instead prints out the raw genesis file.
**Usage:**
```bash
avalanche blockchain describe [subcommand] [flags]
```
**Flags:**
```bash
-g, --genesis Print the genesis to the console directly instead of the summary
-h, --help help for describe
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### export
The blockchain export command writes the details of an existing Blockchain deploy to a file.
The command prompts for an output path. You can also provide one with
the --output flag.
**Usage:**
```bash
avalanche blockchain export [subcommand] [flags]
```
**Flags:**
```bash
--custom-vm-branch string custom vm branch
--custom-vm-build-script string custom vm build-script
--custom-vm-repo-url string custom vm repository url
-h, --help help for export
-o, --output string write the export data to the provided file path
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### import
Import blockchain configurations into avalanche-cli.
This command suite supports importing from a file created on another computer,
or importing from blockchains running public networks
(e.g. created manually or with the deprecated subnet-cli)
**Usage:**
```bash
avalanche blockchain import [subcommand] [flags]
```
**Subcommands:**
* [`file`](#avalanche-blockchain-import-file): The blockchain import command will import a blockchain configuration from a file or a git repository.
To import from a file, you can optionally provide the path as a command-line argument.
Alternatively, running the command without any arguments triggers an interactive wizard.
To import from a repository, go through the wizard. By default, an imported Blockchain doesn't
overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
* [`public`](#avalanche-blockchain-import-public): The blockchain import public command imports a Blockchain configuration from a running network.
By default, an imported Blockchain
doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
**Flags:**
```bash
-h, --help help for import
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### import file
The blockchain import command will import a blockchain configuration from a file or a git repository.
To import from a file, you can optionally provide the path as a command-line argument.
Alternatively, running the command without any arguments triggers an interactive wizard.
To import from a repository, go through the wizard. By default, an imported Blockchain doesn't
overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
**Usage:**
```bash
avalanche blockchain import file [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string the blockchain configuration to import from the provided repo
--branch string the repo branch to use if downloading a new repo
-f, --force overwrite the existing configuration if one exists
-h, --help help for file
--repo string the repo to import (ex: ava-labs/avalanche-plugins-core) or url to download the repo from
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### import public
The blockchain import public command imports a Blockchain configuration from a running network.
By default, an imported Blockchain
doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
**Usage:**
```bash
avalanche blockchain import public [subcommand] [flags]
```
**Flags:**
```bash
--blockchain-id string the blockchain ID
--cluster string operate on the given cluster
--custom use a custom VM template
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--evm import a subnet-evm
--force overwrite the existing configuration if one exists
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for public
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-url string [optional] URL of an already running validator
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### join
The blockchain join command configures your validator node to begin validating a new Blockchain.
To complete this process, you must have access to the machine running your validator. If the
CLI is running on the same machine as your validator, it can generate or update your node's
config file automatically. Alternatively, the command can print the necessary instructions
to update your node manually. To complete the validation process, the Blockchain's admins must add
your validator's NodeID to the Blockchain's allow list by calling addValidator.
After you update your validator's config, you need to restart your validator manually. If
you provide the --avalanchego-config flag, this command attempts to edit the config file
at that path.
This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
**Usage:**
```bash
avalanche blockchain join [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-config string file path of the avalanchego config file
--cluster string operate on the given cluster
--data-dir string path of avalanchego's data dir directory
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force-write if true, skip the prompt to overwrite the config file
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for join
-k, --key string select the key to use [fuji only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-id string set the NodeID of the validator to check
--plugin-dir string file path of avalanchego's plugin directory
--print if true, print the manual config without prompting
--stake-amount uint amount of tokens to stake on validator
--staking-period duration how long validator validates for after start time
--start-time string start time that validator starts validating
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
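Assuming the CLI runs on the same machine as the validator, a join invocation could be sketched as follows; the config path is hypothetical and `$NODE_ID` stands in for your validator's NodeID:

```bash
# Configure this node to validate the Blockchain by editing the given
# avalanchego config file; restart the validator manually afterwards.
avalanche blockchain join \
  --fuji \
  --avalanchego-config "$HOME/.avalanchego/config.json" \
  --node-id "$NODE_ID"
```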
### list
The blockchain list command prints the names of all created Blockchain configurations. Without any flags,
it prints some general, static information about each Blockchain. With the --deployed flag, the command
shows additional information, including the VMID, BlockchainID, and SubnetID.
**Usage:**
```bash
avalanche blockchain list [subcommand] [flags]
```
**Flags:**
```bash
--deployed show additional deploy information
-h, --help help for list
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
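For example, to include deploy information in the listing:

```bash
# Print all created Blockchain configurations together with their
# VMID, BlockchainID, and SubnetID.
avalanche blockchain list --deployed
```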
### publish
The blockchain publish command publishes the Blockchain's VM to a repository.
**Usage:**
```bash
avalanche blockchain publish [subcommand] [flags]
```
**Flags:**
```bash
--alias string We publish to a remote repo, but identify the repo locally under a user-provided alias (e.g. myrepo).
--force If true, ignores if the blockchain has been published in the past, and attempts a forced publish.
-h, --help help for publish
--no-repo-path string Do not let the tool manage file publishing, but have it only generate the files and put them in the location given by this flag.
--repo-url string The URL of the repo where we are publishing
--subnet-file-path string Path to the Blockchain description file. If not given, a prompting sequence will be initiated.
--vm-file-path string Path to the VM description file. If not given, a prompting sequence will be initiated.
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### removeValidator
The blockchain removeValidator command stops a whitelisted validator from validating your deployed
Blockchain.
To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. The command
prompts for any missing values; you can bypass these prompts by providing the values with flags.
**Usage:**
```bash
avalanche blockchain removeValidator [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string log level to use with signature aggregator (default "Debug")
--aggregator-log-to-stdout use stdout for signature aggregator logs
--auth-keys strings (for non-SOV blockchain only) control keys that will be used to authenticate the removeValidator tx
--blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's removal (blockchain gas token)
--blockchain-key string CLI stored key to use to pay fees for completing the validator's removal (blockchain gas token)
--blockchain-private-key string private key to use to pay fees for completing the validator's removal (blockchain gas token)
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force force validator removal even if it's not getting rewarded
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for removeValidator
-k, --key string select the key to use [fuji deploy only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-endpoint string remove validator that responds to the given endpoint
--node-id string node-id of the validator
--output-tx-path string (for non-SOV blockchain only) file path of the removeValidator tx
--rpc string connect to validator manager at the given rpc endpoint
-t, --testnet fuji operate on testnet (alias to fuji)
--uptime uint validator's uptime in seconds. If not provided, it will be automatically calculated
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
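A non-interactive removal on a local network might be sketched as follows, with `$NODE_ID` standing in for the validator's NodeID:

```bash
# Remove the given validator from the deployed Blockchain on a local network.
avalanche blockchain removeValidator --local --node-id "$NODE_ID"
```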
### stats
The blockchain stats command prints validator statistics for the given Blockchain.
**Usage:**
```bash
avalanche blockchain stats [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for stats
-l, --local operate on a local network
-m, --mainnet operate on mainnet
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### upgrade
The blockchain upgrade command suite provides a collection of tools for
updating your developmental and deployed Blockchains.
**Usage:**
```bash
avalanche blockchain upgrade [subcommand] [flags]
```
**Subcommands:**
* [`apply`](#avalanche-blockchain-upgrade-apply): Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade.
For public networks (Fuji Testnet or Mainnet), to complete this process,
you must have access to the machine running your validator.
If the CLI is running on the same machine as your validator, it can manipulate your node's
configuration automatically. Alternatively, the command can print the necessary instructions
to upgrade your node manually.
After you update your validator's configuration, you need to restart your validator manually.
If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path.
Refer to [https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs](https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs) for related documentation.
* [`export`](#avalanche-blockchain-upgrade-export): Export the upgrade bytes file to a location of choice on disk
* [`generate`](#avalanche-blockchain-upgrade-generate): The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It
guides the user through the process using an interactive wizard.
* [`import`](#avalanche-blockchain-upgrade-import): Import the upgrade bytes file into the local environment
* [`print`](#avalanche-blockchain-upgrade-print): Print the upgrade.json file content
* [`vm`](#avalanche-blockchain-upgrade-vm): The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command
can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet.
The command walks the user through an interactive wizard. The user can skip the wizard by providing
command line flags.
**Flags:**
```bash
-h, --help help for upgrade
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade apply
Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade.
For public networks (Fuji Testnet or Mainnet), to complete this process,
you must have access to the machine running your validator.
If the CLI is running on the same machine as your validator, it can manipulate your node's
configuration automatically. Alternatively, the command can print the necessary instructions
to upgrade your node manually.
After you update your validator's configuration, you need to restart your validator manually.
If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path.
Refer to [https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs](https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs) for related documentation.
**Usage:**
```bash
avalanche blockchain upgrade apply [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-chain-config-dir string avalanchego's chain config file directory (default "/home/runner/.avalanchego/chains")
--config create upgrade config for future subnet deployments (same as generate)
--force If true, don't prompt for confirmation of timestamps in the past
--fuji fuji apply upgrade existing fuji deployment (alias for `testnet`)
-h, --help help for apply
--local local apply upgrade existing local deployment
--mainnet mainnet apply upgrade existing mainnet deployment
--print if true, print the manual config without prompting (for public networks only)
--testnet testnet apply upgrade existing testnet deployment (alias for `fuji`)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
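For instance, applying previously generated upgrade bytes to a local deployment could look like:

```bash
# Trigger the network upgrade on the locally running deployment.
avalanche blockchain upgrade apply --local
```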
#### upgrade export
Export the upgrade bytes file to a location of choice on disk
**Usage:**
```bash
avalanche blockchain upgrade export [subcommand] [flags]
```
**Flags:**
```bash
--force If true, overwrite a possibly existing file without prompting
-h, --help help for export
--upgrade-filepath string Export upgrade bytes file to location of choice on disk
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade generate
The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It
guides the user through the process using an interactive wizard.
**Usage:**
```bash
avalanche blockchain upgrade generate [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for generate
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade import
Import the upgrade bytes file into the local environment
**Usage:**
```bash
avalanche blockchain upgrade import [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for import
--upgrade-filepath string Import upgrade bytes file into local environment
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade print
Print the upgrade.json file content
**Usage:**
```bash
avalanche blockchain upgrade print [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for print
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade vm
The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command
can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet.
The command walks the user through an interactive wizard. The user can skip the wizard by providing
command line flags.
**Usage:**
```bash
avalanche blockchain upgrade vm [subcommand] [flags]
```
**Flags:**
```bash
--binary string Upgrade to custom binary
--config upgrade config for future subnet deployments
--fuji fuji upgrade existing fuji deployment (alias for `testnet`)
-h, --help help for vm
--latest upgrade to latest version
--local local upgrade existing local deployment
--mainnet mainnet upgrade existing mainnet deployment
--plugin-dir string plugin directory to automatically upgrade VM
--print print instructions for upgrading
--testnet testnet upgrade existing testnet deployment (alias for `fuji`)
--version string Upgrade to custom version
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
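Skipping the wizard, a local VM upgrade to the newest release might be sketched as:

```bash
# Upgrade the local deployment's VM binary to the latest available version.
avalanche blockchain upgrade vm --local --latest
```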
### validators
The blockchain validators command lists the validators of a blockchain and provides
several statistics about them.
**Usage:**
```bash
avalanche blockchain validators [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for validators
-l, --local operate on a local network
-m, --mainnet operate on mainnet
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### vmid
The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain.
**Usage:**
```bash
avalanche blockchain vmid [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for vmid
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche config
Customize configuration for Avalanche-CLI
**Usage:**
```bash
avalanche config [subcommand] [flags]
```
**Subcommands:**
* [`authorize-cloud-access`](#avalanche-config-authorize-cloud-access): set preferences to authorize access to cloud resources
* [`metrics`](#avalanche-config-metrics): set user metrics collection preferences
* [`migrate`](#avalanche-config-migrate): migrates the old \~/.avalanche-cli.json and \~/.avalanche-cli/config to \~/.avalanche-cli/config.json.
* [`snapshotsAutoSave`](#avalanche-config-snapshotsautosave): set user preference for auto-saving local network snapshots
* [`update`](#avalanche-config-update): set user preference for update checks
**Flags:**
```bash
-h, --help help for config
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### authorize-cloud-access
set preferences to authorize access to cloud resources
**Usage:**
```bash
avalanche config authorize-cloud-access [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for authorize-cloud-access
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### metrics
set user metrics collection preferences
**Usage:**
```bash
avalanche config metrics [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for metrics
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### migrate
The migrate command migrates the old \~/.avalanche-cli.json and \~/.avalanche-cli/config to \~/.avalanche-cli/config.json.
**Usage:**
```bash
avalanche config migrate [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for migrate
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### snapshotsAutoSave
set user preference for auto-saving local network snapshots
**Usage:**
```bash
avalanche config snapshotsAutoSave [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for snapshotsAutoSave
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### update
set user preference for update checks
**Usage:**
```bash
avalanche config update [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for update
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche contract
The contract command suite provides a collection of tools for deploying
and interacting with smart contracts.
**Usage:**
```bash
avalanche contract [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-contract-deploy): The contract command suite provides a collection of tools for deploying
smart contracts.
* [`initValidatorManager`](#avalanche-contract-initvalidatormanager): Initializes a Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up its initial validator set. For more info on Validator Manager, please head to [https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager](https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager)
**Flags:**
```bash
-h, --help help for contract
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### deploy
The contract command suite provides a collection of tools for deploying
smart contracts.
**Usage:**
```bash
avalanche contract deploy [subcommand] [flags]
```
**Subcommands:**
* [`erc20`](#avalanche-contract-deploy-erc20): Deploy an ERC20 token into a given Network and Blockchain
**Flags:**
```bash
-h, --help help for deploy
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### deploy erc20
Deploy an ERC20 token into a given Network and Blockchain
**Usage:**
```bash
avalanche contract deploy erc20 [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string deploy the ERC20 contract into the given CLI blockchain
--blockchain-id string deploy the ERC20 contract into the given blockchain ID/Alias
--c-chain deploy the ERC20 contract into C-Chain
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--funded string set the funded address
--genesis-key use genesis allocated key as contract deployer
-h, --help help for erc20
--key string CLI stored key to use as contract deployer
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--private-key string private key to use as contract deployer
--rpc string deploy the contract into the given rpc endpoint
--supply uint set the token supply
--symbol string set the token symbol
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
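As a sketch, deploying a test token to a local network with a CLI-stored key might look like the following; the key name, symbol, supply, and `$FUNDED_ADDR` are all hypothetical values:

```bash
# Deploy an ERC20 token on a local network, minting the supply to $FUNDED_ADDR.
avalanche contract deploy erc20 \
  --local \
  --key mykey \
  --symbol TST \
  --supply 1000000 \
  --funded "$FUNDED_ADDR"
```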
### initValidatorManager
Initializes a Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up its initial validator set. For more info on Validator Manager, please head to [https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager](https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager)
**Usage:**
```bash
avalanche contract initValidatorManager [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string log level to use with signature aggregator (default "Debug")
--aggregator-log-to-stdout dump signature aggregator logs to stdout
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key as contract deployer
-h, --help help for initValidatorManager
--key string CLI stored key to use as contract deployer
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--pos-maximum-stake-amount uint (PoS only) maximum stake amount (default 1000)
--pos-maximum-stake-multiplier uint8 (PoS only) maximum stake multiplier (default 1)
--pos-minimum-delegation-fee uint16 (PoS only) minimum delegation fee (default 1)
--pos-minimum-stake-amount uint (PoS only) minimum stake amount (default 1)
--pos-minimum-stake-duration uint (PoS only) minimum stake duration (in seconds) (default 100)
--pos-reward-calculator-address string (PoS only) initialize the ValidatorManager with reward calculator address
--pos-weight-to-value-factor uint (PoS only) weight to value factor (default 1)
--private-key string private key to use as contract deployer
--rpc string deploy the contract into the given rpc endpoint
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche help
Help provides help for any command in the application.
Simply type avalanche help \[path to command] for full details.
**Usage:**
```bash
avalanche help [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for help
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche icm
The icm command suite provides a collection of tools for interacting
with ICM messenger contracts.
**Usage:**
```bash
avalanche icm [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-icm-deploy): Deploys ICM Messenger and Registry into a given L1.
* [`sendMsg`](#avalanche-icm-sendmsg): Sends an ICM message between two blockchains and waits for its reception.
**Flags:**
```bash
-h, --help help for icm
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### deploy
Deploys ICM Messenger and Registry into a given L1.
For Local Networks, it also deploys into C-Chain.
**Usage:**
```bash
avalanche icm deploy [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string deploy ICM into the given CLI blockchain
--blockchain-id string deploy ICM into the given blockchain ID/Alias
--c-chain deploy ICM into C-Chain
--cchain-key string key to be used to pay fees to deploy ICM to C-Chain
--cluster string operate on the given cluster
--deploy-messenger deploy ICM Messenger (default true)
--deploy-registry deploy ICM Registry (default true)
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force-registry-deploy deploy ICM Registry even if Messenger has already been deployed
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key to fund ICM deploy
-h, --help help for deploy
--include-cchain deploy ICM also to C-Chain
--key string CLI stored key to use to fund ICM deploy
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--messenger-contract-address-path string path to a messenger contract address file
--messenger-deployer-address-path string path to a messenger deployer address file
--messenger-deployer-tx-path string path to a messenger deployer tx file
--private-key string private key to use to fund ICM deploy
--registry-bytecode-path string path to a registry bytecode file
--rpc-url string use the given RPC URL to connect to the subnet
-t, --testnet fuji operate on testnet (alias to fuji)
--version string version to deploy (default "latest")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
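For example, deploying the Messenger and Registry to a CLI-managed blockchain on a local network, funded by the genesis-allocated key, might be sketched as (the blockchain name is hypothetical):

```bash
# Deploy ICM Messenger and Registry to the blockchain named "mychain".
avalanche icm deploy --local --blockchain mychain --genesis-key
```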
### sendMsg
Sends an ICM message between two blockchains and waits for its reception.
**Usage:**
```bash
avalanche icm sendMsg [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--dest-rpc string use the given destination blockchain rpc endpoint
--destination-address string deliver the message to the given contract destination address
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key as message originator and to pay source blockchain fees
-h, --help help for sendMsg
--hex-encoded given message is hex encoded
--key string CLI stored key to use as message originator and to pay source blockchain fees
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--private-key string private key to use as message originator and to pay source blockchain fees
--source-rpc string use the given source blockchain rpc endpoint
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
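A sketch of sending a message on a local network follows; the RPC endpoints are placeholders, and any values not given as flags are prompted for interactively:

```bash
# Send an ICM message from the source to the destination blockchain and
# wait for its delivery, paying fees with the genesis-allocated key.
avalanche icm sendMsg \
  --local \
  --source-rpc "$SOURCE_RPC" \
  --dest-rpc "$DEST_RPC" \
  --genesis-key
```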
## avalanche ictt
The ictt command suite provides tools to deploy and manage Interchain Token Transferrers.
**Usage:**
```bash
avalanche ictt [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-ictt-deploy): Deploys a Token Transferrer into a given Network and Subnets
**Flags:**
```bash
-h, --help help for ictt
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### deploy
Deploys a Token Transferrer into a given Network and Subnets
**Usage:**
```bash
avalanche ictt deploy [subcommand] [flags]
```
**Flags:**
```bash
--c-chain-home set the Transferrer's Home Chain into C-Chain
--c-chain-remote set the Transferrer's Remote Chain into C-Chain
--cluster string operate on the given cluster
--deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token
--deploy-native-home deploy a Transferrer Home for the Chain's Native Token
--deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for deploy
--home-blockchain string set the Transferrer's Home Chain into the given CLI blockchain
--home-genesis-key use genesis allocated key to deploy Transferrer Home
--home-key string CLI stored key to use to deploy Transferrer Home
--home-private-key string private key to use to deploy Transferrer Home
--home-rpc string use the given RPC URL to connect to the home blockchain
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain
--remote-genesis-key use genesis allocated key to deploy Transferrer Remote
--remote-key string CLI stored key to use to deploy Transferrer Remote
--remote-private-key string private key to use to deploy Transferrer Remote
--remote-rpc string use the given RPC URL to connect to the remote blockchain
--remote-token-decimals uint8 use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)]
--remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis
-t, --testnet fuji operate on testnet (alias to fuji)
--use-home string use the given Transferrer's Home Address
--version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
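As an illustration, a native-token Transferrer with its Home on C-Chain and its Remote on a CLI-managed blockchain might be sketched as (the blockchain name is hypothetical):

```bash
# Deploy a Transferrer Home for the C-Chain's native token and a matching
# Remote on the blockchain named "mychain".
avalanche ictt deploy \
  --local \
  --c-chain-home \
  --deploy-native-home \
  --remote-blockchain mychain \
  --deploy-native-remote
```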
## avalanche interchain
The interchain command suite provides a collection of tools to
set and manage interoperability between blockchains.
**Usage:**
```bash
avalanche interchain [subcommand] [flags]
```
**Subcommands:**
* [`messenger`](#avalanche-interchain-messenger): The messenger command suite provides a collection of tools for interacting
with ICM messenger contracts.
* [`relayer`](#avalanche-interchain-relayer): The relayer command suite provides a collection of tools for deploying
and configuring ICM relayers.
* [`tokenTransferrer`](#avalanche-interchain-tokentransferrer): The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers.
**Flags:**
```bash
-h, --help help for interchain
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### messenger
The messenger command suite provides a collection of tools for interacting
with ICM messenger contracts.
**Usage:**
```bash
avalanche interchain messenger [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-interchain-messenger-deploy): Deploys ICM Messenger and Registry into a given L1.
* [`sendMsg`](#avalanche-interchain-messenger-sendmsg): Sends an ICM message between two blockchains and waits for its reception.
**Flags:**
```bash
-h, --help help for messenger
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### messenger deploy
Deploys ICM Messenger and Registry into a given L1.
For Local Networks, it also deploys into C-Chain.
**Usage:**
```bash
avalanche interchain messenger deploy [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string deploy ICM into the given CLI blockchain
--blockchain-id string deploy ICM into the given blockchain ID/Alias
--c-chain deploy ICM into C-Chain
--cchain-key string key to be used to pay fees to deploy ICM to C-Chain
--cluster string operate on the given cluster
--deploy-messenger deploy ICM Messenger (default true)
--deploy-registry deploy ICM Registry (default true)
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force-registry-deploy deploy ICM Registry even if Messenger has already been deployed
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key to fund ICM deploy
-h, --help help for deploy
--include-cchain deploy ICM also to C-Chain
--key string CLI stored key to use to fund ICM deploy
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--messenger-contract-address-path string path to a messenger contract address file
--messenger-deployer-address-path string path to a messenger deployer address file
--messenger-deployer-tx-path string path to a messenger deployer tx file
--private-key string private key to use to fund ICM deploy
--registry-bytecode-path string path to a registry bytecode file
--rpc-url string use the given RPC URL to connect to the subnet
-t, --testnet fuji operate on testnet (alias to fuji)
--version string version to deploy (default "latest")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
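For instance, a sketch of deploying the messenger and registry to a CLI-managed blockchain on a local network, paying fees with a genesis-allocated key (the blockchain name `myblockchain` is a placeholder):
```bash
# Deploy the ICM Messenger and Registry to a local-network blockchain.
# "myblockchain" is a placeholder for a blockchain created with the CLI.
avalanche interchain messenger deploy \
  --local \
  --blockchain myblockchain \
  --genesis-key
```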
#### messenger sendMsg
Sends an ICM message between two blockchains and waits for its reception.
**Usage:**
```bash
avalanche interchain messenger sendMsg [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--dest-rpc string use the given destination blockchain rpc endpoint
--destination-address string deliver the message to the given contract destination address
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key as message originator and to pay source blockchain fees
-h, --help help for sendMsg
--hex-encoded given message is hex encoded
--key string CLI stored key to use as message originator and to pay source blockchain fees
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--private-key string private key to use as message originator and to pay source blockchain fees
--source-rpc string use the given source blockchain rpc endpoint
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
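An illustrative invocation, assuming two local blockchains reachable at placeholder RPC URLs and a stored key `myKey`; the message payload is assumed here to be passed as a trailing positional argument:
```bash
# Send a message from one blockchain to another and wait for delivery.
# RPC URLs, the key name, and the positional payload are placeholders.
avalanche interchain messenger sendMsg \
  --key myKey \
  --source-rpc http://127.0.0.1:9650/ext/bc/chain1/rpc \
  --dest-rpc http://127.0.0.1:9650/ext/bc/chain2/rpc \
  "hello"
```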
### relayer
The relayer command suite provides a collection of tools for deploying
and configuring ICM relayers.
**Usage:**
```bash
avalanche interchain relayer [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-interchain-relayer-deploy): Deploys an ICM Relayer for the given Network.
* [`logs`](#avalanche-interchain-relayer-logs): Shows pretty formatted AWM relayer logs
* [`start`](#avalanche-interchain-relayer-start): Starts AWM relayer on the specified network (Currently only for local network).
* [`stop`](#avalanche-interchain-relayer-stop): Stops AWM relayer on the specified network (Currently only for local network, cluster).
**Flags:**
```bash
-h, --help help for relayer
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### relayer deploy
Deploys an ICM Relayer for the given Network.
**Usage:**
```bash
avalanche interchain relayer deploy [subcommand] [flags]
```
**Flags:**
```bash
--allow-private-ips allow relayer to connect to private IPs (default true)
--amount float automatically fund l1s fee payments with the given amount
--bin-path string use the given relayer binary
--blockchain-funding-key string key to be used to fund relayer account on all l1s
--blockchains strings blockchains to relay as source and destination
--cchain relay C-Chain as source and destination
--cchain-amount float automatically fund cchain fee payments with the given amount
--cchain-funding-key string key to be used to fund relayer account on cchain
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for deploy
--key string key to be used by default both for rewards and to pay fees
-l, --local operate on a local network
--log-level string log level to use for relayer logs
-t, --testnet fuji operate on testnet (alias to fuji)
--version string version to deploy (default "latest-prerelease")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--skip-update-check skip check for new versions
```
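As a sketch, deploying a relayer on the local network that relays between the C-Chain and one CLI blockchain (`myblockchain` is a placeholder), funding its fee payments automatically:
```bash
# Deploy a local relayer for the C-Chain and a placeholder blockchain,
# auto-funding the relayer's fee-paying accounts on each L1.
avalanche interchain relayer deploy \
  --local \
  --cchain \
  --blockchains myblockchain \
  --amount 1.0
```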
#### relayer logs
Shows pretty formatted AWM relayer logs
**Usage:**
```bash
avalanche interchain relayer logs [subcommand] [flags]
```
**Flags:**
```bash
--endpoint string use the given endpoint for network operations
--first uint output first N log lines
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for logs
--last uint output last N log lines
-l, --local operate on a local network
--raw raw logs output
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### relayer start
Starts AWM relayer on the specified network (Currently only for local network).
**Usage:**
```bash
avalanche interchain relayer start [subcommand] [flags]
```
**Flags:**
```bash
--bin-path string use the given relayer binary
--cluster string operate on the given cluster
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for start
-l, --local operate on a local network
-t, --testnet fuji operate on testnet (alias to fuji)
--version string version to use (default "latest-prerelease")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### relayer stop
Stops AWM relayer on the specified network (Currently only for local network, cluster).
**Usage:**
```bash
avalanche interchain relayer stop [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for stop
-l, --local operate on a local network
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### tokenTransferrer
The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers.
**Usage:**
```bash
avalanche interchain tokenTransferrer [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-interchain-tokentransferrer-deploy): Deploys a Token Transferrer into a given Network and Subnets
**Flags:**
```bash
-h, --help help for tokenTransferrer
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### tokenTransferrer deploy
Deploys a Token Transferrer into a given Network and Subnets
**Usage:**
```bash
avalanche interchain tokenTransferrer deploy [subcommand] [flags]
```
**Flags:**
```bash
--c-chain-home set the Transferrer's Home Chain into C-Chain
--c-chain-remote set the Transferrer's Remote Chain into C-Chain
--cluster string operate on the given cluster
--deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token
--deploy-native-home deploy a Transferrer Home for the Chain's Native Token
--deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for deploy
--home-blockchain string set the Transferrer's Home Chain into the given CLI blockchain
--home-genesis-key use genesis allocated key to deploy Transferrer Home
--home-key string CLI stored key to use to deploy Transferrer Home
--home-private-key string private key to use to deploy Transferrer Home
--home-rpc string use the given RPC URL to connect to the home blockchain
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain
--remote-genesis-key use genesis allocated key to deploy Transferrer Remote
--remote-key string CLI stored key to use to deploy Transferrer Remote
--remote-private-key string private key to use to deploy Transferrer Remote
--remote-rpc string use the given RPC URL to connect to the remote blockchain
--remote-token-decimals uint8 use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)]
--remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis
-t, --testnet fuji operate on testnet (alias to fuji)
--use-home string use the given Transferrer's Home Address
--version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche key
The key command suite provides a collection of tools for creating and managing
signing keys. You can use these keys to deploy Subnets to the Fuji Testnet,
but these keys are NOT suitable to use in production environments. DO NOT use
these keys on Mainnet.
To get started, use the key create command.
**Usage:**
```bash
avalanche key [subcommand] [flags]
```
**Subcommands:**
* [`create`](#avalanche-key-create): The key create command generates a new private key to use for creating and controlling
test Subnets. Keys generated by this command are NOT cryptographically secure enough to
use in production environments. DO NOT use these keys on Mainnet.
The command works by generating a secp256k1 key and storing it with the provided keyName. You
can use this key in other commands by providing this keyName.
If you'd like to import an existing key instead of generating one from scratch, provide the
\--file flag.
* [`delete`](#avalanche-key-delete): The key delete command deletes an existing signing key.
To delete a key, provide the keyName. The command prompts for confirmation
before deleting the key. To skip the confirmation, provide the --force flag.
* [`export`](#avalanche-key-export): The key export command exports a created signing key. You can use an exported key in other
applications or import it into another instance of Avalanche-CLI.
By default, the tool writes the hex encoded key to stdout. If you provide the --output
flag, the command writes the key to a file of your choosing.
* [`list`](#avalanche-key-list): The key list command prints information for all stored signing
keys or for the ledger addresses associated to certain indices.
* [`transfer`](#avalanche-key-transfer): The key transfer command transfers funds between stored keys or ledger addresses.
**Flags:**
```bash
-h, --help help for key
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### create
The key create command generates a new private key to use for creating and controlling
test Subnets. Keys generated by this command are NOT cryptographically secure enough to
use in production environments. DO NOT use these keys on Mainnet.
The command works by generating a secp256k1 key and storing it with the provided keyName. You
can use this key in other commands by providing this keyName.
If you'd like to import an existing key instead of generating one from scratch, provide the
\--file flag.
**Usage:**
```bash
avalanche key create [subcommand] [flags]
```
**Flags:**
```bash
--file string import the key from an existing key file
-f, --force overwrite an existing key with the same name
-h, --help help for create
--skip-balances do not query public network balances for an imported key
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
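For example (the key name `myKey` and the key file path are placeholders):
```bash
# Generate a new test key stored under the name "myKey".
avalanche key create myKey

# Or import an existing key file, overwriting any key with the same name.
avalanche key create myKey --file ./mykey.pk --force
```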
### delete
The key delete command deletes an existing signing key.
To delete a key, provide the keyName. The command prompts for confirmation
before deleting the key. To skip the confirmation, provide the --force flag.
**Usage:**
```bash
avalanche key delete [subcommand] [flags]
```
**Flags:**
```bash
-f, --force delete the key without confirmation
-h, --help help for delete
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### export
The key export command exports a created signing key. You can use an exported key in other
applications or import it into another instance of Avalanche-CLI.
By default, the tool writes the hex encoded key to stdout. If you provide the --output
flag, the command writes the key to a file of your choosing.
**Usage:**
```bash
avalanche key export [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for export
-o, --output string write the key to the provided file path
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
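For example, assuming a stored key named `myKey` (placeholder):
```bash
# Print the hex-encoded private key to stdout.
avalanche key export myKey

# Or write it to a file instead.
avalanche key export myKey -o ./mykey.hex
```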
### list
The key list command prints information for all stored signing
keys or for the ledger addresses associated to certain indices.
**Usage:**
```bash
avalanche key list [subcommand] [flags]
```
**Flags:**
```bash
-a, --all-networks list all network addresses
--blockchains strings blockchains to show information about (p=p-chain, x=x-chain, c=c-chain, and blockchain names) (default p,x,c)
-c, --cchain list C-Chain addresses (default true)
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for list
--keys strings list addresses for the given keys
-g, --ledger uints list ledger addresses for the given indices (default [])
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--pchain list P-Chain addresses (default true)
--subnets strings subnets to show information about (p=p-chain, x=x-chain, c=c-chain, and blockchain names) (default p,x,c)
-t, --testnet fuji operate on testnet (alias to fuji)
--tokens strings provide balance information for the given token contract addresses (EVM only) (default [Native])
--use-gwei use gwei for EVM balances
-n, --use-nano-avax use nano Avax for balances
--xchain list X-Chain addresses (default true)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
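For example, to list Fuji addresses for all stored keys, with balances shown in nano-AVAX:
```bash
# List addresses and balances on Fuji, denominated in nAVAX.
avalanche key list --fuji --use-nano-avax
```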
### transfer
The key transfer command transfers funds between stored keys or ledger addresses.
**Usage:**
```bash
avalanche key transfer [subcommand] [flags]
```
**Flags:**
```bash
-o, --amount float amount to send or receive (AVAX or TOKEN units)
--c-chain-receiver receive at C-Chain
--c-chain-sender send from C-Chain
--cluster string operate on the given cluster
-a, --destination-addr string destination address
--destination-key string key associated to a destination address
--destination-subnet string subnet where the funds will be sent (token transferrer experimental)
--destination-transferrer-address string token transferrer address at the destination subnet (token transferrer experimental)
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for transfer
-k, --key string key associated to the sender or receiver address
-i, --ledger uint32 ledger index associated to the sender or receiver address (default 32768)
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--origin-subnet string subnet where the funds belong (token transferrer experimental)
--origin-transferrer-address string token transferrer address at the origin subnet (token transferrer experimental)
--p-chain-receiver receive at P-Chain
--p-chain-sender send from P-Chain
--receiver-blockchain string receive at the given CLI blockchain
--receiver-blockchain-id string receive at the given blockchain ID/Alias
--sender-blockchain string send from the given CLI blockchain
--sender-blockchain-id string send from the given blockchain ID/Alias
-t, --testnet fuji operate on testnet (alias to fuji)
--x-chain-receiver receive at X-Chain
--x-chain-sender send from X-Chain
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
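A sketch of a P-Chain transfer on Fuji (the key name and destination address are placeholders):
```bash
# Send 0.5 AVAX from the stored key "myKey" on the P-Chain
# to a placeholder destination address on Fuji.
avalanche key transfer \
  --fuji \
  --key myKey \
  --p-chain-sender \
  --p-chain-receiver \
  --amount 0.5 \
  --destination-addr P-fuji1exampleaddressxxxxxxxxxxxxxxxxxxxx
```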
## avalanche network
The network command suite provides a collection of tools for managing local Blockchain
deployments.
When you deploy a Blockchain locally, it runs on a local, multi-node Avalanche network. The
blockchain deploy command starts this network in the background. This command suite allows you
to shutdown, restart, and clear that network.
This network currently supports multiple, concurrently deployed Blockchains.
**Usage:**
```bash
avalanche network [subcommand] [flags]
```
**Subcommands:**
* [`clean`](#avalanche-network-clean): The network clean command shuts down your local, multi-node network. All deployed Subnets
shutdown and delete their state. You can restart the network by deploying a new Subnet
configuration.
* [`start`](#avalanche-network-start): The network start command starts a local, multi-node Avalanche network on your machine.
By default, the command loads the default snapshot. If you provide the --snapshot-name
flag, the network loads that snapshot instead. The command fails if the local network is
already running.
* [`status`](#avalanche-network-status): The network status command prints whether or not a local Avalanche
network is running and some basic stats about the network.
* [`stop`](#avalanche-network-stop): The network stop command shuts down your local, multi-node network.
All deployed Subnets shutdown gracefully and save their state. If you provide the
\--snapshot-name flag, the network saves its state under this named snapshot. You can
reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the
network saves to the default snapshot, overwriting any existing state. You can reload the
default snapshot with network start.
**Flags:**
```bash
-h, --help help for network
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### clean
The network clean command shuts down your local, multi-node network. All deployed Subnets
shutdown and delete their state. You can restart the network by deploying a new Subnet
configuration.
**Usage:**
```bash
avalanche network clean [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for clean
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### start
The network start command starts a local, multi-node Avalanche network on your machine.
By default, the command loads the default snapshot. If you provide the --snapshot-name
flag, the network loads that snapshot instead. The command fails if the local network is
already running.
**Usage:**
```bash
avalanche network start [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-path string use this avalanchego binary path
--avalanchego-version string use this version of avalanchego (ex: v1.17.12) (default "latest-prerelease")
-h, --help help for start
--num-nodes uint32 number of nodes to be created on local network (default 2)
--relayer-path string use this relayer binary path
--relayer-version string use this relayer version (default "latest-prerelease")
--snapshot-name string name of snapshot to use to start the network from (default "default")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
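For example, to start a five-node local network from a previously saved snapshot (`mySnapshot` is a placeholder name):
```bash
# Start a local network with 5 nodes from the snapshot "mySnapshot".
avalanche network start --num-nodes 5 --snapshot-name mySnapshot
```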
### status
The network status command prints whether or not a local Avalanche
network is running and some basic stats about the network.
**Usage:**
```bash
avalanche network status [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for status
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### stop
The network stop command shuts down your local, multi-node network.
All deployed Subnets shutdown gracefully and save their state. If you provide the
\--snapshot-name flag, the network saves its state under this named snapshot. You can
reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the
network saves to the default snapshot, overwriting any existing state. You can reload the
default snapshot with network start.
**Usage:**
```bash
avalanche network stop [subcommand] [flags]
```
**Flags:**
```bash
--dont-save do not save snapshot, just stop the network
-h, --help help for stop
--snapshot-name string name of snapshot to use to save network state into (default "default")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
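For example (`mySnapshot` is a placeholder snapshot name):
```bash
# Stop the network and save its state under a named snapshot.
avalanche network stop --snapshot-name mySnapshot

# Or stop without saving any state.
avalanche network stop --dont-save
```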
## avalanche node
The node command suite provides a collection of tools for creating and maintaining
validators on Avalanche Network.
To get started, use the node create command wizard to walk through the
configuration to make your node a primary validator on Avalanche public network. You can use the
rest of the commands to maintain your node and make your node a Subnet Validator.
**Usage:**
```bash
avalanche node [subcommand] [flags]
```
**Subcommands:**
* [`addDashboard`](#avalanche-node-adddashboard): (ALPHA Warning) This command is currently in experimental mode.
The node addDashboard command adds custom dashboard to the Grafana monitoring dashboard for the
cluster.
* [`create`](#avalanche-node-create): (ALPHA Warning) This command is currently in experimental mode.
The node create command sets up a validator on a cloud server of your choice.
The validator will be validating the Avalanche Primary Network and Subnet
of your choice. By default, the command runs an interactive wizard. It
walks you through all the steps you need to set up a validator.
Once this command is completed, you will have to wait for the validator
to finish bootstrapping on the primary network before running further
commands on it, e.g. validating a Subnet. You can check the bootstrapping
status by running avalanche node status
The created node will be part of a group of validators called `clusterName`,
and users can run node commands with `clusterName` so that the command
applies to all nodes in the cluster.
* [`destroy`](#avalanche-node-destroy): (ALPHA Warning) This command is currently in experimental mode.
The node destroy command terminates all running nodes in cloud server and deletes all storage disks.
If there is a static IP address attached, it will be released.
* [`devnet`](#avalanche-node-devnet): (ALPHA Warning) This command is currently in experimental mode.
The node devnet command suite provides a collection of commands related to devnets.
You can check the updated status by calling avalanche node status `clusterName`
* [`export`](#avalanche-node-export): (ALPHA Warning) This command is currently in experimental mode.
The node export command exports a cluster configuration and its nodes' config to a text file.
If no file is specified, the configuration is printed to stdout.
Use --include-secrets to include keys in the export. In that case, keep the file secure, as it contains sensitive information.
A cluster configuration exported without secrets can be imported by another user using the node import command.
* [`import`](#avalanche-node-import): (ALPHA Warning) This command is currently in experimental mode.
The node import command imports cluster configuration and its nodes configuration from a text file
created from the node export command.
Prior to calling this command, run the node whitelist command to have your SSH public key and IP whitelisted by
the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster.
Please note that this imported cluster will be considered EXTERNAL by avalanche-cli, so some commands
affecting cloud nodes, like node create or node destroy, will not be applicable to it.
* [`list`](#avalanche-node-list): (ALPHA Warning) This command is currently in experimental mode.
The node list command lists all clusters together with their nodes.
* [`loadtest`](#avalanche-node-loadtest): (ALPHA Warning) This command is currently in experimental mode.
The node loadtest command suite starts and stops a load test for an existing devnet cluster.
* [`local`](#avalanche-node-local): The node local command suite provides a collection of commands related to local nodes
* [`refresh-ips`](#avalanche-node-refresh-ips): (ALPHA Warning) This command is currently in experimental mode.
The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster,
and updates the local node information used by CLI commands.
* [`resize`](#avalanche-node-resize): (ALPHA Warning) This command is currently in experimental mode.
The node resize command can change the amount of CPU, memory and disk space available for the cluster nodes.
* [`scp`](#avalanche-node-scp): (ALPHA Warning) This command is currently in experimental mode.
The node scp command securely copies files to and from nodes. A remote source or destination can be specified using the following format:
\[clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for the source files, like /tmp/\*.txt.
File transfers to the nodes are parallelized. If the source or destination is a cluster, the other side should be a local file path.
If both endpoints are remote, they must be nodes in the same cluster, not clusters themselves.
For example:
\$ avalanche node scp \[cluster1|node1]:/tmp/file.txt /tmp/file.txt
\$ avalanche node scp /tmp/file.txt \[cluster1|NodeID-XXXX]:/tmp/file.txt
\$ avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt
* [`ssh`](#avalanche-node-ssh): (ALPHA Warning) This command is currently in experimental mode.
The node ssh command executes a given command \[cmd] using ssh on all nodes in the cluster if a ClusterName is given.
If no command is given, it just prints the ssh command to be used to connect to each node in the cluster.
For a provided NodeID, InstanceID, or IP, the command \[cmd] will be executed on that node.
If no \[cmd] is provided for the node, it will open an ssh shell there.
* [`status`](#avalanche-node-status): (ALPHA Warning) This command is currently in experimental mode.
The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network.
If no cluster is given, defaults to node list behaviour.
To get the bootstrap status of a node with a Blockchain, use --blockchain flag
* [`sync`](#avalanche-node-sync): (ALPHA Warning) This command is currently in experimental mode.
The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain.
You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName`
* [`update`](#avalanche-node-update): (ALPHA Warning) This command is currently in experimental mode.
The node update command suite provides a collection of commands for nodes to update
their avalanchego or VM config.
You can check the status after update by calling avalanche node status
* [`upgrade`](#avalanche-node-upgrade): (ALPHA Warning) This command is currently in experimental mode.
The node upgrade command suite provides a collection of commands for nodes to upgrade
their avalanchego or VM version.
You can check the status after upgrade by calling avalanche node status
* [`validate`](#avalanche-node-validate): (ALPHA Warning) This command is currently in experimental mode.
The node validate command suite provides a collection of commands for nodes to join
the Primary Network and Subnets as validators.
If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command
will fail. You can check the bootstrap status by calling avalanche node status `clusterName`
* [`whitelist`](#avalanche-node-whitelist): (ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster.
If the --ip parameter is provided, the command adds the IP to the cloud security access rules, allowing it to access all nodes in the cluster via ssh or http.
If the --ssh parameter is provided, it also adds the given SSH public key to all nodes in the cluster.
If no parameters are provided, it detects the current user's IP automatically and whitelists it.
**Flags:**
```bash
-h, --help help for node
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### addDashboard
(ALPHA Warning) This command is currently in experimental mode.
The node addDashboard command adds custom dashboard to the Grafana monitoring dashboard for the
cluster.
**Usage:**
```bash
avalanche node addDashboard [subcommand] [flags]
```
**Flags:**
```bash
--add-grafana-dashboard string path to additional grafana dashboard json file
-h, --help help for addDashboard
--subnet string subnet that the dashboard is intended for (if any)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### create
(ALPHA Warning) This command is currently in experimental mode.
The node create command sets up a validator on a cloud server of your choice.
The validator will validate the Avalanche Primary Network and the Subnet
of your choice. By default, the command runs an interactive wizard that
walks you through all the steps you need to set up a validator.
Once this command completes, you will have to wait for the validator
to finish bootstrapping on the Primary Network before running further
commands on it, e.g. validating a Subnet. You can check the bootstrapping
status by running avalanche node status.
The created node will be part of a group of validators called `clusterName`,
and users can call node commands with `clusterName` so that the command
applies to all nodes in the cluster.
**Usage:**
```bash
avalanche node create [subcommand] [flags]
```
**Flags:**
```bash
--add-grafana-dashboard string path to additional grafana dashboard json file
--alternative-key-pair-name string key pair name to use if default one generates conflicts
--authorize-access authorize CLI to create cloud resources
--auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found
--avalanchego-version-from-subnet string install latest avalanchego version, that is compatible with the given subnet, on node/s
--aws create node/s in AWS cloud
--aws-profile string aws profile to use (default "default")
--aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000)
--aws-volume-size int AWS volume size in GB (default 1000)
--aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125)
--aws-volume-type string AWS volume type (default "gp3")
--bootstrap-ids stringArray nodeIDs of bootstrap nodes
--bootstrap-ips stringArray IP:port pairs of bootstrap nodes
--cluster string operate on the given cluster
--custom-avalanchego-version string install given avalanchego version on node/s
--devnet operate on a devnet network
--enable-monitoring set up Prometheus monitoring for created nodes. This option creates a separate monitoring cloud instance and incurs additional cost
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--gcp create node/s in GCP cloud
--gcp-credentials string use given GCP credentials
--gcp-project string use given GCP project
--genesis string path to genesis file
--grafana-pkg string use grafana pkg instead of apt repo(by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb
-h, --help help for create
--latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s
--latest-avalanchego-version install latest avalanchego release version on node/s
-m, --mainnet operate on mainnet
--node-type string cloud instance type. Use 'default' to use recommended default instance type
--num-apis ints number of API nodes(nodes without stake) to create in the new Devnet
--num-validators ints number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag
--partial-sync primary network partial sync (default true)
--public-http-port allow public access to avalanchego HTTP port
--region strings create node(s) in given region(s). Use comma to separate multiple regions
--ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used
-t, --testnet fuji operate on testnet (alias to fuji)
--upgrade string path to upgrade file
--use-ssh-agent use ssh agent(ex: Yubikey) for ssh auth
--use-static-ip attach static Public IP on cloud servers (default true)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
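For instance, a non-interactive run might look like the following (the cluster name, cloud choice, and region are illustrative; check `avalanche node create --help` for the exact positional arguments your CLI version expects):
```bash
avalanche node create myCluster \
  --aws \
  --region us-east-1 \
  --num-validators 3 \
  --node-type default \
  --latest-avalanchego-version
```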
### destroy
(ALPHA Warning) This command is currently in experimental mode.
The node destroy command terminates all running nodes in cloud server and deletes all storage disks.
If there is a static IP address attached, it will be released.
**Usage:**
```bash
avalanche node destroy [subcommand] [flags]
```
**Flags:**
```bash
--all destroy all existing clusters created by Avalanche CLI
--authorize-access authorize CLI to release cloud resources
-y, --authorize-all authorize all CLI requests
--authorize-remove authorize CLI to remove all local files related to cloud nodes
--aws-profile string aws profile to use (default "default")
-h, --help help for destroy
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
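A hypothetical invocation, assuming a cluster named `myCluster` was created earlier with this CLI:
```bash
avalanche node destroy myCluster --authorize-access --authorize-remove
```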
### devnet
(ALPHA Warning) This command is currently in experimental mode.
The node devnet command suite provides a collection of commands related to devnets.
You can check the updated status by calling avalanche node status `clusterName`
**Usage:**
```bash
avalanche node devnet [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-node-devnet-deploy): (ALPHA Warning) This command is currently in experimental mode.
The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it.
It saves the deploy info both locally and remotely.
* [`wiz`](#avalanche-node-devnet-wiz): (ALPHA Warning) This command is currently in experimental mode.
The node wiz command creates a devnet and deploys, syncs, and validates a subnet in it, creating the subnet if needed.
**Flags:**
```bash
-h, --help help for devnet
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### devnet deploy
(ALPHA Warning) This command is currently in experimental mode.
The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it.
It saves the deploy info both locally and remotely.
**Usage:**
```bash
avalanche node devnet deploy [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for deploy
--no-checks do not check for healthy status or rpc compatibility of nodes against subnet
--subnet-aliases strings additional subnet aliases to be used for RPC calls in addition to subnet blockchain name
--subnet-only only create a subnet
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### devnet wiz
(ALPHA Warning) This command is currently in experimental mode.
The node wiz command creates a devnet and deploys, syncs, and validates a subnet in it, creating the subnet if needed.
**Usage:**
```bash
avalanche node devnet wiz [subcommand] [flags]
```
**Flags:**
```bash
--add-grafana-dashboard string path to additional grafana dashboard json file
--alternative-key-pair-name string key pair name to use if default one generates conflicts
--authorize-access authorize CLI to create cloud resources
--auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found
--aws create node/s in AWS cloud
--aws-profile string aws profile to use (default "default")
--aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000)
--aws-volume-size int AWS volume size in GB (default 1000)
--aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125)
--aws-volume-type string AWS volume type (default "gp3")
--chain-config string path to the chain configuration for subnet
--custom-avalanchego-version string install given avalanchego version on node/s
--custom-subnet use a custom VM as the subnet virtual machine
--custom-vm-branch string custom vm branch or commit
--custom-vm-build-script string custom vm build-script
--custom-vm-repo-url string custom vm repository url
--default-validator-params use default weight/start/duration params for subnet validator
--deploy-icm-messenger deploy Interchain Messenger (default true)
--deploy-icm-registry deploy Interchain Registry (default true)
--deploy-teleporter-messenger deploy Interchain Messenger (default true)
--deploy-teleporter-registry deploy Interchain Registry (default true)
--enable-monitoring set up Prometheus monitoring for created nodes. Please note that this option creates a separate monitoring instance and incurs additional cost
--evm-chain-id uint chain ID to use with Subnet-EVM
--evm-defaults use default production settings with Subnet-EVM
--evm-production-defaults use default production settings for your blockchain
--evm-subnet use Subnet-EVM as the subnet virtual machine
--evm-test-defaults use default test settings for your blockchain
--evm-token string token name to use with Subnet-EVM
--evm-version string version of Subnet-EVM to use
--force-subnet-create overwrite the existing subnet configuration if one exists
--gcp create node/s in GCP cloud
--gcp-credentials string use given GCP credentials
--gcp-project string use given GCP project
--grafana-pkg string use grafana pkg instead of apt repo(by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb
-h, --help help for wiz
--icm generate an icm-ready vm
--icm-messenger-contract-address-path string path to an icm messenger contract address file
--icm-messenger-deployer-address-path string path to an icm messenger deployer address file
--icm-messenger-deployer-tx-path string path to an icm messenger deployer tx file
--icm-registry-bytecode-path string path to an icm registry bytecode file
--icm-version string icm version to deploy (default "latest")
--latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s
--latest-avalanchego-version install latest avalanchego release version on node/s
--latest-evm-version use latest Subnet-EVM released version
--latest-pre-released-evm-version use latest Subnet-EVM pre-released version
--node-config string path to avalanchego node configuration for subnet
--node-type string cloud instance type. Use 'default' to use recommended default instance type
--num-apis ints number of API nodes(nodes without stake) to create in the new Devnet
--num-validators ints number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag
--public-http-port allow public access to avalanchego HTTP port
--region strings create node/s in given region(s). Use comma to separate multiple regions
--relayer run AWM relayer when deploying the vm
--ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used.
--subnet-aliases strings additional subnet aliases to be used for RPC calls in addition to subnet blockchain name
--subnet-config string path to the subnet configuration for subnet
--subnet-genesis string file path of the subnet genesis
--teleporter generate an icm-ready vm
--teleporter-messenger-contract-address-path string path to an icm messenger contract address file
--teleporter-messenger-deployer-address-path string path to an icm messenger deployer address file
--teleporter-messenger-deployer-tx-path string path to an icm messenger deployer tx file
--teleporter-registry-bytecode-path string path to an icm registry bytecode file
--teleporter-version string icm version to deploy (default "latest")
--use-ssh-agent use ssh agent for ssh
--use-static-ip attach static Public IP on cloud servers (default true)
--validators strings deploy subnet into given comma separated list of validators. defaults to all cluster nodes
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
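As an illustration (the cluster and subnet names are placeholders, and the exact positional arguments may differ by CLI version):
```bash
avalanche node devnet wiz myCluster mySubnet \
  --aws \
  --region us-east-1 \
  --num-validators 5 \
  --evm-subnet \
  --evm-defaults
```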
### export
(ALPHA Warning) This command is currently in experimental mode.
The node export command exports cluster configuration and its nodes config to a text file.
If no file is specified, the configuration is printed to stdout.
Use --include-secrets to include keys in the export. In that case, keep the file secure, as it contains sensitive information.
An exported cluster configuration without secrets can be imported by another user using the node import command.
**Usage:**
```bash
avalanche node export [subcommand] [flags]
```
**Flags:**
```bash
--file string specify the file to export the cluster configuration to
--force overwrite the file if it exists
-h, --help help for export
--include-secrets include keys in the export
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
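For example, to write the configuration of a cluster named `myCluster` (a placeholder) to a file without secrets:
```bash
avalanche node export myCluster --file myCluster.txt
```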
### import
(ALPHA Warning) This command is currently in experimental mode.
The node import command imports cluster configuration and its nodes configuration from a text file
created from the node export command.
Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by
the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster.
Note that avalanche-cli considers the imported cluster EXTERNAL, so some commands
affecting cloud nodes, such as node create or node destroy, are not applicable to it.
**Usage:**
```bash
avalanche node import [subcommand] [flags]
```
**Flags:**
```bash
--file string specify the file to import the cluster configuration from
-h, --help help for import
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
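A sketch of an import, assuming the file was produced by the node export command (names are placeholders):
```bash
avalanche node import importedCluster --file myCluster.txt
```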
### list
(ALPHA Warning) This command is currently in experimental mode.
The node list command lists all clusters together with their nodes.
**Usage:**
```bash
avalanche node list [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for list
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### loadtest
(ALPHA Warning) This command is currently in experimental mode.
The node loadtest command suite starts and stops a load test for an existing devnet cluster.
**Usage:**
```bash
avalanche node loadtest [subcommand] [flags]
```
**Subcommands:**
* [`start`](#avalanche-node-loadtest-start): (ALPHA Warning) This command is currently in experimental mode.
The node loadtest command starts load testing for an existing devnet cluster. If the cluster does
not have an existing load test host, the command creates a separate cloud server and builds the load
test binary based on the provided load test Git Repo URL and load test binary build command.
The command will then run the load test binary based on the provided load test run command.
* [`stop`](#avalanche-node-loadtest-stop): (ALPHA Warning) This command is currently in experimental mode.
The node loadtest stop command stops load testing for an existing devnet cluster and terminates the
separate cloud server created to host the load test.
**Flags:**
```bash
-h, --help help for loadtest
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### loadtest start
(ALPHA Warning) This command is currently in experimental mode.
The node loadtest command starts load testing for an existing devnet cluster. If the cluster does
not have an existing load test host, the command creates a separate cloud server and builds the load
test binary based on the provided load test Git Repo URL and load test binary build command.
The command will then run the load test binary based on the provided load test run command.
**Usage:**
```bash
avalanche node loadtest start [subcommand] [flags]
```
**Flags:**
```bash
--authorize-access authorize CLI to create cloud resources
--aws create loadtest node in AWS cloud
--aws-profile string aws profile to use (default "default")
--gcp create loadtest in GCP cloud
-h, --help help for start
--load-test-branch string load test branch or commit
--load-test-build-cmd string command to build load test binary
--load-test-cmd string command to run load test
--load-test-repo string load test repo url to use
--node-type string cloud instance type for loadtest script
--region string create load test node in a given region
--ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used
--use-ssh-agent use ssh agent(ex: Yubikey) for ssh auth
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
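An illustrative invocation (the load test name, cluster name, repository URL, and commands are all hypothetical placeholders):
```bash
avalanche node loadtest start myLoadTest myCluster \
  --load-test-repo https://github.com/example/my-loadtest \
  --load-test-build-cmd "go build -o loadtest ." \
  --load-test-cmd "./loadtest"
```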
#### loadtest stop
(ALPHA Warning) This command is currently in experimental mode.
The node loadtest stop command stops load testing for an existing devnet cluster and terminates the
separate cloud server created to host the load test.
**Usage:**
```bash
avalanche node loadtest stop [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for stop
--load-test strings stop specified load test node(s). Use comma to separate multiple load test instance names
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### local
The node local command suite provides a collection of commands related to local nodes.
**Usage:**
```bash
avalanche node local [subcommand] [flags]
```
**Subcommands:**
* [`destroy`](#avalanche-node-local-destroy): Cleanup local node.
* [`start`](#avalanche-node-local-start): The node local start command creates Avalanche nodes on the local machine.
Once this command is completed, you will have to wait for the Avalanche node
to finish bootstrapping on the primary network before running further
commands on it, e.g. validating a Subnet.
You can check the bootstrapping status by running avalanche node status local.
* [`status`](#avalanche-node-local-status): Get status of local node.
* [`stop`](#avalanche-node-local-stop): Stop local node.
* [`track`](#avalanche-node-local-track): Track specified blockchain with local node
* [`validate`](#avalanche-node-local-validate): Use an Avalanche node set up on the local machine to validate a specified L1 by providing the
RPC URL of the L1.
This command can only be used to validate a Proof of Stake L1.
**Flags:**
```bash
-h, --help help for local
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### local destroy
Cleanup local node.
**Usage:**
```bash
avalanche node local destroy [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for destroy
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### local start
The node local start command creates Avalanche nodes on the local machine.
Once this command is completed, you will have to wait for the Avalanche node
to finish bootstrapping on the primary network before running further
commands on it, e.g. validating a Subnet.
You can check the bootstrapping status by running avalanche node status local.
**Usage:**
```bash
avalanche node local start [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-path string use this avalanchego binary path
--bootstrap-id stringArray nodeIDs of bootstrap nodes
--bootstrap-ip stringArray IP:port pairs of bootstrap nodes
--cluster string operate on the given cluster
--custom-avalanchego-version string install given avalanchego version on node/s
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis string path to genesis file
-h, --help help for start
--latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s (default true)
--latest-avalanchego-version install latest avalanchego release version on node/s
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-config string path to common avalanchego config settings for all nodes
--num-nodes uint32 number of Avalanche nodes to create on local machine (default 1)
--partial-sync primary network partial sync (default true)
--staking-cert-key-path string path to provided staking cert key for node
--staking-signer-key-path string path to provided staking signer key for node
--staking-tls-key-path string path to provided staking tls key for node
-t, --testnet fuji operate on testnet (alias to fuji)
--upgrade string path to upgrade file
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
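For example, to spin up two local nodes tracking Fuji (the node name `myLocalNode` is a placeholder):
```bash
avalanche node local start myLocalNode \
  --fuji \
  --num-nodes 2 \
  --latest-avalanchego-version
```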
#### local status
Get status of local node.
**Usage:**
```bash
avalanche node local status [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string specify the blockchain the node is syncing with
-h, --help help for status
--l1 string specify the blockchain the node is syncing with
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### local stop
Stop local node.
**Usage:**
```bash
avalanche node local stop [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for stop
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### local track
Track specified blockchain with local node
**Usage:**
```bash
avalanche node local track [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-path string use this avalanchego binary path
--custom-avalanchego-version string install given avalanchego version on node/s
-h, --help help for track
--latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s (default true)
--latest-avalanchego-version install latest avalanchego release version on node/s
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### local validate
Use an Avalanche node set up on the local machine to validate a specified L1 by providing the
RPC URL of the L1.
This command can only be used to validate a Proof of Stake L1.
**Usage:**
```bash
avalanche node local validate [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-log-level string log level to use with signature aggregator (default "Debug")
--aggregator-log-to-stdout use stdout for signature aggregator logs
--balance float amount of AVAX to increase validator's balance by
--blockchain string specify the blockchain the node is syncing with
--delegation-fee uint16 delegation fee (in bips) (default 100)
--disable-owner string P-Chain address that will be able to disable the validator with a P-Chain transaction
-h, --help help for validate
--l1 string specify the blockchain the node is syncing with
--minimum-stake-duration uint minimum stake duration (in seconds) (default 100)
--remaining-balance-owner string P-Chain address that will receive any leftover AVAX from the validator when it is removed from the Subnet
--rpc string connect to validator manager at the given rpc endpoint
--stake-amount uint amount of tokens to stake
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
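A sketch of a validate call (the node name, L1 name, and RPC URL are hypothetical):
```bash
avalanche node local validate myLocalNode \
  --l1 myL1 \
  --balance 1 \
  --stake-amount 100 \
  --rpc https://example.com/ext/bc/myL1/rpc
```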
### refresh-ips
(ALPHA Warning) This command is currently in experimental mode.
The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster,
and updates the local node information used by CLI commands.
**Usage:**
```bash
avalanche node refresh-ips [subcommand] [flags]
```
**Flags:**
```bash
--aws-profile string aws profile to use (default "default")
-h, --help help for refresh-ips
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### resize
(ALPHA Warning) This command is currently in experimental mode.
The node resize command can change the amount of CPU, memory and disk space available for the cluster nodes.
**Usage:**
```bash
avalanche node resize [subcommand] [flags]
```
**Flags:**
```bash
--aws-profile string aws profile to use (default "default")
--disk-size string Disk size to resize in Gb (e.g. 1000Gb)
-h, --help help for resize
--node-type string Node type to resize (e.g. t3.2xlarge)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
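For example, to move a cluster (the name is a placeholder) to a larger instance type and disk:
```bash
avalanche node resize myCluster \
  --node-type t3.2xlarge \
  --disk-size 2000Gb
```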
### scp
(ALPHA Warning) This command is currently in experimental mode.
The node scp command securely copies files to and from nodes. A remote source or destination can be specified using the following format:
\[clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for source files, e.g. /tmp/\*.txt.
File transfers to the nodes are parallelized. If the source or destination is a cluster, the other should be a local file path.
If both are remote, they must be nodes in the same cluster, not clusters themselves.
For example:
\$ avalanche node scp \[cluster1|node1]:/tmp/file.txt /tmp/file.txt
\$ avalanche node scp /tmp/file.txt \[cluster1|NodeID-XXXX]:/tmp/file.txt
\$ avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt
**Usage:**
```bash
avalanche node scp [subcommand] [flags]
```
**Flags:**
```bash
--compress use compression for ssh
-h, --help help for scp
--recursive copy directories recursively
--with-loadtest include loadtest node for scp cluster operations
--with-monitor include monitoring node for scp cluster operations
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### ssh
(ALPHA Warning) This command is currently in experimental mode.
The node ssh command executes a given command \[cmd] using ssh on all nodes in the cluster if a cluster name is given.
If no command is given, it just prints the ssh command to be used to connect to each node in the cluster.
For a provided NodeID, InstanceID, or IP, the command \[cmd] will be executed on that node.
If no \[cmd] is provided for the node, it will open an ssh shell there.
**Usage:**
```bash
avalanche node ssh [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for ssh
--parallel run ssh command on all nodes in parallel
--with-loadtest include loadtest node for ssh cluster operations
--with-monitor include monitoring node for ssh cluster operations
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### status
(ALPHA Warning) This command is currently in experimental mode.
The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network.
If no cluster is given, defaults to node list behaviour.
To get the bootstrap status of a node with a Blockchain, use the --blockchain flag.
**Usage:**
```bash
avalanche node status [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string specify the blockchain the node is syncing with
-h, --help help for status
--subnet string specify the blockchain the node is syncing with
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
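For example (the cluster and blockchain names are placeholders):
```bash
avalanche node status myCluster --blockchain myBlockchain
```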
### sync
(ALPHA Warning) This command is currently in experimental mode.
The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain.
You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName`
**Usage:**
```bash
avalanche node sync [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for sync
--no-checks do not check for bootstrapped/healthy status or rpc compatibility of nodes against subnet
--subnet-aliases strings subnet alias to be used for RPC calls. defaults to subnet blockchain ID
--validators strings sync subnet into given comma separated list of validators. defaults to all cluster nodes
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
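An illustrative sync call (names are placeholders; check `avalanche node sync --help` for the exact positional arguments your CLI version expects):
```bash
avalanche node sync myCluster myBlockchain \
  --validators node1,node2
```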
### update
(ALPHA Warning) This command is currently in experimental mode.
The node update command suite provides a collection of commands for nodes to update
their avalanchego or VM config.
You can check the status after update by calling avalanche node status
**Usage:**
```bash
avalanche node update [subcommand] [flags]
```
**Subcommands:**
* [`subnet`](#avalanche-node-update-subnet): (ALPHA Warning) This command is currently in experimental mode.
The node update subnet command updates all nodes in a cluster with latest Subnet configuration and VM for custom VM.
You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName`
**Flags:**
```bash
-h, --help help for update
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### update subnet
(ALPHA Warning) This command is currently in experimental mode.
The node update subnet command updates all nodes in a cluster with latest Subnet configuration and VM for custom VM.
You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName`
**Usage:**
```bash
avalanche node update subnet [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for subnet
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### upgrade
(ALPHA Warning) This command is currently in experimental mode.
The node upgrade command suite provides a collection of commands for nodes to upgrade
their avalanchego or VM version.
You can check the status after upgrade by calling avalanche node status
**Usage:**
```bash
avalanche node upgrade [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for upgrade
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### validate
(ALPHA Warning) This command is currently in experimental mode.
The node validate command suite provides a collection of commands for nodes to join
the Primary Network and Subnets as validators.
If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command
will fail. You can check the bootstrap status by calling avalanche node status `clusterName`
**Usage:**
```bash
avalanche node validate [subcommand] [flags]
```
**Subcommands:**
* [`primary`](#avalanche-node-validate-primary): (ALPHA Warning) This command is currently in experimental mode.
The node validate primary command enables all nodes in a cluster to be validators of Primary
Network.
* [`subnet`](#avalanche-node-validate-subnet): (ALPHA Warning) This command is currently in experimental mode.
The node validate subnet command enables all nodes in a cluster to be validators of a Subnet.
If the command is run before the nodes are Primary Network validators, the command will first
make the nodes Primary Network validators before making them Subnet validators.
If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail.
You can check the bootstrap status by calling avalanche node status `clusterName`
If the command is run before the nodes are synced to the Subnet, the command will fail.
You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName`
**Flags:**
```bash
-h, --help help for validate
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### validate primary
(ALPHA Warning) This command is currently in experimental mode.
The node validate primary command enables all nodes in a cluster to be validators of Primary
Network.
**Usage:**
```bash
avalanche node validate primary [subcommand] [flags]
```
**Flags:**
```bash
-e, --ewoq use ewoq key [fuji/devnet only]
-h, --help help for primary
-k, --key string select the key to use [fuji only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
--stake-amount uint how many AVAX to stake in the validator
--staking-period duration how long validator validates for after start time
--start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### validate subnet
(ALPHA Warning) This command is currently in experimental mode.
The node validate subnet command enables all nodes in a cluster to be validators of a Subnet.
If the command is run before the nodes are Primary Network validators, the command will first
make the nodes Primary Network validators before making them Subnet validators.
If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail.
You can check the bootstrap status by calling avalanche node status `clusterName`
If the command is run before the nodes are synced to the Subnet, the command will fail.
You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName`
**Usage:**
```bash
avalanche node validate subnet [subcommand] [flags]
```
**Flags:**
```bash
--default-validator-params use default weight/start/duration params for subnet validator
-e, --ewoq use ewoq key [fuji/devnet only]
-h, --help help for subnet
-k, --key string select the key to use [fuji/devnet only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
--no-checks do not check for bootstrapped status or healthy status
--no-validation-checks do not check if subnet is already synced or validated (default true)
--stake-amount uint how many AVAX to stake in the validator
--staking-period duration how long validator validates for after start time
--start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
--validators strings validate subnet for the given comma separated list of validators. defaults to all cluster nodes
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
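For example, assuming a cluster named `myCluster` and a Subnet named `mySubnet` (both names, and the positional-argument order, are illustrative), the cluster's nodes could be enrolled as Subnet validators with default parameters:

```shell
# Hypothetical cluster/Subnet names; --default-validator-params skips the
# interactive weight/start/duration prompts
avalanche node validate subnet myCluster mySubnet --default-validator-params
```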
### whitelist
(ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster.
If the --ip parameter is provided, the command adds that IP to the cloud security access rules, allowing it to reach all nodes in the cluster via SSH or HTTP.
If the --ssh parameter is provided, the command also adds the SSH public key to all nodes in the cluster.
If no parameters are provided, it detects the current user's IP automatically and whitelists it.
**Usage:**
```bash
avalanche node whitelist [subcommand] [flags]
```
**Flags:**
```bash
-y, --current-ip whitelist current host ip
-h, --help help for whitelist
--ip string ip address to whitelist
--ssh string ssh public key to whitelist
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
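A few sketch invocations, assuming a cluster named `myCluster` (the cluster name, IP, and key values are placeholders):

```shell
# Allow a specific IP through the cloud security access rules
avalanche node whitelist myCluster --ip 203.0.113.10

# Add an SSH public key to every node in the cluster
avalanche node whitelist myCluster --ssh "ssh-ed25519 AAAA... user@host"

# Detect and whitelist the current user's IP
avalanche node whitelist myCluster --current-ip
```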
## avalanche primary
The primary command suite provides a collection of tools for interacting with the
Primary Network
**Usage:**
```bash
avalanche primary [subcommand] [flags]
```
**Subcommands:**
* [`addValidator`](#avalanche-primary-addvalidator): The primary addValidator command adds a node as a validator
in the Primary Network
* [`describe`](#avalanche-primary-describe): The primary describe command prints details of the Primary Network configuration to the console.
**Flags:**
```bash
-h, --help help for primary
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### addValidator
The primary addValidator command adds a node as a validator
in the Primary Network
**Usage:**
```bash
avalanche primary addValidator [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--delegation-fee uint32 set the delegation fee (20 000 is equivalent to 2%)
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for addValidator
-k, --key string select the key to use [fuji only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings use the given ledger addresses
-m, --mainnet operate on mainnet
--nodeID string set the NodeID of the validator to add
--proof-of-possession string set the BLS proof of possession of the validator to add
--public-key string set the BLS public key of the validator to add
--staking-period duration how long this validator will be staking
--start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
-t, --testnet fuji operate on testnet (alias to fuji)
--weight uint set the staking weight of the validator to add
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
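As a sketch, a Fuji validator registration might look like the following (the key name, node ID, BLS values, and staking parameters are placeholders; on mainnet a Ledger is used instead of a stored key):

```shell
avalanche primary addValidator \
  --fuji \
  --key mytestkey \
  --nodeID NodeID-<your-node-id> \
  --public-key <bls-public-key> \
  --proof-of-possession <bls-proof-of-possession> \
  --weight 20 \
  --staking-period 336h
```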
### describe
The primary describe command prints details of the Primary Network configuration to the console.
**Usage:**
```bash
avalanche primary describe [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
-h, --help help for describe
-l, --local operate on a local network
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche transaction
The transaction command suite provides all of the utilities required to sign multisig transactions.
**Usage:**
```bash
avalanche transaction [subcommand] [flags]
```
**Subcommands:**
* [`commit`](#avalanche-transaction-commit): The transaction commit command commits a transaction by submitting it to the P-Chain.
* [`sign`](#avalanche-transaction-sign): The transaction sign command signs a multisig transaction.
**Flags:**
```bash
-h, --help help for transaction
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### commit
The transaction commit command commits a transaction by submitting it to the P-Chain.
**Usage:**
```bash
avalanche transaction commit [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for commit
--input-tx-filepath string Path to the transaction signed by all signatories
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### sign
The transaction sign command signs a multisig transaction.
**Usage:**
```bash
avalanche transaction sign [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for sign
--input-tx-filepath string Path to the transaction file for signing
-k, --key string select the key to use [fuji only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings use the given ledger addresses
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
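sign and commit are typically paired in a multisig flow: each signatory signs the transaction file in turn, and once all signatures are present, any party submits it. A sketch (the file path and key name are placeholders):

```shell
# Each signatory adds their signature to the transaction file
avalanche transaction sign --input-tx-filepath ./partial-tx.json --key signer1

# Once fully signed, submit the transaction to the P-Chain
avalanche transaction commit --input-tx-filepath ./partial-tx.json
```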
## avalanche update
Check if an update is available, and prompt the user to install it
**Usage:**
```bash
avalanche update [subcommand] [flags]
```
**Flags:**
```bash
-c, --confirm Assume yes for installation
-h, --help help for update
-v, --version version for update
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche validator
The validator command suite provides a collection of tools for managing validator
balances on the P-Chain.
A validator's balance is used to pay the continuous fee to the P-Chain. When this balance reaches 0,
the validator is considered inactive and no longer participates in validating the L1.
**Usage:**
```bash
avalanche validator [subcommand] [flags]
```
**Subcommands:**
* [`getBalance`](#avalanche-validator-getbalance): This command gets the remaining validator P-Chain balance that is available to pay
the P-Chain continuous fee
* [`increaseBalance`](#avalanche-validator-increasebalance): This command increases the validator P-Chain balance
* [`list`](#avalanche-validator-list): This command gets a list of the validators of the L1
**Flags:**
```bash
-h, --help help for validator
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### getBalance
This command gets the remaining validator P-Chain balance that is available to pay
the P-Chain continuous fee
**Usage:**
```bash
avalanche validator getBalance [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for getBalance
--l1 string name of L1
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-id string node ID of the validator
-t, --testnet fuji operate on testnet (alias to fuji)
--validation-id string validation ID of the validator
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### increaseBalance
This command increases the validator P-Chain balance
**Usage:**
```bash
avalanche validator increaseBalance [subcommand] [flags]
```
**Flags:**
```bash
--balance float amount of AVAX to increase validator's balance by
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for increaseBalance
-k, --key string select the key to use [fuji/devnet deploy only]
--l1 string name of L1 (to increase balance of bootstrap validators only)
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-id string node ID of the validator
-t, --testnet fuji operate on testnet (alias to fuji)
--validation-id string validation ID of the validator
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
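For example, checking and topping up a Fuji L1 validator's continuous-fee balance might look like this (the L1 name, node ID, and amount are placeholders):

```shell
# Check the remaining P-Chain balance
avalanche validator getBalance --fuji --l1 myL1 --node-id NodeID-<your-node-id>

# Add 1.5 AVAX so the validator stays active
avalanche validator increaseBalance --fuji --l1 myL1 --node-id NodeID-<your-node-id> --balance 1.5
```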
### list
This command gets a list of the validators of the L1
**Usage:**
```bash
avalanche validator list [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for list
-l, --local operate on a local network
-m, --mainnet operate on mainnet
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
# Create Avalanche L1
URL: /docs/tooling/create-avalanche-l1
This page demonstrates how to create an Avalanche L1 using Avalanche-CLI.
This tutorial walks you through the process of using Avalanche-CLI to create an Avalanche L1, deploy it to a local network, and connect to it with Core wallet.
The first step of learning Avalanche L1 development is learning to use [Avalanche-CLI](https://github.com/ava-labs/avalanche-cli).
## Installation
The fastest way to install the latest Avalanche-CLI binary is by running the install script:
```bash
curl -sSfL https://raw.githubusercontent.com/ava-labs/avalanche-cli/main/scripts/install.sh | sh -s
```
The binary installs inside the `~/bin` directory. If the directory doesn't exist, it will be created.
You can run all of the commands in this tutorial by calling `~/bin/avalanche`.
You can also add the command to your system path by running:
```bash
export PATH=~/bin:$PATH
```
To make this change permanent, add this line to your shell’s initialization file (e.g., `~/.bashrc` or `~/.zshrc`). For example:
```bash
echo 'export PATH=~/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
```
Once you add it to your path, you should be able to call the program anywhere with just: `avalanche`
For more detailed installation instructions, see [Avalanche-CLI Installation](/docs/tooling/get-avalanche-cli).
## Create Your Avalanche L1 Configuration
This tutorial teaches you how to create an Ethereum Virtual Machine (EVM) based Avalanche L1. To do so, you use Subnet-EVM, Avalanche's L1 fork of the EVM. It supports airdrops, custom fee tokens, configurable gas parameters, and multiple stateful precompiles. To learn more, take a look at [Subnet-EVM](https://github.com/ava-labs/subnet-evm). The goal of your first command is to create a Subnet-EVM configuration.
The `avalanche-cli` command suite provides a collection of tools for developing and deploying Avalanche L1s.
The Creation Wizard walks you through the process of creating your Avalanche L1. To get started, first pick a name for your Avalanche L1. This tutorial uses `myblockchain`, but feel free to substitute that with any name you like. Once you've picked your name, run:
```bash
avalanche blockchain create myblockchain
```
The following sections walk through each question in the wizard.
### Choose Your VM
```bash
? Which Virtual Machine would you like to use?:
▸ Subnet-EVM
Custom VM
Explain the difference
```
Select `Subnet-EVM`.
### Choose Validator Manager
```text
? Which validator management type would you like to use in your blockchain?:
▸ Proof Of Authority
Proof Of Stake
Explain the difference
```
Select `Proof Of Authority`.
```text
? Which address do you want to enable as controller of ValidatorManager contract?:
▸ Get address from an existing stored key (created from avalanche key create or avalanche key import)
Custom
```
Select `Get address from an existing stored key`.
```text
? Which stored key should be used enable as controller of ValidatorManager contract?:
▸ ewoq
cli-awm-relayer
cli-teleporter-deployer
```
Select `ewoq`.
This key is used to manage (add/remove) the validator set.
Do not use EWOQ key in a testnet or production setup. The EWOQ private key is publicly exposed.
To learn more about different validator management types, see [PoA vs PoS](/docs/avalanche-l1s/validator-manager/poa-vs-pos).
### Choose Blockchain Configuration
```text
? Do you want to use default values for the Blockchain configuration?:
▸ I want to use defaults for a test environment
I want to use defaults for a production environment
I don't want to use default values
Explain the difference
```
Select `I want to use defaults for a test environment`.
This will automatically set up the configuration for a test environment, including an airdrop to the EWOQ key and Avalanche ICM.
### Enter Your Avalanche L1's ChainID
```text
✗ Chain ID:
```
Choose a positive integer for your EVM-style ChainID.
In production environments, this ChainID needs to be unique and not shared with any other chain. You can visit [chainlist](https://chainlist.org/) to verify that your selection is unique. Because this is a development Avalanche L1, feel free to pick any number. Stay away from well-known ChainIDs such as 1 (Ethereum) or 43114 (Avalanche C-Chain) as those may cause issues with other tools.
### Token Symbol
```text
✗ Token Symbol:
```
Enter a string to name your Avalanche L1's native token. The token symbol doesn't necessarily need to be unique. Example token symbols are AVAX, JOE, and BTC.
### Wrapping Up
If all worked successfully, the command prints:
```bash
✓ Successfully created blockchain configuration
```
To view the Genesis configuration, use the following command:
```bash
avalanche blockchain describe myblockchain --genesis
```
You've successfully created your first Avalanche L1 configuration. Now it's time to deploy it.
# Installation
URL: /docs/tooling/get-avalanche-cli
Instructions for installing and setting up the Avalanche-CLI.
## Compatibility
Avalanche-CLI runs on Linux and Mac. Windows is currently not supported.
## Instructions
To download a binary for the latest release, run:
```bash
curl -sSfL https://raw.githubusercontent.com/ava-labs/avalanche-cli/main/scripts/install.sh | sh -s
```
The script installs the binary inside the `~/bin` directory. If the directory doesn't exist, it will be created.
## Adding Avalanche-CLI to Your PATH
To call the `avalanche` binary from anywhere, you'll need to add it to your system path. If you installed the binary into the default location, you can run the following snippet to add it to your path.
To add it to your path permanently, add an export command to your shell initialization script. If you run `bash`, use `.bashrc`. If you run `zsh`, use `.zshrc`.
For example:
```bash
echo 'export PATH=~/bin:$PATH' >> ~/.bashrc
```
## Checking Your Installation
You can test your installation by running `avalanche --version`. The tool should print the running version.
## Updating
To update your installation, you need to delete your current binary and download the latest version using the preceding steps.
## Building from Source
The source code is available in this [GitHub repository](https://github.com/ava-labs/avalanche-cli).
After you've cloned the repository, checkout the tag you'd like to run. You can compile the code by running `./scripts/build.sh` from the top level directory.
The build script names the binary `./bin/avalanche`.
# Glacier API
URL: /docs/tooling/glacier-api
The Glacier API is a performant API that allows web3 developers to more easily access the indexed blockchain data they need to build powerful applications on top of Avalanche's Primary Network and Avalanche L1s, as well as Ethereum.
If you'd like increased rate limits for accessing the Glacier API, please visit [AvaCloud portal](https://avacloud.io/).
## Benefits
By leveraging Glacier, developers can:
* Retrieve native and ERC-20 token balances and associated pricing information
* Get details related to blocks, transactions, and UTXOs
* Retrieve digital collectible (ERC-721/1155) balances and metadata
* Get native asset and token transfer history
## API
API documentation and more information about accessing Glacier can be found [here](https://glacier.docs.avacloud.io/).
* For feedback or feature requests, please submit them [here](https://forms.gle/gTEoZ2XtRtx4TRSw6).
* Bug reports can be submitted [here](https://docs.google.com/forms/d/e/1FAIpQLSeJQrcp7QoNiqozMDKrVJGX5wpU827d3cVTgF8qa7t_J1Pb-g/viewform)
# Overview
URL: /docs/tooling
Documentation for different toolings available in the Avalanche ecosystem.
# Indexers
URL: /docs/tooling/indexers
Indexer solutions for Avalanche ecosystem.
There are several indexer solutions available, each offering different levels of decentralisation, ease of development, and performance for you to consider. These solutions serve as intermediaries to assist in indexing the Avalanche network.
Provided for informational purposes only, without representation, warranty or
guarantee of any kind. None of this is an endorsement by the Avalanche
Foundation Limited, Ava Labs, Inc. or any of their respective subsidiaries or
affiliates, nor is any of this investment or financial advice. Please review
this
[Notice](https://assets.website-files.com/6059b554e81c705f9dd2dd32/60ec9590f189c16edaa086d4_Important%20Notice%20-%20avax.network.pdf)
and conduct your own research to properly evaluate the risks and benefits of
any project.
## Community Providers
### thirdweb
[thirdweb Insight](https://insight-api.thirdweb.com/guide/getting-started) is a fast, reliable and fully customizable way for developers to index, transform & query onchain data. It allows developers to retrieve blockchain data from any EVM chain, enrich it with metadata, transform it with custom logic, and then query the transformed data using REST endpoints. Developers can also define custom API schemas, or blueprints, without the need for ABIs, decoding, RPC, or web3 knowledge to fetch blockchain data.
With Insight, there's no need to learn the subgraph framework or deploy your own infrastructure. You just call the API and get the data you need.
[Sign up for a free thirdweb account](https://thirdweb.com/) to start indexing, and visit the [thirdweb Insight documentation](https://insight-api.thirdweb.com/guide/getting-started) to learn more.
### SubQuery
SubQuery is a leading blockchain data indexer that provides developers with fast, flexible, universal, open source and decentralised APIs for web3 projects. One of SubQuery's competitive advantages is the ability to aggregate data not only within a chain but across multiple blockchains, all within a single project.
**Useful resources**:
* [SubQuery Docs](https://academy.subquery.network/)
* [Intro Quick Start Guide](https://academy.subquery.network/quickstart/quickstart.html)
* [Avalanche Quickstart](https://academy.subquery.network/quickstart/quickstart_chains/avalanche.html)
* [Mainnet Starter Project](https://github.com/subquery/ethereum-subql-starter/tree/main/Avalanche/avalanche-starter)
* [Fuji Starter Project](https://github.com/subquery/ethereum-subql-starter/tree/main/Avalanche/avalanche-fuji-starter)
### Flair
[Flair](https://flair.dev) provides real-time and historical custom data indexing for any EVM-compatible chain.
It offers reusable **indexing primitives** (such as fault-tolerant RPC ingestors, custom processors and aggregations, re-org aware database integrations) to make it easy to receive, transform, store and access your on-chain data.
To get started, visit the [documentation](https://docs.flair.dev) or clone the [starter boilerplate](https://github.com/flair-sdk/starter-boilerplate) template and follow the instructions.
### Envio
[Envio](https://envio.dev) is a full-featured data indexing solution that provides application developers with a seamless and efficient way to index and aggregate real-time and historical blockchain data for any EVM.
Envio supports [HyperSync](https://docs.envio.dev/docs/hypersync) on Avalanche. HyperSync is a real-time indexed layer of the Avalanche blockchain, providing accelerated APIs (JSON-RPC bypass) for the hyper-speed syncing of Avalanche data. Developers do not need to worry about RPC URLs, rate-limiting, or managing infrastructure, and can easily sync large datasets in a few minutes, something that would usually take 100x longer via standard RPC.
To get started, visit the [documentation](https://docs.envio.dev/docs/getting-started) or follow the [quickstart](https://docs.envio.dev/docs/contract-import) guide.
### DipDup
[DipDup](https://dipdup.io) is a Python framework for building smart contract indexers. It helps developers focus on business logic instead of writing boilerplate to store and serve data. DipDup-based indexers are selective, meaning only the required data is requested. This approach allows for faster indexing times and a decreased load on underlying APIs.
To get started, visit the [documentation](https://dipdup.io/docs/supported-networks/avalanche) or follow the [quickstart](https://dipdup.io/docs/quickstart-evm) guide.
### Space and Time
[Space and Time](https://spaceandtime.io) is the blockchain for ZK-proven data. It provides data indexing services for all major chains including Bitcoin, Ethereum, ZKsync, Polygon, Avalanche etc.
Space and Time offers a way to query indexed blockchain data from major blockchains like Bitcoin, Ethereum, Base etc., giving smart contracts a way to ask questions about onchain and offchain activity, in a trustless way.
To sign up, visit the [Space and Time documentation](https://app.spaceandtime.ai).
# Metrics API
URL: /docs/tooling/metrics-api
Power your analytics with Avalanche Metrics API such as Avalanche L1 usage, staking operations, and more.
See [https://metrics.avax.network/](https://metrics.avax.network/) for the API documentation.
# RPC Providers
URL: /docs/tooling/rpc-providers
RPC Providers in Avalanche ecosystem.
There are multiple RPC providers from which you can choose. These providers work as
intermediaries to help you interact with the Avalanche network. You'll experience different latency
levels depending on each provider's configuration. You can potentially use multiple providers for
redundancy and load balancing.
## Mainnet RPC - Public API Server
There is a public API server that allows developers to access the Avalanche
network without having to run a node themselves. The public API server is
actually several [AvalancheGo](https://github.com/ava-labs/avalanchego) nodes
behind a load balancer to ensure high availability and high request throughput.
### Using the Public API Nodes
The public API server is at `api.avax.network` for Avalanche Mainnet and
`api.avax-test.network` for Avalanche Fuji Testnet. To access a particular API,
just append the relevant API endpoint, as documented
[here](/docs/api-reference/guides/issuing-api-calls). Namely, use the following end points for
each chain respectively:
#### HTTP
* For C-Chain API, the URL is `https://api.avax.network/ext/bc/C/rpc`.
* For X-Chain API, the URL is `https://api.avax.network/ext/bc/X`.
* For P-Chain API, the URL is `https://api.avax.network/ext/bc/P`.
Note: on Fuji Testnet, use `https://api.avax-test.network/` instead of `https://api.avax.network/`.
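As a quick sanity check, you can issue a standard EVM JSON-RPC call against the C-Chain endpoint; `eth_chainId` should return `0xa86a` (43114) on Mainnet:

```shell
curl -X POST --data '{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_chainId",
  "params": []
}' -H 'content-type:application/json' https://api.avax.network/ext/bc/C/rpc
```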
#### WebSocket
* For C-Chain API, the URL is `wss://api.avax.network/ext/bc/C/ws`.
Note: on Fuji Testnet, the URL is `wss://api.avax-test.network/ext/bc/C/ws`.
#### Supported APIs
The public API server supports all the API endpoints that make sense to be
available on a public-facing service, including APIs for the
[X-Chain](/docs/api-reference/x-chain/api), [P-Chain](/docs/api-reference/p-chain/api),
[C-Chain](/docs/api-reference/c-chain/api), and full archival for the Primary Network.
However, it doesn't support [Index APIs](/docs/api-reference/index-api), which includes
the X-Chain API's `getAddressTxs` method.
For a full list of available APIs see [here](/docs/api-reference/p-chain/api).
#### Limitations
The public API only supports C-Chain WebSocket API calls for API methods that
don't exist on the C-Chain's HTTP API.
For batched C-Chain requests on the public API node, the maximum number of items
is 40. We're working on supporting a larger batch size.
The maximum number of blocks to serve per `getLogs` request is 2048, which is set by [`api-max-blocks-per-request`](/docs/nodes/chain-configs/c-chain#api-max-blocks-per-request).
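In practice this means paginating `eth_getLogs` queries into block windows of at most 2048 blocks. For example (the block numbers are illustrative; here the window spans roughly 1,500 blocks, safely under the cap):

```shell
# Query logs for a bounded block range on the C-Chain
curl -X POST --data '{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_getLogs",
  "params": [{"fromBlock": "0x2C97A00", "toBlock": "0x2C97FFF"}]
}' -H 'content-type:application/json' https://api.avax.network/ext/bc/C/rpc
```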
#### Sticky Sessions
Requests to the public API server API are distributed by a load balancer to an
individual node. As a result, consecutive requests may go to different nodes.
That can cause issues for some use cases. For example, one node may think a
given transaction is accepted, while for another node the transaction is still
processing. To work around this, you can use 'sticky sessions', as documented
[here](https://developer.mozilla.org/en-US/docs/Web/API/Request/credentials).
This allows consecutive API calls to be routed to the same node.
If you're using [AvalancheJS](/docs/tooling/avalanche-js) to access the public
API, simply set the following in your code:
```js
avalanche.setRequestConfig("withCredentials", true);
```
#### Availability
Usage of public API nodes is free and available to everyone without any
authentication or authorization. Rate limiting is present, but many of the API
calls are cached, and the rate limits are quite high. If your app is
running up against the limits, please [contact us](https://chat.avalabs.org) or
try using a community RPC provider.
#### Support
If you have questions, problems, or suggestions, join the official [Avalanche Discord](https://chat.avalabs.org/).
## Community Providers
Provided for informational purposes only, without representation, warranty or
guarantee of any kind. None of this is an endorsement by the Avalanche
Foundation Limited, Ava Labs, Inc. or any of their respective subsidiaries or
affiliates, nor is any of this investment or financial advice.
Please review this [Notice](https://assets.website-files.com/6059b554e81c705f9dd2dd32/60ec9590f189c16edaa086d4_Important%20Notice%20-%20avax.network.pdf)
and conduct your own research to properly evaluate the risks and benefits of any project.
### Allnodes
[Allnodes](https://avalanche.publicnode.com) supports the C-Chain, X-Chain, and P-Chain.
Features:
* Free
* Privacy oriented
* Globally distributed infrastructure on Allnodes
* Optimized for speed and reliability - check the Allnodes page for stats
#### Mainnet
##### HTTP
* For C-Chain RPC Endpoint, the URL is `https://avalanche-c-chain-rpc.publicnode.com`
* For X-Chain RPC Endpoint, the URL is `https://avalanche-x-chain-rpc.publicnode.com`
* For P-Chain RPC Endpoint, the URL is `https://avalanche-p-chain-rpc.publicnode.com`
##### Websockets
* For C-Chain WSS Endpoint, the URL is `wss://avalanche-c-chain-rpc.publicnode.com`
#### Testnet (Fuji)
##### HTTP
* For C-Chain RPC Endpoint, the URL is `https://avalanche-fuji-c-chain-rpc.publicnode.com`
* For X-Chain RPC Endpoint, the URL is `https://avalanche-fuji-x-chain-rpc.publicnode.com`
* For P-Chain RPC Endpoint, the URL is `https://avalanche-fuji-p-chain-rpc.publicnode.com`
##### Websockets
* For C-Chain WSS Endpoint, the URL is `wss://avalanche-fuji-c-chain-rpc.publicnode.com`
### ANKR
#### Mainnet
* Standard EVM API, the URL is `https://rpc.ankr.com/avalanche`.
* For C-Chain API, the URL is `https://rpc.ankr.com/avalanche-c`. Note that on ANKR this endpoint does not serve standard EVM methods; use the Standard EVM API above for those.
* For X-Chain API, the URL is `https://rpc.ankr.com/avalanche-x`.
* For P-Chain API, the URL is `https://rpc.ankr.com/avalanche-p`.
#### Testnet (Fuji)
* Standard EVM API, the URL is `https://rpc.ankr.com/avalanche_fuji`.
* For C-Chain API, the URL is `https://rpc.ankr.com/avalanche_fuji-c`. Note that on ANKR this endpoint does not serve standard EVM methods; use the Standard EVM API above for those.
* For X-Chain API, the URL is `https://rpc.ankr.com/avalanche_fuji-x`.
* For P-Chain API, the URL is `https://rpc.ankr.com/avalanche_fuji-p`.
Features:
* Archive Data Included.
* Automatic geo-routing across North America, Europe, and Asia.
Note: soft-limited to 1 million daily requests per IP or referring domain. Batch calls are limited to 1000.
### All That Node
[All That Node](https://www.allthatnode.com/protocol/avalanche.dsrv) supports the C-Chain, X-Chain, and P-Chain.
Features:
* Free plan available
* Archive node support
* Globally distributed infrastructure
#### Mainnet
##### HTTP (Full)
* For C-Chain RPC Endpoint, the URL is `https://avalanche-mainnet.g.allthatnode.com/full/evm//ext/bc/C/rpc`
* For X-Chain RPC Endpoint, the URL is `https://avalanche-mainnet.g.allthatnode.com/full/evm//ext/bc/X`
* For P-Chain RPC Endpoint, the URL is `https://avalanche-mainnet.g.allthatnode.com/full/evm//ext/bc/P`
##### HTTP (Archive)
* For C-Chain RPC Endpoint, the URL is `https://avalanche-mainnet.g.allthatnode.com/archive/evm//ext/bc/C/rpc`
* For X-Chain RPC Endpoint, the URL is `https://avalanche-mainnet.g.allthatnode.com/archive/evm//ext/bc/X`
* For P-Chain RPC Endpoint, the URL is `https://avalanche-mainnet.g.allthatnode.com/archive/evm//ext/bc/P`
##### Websocket (Full)
* For C-Chain RPC Endpoint, the URL is `wss://avalanche-mainnet.g.allthatnode.com/full/evm/`
##### Websocket (Archive)
* For C-Chain RPC Endpoint, the URL is `wss://avalanche-mainnet.g.allthatnode.com/archive/evm/`
#### Testnet (Fuji)
##### HTTP (Full)
* For C-Chain RPC Endpoint, the URL is `https://avalanche-fuji.g.allthatnode.com/full/evm//ext/bc/C/rpc`
* For X-Chain RPC Endpoint, the URL is `https://avalanche-fuji.g.allthatnode.com/full/evm//ext/bc/X`
* For P-Chain RPC Endpoint, the URL is `https://avalanche-fuji.g.allthatnode.com/full/evm//ext/bc/P`
##### HTTP (Archive)
* For C-Chain RPC Endpoint, the URL is `https://avalanche-fuji.g.allthatnode.com/archive/evm//ext/bc/C/rpc`
* For X-Chain RPC Endpoint, the URL is `https://avalanche-fuji.g.allthatnode.com/archive/evm//ext/bc/X`
* For P-Chain RPC Endpoint, the URL is `https://avalanche-fuji.g.allthatnode.com/archive/evm//ext/bc/P`
##### Websocket (Full)
* For C-Chain RPC Endpoint, the URL is `wss://avalanche-fuji.g.allthatnode.com/full/evm/`
##### Websocket (Archive)
* For C-Chain RPC Endpoint, the URL is `wss://avalanche-fuji.g.allthatnode.com/archive/evm/`
### Blast
[Blast](https://blastapi.io/public-api/avalanche) supports the C-Chain, X-Chain, and P-Chain.
#### Mainnet
##### HTTP
* For C-Chain RPC Endpoint ETH, the URL is `https://ava-mainnet.public.blastapi.io/ext/bc/C/rpc`
* For C-Chain RPC Endpoint AVAX, the URL is `https://ava-mainnet.public.blastapi.io/ext/bc/C/avax`
* For X-Chain RPC Endpoint, the URL is `https://ava-mainnet.public.blastapi.io/ext/bc/X`
* For P-Chain RPC Endpoint, the URL is `https://ava-mainnet.public.blastapi.io/ext/P`
##### Websockets
* For C-Chain WSS Endpoint, the URL is `wss://ava-mainnet.public.blastapi.io/ext/bc/C/ws`
#### Testnet (Fuji)
##### HTTP
* For C-Chain RPC Endpoint ETH, the URL is `https://ava-testnet.public.blastapi.io/ext/bc/C/rpc`
* For C-Chain RPC Endpoint AVAX, the URL is `https://ava-testnet.public.blastapi.io/ext/bc/C/avax`
* For X-Chain RPC Endpoint, the URL is `https://ava-testnet.public.blastapi.io/ext/bc/X`
* For P-Chain RPC Endpoint, the URL is `https://ava-testnet.public.blastapi.io/ext/P`
##### Websockets
* For C-Chain WSS Endpoint, the URL is `wss://ava-testnet.public.blastapi.io/ext/bc/C/ws`
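The X-Chain and P-Chain endpoints serve AvalancheGo's native JSON-RPC APIs rather than the EVM API. For example, `platform.getHeight` returns the height of the last accepted P-Chain block when issued against the Mainnet P-Chain URL above:

```shell
curl -X POST --data '{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "platform.getHeight",
  "params": {}
}' -H 'content-type:application/json' https://ava-mainnet.public.blastapi.io/ext/P
```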
### Chainstack
[Chainstack](https://chainstack.com/build-better-with-avalanche/) supports the
C-Chain, X-Chain, P-Chain, and the Fuji Testnet.
Features:
* Globally distributed infrastructure for optimal performance.
* Crypto payments natively.
* 24/7 customer support.
#### Mainnet
##### HTTP
* For C-Chain API, the regional elastic node URL is `https://nd-123-145-789.p2pify.com/API_KEY/ext/bc/C/rpc`, and the global elastic node URL is `https://avalanche-mainnet.core.chainstack.com/ext/bc/C/rpc/API_KEY`
* For X-Chain API, the regional elastic node URL is `https://nd-123-145-789.p2pify.com/API_KEY/ext/bc/X`, and the global elastic node URL is `https://avalanche-mainnet.core.chainstack.com/ext/bc/X/API_KEY`
* For P-Chain API, the regional elastic node URL is `https://nd-123-145-789.p2pify.com/API_KEY/ext/P`, and the global elastic node URL is `https://avalanche-mainnet.core.chainstack.com/ext/P/API_KEY`
##### Websockets
Websockets are available for the C-Chain.
For the C-Chain API, the regional elastic node URL is `wss://ws-nd-123-145-789.p2pify.com/API_KEY/ext/bc/C/ws`, and the global elastic node URL is `wss://avalanche-mainnet.core.chainstack.com/ext/bc/C/ws/API_KEY`
#### Testnet (Fuji)
##### HTTP
* For C-Chain API, the URL is `https://nd-123-145-789.p2pify.com/API_KEY/ext/bc/C/rpc`
* For X-Chain API, the URL is `https://nd-123-145-789.p2pify.com/API_KEY/ext/bc/X`
* For P-Chain API, the URL is `https://nd-123-145-789.p2pify.com/API_KEY/ext/P`
##### Websockets
Websockets are available for the C-Chain.
For the C-Chain API, the regional elastic node URL is `wss://ws-nd-123-145-789.p2pify.com/API_KEY/ext/bc/C/ws`, and the global elastic node URL is `wss://avalanche-fuji.core.chainstack.com/ext/bc/C/ws/API_KEY`
### DRPC
[DRPC](https://drpc.org/) supports the C-Chain.
#### Mainnet
* For C-Chain RPC Endpoint, the URL is `https://avalanche.drpc.org`
#### Testnet (Fuji)
* For C-Chain RPC Endpoint, the URL is `https://avalanche-fuji.drpc.org`
Features:
* Decentralized RPC nodes
* Node balancing
* Unlimited compute units per month on the free tier
* Websockets available on the free tier
### GetBlock
[GetBlock](https://getblock.io/nodes/avax) currently only supports the C-Chain.
#### HTTP
* For C-Chain API, the URL is `https://avax.getblock.io/api_key/mainnet/ext/bc/C/rpc?api_key=`
Note: on Fuji Testnet, the URL is `https://avax.getblock.io/api_key/testnet/ext/bc/C/rpc?api_key=`.
#### Websockets
* For C-Chain API, the URL is `wss://avax.getblock.io/api_key/mainnet/ext/bc/C/ws?api_key=`
Note: on Fuji Testnet, the URL is `wss://avax.getblock.io/api_key/testnet/ext/bc/C/ws?api_key=`.
### Grove
[Grove](https://grove.city/) supports the C-Chain.
#### Mainnet
* For the C-Chain RPC Endpoint, the Public RPC URL is `https://avax.rpc.grove.city/v1/01fdb492`
* Private Endpoints can be created and the URL is `https://avax.rpc.grove.city/v1/`
Features:
* Decentralized RPC access on the Unstoppable [Pocket Network](https://pocket.network/)
* No compute units. 1 request = 1 relay.
* Free Tier: 150,000 Relays per Month capped at 30 RPS
### Infura
[Infura](https://docs.infura.io/infura/networks/avalanche-c-chain/) currently
only supports the C-Chain.
#### HTTP
* For C-Chain API, the URL is `https://avalanche-mainnet.infura.io/v3/YOUR-API-KEY`
Note: on Fuji Testnet, the URL is `https://avalanche-fuji.infura.io/v3/YOUR-API-KEY`.
### Moralis
[Moralis](https://moralis.io/?utm_source=avax-docs\&utm_medium=partner-docs) currently supports the C-Chain.
#### Mainnet
* [Moralis RPC Nodes](https://moralis.io/nodes/?utm_source=avax-docs\&utm_medium=partner-docs) for RPC Nodes
* [NFT API](https://moralis.io/api/nft/?utm_source=avax-docs\&utm_medium=partner-docs) for getting NFT metadata, balances, transfers, sales and more
* [Token API](https://moralis.io/api/token/?utm_source=avax-docs\&utm_medium=partner-docs) for getting ERC20 metadata, balances, transfers, prices, burns, mints and more
* [Wallet API](https://moralis.io/api/wallet/?utm_source=avax-docs\&utm_medium=partner-docs) for getting wallet balances, transaction history, net worth and more
* [Blockchain API](https://moralis.io/api/block/?utm_source=avax-docs\&utm_medium=partner-docs) for getting data about blocks, transactions, logs and events
* [Streams API](https://moralis.io/streams/?utm_source=avax-docs\&utm_medium=partner-docs) for getting real-time webhooks about any on-chain event
Features:
* Free plan available
* Supports all major EVM networks
### Nodies
[Nodies](https://nodies.app) supports the C, X, P, and DFK Avalanche L1 chains.
Features:
* Generous free tier
* Globally distributed infrastructure in 3+ geographic regions
* Decentralized and centralized APIs
#### HTTP
* For `C-Chain`, the URL is `https://lb.nodies.app/v1/105f8099e80f4123976b59df1ebfb433/ext/bc/C/rpc`
* For `X-Chain`, the URL is `https://lb.nodies.app/v1/105f8099e80f4123976b59df1ebfb433/ext/bc/X`
* For `P-Chain`, the URL is `https://lb.nodies.app/v1/105f8099e80f4123976b59df1ebfb433/ext/bc/P`
* For `DFK-Subnet`, the URL is `https://lb.nodies.app/v1/105f8099e80f4123976b59df1ebfb433/ext/bc/q2aTwKuyzgs8pynF7UXBZCU7DejbZbZ6EUyHr3JQzYgwNPUPi/rpc`
### QuickNode
[QuickNode](https://www.quicknode.com/chains/avax) supports the X-Chain,
P-Chain, C-Chain, and Index API.
#### HTTP
* The URL is `http://sample-endpoint-name.network.quiknode.pro/token-goes-here/`
#### Websockets
* The URL is `wss://sample-endpoint-name.network.quiknode.pro/token-goes-here/`
### Stackup
[Stackup](https://www.stackup.sh) currently supports the Avalanche C-Chain on Mainnet and Fuji Testnet.
Features:
* Free
* Account abstraction RPC endpoints
* ERC-4337 bundlers and paymasters
#### HTTP
* The URL is `https://api.stackup.sh/v1/node/YOUR-API-KEY`
### Tenderly
Tenderly offers high-performance [Node RPC](https://docs.tenderly.co/node/?mtm_campaign=ext-docs\&mtm_kwd=avalanche) services for C-Chain, providing consistent support for developers. In addition to standard Node RPC, use **Simulation RPC** to simulate transactions, **Trace RPC** for detailed execution paths, and **Gas RPC** to optimize gas usage. Use Tenderly's Node RPC for reliable support, seamless transaction broadcasting, and blockchain data retrieval. Identify and resolve issues faster, minimize latency, and ensure reliable dapp performance with built-in debugging, testing, and monitoring.
Features:
* **[Node RPC](https://docs.tenderly.co/node/?mtm_campaign=ext-docs\&mtm_kwd=avalanche)**: High-performance, low-latency access to C-Chain nodes
* **[Simulation RPC](https://docs.tenderly.co/simulations/single-simulations#simulate-via-rpc?mtm_campaign=ext-docs\&mtm_kwd=avalanche)**: Accurate transaction simulation and gas cost prediction
* **[Trace RPC](https://docs.tenderly.co/node/rpc-reference/avalanche/trace_transaction)**: Detailed transaction execution paths for debugging
* **[Gas RPC](https://docs.tenderly.co/node/rpc-reference/avalanche/tenderly_estimateGas?mtm_campaign=ext-docs\&mtm_kwd=avalanche)**: Optimize gas usage and transaction costs. Use [`tenderly_gasPrice`](https://docs.tenderly.co/node/rpc-reference/avalanche/tenderly_gasPrice) to get the most likely current gas price
### NOWNodes
[NOWNodes](https://nownodes.io/nodes/avalanche-avax) supports the X-Chain, P-Chain, C-Chain, and Blockbook.
Features:
* Privacy oriented (non-custodial, no KYC)
* Dedicated access with no limits by request
* Free starter plan
* Technical guides
* 24/7 Support
#### RPC
* Full Node endpoint: `https://avax.nownodes.io`
#### Explorer
* The URL is: `https://avax-blockbook.nownodes.io`
#### WSS
* Endpoint is: `wss://avax.nownodes.io/wss`
#### Blockbook WSS
* Endpoint is: `wss://avax-blockbook.nownodes.io/wss`
### Zeeve
[Zeeve](https://www.zeeve.io) supports the C-Chain, X-Chain, and P-Chain.
Features:
* Archive/Full Node Paid Plans
* 24/7 support
* Distributed global infrastructure
#### Mainnet
##### HTTP (Full)
* X-Chain: [https://zeeve-avalanche-mainnet.zeeve.net/as11L2bAq0mZ8wT3rV1P/rpc/ext/bc/X](https://zeeve-avalanche-mainnet.zeeve.net/as11L2bAq0mZ8wT3rV1P/rpc/ext/bc/X)
* P-Chain: [https://zeeve-avalanche-mainnet.zeeve.net/as11L2bAq0mZ8wT3rV1P/rpc/ext/bc/P](https://zeeve-avalanche-mainnet.zeeve.net/as11L2bAq0mZ8wT3rV1P/rpc/ext/bc/P)
* C-Chain: [https://zeeve-avalanche-mainnet.zeeve.net/as11L2bAq0mZ8wT3rV1P/rpc/ext/bc/C/rpc](https://zeeve-avalanche-mainnet.zeeve.net/as11L2bAq0mZ8wT3rV1P/rpc/ext/bc/C/rpc)
### 1RPC
[1RPC](https://1rpc.io), by Automata Network, supports the C-Chain, X-Chain, and P-Chain.
Features:
* Free to use
* First RPC relay to be attested on-chain
* Eradicate metadata exposure and leakage
* Zero tracking
#### Mainnet RPC
* For C-Chain RPC Endpoint, the URL is `https://1rpc.io/avax/c`
* For X-Chain RPC Endpoint, the URL is `https://1rpc.io/avax/x`
* For P-Chain RPC Endpoint, the URL is `https://1rpc.io/avax/p`
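For the X-Chain, the AvalancheGo `avm.getHeight` method can be used the same way. This example assumes 1RPC forwards the X-Chain URL above to the node's standard `/ext/bc/X` endpoint, as with the other providers in this list:

```shell
curl -X POST --data '{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "avm.getHeight",
  "params": {}
}' -H 'content-type:application/json' https://1rpc.io/avax/x
```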
## Avalanche L1s RPC - Public API Servers
### Beam
#### HTTP
* The URL is `https://subnets.avax.network/beam/mainnet/rpc`.
Note: on Fuji Testnet, the URL is `https://subnets.avax.network/beam/testnet/rpc`.
#### Websockets
* The URL is `wss://subnets.avax.network/beam/mainnet/ws`.
Note: on Fuji Testnet, the URL is `wss://subnets.avax.network/beam/testnet/ws`.
### DeFi Kingdom (DFK)
#### HTTP
* The URL is `https://subnets.avax.network/defi-kingdoms/dfk-chain/rpc`.
Note: on Fuji Testnet, the URL is `https://subnets.avax.network/defi-kingdoms/dfk-chain-testnet/rpc`.
#### Websockets
* The URL is `wss://subnets.avax.network/defi-kingdoms/dfk-chain/ws`.
Note: on Fuji Testnet, the URL is `wss://subnets.avax.network/defi-kingdoms/dfk-chain-testnet/ws`.
### Dexalot
#### HTTP
* The URL is `https://subnets.avax.network/dexalot/mainnet/rpc`.
Note: on Fuji Testnet, the URL is `https://subnets.avax.network/dexalot/testnet/rpc`.
#### Websockets
* The URL is `wss://subnets.avax.network/dexalot/mainnet/ws`.
Note: on Fuji Testnet, the URL is `wss://subnets.avax.network/dexalot/testnet/ws`.
## Avalanche RPC Proxy and Caching
[eRPC](https://github.com/erpc/erpc) is a fault-tolerant EVM RPC proxy and re-org-aware permanent caching solution. It is built with read-heavy use cases in mind, such as data indexing and high-load frontend usage.
### Quickstart
1. Create your [`erpc.yaml`](https://docs.erpc.cloud/config/example) configuration file:
```yaml filename="erpc.yaml"
logLevel: debug
projects:
- id: main
upstreams:
# You don't need to define architecture (e.g. evm) or chain id (e.g. 43114)
# as they will be detected automatically by eRPC.
- endpoint: https://ava-mainnet.blastapi.io/xxxx
- endpoint: evm+alchemy://xxxx-my-alchemy-api-key-xxxx
```
See [a complete config example](https://docs.erpc.cloud/config/example) for inspiration.
2. Use the Docker image:
```bash
docker run -v $(pwd)/erpc.yaml:/root/erpc.yaml -p 4000:4000 -p 4001:4001 ghcr.io/erpc/erpc:latest
```
3. Send your first request:
```bash
curl --location 'http://localhost:4000/main/evm/43114' \
--header 'Content-Type: application/json' \
--data '{
"method": "eth_getBlockByNumber",
"params": [
"0x2e76572",
false
],
"id": 9199,
"jsonrpc": "2.0"
}'
```
4. Bring up monitoring stack (Prometheus, Grafana) using docker-compose:
```bash
# clone the repo if you haven't
git clone https://github.com/erpc/erpc.git
cd erpc
# bring up the monitoring stack
docker-compose up -d
```
5. Open Grafana at [http://localhost:3000](http://localhost:3000) and log in with the following credentials:
* username: `admin`
* password: `admin`
6. Send more requests and watch the metrics being collected and visualized in Grafana.

# Banff Changes
URL: /docs/api-reference/guides/banff-changes
This document specifies the changes in Avalanche “Banff”, which was released in AvalancheGo v1.9.x.
## Block Changes[](#block-changes "Direct link to heading")
### Apricot[](#apricot "Direct link to heading")
Apricot allows the following block types with the following content:
* *Standard Blocks* may contain multiple transactions of the following types:
* CreateChainTx
* CreateSubnetTx
* ImportTx
* ExportTx
* *Proposal Blocks* may contain a single transaction of the following types:
* AddValidatorTx
* AddDelegatorTx
* AddSubnetValidatorTx
* RewardValidatorTx
* AdvanceTimeTx
* *Options Blocks* (that is, *Commit Blocks* and *Abort Blocks*) do not contain any transactions.
Each block has a header containing:
* ParentID
* Height
### Banff[](#banff "Direct link to heading")
Banff allows the following block types with the following content:
* *Standard Blocks* may contain multiple transactions of the following types:
* CreateChainTx
* CreateSubnetTx
* ImportTx
* ExportTx
* AddValidatorTx
* AddDelegatorTx
* AddSubnetValidatorTx
* *RemoveSubnetValidatorTx*
* *TransformSubnetTx*
* *AddPermissionlessValidatorTx*
* *AddPermissionlessDelegatorTx*
* *Proposal Blocks* may contain a single transaction of the following types:
* RewardValidatorTx
* *Options Blocks* (that is, *Commit Blocks* and *Abort Blocks*) do not contain any transactions.
Note that each block has a header containing:
* ParentID
* Height
* *Time*
So the three main differences with respect to Apricot are:
* *AddValidatorTx*, *AddDelegatorTx*, and *AddSubnetValidatorTx* are included in Standard Blocks rather than Proposal Blocks, so they don't need to be voted on (that is, followed by a Commit/Abort Block).
* New transaction types *RemoveSubnetValidatorTx*, *TransformSubnetTx*, *AddPermissionlessValidatorTx*, and *AddPermissionlessDelegatorTx* have been added to Standard Blocks.
* The block timestamp is explicitly serialized into the block header, to allow the chain time to be updated.
### New Transactions[](#new-transactions "Direct link to heading")
#### RemoveSubnetValidatorTx[](#removesubnetvalidatortx "Direct link to heading")
```go
type RemoveSubnetValidatorTx struct {
    BaseTx `serialize:"true"`
    // The node to remove from the Avalanche L1.
    NodeID ids.NodeID `serialize:"true" json:"nodeID"`
    // The Avalanche L1 to remove the node from.
    Subnet ids.ID `serialize:"true" json:"subnet"`
    // Proves that the issuer has the right to remove the node from the Avalanche L1.
    SubnetAuth verify.Verifiable `serialize:"true" json:"subnetAuthorization"`
}
```
#### TransformSubnetTx[](#transformsubnettx "Direct link to heading")
```go
type TransformSubnetTx struct {
    // Metadata, inputs and outputs
    BaseTx `serialize:"true"`
    // ID of the Subnet to transform
    // Restrictions:
    // - Must not be the Primary Network ID
    Subnet ids.ID `serialize:"true" json:"subnetID"`
    // Asset to use when staking on the Avalanche L1
    // Restrictions:
    // - Must not be the Empty ID
    // - Must not be the AVAX ID
    AssetID ids.ID `serialize:"true" json:"assetID"`
    // Amount to initially specify as the current supply
    // Restrictions:
    // - Must be > 0
    InitialSupply uint64 `serialize:"true" json:"initialSupply"`
    // Amount to specify as the maximum token supply
    // Restrictions:
    // - Must be >= [InitialSupply]
    MaximumSupply uint64 `serialize:"true" json:"maximumSupply"`
    // MinConsumptionRate is the rate to allocate funds if the validator's stake
    // duration is 0
    MinConsumptionRate uint64 `serialize:"true" json:"minConsumptionRate"`
    // MaxConsumptionRate is the rate to allocate funds if the validator's stake
    // duration is equal to the minting period
    // Restrictions:
    // - Must be >= [MinConsumptionRate]
    // - Must be <= [reward.PercentDenominator]
    MaxConsumptionRate uint64 `serialize:"true" json:"maxConsumptionRate"`
    // MinValidatorStake is the minimum amount of funds required to become a
    // validator.
    // Restrictions:
    // - Must be > 0
    // - Must be <= [InitialSupply]
    MinValidatorStake uint64 `serialize:"true" json:"minValidatorStake"`
    // MaxValidatorStake is the maximum amount of funds a single validator can
    // be allocated, including delegated funds.
    // Restrictions:
    // - Must be >= [MinValidatorStake]
    // - Must be <= [MaximumSupply]
    MaxValidatorStake uint64 `serialize:"true" json:"maxValidatorStake"`
    // MinStakeDuration is the minimum number of seconds a staker can stake for.
    // Restrictions:
    // - Must be > 0
    MinStakeDuration uint32 `serialize:"true" json:"minStakeDuration"`
    // MaxStakeDuration is the maximum number of seconds a staker can stake for.
    // Restrictions:
    // - Must be >= [MinStakeDuration]
    // - Must be <= [GlobalMaxStakeDuration]
    MaxStakeDuration uint32 `serialize:"true" json:"maxStakeDuration"`
    // MinDelegationFee is the minimum percentage a validator must charge a
    // delegator for delegating.
    // Restrictions:
    // - Must be <= [reward.PercentDenominator]
    MinDelegationFee uint32 `serialize:"true" json:"minDelegationFee"`
    // MinDelegatorStake is the minimum amount of funds required to become a
    // delegator.
    // Restrictions:
    // - Must be > 0
    MinDelegatorStake uint64 `serialize:"true" json:"minDelegatorStake"`
    // MaxValidatorWeightFactor is the factor which calculates the maximum
    // amount of delegation a validator can receive.
    // Note: a value of 1 effectively disables delegation.
    // Restrictions:
    // - Must be > 0
    MaxValidatorWeightFactor byte `serialize:"true" json:"maxValidatorWeightFactor"`
    // UptimeRequirement is the minimum percentage a validator must be online
    // and responsive to receive a reward.
    // Restrictions:
    // - Must be <= [reward.PercentDenominator]
    UptimeRequirement uint32 `serialize:"true" json:"uptimeRequirement"`
    // Authorizes this transformation
    SubnetAuth verify.Verifiable `serialize:"true" json:"subnetAuthorization"`
}
```
#### AddPermissionlessValidatorTx[](#addpermissionlessvalidatortx "Direct link to heading")
```go
type AddPermissionlessValidatorTx struct {
    // Metadata, inputs and outputs
    BaseTx `serialize:"true"`
    // Describes the validator
    Validator validator.Validator `serialize:"true" json:"validator"`
    // ID of the Avalanche L1 this validator is validating
    Subnet ids.ID `serialize:"true" json:"subnet"`
    // Where to send staked tokens when done validating
    StakeOuts []*avax.TransferableOutput `serialize:"true" json:"stake"`
    // Where to send validation rewards when done validating
    ValidatorRewardsOwner fx.Owner `serialize:"true" json:"validationRewardsOwner"`
    // Where to send delegation rewards when done validating
    DelegatorRewardsOwner fx.Owner `serialize:"true" json:"delegationRewardsOwner"`
    // Fee this validator charges delegators as a percentage, times 10,000
    // For example, if this validator has DelegationShares=300,000 then they
    // take 30% of rewards from delegators
    DelegationShares uint32 `serialize:"true" json:"shares"`
}
```
#### AddPermissionlessDelegatorTx[](#addpermissionlessdelegatortx "Direct link to heading")
```go
type AddPermissionlessDelegatorTx struct {
    // Metadata, inputs and outputs
    BaseTx `serialize:"true"`
    // Describes the validator
    Validator validator.Validator `serialize:"true" json:"validator"`
    // ID of the Avalanche L1 this validator is validating
    Subnet ids.ID `serialize:"true" json:"subnet"`
    // Where to send staked tokens when done validating
    Stake []*avax.TransferableOutput `serialize:"true" json:"stake"`
    // Where to send staking rewards when done validating
    RewardsOwner fx.Owner `serialize:"true" json:"rewardsOwner"`
}
```
#### New TypeIDs[](#new-typeids "Direct link to heading")
```
ApricotProposalBlock = 0
ApricotAbortBlock = 1
ApricotCommitBlock = 2
ApricotStandardBlock = 3
ApricotAtomicBlock = 4
secp256k1fx.TransferInput = 5
secp256k1fx.MintOutput = 6
secp256k1fx.TransferOutput = 7
secp256k1fx.MintOperation = 8
secp256k1fx.Credential = 9
secp256k1fx.Input = 10
secp256k1fx.OutputOwners = 11
AddValidatorTx = 12
AddSubnetValidatorTx = 13
AddDelegatorTx = 14
CreateChainTx = 15
CreateSubnetTx = 16
ImportTx = 17
ExportTx = 18
AdvanceTimeTx = 19
RewardValidatorTx = 20
stakeable.LockIn = 21
stakeable.LockOut = 22
RemoveSubnetValidatorTx = 23
TransformSubnetTx = 24
AddPermissionlessValidatorTx = 25
AddPermissionlessDelegatorTx = 26
EmptyProofOfPossession = 27
BLSProofOfPossession = 28
BanffProposalBlock = 29
BanffAbortBlock = 30
BanffCommitBlock = 31
BanffStandardBlock = 32
```
# Flow of a Single Blockchain
URL: /docs/api-reference/guides/blockchain-flow

## Intro[](#intro "Direct link to heading")
The Avalanche network consists of 3 built-in blockchains: X-Chain, C-Chain, and P-Chain. The X-Chain is used to manage assets and uses the Avalanche consensus protocol. The C-Chain is used to create and interact with smart contracts and uses the Snowman consensus protocol. The P-Chain is used to coordinate validators and stake and also uses the Snowman consensus protocol. At the time of writing, the Avalanche network has \~1200 validators. A set of validators makes up an Avalanche L1, and Avalanche L1s can validate one or more chains. It is a common misconception that one Avalanche L1 equals one chain; the primary Avalanche L1, for example, is made up of the X-Chain, C-Chain, and P-Chain.
A node in the Avalanche network can either be a validator or a non-validator. A validator stakes AVAX tokens and participates in consensus to earn rewards. A non-validator does not participate in consensus or have any AVAX staked but can be used as an API server. Both validators and non-validators need to have their own copy of the chain and need to know the current state of the network. At the time of writing, there are \~1200 validators and \~1800 non-validators.
Each blockchain on Avalanche has several components: the virtual machine, database, consensus engine, sender, and handler. These components help the chain run smoothly. Blockchains also interact with the P2P layer and the chain router to send and receive messages.
## Peer-to-Peer (P2P)[](#peer-to-peer-p2p "Direct link to heading")
### Outbound Messages[](#outbound-messages "Direct link to heading")
[The `OutboundMsgBuilder` interface](https://github.com/ava-labs/avalanchego/blob/master/message/outbound_msg_builder.go) specifies methods that build messages of type `OutboundMessage`. Nodes communicate to other nodes by sending `OutboundMessage` messages.
All messaging functions in `OutboundMsgBuilder` can be categorized as follows:
* **Handshake**
* Nodes need to be on a certain version before they can be accepted into the network.
* **State Sync**
* A new node can ask other nodes for the current state of the network. It only syncs the required state for a specific block.
* **Bootstrapping**
* Nodes can ask other nodes for blocks to build their own copy of the chain. A node can fetch all blocks from the locally last accepted block to the current last accepted block in the network.
* **Consensus**
* Once a node is up to tip, it can participate in consensus. During consensus, a node polls several small random samples of the validator set and communicates its decisions on whether it has accepted or rejected a block.
* **App**
* VMs communicate application-specific messages to other nodes through app messages. A common example is mempool gossiping.
Currently, AvalancheGo implements its own message serialization to communicate. In the future, AvalancheGo will use protocol buffers to communicate.
### Network[](#network "Direct link to heading")
[The networking interface](https://github.com/ava-labs/avalanchego/blob/master/network/network.go) is shared across all chains. It implements functions from the `ExternalSender` interface. The two functions it implements are `Send` and `Gossip`. `Send` sends a message of type `OutboundMessage` to a specific set of nodes (specified by an array of `NodeIDs`). `Gossip` sends a message of type `OutboundMessage` to a random group of nodes in an Avalanche L1 (can be a validator or a non-validator). Gossiping is used to push transactions across the network. The networking protocol uses TLS to pass messages between peers.
Along with sending and gossiping, the networking library is also responsible for making connections and maintaining connections. Any node, either a validator or non-validator, will attempt to connect to the primary network.
## Router[](#router "Direct link to heading")
[The `ChainRouter`](https://github.com/ava-labs/avalanchego/blob/master/snow/networking/router/chain_router.go) routes all incoming messages to their respective blockchains using the `ChainID`. It does this by pushing each message onto the respective chain handler's queue. The `ChainRouter` references all existing chains on the network, such as the X-Chain, C-Chain, P-Chain, and possibly any other chain. The `ChainRouter` handles timeouts as well. When sending messages on the P2P layer, timeouts are registered on the sender and cleared on the `ChainRouter` side when a response is received. If no response is received, a timeout is triggered. Because timeouts are handled on the `ChainRouter` side, the handler is reliable: timeouts are triggered when peers do not respond, and the `ChainRouter` will still notify the handler of failure cases. The timeout manager within `ChainRouter` is also adaptive: if the network is experiencing long latencies, timeouts will be adjusted as well.
## Handler[](#handler "Direct link to heading")
The main function of [the `Handler`](https://github.com/ava-labs/avalanchego/blob/master/snow/networking/handler/handler.go) is to pass messages from the network to the consensus engine. It receives these messages from the `ChainRouter` and passes them along by pushing them onto a sync or async queue (depending on the message type). Messages are then popped from the queue, parsed, and routed to the correct function in the consensus engine. This can be one of the following:
* **State sync message (sync queue)**
* **Bootstrapping message (sync queue)**
* **Consensus message (sync queue)**
* **App message (async queue)**
## Sender[](#sender "Direct link to heading")
The main role of [the `sender`](https://github.com/ava-labs/avalanchego/blob/master/snow/networking/sender/sender.go) is to build and send outbound messages. It is actually a very thin wrapper around the normal networking code. The main difference here is that the sender registers timeouts and tells the router to expect a response message. The timer starts on the sender side. If there is no response, the sender will send a failed response to the router. If a node is repeatedly unresponsive, that node will get benched and the sender will immediately start marking its messages as failed. If a sufficient portion of the network deems the node benched, it might not receive rewards (as a validator).
## Consensus Engine[](#consensus-engine "Direct link to heading")
Consensus is defined as getting a group of distributed systems to agree on an outcome. In the case of the Avalanche network, consensus is achieved when validators are in agreement with the state of the blockchain. The novel consensus algorithm is documented in the [white paper](https://assets.website-files.com/5d80307810123f5ffbb34d6e/6009805681b416f34dcae012_Avalanche%20Consensus%20Whitepaper.pdf). There are two main consensus algorithms: Avalanche and [Snowman](https://github.com/ava-labs/avalanchego/blob/master/snow/consensus/snowman/consensus.go). The engine is responsible for proposing a new block to consensus, repeatedly polling the network for decisions (accept/reject), and communicating that decision to the `Sender`.
## Blockchain Creation[](#blockchain-creation "Direct link to heading")
[The `Manager`](https://github.com/ava-labs/avalanchego/blob/master/chains/manager.go) is what kick-starts everything in regards to blockchain creation, starting with the P-Chain. Once the P-Chain finishes bootstrapping, it will kick-start the C-Chain, X-Chain, and any other chains. The `Manager`'s job is not done yet: if a create-chain transaction is seen by a validator, the `Manager` will start a whole new process to create that chain. This can happen dynamically, long after the 3 chains in the Primary Network have been created and bootstrapped.
# Issuing API Calls
URL: /docs/api-reference/guides/issuing-api-calls
This guide explains how to make calls to APIs exposed by Avalanche nodes.
## Endpoints[](#endpoints "Direct link to heading")
An API call is made to an endpoint, which is a URL made up of the base URI (the address and port of the node) and the path of the particular endpoint the API call targets.
### Base URL[](#base-url "Direct link to heading")
The base of the URL is always:
`[node-ip]:[http-port]`
where
* `node-ip` is the IP address of the node the call is to.
* `http-port` is the port the node listens on for HTTP calls. This is specified by [command-line argument](/docs/nodes/configure/configs-flags#http-server) `http-port` (default value `9650`).
For example, if you're making RPC calls on the local node, the base URL might look like this: `127.0.0.1:9650`.
If you're making RPC calls to remote nodes, use the public IP of the server where the node runs instead of `127.0.0.1`. Note that by default the node only accepts API calls on the local interface, so you will need to set the [`http-host`](/docs/nodes/configure/configs-flags#--http-host-string) config flag on the node. You will also need to make sure the firewall and/or security policy allows access to the `http-port` from the internet.
When setting up RPC access to a node, make sure you don't leave the `http-port` accessible to everyone! Malicious actors scan for nodes with unrestricted access to their RPC port and then spam those nodes with resource-intensive queries, which can knock the node offline. Only allow access to your node's RPC port from known IP addresses!
### Endpoint Path[](#endpoint-path "Direct link to heading")
Each API's documentation specifies what endpoint path a user should make calls to in order to access the API's methods.
In general, they are formatted like:
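```
/ext/[api-name]
```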
So for the Admin API, the endpoint path is `/ext/admin`, for the Info API it is `/ext/info`, and so on. Note that some APIs have additional path components, most notably the chain RPC endpoints, which include the Avalanche L1 chain RPCs. We'll go over those in detail in the next section.
So, in combining the base URL and the endpoint path we get the complete URL for making RPC calls. For example, to make a local RPC call on the Info API, the full URL would be:
```
http://127.0.0.1:9650/ext/info
```
## Primary Network and Avalanche L1 RPC calls[](#primary-network-and-avalanche-l1-rpc-calls "Direct link to heading")
Besides the APIs that are local to the node, like Admin or Metrics APIs, nodes also expose endpoints for talking to particular chains that are either part of the Primary Network (the X, P and C chains), or part of any Avalanche L1s the node might be syncing or validating.
In general, chain endpoints are formatted as:
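```
/ext/bc/[blockchainID]
```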
### Primary Network Endpoints[](#primary-network-endpoints "Direct link to heading")
The Primary Network consists of three chains: X, P and C chain. As those chains are present on every node, there are also convenient aliases defined that can be used instead of the full blockchainIDs. So, the endpoints look like:
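```
/ext/bc/X
/ext/bc/P
/ext/bc/C
```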
### C-Chain and Subnet-EVM Endpoints[](#c-chain-and-subnet-evm-endpoints "Direct link to heading")
The C-Chain and many Avalanche L1s run a version of the Ethereum Virtual Machine (EVM). The EVM exposes its own endpoints, which are also accessible on the node: JSON-RPC and WebSocket.
#### JSON-RPC EVM Endpoints[](#json-rpc-evm-endpoints "Direct link to heading")
To interact with C-Chain EVM via the JSON-RPC use the endpoint:
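```
/ext/bc/C/rpc
```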
To interact with Avalanche L1 instances of the EVM via the JSON-RPC endpoint:
```
/ext/bc/[blockchainID]/rpc
```
where `blockchainID` is the ID of the blockchain running the EVM. So for example, the RPC URL for the DFK Network (an Avalanche L1 that runs the DeFi Kingdoms:Crystalvale game) running on a local node would be:
```
http://127.0.0.1:9650/ext/bc/q2aTwKuyzgs8pynF7UXBZCU7DejbZbZ6EUyHr3JQzYgwNPUPi/rpc
```
Or for the WAGMI Avalanche L1 on the Fuji testnet:
```
http://127.0.0.1:9650/ext/bc/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/rpc
```
#### Websocket EVM Endpoints[](#websocket-evm-endpoints "Direct link to heading")
To interact with C-Chain via the websocket endpoint, use:
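```
/ext/bc/C/ws
```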
To interact with other instances of the EVM via the websocket endpoint:
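```
/ext/bc/[blockchainID]/ws
```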
where `blockchainID` is the ID of the blockchain running the EVM. For example, to interact with the C-Chain's Ethereum APIs via websocket on localhost you can use:
```
ws://127.0.0.1:9650/ext/bc/C/ws
```
When using the [Public API](/docs/tooling/rpc-providers) or another host that supports HTTPS, use `https://` or `wss://` instead of `http://` or `ws://`.
Also, note that the [public API](/docs/tooling/rpc-providers#using-the-public-api-nodes) only supports C-Chain websocket API calls for API methods that don't exist on the C-Chain's HTTP API.
## Making a JSON RPC Request[](#making-a-json-rpc-request "Direct link to heading")
Most of the built-in APIs use the [JSON RPC 2.0](https://www.jsonrpc.org/specification) format to describe their requests and responses. Such APIs include the Platform API and the X-Chain API.
Suppose we want to call the `getTxStatus` method of the [X-Chain API](/docs/api-reference/x-chain/api). The X-Chain API documentation tells us that the endpoint for this API is `/ext/bc/X`.
That means that the endpoint we send our API call to is:
`[node-ip]:[http-port]/ext/bc/X`
The X-Chain API documentation tells us that the signature of `getTxStatus` is:
[`avm.getTxStatus`](/docs/api-reference/x-chain/api#avmgettxstatus)`(txID:bytes) -> (status:string)`
where:
* Argument `txID` is the ID of the transaction we're getting the status of.
* Returned value `status` is the status of the transaction in question.
To call this method:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :4,
"method" :"avm.getTxStatus",
"params" :{
"txID":"2QouvFWUbjuySRxeX5xMbNCuAaKWfbk5FeEa2JmoF85RKLk2dD"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
* `jsonrpc` specifies the version of the JSON RPC protocol. (In practice, this is always 2.0.)
* `method` specifies the service (`avm`) and method (`getTxStatus`) that we want to invoke.
* `params` specifies the arguments to the method.
* `id` is the ID of this request. Request IDs should be unique.
That's it!
### JSON RPC Success Response[](#json-rpc-success-response "Direct link to heading")
If the call is successful, the response will look like this:
```json
{
"jsonrpc": "2.0",
"result": {
"Status": "Accepted"
},
"id": 4
}
```
* `id` is the ID of the request that this response corresponds to.
* `result` is the returned values of `getTxStatus`.
### JSON RPC Error Response[](#json-rpc-error-response "Direct link to heading")
If the API method invoked returns an error then the response will have a field `error` in place of `result`. Additionally, there is an extra field, `data`, which holds additional information about the error that occurred.
Such a response would look like:
```
{
"jsonrpc": "2.0",
"error": {
"code": -32600,
"message": "[Some error message here]",
"data": [Object with additional information about the error]
},
"id": 1
}
```
## Other API Formats[](#other-api-formats "Direct link to heading")
Some APIs may use a standard other than JSON RPC 2.0 to format their requests and responses. Such APIs specify how to make calls and parse responses in their documentation.
## Sending and Receiving Bytes[](#sending-and-receiving-bytes "Direct link to heading")
Unless otherwise noted, when bytes are sent in an API call/response, they are in hex representation. However, Transaction IDs (TXIDs), ChainIDs, and subnetIDs are in [CB58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) representation, a base-58 encoding with a checksum.
# Transaction Fees
URL: /docs/api-reference/guides/txn-fees
In order to prevent spam, transactions on Avalanche require the payment of a transaction fee. The fee is paid in AVAX. **The transaction fee is burned (destroyed forever).**
When you issue a transaction through Avalanche's API, the transaction fee is automatically deducted from one of the addresses you control.
The [avalanchego wallet](https://github.com/ava-labs/avalanchego/blob/master/wallet/chain) contains example code written in Go for building and signing transactions on all three Mainnet chains.
## X-Chain Fees[](#fee-schedule)
The X-Chain currently operates under a fixed fee mechanism. This table shows the X-Chain transaction fee schedule:
```
+----------+---------------------------+--------------------------------+
| Chain | Transaction Type | Mainnet Transaction Fee (AVAX) |
+----------+---------------------------+--------------------------------+
| X | Send | 0.001 |
+----------+---------------------------+--------------------------------+
| X | Create Asset | 0.01 |
+----------+---------------------------+--------------------------------+
| X | Mint Asset | 0.001 |
+----------+---------------------------+--------------------------------+
| X | Import AVAX | 0.001 |
+----------+---------------------------+--------------------------------+
| X | Export AVAX | 0.001 |
+----------+---------------------------+--------------------------------+
```
## C-Chain Fees[](#c-chain-fees)
The Avalanche C-Chain uses an algorithm to determine the "base fee" for a transaction. The base fee increases when network utilization is above the target utilization and decreases when network utilization is below the target.
### Dynamic Fee Transactions[](#dynamic-fee-transactions)
Transaction fees for non-atomic transactions are based on Ethereum's EIP-1559 style Dynamic Fee Transactions, which consist of a gas fee cap and a gas tip cap.
The fee cap specifies the maximum price the transaction is willing to pay per unit of gas. The tip cap (also called the priority fee) specifies the maximum amount above the base fee that the transaction is willing to pay per unit of gas. Therefore, the effective gas price paid by a transaction will be `min(gasFeeCap, baseFee + gasTipCap)`. Unlike in Ethereum, where the priority fee is paid to the miner that produces the block, in Avalanche both the base fee and the priority fee are burned. For legacy transactions, which only specify a single gas price, the gas price serves as both the gas fee cap and the gas tip cap.
Use the [`eth_baseFee`](/docs/api-reference/c-chain/api#eth_basefee) API method to estimate the base fee for the next block. If more blocks are produced in between the time that you construct your transaction and it is included in a block, the base fee could be different from the base fee estimated by the API call, so it is important to treat this value as an estimate.
Next, use the [eth\_maxPriorityFeePerGas](/docs/api-reference/c-chain/api#eth_maxpriorityfeepergas) API call to estimate the priority fee needed to be included in a block. This API call looks at the most recent blocks and sees what tips recent transactions have paid in order to be included in the block.
Transactions are ordered by the priority fee, then the timestamp (oldest first).
Based off of this information, you can specify the `gasFeeCap` and `gasTipCap` to your liking based on how you prioritize getting your transaction included as quickly as possible vs. minimizing the price paid per unit of gas.
#### Base Fee[](#base-fee)
The base fee can go as low as 1 nAVAX (Gwei) and has no upper bound. You can use the [`eth_baseFee`](/docs/api-reference/c-chain/api#eth_basefee) and [eth\_maxPriorityFeePerGas](/docs/api-reference/c-chain/api#eth_maxpriorityfeepergas) API methods, or [Snowtrace's C-Chain Gas Tracker](https://snowtrace.io/gastracker), to estimate the gas price to use in your transactions.
#### Further Readings[](#further-readings)
* [Adjusting Gas Price During High Network Activity](/docs/dapps/advanced-tutorials/manually-adjust-gas-price)
* [Sending Transactions with Dynamic Fees using JavaScript](/docs/dapps/advanced-tutorials/dynamic-gas-fees)
### Atomic Transaction Fees[](#atomic-transaction-fees)
C-Chain atomic transactions (that is, imports from and exports to other chains) charge dynamic fees based on the amount of gas used by the transaction and the base fee of the block that includes the atomic transaction.
Gas Used:
```
+------------------+-------+
| Item             | Gas   |
+------------------+-------+
| Unsigned Tx Byte | 1     |
+------------------+-------+
| Signature        | 1000  |
+------------------+-------+
| Per Atomic Tx    | 10000 |
+------------------+-------+
```
Therefore, the gas used by an atomic transaction is `1 * len(unsignedTxBytes) + 1,000 * numSignatures + 10,000`
The transaction fee additionally takes the base fee into account. Because atomic transactions use units denominated to 9 decimal places, the base fee must be converted to 9 decimal places before calculating the actual fee paid by the transaction. Therefore, the actual fee is: `gasUsed * baseFee (converted to 9 decimals)`.
## P-Chain Fees[](#p-chain-fees)
The Avalanche P-Chain utilizes a dynamic fee mechanism to optimize transaction costs and network utilization. This system adapts fees based on gas consumption to maintain a target utilization rate.
### Dimensions of Gas Consumption
Gas consumption is measured across four dimensions:
1. **Bandwidth**: The transaction size in bytes.
2. **Reads**: The number of state/database reads.
3. **Writes**: The number of state/database writes.
4. **Compute**: The compute time in microseconds.
The total gas consumed ($G$) by a transaction is:
```math
G = B + 1000R + 1000W + 4C
```
The current fee dimension weight configurations as well as the parameter configurations of the P-Chain can be read at any time with the [`platform.getFeeConfig`](/docs/api-reference/p-chain/api#platformgetfeeconfig) API endpoint.
### Fee Adjustment Mechanism
Fees adjust dynamically based on excess gas consumption, the difference between current gas usage and the target gas rate. The exponential adjustment ensures consistent reactivity regardless of the current gas price. Fee changes scale proportionally with excess gas consumption, maintaining fairness and network stability. The technical specification of this mechanism is documented in [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md#mechanism).
# X-Chain Migration
URL: /docs/api-reference/guides/x-chain-migration
## Overview[](#overview "Direct link to heading")
This document summarizes all of the changes made to the X-Chain API to support Avalanche Cortina (v1.10.0), which migrates the X-Chain to run Snowman++. In summary, the core transaction submission and confirmation flow is unchanged; however, there are new APIs that must be called to index all transactions.
## Transaction Broadcast and Confirmation[](#transaction-broadcast-and-confirmation "Direct link to heading")
The transaction format on the X-Chain does not change in Cortina. This means that wallets that have already integrated with the X-Chain don't need to change how they sign transactions. Additionally, there is no change to the format of the [avm.issueTx](/docs/api-reference/x-chain/api#avmissuetx) or the [avm.getTx](/docs/api-reference/x-chain/api#avmgettx) API.
However, the [avm.getTxStatus](/docs/api-reference/x-chain/api#avmgettxstatus) endpoint is now deprecated and its usage should be replaced with [avm.getTx](/docs/api-reference/x-chain/api#avmgettx) (which only returns accepted transactions for AvalancheGo >= v1.9.12). [avm.getTxStatus](/docs/api-reference/x-chain/api#avmgettxstatus) will still work up to and after the Cortina activation if you wish to migrate after the network upgrade has occurred.
## Vertex -> Block Indexing[](#vertex---block-indexing "Direct link to heading")
Before Cortina, indexing the X-Chain required polling the `/ext/index/X/vtx` endpoint to fetch new vertices. During the Cortina activation, a “stop vertex” will be produced using a [new codec version](https://github.com/ava-labs/avalanchego/blob/c27721a8da1397b218ce9e9ec69839b8a30f9860/snow/engine/avalanche/vertex/codec.go#L17-L18) that will contain no transactions. This new vertex type will be the [same format](https://github.com/ava-labs/avalanchego/blob/c27721a8da1397b218ce9e9ec69839b8a30f9860/snow/engine/avalanche/vertex/stateless_vertex.go#L95-L102) as previous vertices. To ensure historical data can still be accessed in Cortina, the `/ext/index/X/vtx` will remain accessible even though it will no longer be populated with chain data.
The index for the X-chain tx and vtx endpoints will never increase again. The index for the X-chain blocks will increase as new blocks are added.
After Cortina activation, you will need to migrate to the new `/ext/index/X/block` endpoint (which shares the same semantics as [/ext/index/P/block](/docs/api-reference/index-api#p-chain-blocks)) to continue indexing X-Chain activity. Because X-Chain ordering is deterministic in Cortina, X-Chain blocks across all heights will be consistent across all nodes and will include a timestamp. Here is an example of iterating over these blocks in Go:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ava-labs/avalanchego/indexer"
	"github.com/ava-labs/avalanchego/vms/proposervm/block"
	"github.com/ava-labs/avalanchego/wallet/chain/x"
	"github.com/ava-labs/avalanchego/wallet/subnet/primary"
)

func main() {
	var (
		uri       = fmt.Sprintf("%s/ext/index/X/block", primary.LocalAPIURI)
		client    = indexer.NewClient(uri)
		ctx       = context.Background()
		nextIndex uint64
	)
	for {
		log.Printf("polling for next accepted block")
		container, err := client.GetContainerByIndex(ctx, nextIndex)
		if err != nil {
			// No container at this index yet; wait and retry.
			time.Sleep(time.Second)
			continue
		}

		// Each container is a proposervm block wrapping an AVM block.
		proposerVMBlock, err := block.Parse(container.Bytes)
		if err != nil {
			log.Fatalf("failed to parse proposervm block: %s\n", err)
		}

		avmBlockBytes := proposerVMBlock.Block()
		avmBlock, err := x.Parser.ParseBlock(avmBlockBytes)
		if err != nil {
			log.Fatalf("failed to parse avm block: %s\n", err)
		}

		acceptedTxs := avmBlock.Txs()
		log.Printf("accepted block %s with %d transactions", avmBlock.ID(), len(acceptedTxs))
		for _, tx := range acceptedTxs {
			log.Printf("accepted transaction %s", tx.ID())
		}
		nextIndex++
	}
}
```
After Cortina activation, it will also be possible to fetch X-Chain blocks directly without enabling the Index API. You can use the [avm.getBlock](/docs/api-reference/x-chain/api#avmgetblock), [avm.getBlockByHeight](/docs/api-reference/x-chain/api#avmgetblockbyheight), and [avm.getHeight](/docs/api-reference/x-chain/api#avmgetheight) endpoints to do so. This, again, will be similar to the [P-Chain semantics](/docs/api-reference/p-chain/api#platformgetblock).
## Deprecated API Calls[](#deprecated-api-calls "Direct link to heading")
This long-term deprecation effort will better align usage of AvalancheGo with its purpose, to be a minimal and efficient runtime that supports only what is required to validate the Primary Network and Avalanche L1s. Integrators should make plans to migrate to tools and services that are better optimized for serving queries over Avalanche Network state and avoid keeping any keys on the node itself.
This deprecation ONLY applies to APIs that AvalancheGo exposes over the HTTP port. Transaction types with similar names to these APIs are NOT being deprecated.
* ipcs
* ipcs.publishBlockchain
* ipcs.unpublishBlockchain
* ipcs.getPublishedBlockchains
* keystore
* keystore.createUser
* keystore.deleteUser
* keystore.listUsers
* keystore.importUser
* keystore.exportUser
* avm/pubsub
* avm
* avm.getAddressTxs
* avm.getBalance
* avm.getAllBalances
* avm.createAsset
* avm.createFixedCapAsset
* avm.createVariableCapAsset
* avm.createNFTAsset
* avm.createAddress
* avm.listAddresses
* avm.exportKey
* avm.importKey
* avm.mint
* avm.sendNFT
* avm.mintNFT
* avm.import
* avm.export
* avm.send
* avm.sendMultiple
* avm/wallet
* wallet.issueTx
* wallet.send
* wallet.sendMultiple
* platform
* platform.exportKey
* platform.importKey
* platform.getBalance
* platform.createAddress
* platform.listAddresses
* platform.getSubnets
* platform.addValidator
* platform.addDelegator
* platform.addSubnetValidator
* platform.createSubnet
* platform.exportAVAX
* platform.importAVAX
* platform.createBlockchain
* platform.getBlockchains
* platform.getStake
* platform.getMaxStakeAmount
* platform.getRewardUTXOs
## Cortina FAQ[](#cortina-faq "Direct link to heading")
### Do I Have to Upgrade my Node?[](#do-i-have-to-upgrade-my-node "Direct link to heading")
If you don't upgrade your validator to `v1.10.0` before the Avalanche Mainnet activation date, your node will be marked as offline and other nodes will report your node as having lower uptime, which may jeopardize your staking rewards.
### Is There any Change in Hardware Requirements?[](#is-there-any-change-in-hardware-requirements "Direct link to heading")
No.
### Will Updating Decrease my Validator's Uptime?[](#will-updating-decrease-my-validators-uptime "Direct link to heading")
No. As a reminder, you can check your validator's estimated uptime using the [`info.uptime` API call](/docs/api-reference/info-api#infouptime).
### I Think Something Is Wrong. What Should I Do?[](#i-think-something-is-wrong-what-should-i-do "Direct link to heading")
First, make sure that you've read the documentation thoroughly and checked the [FAQs](https://support.avax.network/en/). If you don't see an answer to your question, go to our [Discord](https://discord.com/invite/RwXY7P6) server and search for your question. If it has not already been asked, please post it in the appropriate channel.
# C-Chain API
URL: /docs/api-reference/c-chain/api
This page is an overview of the C-Chain API associated with AvalancheGo.
Ethereum has its own notion of `networkID` and `chainID`. These have no relationship to Avalanche's view of networkID and chainID and are purely internal to the [C-Chain](/docs/quick-start/primary-network#c-chain). On Mainnet, the C-Chain uses `1` and `43114` for these values. On the Fuji Testnet, it uses `1` and `43113` for these values. `networkID` and `chainID` can also be obtained using the `net_version` and `eth_chainId` methods.
## Ethereum APIs
### Endpoints
#### JSON-RPC Endpoints
To interact with C-Chain via the JSON-RPC endpoint:
```sh
/ext/bc/C/rpc
```
To interact with other instances of the EVM via the JSON-RPC endpoint:
```sh
/ext/bc/blockchainID/rpc
```
where `blockchainID` is the ID of the blockchain running the EVM.
#### WebSocket Endpoints
The [public API node](/docs/tooling/rpc-providers) only supports C-Chain
websocket API calls for API methods that don't exist on the C-Chain's HTTP API.
To interact with C-Chain via the websocket endpoint:
```sh
/ext/bc/C/ws
```
For example, to interact with the C-Chain's Ethereum APIs via websocket on localhost, you can use:
```sh
ws://127.0.0.1:9650/ext/bc/C/ws
```
On localhost, use `ws://`. When using the [Public API](/docs/tooling/rpc-providers) or another
host that supports encryption, use `wss://`.
To interact with other instances of the EVM via the websocket endpoint:
```sh
/ext/bc/blockchainID/ws
```
where `blockchainID` is the ID of the blockchain running the EVM.
### Standard Ethereum APIs
Avalanche offers an API interface identical to Geth's API except that it only supports the following
services:
* `web3_`
* `net_`
* `eth_`
* `personal_`
* `txpool_`
* `debug_` (note: this is turned off on the public API node.)
You can interact with these services exactly the same way you'd interact with Geth (see exceptions below). See the
[Ethereum Wiki's JSON-RPC Documentation](https://ethereum.org/en/developers/docs/apis/json-rpc/)
and [Geth's JSON-RPC Documentation](https://geth.ethereum.org/docs/rpc/server)
for a full description of this API.
For batched requests on the [public API node](/docs/tooling/rpc-providers), the maximum
number of items is 40.
#### Exceptions
Starting with release [`v1.12.2`](https://github.com/ava-labs/avalanchego/releases/tag/v1.12.2), `eth_getProof` has a different behavior compared to Geth:
* On archival nodes (nodes with `pruning-enabled` set to `false`), queries for state proofs older than 24 hours preceding the last accepted block are rejected by default. This can be adjusted with `historical-proof-query-window`, which defines the number of blocks before the last accepted block that can be queried for state proofs. Set this option to `0` to accept a state query for any block number.
* On pruning nodes (nodes with `pruning-enabled` set to `true`), queries for state proofs outside the 32 block window after the last accepted block are always rejected.
### Avalanche - Ethereum APIs
In addition to the standard Ethereum APIs, Avalanche offers `eth_baseFee`,
`eth_maxPriorityFeePerGas`, and `eth_getChainConfig`.
They use the same endpoint as standard Ethereum APIs:
```sh
/ext/bc/C/rpc
```
#### `eth_baseFee`
Get the base fee for the next block.
**Signature:**
```sh
eth_baseFee() -> {}
```
`result` is the hex value of the base fee for the next block.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"eth_baseFee",
"params" :[]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": "0x34630b8a00"
}
```
#### `eth_maxPriorityFeePerGas`
Get the priority fee needed to be included in a block.
**Signature:**
```sh
eth_maxPriorityFeePerGas() -> {}
```
`result` is the hex value of the estimated priority fee needed to be included in a block.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"eth_maxPriorityFeePerGas",
"params" :[]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": "0x2540be400"
}
```
For more information on dynamic fees see the [C-Chain section of the transaction fee
documentation](/docs/api-reference/guides/txn-fees#c-chain-fees).
## Admin APIs
The Admin API provides administrative functionality for the EVM.
### Endpoint
```sh
/ext/bc/C/admin
```
### Methods
#### `admin_startCPUProfiler`
Starts a CPU profile that writes to the specified file.
**Signature:**
```sh
admin_startCPUProfiler() -> {}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin_startCPUProfiler",
"params" :[]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin
```
#### `admin_stopCPUProfiler`
Stops the CPU profile.
**Signature:**
```sh
admin_stopCPUProfiler() -> {}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin_stopCPUProfiler",
"params" :[]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin
```
#### `admin_memoryProfile`
Runs a memory profile writing to the specified file.
**Signature:**
```sh
admin_memoryProfile() -> {}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin_memoryProfile",
"params" :[]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin
```
#### `admin_lockProfile`
Runs a mutex profile writing to the specified file.
**Signature:**
```sh
admin_lockProfile() -> {}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin_lockProfile",
"params" :[]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin
```
#### `admin_setLogLevel`
Sets the log level for the EVM.
**Signature:**
```sh
admin_setLogLevel({
level: string
}) -> {}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin_setLogLevel",
"params" :[{
"level": "debug"
}]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin
```
#### `admin_getVMConfig`
Returns the current VM configuration.
**Signature:**
```sh
admin_getVMConfig() -> {
config: {
// VM configuration fields
}
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"admin_getVMConfig",
"params" :[]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin
```
## Avalanche-Specific APIs
### Endpoint
```sh
/ext/bc/C/avax
```
### Methods
#### `avax.getUTXOs`
Gets all UTXOs for the specified addresses.
**Signature:**
```sh
avax.getUTXOs({
addresses: [string],
sourceChain: string,
startIndex: {
address: string,
utxo: string
},
limit: number,
encoding: string
}) -> {
utxos: [string],
endIndex: {
address: string,
utxo: string
},
numFetched: number,
encoding: string
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avax.getUTXOs",
"params" :[{
"addresses": ["X-avax1..."],
"sourceChain": "X",
"limit": 100,
"encoding": "hex"
}]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax
```
#### `avax.issueTx`
Issues a transaction to the network.
**Signature:**
```sh
avax.issueTx({
tx: string,
encoding: string
}) -> {
txID: string
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avax.issueTx",
"params" :[{
"tx": "0x...",
"encoding": "hex"
}]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax
```
#### `avax.getAtomicTxStatus`
Returns the status of the specified atomic transaction.
**Signature:**
```sh
avax.getAtomicTxStatus({
txID: string
}) -> {
status: string,
blockHeight: number (optional)
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avax.getAtomicTxStatus",
"params" :[{
"txID": "2QouvNW..."
}]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax
```
#### `avax.getAtomicTx`
Returns the specified atomic transaction.
**Signature:**
```sh
avax.getAtomicTx({
txID: string,
encoding: string
}) -> {
tx: string,
encoding: string,
blockHeight: number (optional)
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avax.getAtomicTx",
"params" :[{
"txID": "2QouvNW...",
"encoding": "hex"
}]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax
```
#### `avax.version`
Returns the version of the VM.
**Signature:**
```sh
avax.version() -> {
version: string
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avax.version",
"params" :[]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax
```
# Transaction Format
URL: /docs/api-reference/c-chain/txn-format
This page is meant to be the single source of truth for how we serialize atomic
transactions in `Coreth`. This document uses the [primitive serialization](/docs/api-reference/standards/serialization-primitives) format for packing and
[secp256k1](/docs/api-reference/standards/cryptographic-primitives#cryptography-in-the-avalanche-virtual-machine)
for cryptographic user identification.
## Codec ID
Some data is prepended with a codec ID (uint16) that denotes how the data should
be deserialized. Right now, the only valid codec ID is 0 (`0x00 0x00`).
## Inputs
Inputs to Coreth Atomic Transactions are either an `EVMInput` from this chain or
a `TransferableInput` (which contains a `SECP256K1TransferInput`) from another
chain. The `EVMInput` will be used in `ExportTx` to spend funds from this chain,
while the `TransferableInput` will be used to import atomic UTXOs from another
chain.
## EVM Input
Input type that specifies an EVM account to deduct the funds from as part of an `ExportTx`.
### What EVM Input Contains
An EVM Input contains an `address`, `amount`, `assetID`, and `nonce`.
* **`Address`** is the EVM address from which to transfer funds.
* **`Amount`** is the amount of the asset to be transferred (specified in nAVAX
for AVAX and the smallest denomination for all other assets).
* **`AssetID`** is the ID of the asset to transfer.
* **`Nonce`** is the nonce of the EVM account exporting funds.
### Gantt EVM Input Specification
```text
+----------+----------+-------------------------+
| address : [20]byte | 20 bytes |
+----------+----------+-------------------------+
| amount : uint64 | 08 bytes |
+----------+----------+-------------------------+
| asset_id : [32]byte | 32 bytes |
+----------+----------+-------------------------+
| nonce : uint64 | 08 bytes |
+----------+----------+-------------------------+
| 68 bytes |
+-------------------------+
```
### Proto EVM Input Specification
```text
message {
bytes address = 1; // 20 bytes
uint64 amount = 2; // 08 bytes
bytes assetID = 3; // 32 bytes
uint64 nonce = 4; // 08 bytes
}
```
### EVM Input Example
Let's make an EVM Input:
* `Address: 0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc`
* `Amount: 2000000`
* `AssetID: 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db`
* `Nonce: 0`
```text
[
Address <- 0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc,
Amount <- 0x00000000001e8480
AssetID <- 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
Nonce <- 0x0000000000000000
]
=
[
// address:
0x8d, 0xb9, 0x7c, 0x7c, 0xec, 0xe2, 0x49, 0xc2,
0xb9, 0x8b, 0xdc, 0x02, 0x26, 0xcc, 0x4c, 0x2a,
0x57, 0xbf, 0x52, 0xfc,
// amount:
0x00, 0x00, 0x00, 0x00, 0x00, 0x1e, 0x84, 0x80,
// assetID:
0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
// nonce:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]
```
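The fixed-width packing above can be sketched in Python (illustrative only; Coreth itself is written in Go, and `pack_evm_input` is a hypothetical helper, not a Coreth API):

```python
import struct

def pack_evm_input(address: bytes, amount: int, asset_id: bytes, nonce: int) -> bytes:
    # Big-endian, fixed-width packing: 20-byte address, uint64 amount,
    # 32-byte assetID, uint64 nonce -> 68 bytes total.
    assert len(address) == 20 and len(asset_id) == 32
    return address + struct.pack(">Q", amount) + asset_id + struct.pack(">Q", nonce)

address = bytes.fromhex("8db97c7cece249c2b98bdc0226cc4c2a57bf52fc")
asset_id = bytes.fromhex("dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db")
packed = pack_evm_input(address, 2_000_000, asset_id, 0)
assert len(packed) == 68
assert packed[20:28].hex() == "00000000001e8480"  # amount 2000000
```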
## Transferable Input
Transferable Input wraps a `SECP256K1TransferInput`. Transferable inputs
describe a specific UTXO with a provided transfer input.
### What Transferable Input Contains
A transferable input contains a `TxID`, `UTXOIndex`, `AssetID`, and an `Input`.
* **`TxID`** is a 32-byte array that defines which transaction this input is consuming an output from.
* **`UTXOIndex`** is an int that defines which UTXO this input consumes from the specified transaction.
* **`AssetID`** is a 32-byte array that defines which asset this input references.
* **`Input`** is a `SECP256K1TransferInput`, as defined below.
### Gantt Transferable Input Specification
```text
+------------+----------+------------------------+
| tx_id : [32]byte | 32 bytes |
+------------+----------+------------------------+
| utxo_index : int | 04 bytes |
+------------+----------+------------------------+
| asset_id : [32]byte | 32 bytes |
+------------+----------+------------------------+
| input : Input | size(input) bytes |
+------------+----------+------------------------+
| 68 + size(input) bytes |
+------------------------+
```
### Proto Transferable Input Specification
```text
message TransferableInput {
bytes tx_id = 1; // 32 bytes
uint32 utxo_index = 2; // 04 bytes
bytes asset_id = 3; // 32 bytes
Input input = 4; // size(input)
}
```
### Transferable Input Example
Let's make a transferable input:
* `TxID: 0x6613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e59`
* `UTXOIndex: 1`
* `AssetID: 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db`
* `Input: "Example SECP256K1 Transfer Input from below"`
```text
[
TxID <- 0x6613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e59
UTXOIndex <- 0x00000001
AssetID <- 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
Input <- 0x00000005000000746a5288000000000100000000
]
=
[
// txID:
0x66, 0x13, 0xa4, 0x0d, 0xcd, 0xd8, 0xd2, 0x2e,
0xa4, 0xaa, 0x99, 0xa4, 0xc8, 0x43, 0x49, 0x05,
0x63, 0x17, 0xcf, 0x55, 0x0b, 0x66, 0x85, 0xe0,
0x45, 0xe4, 0x59, 0x95, 0x4f, 0x25, 0x8e, 0x59,
// utxoIndex:
0x00, 0x00, 0x00, 0x01,
// assetID:
0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
// input:
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x74,
0x6a, 0x52, 0x88, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00,
]
```
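Concatenating those fields can be sketched as follows (a rough Python illustration with a hypothetical `pack_transferable_input` helper, not Coreth's actual code):

```python
import struct

def pack_transferable_input(tx_id: bytes, utxo_index: int, asset_id: bytes, input_bytes: bytes) -> bytes:
    # 32-byte txID + uint32 utxoIndex + 32-byte assetID + serialized input.
    assert len(tx_id) == 32 and len(asset_id) == 32
    return tx_id + struct.pack(">I", utxo_index) + asset_id + input_bytes

tx_id = bytes.fromhex("6613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e59")
asset_id = bytes.fromhex("dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db")
inp = bytes.fromhex("00000005000000746a5288000000000100000000")  # SECP256K1TransferInput
packed = pack_transferable_input(tx_id, 1, asset_id, inp)
assert len(packed) == 68 + len(inp)
assert packed[32:36].hex() == "00000001"  # utxoIndex
```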
## SECP256K1 Transfer Input
A
[secp256k1](/docs/api-reference/standards/cryptographic-primitives#cryptography-in-the-avalanche-virtual-machine)
transfer input allows for spending an unspent secp256k1 transfer output.
### What SECP256K1 Transfer Input Contains
A secp256k1 transfer input contains an `Amount` and `AddressIndices`.
* **`TypeID`** is the ID for this input type. It is `0x00000005`.
* **`Amount`** is a long that specifies the quantity that this input should be
consuming from the UTXO. Must be positive. Must be equal to the amount
specified in the UTXO.
* **`AddressIndices`** is a list of unique ints that define the private keys
that are being used to spend the UTXO. Each UTXO has an array of addresses
that can spend the UTXO. Each int represents the index in this address array
that will sign this transaction. The array must be sorted low to high.
### Gantt SECP256K1 Transfer Input Specification
```text
+-------------------------+-------------------------------------+
| type_id : int | 4 bytes |
+-----------------+-------+-------------------------------------+
| amount : long | 8 bytes |
+-----------------+-------+-------------------------------------+
| address_indices : []int | 4 + 4 * len(address_indices) bytes |
+-----------------+-------+-------------------------------------+
| 16 + 4 * len(address_indices) bytes |
+-------------------------------------+
```
### Proto SECP256K1 Transfer Input Specification
```text
message SECP256K1TransferInput {
uint32 typeID = 1; // 04 bytes
uint64 amount = 2; // 08 bytes
repeated uint32 address_indices = 3; // 04 bytes + 04 bytes * len(address_indices)
}
```
### SECP256K1 Transfer Input Example
Let's make a secp256k1 transfer input with:
* **`TypeId`**: 5
* **`Amount`**: 500000000000
* **`AddressIndices`**: \[0]
```text
[
TypeID <- 0x00000005
Amount <- 500000000000 = 0x000000746a528800,
AddressIndices <- [0x00000000]
]
=
[
// type id:
0x00, 0x00, 0x00, 0x05,
// amount:
0x00, 0x00, 0x00, 0x74, 0x6a, 0x52, 0x88, 0x00,
// length:
0x00, 0x00, 0x00, 0x01,
// address_indices[0]:
0x00, 0x00, 0x00, 0x00,
]
```
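That serialization can be reproduced with a short Python sketch (the helper name is hypothetical; this is not Coreth's implementation):

```python
import struct

def pack_secp_transfer_input(amount: int, address_indices: list[int]) -> bytes:
    # typeID 0x00000005, uint64 amount, then a uint32-length-prefixed
    # list of uint32 address indices (which must be sorted low to high).
    assert address_indices == sorted(address_indices)
    out = struct.pack(">IQ", 5, amount) + struct.pack(">I", len(address_indices))
    for index in address_indices:
        out += struct.pack(">I", index)
    return out

packed = pack_secp_transfer_input(500_000_000_000, [0])
assert packed.hex() == "00000005000000746a5288000000000100000000"
```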
## Outputs
Outputs to Coreth Atomic Transactions are either an `EVMOutput` to be added to
the balance of an address on this chain or a `TransferableOutput` (which
contains a `SECP256K1TransferOutput`) to be moved to another chain.
The EVM Output will be used in `ImportTx` to add funds to this chain, while the
`TransferableOutput` will be used to export atomic UTXOs to another chain.
## EVM Output
Output type specifying a state change to be applied to an EVM account as part of an `ImportTx`.
### What EVM Output Contains
An EVM Output contains an `address`, `amount`, and `assetID`.
* **`Address`** is the EVM address that will receive the funds.
* **`Amount`** is the amount of the asset to be transferred (specified in nAVAX
for AVAX and the smallest denomination for all other assets).
* **`AssetID`** is the ID of the asset to transfer.
### Gantt EVM Output Specification
```text
+----------+----------+-------------------------+
| address : [20]byte | 20 bytes |
+----------+----------+-------------------------+
| amount : uint64 | 08 bytes |
+----------+----------+-------------------------+
| asset_id : [32]byte | 32 bytes |
+----------+----------+-------------------------+
| 60 bytes |
+-------------------------+
```
### Proto EVM Output Specification
```text
message {
bytes address = 1; // 20 bytes
uint64 amount = 2; // 08 bytes
bytes assetID = 3; // 32 bytes
}
```
### EVM Output Example
Let's make an EVM Output:
* `Address: 0x0eb5ccb85c29009b6060decb353a38ea3b52cd20`
* `Amount: 500000000000`
* `AssetID: 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db`
```text
[
Address <- 0x0eb5ccb85c29009b6060decb353a38ea3b52cd20,
Amount <- 0x000000746a528800
AssetID <- 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
]
=
[
// address:
0x0e, 0xb5, 0xcc, 0xb8, 0x5c, 0x29, 0x00, 0x9b,
0x60, 0x60, 0xde, 0xcb, 0x35, 0x3a, 0x38, 0xea,
0x3b, 0x52, 0xcd, 0x20,
// amount:
0x00, 0x00, 0x00, 0x74, 0x6a, 0x52, 0x88, 0x00,
// assetID:
0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
]
```
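An EVM Output packs the same way as an EVM Input, minus the nonce. A rough Python sketch (hypothetical helper, not Coreth's code):

```python
import struct

def pack_evm_output(address: bytes, amount: int, asset_id: bytes) -> bytes:
    # 20-byte address + uint64 amount + 32-byte assetID = 60 bytes.
    assert len(address) == 20 and len(asset_id) == 32
    return address + struct.pack(">Q", amount) + asset_id

address = bytes.fromhex("0eb5ccb85c29009b6060decb353a38ea3b52cd20")
asset_id = bytes.fromhex("dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db")
out = pack_evm_output(address, 500_000_000_000, asset_id)
assert len(out) == 60
assert out[20:28].hex() == "000000746a528800"  # amount 500000000000
```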
## Transferable Output
Transferable outputs wrap a `SECP256K1TransferOutput` with an asset ID.
### What Transferable Output Contains
A transferable output contains an `AssetID` and an `Output` which is a `SECP256K1TransferOutput`.
* **`AssetID`** is a 32-byte array that defines which asset this output references.
* **`Output`** is a `SECP256K1TransferOutput` as defined below.
### Gantt Transferable Output Specification
```text
+----------+----------+-------------------------+
| asset_id : [32]byte | 32 bytes |
+----------+----------+-------------------------+
| output : Output | size(output) bytes |
+----------+----------+-------------------------+
| 32 + size(output) bytes |
+-------------------------+
```
### Proto Transferable Output Specification
```text
message TransferableOutput {
bytes asset_id = 1; // 32 bytes
Output output = 2; // size(output)
}
```
### Transferable Output Example
Let's make a transferable output:
* `AssetID: 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db`
* `Output: "Example SECP256K1 Transfer Output from below"`
```text
[
AssetID <- 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
Output <- 0x0000000700000000000f42400000000000000000000000010000000166f90db6137a78f76b3693f7f2bc507956dae563,
]
=
[
// assetID:
0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
// output:
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x0f, 0x42, 0x40, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0x66, 0xf9, 0x0d, 0xb6,
0x13, 0x7a, 0x78, 0xf7, 0x6b, 0x36, 0x93, 0xf7,
0xf2, 0xbc, 0x50, 0x79, 0x56, 0xda, 0xe5, 0x63,
]
```
## SECP256K1 Transfer Output
A
[secp256k1](/docs/api-reference/standards/cryptographic-primitives#cryptography-in-the-avalanche-virtual-machine)
transfer output allows for sending a quantity of an asset to a collection of
addresses after a specified Unix time.
### What SECP256K1 Transfer Output Contains
A secp256k1 transfer output contains a `TypeID`, `Amount`, `Locktime`, `Threshold`, and `Addresses`.
* **`TypeID`** is the ID for this output type. It is `0x00000007`.
* **`Amount`** is a long that specifies the quantity of the asset that this output owns. Must be positive.
* **`Locktime`** is a long that contains the Unix timestamp that this output can
be spent after. The Unix timestamp is specific to the second.
* **`Threshold`** is an int that names the number of unique signatures required
to spend the output. Must be less than or equal to the length of
**`Addresses`**. If **`Addresses`** is empty, must be 0.
* **`Addresses`** is a list of unique addresses that correspond to the private
keys that can be used to spend this output. Addresses must be sorted
lexicographically.
### Gantt SECP256K1 Transfer Output Specification
```text
+-----------+------------+--------------------------------+
| type_id : int | 4 bytes |
+-----------+------------+--------------------------------+
| amount : long | 8 bytes |
+-----------+------------+--------------------------------+
| locktime : long | 8 bytes |
+-----------+------------+--------------------------------+
| threshold : int | 4 bytes |
+-----------+------------+--------------------------------+
| addresses : [][20]byte | 4 + 20 * len(addresses) bytes |
+-----------+------------+--------------------------------+
| 28 + 20 * len(addresses) bytes |
+--------------------------------+
```
### Proto SECP256K1 Transfer Output Specification
```text
message SECP256K1TransferOutput {
uint32 typeID = 1; // 04 bytes
uint64 amount = 2; // 08 bytes
uint64 locktime = 3; // 08 bytes
uint32 threshold = 4; // 04 bytes
repeated bytes addresses = 5; // 04 bytes + 20 bytes * len(addresses)
}
```
### SECP256K1 Transfer Output Example
Let's make a secp256k1 transfer output with:
* **`TypeID`**: 7
* **`Amount`**: 1000000
* **`Locktime`**: 0
* **`Threshold`**: 1
* **`Addresses`**:
* 0x66f90db6137a78f76b3693f7f2bc507956dae563
```text
[
TypeID <- 0x00000007
Amount <- 0x00000000000f4240
Locktime <- 0x0000000000000000
Threshold <- 0x00000001
Addresses <- [
0x66f90db6137a78f76b3693f7f2bc507956dae563
]
]
=
[
// typeID:
0x00, 0x00, 0x00, 0x07,
// amount:
0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0x42, 0x40,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x01,
// addrs[0]:
0x66, 0xf9, 0x0d, 0xb6, 0x13, 0x7a, 0x78, 0xf7,
0x6b, 0x36, 0x93, 0xf7, 0xf2, 0xbc, 0x50, 0x79,
0x56, 0xda, 0xe5, 0x63,
]
```
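As a Python sketch of that packing (illustrative only; `pack_secp_transfer_output` is a hypothetical helper):

```python
import struct

def pack_secp_transfer_output(amount: int, locktime: int, threshold: int, addresses: list[bytes]) -> bytes:
    # typeID 0x00000007, uint64 amount, uint64 locktime, uint32 threshold,
    # then a uint32-length-prefixed list of sorted 20-byte addresses.
    out = struct.pack(">IQQI", 7, amount, locktime, threshold)
    out += struct.pack(">I", len(addresses))
    for addr in sorted(addresses):
        assert len(addr) == 20
        out += addr
    return out

addr = bytes.fromhex("66f90db6137a78f76b3693f7f2bc507956dae563")
packed = pack_secp_transfer_output(1_000_000, 0, 1, [addr])
assert packed.hex() == ("0000000700000000000f4240"
                        "000000000000000000000001"
                        "0000000166f90db6137a78f76b3693f7f2bc507956dae563")
```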
## Atomic Transactions
Atomic transactions are used to move funds between chains. There are two types: `ImportTx` and `ExportTx`.
## ExportTx
ExportTx is a transaction to export funds from Coreth to a different chain.
### What ExportTx Contains
An ExportTx contains a `typeID`, `networkID`, `blockchainID`, `destinationChain`, `inputs`, and `exportedOutputs`.
* **`typeID`** is an int that defines the type of an ExportTx. The typeID for an `ExportTx` is 1.
* **`networkID`** is an int that defines which Avalanche network this
transaction is meant to be issued to. This could refer to Mainnet, Fuji, etc.
and is different than the EVM's network ID.
* **`blockchainID`** is a 32-byte array that defines which blockchain this transaction was issued to.
* **`destinationChain`** is a 32-byte array that defines which blockchain this
transaction exports funds to.
* **`inputs`** is an array of EVM Inputs to fund the ExportTx.
* **`exportedOutputs`** is an array of TransferableOutputs to be transferred to `destinationChain`.
### Gantt ExportTx Specification
```text
+---------------------+----------------------+-------------------------------------------------+
| typeID : int | 04 bytes |
+---------------------+----------------------+-------------------------------------------------+
| networkID : int | 04 bytes |
+---------------------+----------------------+-------------------------------------------------+
| blockchainID : [32]byte | 32 bytes |
+---------------------+----------------------+-------------------------------------------------+
| destinationChain : [32]byte | 32 bytes |
+---------------------+----------------------+-------------------------------------------------+
| inputs : []EvmInput | 4 + size(inputs) bytes |
+---------------------+----------------------+-------------------------------------------------+
| exportedOutputs : []TransferableOutput | 4 + size(exportedOutputs) bytes |
+----------+----------+----------------------+-------------------------------------------------+
| 80 + size(inputs) + size(exportedOutputs) bytes |
+-------------------------------------------------+
```
### ExportTx Example
Let's make an ExportTx:
* **`TypeID`**: `1`
* **`NetworkID`**: `12345`
* **`BlockchainID`**: `0x91060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735`
* **`DestinationChain`**: `0xd891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf`
* **`Inputs`**:
* `"Example EVMInput as defined above"`
* **`ExportedOutputs`**:
* `"Example TransferableOutput as defined above"`
```text
[
TypeID <- 0x00000001
NetworkID <- 0x00003039
BlockchainID <- 0x91060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735
DestinationChain <- 0xd891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf
Inputs <- [
0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc00000000001e8480dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db0000000000000000
]
ExportedOutputs <- [
0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2dbdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db0000000700000000000f42400000000000000000000000010000000166f90db6137a78f76b3693f7f2bc507956dae563
]
]
=
[
// typeID:
0x00, 0x00, 0x00, 0x01,
// networkID:
0x00, 0x00, 0x30, 0x39,
// blockchainID:
0x91, 0x06, 0x0e, 0xab, 0xfb, 0x5a, 0x57, 0x17,
0x20, 0x10, 0x9b, 0x58, 0x96, 0xe5, 0xff, 0x00,
0x01, 0x0a, 0x1c, 0xfe, 0x6b, 0x10, 0x3d, 0x58,
0x5e, 0x6e, 0xbf, 0x27, 0xb9, 0x7a, 0x17, 0x35,
// destination_chain:
0xd8, 0x91, 0xad, 0x56, 0x05, 0x6d, 0x9c, 0x01,
0xf1, 0x8f, 0x43, 0xf5, 0x8b, 0x5c, 0x78, 0x4a,
0xd0, 0x7a, 0x4a, 0x49, 0xcf, 0x3d, 0x1f, 0x11,
0x62, 0x38, 0x04, 0xb5, 0xcb, 0xa2, 0xc6, 0xbf,
// inputs[] count:
0x00, 0x00, 0x00, 0x01,
// inputs[0]
0x8d, 0xb9, 0x7c, 0x7c, 0xec, 0xe2, 0x49, 0xc2,
0xb9, 0x8b, 0xdc, 0x02, 0x26, 0xcc, 0x4c, 0x2a,
0x57, 0xbf, 0x52, 0xfc, 0x00, 0x00, 0x00, 0x00,
0x00, 0x1e, 0x84, 0x80, 0xdb, 0xcf, 0x89, 0x0f,
0x77, 0xf4, 0x9b, 0x96, 0x85, 0x76, 0x48, 0xb7,
0x2b, 0x77, 0xf9, 0xf8, 0x29, 0x37, 0xf2, 0x8a,
0x68, 0x70, 0x4a, 0xf0, 0x5d, 0xa0, 0xdc, 0x12,
0xba, 0x53, 0xf2, 0xdb, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
// exportedOutputs[] count
0x00, 0x00, 0x00, 0x01,
// exportedOutputs[0]
0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x0f, 0x42, 0x40, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0x66, 0xf9, 0x0d, 0xb6,
0x13, 0x7a, 0x78, 0xf7, 0x6b, 0x36, 0x93, 0xf7,
0xf2, 0xbc, 0x50, 0x79, 0x56, 0xda, 0xe5, 0x63,
]
```
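The overall ExportTx layout can be sketched by concatenating pre-serialized inputs and outputs with length prefixes (a Python illustration with a hypothetical helper and placeholder byte strings, not Coreth's implementation):

```python
import struct

def pack_export_tx(network_id: int, blockchain_id: bytes, destination_chain: bytes,
                   inputs: list[bytes], exported_outputs: list[bytes]) -> bytes:
    # typeID 1, uint32 networkID, two 32-byte chain IDs, then
    # uint32-length-prefixed arrays of pre-serialized inputs/outputs.
    out = struct.pack(">II", 1, network_id) + blockchain_id + destination_chain
    out += struct.pack(">I", len(inputs)) + b"".join(inputs)
    out += struct.pack(">I", len(exported_outputs)) + b"".join(exported_outputs)
    return out

evm_input = bytes(68)    # placeholder 68-byte EVM Input
xfer_out = bytes(80)     # placeholder 80-byte TransferableOutput
tx = pack_export_tx(12345, bytes(32), bytes(32), [evm_input], [xfer_out])
assert len(tx) == 80 + len(evm_input) + len(xfer_out)  # matches the Gantt total
assert tx[4:8].hex() == "00003039"                     # networkID 12345
```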
## ImportTx
ImportTx is a transaction to import funds to Coreth from another chain.
### What ImportTx Contains
An ImportTx contains a `typeID`, `networkID`, `blockchainID`,
`sourceChain`, `importedInputs`, and `Outs`.
* **`typeID`** is an int that defines the type of an ImportTx. The typeID for an `ImportTx` is 0.
* **`networkID`** is an int that defines which Avalanche network this
transaction is meant to be issued to. This could refer to Mainnet, Fuji, etc.
and is different than the EVM's network ID.
* **`blockchainID`** is a 32-byte array that defines which blockchain this transaction was issued to.
* **`sourceChain`** is a 32-byte array that defines which blockchain from which to import funds.
* **`importedInputs`** is an array of TransferableInputs to fund the ImportTx.
* **`Outs`** is an array of EVM Outputs to be imported to this chain.
### Gantt ImportTx Specification
```text
+---------------------+----------------------+-------------------------------------------------+
| typeID : int | 04 bytes |
+---------------------+----------------------+-------------------------------------------------+
| networkID : int | 04 bytes |
+---------------------+----------------------+-------------------------------------------------+
| blockchainID : [32]byte | 32 bytes |
+---------------------+----------------------+-------------------------------------------------+
| sourceChain : [32]byte | 32 bytes |
+---------------------+----------------------+-------------------------------------------------+
| importedInputs : []TransferableInput | 4 + size(importedInputs) bytes |
+---------------------+----------------------+-------------------------------------------------+
| outs : []EVMOutput | 4 + size(outs) bytes |
+----------+----------+----------------------+-------------------------------------------------+
| 80 + size(importedInputs) + size(outs) bytes |
+-------------------------------------------------+
```
### ImportTx Example
Let's make an ImportTx:
* **`TypeID`**: `0`
* **`NetworkID`**: `12345`
* **`BlockchainID`**: `0x91060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735`
* **`SourceChain`**: `0xd891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf`
* **`ImportedInputs`**:
* `"Example TransferableInput as defined above"`
* **`Outs`**:
* `"Example EVMOutput as defined above"`
```text
[
TypeID <- 0x00000000
NetworkID <- 0x00003039
BlockchainID <- 0x91060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735
SourceChain <- 0xd891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf
ImportedInputs <- [
0x6613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e5900000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000005000000746a5288000000000100000000
]
Outs <- [
0x0eb5ccb85c29009b6060decb353a38ea3b52cd20000000746a528800dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
]
]
=
[
// typeID:
0x00, 0x00, 0x00, 0x00,
// networkID:
0x00, 0x00, 0x30, 0x39,
// blockchainID:
0x91, 0x06, 0x0e, 0xab, 0xfb, 0x5a, 0x57, 0x17,
0x20, 0x10, 0x9b, 0x58, 0x96, 0xe5, 0xff, 0x00,
0x01, 0x0a, 0x1c, 0xfe, 0x6b, 0x10, 0x3d, 0x58,
0x5e, 0x6e, 0xbf, 0x27, 0xb9, 0x7a, 0x17, 0x35,
// sourceChain:
0xd8, 0x91, 0xad, 0x56, 0x05, 0x6d, 0x9c, 0x01,
0xf1, 0x8f, 0x43, 0xf5, 0x8b, 0x5c, 0x78, 0x4a,
0xd0, 0x7a, 0x4a, 0x49, 0xcf, 0x3d, 0x1f, 0x11,
0x62, 0x38, 0x04, 0xb5, 0xcb, 0xa2, 0xc6, 0xbf,
// importedInputs[] count:
0x00, 0x00, 0x00, 0x01,
// importedInputs[0]
0x66, 0x13, 0xa4, 0x0d, 0xcd, 0xd8, 0xd2, 0x2e,
0xa4, 0xaa, 0x99, 0xa4, 0xc8, 0x43, 0x49, 0x05,
0x63, 0x17, 0xcf, 0x55, 0x0b, 0x66, 0x85, 0xe0,
0x45, 0xe4, 0x59, 0x95, 0x4f, 0x25, 0x8e, 0x59,
0x00, 0x00, 0x00, 0x01, 0xdb, 0xcf, 0x89, 0x0f,
0x77, 0xf4, 0x9b, 0x96, 0x85, 0x76, 0x48, 0xb7,
0x2b, 0x77, 0xf9, 0xf8, 0x29, 0x37, 0xf2, 0x8a,
0x68, 0x70, 0x4a, 0xf0, 0x5d, 0xa0, 0xdc, 0x12,
0xba, 0x53, 0xf2, 0xdb, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x00, 0x74, 0x6a, 0x52, 0x88, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
// outs[] count
0x00, 0x00, 0x00, 0x01,
// outs[0]
0x0e, 0xb5, 0xcc, 0xb8, 0x5c, 0x29, 0x00, 0x9b,
0x60, 0x60, 0xde, 0xcb, 0x35, 0x3a, 0x38, 0xea,
0x3b, 0x52, 0xcd, 0x20, 0x00, 0x00, 0x00, 0x74,
0x6a, 0x52, 0x88, 0x00, 0xdb, 0xcf, 0x89, 0x0f,
0x77, 0xf4, 0x9b, 0x96, 0x85, 0x76, 0x48, 0xb7,
0x2b, 0x77, 0xf9, 0xf8, 0x29, 0x37, 0xf2, 0x8a,
0x68, 0x70, 0x4a, 0xf0, 0x5d, 0xa0, 0xdc, 0x12,
0xba, 0x53, 0xf2, 0xdb,
]
```
## Credentials
Credentials have one possible type: `SECP256K1Credential`. Each credential is
paired with an input. The order of the credentials matches the order of the
inputs.
## SECP256K1 Credential
A
[secp256k1](/docs/api-reference/standards/cryptographic-primitives#cryptography-in-the-avalanche-virtual-machine)
credential contains a list of 65-byte recoverable signatures.
### What SECP256K1 Credential Contains
* **`TypeID`** is the ID for this type. It is `0x00000009`.
* **`Signatures`** is an array of 65-byte recoverable signatures. The order of
the signatures must match the input's signature indices.
### Gantt SECP256K1 Credential Specification
```text
+------------------------------+---------------------------------+
| type_id : int | 4 bytes |
+-----------------+------------+---------------------------------+
| signatures : [][65]byte | 4 + 65 * len(signatures) bytes |
+-----------------+------------+---------------------------------+
| 8 + 65 * len(signatures) bytes |
+---------------------------------+
```
### Proto SECP256K1 Credential Specification
```text
message SECP256K1Credential {
uint32 typeID = 1; // 4 bytes
repeated bytes signatures = 2; // 4 bytes + 65 bytes * len(signatures)
}
```
### SECP256K1 Credential Example
Let's make a secp256k1 credential with:
* **`TypeID`**: 9
* **`signatures`**:
* `0x0acccf47a820549a84428440e2421975138790e41be262f7197f3d93faa26cc8741060d743ffaf025782c8c86b862d2b9febebe7d352f0b4591afbd1a737f8a300`
```text
[
TypeID <- 0x00000009
Signatures <- [
0x0acccf47a820549a84428440e2421975138790e41be262f7197f3d93faa26cc8741060d743ffaf025782c8c86b862d2b9febebe7d352f0b4591afbd1a737f8a300,
]
]
=
[
// Type ID
0x00, 0x00, 0x00, 0x09,
// length:
0x00, 0x00, 0x00, 0x01,
// sig[0]
0x0a, 0xcc, 0xcf, 0x47, 0xa8, 0x20, 0x54, 0x9a,
0x84, 0x42, 0x84, 0x40, 0xe2, 0x42, 0x19, 0x75,
0x13, 0x87, 0x90, 0xe4, 0x1b, 0xe2, 0x62, 0xf7,
0x19, 0x7f, 0x3d, 0x93, 0xfa, 0xa2, 0x6c, 0xc8,
0x74, 0x10, 0x60, 0xd7, 0x43, 0xff, 0xaf, 0x02,
0x57, 0x82, 0xc8, 0xc8, 0x6b, 0x86, 0x2d, 0x2b,
0x9f, 0xeb, 0xeb, 0xe7, 0xd3, 0x52, 0xf0, 0xb4,
0x59, 0x1a, 0xfb, 0xd1, 0xa7, 0x37, 0xf8, 0xa3,
0x00,
]
```
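Packing a credential can be sketched as follows (illustrative Python with a hypothetical helper; not Coreth's code):

```python
import struct

def pack_secp_credential(signatures: list[bytes]) -> bytes:
    # typeID 0x00000009 followed by a uint32-length-prefixed array of
    # 65-byte recoverable signatures.
    for sig in signatures:
        assert len(sig) == 65
    return struct.pack(">II", 9, len(signatures)) + b"".join(signatures)

sig = bytes.fromhex(
    "0acccf47a820549a84428440e2421975138790e41be262f7197f3d93faa26cc8"
    "741060d743ffaf025782c8c86b862d2b9febebe7d352f0b4591afbd1a737f8a3"
    "00"
)
packed = pack_secp_credential([sig])
assert len(packed) == 4 + 4 + 65
assert packed[:8].hex() == "0000000900000001"  # typeID 9, one signature
```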
## Signed Transaction
A signed transaction contains an unsigned `AtomicTx` and credentials.
### What Signed Transaction Contains
A signed transaction contains a `CodecID`, `AtomicTx`, and `Credentials`.
* **`CodecID`** is the codec ID. The only currently valid codec ID is `00 00`.
* **`AtomicTx`** is an atomic transaction, as described above.
* **`Credentials`** is an array of credentials. Each credential corresponds to
the input at the same index in the AtomicTx.
### Gantt Signed Transaction Specification
```text
+---------------------+--------------+------------------------------------------------+
| codec_id : uint16 | 2 bytes |
+---------------------+--------------+------------------------------------------------+
| atomic_tx : AtomicTx | size(atomic_tx) bytes |
+---------------------+--------------+------------------------------------------------+
| credentials : []Credential | 4 + size(credentials) bytes |
+---------------------+--------------+------------------------------------------------+
| 6 + size(atomic_tx) + size(credentials) bytes |
+------------------------------------------------+
```
### Proto Signed Transaction Specification
```text
message Tx {
uint16 codec_id = 1; // 2 bytes
AtomicTx atomic_tx = 2; // size(atomic_tx)
repeated Credential credentials = 3; // 4 bytes + size(credentials)
}
```
### Signed Transaction Example
Let's make a signed transaction that uses the unsigned transaction and credential from the previous examples.
* **`CodecID`**: `0`
* **`UnsignedTx`**: `0x000000000000303991060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf000000016613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e5900000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000005000000746a5288000000000100000000000000010eb5ccb85c29009b6060decb353a38ea3b52cd20000000746a528800dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db`
* **`Credentials`**
`0x00000009000000010acccf47a820549a84428440e2421975138790e41be262f7197f3d93faa26cc8741060d743ffaf025782c8c86b862d2b9febebe7d352f0b4591afbd1a737f8a300`
```text
[
CodecID <- 0x0000
UnsignedAtomicTx <- 0x000000000000303991060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf000000016613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e5900000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000005000000746a5288000000000100000000000000010eb5ccb85c29009b6060decb353a38ea3b52cd20000000746a528800dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
Credentials <- [
0x00000009000000010acccf47a820549a84428440e2421975138790e41be262f7197f3d93faa26cc8741060d743ffaf025782c8c86b862d2b9febebe7d352f0b4591afbd1a737f8a300,
]
]
=
[
// Codec ID
0x00, 0x00,
// unsigned atomic transaction:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39,
0x91, 0x06, 0x0e, 0xab, 0xfb, 0x5a, 0x57, 0x17,
0x20, 0x10, 0x9b, 0x58, 0x96, 0xe5, 0xff, 0x00,
0x01, 0x0a, 0x1c, 0xfe, 0x6b, 0x10, 0x3d, 0x58,
0x5e, 0x6e, 0xbf, 0x27, 0xb9, 0x7a, 0x17, 0x35,
0xd8, 0x91, 0xad, 0x56, 0x05, 0x6d, 0x9c, 0x01,
0xf1, 0x8f, 0x43, 0xf5, 0x8b, 0x5c, 0x78, 0x4a,
0xd0, 0x7a, 0x4a, 0x49, 0xcf, 0x3d, 0x1f, 0x11,
0x62, 0x38, 0x04, 0xb5, 0xcb, 0xa2, 0xc6, 0xbf,
0x00, 0x00, 0x00, 0x01, 0x66, 0x13, 0xa4, 0x0d,
0xcd, 0xd8, 0xd2, 0x2e, 0xa4, 0xaa, 0x99, 0xa4,
0xc8, 0x43, 0x49, 0x05, 0x63, 0x17, 0xcf, 0x55,
0x0b, 0x66, 0x85, 0xe0, 0x45, 0xe4, 0x59, 0x95,
0x4f, 0x25, 0x8e, 0x59, 0x00, 0x00, 0x00, 0x01,
0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x74,
0x6a, 0x52, 0x88, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x0e, 0xb5, 0xcc, 0xb8, 0x5c, 0x29, 0x00, 0x9b,
0x60, 0x60, 0xde, 0xcb, 0x35, 0x3a, 0x38, 0xea,
0x3b, 0x52, 0xcd, 0x20, 0x00, 0x00, 0x00, 0x74,
0x6a, 0x52, 0x88, 0x00, 0xdb, 0xcf, 0x89, 0x0f,
0x77, 0xf4, 0x9b, 0x96, 0x85, 0x76, 0x48, 0xb7,
0x2b, 0x77, 0xf9, 0xf8, 0x29, 0x37, 0xf2, 0x8a,
0x68, 0x70, 0x4a, 0xf0, 0x5d, 0xa0, 0xdc, 0x12,
0xba, 0x53, 0xf2, 0xdb,
// number of credentials:
0x00, 0x00, 0x00, 0x01,
// credential[0]:
0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x01,
0x0a, 0xcc, 0xcf, 0x47, 0xa8, 0x20, 0x54, 0x9a,
0x84, 0x42, 0x84, 0x40, 0xe2, 0x42, 0x19, 0x75,
0x13, 0x87, 0x90, 0xe4, 0x1b, 0xe2, 0x62, 0xf7,
0x19, 0x7f, 0x3d, 0x93, 0xfa, 0xa2, 0x6c, 0xc8,
0x74, 0x10, 0x60, 0xd7, 0x43, 0xff, 0xaf, 0x02,
0x57, 0x82, 0xc8, 0xc8, 0x6b, 0x86, 0x2d, 0x2b,
0x9f, 0xeb, 0xeb, 0xe7, 0xd3, 0x52, 0xf0, 0xb4,
0x59, 0x1a, 0xfb, 0xd1, 0xa7, 0x37, 0xf8, 0xa3,
0x00,
]
```
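Assembling the signed wrapper can be sketched as follows (illustrative Python with a hypothetical helper and placeholder byte strings; not Coreth's code):

```python
import struct

def pack_signed_tx(codec_id: int, atomic_tx: bytes, credentials: list[bytes]) -> bytes:
    # uint16 codec ID, the serialized atomic tx, then a uint32-length-prefixed
    # array of serialized credentials.
    out = struct.pack(">H", codec_id) + atomic_tx
    out += struct.pack(">I", len(credentials)) + b"".join(credentials)
    return out

atomic_tx = bytes(100)    # placeholder unsigned atomic tx bytes
credential = bytes(73)    # placeholder 73-byte SECP256K1 credential
signed = pack_signed_tx(0, atomic_tx, [credential])
assert signed[:2] == b"\x00\x00"  # codec ID 0
assert len(signed) == 6 + len(atomic_tx) + len(credential)  # matches the Gantt total
```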
## UTXO
A UTXO is a standalone representation of a transaction output.
### What UTXO Contains
A UTXO contains a `CodecID`, `TxID`, `UTXOIndex`, `AssetID`, and `Output`.
* **`CodecID`** is the codec ID. The only valid `CodecID` is `00 00`.
* **`TxID`** is a 32-byte transaction ID. Transaction IDs are calculated by
taking sha256 of the bytes of the signed transaction.
* **`UTXOIndex`** is an int that specifies which output in the transaction
specified by **`TxID`** created this UTXO.
* **`AssetID`** is a 32-byte array that defines which asset this utxo references.
* **`Output`** is the output object that created this utxo. The serialization of
Outputs was defined above.
### Gantt UTXO Specification
```text
+--------------+----------+-------------------------+
| codec_id : uint16 | 2 bytes |
+--------------+----------+-------------------------+
| tx_id : [32]byte | 32 bytes |
+--------------+----------+-------------------------+
| output_index : int | 4 bytes |
+--------------+----------+-------------------------+
| asset_id : [32]byte | 32 bytes |
+--------------+----------+-------------------------+
| output : Output | size(output) bytes |
+--------------+----------+-------------------------+
| 70 + size(output) bytes |
+-------------------------+
```
### Proto UTXO Specification
```text
message Utxo {
uint16 codec_id = 1; // 02 bytes
bytes tx_id = 2; // 32 bytes
uint32 output_index = 3; // 04 bytes
bytes asset_id = 4; // 32 bytes
Output output = 5; // size(output)
}
```
### UTXO Example
Let's make a UTXO from the signed transaction created above:
* **`CodecID`**: `0`
* **`TxID`**: `0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7`
* **`UTXOIndex`**: `0`
* **`AssetID`**: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f`
* **`Output`**: a `SECP256K1TransferOutput`
```text
[
CodecID <- 0x0000
TxID <- 0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7
UTXOIndex <- 0x00000000
AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
Output <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859
]
=
[
// Codec ID:
0x00, 0x00,
// txID:
0xf9, 0x66, 0x75, 0x0f, 0x43, 0x88, 0x67, 0xc3,
0xc9, 0x82, 0x8d, 0xdc, 0xdb, 0xe6, 0x60, 0xe2,
0x1c, 0xcd, 0xbb, 0x36, 0xa9, 0x27, 0x69, 0x58,
0xf0, 0x11, 0xba, 0x47, 0x2f, 0x75, 0xd4, 0xe7,
// utxo index:
0x00, 0x00, 0x00, 0x00,
// assetID:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// output:
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
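The UTXO layout, and the rule that a TxID is the sha256 of the signed transaction's bytes, can be sketched together (illustrative Python with a hypothetical helper and placeholder bytes; not Coreth's code):

```python
import hashlib
import struct

def pack_utxo(tx_id: bytes, output_index: int, asset_id: bytes, output: bytes) -> bytes:
    # uint16 codec ID (0), 32-byte txID, uint32 output index,
    # 32-byte assetID, then the serialized output.
    assert len(tx_id) == 32 and len(asset_id) == 32
    return struct.pack(">H", 0) + tx_id + struct.pack(">I", output_index) + asset_id + output

# The producing transaction's ID is the sha256 of its signed bytes.
tx_id = hashlib.sha256(b"placeholder signed transaction bytes").digest()
utxo = pack_utxo(tx_id, 0, bytes(32), bytes(48))  # placeholder assetID and output
assert len(utxo) == 70 + 48  # matches the Gantt total: 70 + size(output)
```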
# Avalanche Network Protocol
URL: /docs/api-reference/standards/avalanche-network-protocol
## Overview
Avalanche network defines the core communication format between Avalanche nodes. It uses the [primitive serialization](/docs/api-reference/standards/serialization-primitives) format for payload packing.
`"Containers"` are mentioned extensively in this description. A container is simply a generic term for a block.
This document describes the protocol for peer-to-peer communication using Protocol Buffers (proto3). The protocol defines a set of messages exchanged between peers in a peer-to-peer network. Each message is represented by the `Message` proto message, which can encapsulate various types of messages, including network messages, state-sync messages, bootstrapping messages, consensus messages, and application messages.
## Message[](#message "Direct link to heading")
The `Message` proto message is the main container for all peer-to-peer communication. It uses the `oneof` construct to represent different message types. The supported compression algorithms include Gzip and Zstd.
```
message Message {
oneof message {
bytes compressed_gzip = 1;
bytes compressed_zstd = 2;
// ... (other compression algorithms can be added)
Ping ping = 11;
Pong pong = 12;
Version version = 13;
PeerList peer_list = 14;
// ... (other message types)
}
}
```
### Compression[](#compression "Direct link to heading")
The `compressed_gzip` and `compressed_zstd` fields are used for Gzip and Zstd compression, respectively, of the encapsulated message. These fields are set only if the message type supports compression.
## Network Messages[](#network-messages "Direct link to heading")
### Ping[](#ping "Direct link to heading")
The `Ping` message reports a peer's perceived uptime percentage.
```
message Ping {
uint32 uptime = 1;
repeated SubnetUptime subnet_uptimes = 2;
}
```
* `uptime`: Uptime percentage on the primary network \[0, 100].
* `subnet_uptimes`: Uptime percentages on Avalanche L1s.
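To make the wire format concrete, here is a hand-rolled Python sketch (not AvalancheGo code) of how proto3 encodes a `Ping` with `uptime = 100` inside the outer `Message`: the `uptime` field is a varint (field 1), and the whole `Ping` is embedded as length-delimited field 11.

```python
def varint(n: int) -> bytes:
    """Encode an unsigned integer as a protobuf varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def tag(field: int, wire_type: int) -> bytes:
    """Protobuf field key: (field_number << 3) | wire_type."""
    return varint((field << 3) | wire_type)

# Ping { uptime = 100 }: field 1, varint (wire type 0)
ping = tag(1, 0) + varint(100)
# Message { ping = ... }: field 11, length-delimited (wire type 2)
message = tag(11, 2) + varint(len(ping)) + ping

assert ping == b"\x08\x64"
assert message == b"\x5a\x02\x08\x64"
```

In practice these messages are produced by generated protobuf code rather than by hand, but the four bytes above are exactly what travels on the wire for this message.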
### Pong[](#pong "Direct link to heading")
The `Pong` message is sent in response to a `Ping` with the perceived uptime of the peer.
```
message Pong {
uint32 uptime = 1; // Deprecated: uptime is now sent in Ping
repeated SubnetUptime subnet_uptimes = 2; // Deprecated: uptime is now sent in Ping
}
```
### Version[](#version "Direct link to heading")
The `Version` message is the first outbound message sent to a peer during the p2p handshake.
```
message Version {
uint32 network_id = 1;
uint64 my_time = 2;
bytes ip_addr = 3;
uint32 ip_port = 4;
string my_version = 5;
uint64 my_version_time = 6;
bytes sig = 7;
repeated bytes tracked_subnets = 8;
}
```
* `network_id`: Network identifier (e.g., local, testnet, Mainnet).
* `my_time`: Unix timestamp when the `Version` message was created.
* `ip_addr`: IP address of the peer.
* `ip_port`: IP port of the peer.
* `my_version`: Avalanche client version.
* `my_version_time`: Unix timestamp at which the peer's IP and port were signed.
* `sig`: Signature of the peer IP port pair at a provided timestamp.
* `tracked_subnets`: Avalanche L1s the peer is tracking.
### PeerList[](#peerlist "Direct link to heading")
The `PeerList` message contains network-level metadata for a set of validators.
```
message PeerList {
repeated ClaimedIpPort claimed_ip_ports = 1;
}
```
* `claimed_ip_ports`: List of claimed IP and port pairs.
### PeerListAck[](#peerlistack "Direct link to heading")
The `PeerListAck` message is sent in response to `PeerList` to acknowledge the subset of peers that the peer will attempt to connect to.
```
message PeerListAck {
reserved 1; // deprecated; used to be tx_ids
repeated PeerAck peer_acks = 2;
}
```
* `peer_acks`: List of acknowledged peers.
## State-Sync Messages[](#state-sync-messages "Direct link to heading")
### GetStateSummaryFrontier[](#getstatesummaryfrontier "Direct link to heading")
The `GetStateSummaryFrontier` message requests a peer's most recently accepted state summary.
```
message GetStateSummaryFrontier {
bytes chain_id = 1;
uint32 request_id = 2;
uint64 deadline = 3;
}
```
* `chain_id`: Chain being requested from.
* `request_id`: Unique identifier for this request.
* `deadline`: Timeout (ns) for this request.
### StateSummaryFrontier[](#statesummaryfrontier "Direct link to heading")
The `StateSummaryFrontier` message is sent in response to a `GetStateSummaryFrontier` request.
```
message StateSummaryFrontier {
bytes chain_id = 1;
uint32 request_id = 2;
bytes summary = 3;
}
```
* `chain_id`: Chain being responded from.
* `request_id`: Request ID of the original `GetStateSummaryFrontier` request.
* `summary`: The requested state summary.
### GetAcceptedStateSummary[](#getacceptedstatesummary "Direct link to heading")
The `GetAcceptedStateSummary` message requests a set of state summaries at specified block heights.
```
message GetAcceptedStateSummary {
bytes chain_id = 1;
uint32 request_id = 2;
uint64 deadline = 3;
repeated uint64 heights = 4;
}
```
* `chain_id`: Chain being requested from.
* `request_id`: Unique identifier for this request.
* `deadline`: Timeout (ns) for this request.
* `heights`: Heights being requested.
### AcceptedStateSummary[](#acceptedstatesummary "Direct link to heading")
The `AcceptedStateSummary` message is sent in response to `GetAcceptedStateSummary`.
```
message AcceptedStateSummary {
bytes chain_id = 1;
uint32 request_id = 2;
repeated bytes summary_ids = 3;
}
```
* `chain_id`: Chain being responded from.
* `request_id`: Request ID of the original `GetAcceptedStateSummary` request.
* `summary_ids`: State summary IDs.
## Bootstrapping Messages[](#bootstrapping-messages "Direct link to heading")
### GetAcceptedFrontier[](#getacceptedfrontier "Direct link to heading")
The `GetAcceptedFrontier` message requests the accepted frontier from a peer.
```
message GetAcceptedFrontier {
bytes chain_id = 1;
uint32 request_id = 2;
uint64 deadline = 3;
EngineType engine_type = 4;
}
```
* `chain_id`: Chain being requested from.
* `request_id`: Unique identifier for this request.
* `deadline`: Timeout (ns) for this request.
* `engine_type`: Consensus type the remote peer should use to handle this message.
### AcceptedFrontier[](#acceptedfrontier "Direct link to heading")
The `AcceptedFrontier` message contains the remote peer's last accepted frontier.
```
message AcceptedFrontier {
reserved 4; // Until Cortina upgrade is activated
bytes chain_id = 1;
uint32 request_id = 2;
bytes container_id = 3;
}
```
* `chain_id`: Chain being responded from.
* `request_id`: Request ID of the original `GetAcceptedFrontier` request.
* `container_id`: The ID of the last accepted frontier.
### GetAccepted[](#getaccepted "Direct link to heading")
The `GetAccepted` message sends a request with the sender's accepted frontier to a remote peer.
```
message GetAccepted {
bytes chain_id = 1;
uint32 request_id = 2;
uint64 deadline = 3;
repeated bytes container_ids = 4;
EngineType engine_type = 5;
}
```
* `chain_id`: Chain being requested from.
* `request_id`: Unique identifier for this message.
* `deadline`: Timeout (ns) for this request.
* `container_ids`: The sender's accepted frontier.
* `engine_type`: Consensus type to handle this message.
### Accepted[](#accepted "Direct link to heading")
The `Accepted` message is sent in response to `GetAccepted`.
```
message Accepted {
reserved 4; // Until Cortina upgrade is activated
bytes chain_id = 1;
uint32 request_id = 2;
repeated bytes container_ids = 3;
}
```
* `chain_id`: Chain being responded from.
* `request_id`: Request ID of the original `GetAccepted` request.
* `container_ids`: Subset of container IDs from the `GetAccepted` request that the sender has accepted.
### GetAncestors[](#getancestors "Direct link to heading")
The `GetAncestors` message requests the ancestors for a given container.
```
message GetAncestors {
bytes chain_id = 1;
uint32 request_id = 2;
uint64 deadline = 3;
bytes container_id = 4;
EngineType engine_type = 5;
}
```
* `chain_id`: Chain being requested from.
* `request_id`: Unique identifier for this request.
* `deadline`: Timeout (ns) for this request.
* `container_id`: Container for which ancestors are being requested.
* `engine_type`: Consensus type to handle this message.
### Ancestors[](#ancestors "Direct link to heading")
The `Ancestors` message is sent in response to `GetAncestors`.
```
message Ancestors {
reserved 4; // Until Cortina upgrade is activated
bytes chain_id = 1;
uint32 request_id = 2;
repeated bytes containers = 3;
}
```
* `chain_id`: Chain being responded from.
* `request_id`: Request ID of the original `GetAncestors` request.
* `containers`: Ancestry for the requested container.
## Consensus Messages[](#consensus-messages "Direct link to heading")
### Get[](#get "Direct link to heading")
The `Get` message requests a container from a remote peer.
```
message Get {
bytes chain_id = 1;
uint32 request_id = 2;
uint64 deadline = 3;
bytes container_id = 4;
EngineType engine_type = 5;
}
```
* `chain_id`: Chain being requested from.
* `request_id`: Unique identifier for this request.
* `deadline`: Timeout (ns) for this request.
* `container_id`: Container being requested.
* `engine_type`: Consensus type to handle this message.
### Put[](#put "Direct link to heading")
The `Put` message is sent in response to `Get` with the requested block.
```
message Put {
bytes chain_id = 1;
uint32 request_id = 2;
bytes container = 3;
EngineType engine_type = 4;
}
```
* `chain_id`: Chain being responded from.
* `request_id`: Request ID of the original `Get` request.
* `container`: Requested container.
* `engine_type`: Consensus type to handle this message.
### PushQuery[](#pushquery "Direct link to heading")
The `PushQuery` message requests the preferences of a remote peer given a container.
```
message PushQuery {
bytes chain_id = 1;
uint32 request_id = 2;
uint64 deadline = 3;
bytes container = 4;
EngineType engine_type = 5;
uint64 requested_height = 6;
}
```
* `chain_id`: Chain being requested from.
* `request_id`: Unique identifier for this request.
* `deadline`: Timeout (ns) for this request.
* `container`: Container being gossiped.
* `engine_type`: Consensus type to handle this message.
* `requested_height`: Requesting peer's last accepted height.
### PullQuery[](#pullquery "Direct link to heading")
The `PullQuery` message requests the preferences of a remote peer given a container ID.
```
message PullQuery {
bytes chain_id = 1;
uint32 request_id = 2;
uint64 deadline = 3;
bytes container_id = 4;
EngineType engine_type = 5;
uint64 requested_height = 6;
}
```
* `chain_id`: Chain being requested from.
* `request_id`: Unique identifier for this request.
* `deadline`: Timeout (ns) for this request.
* `container_id`: Container ID being gossiped.
* `engine_type`: Consensus type to handle this message.
* `requested_height`: Requesting peer's last accepted height.
### Chits[](#chits "Direct link to heading")
The `Chits` message contains the preferences of a peer in response to a `PushQuery` or `PullQuery` message.
```
message Chits {
bytes chain_id = 1;
uint32 request_id = 2;
bytes preferred_id = 3;
bytes accepted_id = 4;
bytes preferred_id_at_height = 5;
}
```
* `chain_id`: Chain being responded from.
* `request_id`: Request ID of the original `PushQuery`/`PullQuery` request.
* `preferred_id`: Currently preferred block.
* `accepted_id`: Last accepted block.
* `preferred_id_at_height`: Currently preferred block at the requested height.
## Application Messages[](#application-messages "Direct link to heading")
### AppRequest[](#apprequest "Direct link to heading")
The `AppRequest` message is a VM-defined request.
```
message AppRequest {
bytes chain_id = 1;
uint32 request_id = 2;
uint64 deadline = 3;
bytes app_bytes = 4;
}
```
* `chain_id`: Chain being requested from.
* `request_id`: Unique identifier for this request.
* `deadline`: Timeout (ns) for this request.
* `app_bytes`: Request body.
### AppResponse[](#appresponse "Direct link to heading")
The `AppResponse` message is a VM-defined response sent in response to `AppRequest`.
```
message AppResponse {
bytes chain_id = 1;
uint32 request_id = 2;
bytes app_bytes = 3;
}
```
* `chain_id`: Chain being responded from.
* `request_id`: Request ID of the original `AppRequest`.
* `app_bytes`: Response body.
### AppGossip[](#appgossip "Direct link to heading")
The `AppGossip` message is a VM-defined message.
```
message AppGossip {
bytes chain_id = 1;
bytes app_bytes = 2;
}
```
* `chain_id`: Chain the message is for.
* `app_bytes`: Message body.
# Cryptographic Primitives
URL: /docs/api-reference/standards/cryptographic-primitives
Avalanche uses a variety of cryptographic primitives for its different functions. This file summarizes the type and kind of cryptography used at the network and blockchain layers.
## Cryptography in the Network Layer
Avalanche uses Transport Layer Security, TLS, to protect node-to-node communications from eavesdroppers. TLS combines the practicality of public-key cryptography with the efficiency of symmetric-key cryptography. This has resulted in TLS becoming the standard for internet communication. Whereas most classical consensus protocols employ public-key cryptography to prove receipt of messages to third parties, the novel Snow\* consensus family does not require such proofs. This enables Avalanche to employ TLS in authenticating stakers and eliminates the need for costly public-key cryptography for signing network messages.
### TLS Certificates
Avalanche does not rely on any centralized third parties; in particular, it does not use certificates issued by third-party authenticators. All certificates used within the network layer to identify endpoints are self-signed, creating a self-sovereign identity layer. No third parties are ever involved.
### TLS Addresses
To avoid posting the full TLS certificate to the P-Chain, the certificate is first hashed. For consistency, Avalanche employs the same hashing mechanism for the TLS certificates as is used in Bitcoin. Namely, the DER representation of the certificate is hashed with sha256, and the result is then hashed with ripemd160 to yield a 20-byte identifier for stakers.
This 20-byte identifier is represented by "NodeID-" followed by the data's [CB58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) encoded string.
## Cryptography in the Avalanche Virtual Machine
The Avalanche virtual machine uses elliptic curve cryptography, specifically `secp256k1`, for its signatures on the blockchain.
The 32-byte private key is represented by "PrivateKey-" followed by the data's [CB58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) encoded string.
### Secp256k1 Addresses
Avalanche is not prescriptive about addressing schemes, choosing to instead leave addressing up to each blockchain.
The addressing scheme of the X-Chain and the P-Chain relies on secp256k1. Avalanche follows a similar approach as Bitcoin and hashes the ECDSA public key. The 33-byte compressed representation of the public key is hashed with sha256 **once**. The result is then hashed with ripemd160 to yield a 20-byte address.
Avalanche uses the convention `chainID-address` to specify which chain an address exists on. `chainID` may be replaced with an alias of the chain. When transmitting information through external applications, the CB58 convention is required.
### Bech32
Addresses on the X-Chain and P-Chain use the [Bech32](http://support.avalabs.org/en/articles/4587392-what-is-bech32) standard outlined in [BIP 0173](https://en.bitcoin.it/wiki/BIP_0173). There are four parts to a Bech32 address scheme. In order of appearance:
* A human-readable part (HRP). On Mainnet this is `avax`.
* The number `1`, which separates the HRP from the address and error correction code.
* A base-32 encoded string representing the 20 byte address.
* A 6-character base-32 encoded error correction code.
Additionally, an Avalanche address is prefixed with the alias of the chain it exists on, followed by a dash. For example, X-Chain addresses are prefixed with `X-`.
The following regular expression matches addresses on the X-Chain, P-Chain and C-Chain for Mainnet, Fuji and localhost. Note that all valid Avalanche addresses will match this regular expression, but some strings that are not valid Avalanche addresses may match this regular expression.
```
^([XPC]|[a-km-zA-HJ-NP-Z1-9]{36,72})-[a-zA-Z]{1,83}1[qpzry9x8gf2tvdw0s3jn54khce6mua7l]{38}$
```
Read more about Avalanche's [addressing scheme](https://support.avalabs.org/en/articles/4596397-what-is-an-address).
For example, the following Bech32 address, `X-avax19rknw8l0grnfunjrzwxlxync6zrlu33y2jxhrg`, is composed like so:
1. HRP: `avax`
2. Separator: `1`
3. Address: `9rknw8l0grnfunjrzwxlxync6zrlu33y`
4. Checksum: `2jxhrg`
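The decomposition above can be checked mechanically. This Python sketch applies the regular expression from this section to the example address and splits out the parts; note that Bech32 splits the HRP from the data at the *last* `1`, which is unambiguous because the data charset excludes `1`.

```python
import re

# Regular expression from this section (X-, P-, C-Chain addresses)
ADDRESS_RE = re.compile(
    r"^([XPC]|[a-km-zA-HJ-NP-Z1-9]{36,72})-"
    r"[a-zA-Z]{1,83}1[qpzry9x8gf2tvdw0s3jn54khce6mua7l]{38}$")

addr = "X-avax19rknw8l0grnfunjrzwxlxync6zrlu33y2jxhrg"
assert ADDRESS_RE.match(addr) is not None

# Chain alias, then HRP / data split at the last '1' separator
chain, rest = addr.split("-", 1)
hrp, data = rest.rsplit("1", 1)
assert (chain, hrp) == ("X", "avax")
assert data[:-6] == "9rknw8l0grnfunjrzwxlxync6zrlu33y"  # 32-char address
assert data[-6:] == "2jxhrg"                            # 6-char checksum
```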
Depending on the `networkID`, encoded addresses have a distinct HRP for each network.
* 0 - X-`custom`19rknw8l0grnfunjrzwxlxync6zrlu33yeg5dya
* 1 - X-`avax`19rknw8l0grnfunjrzwxlxync6zrlu33y2jxhrg
* 2 - X-`cascade`19rknw8l0grnfunjrzwxlxync6zrlu33ypmtvnh
* 3 - X-`denali`19rknw8l0grnfunjrzwxlxync6zrlu33yhc357h
* 4 - X-`everest`19rknw8l0grnfunjrzwxlxync6zrlu33yn44wty
* 5 - X-`fuji`19rknw8l0grnfunjrzwxlxync6zrlu33yxqzg0h
* 1337 - X-`custom`19rknw8l0grnfunjrzwxlxync6zrlu33yeg5dya
* 12345 - X-`local`19rknw8l0grnfunjrzwxlxync6zrlu33ynpm3qq
Here's the mapping of `networkID` to Bech32 HRP.
```
0: "custom",
1: "avax",
2: "cascade",
3: "denali",
4: "everest",
5: "fuji",
1337: "custom",
12345: "local"
```
### Secp256k1 Recoverable Signatures
Recoverable signatures are stored as the 65-byte **`[R || S || V]`** where **`V`** is 0 or 1 to allow quick public key recoverability. **`S`** must be in the lower half of the possible range to prevent signature malleability. Before signing a message, the message is hashed using sha256.
### Secp256k1 Example
Suppose Rick and Morty are setting up a secure communication channel. Morty creates a new public-private key pair.
Private Key: `0x98cb077f972feb0481f1d894f272c6a1e3c15e272a1658ff716444f465200070`
Public Key (33-byte compressed): `0x02b33c917f2f6103448d7feb42614037d05928433cb25e78f01a825aa829bb3c27`
Because of Rick's infinite wisdom, he doesn't trust himself with carrying around Morty's public key, so he only asks for Morty's address. Morty follows the instructions, SHA256's his public key, and then ripemd160's that result to produce an address.
SHA256(Public Key): `0x28d7670d71667e93ff586f664937f52828e6290068fa2a37782045bffa7b0d2f`
Address: `0xe8777f38c88ca153a6fdc25942176d2bf5491b89`
Morty is quite confused because a public key should be safe to be public knowledge. Rick belches and explains that hashing the public key protects the private key owner from potential future security flaws in elliptic curve cryptography. In the event cryptography is broken and a private key can be derived from a public key, users can transfer their funds to an address that has never signed a transaction before, preventing their funds from being compromised by an attacker. This enables coin owners to be protected while the cryptography is upgraded across the clients.
Later, once Morty has learned more about Rick's backstory, Morty attempts to send Rick a message. Morty knows that Rick will only read the message if he can verify it was from him, so he signs the message with his private key.
Message: `0x68656c702049276d207472617070656420696e206120636f6d7075746572`
Message Hash: `0x912800c29d554fb9cdce579c0abba991165bbbc8bfec9622481d01e0b3e4b7da`
Message Signature: `0xb52aa0535c5c48268d843bd65395623d2462016325a86f09420c81f142578e121d11bd368b88ca6de4179a007e6abe0e8d0be1a6a4485def8f9e02957d3d72da01`
Morty was never seen again.
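The 65-byte recoverable signature above can be split into its **`[R || S || V]`** components with a short Python sketch:

```python
# Message signature from the example above
sig = bytes.fromhex(
    "b52aa0535c5c48268d843bd65395623d2462016325a86f09420c81f142578e12"
    "1d11bd368b88ca6de4179a007e6abe0e8d0be1a6a4485def8f9e02957d3d72da"
    "01")
assert len(sig) == 65

r, s, v = sig[:32], sig[32:64], sig[64]
assert v in (0, 1)  # recovery ID for quick public key recovery
```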
### Signed Messages
Avalanche defines a standard for interoperable generic signed messages, based on the Bitcoin Script format and the Ethereum format.
```
sign(sha256(length(prefix) + prefix + length(message) + message))
```
The prefix is simply the string `\x1AAvalanche Signed Message:\n`, where `0x1A` is the length (26) of the prefix text, and `length(message)` is an [integer](/docs/api-reference/standards/serialization-primitives#integer) holding the message size.
### Gantt Pre-Image Specification
```
+---------------+-----------+------------------------------+
| prefix : [26]byte | 26 bytes |
+---------------+-----------+------------------------------+
| messageLength : int | 4 bytes |
+---------------+-----------+------------------------------+
| message : []byte | size(message) bytes |
+---------------+-----------+------------------------------+
| 26 + 4 + size(message) |
+------------------------------+
```
### Example
As an example we will sign the message "Through consensus to the stars"
```
// prefix size: 26 bytes
0x1a
// prefix: Avalanche Signed Message:\n
0x41 0x76 0x61 0x6c 0x61 0x6e 0x63 0x68 0x65 0x20 0x53 0x69 0x67 0x6e 0x65 0x64 0x20 0x4d 0x65 0x73 0x73 0x61 0x67 0x65 0x3a 0x0a
// msg size: 30 bytes
0x00 0x00 0x00 0x1e
// msg: Through consensus to the stars
0x54 0x68 0x72 0x6f 0x75 0x67 0x68 0x20 0x63 0x6f 0x6e 0x73 0x65 0x6e 0x73 0x75 0x73 0x20 0x74 0x6f 0x20 0x74 0x68 0x65 0x20 0x73 0x74 0x61 0x72 0x73
```
After hashing with `sha256` and signing the pre-image we return the value [cb58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) encoded: `4Eb2zAHF4JjZFJmp4usSokTGqq9mEGwVMY2WZzzCmu657SNFZhndsiS8TvL32n3bexd8emUwiXs8XqKjhqzvoRFvghnvSN`. Here's an example using [Core web](https://core.app/tools/signing-tools/sign/).
A full guide on how to sign messages with Core web can be found [here](https://support.avax.network/en/articles/7206948-core-web-how-do-i-use-the-signing-tools).
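The pre-image from the Gantt specification can be reproduced with a short Python sketch; only the hashing input is built here, the actual secp256k1 signing step is omitted.

```python
import hashlib
import struct

PREFIX = b"Avalanche Signed Message:\n"  # 26 bytes

def pre_image(message: bytes) -> bytes:
    """length(prefix) + prefix + length(message) + message."""
    return (bytes([len(PREFIX)])               # 0x1a
            + PREFIX
            + struct.pack(">I", len(message))  # 4-byte BigEndian size
            + message)

msg = b"Through consensus to the stars"
img = pre_image(msg)
assert img[0] == 0x1A
assert img[27:31] == b"\x00\x00\x00\x1e"  # message size: 30
assert len(img) == 1 + 26 + 4 + 30

digest = hashlib.sha256(img).digest()     # hashed before signing
assert len(digest) == 32
```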

## Cryptography in Ethereum Virtual Machine
Avalanche nodes support the full Ethereum Virtual Machine (EVM) and precisely duplicate all of the cryptographic constructs used in Ethereum. This includes the Keccak hash function and the other mechanisms used for cryptographic security in the EVM.
## Cryptography in Other Virtual Machines
Since Avalanche is an extensible platform, we expect that people will add additional cryptographic primitives to the system over time.
# Serialization Primitives
URL: /docs/api-reference/standards/serialization-primitives
Avalanche uses a simple, uniform, and elegant representation for all internal data. This document describes how primitive types are encoded on the Avalanche platform. Transactions are encoded in terms of these basic primitive types.
## Byte[](#byte "Direct link to heading")
Bytes are packed as-is into the message payload.
Example:
```
Packing:
0x01
Results in:
[0x01]
```
## Short[](#short "Direct link to heading")
Shorts are packed in BigEndian format into the message payload.
Example:
```
Packing:
0x0102
Results in:
[0x01, 0x02]
```
## Integer[](#integer "Direct link to heading")
Integers are 32-bit values packed in BigEndian format into the message payload.
Example:
```
Packing:
0x01020304
Results in:
[0x01, 0x02, 0x03, 0x04]
```
## Long Integers[](#long-integers "Direct link to heading")
Long integers are 64-bit values packed in BigEndian format into the message payload.
Example:
```
Packing:
0x0102030405060708
Results in:
[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]
```
## IP Addresses[](#ip-addresses "Direct link to heading")
IP addresses are represented in 16-byte IPv6 format, with the port appended to the message payload as a Short. IPv4 addresses are mapped into IPv6 (`::ffff:a.b.c.d`): 10 bytes of leading `0x00`, then `0xff, 0xff`, then the 4 IPv4 bytes.
IPv4 example:
```
Packing:
"127.0.0.1:9650"
Results in:
[
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xff, 0xff, 0x7f, 0x00, 0x00, 0x01,
0x25, 0xb2,
]
```
IPv6 example:
```
Packing:
"[2001:0db8:ac10:fe01::]:12345"
Results in:
[
0x20, 0x01, 0x0d, 0xb8, 0xac, 0x10, 0xfe, 0x01,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x30, 0x39,
]
```
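Both examples can be reproduced with Python's standard `ipaddress` module (a sketch, not AvalancheGo code):

```python
import ipaddress
import struct

def pack_ip(ip: str, port: int) -> bytes:
    """Pack an IP as 16 IPv6 bytes plus a BigEndian short port."""
    addr = ipaddress.ip_address(ip)
    if addr.version == 4:
        # Map IPv4 into IPv6: 10 zero bytes, 0xffff, then the 4 bytes
        addr = ipaddress.IPv6Address(f"::ffff:{ip}")
    return addr.packed + struct.pack(">H", port)

packed = pack_ip("127.0.0.1", 9650)
assert packed == (b"\x00" * 10 + b"\xff\xff"
                  + b"\x7f\x00\x00\x01" + b"\x25\xb2")
assert len(packed) == 18
```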
## Fixed-Length Array[](#fixed-length-array "Direct link to heading")
Fixed-length arrays, whose length is known ahead of time and by context, are packed in order.
Byte array example:
```
Packing:
[0x01, 0x02]
Results in:
[0x01, 0x02]
```
Integer array example:
```
Packing:
[0x03040506]
Results in:
[0x03, 0x04, 0x05, 0x06]
```
## Variable Length Array[](#variable-length-array "Direct link to heading")
The length of the array is prefixed in Integer format, followed by the packing of the array contents in Fixed Length Array format.
Byte array example:
```
Packing:
[0x01, 0x02]
Results in:
[0x00, 0x00, 0x00, 0x02, 0x01, 0x02]
```
Integer array example:
```
Packing:
[0x03040506]
Results in:
[0x00, 0x00, 0x00, 0x01, 0x03, 0x04, 0x05, 0x06]
```
## String[](#string "Direct link to heading")
A String is packed similarly to a variable-length byte array. However, the length prefix is a short rather than an int. Strings are encoded in UTF-8 format.
Example:
```
Packing:
"Avax"
Results in:
[0x00, 0x04, 0x41, 0x76, 0x61, 0x78]
```
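The integer, variable-length array, and string examples above can all be reproduced with Python's `struct` module, using its BigEndian formats `>I` (int) and `>H` (short):

```python
import struct

# Integer: 32-bit BigEndian value
assert struct.pack(">I", 0x01020304) == b"\x01\x02\x03\x04"

# Variable-length byte array: int length prefix + contents
def pack_bytes(data: bytes) -> bytes:
    return struct.pack(">I", len(data)) + data

assert pack_bytes(b"\x01\x02") == b"\x00\x00\x00\x02\x01\x02"

# String: short length prefix + UTF-8 contents
def pack_string(s: str) -> bytes:
    encoded = s.encode("utf-8")
    return struct.pack(">H", len(encoded)) + encoded

assert pack_string("Avax") == b"\x00\x04\x41\x76\x61\x78"
```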
# P-Chain API
URL: /docs/api-reference/p-chain/api
This page is an overview of the P-Chain API associated with AvalancheGo.
The P-Chain API allows clients to interact with the [P-Chain](https://build.avax.network/docs/quick-start/primary-network#p-chain), which maintains Avalanche’s validator set and handles blockchain creation.
## Endpoint
```
/ext/bc/P
```
## Format
This API uses the `json 2.0` RPC format.
## Methods
### `platform.getBalance`
Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
Get the balance of AVAX controlled by a given address.
**Signature:**
```
platform.getBalance({
addresses: []string
}) -> {
balances: string -> int,
unlockeds: string -> int,
lockedStakeables: string -> int,
lockedNotStakeables: string -> int,
utxoIDs: []{
txID: string,
outputIndex: int
}
}
```
* `addresses` are the addresses to get the balance of.
* `balances` is a map from assetID to the total balance.
* `unlockeds` is a map from assetID to the unlocked balance.
* `lockedStakeables` is a map from assetID to the locked stakeable balance.
* `lockedNotStakeables` is a map from assetID to the locked and not stakeable balance.
* `utxoIDs` are the IDs of the UTXOs that reference `addresses`.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" : 1,
"method" :"platform.getBalance",
"params" :{
"addresses":["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"]
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"balance": "30000000000000000",
"unlocked": "20000000000000000",
"lockedStakeable": "10000000000000000",
"lockedNotStakeable": "0",
"balances": {
"BUuypiq2wyuLMvyhzFXcPyxPMCgSp7eeDohhQRqTChoBjKziC": "30000000000000000"
},
"unlockeds": {
"BUuypiq2wyuLMvyhzFXcPyxPMCgSp7eeDohhQRqTChoBjKziC": "20000000000000000"
},
"lockedStakeables": {
"BUuypiq2wyuLMvyhzFXcPyxPMCgSp7eeDohhQRqTChoBjKziC": "10000000000000000"
},
"lockedNotStakeables": {},
"utxoIDs": [
{
"txID": "11111111111111111111111111111111LpoYY",
"outputIndex": 1
},
{
"txID": "11111111111111111111111111111111LpoYY",
"outputIndex": 0
}
]
},
"id": 1
}
```
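The same call can be issued from any JSON-RPC client. Below is a hedged Python sketch using only the standard library; the node URL and address are the placeholders from the curl example, and because sending requires a running node, only request construction is exercised here.

```python
import json
import urllib.request

def build_rpc_request(method: str, params: dict, request_id: int = 1) -> bytes:
    """Build a json 2.0 RPC request body for the P-Chain API."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode("utf-8")

body = build_rpc_request("platform.getBalance", {
    "addresses": ["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"],
})

# Sending (requires a local node listening on 127.0.0.1:9650):
req = urllib.request.Request(
    "http://127.0.0.1:9650/ext/bc/P",
    data=body,
    headers={"content-type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:
#     balances = json.loads(resp.read())["result"]["balances"]

decoded = json.loads(body)
assert decoded["method"] == "platform.getBalance"
assert decoded["jsonrpc"] == "2.0"
```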
### `platform.getBlock`
Get a block by its ID.
**Signature:**
```
platform.getBlock({
blockID: string
encoding: string // optional
}) -> {
block: string,
encoding: string
}
```
**Request:**
* `blockID` is the block ID. It should be in cb58 format.
* `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`.
**Response:**
* `block` is the block encoded to `encoding`.
* `encoding` is the encoding format used.
#### Hex Example
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getBlock",
"params": {
"blockID": "d7WYmb8VeZNHsny3EJCwMm6QA37s1EHwMxw1Y71V3FqPZ5EFG",
"encoding": "hex"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"block": "0x00000000000309473dc99a0851a29174d84e522da8ccb1a56ac23f7b0ba79f80acce34cf576900000000000f4241000000010000001200000001000000000000000000000000000000000000000000000000000000000000000000000000000000011c4c57e1bcb3c567f9f03caa75563502d1a21393173c06d9d79ea247b20e24800000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000000338e0465f0000000100000000000000000427d4b22a2a78bcddd456742caf91b56badbff985ee19aef14573e7343fd6520000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000000338d1041f0000000000000000000000010000000195a4467dd8f939554ea4e6501c08294386938cbf000000010000000900000001c79711c4b48dcde205b63603efef7c61773a0eb47efb503fcebe40d21962b7c25ebd734057400a12cce9cf99aceec8462923d5d91fffe1cb908372281ed738580119286dde",
"encoding": "hex"
},
"id": 1
}
```
#### JSON Example
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getBlock",
"params": {
"blockID": "d7WYmb8VeZNHsny3EJCwMm6QA37s1EHwMxw1Y71V3FqPZ5EFG",
"encoding": "json"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"block": {
"parentID": "5615di9ytxujackzaXNrVuWQy5y8Yrt8chPCscMr5Ku9YxJ1S",
"height": 1000001,
"txs": [
{
"unsignedTx": {
"inputs": {
"networkID": 1,
"blockchainID": "11111111111111111111111111111111LpoYY",
"outputs": [],
"inputs": [
{
"txID": "DTqiagiMFdqbNQ62V2Gt1GddTVLkKUk2caGr4pyza9hTtsfta",
"outputIndex": 0,
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"input": {
"amount": 13839124063,
"signatureIndices": [0]
}
}
],
"memo": "0x"
},
"destinationChain": "2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5",
"exportedOutputs": [
{
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"output": {
"addresses": [
"P-avax1jkjyvlwclyu42n4yuegpczpfgwrf8r9lyj0d3c"
],
"amount": 13838124063,
"locktime": 0,
"threshold": 1
}
}
]
},
"credentials": [
{
"signatures": [
"0xc79711c4b48dcde205b63603efef7c61773a0eb47efb503fcebe40d21962b7c25ebd734057400a12cce9cf99aceec8462923d5d91fffe1cb908372281ed7385801"
]
}
]
}
]
},
"encoding": "json"
},
"id": 1
}
```
### `platform.getBlockByHeight`
Get a block by its height.
**Signature:**
```
platform.getBlockByHeight({
height: int
encoding: string // optional
}) -> {
block: string,
encoding: string
}
```
**Request:**
* `height` is the block height.
* `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`.
**Response:**
* `block` is the block encoded to `encoding`.
* `encoding` is the encoding format used.
#### Hex Example
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getBlockByHeight",
"params": {
"height": 1000001,
"encoding": "hex"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"block": "0x00000000000309473dc99a0851a29174d84e522da8ccb1a56ac23f7b0ba79f80acce34cf576900000000000f4241000000010000001200000001000000000000000000000000000000000000000000000000000000000000000000000000000000011c4c57e1bcb3c567f9f03caa75563502d1a21393173c06d9d79ea247b20e24800000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000000338e0465f0000000100000000000000000427d4b22a2a78bcddd456742caf91b56badbff985ee19aef14573e7343fd6520000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000000338d1041f0000000000000000000000010000000195a4467dd8f939554ea4e6501c08294386938cbf000000010000000900000001c79711c4b48dcde205b63603efef7c61773a0eb47efb503fcebe40d21962b7c25ebd734057400a12cce9cf99aceec8462923d5d91fffe1cb908372281ed738580119286dde",
"encoding": "hex"
},
"id": 1
}
```
#### JSON Example
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getBlockByHeight",
"params": {
"height": 1000001,
"encoding": "json"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"block": {
"parentID": "5615di9ytxujackzaXNrVuWQy5y8Yrt8chPCscMr5Ku9YxJ1S",
"height": 1000001,
"txs": [
{
"unsignedTx": {
"inputs": {
"networkID": 1,
"blockchainID": "11111111111111111111111111111111LpoYY",
"outputs": [],
"inputs": [
{
"txID": "DTqiagiMFdqbNQ62V2Gt1GddTVLkKUk2caGr4pyza9hTtsfta",
"outputIndex": 0,
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"input": {
"amount": 13839124063,
"signatureIndices": [0]
}
}
],
"memo": "0x"
},
"destinationChain": "2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5",
"exportedOutputs": [
{
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"output": {
"addresses": [
"P-avax1jkjyvlwclyu42n4yuegpczpfgwrf8r9lyj0d3c"
],
"amount": 13838124063,
"locktime": 0,
"threshold": 1
}
}
]
},
"credentials": [
{
"signatures": [
"0xc79711c4b48dcde205b63603efef7c61773a0eb47efb503fcebe40d21962b7c25ebd734057400a12cce9cf99aceec8462923d5d91fffe1cb908372281ed7385801"
]
}
]
}
]
},
"encoding": "json"
},
"id": 1
}
```
### `platform.getBlockchains`
Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
Get all the blockchains that exist (excluding the P-Chain).
**Signature:**
```
platform.getBlockchains() ->
{
blockchains: []{
id: string,
name: string,
subnetID: string,
vmID: string
}
}
```
* `blockchains` is all of the blockchains that exist on the Avalanche network.
* `name` is the human-readable name of this blockchain.
* `id` is the blockchain’s ID.
* `subnetID` is the ID of the Subnet that validates this blockchain.
* `vmID` is the ID of the Virtual Machine the blockchain runs.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getBlockchains",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"blockchains": [
{
"id": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM",
"name": "X-Chain",
"subnetID": "11111111111111111111111111111111LpoYY",
"vmID": "jvYyfQTxGMJLuGWa55kdP2p2zSUYsQ5Raupu4TW34ZAUBAbtq"
},
{
"id": "2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5",
"name": "C-Chain",
"subnetID": "11111111111111111111111111111111LpoYY",
"vmID": "mgj786NP7uDwBCcq6YwThhaN8FLyybkCa4zBWTQbNgmK6k9A6"
},
{
"id": "CqhF97NNugqYLiGaQJ2xckfmkEr8uNeGG5TQbyGcgnZ5ahQwa",
"name": "Simple DAG Payments",
"subnetID": "11111111111111111111111111111111LpoYY",
"vmID": "sqjdyTKUSrQs1YmKDTUbdUhdstSdtRTGRbUn8sqK8B6pkZkz1"
},
{
"id": "VcqKNBJsYanhVFxGyQE5CyNVYxL3ZFD7cnKptKWeVikJKQkjv",
"name": "Simple Chain Payments",
"subnetID": "11111111111111111111111111111111LpoYY",
"vmID": "sqjchUjzDqDfBPGjfQq2tXW1UCwZTyvzAWHsNzF2cb1eVHt6w"
},
{
"id": "2SMYrx4Dj6QqCEA3WjnUTYEFSnpqVTwyV3GPNgQqQZbBbFgoJX",
"name": "Simple Timestamp Server",
"subnetID": "11111111111111111111111111111111LpoYY",
"vmID": "tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH"
},
{
"id": "KDYHHKjM4yTJTT8H8qPs5KXzE6gQH5TZrmP1qVr1P6qECj3XN",
"name": "My new timestamp",
"subnetID": "2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r",
"vmID": "tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH"
},
{
"id": "2TtHFqEAAJ6b33dromYMqfgavGPF3iCpdG3hwNMiart2aB5QHi",
"name": "My new AVM",
"subnetID": "2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r",
"vmID": "jvYyfQTxGMJLuGWa55kdP2p2zSUYsQ5Raupu4TW34ZAUBAbtq"
}
]
},
"id": 1
}
```
### `platform.getBlockchainStatus`
Get the status of a blockchain.
**Signature:**
```
platform.getBlockchainStatus(
{
blockchainID: string
}
) -> {status: string}
```
`status` is one of:
* `Validating`: The blockchain is being validated by this node.
* `Created`: The blockchain exists but isn’t being validated by this node.
* `Preferred`: The blockchain was proposed to be created and is likely to be created but the
transaction isn’t yet accepted.
* `Syncing`: This node is participating in this blockchain as a non-validating node.
* `Unknown`: The blockchain either wasn’t proposed or the proposal to create it isn’t preferred. The
proposal may be resubmitted.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getBlockchainStatus",
"params":{
"blockchainID":"2NbS4dwGaf2p1MaXb65PrkZdXRwmSX4ZzGnUu7jm3aykgThuZE"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"status": "Created"
},
"id": 1
}
```
### `platform.getCurrentSupply`
Returns an upper bound on the number of tokens that exist and can be staked on the requested
Subnet. This is an upper bound because it does not account for burnt tokens, including transaction fees.
**Signature:**
```
platform.getCurrentSupply ({
subnetID: string // optional
}) -> { supply: int }
```
* `supply` is an upper bound on the number of tokens that exist.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getCurrentSupply",
"params": {
"subnetID": "11111111111111111111111111111111LpoYY"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"supply": "365865167637779183"
},
"id": 1
}
```
The response in this example indicates that AVAX’s supply is at most 365.865 million.
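Amounts returned by the P-Chain API are strings denominated in nAVAX (1 AVAX = 10^9 nAVAX). A minimal conversion sketch (`navax_to_avax` is a hypothetical helper, not part of any SDK):

```python
# Hypothetical helper: convert an nAVAX amount string from the API into AVAX.
# P-Chain amounts are denominated in nAVAX (1 AVAX = 10^9 nAVAX) and are
# returned as strings, so parse before dividing.

def navax_to_avax(amount: str) -> float:
    """Convert an nAVAX string (as returned by the API) to AVAX."""
    return int(amount) / 1_000_000_000

supply_avax = navax_to_avax("365865167637779183")
print(f"{supply_avax:,.0f} AVAX")  # roughly 365.9 million AVAX
```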
### `platform.getCurrentValidators`
List the current validators of the given Subnet.
**Signature:**
```
platform.getCurrentValidators({
subnetID: string, // optional
nodeIDs: string[], // optional
}) -> {
validators: []{
txID: string,
startTime: string,
endTime: string,
nodeID: string,
weight: string,
validationID: string,
publicKey: string,
remainingBalanceOwner: {
locktime: string,
threshold: string,
addresses: string[]
},
deactivationOwner: {
locktime: string,
threshold: string,
addresses: string[]
},
minNonce: string,
balance: string,
validationRewardOwner: {
locktime: string,
threshold: string,
addresses: string[]
},
delegationRewardOwner: {
locktime: string,
threshold: string,
addresses: string[]
},
potentialReward: string,
delegationFee: string,
uptime: string,
connected: bool,
signer: {
publicKey: string,
proofOfPossession: string
},
delegatorCount: string,
delegatorWeight: string,
delegators: []{
txID: string,
startTime: string,
endTime: string,
weight: string,
nodeID: string,
rewardOwner: {
locktime: string,
threshold: string,
addresses: string[]
},
potentialReward: string,
}
}
}
```
* `subnetID` is the Subnet whose current validators are returned. If omitted, returns the current
validators of the Primary Network.
* `nodeIDs` is a list of the NodeIDs of current validators to request. If omitted, all current
validators are returned. If a specified NodeID is not in the set of current validators, it will
not be included in the response.
* `validators` can include different fields based on the subnet type (L1, PoA Subnets, the Primary Network):
* `txID` is the validator transaction.
* `startTime` is the Unix time when the validator starts validating the Subnet.
* `endTime` is the Unix time when the validator stops validating the Subnet. Omitted if `subnetID` is an L1 Subnet.
* `nodeID` is the validator’s node ID.
* `weight` is the validator’s weight (stake) when sampling validators.
* `validationID` is the ID of the L1 validator registration transaction. Omitted if `subnetID` is not an L1 Subnet.
* `publicKey` is the compressed BLS public key of the validator. Omitted if `subnetID` is not an L1 Subnet.
* `remainingBalanceOwner` is an `OutputOwners` which includes a `locktime`, `threshold`, and an array of `addresses`. It specifies the owner that will receive any withdrawn balance. Omitted if `subnetID` is not an L1 Subnet.
* `deactivationOwner` is an `OutputOwners` which includes a `locktime`, `threshold`, and an array of `addresses`. It specifies the owner that can withdraw the balance. Omitted if `subnetID` is not an L1 Subnet.
* `minNonce` is the minimum nonce that must be included in a `SetL1ValidatorWeightTx` for the transaction to be valid. Omitted if `subnetID` is not an L1 Subnet.
* `balance` is the current remaining balance that can be used to pay the validator's continuous fee. Omitted if `subnetID` is not an L1 Subnet.
* `validationRewardOwner` is an `OutputOwners` output which includes `locktime`, `threshold` and
array of `addresses`. Specifies the owner of the potential reward earned from staking. Omitted
if `subnetID` is not the Primary Network.
* `delegationRewardOwner` is an `OutputOwners` output which includes `locktime`, `threshold` and
array of `addresses`. Specifies the owner of the potential reward earned from delegations. Omitted if `subnetID` is not the Primary Network.
* `potentialReward` is the potential reward earned from staking. Omitted if `subnetID` is not the Primary Network.
* `delegationFee` is the percent fee this validator charges when others delegate stake to
them. Omitted if `subnetID` is not the Primary Network.
* `uptime` is the % of time the queried node has reported the peer as online and validating the
Subnet. Omitted if `subnetID` is not the Primary Network.
* `connected` is if the node is connected and tracks the Subnet. Omitted if `subnetID` is not the Primary Network.
* `signer` is the node's BLS public key and proof of possession. Omitted if the validator doesn't
have a BLS public key. Omitted if `subnetID` is not the Primary Network.
* `delegatorCount` is the number of delegators on this validator.
Omitted if `subnetID` is not the Primary Network.
* `delegatorWeight` is total weight of delegators on this validator.
Omitted if `subnetID` is not the Primary Network.
* `delegators` is the list of delegators to this validator. Omitted if `subnetID` is not the Primary Network. Omitted unless `nodeIDs` specifies a single NodeID.
* `txID` is the delegator transaction.
* `startTime` is the Unix time when the delegator started.
* `endTime` is the Unix time when the delegator stops.
* `weight` is the amount of nAVAX this delegator staked.
* `nodeID` is the validating node’s node ID.
* `rewardOwner` is an `OutputOwners` output which includes `locktime`, `threshold` and array of
`addresses`.
* `potentialReward` is the potential reward earned from staking.
Note: An L1 Subnet can include both initial legacy PoA validators (before L1 conversion) and L1 validators. The response will include both types of validators.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getCurrentValidators",
"params": {
"nodeIDs": ["NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD"]
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response (Primary Network):**
```json
{
"jsonrpc": "2.0",
"result": {
"validators": [
{
"txID": "2NNkpYTGfTFLSGXJcHtVv6drwVU2cczhmjK2uhvwDyxwsjzZMm",
"startTime": "1600368632",
"endTime": "1602960455",
"weight": "2000000000000",
"nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD",
"validationRewardOwner": {
"locktime": "0",
"threshold": "1",
"addresses": ["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"]
},
"delegationRewardOwner": {
"locktime": "0",
"threshold": "1",
"addresses": ["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"]
},
"potentialReward": "117431493426",
"delegationFee": "10.0000",
"uptime": "0.0000",
"connected": false,
"delegatorCount": "1",
"delegatorWeight": "25000000000",
"delegators": [
{
"txID": "Bbai8nzGVcyn2VmeYcbS74zfjJLjDacGNVuzuvAQkHn1uWfoV",
"startTime": "1600368523",
"endTime": "1602960342",
"weight": "25000000000",
"nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD",
"rewardOwner": {
"locktime": "0",
"threshold": "1",
"addresses": ["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"]
},
"potentialReward": "11743144774"
}
]
}
]
},
"id": 1
}
```
**Example Response (L1):**
```json
{
"jsonrpc": "2.0",
"result": {
"validators": [
{
"validationID": "2wTscvX3JUsMbZHFRd9t8Ywz2q9j2BmETg8cTvgUHgawjbSvZX",
"nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD",
"publicKey": "0x91951771ff32b1a985a4936592bce8512a986353c4c2eb5a0f12dbb76bda3a0a0c975e26413ff44c0ee9d8d689eff8ed",
"remainingBalanceOwner": {
"locktime": "0",
"threshold": "1",
"addresses": [
"P-fuji1ywzvrftfqexh5g6qa9zyrytj6pqdfetza2hqln"
]
},
"deactivationOwner": {
"locktime": "0",
"threshold": "1",
"addresses": [
"P-fuji1ywzvrftfqexh5g6qa9zyrytj6pqdfetza2hqln"
]
},
"startTime": "1734034648",
"weight": "20",
"minNonce": "0",
"balance": "8780477952"
}
]
},
"id": 1
}
```
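Numeric fields in these responses are returned as strings (amounts in nAVAX). As a rough sketch, the total stake backing the Primary Network validator in the example above is its own weight plus its delegators' weight:

```python
# Sketch: compute the total stake (nAVAX) backing a validator from a
# platform.getCurrentValidators result. Numeric fields arrive as strings,
# so convert before doing arithmetic. Values copied from the example above.
validator = {
    "weight": "2000000000000",
    "delegatorWeight": "25000000000",
}

total_stake = int(validator["weight"]) + int(validator.get("delegatorWeight", "0"))
print(total_stake)  # 2025000000000 nAVAX
```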
### `platform.getFeeConfig`
Returns the dynamic fee configuration of the P-chain.
**Signature:**
```
platform.getFeeConfig() -> {
weights: []uint64,
maxCapacity: uint64,
maxPerSecond: uint64,
targetPerSecond: uint64,
minPrice: uint64,
excessConversionConstant: uint64
}
```
* `weights` are used to merge fee dimensions into a single gas value.
* `maxCapacity` is the maximum amount of gas the chain is allowed to store for future use.
* `maxPerSecond` is the amount of gas the chain is allowed to consume per second.
* `targetPerSecond` is the target amount of gas the chain should consume per second to keep fees stable.
* `minPrice` is the minimum price per unit of gas.
* `excessConversionConstant` is used to convert excess gas to a gas price.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getFeeConfig",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"weights": [1, 1000, 1000, 4],
"maxCapacity": 1000000,
"maxPerSecond": 100000,
"targetPerSecond": 50000,
"minPrice": 1,
"excessConversionConstant": 2164043
},
"id": 1
}
```
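If a transaction's usage along each fee dimension is known, the `weights` fold it into a single gas value with a dot product. A minimal sketch, using the `weights` from the example response above; the dimension names, ordering, and the sample usage values are assumptions based on ACP-103, not API output:

```python
# Sketch: fold a transaction's fee-dimension usage into one gas value via a
# dot product with the weights from platform.getFeeConfig. Dimension names
# and ordering (bandwidth, DB reads, DB writes, compute) are assumptions
# based on ACP-103; the usage values below are purely illustrative.

weights = [1, 1000, 1000, 4]      # from the example response above
dimensions = [500, 2, 1, 1000]    # hypothetical tx usage per dimension

gas = sum(w * d for w, d in zip(weights, dimensions))
print(gas)  # 500*1 + 2*1000 + 1*1000 + 1000*4 = 7500
```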
### `platform.getFeeState`
Returns the current fee state of the P-chain.
**Signature:**
```
platform.getFeeState() -> {
capacity: uint64,
excess: uint64,
price: uint64,
timestamp: string
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getFeeState",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"capacity": 973044,
"excess": 26956,
"price": 1,
"timestamp": "2024-12-16T17:19:07Z"
},
"id": 1
}
```
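A sketch of how `excess` relates to `price`, assuming the ACP-103 exponential formula `price = minPrice * e^(excess / excessConversionConstant)` (an assumption here; consult the specification for the authoritative definition). `minPrice` and `excessConversionConstant` come from `platform.getFeeConfig`:

```python
import math

# Sketch (assumption per ACP-103): derive the gas price from the stored
# excess as minPrice * e^(excess / excessConversionConstant). Whether the
# node floors or rounds the result is an implementation detail; this floors.

def gas_price(min_price: int, excess: int, conversion_constant: int) -> int:
    return math.floor(min_price * math.exp(excess / conversion_constant))

# Values from the getFeeConfig / getFeeState example responses above:
print(gas_price(1, 26956, 2164043))  # e^0.0125 is just above 1, so price 1
```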
### `platform.getHeight`
Returns the height of the last accepted block.
**Signature:**
```
platform.getHeight() ->
{
height: int,
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getHeight",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"height": "56"
},
"id": 1
}
```
### `platform.getL1Validator`
Returns a current L1 validator.
**Signature:**
```
platform.getL1Validator({
validationID: string,
}) -> {
validationID: string,
subnetID: string,
nodeID: string,
publicKey: string,
remainingBalanceOwner: {
locktime: string,
threshold: string,
addresses: string[]
},
deactivationOwner: {
locktime: string,
threshold: string,
addresses: string[]
},
startTime: string,
weight: string,
minNonce: string,
balance: string,
height: string
}
```
* `validationID` is the ID of the L1 validator registration transaction.
* `subnetID` is the L1 this validator is validating.
* `nodeID` is the node ID of the validator.
* `publicKey` is the compressed BLS public key of the validator.
* `remainingBalanceOwner` is an `OutputOwners` which includes a `locktime`, `threshold`, and an array of `addresses`. It specifies the owner that will receive any withdrawn balance.
* `deactivationOwner` is an `OutputOwners` which includes a `locktime`, `threshold`, and an array of `addresses`. It specifies the owner that can withdraw the balance.
* `startTime` is the Unix timestamp, in seconds, of when this validator was added to the validator set.
* `weight` is the weight of this validator, used for consensus voting and ICM.
* `minNonce` is the minimum nonce that must be included in a `SetL1ValidatorWeightTx` for the transaction to be valid.
* `balance` is the current remaining balance that can be used to pay the validator's continuous fee.
* `height` is the height of the last accepted block.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getL1Validator",
"params": {
"validationID": ["9FAftNgNBrzHUMMApsSyV6RcFiL9UmCbvsCu28xdLV2mQ7CMo"]
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"subnetID": "2DeHa7Qb6sufPkmQcFWG2uCd4pBPv9WB6dkzroiMQhd1NSRtof",
"nodeID": "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg",
"validationID": "9FAftNgNBrzHUMMApsSyV6RcFiL9UmCbvsCu28xdLV2mQ7CMo",
"publicKey": "0x900c9b119b5c82d781d4b49be78c3fc7ae65f2b435b7ed9e3a8b9a03e475edff86d8a64827fec8db23a6f236afbf127d",
"remainingBalanceOwner": {
"locktime": "0",
"threshold": "0",
"addresses": []
},
"deactivationOwner": {
"locktime": "0",
"threshold": "0",
"addresses": []
},
"startTime": "1731445206",
"weight": "49463",
"minNonce": "0",
"balance": "1000000000",
"height": "3"
},
"id": 1
}
```
### `platform.getProposedHeight`
Returns this node's current proposer VM height.
**Signature:**
```
platform.getProposedHeight() ->
{
height: int,
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getProposedHeight",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"height": "56"
},
"id": 1
}
```
### `platform.getMinStake`
Get the minimum amount of tokens required to validate the requested Subnet and the minimum amount of
tokens that can be delegated.
**Signature:**
```
platform.getMinStake({
subnetID: string // optional
}) ->
{
minValidatorStake : uint64,
minDelegatorStake : uint64
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"platform.getMinStake",
"params": {
"subnetID":"11111111111111111111111111111111LpoYY"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"minValidatorStake": "2000000000000",
"minDelegatorStake": "25000000000"
},
"id": 1
}
```
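The example values above are denominated in nAVAX; a quick conversion sketch shows the corresponding Mainnet minimums in AVAX:

```python
# Sketch: convert the getMinStake example values from nAVAX to AVAX
# (1 AVAX = 10^9 nAVAX). The API returns these amounts as strings.
NAVAX_PER_AVAX = 10**9

min_validator = int("2000000000000") / NAVAX_PER_AVAX
min_delegator = int("25000000000") / NAVAX_PER_AVAX
print(min_validator, min_delegator)  # 2000.0 25.0
```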
### `platform.getRewardUTXOs`
Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
Returns the UTXOs that were rewarded after the provided transaction's staking or delegation period
ended.
**Signature:**
```
platform.getRewardUTXOs({
txID: string,
encoding: string // optional
}) -> {
numFetched: integer,
utxos: []string,
encoding: string
}
```
* `txID` is the ID of the staking or delegating transaction
* `numFetched` is the number of returned UTXOs
* `utxos` is an array of encoded reward UTXOs
* `encoding` specifies the format for the returned UTXOs. Can only be `hex` when a value is
provided.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getRewardUTXOs",
"params": {
"txID": "2nmH8LithVbdjaXsxVQCQfXtzN9hBbmebrsaEYnLM9T32Uy2Y5"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"numFetched": "2",
"utxos": [
"0x0000a195046108a85e60f7a864bb567745a37f50c6af282103e47cc62f036cee404700000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c1f01765",
"0x0000ae8b1b94444eed8de9a81b1222f00f1b4133330add23d8ac288bffa98b85271100000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216473d042a"
],
"encoding": "hex"
},
"id": 1
}
```
### `platform.getStake`
Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
Get the amount of nAVAX staked by a set of addresses. The amount returned does not include staking
rewards.
**Signature:**
```
platform.getStake({
addresses: []string,
validatorsOnly: true or false
}) ->
{
stakeds: string -> int,
stakedOutputs: []string,
encoding: string
}
```
* `addresses` are the addresses to get information about.
* `validatorsOnly` can be either `true` or `false`. If `true`, will skip checking delegators for stake.
* `stakeds` is a map from assetID to the amount staked by addresses provided.
* `stakedOutputs` are the string representation of staked outputs.
* `encoding` specifies the format for the returned outputs.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getStake",
"params": {
"addresses": [
"P-avax1pmgmagjcljjzuz2ve339dx82khm7q8getlegte"
],
"validatorsOnly": true
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"staked": "6500000000000",
"stakeds": {
"FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z": "6500000000000"
},
"stakedOutputs": [
"0x000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff00000007000005e96630e800000000000000000000000001000000011f1c933f38da6ba0ba46f8c1b0a7040a9a991a80dd338ed1"
],
"encoding": "hex"
},
"id": 1
}
```
### `platform.getStakingAssetID`
Retrieve an assetID for a Subnet’s staking asset.
**Signature:**
```
platform.getStakingAssetID({
subnetID: string // optional
}) -> {
assetID: string
}
```
* `subnetID` is the Subnet whose assetID is requested.
* `assetID` is the assetID for a Subnet’s staking asset.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getStakingAssetID",
"params": {
"subnetID": "11111111111111111111111111111111LpoYY"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"assetID": "2fombhL7aGPwj3KH4bfrmJwW6PVnMobf9Y2fn9GwxiAAJyFDbe"
},
"id": 1
}
```
The AssetID for AVAX differs depending on the network you are on:
* Mainnet: `FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z`
* Testnet: `U8iRqJoiJm8xZHAacmvYyZVwqQx6uDNtQeP3CQ6fcgQk3JqnK`
### `platform.getSubnet`
Get owners and info about the Subnet or L1.
**Signature:**
```
platform.getSubnet({
subnetID: string
}) ->
{
isPermissioned: bool,
controlKeys: []string,
threshold: string,
locktime: string,
subnetTransformationTxID: string,
conversionID: string,
managerChainID: string,
managerAddress: string
}
```
* `subnetID` is the ID of the Subnet to get information about. This parameter is required; the call fails if it is omitted.
* `threshold` signatures from addresses in `controlKeys` are needed to make changes to
a permissioned subnet. If the Subnet is not a PoA Subnet, then `threshold` will be `0` and `controlKeys`
will be empty.
* Changes cannot be made to the Subnet until `locktime` is in the past.
* `subnetTransformationTxID` is the ID of the transaction that changed the subnet into an elastic one, if it exists.
* `conversionID` is the ID of the conversion from a permissioned Subnet into an L1, if it exists.
* `managerChainID` is the ChainID that has the ability to modify this L1's validator set, if it exists.
* `managerAddress` is the address that has the ability to modify this L1's validator set, if it exists.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getSubnet",
"params": {"subnetID":"Vz2ArUpigHt7fyE79uF3gAXvTPLJi2LGgZoMpgNPHowUZJxBb"},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"isPermissioned": true,
"controlKeys": [
"P-fuji1ztvstx6naeg6aarfd047fzppdt8v4gsah88e0c",
"P-fuji193kvt4grqewv6ce2x59wnhydr88xwdgfcedyr3"
],
"threshold": "1",
"locktime": "0",
"subnetTransformationTxID": "11111111111111111111111111111111LpoYY",
"conversionID": "11111111111111111111111111111111LpoYY",
"managerChainID": "11111111111111111111111111111111LpoYY",
"managerAddress": null
},
"id": 1
}
```
### `platform.getSubnets`
Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
Get info about the Subnets.
**Signature:**
```
platform.getSubnets({
ids: []string
}) ->
{
subnets: []{
id: string,
controlKeys: []string,
threshold: string
}
}
```
* `ids` are the IDs of the Subnets to get information about. If omitted, gets information about all
Subnets.
* `id` is the Subnet’s ID.
* `threshold` signatures from addresses in `controlKeys` are needed to add a validator to the
Subnet. If the Subnet is not a PoA Subnet, then `threshold` will be `0` and `controlKeys` will be
empty.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getSubnets",
"params": {"ids":["hW8Ma7dLMA7o4xmJf3AXBbo17bXzE7xnThUd3ypM4VAWo1sNJ"]},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"subnets": [
{
"id": "hW8Ma7dLMA7o4xmJf3AXBbo17bXzE7xnThUd3ypM4VAWo1sNJ",
"controlKeys": [
"KNjXsaA1sZsaKCD1cd85YXauDuxshTes2",
"Aiz4eEt5xv9t4NCnAWaQJFNz5ABqLtJkR"
],
"threshold": "2"
}
]
},
"id": 1
}
```
### `platform.getTimestamp`
Get the current P-Chain timestamp.
**Signature:**
```
platform.getTimestamp() -> {timestamp: string}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getTimestamp",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"timestamp": "2021-09-07T00:00:00-04:00"
},
"id": 1
}
```
### `platform.getTotalStake`
Get the total amount of tokens staked on the requested Subnet.
**Signature:**
```
platform.getTotalStake({
subnetID: string
}) -> {
stake: int,
weight: int
}
```
#### Primary Network Example
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getTotalStake",
"params": {
"subnetID": "11111111111111111111111111111111LpoYY"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"stake": "279825917679866811",
"weight": "279825917679866811"
},
"id": 1
}
```
#### Subnet Example
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getTotalStake",
"params": {
"subnetID": "2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r",
},
"id": 1
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"weight": "100000"
},
"id": 1
}
```
### `platform.getTx`
Gets a transaction by its ID.
Optional `encoding` parameter to specify the format for the returned transaction. Can be either
`hex` or `json`. Defaults to `hex`.
**Signature:**
```
platform.getTx({
txID: string,
encoding: string // optional
}) -> {
tx: string,
encoding: string,
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getTx",
"params": {
"txID":"28KVjSw5h3XKGuNpJXWY74EdnGq4TUWvCgEtJPymgQTvudiugb",
"encoding": "json"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"tx": {
"unsignedTx": {
"networkID": 1,
"blockchainID": "11111111111111111111111111111111LpoYY",
"outputs": [],
"inputs": [
{
"txID": "NXNJHKeaJyjjWVSq341t6LGQP5UNz796o1crpHPByv1TKp9ZP",
"outputIndex": 0,
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"input": {
"amount": 20824279595,
"signatureIndices": [0]
}
},
{
"txID": "2ahK5SzD8iqi5KBqpKfxrnWtrEoVwQCqJsMoB9kvChCaHgAQC9",
"outputIndex": 1,
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"input": {
"amount": 28119890783,
"signatureIndices": [0]
}
}
],
"memo": "0x",
"validator": {
"nodeID": "NodeID-VT3YhgFaWEzy4Ap937qMeNEDscCammzG",
"start": 1682945406,
"end": 1684155006,
"weight": 48944170378
},
"stake": [
{
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"output": {
"addresses": ["P-avax1tnuesf6cqwnjw7fxjyk7lhch0vhf0v95wj5jvy"],
"amount": 48944170378,
"locktime": 0,
"threshold": 1
}
}
],
"rewardsOwner": {
"addresses": ["P-avax19zfygxaf59stehzedhxjesads0p5jdvfeedal0"],
"locktime": 0,
"threshold": 1
}
},
"credentials": [
{
"signatures": [
"0x6954e90b98437646fde0c1d54c12190fc23ae5e319c4d95dda56b53b4a23e43825251289cdc3728f1f1e0d48eac20e5c8f097baa9b49ea8a3cb6a41bb272d16601"
]
},
{
"signatures": [
"0x6954e90b98437646fde0c1d54c12190fc23ae5e319c4d95dda56b53b4a23e43825251289cdc3728f1f1e0d48eac20e5c8f097baa9b49ea8a3cb6a41bb272d16601"
]
}
],
"id": "28KVjSw5h3XKGuNpJXWY74EdnGq4TUWvCgEtJPymgQTvudiugb"
},
"encoding": "json"
},
"id": 1
}
```
### `platform.getTxStatus`
Gets a transaction’s status by its ID. If the transaction was dropped, the response will include a
`reason` field explaining why it was dropped.
**Signature:**
```
platform.getTxStatus({
txID: string
}) -> { status: string }
```
`status` is one of:
* `Committed`: The transaction is (or will be) accepted by every node
* `Processing`: The transaction is being voted on by this node
* `Dropped`: The transaction will never be accepted by any node in the network, check `reason` field
for more information
* `Unknown`: The transaction hasn’t been seen by this node
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getTxStatus",
"params": {
"txID":"TAG9Ns1sa723mZy1GSoGqWipK6Mvpaj7CAswVJGM6MkVJDF9Q"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"status": "Committed"
},
"id": 1
}
```
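A sketch of a simple polling client for this method: build the JSON-RPC request body and stop retrying once a terminal status comes back. `make_request` and `is_final` are hypothetical helpers, and the HTTP transport is omitted:

```python
import json

# Sketch: build a platform.getTxStatus request body and decide whether to
# keep polling based on the returned status. make_request / is_final are
# hypothetical helpers; POSTing the body to /ext/bc/P is left to the caller.

def make_request(tx_id: str) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "platform.getTxStatus",
        "params": {"txID": tx_id},
        "id": 1,
    })

def is_final(status: str) -> bool:
    # Committed and Dropped are terminal; Processing/Unknown warrant a retry.
    return status in ("Committed", "Dropped")

print(is_final("Committed"), is_final("Processing"))  # True False
```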
### `platform.getUTXOs`
Gets the UTXOs that reference a given set of addresses.
**Signature:**
```
platform.getUTXOs(
{
addresses: []string,
limit: int, // optional
startIndex: { // optional
address: string,
utxo: string
},
sourceChain: string, // optional
encoding: string, // optional
},
) ->
{
numFetched: int,
utxos: []string,
endIndex: {
address: string,
utxo: string
},
encoding: string,
}
```
* `utxos` is a list of UTXOs such that each UTXO references at least one address in `addresses`.
* At most `limit` UTXOs are returned. If `limit` is omitted or greater than 1024, it is set to 1024.
* This method supports pagination. `endIndex` denotes the last UTXO returned. To get the next set of
UTXOs, use the value of `endIndex` as `startIndex` in the next call.
* If `startIndex` is omitted, it fetches all UTXOs up to `limit`.
* When using pagination (that is when `startIndex` is provided), UTXOs are not guaranteed to be unique
across multiple calls. That is, a UTXO may appear in the result of the first call, and then again
in the second call.
* When using pagination, consistency is not guaranteed across multiple calls. That is, the UTXO set
of the addresses may have changed between calls.
* `encoding` specifies the format for the returned UTXOs. Can only be `hex` when a value is
provided.
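The pagination rules above can be sketched as a loop: feed each call's `endIndex` back in as the next `startIndex`, deduplicate since pages may overlap, and stop when a page comes back short. `collect_utxos` and `fetch_page` are hypothetical helpers; the transport is left to the caller:

```python
# Sketch: paginate platform.getUTXOs by feeding each call's endIndex back in
# as the next startIndex, deduplicating since pages may overlap. fetch_page
# is a hypothetical transport callback (e.g. an HTTP POST to /ext/bc/P)
# that returns the "result" object of a single call.

def collect_utxos(fetch_page, addresses, limit=1024):
    utxos, start_index = set(), None
    while True:
        params = {"addresses": addresses, "limit": limit, "encoding": "hex"}
        if start_index is not None:
            params["startIndex"] = start_index
        result = fetch_page(params)
        utxos.update(result["utxos"])
        if int(result["numFetched"]) < limit:  # a short page ends the walk
            return sorted(utxos)
        start_index = result["endIndex"]

# Fake three-page transport for illustration; pages 1 and 2 overlap on "b".
pages = [
    {"numFetched": "2", "utxos": ["a", "b"], "endIndex": {"address": "P-x", "utxo": "b"}},
    {"numFetched": "2", "utxos": ["b", "c"], "endIndex": {"address": "P-x", "utxo": "c"}},
    {"numFetched": "0", "utxos": [], "endIndex": {"address": "P-x", "utxo": "c"}},
]
it = iter(pages)
print(collect_utxos(lambda params: next(it), ["P-x"], limit=2))  # ['a', 'b', 'c']
```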
#### **Example**
Suppose we want all UTXOs that reference at least one of
`P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5` and `P-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6`.
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"platform.getUTXOs",
"params" :{
"addresses":["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "P-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6"],
"limit":5,
"encoding": "hex"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
This gives response:
```json
{
"jsonrpc": "2.0",
"result": {
"numFetched": "5",
"utxos": [
"0x0000a195046108a85e60f7a864bb567745a37f50c6af282103e47cc62f036cee404700000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c1f01765",
"0x0000ae8b1b94444eed8de9a81b1222f00f1b4133330add23d8ac288bffa98b85271100000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216473d042a",
"0x0000731ce04b1feefa9f4291d869adc30a33463f315491e164d89be7d6d2d7890cfc00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21600dd3047",
"0x0000b462030cc4734f24c0bc224cf0d16ee452ea6b67615517caffead123ab4fbf1500000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c71b387e",
"0x000054f6826c39bc957c0c6d44b70f961a994898999179cc32d21eb09c1908d7167b00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f2166290e79d"
],
"endIndex": {
"address": "P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
"utxo": "kbUThAUfmBXUmRgTpgD6r3nLj7rJUGho6xyht5nouNNypH45j"
},
"encoding": "hex"
},
"id": 1
}
```
Since `numFetched` is the same as `limit`, we can tell that there may be more UTXOs that were not
fetched. We call the method again, this time with `startIndex`:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"platform.getUTXOs",
"params" :{
"addresses":["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"],
"limit":5,
"startIndex": {
"address": "P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
"utxo": "0x62fc816bb209857923770c286192ab1f9e3f11e4a7d4ba0943111c3bbfeb9e4a5ea72fae"
},
"encoding": "hex"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
This gives the response:
```json
{
"jsonrpc": "2.0",
"result": {
"numFetched": "4",
"utxos": [
"0x000020e182dd51ee4dcd31909fddd75bb3438d9431f8e4efce86a88a684f5c7fa09300000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21662861d59",
"0x0000a71ba36c475c18eb65dc90f6e85c4fd4a462d51c5de3ac2cbddf47db4d99284e00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21665f6f83f",
"0x0000925424f61cb13e0fbdecc66e1270de68de9667b85baa3fdc84741d048daa69fa00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216afecf76a",
"0x000082f30327514f819da6009fad92b5dba24d27db01e29ad7541aa8e6b6b554615c00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216779c2d59"
],
"endIndex": {
"address": "P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
"utxo": "21jG2RfqyHUUgkTLe2tUp6ETGLriSDTW3th8JXFbPRNiSZ11jK"
},
"encoding": "hex"
},
"id": 1
}
```
Since `numFetched` is less than `limit`, we know that we are done fetching UTXOs and don’t need to
call this method again.
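The pagination loop described above can be sketched in Python. `fetch_all_utxos` and its `rpc_call` parameter are hypothetical helpers (the transport is stubbed out); the point is the termination condition and the `endIndex` → `startIndex` handoff. Remember that consistency is not guaranteed across pages.

```python
def fetch_all_utxos(rpc_call, addresses, limit=5):
    """Page through platform.getUTXOs until a short page signals the end.

    rpc_call is any callable that posts one getUTXOs request with the
    given params dict and returns the "result" object from the response.
    """
    utxos, start_index = [], None
    while True:
        params = {"addresses": addresses, "limit": limit, "encoding": "hex"}
        if start_index is not None:
            params["startIndex"] = start_index
        result = rpc_call(params)
        utxos.extend(result["utxos"])
        # numFetched < limit means there is nothing left to fetch.
        if int(result["numFetched"]) < limit:
            return utxos
        # Otherwise, resume from where the last page ended.
        start_index = result["endIndex"]
```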
Suppose we want to fetch the UTXOs exported from the X-Chain to the P-Chain in order to build an ImportTx. Then we need to call `platform.getUTXOs` with the `sourceChain` argument in order to retrieve the atomic UTXOs:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"platform.getUTXOs",
"params" :{
"addresses":["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"],
"sourceChain": "X",
"encoding": "hex"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
This gives the response:
```json
{
"jsonrpc": "2.0",
"result": {
"numFetched": "1",
"utxos": [
"0x00001f989ffaf18a18a59bdfbf209342aa61c6a62a67e8639d02bb3c8ddab315c6fa0000000139c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d55008800000007000000746a528800000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29cd704fe76"
],
"endIndex": {
"address": "P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
"utxo": "S5UKgWoVpoGFyxfisebmmRf8WqC7ZwcmYwS7XaDVZqoaFcCwK"
},
"encoding": "hex"
},
"id": 1
}
```
### `platform.getValidatorsAt`
Get the validators and their weights of a Subnet or the Primary Network at a given P-Chain height.
**Signature:**
```
platform.getValidatorsAt(
{
height: [int|string],
subnetID: string, // optional
}
)
```
* `height` is the P-Chain height to get the validator set at, or the string literal "proposed"
to return the validator set at this node's ProposerVM height.
* `subnetID` is the Subnet ID to get the validator set of. If not given, gets validator set of the
Primary Network.
**Example Call:**
```bash
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getValidatorsAt",
"params": {
"height":1
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"validators": {
"NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg": 2000000000000000,
"NodeID-GWPcbFJZFfZreETSoWjPimr846mXEKCtu": 2000000000000000,
"NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ": 2000000000000000,
"NodeID-NFBbbJ4qCmNaCzeW7sxErhvWqvEQMnYcN": 2000000000000000,
"NodeID-P7oB2McjBGgW2NXXWVYjV8JEDFoW9xDE5": 2000000000000000
}
},
"id": 1
}
```
### `platform.getAllValidatorsAt`
Get the validators and their weights of all Subnets and the Primary Network at a given P-Chain height.
**Signature:**
```
platform.getAllValidatorsAt(
{
height: [int|string],
}
)
```
**Example Call:**
```bash
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getAllValidatorsAt",
"params": {
"height":1
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"validatorSets": {
"11111111111111111111111111111111LpoYY": {
"validators": [
{
"publicKey": "0x8048109c3da13de0700f9f3590c3270bfc42277417f6d0cc84282947e1a1f8b4980fd3e3fe223acf0f56a5838890814a",
"weight": "2000000000000000",
"nodeIDs": [
"NodeID-P7oB2McjBGgW2NXXWVYjV8JEDFoW9xDE5"
]
},
{
"publicKey": "0xa058ff27a4c570664bfa28e34939368539a1340867951943d0f56fa8aac13bc09ff64f341acf8cc0cef74202c2d6f9c0",
"weight": "2000000000000000",
"nodeIDs": [
"NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ"
]
},
{
"publicKey": "0xa10b6955a85684a0f5c94b8381f04506f1bee60625927d372323f78b3d30196cc56c8618c77eaf429298e74673d832c3",
"weight": "2000000000000000",
"nodeIDs": [
"NodeID-NFBbbJ4qCmNaCzeW7sxErhvWqvEQMnYcN"
]
},
{
"publicKey": "0xaccd61ceb90c61628aa0fa34acab27ecb08f6897e9ccad283578c278c52109f9e10e4f8bc31aa6d7905c4e1623de367e",
"weight": "2000000000000000",
"nodeIDs": [
"NodeID-GWPcbFJZFfZreETSoWjPimr846mXEKCtu"
]
},
{
"publicKey": "0x900c9b119b5c82d781d4b49be78c3fc7ae65f2b435b7ed9e3a8b9a03e475edff86d8a64827fec8db23a6f236afbf127d",
"weight": "2000000000000000",
"nodeIDs": [
"NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg"
]
}
],
"totalWeight": "10000000000000000"
}
}
},
"id": 1
}
```
* `height` is the P-Chain height to get the validator set at, or the string literal "proposed" to return the validator set at this node's ProposerVM height.
### `platform.getValidatorFeeConfig`
Returns the validator fee configuration of the P-Chain.
**Signature:**
```
platform.getValidatorFeeConfig() -> {
capacity: uint64,
target: uint64,
minPrice: uint64,
excessConversionConstant: uint64
}
```
* `capacity` is the maximum number of L1 validators the chain is allowed to have at any given time
* `target` is the target number of L1 validators the chain should have to keep fees stable
* `minPrice` is the minimum price per L1 validator
* `excessConversionConstant` is used to convert excess L1 validators to a gas price
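To illustrate how `excessConversionConstant` converts the excess into a price: Avalanche's continuous-fee mechanism prices L1 validators along an exponential curve, roughly `minPrice * e^(excess / excessConversionConstant)`. The sketch below is a floating-point approximation for intuition only; the node uses a deterministic fixed-point version of the exponential.

```python
import math

def validator_price(min_price: int, excess: int, conversion_constant: int) -> int:
    """Approximate price per L1 validator: minPrice * e^(excess / K).

    Illustrative only: the actual implementation uses a fixed-point
    approximation of the exponential, not floating point.
    """
    return int(min_price * math.exp(excess / conversion_constant))

# With the example values from getValidatorFeeConfig / getValidatorFeeState
# (minPrice=512, excess=26956, excessConversionConstant=1246488515), the
# excess is tiny relative to K, so the price sits at the floor of 512.
```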
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getValidatorFeeConfig",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"capacity": 20000,
"target": 10000,
"targetPerSecond": 50000,
"minPrice": 512,
"excessConversionConstant": 1246488515
},
"id": 1
}
```
### `platform.getValidatorFeeState`
Returns the current validator fee state of the P-Chain.
**Signature:**
```
platform.getValidatorFeeState() -> {
excess: uint64,
price: uint64,
timestamp: string
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.getValidatorFeeState",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"excess": 26956,
"price": 512,
"timestamp": "2024-12-16T17:19:07Z"
},
"id": 1
}
```
### `platform.issueTx`
Issue a transaction to the Platform Chain.
**Signature:**
```
platform.issueTx({
tx: string,
encoding: string, // optional
}) -> { txID: string }
```
* `tx` is the byte representation of a transaction.
* `encoding` specifies the encoding format for the transaction bytes. Can only be `hex` when a value
is provided.
* `txID` is the transaction’s ID.
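As a point of reference for producing the `tx` parameter: AvalancheGo's `hex` encoding is `0x`-prefixed hex of the raw bytes with a 4-byte checksum appended (the last 4 bytes of the SHA-256 digest). A minimal sketch, assuming that checksum convention:

```python
import hashlib

def encode_hex(raw: bytes) -> str:
    """Hex-encode bytes with a trailing 4-byte SHA-256 checksum, 0x-prefixed."""
    checksum = hashlib.sha256(raw).digest()[-4:]
    return "0x" + (raw + checksum).hex()

def decode_hex(encoded: str) -> bytes:
    """Reverse of encode_hex; raises if the checksum does not match."""
    data = bytes.fromhex(encoded.removeprefix("0x"))
    payload, checksum = data[:-4], data[-4:]
    if hashlib.sha256(payload).digest()[-4:] != checksum:
        raise ValueError("bad checksum")
    return payload
```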
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.issueTx",
"params": {
"tx":"0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0000000005f041280000000005f9ca900000030390000000000000001fceda8f90fcb5d30614b99d79fc4baa29307762668f16eb0259a57c2d3b78c875c86ec2045792d4df2d926c40f829196e0bb97ee697af71f5b0a966dabff749634c8b729855e937715b0e44303fd1014daedc752006011b730",
"encoding": "hex"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"txID": "G3BuH6ytQ2averrLxJJugjWZHTRubzCrUZEXoheG5JMqL5ccY"
},
"id": 1
}
```
### `platform.sampleValidators`
Sample validators from the specified Subnet.
**Signature:**
```
platform.sampleValidators(
{
size: int,
subnetID: string, // optional
}
) ->
{
validators: []string
}
```
* `size` is the number of validators to sample.
* `subnetID` is the Subnet to sample from. If omitted, defaults to the Primary Network.
* Each element of `validators` is the ID of a validator.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"platform.sampleValidators",
"params" :{
"size":2
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"validators": [
"NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ",
"NodeID-NFBbbJ4qCmNaCzeW7sxErhvWqvEQMnYcN"
]
}
}
```
### `platform.validatedBy`
Get the Subnet that validates a given blockchain.
**Signature:**
```
platform.validatedBy(
{
blockchainID: string
}
) -> { subnetID: string }
```
* `blockchainID` is the blockchain’s ID.
* `subnetID` is the ID of the Subnet that validates the blockchain.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.validatedBy",
"params": {
"blockchainID": "KDYHHKjM4yTJTT8H8qPs5KXzE6gQH5TZrmP1qVr1P6qECj3XN"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"subnetID": "2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r"
},
"id": 1
}
```
### `platform.validates`
Get the IDs of the blockchains a Subnet validates.
**Signature:**
```
platform.validates(
{
subnetID: string
}
) -> { blockchainIDs: []string }
```
* `subnetID` is the Subnet’s ID.
* Each element of `blockchainIDs` is the ID of a blockchain the Subnet validates.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "platform.validates",
"params": {
"subnetID":"2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"blockchainIDs": [
"KDYHHKjM4yTJTT8H8qPs5KXzE6gQH5TZrmP1qVr1P6qECj3XN",
"2TtHFqEAAJ6b33dromYMqfgavGPF3iCpdG3hwNMiart2aB5QHi"
]
},
"id": 1
}
```
# Transaction Format
URL: /docs/api-reference/p-chain/txn-format
This file is meant to be the single source of truth for how we serialize
transactions in Avalanche's Platform Virtual Machine, aka the `Platform Chain`
or `P-Chain`. This document uses the [primitive serialization](/docs/api-reference/standards/serialization-primitives) format for packing and
[secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) for cryptographic
user identification.
## Codec ID
Some data is prepended with a codec ID (uint16) that denotes how the data should
be deserialized. Right now, the only valid codec ID is 0 (`0x00 0x00`).
## Proof of Possession
A BLS public key and a proof of possession of the key.
### What Proof of Possession Contains
* **PublicKey** is the 48 byte representation of the public key.
* **Signature** is the 96 byte signature by the private key over its public key.
### Gantt Proof of Possession Specification
```text
+------------+----------+-------------------------+
| public_key : [48]byte | 48 bytes |
+------------+----------+-------------------------+
| signature : [96]byte | 96 bytes |
+------------+----------+-------------------------+
| 144 bytes |
+-------------------------+
```
### Proto Proof of Possession Specification
```text
message ProofOfPossession {
bytes public_key = 1; // 48 bytes
bytes signature = 2; // 96 bytes
}
```
### Proof of Possession Example
```text
// Public Key:
0x85, 0x02, 0x5b, 0xca, 0x6a, 0x30, 0x2d, 0xc6,
0x13, 0x38, 0xff, 0x49, 0xc8, 0xba, 0xa5, 0x72,
0xde, 0xd3, 0xe8, 0x6f, 0x37, 0x59, 0x30, 0x4c,
0x7f, 0x61, 0x8a, 0x2a, 0x25, 0x93, 0xc1, 0x87,
0xe0, 0x80, 0xa3, 0xcf, 0xde, 0xc9, 0x50, 0x40,
0x30, 0x9a, 0xd1, 0xf1, 0x58, 0x95, 0x30, 0x67,
// Signature:
0x8b, 0x1d, 0x61, 0x33, 0xd1, 0x7e, 0x34, 0x83,
0x22, 0x0a, 0xd9, 0x60, 0xb6, 0xfd, 0xe1, 0x1e,
0x4e, 0x12, 0x14, 0xa8, 0xce, 0x21, 0xef, 0x61,
0x62, 0x27, 0xe5, 0xd5, 0xee, 0xf0, 0x70, 0xd7,
0x50, 0x0e, 0x6f, 0x7d, 0x44, 0x52, 0xc5, 0xa7,
0x60, 0x62, 0x0c, 0xc0, 0x67, 0x95, 0xcb, 0xe2,
0x18, 0xe0, 0x72, 0xeb, 0xa7, 0x6d, 0x94, 0x78,
0x8d, 0x9d, 0x01, 0x17, 0x6c, 0xe4, 0xec, 0xad,
0xfb, 0x96, 0xb4, 0x7f, 0x94, 0x22, 0x81, 0x89,
0x4d, 0xdf, 0xad, 0xd1, 0xc1, 0x74, 0x3f, 0x7f,
0x54, 0x9f, 0x1d, 0x07, 0xd5, 0x9d, 0x55, 0x65,
0x59, 0x27, 0xf7, 0x2b, 0xc6, 0xbf, 0x7c, 0x12
```
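Serializing a proof of possession is plain concatenation of the two fixed-size fields. A minimal sketch (`pack_proof_of_possession` is an illustrative helper, not AvalancheGo code):

```python
def pack_proof_of_possession(public_key: bytes, signature: bytes) -> bytes:
    """Serialize a BLS proof of possession: 48-byte key || 96-byte signature."""
    assert len(public_key) == 48, "BLS public key must be 48 bytes"
    assert len(signature) == 96, "BLS signature must be 96 bytes"
    return public_key + signature
```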
## Transferable Output
Transferable outputs wrap an output with an asset ID.
### What Transferable Output Contains
A transferable output contains an `AssetID` and an `Output`.
* **`AssetID`** is a 32-byte array that defines which asset this output
references. The only valid `AssetID` is the AVAX `AssetID`.
* **`Output`** is an output, as defined below. For example, this can be a SECP256K1 transfer output.
### Gantt Transferable Output Specification
```text
+----------+----------+-------------------------+
| asset_id : [32]byte | 32 bytes |
+----------+----------+-------------------------+
| output : Output | size(output) bytes |
+----------+----------+-------------------------+
| 32 + size(output) bytes |
+-------------------------+
```
### Proto Transferable Output Specification
```text
message TransferableOutput {
bytes asset_id = 1; // 32 bytes
Output output = 2; // size(output)
}
```
### Transferable Output Example
Let's make a transferable output:
* `AssetID: 0x6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a`
* `Output: "Example SECP256K1 Transfer Output from below"`
```text
[
AssetID <- 0x6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a,
Output <- 0x0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c,
]
=
[
// assetID:
0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40,
0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28,
0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6,
0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a,
// output:
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0xee, 0x5b, 0xe5, 0xc0, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01,
0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61,
0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c,
]
```
## Transferable Input
Transferable inputs describe a specific UTXO with a provided transfer input.
### What Transferable Input Contains
A transferable input contains a `TxID`, `UTXOIndex`, `AssetID`, and an `Input`.
* **`TxID`** is a 32-byte array that defines which transaction this input is consuming an output from.
* **`UTXOIndex`** is an int that defines which UTXO of the specified transaction this input is consuming.
* **`AssetID`** is a 32-byte array that defines which asset this input
references. The only valid `AssetID` is the AVAX `AssetID`.
* **`Input`** is a transferable input object.
### Gantt Transferable Input Specification
```text
+------------+----------+------------------------+
| tx_id : [32]byte | 32 bytes |
+------------+----------+------------------------+
| utxo_index : int | 04 bytes |
+------------+----------+------------------------+
| asset_id : [32]byte | 32 bytes |
+------------+----------+------------------------+
| input : Input | size(input) bytes |
+------------+----------+------------------------+
| 68 + size(input) bytes |
+------------------------+
```
### Proto Transferable Input Specification
```text
message TransferableInput {
bytes tx_id = 1; // 32 bytes
uint32 utxo_index = 2; // 04 bytes
bytes asset_id = 3; // 32 bytes
Input input = 4; // size(input)
}
```
### Transferable Input Example
Let's make a transferable input:
* **`TxID`**: `0xdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15`
* **`UTXOIndex`**: `1`
* **`AssetID`**: `0x6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a`
* **`Input`**: `"Example SECP256K1 Transfer Input from below"`
```text
[
TxID <- 0xdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15
UTXOIndex <- 0x00000001
AssetID <- 0x6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a
Input <- 0x0000000500000000ee6b28000000000100000000
]
=
[
// txID:
0xdf, 0xaf, 0xbd, 0xf5, 0xc8, 0x1f, 0x63, 0x5c,
0x92, 0x57, 0x82, 0x4f, 0xf2, 0x1c, 0x8e, 0x3e,
0x6f, 0x7b, 0x63, 0x2a, 0xc3, 0x06, 0xe1, 0x14,
0x46, 0xee, 0x54, 0x0d, 0x34, 0x71, 0x1a, 0x15,
// utxoIndex:
0x00, 0x00, 0x00, 0x01,
// assetID:
0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40,
0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28,
0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6,
0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a,
// input:
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00
]
```
## Outputs
Outputs have two possible types: `SECP256K1TransferOutput` and `SECP256K1OutputOwners`.
## SECP256K1 Transfer Output
A [secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) transfer output
allows for sending a quantity of an asset to a collection of addresses after a
specified Unix time. The only valid asset is AVAX.
### What SECP256K1 Transfer Output Contains
A secp256k1 transfer output contains a `TypeID`, `Amount`, `Locktime`, `Threshold`, and `Addresses`.
* **`TypeID`** is the ID for this output type. It is `0x00000007`.
* **`Amount`** is a long that specifies the quantity of the asset that this output owns. Must be positive.
* **`Locktime`** is a long that contains the Unix timestamp that this output can
be spent after. The Unix timestamp is specific to the second.
* **`Threshold`** is an int that names the number of unique signatures required
to spend the output. Must be less than or equal to the length of
**`Addresses`**. If **`Addresses`** is empty, must be 0.
* **`Addresses`** is a list of unique addresses that correspond to the private
keys that can be used to spend this output. Addresses must be sorted
lexicographically.
### Gantt SECP256K1 Transfer Output Specification
```text
+-----------+------------+--------------------------------+
| type_id : int | 4 bytes |
+-----------+------------+--------------------------------+
| amount : long | 8 bytes |
+-----------+------------+--------------------------------+
| locktime : long | 8 bytes |
+-----------+------------+--------------------------------+
| threshold : int | 4 bytes |
+-----------+------------+--------------------------------+
| addresses : [][20]byte | 4 + 20 * len(addresses) bytes |
+-----------+------------+--------------------------------+
| 28 + 20 * len(addresses) bytes |
+--------------------------------+
```
### Proto SECP256K1 Transfer Output Specification
```text
message SECP256K1TransferOutput {
uint32 type_id = 1; // 04 bytes
uint64 amount = 2; // 08 bytes
uint64 locktime = 3; // 08 bytes
uint32 threshold = 4; // 04 bytes
repeated bytes addresses = 5; // 04 bytes + 20 bytes * len(addresses)
}
```
### SECP256K1 Transfer Output Example
Let's make a secp256k1 transfer output with:
* **`TypeID`**: 7
* **`Amount`**: 3999000000
* **`Locktime`**: 0
* **`Threshold`**: 1
* **`Addresses`**:
* 0xda2bee01be82ecc00c34f361eda8eb30fb5a715c
```text
[
TypeID <- 0x00000007
Amount <- 0x00000000ee5be5c0
Locktime <- 0x0000000000000000
Threshold <- 0x00000001
Addresses <- [
0xda2bee01be82ecc00c34f361eda8eb30fb5a715c,
]
]
=
[
// type_id:
0x00, 0x00, 0x00, 0x07,
// amount:
0x00, 0x00, 0x00, 0x00, 0xee, 0x5b, 0xe5, 0xc0,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x01,
// addrs[0]:
0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0,
0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30,
0xfb, 0x5a, 0x71, 0x5c,
]
```
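The example bytes above can be reproduced with Python's `struct` module, following the big-endian widths in the Gantt table (`pack_secp256k1_transfer_output` is an illustrative helper, not AvalancheGo's implementation):

```python
import struct

def pack_secp256k1_transfer_output(amount: int, locktime: int,
                                   threshold: int, addresses: list) -> bytes:
    """Serialize a SECP256K1 transfer output (type ID 0x00000007)."""
    out = struct.pack(">IQQII", 7, amount, locktime, threshold, len(addresses))
    for addr in sorted(addresses):  # addresses must be sorted lexicographically
        assert len(addr) == 20
        out += addr
    return out

addr = bytes.fromhex("da2bee01be82ecc00c34f361eda8eb30fb5a715c")
packed = pack_secp256k1_transfer_output(3999000000, 0, 1, [addr])
# type_id | amount | locktime | threshold | len(addresses) | addrs[0]
assert packed.hex() == (
    "0000000700000000ee5be5c00000000000000000"
    "0000000100000001"
    "da2bee01be82ecc00c34f361eda8eb30fb5a715c"
)
```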
## SECP256K1 Output Owners Output
A [secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) output owners
output receives the staking rewards when the lockup period ends.
### What SECP256K1 Output Owners Output Contains
A secp256k1 output owners output contains a `TypeID`, `Locktime`, `Threshold`, and `Addresses`.
* **`TypeID`** is the ID for this output type. It is `0x0000000b`.
* **`Locktime`** is a long that contains the Unix timestamp that this output can
be spent after. The Unix timestamp is specific to the second.
* **`Threshold`** is an int that names the number of unique signatures required
to spend the output. Must be less than or equal to the length of
**`Addresses`**. If **`Addresses`** is empty, must be 0.
* **`Addresses`** is a list of unique addresses that correspond to the private
keys that can be used to spend this output. Addresses must be sorted
lexicographically.
### Gantt SECP256K1 Output Owners Output Specification
```text
+-----------+------------+--------------------------------+
| type_id : int | 4 bytes |
+-----------+------------+--------------------------------+
| locktime : long | 8 bytes |
+-----------+------------+--------------------------------+
| threshold : int | 4 bytes |
+-----------+------------+--------------------------------+
| addresses : [][20]byte | 4 + 20 * len(addresses) bytes |
+-----------+------------+--------------------------------+
| 20 + 20 * len(addresses) bytes |
+--------------------------------+
```
### Proto SECP256K1 Output Owners Output Specification
```text
message SECP256K1OutputOwnersOutput {
uint32 type_id = 1; // 04 bytes
uint64 locktime = 2; // 08 bytes
uint32 threshold = 3; // 04 bytes
repeated bytes addresses = 4; // 04 bytes + 20 bytes * len(addresses)
}
```
### SECP256K1 Output Owners Output Example
Let's make a secp256k1 output owners output with:
* **`TypeID`**: 11
* **`Locktime`**: 0
* **`Threshold`**: 1
* **`Addresses`**:
* 0xda2bee01be82ecc00c34f361eda8eb30fb5a715c
```text
[
TypeID <- 0x0000000b
Locktime <- 0x0000000000000000
Threshold <- 0x00000001
Addresses <- [
0xda2bee01be82ecc00c34f361eda8eb30fb5a715c,
]
]
=
[
// type_id:
0x00, 0x00, 0x00, 0x0b,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x01,
// addrs[0]:
0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0,
0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30,
0xfb, 0x5a, 0x71, 0x5c,
]
```
## Inputs
Inputs have one possible type: `SECP256K1TransferInput`.
## SECP256K1 Transfer Input
A [secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) transfer input
allows for spending an unspent secp256k1 transfer output.
### What SECP256K1 Transfer Input Contains
A secp256k1 transfer input contains a `TypeID`, `Amount`, and `AddressIndices`.
* **`TypeID`** is the ID for this input type. It is `0x00000005`.
* **`Amount`** is a long that specifies the quantity that this input should be
consuming from the UTXO. Must be positive. Must be equal to the amount
specified in the UTXO.
* **`AddressIndices`** is a list of unique ints that define which private keys
are being used to spend the UTXO. Each UTXO has an array of addresses that can
spend it; each int is an index into that address array identifying a key that
will sign this transaction. The array must be sorted low to high.
### Gantt SECP256K1 Transfer Input Specification
```text
+-------------------------+-------------------------------------+
| type_id : int | 4 bytes |
+-----------------+-------+-------------------------------------+
| amount : long | 8 bytes |
+-----------------+-------+-------------------------------------+
| address_indices : []int | 4 + 4 * len(address_indices) bytes |
+-----------------+-------+-------------------------------------+
| 16 + 4 * len(address_indices) bytes |
+-------------------------------------+
```
### Proto SECP256K1 Transfer Input Specification
```text
message SECP256K1TransferInput {
uint32 type_id = 1; // 04 bytes
uint64 amount = 2; // 08 bytes
repeated uint32 address_indices = 3; // 04 bytes + 4 bytes * len(address_indices)
}
```
### SECP256K1 Transfer Input Example
Let's make a payment input with:
* **`TypeID`**: 5
* **`Amount`**: 4000000000
* **`AddressIndices`**: \[0]
```text
[
TypeID <- 0x00000005
Amount <- 0x00000000ee6b2800,
AddressIndices <- [0x00000000]
]
=
[
// type_id:
0x00, 0x00, 0x00, 0x05,
// amount:
0x00, 0x00, 0x00, 0x00, 0xee, 0x6b, 0x28, 0x00,
// length:
0x00, 0x00, 0x00, 0x01,
// address_indices[0]
0x00, 0x00, 0x00, 0x00
]
```
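The same exercise works for the transfer input, again taking the big-endian widths from the Gantt table (`pack_secp256k1_transfer_input` is an illustrative helper, not AvalancheGo code):

```python
import struct

def pack_secp256k1_transfer_input(amount: int, address_indices: list) -> bytes:
    """Serialize a SECP256K1 transfer input (type ID 0x00000005)."""
    out = struct.pack(">IQI", 5, amount, len(address_indices))
    for idx in sorted(address_indices):  # indices must be sorted low to high
        out += struct.pack(">I", idx)
    return out

# Reproduces the example: type_id | amount | length | address_indices[0]
assert pack_secp256k1_transfer_input(4000000000, [0]).hex() == \
    "0000000500000000ee6b28000000000100000000"
```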
## Unsigned Transactions
Unsigned transactions contain the full content of a transaction with only the
signatures missing. Unsigned transactions have six possible types:
`AddValidatorTx`, `AddSubnetValidatorTx`, `AddDelegatorTx`, `CreateSubnetTx`,
`ImportTx`, and `ExportTx`. They embed `BaseTx`, which contains common fields
and operations.
## Unsigned BaseTx
### What Base TX Contains
A base TX contains a `TypeID`, `NetworkID`, `BlockchainID`, `Outputs`, `Inputs`, and `Memo`.
* **`TypeID`** is the ID for this type. It is `0x00000000`.
* **`NetworkID`** is an int that defines which network this transaction is meant
to be issued to. This value is meant to support transaction routing and is not
designed for replay attack prevention.
* **`BlockchainID`** is a 32-byte array that defines which blockchain this
transaction was issued to. This is used for replay attack prevention for
transactions that could otherwise be valid across networks or blockchains.
* **`Outputs`** is an array of transferable output objects. Outputs must be
sorted lexicographically by their serialized representation. The total
quantity of the assets created in these outputs must be less than or equal to
the total quantity of each asset consumed in the inputs minus the transaction
fee.
* **`Inputs`** is an array of transferable input objects. Inputs must be sorted
and unique. Inputs are sorted first lexicographically by their **`TxID`** and
then by the **`UTXOIndex`** from low to high. If there are inputs that have
the same **`TxID`** and **`UTXOIndex`**, then the transaction is invalid as
this would result in a double spend.
* **`Memo`** is an arbitrary byte field of up to 256 bytes.
### Gantt Base TX Specification
```text
+---------------+----------------------+-----------------------------------------+
| type_id : int | 4 bytes |
+---------------+----------------------+-----------------------------------------+
| network_id : int | 4 bytes |
+---------------+----------------------+-----------------------------------------+
| blockchain_id : [32]byte | 32 bytes |
+---------------+----------------------+-----------------------------------------+
| outputs : []TransferableOutput | 4 + size(outputs) bytes |
+---------------+----------------------+-----------------------------------------+
| inputs : []TransferableInput | 4 + size(inputs) bytes |
+---------------+----------------------+-----------------------------------------+
| memo : [256]byte | 4 + size(memo) bytes |
+---------------+----------------------+-----------------------------------------+
| 52 + size(outputs) + size(inputs) + size(memo) bytes |
+------------------------------------------------------+
```
### Proto Base TX Specification
```text
message BaseTx {
uint32 type_id = 1; // 04 bytes
uint32 network_id = 2; // 04 bytes
bytes blockchain_id = 3; // 32 bytes
repeated Output outputs = 4; // 04 bytes + size(outs)
repeated Input inputs = 5; // 04 bytes + size(ins)
bytes memo = 6; // 04 bytes + size(memo)
}
```
### Base TX Example
Let's make a base TX that uses the inputs and outputs from the previous examples:
* **`TypeID`**: `0`
* **`NetworkID`**: `12345`
* **`BlockchainID`**: `0x0000000000000000000000000000000000000000000000000000000000000000`
* **`Outputs`**:
* `"Example Transferable Output as defined above"`
* **`Inputs`**:
* `"Example Transferable Input as defined above"`
```text
[
TypeID <- 0x00000000
NetworkID <- 0x00003039
BlockchainID <- 0x0000000000000000000000000000000000000000000000000000000000000000
Outputs <- [
0x6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c
]
Inputs <- [
0xdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000
]
]
=
[
// type_id:
0x00, 0x00, 0x00, 0x00,
// networkID:
0x00, 0x00, 0x30, 0x39,
// blockchainID:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// number of outputs:
0x00, 0x00, 0x00, 0x01,
// transferable output:
0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40,
0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28,
0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6,
0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0xee, 0x5b, 0xe5, 0xc0, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01,
0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61,
0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c,
// number of inputs:
0x00, 0x00, 0x00, 0x01,
// transferable input:
0xdf, 0xaf, 0xbd, 0xf5, 0xc8, 0x1f, 0x63, 0x5c,
0x92, 0x57, 0x82, 0x4f, 0xf2, 0x1c, 0x8e, 0x3e,
0x6f, 0x7b, 0x63, 0x2a, 0xc3, 0x06, 0xe1, 0x14,
0x46, 0xee, 0x54, 0x0d, 0x34, 0x71, 0x1a, 0x15,
0x00, 0x00, 0x00, 0x01,
0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40,
0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28,
0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6,
0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a,
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00,
// Memo length:
0x00, 0x00, 0x00, 0x00,
]
```
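The full serialization above can be reassembled from the earlier worked examples. `pack_base_tx` below is an illustrative helper (not AvalancheGo code) that concatenates the fields in Gantt-table order, with length prefixes for the outputs, inputs, and memo; the output and input hex strings are the transferable output and input examples from earlier sections:

```python
import struct

def pack_base_tx(type_id, network_id, blockchain_id, outputs, inputs, memo=b""):
    """Serialize a BaseTx: ids, then length-prefixed outputs, inputs, memo."""
    assert len(blockchain_id) == 32 and len(memo) <= 256
    tx = struct.pack(">II", type_id, network_id) + blockchain_id
    tx += struct.pack(">I", len(outputs)) + b"".join(outputs)
    tx += struct.pack(">I", len(inputs)) + b"".join(inputs)
    tx += struct.pack(">I", len(memo)) + memo
    return tx

output = bytes.fromhex(
    "6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a"
    "0000000700000000ee5be5c000000000000000000000000100000001"
    "da2bee01be82ecc00c34f361eda8eb30fb5a715c")
inp = bytes.fromhex(
    "dfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15"
    "000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a"
    "0000000500000000ee6b28000000000100000000")
tx = pack_base_tx(0, 12345, bytes(32), [output], [inp])
assert tx[:8].hex() == "0000000000003039"  # type ID 0, network ID 12345
```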
## Unsigned Add Validator TX
### What Unsigned Add Validator TX Contains
An unsigned add validator TX contains a `BaseTx`, `Validator`, `Stake`,
`RewardsOwner`, and `Shares`. The `TypeID` for this type is `0x0000000c`.
* **`BaseTx`**
* **`Validator`** Validator has a `NodeID`, `StartTime`, `EndTime`, and `Weight`
* **`NodeID`** is 20 bytes which is the node ID of the validator.
* **`StartTime`** is a long which is the Unix time when the validator starts validating.
* **`EndTime`** is a long which is the Unix time when the validator stops validating.
* **`Weight`** is a long which is the amount the validator stakes
* **`Stake`** Stake has `LockedOuts`
* **`LockedOuts`** An array of Transferable Outputs that are locked for the
duration of the staking period. At the end of the staking period, these
outputs are refunded to their respective addresses.
* **`RewardsOwner`** A `SECP256K1OutputOwners`
* **`Shares`** is 10,000 times the percentage of the reward taken from delegators (the delegation fee rate)
### Gantt Unsigned Add Validator TX Specification
```text
+---------------+-----------------------+-----------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+---------------+-----------------------+-----------------------------------------+
| validator : Validator | 44 bytes |
+---------------+-----------------------+-----------------------------------------+
| stake : Stake | size(LockedOuts) bytes |
+---------------+-----------------------+-----------------------------------------+
| rewards_owner : SECP256K1OutputOwners | size(rewards_owner) bytes |
+---------------+-----------------------+-----------------------------------------+
| shares : Shares | 4 bytes |
+---------------+-----------------------+-----------------------------------------+
| 48 + size(stake) + size(rewards_owner) + size(base_tx) bytes |
+--------------------------------------------------------------+
```
### Proto Unsigned Add Validator TX Specification
```text
message AddValidatorTx {
BaseTx base_tx = 1; // size(base_tx)
Validator validator = 2; // 44 bytes
Stake stake = 3; // size(LockedOuts)
SECP256K1OutputOwners rewards_owner = 4; // size(rewards_owner)
uint32 shares = 5; // 04 bytes
}
```
### Unsigned Add Validator TX Example
Let's make an unsigned add validator TX that uses the inputs and outputs from the previous examples:
* **`BaseTx`**: `"Example BaseTx as defined above with ID set to 0c"`
* **`Validator`** Validator has a `NodeID`, `StartTime`, `EndTime`, and `Weight`
* **`NodeID`**: `0xe9094f73698002fd52c90819b457b9fbc866ab80`
* **`StartTime`**: `0x000000005f21f31d`
* **`EndTime`**: `0x000000005f497dc6`
* **`Weight`**: `0x000000000000d431`
* **`Stake`**: `0x0000000139c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d55008800000007000001d1a94a2000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c`
* **`RewardsOwner`**: `0x0000000b00000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c`
* **`Shares`**: `0x00000064`
```text
[
BaseTx <- 0x0000000c000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000
NodeID <- 0xe9094f73698002fd52c90819b457b9fbc866ab80
StartTime <- 0x000000005f21f31d
EndTime <- 0x000000005f497dc6
Weight <- 0x000000000000d431
Stake <- 0x0000000139c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d55008800000007000001d1a94a2000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c
RewardsOwner <- 0x0000000b00000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c
Shares <- 0x00000064
]
=
[
// base tx:
0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x30, 0x39,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01, 0x68, 0x70, 0xb7, 0xd6,
0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5,
0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd,
0xbf, 0xc8, 0xf4, 0xd6,
0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0xee, 0x5b, 0xe5, 0xc0, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01,
0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61,
0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c,
0x00, 0x00, 0x00, 0x01,
0xdf, 0xaf, 0xbd, 0xf5, 0xc8, 0x1f, 0x63, 0x5c,
0x92, 0x57, 0x82, 0x4f, 0xf2, 0x1c, 0x8e, 0x3e,
0x6f, 0x7b, 0x63, 0x2a, 0xc3, 0x06, 0xe1, 0x14,
0x46, 0xee, 0x54, 0x0d, 0x34, 0x71, 0x1a, 0x15,
0x00, 0x00, 0x00, 0x01,
0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40,
0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28,
0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6,
0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a,
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00,
// Memo length:
0x00, 0x00, 0x00, 0x00,
// Node ID
0xe9, 0x09, 0x4f, 0x73, 0x69, 0x80, 0x02, 0xfd,
0x52, 0xc9, 0x08, 0x19, 0xb4, 0x57, 0xb9, 0xfb,
0xc8, 0x66, 0xab, 0x80,
// StartTime
0x00, 0x00, 0x00, 0x00, 0x5f, 0x21, 0xf3, 0x1d,
// EndTime
0x00, 0x00, 0x00, 0x00, 0x5f, 0x49, 0x7d, 0xc6,
// Weight
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// Stake
0x00, 0x00, 0x00, 0x01, 0x39, 0xc3, 0x3a, 0x49,
0x9c, 0xe4, 0xc3, 0x3a, 0x3b, 0x09, 0xcd, 0xd2,
0xcf, 0xa0, 0x1a, 0xe7, 0x0d, 0xbf, 0x2d, 0x18,
0xb2, 0xd7, 0xd1, 0x68, 0x52, 0x44, 0x40, 0xe5,
0x5d, 0x55, 0x00, 0x88, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01,
0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a,
0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68,
0x61, 0xe1, 0xb2, 0x9c,
// RewardsOwner
0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01,
0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61,
0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c,
// Shares
0x00, 0x00, 0x00, 0x64,
]
```
## Unsigned Remove Avalanche L1 Validator TX
### What Unsigned Remove Avalanche L1 Validator TX Contains
An unsigned remove Avalanche L1 validator TX contains a `BaseTx`, `NodeID`,
`SubnetID`, and `SubnetAuth`. The `TypeID` for this type is 23 or `0x00000017`.
* **`BaseTx`**
* **`NodeID`** is the 20 byte node ID of the validator.
* **`SubnetID`** is the 32 byte Avalanche L1 ID (SubnetID) that the validator is being removed from.
* **`SubnetAuth`** contains `SigIndices` and has a type id of `0x0000000a`.
`SigIndices` is a list of unique ints that define the addresses signing the
control signature which proves that the issuer has the right to remove the
node from the Avalanche L1. The array must be sorted low to high.
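Following the layout above, `SubnetAuth` serializes as the type ID, the index count, and each index as a 4-byte int. A short sketch (the helper name is ours) that reproduces the `0x0000000a0000000100000000` value used in the example below:

```python
import struct

def pack_subnet_auth(sig_indices: list) -> bytes:
    """TypeID 0x0000000a, then the index count, then each uint32 index."""
    assert sig_indices == sorted(set(sig_indices)), "indices must be unique, low to high"
    body = b"".join(struct.pack(">I", i) for i in sig_indices)
    return struct.pack(">II", 0x0A, len(sig_indices)) + body

# One signer, signature index 0:
assert pack_subnet_auth([0]).hex() == "0000000a0000000100000000"
```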
### Gantt Unsigned Remove Avalanche L1 Validator TX Specification
```text
+---------------+----------------------+------------------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+---------------+----------------------+------------------------------------------------+
| node_id : [20]byte | 20 bytes |
+---------------+----------------------+------------------------------------------------+
| subnet_id : [32]byte | 32 bytes |
+---------------+----------------------+------------------------------------------------+
| sig_indices : SubnetAuth | 4 bytes + len(sig_indices) bytes |
+---------------+----------------------+------------------------------------------------+
| 56 + len(sig_indices) + size(base_tx) bytes |
+---------------------------------------------------------------------------------------+
```
### Proto Unsigned Remove Avalanche L1 Validator TX Specification
```text
message RemoveSubnetValidatorTx {
BaseTx base_tx = 1; // size(base_tx)
string node_id = 2; // 20 bytes
SubnetID subnet_id = 3; // 32 bytes
SubnetAuth subnet_auth = 4; // 04 bytes + len(sig_indices)
}
```
### Unsigned Remove Avalanche L1 Validator TX Example
Let's make an unsigned remove Avalanche L1 validator TX that uses the inputs and
outputs from the previous examples:
* **`BaseTx`**: `"Example BaseTx as defined above with ID set to 17"`
* **`NodeID`**: `0xe902a9a86640bfdb1cd0e36c0cc982b83e5765fa`
* **`SubnetID`**: `0x4a177205df5c29929d06db9d941f83d5ea985de302015e99252d16469a6610db`
* **`SubnetAuth`**: `0x0000000a0000000100000000`
```text
[
BaseTx <- 0x00000017000030393d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a7000000000000000000000000
NodeID <- 0xe902a9a86640bfdb1cd0e36c0cc982b83e5765fa
SubnetID <- 0x4a177205df5c29929d06db9d941f83d5ea985de302015e99252d16469a6610db
SubnetAuth <- 0x0000000a0000000100000000
]
=
[
// BaseTx
0x00, 0x00, 0x00, 0x17, 0x00, 0x00, 0x30, 0x39,
0x3d, 0x0a, 0xd1, 0x2b, 0x8e, 0xe8, 0x92, 0x8e,
0xdf, 0x24, 0x8c, 0xa9, 0x1c, 0xa5, 0x56, 0x00,
0xfb, 0x38, 0x3f, 0x07, 0xc3, 0x2b, 0xff, 0x1d,
0x6d, 0xec, 0x47, 0x2b, 0x25, 0xcf, 0x59, 0xa7,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
// NodeID
0xe9, 0x02, 0xa9, 0xa8, 0x66, 0x40, 0xbf, 0xdb,
0x1c, 0xd0, 0xe3, 0x6c, 0x0c, 0xc9, 0x82, 0xb8,
0x3e, 0x57, 0x65, 0xfa,
// SubnetID
0x4a, 0x17, 0x72, 0x05, 0xdf, 0x5c, 0x29, 0x92,
0x9d, 0x06, 0xdb, 0x9d, 0x94, 0x1f, 0x83, 0xd5,
0xea, 0x98, 0x5d, 0xe3, 0x02, 0x01, 0x5e, 0x99,
0x25, 0x2d, 0x16, 0x46, 0x9a, 0x66, 0x10, 0xdb,
// SubnetAuth
// SubnetAuth TypeID
0x00, 0x00, 0x00, 0x0a,
// SigIndices length
0x00, 0x00, 0x00, 0x01,
// SigIndices
0x00, 0x00, 0x00, 0x00,
]
```
## Unsigned Add Permissionless Validator TX
### What Unsigned Add Permissionless Validator TX Contains
An unsigned add permissionless validator TX contains a `BaseTx`, `Validator`,
`SubnetID`, `Signer`, `StakeOuts`, `ValidatorRewardsOwner`,
`DelegatorRewardsOwner`, and `DelegationShares`. The `TypeID` for this type is
25 or `0x00000019`.
* **`BaseTx`**
* **`Validator`** Validator has a `NodeID`, `StartTime`, `EndTime`, and `Weight`
* **`NodeID`** is the 20 byte node ID of the validator.
* **`StartTime`** is a long which is the Unix time when the validator starts validating.
* **`EndTime`** is a long which is the Unix time when the validator stops validating.
* **`Weight`** is a long which is the amount the validator stakes
* **`SubnetID`** is the 32 byte Avalanche L1 ID (SubnetID) of the Avalanche L1 this validator will validate.
* **`Signer`** If the \[SubnetID] is the primary network, \[Signer] is the type ID
28 (`0x1C`) followed by a [Proof of Possession](#proof-of-possession). If the
\[SubnetID] is not the primary network, this value is the empty signer, whose
byte representation is only the type ID 27 (`0x1B`).
* **`StakeOuts`** An array of Transferable Outputs. Where to send staked tokens when done validating.
* **`ValidatorRewardsOwner`** Where to send validation rewards when done validating.
* **`DelegatorRewardsOwner`** Where to send delegation rewards when done validating.
* **`DelegationShares`** a short which is the fee this validator charges
delegators as a percentage, times 10,000. For example, if this validator has
DelegationShares=300,000 then they take 30% of rewards from delegators.
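Using only the rule above (shares are the fee percentage times 10,000), the conversion is direct:

```python
def shares_to_fee_percent(delegation_shares: int) -> float:
    """Recover the delegation fee percentage from the encoded shares value."""
    return delegation_shares / 10_000

assert shares_to_fee_percent(300_000) == 30.0    # the 30% example from the text
assert shares_to_fee_percent(0x00004E20) == 2.0  # the shares value used in the example below
```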
### Gantt Unsigned Add Permissionless Validator TX Specification
```text
+---------------+----------------------+------------------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+---------------+----------------------+------------------------------------------------+
| validator : Validator | 44 bytes |
+---------------+----------------------+------------------------------------------------+
| subnet_id : [32]byte | 32 bytes |
+---------------+----------------------+------------------------------------------------+
| signer : Signer | 148 bytes |
+---------------+----------------------+------------------------------------------------+
| stake_outs : []TransferOut | 4 + size(stake_outs) bytes |
+---------------+----------------------+------------------------------------------------+
| validator_rewards_owner : SECP256K1OutputOwners | size(validator_rewards_owner) bytes |
+---------------+----------------------+------------------------------------------------+
| delegator_rewards_owner : SECP256K1OutputOwners | size(delegator_rewards_owner) bytes |
+---------------+----------------------+------------------------------------------------+
| delegation_shares : uint32 | 4 bytes |
+---------------+----------------------+------------------------------------------------+
| 232 + size(base_tx) + size(stake_outs) + |
| size(validator_rewards_owner) + size(delegator_rewards_owner) bytes |
+---------------------------------------------------------------------------------------+
```
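The fixed sizes in the table can be cross-checked: a `Signer` carrying a proof of possession is a 4-byte type ID plus a 48-byte BLS public key plus a 96-byte BLS signature, and the constant term in the size formula is the sum of the fixed-width rows. A quick sanity-check sketch:

```python
# Fixed-width pieces from the Gantt rows above:
VALIDATOR_LEN = 44        # NodeID (20) + StartTime (8) + EndTime (8) + Weight (8)
SUBNET_ID_LEN = 32
SIGNER_LEN = 4 + 48 + 96  # type ID + BLS public key + BLS signature
STAKE_OUTS_COUNT_LEN = 4  # the array-length prefix
SHARES_LEN = 4

assert SIGNER_LEN == 148
# The constant in the size formula at the bottom of the table:
assert VALIDATOR_LEN + SUBNET_ID_LEN + SIGNER_LEN + STAKE_OUTS_COUNT_LEN + SHARES_LEN == 232
```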
### Proto Unsigned Add Permissionless Validator TX Specification
```text
message AddPermissionlessValidatorTx {
BaseTx base_tx = 1; // size(base_tx)
Validator validator = 2; // 44 bytes
SubnetID subnet_id = 3; // 32 bytes
Signer signer = 4; // 148 bytes
repeated TransferOut stake_outs = 5; // 4 bytes + size(stake_outs)
SECP256K1OutputOwners validator_rewards_owner = 6; // size(validator_rewards_owner) bytes
SECP256K1OutputOwners delegator_rewards_owner = 7; // size(delegator_rewards_owner) bytes
uint32 delegation_shares = 8; // 4 bytes
}
```
### Unsigned Add Permissionless Validator TX Example
Let's make an unsigned add permissionless validator TX that uses the inputs and
outputs from the previous examples:
* **`BaseTx`**: `"Example BaseTx as defined above with ID set to 19"`
* **`Validator`**: `0x5fa29ed4356903dac2364713c60f57d8472c7dda000000006397616e0000000063beee6e000001d1a94a2000`
* **`SubnetID`**: `0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada`
* **`Signer`**: `0x0000001ca5af179e4188583893c2b99e1a8be27d90a9213cfbff1d75b74fe2bc9f3b072c2ded0863a9d9acd9033f223295810e429238e28d3c9b7f7212b63d746b2ae73a54fe08a3de61b132f2f89e9eeff97d4d7ca3a3c88986aa855cd36296fcfe8f02162d0258be494d267d4c5798bc081ab602ded90b0fc16d8a035e68ff5294794cb63ff1ee068fbfc2b4c8cd2d08ebf297`
* **`StakeOuts`**: `0x000000013d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000007000001d1a94a20000000000000000000000000010000000133eeffc64785cf9d80e7731d9f31f67bd03c5cf0`
* **`ValidatorRewardsOwner`**: `0x0000000b0000000000000000000000010000000172f3eb9aeaf8283011ce6e437fdecd65eace8f52`
* **`DelegatorRewardsOwner`**: `0x0000000b00000000000000000000000100000001b2b91313ac487c222445254e26cd026d21f6f440`
* **`DelegationShares`**: `0x00004e20`
```text
[
BaseTx <- 0x0000001900003039e902a9a86640bfdb1cd0e36c0cc982b83e5765fad5f6bbe6abdcce7b5ae7d7c700000000000000014a177205df5c29929d06db9d941f83d5ea985de302015e99252d16469a6610db000000003d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000005000001d1a94a2000000000010000000000000000
Validator <- 0x5fa29ed4356903dac2364713c60f57d8472c7dda000000006397616e0000000063beee6e000001d1a94a2000
SubnetID <- 0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada
Signer <- 0x0000001ca5af179e4188583893c2b99e1a8be27d90a9213cfbff1d75b74fe2bc9f3b072c2ded0863a9d9acd9033f223295810e429238e28d3c9b7f7212b63d746b2ae73a54fe08a3de61b132f2f89e9eeff97d4d7ca3a3c88986aa855cd36296fcfe8f02162d0258be494d267d4c5798bc081ab602ded90b0fc16d8a035e68ff5294794cb63ff1ee068fbfc2b4c8cd2d08ebf297
StakeOuts <- 0x000000013d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000007000001d1a94a20000000000000000000000000010000000133eeffc64785cf9d80e7731d9f31f67bd03c5cf0
ValidatorRewardsOwner <- 0x0000000b0000000000000000000000010000000172f3eb9aeaf8283011ce6e437fdecd65eace8f52
DelegatorRewardsOwner <- 0x0000000b00000000000000000000000100000001b2b91313ac487c222445254e26cd026d21f6f440
DelegationShares <- 0x00004e20
]
=
[
// BaseTx
0x00, 0x00, 0x00, 0x19, 0x00, 0x00, 0x30, 0x39,
0xe9, 0x02, 0xa9, 0xa8, 0x66, 0x40, 0xbf, 0xdb,
0x1c, 0xd0, 0xe3, 0x6c, 0x0c, 0xc9, 0x82, 0xb8,
0x3e, 0x57, 0x65, 0xfa, 0xd5, 0xf6, 0xbb, 0xe6,
0xab, 0xdc, 0xce, 0x7b, 0x5a, 0xe7, 0xd7, 0xc7,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x4a, 0x17, 0x72, 0x05, 0xdf, 0x5c, 0x29, 0x92,
0x9d, 0x06, 0xdb, 0x9d, 0x94, 0x1f, 0x83, 0xd5,
0xea, 0x98, 0x5d, 0xe3, 0x02, 0x01, 0x5e, 0x99,
0x25, 0x2d, 0x16, 0x46, 0x9a, 0x66, 0x10, 0xdb,
0x00, 0x00, 0x00, 0x00, 0x3d, 0x0a, 0xd1, 0x2b,
0x8e, 0xe8, 0x92, 0x8e, 0xdf, 0x24, 0x8c, 0xa9,
0x1c, 0xa5, 0x56, 0x00, 0xfb, 0x38, 0x3f, 0x07,
0xc3, 0x2b, 0xff, 0x1d, 0x6d, 0xec, 0x47, 0x2b,
0x25, 0xcf, 0x59, 0xa7, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
// Validator
// NodeID
0x5f, 0xa2, 0x9e, 0xd4, 0x35, 0x69, 0x03, 0xda,
0xc2, 0x36, 0x47, 0x13, 0xc6, 0x0f, 0x57, 0xd8,
0x47, 0x2c, 0x7d, 0xda,
// Start time
0x00, 0x00, 0x00, 0x00, 0x63, 0x97, 0x61, 0x6e,
// End time
0x00, 0x00, 0x00, 0x00, 0x63, 0xbe, 0xee, 0x6e,
// Weight
0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00,
// SubnetID
0xf3, 0x08, 0x6d, 0x7b, 0xfc, 0x35, 0xbe, 0x1c,
0x68, 0xdb, 0x66, 0x4b, 0xa9, 0xce, 0x61, 0xa2,
0x06, 0x01, 0x26, 0xb0, 0xd6, 0xb4, 0xbf, 0xb0,
0x9f, 0xd7, 0xa5, 0xfb, 0x76, 0x78, 0xca, 0xda,
// Signer
// TypeID
0x00, 0x00, 0x00, 0x1c,
// Pub key
0xa5, 0xaf, 0x17, 0x9e, 0x41, 0x88, 0x58, 0x38,
0x93, 0xc2, 0xb9, 0x9e, 0x1a, 0x8b, 0xe2, 0x7d,
0x90, 0xa9, 0x21, 0x3c, 0xfb, 0xff, 0x1d, 0x75,
0xb7, 0x4f, 0xe2, 0xbc, 0x9f, 0x3b, 0x07, 0x2c,
0x2d, 0xed, 0x08, 0x63, 0xa9, 0xd9, 0xac, 0xd9,
0x03, 0x3f, 0x22, 0x32, 0x95, 0x81, 0x0e, 0x42,
// Sig
0x92, 0x38, 0xe2, 0x8d, 0x3c, 0x9b, 0x7f, 0x72,
0x12, 0xb6, 0x3d, 0x74, 0x6b, 0x2a, 0xe7, 0x3a,
0x54, 0xfe, 0x08, 0xa3, 0xde, 0x61, 0xb1, 0x32,
0xf2, 0xf8, 0x9e, 0x9e, 0xef, 0xf9, 0x7d, 0x4d,
0x7c, 0xa3, 0xa3, 0xc8, 0x89, 0x86, 0xaa, 0x85,
0x5c, 0xd3, 0x62, 0x96, 0xfc, 0xfe, 0x8f, 0x02,
0x16, 0x2d, 0x02, 0x58, 0xbe, 0x49, 0x4d, 0x26,
0x7d, 0x4c, 0x57, 0x98, 0xbc, 0x08, 0x1a, 0xb6,
0x02, 0xde, 0xd9, 0x0b, 0x0f, 0xc1, 0x6d, 0x8a,
0x03, 0x5e, 0x68, 0xff, 0x52, 0x94, 0x79, 0x4c,
0xb6, 0x3f, 0xf1, 0xee, 0x06, 0x8f, 0xbf, 0xc2,
0xb4, 0xc8, 0xcd, 0x2d, 0x08, 0xeb, 0xf2, 0x97,
// Stake outs
// Num stake outs
0x00, 0x00, 0x00, 0x01,
// AssetID
0x3d, 0x0a, 0xd1, 0x2b, 0x8e, 0xe8, 0x92, 0x8e,
0xdf, 0x24, 0x8c, 0xa9, 0x1c, 0xa5, 0x56, 0x00,
0xfb, 0x38, 0x3f, 0x07, 0xc3, 0x2b, 0xff, 0x1d,
0x6d, 0xec, 0x47, 0x2b, 0x25, 0xcf, 0x59, 0xa7,
// Output
// typeID
0x00, 0x00, 0x00, 0x07,
// Amount
0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00,
// Locktime
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// Threshold
0x00, 0x00, 0x00, 0x01,
// Num addrs
0x00, 0x00, 0x00, 0x01,
// Addr 0
0x33, 0xee, 0xff, 0xc6, 0x47, 0x85, 0xcf, 0x9d,
0x80, 0xe7, 0x73, 0x1d, 0x9f, 0x31, 0xf6, 0x7b,
0xd0, 0x3c, 0x5c, 0xf0,
// Validator rewards owner
// TypeID
0x00, 0x00, 0x00, 0x0b,
// Locktime
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// Threshold
0x00, 0x00, 0x00, 0x01,
// Num addrs
0x00, 0x00, 0x00, 0x01,
// Addr 0
0x72, 0xf3, 0xeb, 0x9a, 0xea, 0xf8, 0x28, 0x30,
0x11, 0xce, 0x6e, 0x43, 0x7f, 0xde, 0xcd, 0x65,
0xea, 0xce, 0x8f, 0x52,
// Delegator rewards owner
// TypeID
0x00, 0x00, 0x00, 0x0b,
// Locktime
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// Threshold
0x00, 0x00, 0x00, 0x01,
// Num addrs
0x00, 0x00, 0x00, 0x01,
// Addr 0
0xb2, 0xb9, 0x13, 0x13, 0xac, 0x48, 0x7c, 0x22,
0x24, 0x45, 0x25, 0x4e, 0x26, 0xcd, 0x02, 0x6d,
0x21, 0xf6, 0xf4, 0x40,
// Delegation shares
0x00, 0x00, 0x4e, 0x20,
]
```
## Unsigned Add Permissionless Delegator TX
### What Unsigned Add Permissionless Delegator TX Contains
An unsigned add permissionless delegator TX contains a `BaseTx`, `Validator`,
`SubnetID`, `StakeOuts`, and `DelegatorRewardsOwner`. The `TypeID` for this type
is 26 or `0x0000001a`.
* **`BaseTx`**
* **`Validator`** Validator has a `NodeID`, `StartTime`, `EndTime`, and `Weight`
* **`NodeID`** is the 20 byte node ID of the validator.
* **`StartTime`** is a long which is the Unix time when the validator starts validating.
* **`EndTime`** is a long which is the Unix time when the validator stops validating.
* **`Weight`** is a long which is the amount the validator stakes
* **`SubnetID`** is the 32 byte Avalanche L1 ID (SubnetID) of the Avalanche L1 this delegation is on.
* **`StakeOuts`** An array of Transferable Outputs. Where to send staked tokens when done validating.
* **`DelegatorRewardsOwner`** Where to send staking rewards when done validating.
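The example values can be decoded mechanically. A small parser sketch (names are ours) for the `StakeOuts` array, applied to the hex used in the example below:

```python
import struct

def parse_stake_outs(raw: bytes) -> list:
    """Decode StakeOuts: a count, then per output an AssetID and a SECP256K1 transfer output."""
    (count,) = struct.unpack_from(">I", raw, 0)
    offset, outs = 4, []
    for _ in range(count):
        asset_id = raw[offset:offset + 32]
        type_id, amount, locktime, threshold, n_addrs = struct.unpack_from(
            ">IQQII", raw, offset + 32
        )
        offset += 60  # AssetID (32) + type ID (4) + amount (8) + locktime (8) + threshold (4) + count (4)
        addrs = [raw[offset + 20 * i:offset + 20 * (i + 1)] for i in range(n_addrs)]
        offset += 20 * n_addrs
        outs.append({"asset_id": asset_id.hex(), "type_id": type_id, "amount": amount,
                     "locktime": locktime, "threshold": threshold,
                     "addresses": [a.hex() for a in addrs]})
    return outs

stake_outs_hex = (
    "000000013d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a7"
    "00000007000001d1a94a2000000000000000000000000001"
    "0000000133eeffc64785cf9d80e7731d9f31f67bd03c5cf0"
)
out = parse_stake_outs(bytes.fromhex(stake_outs_hex))[0]
assert out["amount"] == 2_000_000_000_000 and out["threshold"] == 1
```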
### Gantt Unsigned Add Permissionless Delegator TX Specification
```text
+---------------+----------------------+------------------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+---------------+----------------------+------------------------------------------------+
| validator : Validator | 44 bytes |
+---------------+----------------------+------------------------------------------------+
| subnet_id : [32]byte | 32 bytes |
+---------------+----------------------+------------------------------------------------+
| stake_outs : []TransferOut | 4 + size(stake_outs) bytes |
+---------------+----------------------+------------------------------------------------+
| delegator_rewards_owner : SECP256K1OutputOwners | size(delegator_rewards_owner) bytes |
+---------------+----------------------+------------------------------------------------+
| 80 + size(base_tx) + size(stake_outs) + size(delegator_rewards_owner) bytes |
+---------------------------------------------------------------------------------------+
```
### Proto Unsigned Add Permissionless Delegator TX Specification
```text
message AddPermissionlessDelegatorTx {
BaseTx base_tx = 1; // size(base_tx)
Validator validator = 2; // 44 bytes
SubnetID subnet_id = 3; // 32 bytes
repeated TransferOut stake_outs = 4; // 4 bytes + size(stake_outs)
SECP256K1OutputOwners delegator_rewards_owner = 5; // size(delegator_rewards_owner) bytes
}
```
### Unsigned Add Permissionless Delegator TX Example
Let's make an unsigned add permissionless delegator TX that uses the inputs and
outputs from the previous examples:
* **`BaseTx`**: `"Example BaseTx as defined above with ID set to 1a"`
* **`Validator`**: `0x5fa29ed4356903dac2364713c60f57d8472c7dda00000000639761970000000063beee97000001d1a94a2000`
* **`SubnetID`**: `0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada`
* **`StakeOuts`**: `0x000000013d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000007000001d1a94a20000000000000000000000000010000000133eeffc64785cf9d80e7731d9f31f67bd03c5cf0`
* **`DelegatorRewardsOwner`**: `0x0000000b0000000000000000000000010000000172f3eb9aeaf8283011ce6e437fdecd65eace8f52`
```text
[
BaseTx <- 0x0000001a00003039e902a9a86640bfdb1cd0e36c0cc982b83e5765fad5f6bbe6abdcce7b5ae7d7c700000000000000014a177205df5c29929d06db9d941f83d5ea985de302015e99252d16469a6610db000000003d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000005000001d1a94a2000000000010000000000000000
Validator <- 0x5fa29ed4356903dac2364713c60f57d8472c7dda00000000639761970000000063beee97000001d1a94a2000
SubnetID <- 0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada
StakeOuts <- 0x000000013d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000007000001d1a94a20000000000000000000000000010000000133eeffc64785cf9d80e7731d9f31f67bd03c5cf0
DelegatorRewardsOwner <- 0x0000000b0000000000000000000000010000000172f3eb9aeaf8283011ce6e437fdecd65eace8f52
]
=
[
// BaseTx
0x00, 0x00, 0x00, 0x1a, 0x00, 0x00, 0x30, 0x39,
0xe9, 0x02, 0xa9, 0xa8, 0x66, 0x40, 0xbf, 0xdb,
0x1c, 0xd0, 0xe3, 0x6c, 0x0c, 0xc9, 0x82, 0xb8,
0x3e, 0x57, 0x65, 0xfa, 0xd5, 0xf6, 0xbb, 0xe6,
0xab, 0xdc, 0xce, 0x7b, 0x5a, 0xe7, 0xd7, 0xc7,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x4a, 0x17, 0x72, 0x05, 0xdf, 0x5c, 0x29, 0x92,
0x9d, 0x06, 0xdb, 0x9d, 0x94, 0x1f, 0x83, 0xd5,
0xea, 0x98, 0x5d, 0xe3, 0x02, 0x01, 0x5e, 0x99,
0x25, 0x2d, 0x16, 0x46, 0x9a, 0x66, 0x10, 0xdb,
0x00, 0x00, 0x00, 0x00, 0x3d, 0x0a, 0xd1, 0x2b,
0x8e, 0xe8, 0x92, 0x8e, 0xdf, 0x24, 0x8c, 0xa9,
0x1c, 0xa5, 0x56, 0x00, 0xfb, 0x38, 0x3f, 0x07,
0xc3, 0x2b, 0xff, 0x1d, 0x6d, 0xec, 0x47, 0x2b,
0x25, 0xcf, 0x59, 0xa7, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
// Validator
// NodeID
0x5f, 0xa2, 0x9e, 0xd4, 0x35, 0x69, 0x03, 0xda,
0xc2, 0x36, 0x47, 0x13, 0xc6, 0x0f, 0x57, 0xd8,
0x47, 0x2c, 0x7d, 0xda,
// Start time
0x00, 0x00, 0x00, 0x00, 0x63, 0x97, 0x61, 0x97,
// End time
0x00, 0x00, 0x00, 0x00, 0x63, 0xbe, 0xee, 0x97,
// Weight
0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00,
// Stake_outs
// Num stake outs
0x00, 0x00, 0x00, 0x01,
// Stake out 0
// AssetID
0x3d, 0x0a, 0xd1, 0x2b, 0x8e, 0xe8, 0x92, 0x8e,
0xdf, 0x24, 0x8c, 0xa9, 0x1c, 0xa5, 0x56, 0x00,
0xfb, 0x38, 0x3f, 0x07, 0xc3, 0x2b, 0xff, 0x1d,
0x6d, 0xec, 0x47, 0x2b, 0x25, 0xcf, 0x59, 0xa7,
// TypeID
0x00, 0x00, 0x00, 0x07,
// Amount
0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00,
// Locktime
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// Threshold
0x00, 0x00, 0x00, 0x01,
// Num addrs
0x00, 0x00, 0x00, 0x01,
// Addr 0
0x33, 0xee, 0xff, 0xc6, 0x47, 0x85, 0xcf, 0x9d,
0x80, 0xe7, 0x73, 0x1d, 0x9f, 0x31, 0xf6, 0x7b,
0xd0, 0x3c, 0x5c, 0xf0,
// Delegator_rewards_owner
// TypeID
0x00, 0x00, 0x00, 0x0b,
// Locktime
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// Threshold
0x00, 0x00, 0x00, 0x01,
// Num addrs
0x00, 0x00, 0x00, 0x01,
// Addr 0
0x72, 0xf3, 0xeb, 0x9a, 0xea, 0xf8, 0x28, 0x30,
0x11, 0xce, 0x6e, 0x43, 0x7f, 0xde, 0xcd, 0x65,
0xea, 0xce, 0x8f, 0x52,
]
```
## Unsigned Transform Avalanche L1 TX
> **Note:** This transaction type has been disabled post-activation of ACP-77 (Etna upgrade). The `TransformSubnetTx` is no longer accepted on the P-Chain after the activation of this upgrade.
Transforms a permissioned Avalanche L1 into a permissionless Avalanche L1. Must be signed by the Avalanche L1 owner.
### What Unsigned Transform Avalanche L1 TX Contains
An unsigned transform Avalanche L1 TX contains a `BaseTx`, `SubnetID`, `AssetID`,
`InitialSupply`, `MaximumSupply`, `MinConsumptionRate`, `MaxConsumptionRate`,
`MinValidatorStake`, `MaxValidatorStake`, `MinStakeDuration`,
`MaxStakeDuration`, `MinDelegationFee`, `MinDelegatorStake`,
`MaxValidatorWeightFactor`, `UptimeRequirement`, and `SubnetAuth`. The `TypeID`
for this type is 24 or `0x00000018`.
* **`BaseTx`**
* **`SubnetID`** a 32-byte Avalanche L1 ID of the Avalanche L1 to transform.
* **`AssetID`** is a 32-byte array that defines which asset to use when staking on the Avalanche L1.
* Restrictions
* Must not be the Empty ID
* Must not be the AVAX ID
* **`InitialSupply`** is a long which is the amount to initially specify as the current supply.
* Restrictions
* Must be > 0
* **`MaximumSupply`** is a long which is the amount to specify as the maximum token supply.
* Restrictions
* Must be >= \[InitialSupply]
* **`MinConsumptionRate`** is a long which is the rate to allocate funds if the
validator's stake duration is 0.
* **`MaxConsumptionRate`** is a long which is the rate to allocate funds if the
validator's stake duration is equal to the minting period.
* Restrictions
* Must be `>=` \[MinConsumptionRate]
* Must be `<=` \[`reward.PercentDenominator`]
* **`MinValidatorStake`** is a long which is the minimum amount of funds required to become a validator.
* Restrictions
* Must be `>` 0
* Must be `<=` \[InitialSupply]
* **`MaxValidatorStake`** is a long which is the maximum amount of funds a
single validator can be allocated, including delegated funds.
* Restrictions:
* Must be `>=` \[MinValidatorStake]
* Must be `<=` \[MaximumSupply]
* **`MinStakeDuration`** is a short which is the minimum number of seconds a staker can stake for.
* Restrictions
* Must be `>` 0
* **`MaxStakeDuration`** is a short which is the maximum number of seconds a staker can stake for.
* Restrictions
* Must be `>=` \[MinStakeDuration]
* Must be `<=` \[GlobalMaxStakeDuration]
* **`MinDelegationFee`** is a short which is the minimum percentage a validator
must charge a delegator for delegating.
* Restrictions
* Must be `<=` \[`reward.PercentDenominator`]
* **`MinDelegatorStake`** is a long which is the minimum amount of funds required to become a delegator.
* Restrictions
* Must be `>` 0
* **`MaxValidatorWeightFactor`** is a byte which is the factor used to calculate
the maximum amount of delegation a validator can receive. Note: a value of 1
effectively disables delegation.
* Restrictions
* Must be `>` 0
* **`UptimeRequirement`** is a short which is the minimum percentage a validator
must be online and responsive to receive a reward.
* Restrictions
* Must be `<=` \[`reward.PercentDenominator`]
* **`SubnetAuth`** contains `SigIndices` and has a type id of `0x0000000a`.
`SigIndices` is a list of unique ints that define the addresses signing the
control signature that authorizes this transformation. The array must be sorted
low to high.
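The restrictions above can be sketched as a single validation pass. This is illustrative only: `PERCENT_DENOMINATOR` and `GLOBAL_MAX_STAKE_DURATION` stand in for `reward.PercentDenominator` and the platform-wide maximum stake duration, and the values used here are assumptions taken from the example below:

```python
PERCENT_DENOMINATOR = 1_000_000        # assumed value of reward.PercentDenominator
GLOBAL_MAX_STAKE_DURATION = 365 * 24 * 60 * 60  # assumed platform-wide maximum (one year)

def check_transform(tx: dict) -> None:
    """Assert every restriction listed above; raises AssertionError on violation."""
    assert tx["initial_supply"] > 0
    assert tx["maximum_supply"] >= tx["initial_supply"]
    assert tx["min_consumption_rate"] <= tx["max_consumption_rate"] <= PERCENT_DENOMINATOR
    assert 0 < tx["min_validator_stake"] <= tx["initial_supply"]
    assert tx["min_validator_stake"] <= tx["max_validator_stake"] <= tx["maximum_supply"]
    assert 0 < tx["min_stake_duration"] <= tx["max_stake_duration"] <= GLOBAL_MAX_STAKE_DURATION
    assert tx["min_delegation_fee"] <= PERCENT_DENOMINATOR
    assert tx["min_delegator_stake"] > 0
    assert tx["max_validator_weight_factor"] > 0
    assert tx["uptime_requirement"] <= PERCENT_DENOMINATOR

# The values from the example below pass every check:
check_transform({
    "initial_supply": 0x000000E8D4A51000,
    "maximum_supply": 0x000009184E72A000,
    "min_consumption_rate": 0x01,
    "max_consumption_rate": 0x0A,
    "min_validator_stake": 0x000000174876E800,
    "max_validator_stake": 0x000001D1A94A2000,
    "min_stake_duration": 0x00015180,
    "max_stake_duration": 0x01E13380,
    "min_delegation_fee": 0x00002710,
    "min_delegator_stake": 0x000000174876E800,
    "max_validator_weight_factor": 0x05,
    "uptime_requirement": 0x000C3500,
})
```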
### Gantt Unsigned Transform Avalanche L1 TX Specification
```text
+----------------------+------------------+----------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+----------------------+------------------+----------------------------------+
| subnet_id : [32]byte | 32 bytes |
+----------------------+------------------+----------------------------------+
| asset_id : [32]byte | 32 bytes |
+----------------------+------------------+----------------------------------+
| initial_supply : long | 8 bytes |
+----------------------+------------------+----------------------------------+
| maximum_supply : long | 8 bytes |
+----------------------+------------------+----------------------------------+
| min_consumption_rate : long | 8 bytes |
+----------------------+------------------+----------------------------------+
| max_consumption_rate : long | 8 bytes |
+----------------------+------------------+----------------------------------+
| min_validator_stake : long | 8 bytes |
+----------------------+------------------+----------------------------------+
| max_validator_stake : long | 8 bytes |
+----------------------+------------------+----------------------------------+
| min_stake_duration : short | 4 bytes |
+----------------------+------------------+----------------------------------+
| max_stake_duration : short | 4 bytes |
+----------------------+------------------+----------------------------------+
| min_delegation_fee : short | 4 bytes |
+----------------------+------------------+----------------------------------+
| min_delegator_stake : long | 8 bytes |
+----------------------+------------------+----------------------------------+
| max_validator_weight_factor : byte | 1 byte |
+----------------------+------------------+----------------------------------+
| uptime_requirement : short | 4 bytes |
+----------------------+------------------+----------------------------------+
| subnet_auth : SubnetAuth | 4 bytes + len(sig_indices) bytes |
+----------------------+------------------+----------------------------------+
| 141 + size(base_tx) + len(sig_indices) bytes |
+----------------------------------------------------------------------------+
```
### Proto Unsigned Transform Avalanche L1 TX Specification
```text
message TransformSubnetTx {
BaseTx base_tx = 1; // size(base_tx)
SubnetID subnet_id = 2; // 32 bytes
bytes asset_id = 3; // 32 bytes
uint64 initial_supply = 4; // 08 bytes
uint64 maximum_supply = 5; // 08 bytes
uint64 min_consumption_rate = 6; // 08 bytes
uint64 max_consumption_rate = 7; // 08 bytes
uint64 min_validator_stake = 8; // 08 bytes
uint64 max_validator_stake = 9; // 08 bytes
uint32 min_stake_duration = 10; // 04 bytes
uint32 max_stake_duration = 11; // 04 bytes
uint32 min_delegation_fee = 12; // 04 bytes
uint64 min_delegator_stake = 13; // 08 bytes
byte max_validator_weight_factor = 14; // 01 byte
uint32 uptime_requirement = 15; // 04 bytes
SubnetAuth subnet_auth = 16; // 04 bytes + len(sig_indices)
}
```
### Unsigned Transform Avalanche L1 TX Example
Let's make an unsigned transform Avalanche L1 TX that uses the inputs and outputs from the previous examples:
* **`BaseTx`**: `"Example BaseTx as defined above with ID set to 18"`
* **`SubnetID`**: `0x5fa29ed4356903dac2364713c60f57d8472c7dda4a5e08d88a88ad8ea71aed60`
* **`AssetID`**: `0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada`
* **`InitialSupply`**: `0x000000e8d4a51000`
* **`MaximumSupply`**: `0x000009184e72a000`
* **`MinConsumptionRate`**: `0x0000000000000001`
* **`MaxConsumptionRate`**: `0x000000000000000a`
* **`MinValidatorStake`**: `0x000000174876e800`
* **`MaxValidatorStake`**: `0x000001d1a94a2000`
* **`MinStakeDuration`**: `0x00015180`
* **`MaxStakeDuration`**: `0x01e13380`
* **`MinDelegationFee`**: `0x00002710`
* **`MinDelegatorStake`**: `0x000000174876e800`
* **`MaxValidatorWeightFactor`**: `0x05`
* **`UptimeRequirement`**: `0x000c3500`
* **`SubnetAuth`**:
* **`TypeID`**: `0x0000000a`
* **`SigIndices`**: `0x00000000`
```text
[
BaseTx <- 0x0000001800003039e902a9a86640bfdb1cd0e36c0cc982b83e5765fad5f6bbe6abdcce7b5ae7d7c700000000000000014a177205df5c29929d06db9d941f83d5ea985de302015e99252d16469a6610db000000003d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a70000000500000000000f4240000000010000000000000000
SubnetID <- 0x5fa29ed4356903dac2364713c60f57d8472c7dda4a5e08d88a88ad8ea71aed60
AssetID <- 0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada
InitialSupply <- 0x000000e8d4a51000
MaximumSupply <- 0x000009184e72a000
MinConsumptionRate <- 0x0000000000000001
MaxConsumptionRate <- 0x000000000000000a
MinValidatorStake <- 0x000000174876e800
MaxValidatorStake <- 0x000001d1a94a2000
MinStakeDuration <- 0x00015180
MaxStakeDuration <- 0x01e13380
MinDelegationFee <- 0x00002710
MinDelegatorStake <- 0x000000174876e800
MaxValidatorWeightFactor <- 0x05
UptimeRequirement <- 0x000c3500
SubnetAuth <- 0x0000000a0000000100000000
]
=
[
// BaseTx:
0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x30, 0x39,
0xe9, 0x02, 0xa9, 0xa8, 0x66, 0x40, 0xbf, 0xdb,
0x1c, 0xd0, 0xe3, 0x6c, 0x0c, 0xc9, 0x82, 0xb8,
0x3e, 0x57, 0x65, 0xfa, 0xd5, 0xf6, 0xbb, 0xe6,
0xab, 0xdc, 0xce, 0x7b, 0x5a, 0xe7, 0xd7, 0xc7,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x4a, 0x17, 0x72, 0x05, 0xdf, 0x5c, 0x29, 0x92,
0x9d, 0x06, 0xdb, 0x9d, 0x94, 0x1f, 0x83, 0xd5,
0xea, 0x98, 0x5d, 0xe3, 0x02, 0x01, 0x5e, 0x99,
0x25, 0x2d, 0x16, 0x46, 0x9a, 0x66, 0x10, 0xdb,
0x00, 0x00, 0x00, 0x00, 0x3d, 0x0a, 0xd1, 0x2b,
0x8e, 0xe8, 0x92, 0x8e, 0xdf, 0x24, 0x8c, 0xa9,
0x1c, 0xa5, 0x56, 0x00, 0xfb, 0x38, 0x3f, 0x07,
0xc3, 0x2b, 0xff, 0x1d, 0x6d, 0xec, 0x47, 0x2b,
0x25, 0xcf, 0x59, 0xa7, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0x42, 0x40,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x5f, 0xa2, 0x9e, 0xd4,
0x35, 0x69, 0x03, 0xda, 0xc2, 0x36, 0x47, 0x13,
0xc6, 0x0f, 0x57, 0xd8, 0x47, 0x2c, 0x7d, 0xda,
0x4a, 0x5e, 0x08, 0xd8, 0x8a, 0x88, 0xad, 0x8e,
0xa7, 0x1a, 0xed, 0x60, 0xf3, 0x08, 0x6d, 0x7b,
0xfc, 0x35, 0xbe, 0x1c, 0x68, 0xdb, 0x66, 0x4b,
0xa9, 0xce, 0x61, 0xa2, 0x06, 0x01, 0x26, 0xb0,
0xd6, 0xb4, 0xbf, 0xb0, 0x9f, 0xd7, 0xa5, 0xfb,
0x76, 0x78, 0xca, 0xda, 0x00, 0x00, 0x00, 0xe8,
0xd4, 0xa5, 0x10, 0x00, 0x00, 0x00, 0x09, 0x18,
0x4e, 0x72, 0xa0, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x0a, 0x00, 0x00, 0x00, 0x17,
0x48, 0x76, 0xe8, 0x00, 0x00, 0x00, 0x01, 0xd1,
0xa9, 0x4a, 0x20, 0x00, 0x00, 0x01, 0x51, 0x80,
0x01, 0xe1, 0x33, 0x80, 0x00, 0x00, 0x27, 0x10,
0x00, 0x00, 0x00, 0x17, 0x48, 0x76, 0xe8, 0x00,
0x05, 0x00, 0x0c, 0x35, 0x00, 0x00, 0x00, 0x00,
0x0a, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x00,
// SubnetID
0x5f, 0xa2, 0x9e, 0xd4, 0x35, 0x69, 0x03, 0xda,
0xc2, 0x36, 0x47, 0x13, 0xc6, 0x0f, 0x57, 0xd8,
0x47, 0x2c, 0x7d, 0xda, 0x4a, 0x5e, 0x08, 0xd8,
0x8a, 0x88, 0xad, 0x8e, 0xa7, 0x1a, 0xed, 0x60,
// AssetID
0xf3, 0x08, 0x6d, 0x7b, 0xfc, 0x35, 0xbe, 0x1c,
0x68, 0xdb, 0x66, 0x4b, 0xa9, 0xce, 0x61, 0xa2,
0x06, 0x01, 0x26, 0xb0, 0xd6, 0xb4, 0xbf, 0xb0,
0x9f, 0xd7, 0xa5, 0xfb, 0x76, 0x78, 0xca, 0xda,
// InitialSupply
0x00, 0x00, 0x00, 0xe8, 0xd4, 0xa5, 0x10, 0x00,
// MaximumSupply
0x00, 0x00, 0x09, 0x18, 0x4e, 0x72, 0xa0, 0x00,
// MinConsumptionRate
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
// MaxConsumptionRate
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0a,
// MinValidatorStake
0x00, 0x00, 0x00, 0x17, 0x48, 0x76, 0xe8, 0x00,
// MaxValidatorStake
0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00,
// MinStakeDuration
0x00, 0x01, 0x51, 0x80,
// MaxStakeDuration
0x01, 0xe1, 0x33, 0x80,
// MinDelegationFee
0x00, 0x00, 0x27, 0x10,
// MinDelegatorStake
0x00, 0x00, 0x00, 0x17, 0x48, 0x76, 0xe8, 0x00,
// MaxValidatorWeightFactor
0x05,
// UptimeRequirement
0x00, 0x0c, 0x35, 0x00,
// SubnetAuth
// SubnetAuth TypeID
0x00, 0x00, 0x00, 0x0a,
// SigIndices length
0x00, 0x00, 0x00, 0x01,
// SigIndices
0x00, 0x00, 0x00, 0x00,
]
```
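As an illustrative cross-check (not part of the specification), the fixed-width numeric fields above can be reproduced with big-endian packing. This Python sketch packs a few of the example values; the percentage interpretations assume the P-Chain's 1,000,000 denominator:

```python
import struct

# Big-endian packing of fixed-width TransformSubnetTx fields from the
# example above: ">I" for uint32 fields, ">Q" for uint64 fields.
min_stake_duration = struct.pack(">I", 86400)       # 1 day, in seconds
max_stake_duration = struct.pack(">I", 31536000)    # 365 days, in seconds
min_delegation_fee = struct.pack(">I", 10000)       # 10,000 / 1,000,000 = 1%
uptime_requirement = struct.pack(">I", 800000)      # 800,000 / 1,000,000 = 80%
initial_supply     = struct.pack(">Q", 1000000000000)

assert min_stake_duration.hex() == "00015180"
assert max_stake_duration.hex() == "01e13380"
assert min_delegation_fee.hex() == "00002710"
assert uptime_requirement.hex() == "000c3500"
assert initial_supply.hex() == "000000e8d4a51000"
```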
## Unsigned Add Avalanche L1 Validator TX
### What Unsigned Add Avalanche L1 Validator TX Contains
An unsigned add Avalanche L1 validator TX contains a `BaseTx`, `Validator`,
`SubnetID`, and `SubnetAuth`. The `TypeID` for this type is `0x0000000d`.
* **`BaseTx`**
* **`Validator`** has a `NodeID`, `StartTime`, `EndTime`, and `Weight`
* **`NodeID`** is the 20 byte node ID of the validator.
* **`StartTime`** is a long which is the Unix time when the validator starts validating.
* **`EndTime`** is a long which is the Unix time when the validator stops validating.
* **`Weight`** is a long which is the amount the validator stakes.
* **`SubnetID`** is the 32 byte Avalanche L1 ID to add the validator to.
* **`SubnetAuth`** contains `SigIndices` and has a type id of `0x0000000a`.
`SigIndices` is a list of unique ints that define the addresses signing the
control signature to add a validator to an Avalanche L1. The array must be sorted low
to high.
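The fixed 44-byte `Validator` layout described above can be sketched in Python with a hypothetical `pack_validator` helper (this is not an AvalancheGo API; the field values come from the example later in this section):

```python
import struct

# A Validator record is a 20-byte NodeID followed by big-endian uint64
# StartTime, EndTime, and Weight, for a fixed total of 44 bytes.
def pack_validator(node_id: bytes, start: int, end: int, weight: int) -> bytes:
    assert len(node_id) == 20
    return node_id + struct.pack(">QQQ", start, end, weight)

v = pack_validator(
    bytes.fromhex("e9094f73698002fd52c90819b457b9fbc866ab80"),
    0x5F21F31D,  # StartTime (Unix seconds)
    0x5F497DC6,  # EndTime (Unix seconds)
    0xD431,      # Weight (stake amount)
)
assert len(v) == 44  # matches the 44 bytes in the Gantt chart below
assert v.hex() == (
    "e9094f73698002fd52c90819b457b9fbc866ab80"
    "000000005f21f31d" "000000005f497dc6" "000000000000d431"
)
```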
### Gantt Unsigned Add Avalanche L1 Validator TX Specification
```text
+---------------+----------------------+-----------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+---------------+----------------------+-----------------------------------------+
| validator : Validator | 44 bytes |
+---------------+----------------------+-----------------------------------------+
| subnet_id : [32]byte | 32 bytes |
+---------------+----------------------+-----------------------------------------+
| subnet_auth : SubnetAuth | 4 bytes + len(sig_indices) bytes |
+---------------+----------------------+-----------------------------------------+
| 80 + len(sig_indices) + size(base_tx) bytes |
+---------------------------------------------+
```
### Proto Unsigned Add Avalanche L1 Validator TX Specification
```text
message AddSubnetValidatorTx {
BaseTx base_tx = 1; // size(base_tx)
Validator validator = 2; // size(validator)
SubnetID subnet_id = 3; // 32 bytes
SubnetAuth subnet_auth = 4; // 04 bytes + len(sig_indices)
}
```
### Unsigned Add Avalanche L1 Validator TX Example
Let's make an unsigned add Avalanche L1 validator TX that uses the inputs and outputs from the previous examples:
* **`BaseTx`**: `"Example BaseTx as defined above with TypeID set to 0x0d"`
* **`NodeID`**: `0xe9094f73698002fd52c90819b457b9fbc866ab80`
* **`StartTime`**: `0x000000005f21f31d`
* **`EndTime`**: `0x000000005f497dc6`
* **`Weight`**: `0x000000000000d431`
* **`SubnetID`**: `0x58b1092871db85bc752742054e2e8be0adf8166ec1f0f0769f4779f14c71d7eb`
* **`SubnetAuth`**:
* **`TypeID`**: `0x0000000a`
* **`SigIndices`**: `0x00000000`
```text
[
BaseTx <- 0x0000000d000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000
NodeID <- 0xe9094f73698002fd52c90819b457b9fbc866ab80
StartTime <- 0x000000005f21f31d
EndTime <- 0x000000005f497dc6
Weight <- 0x000000000000d431
SubnetID <- 0x58b1092871db85bc752742054e2e8be0adf8166ec1f0f0769f4779f14c71d7eb
SubnetAuth <- 0x0000000a0000000100000000
]
=
[
// base tx:
0x00, 0x00, 0x00, 0x0d,
0x00, 0x00, 0x30, 0x39,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01,
0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40,
0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28,
0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6,
0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0xee, 0x5b, 0xe5, 0xc0, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01,
0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61,
0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c,
0x00, 0x00, 0x00, 0x01,
0xdf, 0xaf, 0xbd, 0xf5, 0xc8, 0x1f, 0x63, 0x5c,
0x92, 0x57, 0x82, 0x4f, 0xf2, 0x1c, 0x8e, 0x3e,
0x6f, 0x7b, 0x63, 0x2a, 0xc3, 0x06, 0xe1, 0x14,
0x46, 0xee, 0x54, 0x0d, 0x34, 0x71, 0x1a, 0x15,
0x00, 0x00, 0x00, 0x01,
0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40,
0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28,
0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6,
0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a,
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
// Node ID
0xe9, 0x09, 0x4f, 0x73, 0x69, 0x80, 0x02, 0xfd,
0x52, 0xc9, 0x08, 0x19, 0xb4, 0x57, 0xb9, 0xfb,
0xc8, 0x66, 0xab, 0x80,
// StartTime
0x00, 0x00, 0x00, 0x00, 0x5f, 0x21, 0xf3, 0x1d,
// EndTime
0x00, 0x00, 0x00, 0x00, 0x5f, 0x49, 0x7d, 0xc6,
// Weight
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// SubnetID
0x58, 0xb1, 0x09, 0x28, 0x71, 0xdb, 0x85, 0xbc,
0x75, 0x27, 0x42, 0x05, 0x4e, 0x2e, 0x8b, 0xe0,
0xad, 0xf8, 0x16, 0x6e, 0xc1, 0xf0, 0xf0, 0x76,
0x9f, 0x47, 0x79, 0xf1, 0x4c, 0x71, 0xd7, 0xeb,
// SubnetAuth
// SubnetAuth TypeID
0x00, 0x00, 0x00, 0x0a,
// SigIndices length
0x00, 0x00, 0x00, 0x01,
// SigIndices
0x00, 0x00, 0x00, 0x00,
]
```
## Unsigned Add Delegator TX
### What Unsigned Add Delegator TX Contains
An unsigned add delegator TX contains a `BaseTx`, `Validator`, `Stake`, and
`RewardsOwner`. The `TypeID` for this type is `0x0000000e`.
* **`BaseTx`**
* **`Validator`** has a `NodeID`, `StartTime`, `EndTime`, and `Weight`
* **`NodeID`** is the 20 byte node ID of the delegatee.
* **`StartTime`** is a long which is the Unix time when the delegator starts delegating.
* **`EndTime`** is a long which is the Unix time when the delegator stops
delegating (and staked AVAX is returned).
* **`Weight`** is a long which is the amount the delegator stakes.
* **`Stake`** Stake has `LockedOuts`
* **`LockedOuts`** An array of Transferable Outputs that are locked for the
duration of the staking period. At the end of the staking period, these
outputs are refunded to their respective addresses.
* **`RewardsOwner`** A `SECP256K1OutputOwners`
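The `SECP256K1OutputOwners` layout used for `RewardsOwner` can be illustrated by decoding the bytes from the example later in this section (a Python sketch, not part of the specification):

```python
import struct

# A SECP256K1OutputOwners is: uint32 TypeID, uint64 Locktime,
# uint32 Threshold, then a length-prefixed list of 20-byte addresses,
# all big-endian.
raw = bytes.fromhex(
    "0000000b"                                   # TypeID = 11
    "0000000000000000"                           # Locktime = 0
    "00000001"                                   # Threshold = 1
    "00000001"                                   # number of addresses
    "da2bee01be82ecc00c34f361eda8eb30fb5a715c"   # 20-byte address
)
type_id, locktime, threshold, n_addrs = struct.unpack(">IQII", raw[:20])
addresses = [raw[20 + 20 * i : 40 + 20 * i] for i in range(n_addrs)]

assert (type_id, locktime, threshold, n_addrs) == (11, 0, 1, 1)
assert addresses[0].hex() == "da2bee01be82ecc00c34f361eda8eb30fb5a715c"
```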
### Gantt Unsigned Add Delegator TX Specification
```text
+---------------+-----------------------+-----------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+---------------+-----------------------+-----------------------------------------+
| validator : Validator | 44 bytes |
+---------------+-----------------------+-----------------------------------------+
| stake : Stake | size(LockedOuts) bytes |
+---------------+-----------------------+-----------------------------------------+
| rewards_owner : SECP256K1OutputOwners | size(rewards_owner) bytes |
+---------------+-----------------------+-----------------------------------------+
| 44 + size(stake) + size(rewards_owner) + size(base_tx) bytes |
+-----------------------------------------------------------------+
```
### Proto Unsigned Add Delegator TX Specification
```text
message AddDelegatorTx {
BaseTx base_tx = 1; // size(base_tx)
Validator validator = 2; // 44 bytes
Stake stake = 3; // size(LockedOuts)
SECP256K1OutputOwners rewards_owner = 4; // size(rewards_owner)
}
```
### Unsigned Add Delegator TX Example
Let's make an unsigned add delegator TX that uses the inputs and outputs from the previous examples:
* **`BaseTx`**: `"Example BaseTx as defined above with TypeID set to 0x0e"`
* **`NodeID`**: `0xe9094f73698002fd52c90819b457b9fbc866ab80`
* **`StartTime`**: `0x000000005f21f31d`
* **`EndTime`**: `0x000000005f497dc6`
* **`Weight`**: `0x000000000000d431`
* **`Stake`**: `0x0000000139c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d55008800000007000001d1a94a2000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c`
* **`RewardsOwner`**: `0x0000000b00000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c`
```text
[
BaseTx <- 0x0000000e000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000
NodeID <- 0xe9094f73698002fd52c90819b457b9fbc866ab80
StartTime <- 0x000000005f21f31d
EndTime <- 0x000000005f497dc6
Weight <- 0x000000000000d431
Stake <- 0x0000000139c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d55008800000007000001d1a94a2000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c
RewardsOwner <- 0x0000000b00000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c
]
=
[
// base tx:
0x00, 0x00, 0x00, 0x0e, 0x00, 0x00, 0x30, 0x39,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01,
0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40,
0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28,
0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6,
0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0xee, 0x5b, 0xe5, 0xc0, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01,
0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61,
0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c,
0x00, 0x00, 0x00, 0x01,
0xdf, 0xaf, 0xbd, 0xf5, 0xc8, 0x1f, 0x63, 0x5c,
0x92, 0x57, 0x82, 0x4f, 0xf2, 0x1c, 0x8e, 0x3e,
0x6f, 0x7b, 0x63, 0x2a, 0xc3, 0x06, 0xe1, 0x14,
0x46, 0xee, 0x54, 0x0d, 0x34, 0x71, 0x1a, 0x15,
0x00, 0x00, 0x00, 0x01,
0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40,
0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28,
0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6,
0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a,
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
// Node ID
0xe9, 0x09, 0x4f, 0x73, 0x69, 0x80, 0x02, 0xfd,
0x52, 0xc9, 0x08, 0x19, 0xb4, 0x57, 0xb9, 0xfb,
0xc8, 0x66, 0xab, 0x80,
// StartTime
0x00, 0x00, 0x00, 0x00, 0x5f, 0x21, 0xf3, 0x1d,
// EndTime
0x00, 0x00, 0x00, 0x00, 0x5f, 0x49, 0x7d, 0xc6,
// Weight
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// Stake
0x00, 0x00, 0x00, 0x01, 0x39, 0xc3, 0x3a, 0x49,
0x9c, 0xe4, 0xc3, 0x3a, 0x3b, 0x09, 0xcd, 0xd2,
0xcf, 0xa0, 0x1a, 0xe7, 0x0d, 0xbf, 0x2d, 0x18,
0xb2, 0xd7, 0xd1, 0x68, 0x52, 0x44, 0x40, 0xe5,
0x5d, 0x55, 0x00, 0x88, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01,
0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a,
0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68,
0x61, 0xe1, 0xb2, 0x9c,
// RewardsOwner
0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01,
0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61,
0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c,
]
```
## Unsigned Create Chain TX
### What Unsigned Create Chain TX Contains
An unsigned create chain TX contains a `BaseTx`, `SubnetID`, `ChainName`,
`VMID`, `FxIDs`, `GenesisData` and `SubnetAuth`. The `TypeID` for this type is
`0x0000000f`.
* **`BaseTx`**
* **`SubnetID`** ID of the Avalanche L1 that validates this blockchain
* **`ChainName`** A human readable name for the chain; need not be unique
* **`VMID`** ID of the VM running on the new chain
* **`FxIDs`** IDs of the feature extensions running on the new chain
* **`GenesisData`** Byte representation of genesis state of the new chain
* **`SubnetAuth`** Authorizes this blockchain to be added to this Avalanche L1
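Two of these fields are variable-width or padded: `ChainName` carries a 2-byte big-endian length prefix, and `VMID` is a fixed 32 bytes, with short ASCII names right-padded with zeros, as the example below shows. A Python sketch of these two encodings, using the example values:

```python
import struct

# ChainName: 2-byte big-endian length prefix, then the raw name bytes.
chain_name = "EPIC AVM".encode()
encoded_name = struct.pack(">H", len(chain_name)) + chain_name

# VMID: a fixed 32-byte field; "avm" is zero-padded on the right.
vm_id = b"avm".ljust(32, b"\x00")

assert encoded_name.hex() == "0008455049432041564d"
assert vm_id.hex() == "61766d" + "00" * 29
```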
### Gantt Unsigned Create Chain TX Specification
```text
+--------------+-------------+------------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+--------------+-------------+------------------------------------------+
| subnet_id : SubnetID | 32 bytes |
+--------------+-------------+------------------------------------------+
| chain_name : ChainName | 2 + len(chain_name) bytes |
+--------------+-------------+------------------------------------------+
| vm_id : VMID | 32 bytes |
+--------------+-------------+------------------------------------------+
| fx_ids : FxIDs | 4 + size(fx_ids) bytes |
+--------------+-------------+------------------------------------------+
| genesis_data : GenesisData | 4 + size(genesis_data) bytes |
+--------------+-------------+------------------------------------------+
| subnet_auth : SubnetAuth | size(subnet_auth) bytes |
+--------------+-------------+------------------------------------------+
| 74 + size(base_tx) + size(chain_name) + size(fx_ids) + |
| size(genesis_data) + size(subnet_auth) bytes |
+--------------+--------------------------------------------------------+
```
### Proto Unsigned Create Chain TX Specification
```text
message CreateChainTx {
BaseTx base_tx = 1; // size(base_tx)
SubnetID subnet_id = 2; // 32 bytes
ChainName chain_name = 3; // 2 + len(chain_name) bytes
VMID vm_id = 4; // 32 bytes
FxIDs fx_ids = 5; // 4 + size(fx_ids) bytes
GenesisData genesis_data = 6; // 4 + size(genesis_data) bytes
SubnetAuth subnet_auth = 7; // size(subnet_auth) bytes
}
```
### Unsigned Create Chain TX Example
Let's make an unsigned create chain TX that uses the inputs and outputs from the previous examples:
* **`BaseTx`**: `"Example BaseTx as defined above with TypeID set to 0x0f"`
* **`SubnetID`**: `24tZhrm8j8GCJRE9PomW8FaeqbgGS4UAQjJnqqn8pq5NwYSYV1`
* **`ChainName`**: `EPIC AVM`
* **`VMID`**: `avm`
* **`FxIDs`**: \[`secp256k1fx`]
* **`GenesisData`**: `11111DdZMhYXUZiFV9FNpfpTSQroysXhzWicG954YAKfkrk3bCEzLVY7gun1eAmAwMiQzVhtGpdR6dnPVcfhBE7brzkJ1r4wzi3dgA8G9Jwc4WpZ6Uh4Dr9aTdw7sFA5cpvCAVBsx6Xf3CB82jwH1gjPZ3WQnnCSKr2reoLtam6TfyYRra5xxXSkZcUm6BaJMW4fKzNP58uyExajPYKZvT5LrQ7MPJ9Fp7ebmYSzXg7YYauNARj`
* **`SubnetAuth`**: `0x0000000a0000000100000000`
```text
[
BaseTx <- 0x0000000f000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000
SubnetID <- 0x8c86d07cd60218661863e0116552dccd5bd84c564bd29d7181dbddd5ec616104
ChainName <- 0x455049432041564d
VMID <- 0x61766d0000000000000000000000000000000000000000000000000000000000
FxIDs <- 0x736563703235366b316678000000000000000000000000000000000000000000
GenesisData <- 0x000000000001000e4173736574416c6961735465737400000539000000000000000000000000000000000000000000000000000000000000000000000000000000000000001b66726f6d20736e6f77666c616b6520746f206176616c616e636865000a54657374204173736574000454455354000000000100000000000000010000000700000000000001fb000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c
SubnetAuth <- 0x0000000a0000000100000000
]
=
[
// base tx
0x00, 0x00, 0x00, 0x0f,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x39, 0xc3, 0x3a, 0x49, 0x9c, 0xe4, 0xc3, 0x3a,
0x3b, 0x09, 0xcd, 0xd2, 0xcf, 0xa0, 0x1a, 0xe7,
0x0d, 0xbf, 0x2d, 0x18, 0xb2, 0xd7, 0xd1, 0x68,
0x52, 0x44, 0x40, 0xe5, 0x5d, 0x55, 0x00, 0x88,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x12, 0x30,
0x9c, 0xd5, 0xfd, 0xc0, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84,
0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1,
0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// end base tx
// Subnet id
0x8c, 0x86, 0xd0, 0x7c, 0xd6, 0x02, 0x18, 0x66,
0x18, 0x63, 0xe0, 0x11, 0x65, 0x52, 0xdc, 0xcd,
0x5b, 0xd8, 0x4c, 0x56, 0x4b, 0xd2, 0x9d, 0x71,
0x81, 0xdb, 0xdd, 0xd5, 0xec, 0x61, 0x61, 0x04,
// chain name length
0x00, 0x08,
// chain name
0x45, 0x50, 0x49, 0x43, 0x20, 0x41, 0x56, 0x4d,
// vm id
0x61, 0x76, 0x6d, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// fxids
// num fxids
0x00, 0x00, 0x00, 0x01,
// fxid
0x73, 0x65, 0x63, 0x70, 0x32, 0x35, 0x36, 0x6b,
0x31, 0x66, 0x78, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// genesis data len
0x00, 0x00, 0x00, 0xb0,
// genesis data
0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x0e,
0x41, 0x73, 0x73, 0x65, 0x74, 0x41, 0x6c, 0x69,
0x61, 0x73, 0x54, 0x65, 0x73, 0x74, 0x00, 0x00,
0x05, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x1b, 0x66, 0x72,
0x6f, 0x6d, 0x20, 0x73, 0x6e, 0x6f, 0x77, 0x66,
0x6c, 0x61, 0x6b, 0x65, 0x20, 0x74, 0x6f, 0x20,
0x61, 0x76, 0x61, 0x6c, 0x61, 0x6e, 0x63, 0x68,
0x65, 0x00, 0x0a, 0x54, 0x65, 0x73, 0x74, 0x20,
0x41, 0x73, 0x73, 0x65, 0x74, 0x00, 0x04, 0x54,
0x45, 0x53, 0x54, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x01, 0xfb, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84,
0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1,
0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c,
// type id (Subnet Auth)
0x00, 0x00, 0x00, 0x0a,
// num address indices
0x00, 0x00, 0x00, 0x01,
// address index
0x00, 0x00, 0x00, 0x00,
]
```
## Unsigned Create Avalanche L1 TX
### What Unsigned Create Avalanche L1 TX Contains
An unsigned create Avalanche L1 TX contains a `BaseTx` and `RewardsOwner`. The `TypeID` for this type is `0x00000010`.
* **`BaseTx`**
* **`RewardsOwner`** A `SECP256K1OutputOwners`
### Gantt Unsigned Create Avalanche L1 TX Specification
```text
+-----------------+-----------------------+---------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+-----------------+-----------------------+---------------------------------+
| rewards_owner : SECP256K1OutputOwners | size(rewards_owner) bytes |
+-----------------+-----------------------+---------------------------------+
| size(rewards_owner) + size(base_tx) bytes |
+-------------------------------------------+
```
### Proto Unsigned Create Avalanche L1 TX Specification
```text
message CreateSubnetTx {
BaseTx base_tx = 1; // size(base_tx)
SECP256K1OutputOwners rewards_owner = 2; // size(rewards_owner)
}
```
### Unsigned Create Avalanche L1 TX Example
Let's make an unsigned create Avalanche L1 TX that uses the inputs from the previous examples:
* **`BaseTx`**: "Example BaseTx as defined above but with TypeID set to 0x10"
* **`RewardsOwner`**:
* **`TypeId`**: 11
* **`Locktime`**: 0
* **`Threshold`**: 1
* **`Addresses`**: \[ 0xda2bee01be82ecc00c34f361eda8eb30fb5a715c ]
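The decimal field values above map directly onto the big-endian bytes in the dump below; a Python sketch of building the `RewardsOwner` portion:

```python
import struct

# Packing the RewardsOwner fields: uint32 TypeID, uint64 Locktime,
# uint32 Threshold, uint32 address count, then each 20-byte address.
owners = (
    struct.pack(">IQII", 11, 0, 1, 1)
    + bytes.fromhex("da2bee01be82ecc00c34f361eda8eb30fb5a715c")
)

# Matches the RewardsOwner tail of the serialized tx in the example.
assert owners.hex() == (
    "0000000b" "0000000000000000" "00000001" "00000001"
    "da2bee01be82ecc00c34f361eda8eb30fb5a715c"
)
```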
```text
[
BaseTx <- 0x00000010000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000
RewardsOwner <-
TypeID <- 0x0000000b
Locktime <- 0x0000000000000000
Threshold <- 0x00000001
Addresses <- [
0xda2bee01be82ecc00c34f361eda8eb30fb5a715c,
]
]
=
[
// base tx:
0x00, 0x00, 0x00, 0x10,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x39, 0xc3, 0x3a, 0x49, 0x9c, 0xe4, 0xc3, 0x3a,
0x3b, 0x09, 0xcd, 0xd2, 0xcf, 0xa0, 0x1a, 0xe7,
0x0d, 0xbf, 0x2d, 0x18, 0xb2, 0xd7, 0xd1, 0x68,
0x52, 0x44, 0x40, 0xe5, 0x5d, 0x55, 0x00, 0x88,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x12, 0x30,
0x9c, 0xd5, 0xfd, 0xc0, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84,
0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1,
0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// RewardsOwner type id
0x00, 0x00, 0x00, 0x0b,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x01,
// addrs[0]:
0xda, 0x2b, 0xee, 0x01,
0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61,
0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c
]
```
## Unsigned Import TX
### What Unsigned Import TX Contains
An unsigned import TX contains a `BaseTx`, `SourceChain`, and `Ins`. The `TypeID` for this type is `0x00000011`.
* **`BaseTx`**
* **`SourceChain`** is a 32-byte source blockchain ID.
* **`Ins`** is a variable length array of Transferable Inputs.
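Each entry of `Ins` ends with a SECP256K1 transfer input; decoding the one used in the example below illustrates its layout (a Python sketch, not part of the specification):

```python
import struct

# A SECP256K1 transfer input is: uint32 TypeID, uint64 amount, then a
# length-prefixed list of uint32 signature indices, all big-endian.
raw = bytes.fromhex("00000005" "00000000ee6b2800" "00000001" "00000000")
type_id, amount, n_sigs = struct.unpack(">IQI", raw[:16])
sig_indices = list(struct.unpack(f">{n_sigs}I", raw[16:16 + 4 * n_sigs]))

assert type_id == 5
assert amount == 4_000_000_000  # 0xee6b2800
assert sig_indices == [0]
```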
### Gantt Unsigned Import TX Specification
```text
+-----------------+--------------+---------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+-----------------+--------------+---------------------------------+
| source_chain : [32]byte | 32 bytes |
+-----------------+--------------+---------------------------------+
| ins : []TransferIn | 4 + size(ins) bytes |
+-----------------+--------------+---------------------------------+
| 36 + size(ins) + size(base_tx) bytes |
+--------------------------------------+
```
### Proto Unsigned Import TX Specification
```text
message ImportTx {
BaseTx base_tx = 1; // size(base_tx)
bytes source_chain = 2; // 32 bytes
repeated TransferIn ins = 3; // 4 bytes + size(ins)
}
```
### Unsigned Import TX Example
Let's make an unsigned import TX that uses the inputs from the previous examples:
* **`BaseTx`**: "Example BaseTx as defined above with TypeID set to 0x11"
* **`SourceChain`**: `0x787cd3243c002e9bf5bbbaea8a42a16c1a19cc105047c66996807cbf16acee10`
* **`Ins`**: "Example SECP256K1 Transfer Input as defined above"
```text
[
BaseTx <- 0x00000011000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000
SourceChain <- 0x787cd3243c002e9bf5bbbaea8a42a16c1a19cc105047c66996807cbf16acee10
Ins <- [
// input:
]
]
=
[
// base tx:
0x00, 0x00, 0x00, 0x11,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x39, 0xc3, 0x3a, 0x49, 0x9c, 0xe4, 0xc3, 0x3a,
0x3b, 0x09, 0xcd, 0xd2, 0xcf, 0xa0, 0x1a, 0xe7,
0x0d, 0xbf, 0x2d, 0x18, 0xb2, 0xd7, 0xd1, 0x68,
0x52, 0x44, 0x40, 0xe5, 0x5d, 0x55, 0x00, 0x88,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x12, 0x30,
0x9c, 0xd5, 0xfd, 0xc0, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84,
0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1,
0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// sourceChain
0x78, 0x7c, 0xd3, 0x24, 0x3c, 0x00, 0x2e, 0x9b,
0xf5, 0xbb, 0xba, 0xea, 0x8a, 0x42, 0xa1, 0x6c,
0x1a, 0x19, 0xcc, 0x10, 0x50, 0x47, 0xc6, 0x69,
0x96, 0x80, 0x7c, 0xbf, 0x16, 0xac, 0xee, 0x10,
// input count:
0x00, 0x00, 0x00, 0x01,
// txID:
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
// utxoIndex:
0x00, 0x00, 0x00, 0x05,
// assetID:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// input:
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00,
]
```
## Unsigned Export TX
### What Unsigned Export TX Contains
An unsigned export TX contains a `BaseTx`, `DestinationChain`, and `Outs`. The
`TypeID` for this type is `0x00000012`.
* **`BaseTx`**
* **`DestinationChain`** is the 32 byte ID of the chain where the funds are being exported to.
* **`Outs`** is a variable length array of Transferable Outputs.
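Each entry of `Outs` is a SECP256K1 transfer output; decoding the one used in the example below illustrates its layout (an illustrative Python sketch):

```python
import struct

# A SECP256K1 transfer output is: uint32 TypeID, uint64 amount,
# uint64 locktime, uint32 threshold, uint32 address count, then
# 20-byte addresses, all big-endian.
raw = bytes.fromhex(
    "00000007"
    "0000000000003039"  # amount   = 12345
    "000000000000d431"  # locktime = 54321
    "00000001"          # threshold
    "00000002"          # number of addresses
    "51025c61fbcfc078f69334f834be6dd26d55a955"
    "c3344128e060128ede3523a24a461c8943ab0859"
)
type_id, amount, locktime, threshold, n_addrs = struct.unpack(">IQQII", raw[:28])

assert (type_id, amount, locktime, threshold, n_addrs) == (7, 12345, 54321, 1, 2)
assert len(raw) == 28 + 20 * n_addrs
```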
### Gantt Unsigned Export TX Specification
```text
+-------------------+---------------+--------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+-------------------+---------------+--------------------------------------+
| destination_chain : [32]byte | 32 bytes |
+-------------------+---------------+--------------------------------------+
| outs : []TransferOut | 4 + size(outs) bytes |
+-------------------+---------------+--------------------------------------+
| 36 + size(outs) + size(base_tx) bytes |
+---------------------------------------+
```
### Proto Unsigned Export TX Specification
```text
message ExportTx {
BaseTx base_tx = 1; // size(base_tx)
bytes destination_chain = 2; // 32 bytes
repeated TransferOut outs = 3; // 4 bytes + size(outs)
}
```
### Unsigned Export TX Example
Let's make an unsigned export TX that uses the outputs from the previous examples:
* `BaseTx`: "Example BaseTx as defined above" with `TypeID` set to 0x12
* `DestinationChain`: `0x0000000000000000000000000000000000000000000000000000000000000000`
* `Outs`: "Example SECP256K1 Transfer Output as defined above"
```text
[
BaseTx <- 0x00000012000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000
DestinationChain <- 0x0000000000000000000000000000000000000000000000000000000000000000
Outs <- [
000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// base tx:
0x00, 0x00, 0x00, 0x12,
0x00, 0x00, 0x00, 0x04, 0xff, 0xff, 0xff, 0xff,
0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd,
0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb,
0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x01,
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01,
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03,
0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,
0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15,
0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04,
0x00, 0x01, 0x02, 0x03,
// destination_chain:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// outs[] count:
0x00, 0x00, 0x00, 0x01,
// assetID:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// output:
0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02,
0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78,
0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2,
0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28,
0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2,
0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59,
]
```
## Credentials
Credentials have one possible type: `SECP256K1Credential`. Each credential is
paired with an input or operation. The order of the credentials matches the
order of the inputs or operations.
## SECP256K1 Credential
A [secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) credential
contains a list of 65-byte recoverable signatures.
### What SECP256K1 Credential Contains
* **`TypeID`** is the ID for this type. It is `0x00000009`.
* **`Signatures`** is an array of 65-byte recoverable signatures. The order of
the signatures must match the input's signature indices.
### Gantt SECP256K1 Credential Specification
```text
+------------------------------+---------------------------------+
| type_id : int | 4 bytes |
+-----------------+------------+---------------------------------+
| signatures : [][65]byte | 4 + 65 * len(signatures) bytes |
+-----------------+------------+---------------------------------+
| 8 + 65 * len(signatures) bytes |
+---------------------------------+
```
### Proto SECP256K1 Credential Specification
```text
message SECP256K1Credential {
uint32 TypeID = 1; // 4 bytes
repeated bytes signatures = 2; // 4 bytes + 65 bytes * len(signatures)
}
```
### SECP256K1 Credential Example
Let's make a credential with:
* **`signatures`**:
* `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00`
* `0x404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00`
```text
[
Signatures <- [
0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00,
0x404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00,
]
]
=
[
// Type ID
0x00, 0x00, 0x00, 0x09,
// length:
0x00, 0x00, 0x00, 0x02,
// sig[0]
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1e, 0x1d, 0x1f,
0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2e, 0x2d, 0x2f,
0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
0x00,
// sig[1]
0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57,
0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5e, 0x5d, 0x5f,
0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6e, 0x6d, 0x6f,
0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77,
0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f,
0x00,
]
```
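As a sanity check, the credential layout above can be assembled with a few lines of Python. This is an illustrative sketch, not AvalancheGo's implementation; the helper name `serialize_secp256k1_credential` is invented for this example.

```python
import struct

SECP256K1_CREDENTIAL_TYPE_ID = 0x00000009

def serialize_secp256k1_credential(signatures: list[bytes]) -> bytes:
    """Pack the 4-byte type ID, a 4-byte signature count, then each 65-byte signature."""
    for sig in signatures:
        assert len(sig) == 65, "recoverable secp256k1 signatures are 65 bytes"
    header = struct.pack(">II", SECP256K1_CREDENTIAL_TYPE_ID, len(signatures))
    return header + b"".join(signatures)

# Two dummy all-zero 65-byte signatures stand in for real ones.
cred = serialize_secp256k1_credential([bytes(65), bytes(65)])
# Total size matches the Gantt spec: 8 + 65 * len(signatures) bytes.
assert len(cred) == 8 + 65 * 2
assert cred[:8].hex() == "0000000900000002"
```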
## Signed Transaction
A signed transaction is an unsigned transaction with the addition of an array of credentials.
### What Signed Transaction Contains
A signed transaction contains a `CodecID`, `UnsignedTx`, and `Credentials`.
* **`CodecID`** is the codec version used to serialize this transaction. The only current valid codec ID is `0x0000`.
* **`UnsignedTx`** is an unsigned transaction, as described above.
* **`Credentials`** is an array of credentials. Each credential is paired with
the input at the same index as the credential.
### Gantt Signed Transaction Specification
```text
+---------------------+--------------+------------------------------------------------+
| codec_id : uint16 | 2 bytes |
+---------------------+--------------+------------------------------------------------+
| unsigned_tx : UnsignedTx | size(unsigned_tx) bytes |
+---------------------+--------------+------------------------------------------------+
| credentials : []Credential | 4 + size(credentials) bytes |
+---------------------+--------------+------------------------------------------------+
| 6 + size(unsigned_tx) + size(credentials) bytes |
+------------------------------------------------+
```
### Proto Signed Transaction Specification
```text
message Tx {
uint32 codec_id = 1; // 2 bytes
UnsignedTx unsigned_tx = 2; // size(unsigned_tx)
repeated Credential credentials = 3; // 4 bytes + size(credentials)
}
```
### Signed Transaction Example
Let's make a signed transaction that uses the unsigned transaction and credential from the previous examples.
* **`CodecID`**: `0`
* **`UnsignedTx`**: `0x0000000100000003ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000003000000070000000400010203`
* **`Credentials`** `0x0000000900000002000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00`
```text
[
CodecID <- 0x0000
UnsignedTx <- 0x0000000100000003ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000003000000070000000400010203
Credentials <- [
0x0000000900000002000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00,
]
]
=
[
// Codec ID
0x00, 0x00,
// unsigned transaction:
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03,
0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee,
0xdd, 0xdd, 0xdd, 0xdd, 0xcc, 0xcc, 0xcc, 0xcc,
0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa,
0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88,
0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x02, 0x03,
0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,
0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02,
0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78,
0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2,
0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28,
0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2,
0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59,
0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1,
0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41,
0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0,
0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40,
0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05,
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02,
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x04, 0x00, 0x01, 0x02, 0x03,
// number of credentials:
0x00, 0x00, 0x00, 0x01,
// credential[0]:
0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x02,
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1e, 0x1d, 0x1f,
0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2e, 0x2d, 0x2f,
0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
0x00, 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46,
0x47, 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e,
0x4f, 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56,
0x57, 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5e, 0x5d,
0x5f, 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66,
0x67, 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6e, 0x6d,
0x6f, 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76,
0x77, 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e,
0x7f, 0x00,
]
```
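The wrapping above is mechanical: a 2-byte codec ID, the unsigned transaction bytes, a 4-byte credential count, then the credentials themselves. A minimal Python sketch follows; the helper name and the placeholder byte strings are invented for illustration and are not valid transactions.

```python
import struct

def serialize_signed_tx(unsigned_tx: bytes, credentials: list[bytes],
                        codec_id: int = 0) -> bytes:
    """codec ID (2 bytes) + unsigned tx + credential count (4 bytes) + credentials."""
    return (struct.pack(">H", codec_id)
            + unsigned_tx
            + struct.pack(">I", len(credentials))
            + b"".join(credentials))

unsigned = bytes.fromhex("deadbeef")              # placeholder unsigned tx bytes
cred = bytes.fromhex("0000000900000000")          # empty SECP256K1 credential
tx = serialize_signed_tx(unsigned, [cred])
# Overhead is 6 bytes (2-byte codec ID + 4-byte credential count),
# matching the Gantt total of 6 + size(unsigned_tx) + size(credentials).
assert len(tx) == 6 + len(unsigned) + len(cred)
assert tx[:2] == b"\x00\x00"
```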
## UTXO
A UTXO is a standalone representation of a transaction output.
### What UTXO Contains
A UTXO contains a `CodecID`, `TxID`, `UTXOIndex`, `AssetID`, and `Output`.
* **`CodecID`** The only current valid codec ID is `0x0000`.
* **`TxID`** is a 32-byte transaction ID. Transaction IDs are calculated by
taking sha256 of the bytes of the signed transaction.
* **`UTXOIndex`** is an int that specifies which output of the transaction
identified by **`TxID`** created this UTXO.
* **`AssetID`** is a 32-byte array that defines which asset this UTXO
references.
* **`Output`** is the output object that created this UTXO. The serialization of
Outputs was defined above.
### Gantt UTXO Specification
```text
+--------------+----------+-------------------------+
| codec_id : uint16 | 2 bytes |
+--------------+----------+-------------------------+
| tx_id : [32]byte | 32 bytes |
+--------------+----------+-------------------------+
| output_index : int | 4 bytes |
+--------------+----------+-------------------------+
| asset_id : [32]byte | 32 bytes |
+--------------+----------+-------------------------+
| output : Output | size(output) bytes |
+--------------+----------+-------------------------+
| 70 + size(output) bytes |
+-------------------------+
```
### Proto UTXO Specification
```text
message Utxo {
uint32 codec_id = 1; // 02 bytes
bytes tx_id = 2; // 32 bytes
uint32 output_index = 3; // 04 bytes
bytes asset_id = 4; // 32 bytes
Output output = 5; // size(output)
}
```
### UTXO Example
Let's make a UTXO from the signed transaction created above:
* **`CodecID`**: `0`
* **`TxID`**: `0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7`
* **`UTXOIndex`**: `0x00000000`
* **`AssetID`**: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f`
* **`Output`**: `"Example SECP256K1 Transferable Output as defined above"`
```text
[
CodecID <- 0x0000
TxID <- 0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7
UTXOIndex <- 0x00000000
AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
Output <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859
]
=
[
// Codec ID:
0x00, 0x00,
// txID:
0xf9, 0x66, 0x75, 0x0f, 0x43, 0x88, 0x67, 0xc3,
0xc9, 0x82, 0x8d, 0xdc, 0xdb, 0xe6, 0x60, 0xe2,
0x1c, 0xcd, 0xbb, 0x36, 0xa9, 0x27, 0x69, 0x58,
0xf0, 0x11, 0xba, 0x47, 0x2f, 0x75, 0xd4, 0xe7,
// utxo index:
0x00, 0x00, 0x00, 0x00,
// assetID:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// output:
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
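Going the other way, the fixed-size prefix of a UTXO (2 + 32 + 4 + 32 = 70 bytes, per the Gantt spec) makes parsing straightforward. A minimal Python sketch with an invented helper name, fed the example values above and a deliberately truncated output just to show where the fields split:

```python
import struct

def parse_utxo(raw: bytes):
    """Split a serialized UTXO into its fields at the fixed offsets."""
    codec_id, = struct.unpack_from(">H", raw, 0)   # bytes 0..2
    tx_id = raw[2:34]                              # 32-byte transaction ID
    output_index, = struct.unpack_from(">I", raw, 34)
    asset_id = raw[38:70]                          # 32-byte asset ID
    output = raw[70:]                              # remaining bytes are the Output
    return codec_id, tx_id, output_index, asset_id, output

raw = bytes.fromhex(
    "0000"                                                              # codec ID
    "f966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7"  # txID
    "00000000"                                                          # UTXO index
    "000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f"  # assetID
    "00000007"  # truncated output: only the type ID, for illustration
)
codec_id, tx_id, output_index, asset_id, output = parse_utxo(raw)
assert codec_id == 0 and output_index == 0
assert tx_id.hex().startswith("f966750f")
assert output == bytes.fromhex("00000007")
```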
## StakeableLockIn
A StakeableLockIn is a staked and locked input. The StakeableLockIn can only
fund StakeableLockOuts with the same address until its lock time has passed.
### What StakeableLockIn Contains
A StakeableLockIn contains a `TypeID`, `Locktime` and `TransferableIn`.
* **`TypeID`** is the ID for this output type. It is `0x00000015`.
* **`Locktime`** is a long that contains the Unix timestamp before which the
input can be consumed only to stake. The Unix timestamp is specific to the
second.
* **`TransferableIn`** is a transferable input object.
### Gantt StakeableLockIn Specification
```text
+-----------------+-------------------+--------------------------------+
| type_id : int | 4 bytes |
+-----------------+-------------------+--------------------------------+
| locktime : long | 8 bytes |
+-----------------+-------------------+--------------------------------+
| transferable_in : TransferableInput | size(transferable_in) |
+-----------------+-------------------+--------------------------------+
| 12 + size(transferable_in) bytes |
+----------------------------------+
```
### Proto StakeableLockIn Specification
```text
message StakeableLockIn {
uint32 type_id = 1; // 04 bytes
uint64 locktime = 2; // 08 bytes
TransferableInput transferable_in = 3; // size(transferable_in)
}
```
### StakeableLockIn Example
Let's make a StakeableLockIn with:
* **`TypeID`**: 21
* **`Locktime`**: 54321
* **`TransferableIn`**: "Example SECP256K1 Transfer Input as defined above"
```text
[
TypeID <- 0x00000015
Locktime <- 0x000000000000d431
TransferableIn <- [
0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000100000000,
]
]
=
[
// type_id:
0x00, 0x00, 0x00, 0x15,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// transferable_in
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
0x00, 0x00, 0x00, 0x05,
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00,
]
```
## StakeableLockOut
A StakeableLockOut is an output that is locked until its lock time, but can be staked in the meantime.
### What StakeableLockOut Contains
A StakeableLockOut contains a `TypeID`, `Locktime` and `TransferableOut`.
* **`TypeID`** is the ID for this output type. It is `0x00000016`.
* **`Locktime`** is a long that contains the Unix timestamp before which the
output can be consumed only to stake. The Unix timestamp is specific to the
second.
* **`TransferableOut`** is a transferable output object.
### Gantt StakeableLockOut Specification
```text
+------------------+--------------------+--------------------------------+
| type_id : int | 4 bytes |
+------------------+--------------------+--------------------------------+
| locktime : long | 8 bytes |
+------------------+--------------------+--------------------------------+
| transferable_out : TransferableOutput | size(transferable_out) |
+------------------+--------------------+--------------------------------+
| 12 + size(transferable_out) bytes |
+-----------------------------------+
```
### Proto StakeableLockOut Specification
```text
message StakeableLockOut {
uint32 type_id = 1; // 04 bytes
uint64 locktime = 2; // 08 bytes
TransferableOutput transferable_out = 3; // size(transferable_out)
}
```
### StakeableLockOut Example
Let's make a StakeableLockOut with:
* **`TypeID`**: 22
* **`Locktime`**: 54321
* **`TransferableOutput`**: `"Example SECP256K1 Transfer Output from above"`
```text
[
TypeID <- 0x00000016
Locktime <- 0x000000000000d431
TransferableOutput <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859,
]
=
[
// type_id:
0x00, 0x00, 0x00, 0x16,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// transferable_out
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
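Both StakeableLockIn and StakeableLockOut are thin wrappers: a 4-byte type ID and an 8-byte locktime prefixed to an already-serialized transferable input or output. A Python sketch of the output case, using the example values above (the helper name is invented; this is not AvalancheGo's codec):

```python
import struct

STAKEABLE_LOCK_OUT_TYPE_ID = 0x00000016

def wrap_stakeable_lock_out(locktime: int, transferable_out: bytes) -> bytes:
    """Prefix a serialized TransferableOutput with the type ID and locktime."""
    return struct.pack(">IQ", STAKEABLE_LOCK_OUT_TYPE_ID, locktime) + transferable_out

# The SECP256K1 Transfer Output from the example above.
inner = bytes.fromhex("000000070000000000003039000000000000d431"
                      "000000010000000251025c61fbcfc078f69334f8"
                      "34be6dd26d55a955c3344128e060128ede3523a2"
                      "4a461c8943ab0859")
out = wrap_stakeable_lock_out(54321, inner)
# 12 bytes of header on top of the wrapped output, per the Gantt spec.
assert len(out) == 12 + len(inner)
assert out[:12].hex() == "00000016000000000000d431"
```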
## Avalanche L1 Auth
### What Avalanche L1 Auth Contains
Specifies the addresses whose signatures will be provided to demonstrate that
the owners of an Avalanche L1 approve something.
* **`TypeID`** is the ID for this type. It is `0x0000000a`.
* **`AddressIndices`** defines which addresses' signatures will be attached to
this transaction. AddressIndices\[i] is the index in an Avalanche L1 owner list that
corresponds to the signature at index i in the signature list. Must be sorted
low to high and not have duplicates.
### Gantt Avalanche L1 Auth Specification
```text
+-----------------+------------------+-------------------------------------+
| type_id : int | 4 bytes |
+-----------------+------------------+-------------------------------------+
| address_indices : []int | 4 + 4*len(address_indices) bytes |
+-----------------+------------------+-------------------------------------+
| 8 + 4*len(address_indices) bytes |
+-----------------+--------------------------------------------------------+
```
### Proto Avalanche L1 Auth Specification
```text
message SubnetAuth {
uint32 type_id = 1; // 04 bytes
repeated AddressIndex address_indices = 2; // 04 + 4*len(address_indices) bytes
}
```
### Avalanche L1 Auth Example
Let's make an Avalanche L1 auth:
* **`TypeID`**: `10`
* **`AddressIndices`**: \[`0`]
```text
[
TypeID <- 0x0000000a
AddressIndices <- [
0x00000000
]
]
=
[
// type id
0x00, 0x00, 0x00, 0x0a,
// num address indices
0x00, 0x00, 0x00, 0x01,
// address_indices[0]
0x00, 0x00, 0x00, 0x00
]
```
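The serialization above, including the sorted-and-unique constraint on the indices, can be sketched in a few lines of Python. The helper name is invented for this example:

```python
import struct

SUBNET_AUTH_TYPE_ID = 0x0000000A

def serialize_subnet_auth(address_indices: list[int]) -> bytes:
    """Type ID, index count, then each 4-byte index."""
    assert address_indices == sorted(set(address_indices)), \
        "address indices must be sorted low to high with no duplicates"
    return struct.pack(">II", SUBNET_AUTH_TYPE_ID, len(address_indices)) + \
        b"".join(struct.pack(">I", i) for i in address_indices)

auth = serialize_subnet_auth([0])
# 8 + 4 * len(address_indices) bytes, per the Gantt spec.
assert len(auth) == 12
assert auth.hex() == "0000000a0000000100000000"
```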
## Validator
A validator verifies transactions on a blockchain.
### What Validator Contains
A validator contains `NodeID`, `Start`, `End`, and `Wght`
* **`NodeID`** is the ID of the validator
* **`Start`** Unix time this validator starts validating
* **`End`** Unix time this validator stops validating
* **`Wght`** Weight of this validator used when sampling
### Gantt Validator Specification
```text
+------------------+----------+
| node_id : string | 20 bytes |
+------------------+----------+
| start : uint64 | 8 bytes |
+------------------+----------+
| end : uint64 | 8 bytes |
+------------------+----------+
| wght : uint64 | 8 bytes |
+------------------+----------+
| | 44 bytes |
+------------------+----------+
```
### Proto Validator Specification
```text
message Validator {
string node_id = 1; // 20 bytes
uint64 start = 2; // 08 bytes
uint64 end = 3; // 08 bytes
uint64 wght = 4; // 08 bytes
}
```
### Validator Example
Let's make a validator:
* **`NodeID`**: `"NodeID-GWPcbFJZFfZreETSoWjPimr846mXEKCtu"`
* **`Start`**: `1643068824`
* **`End`**: `1644364767`
* **`Wght`**: `20`
```text
[
NodeID <- 0xaa18d3991cf637aa6c162f5e95cf163f69cd8291
Start <- 0x0000000061ef3d98
End <- 0x00000000620303df
Wght <- 0x0000000000000014
]
=
[
// node id
0xaa, 0x18, 0xd3, 0x99, 0x1c, 0xf6, 0x37, 0xaa,
0x6c, 0x16, 0x2f, 0x5e, 0x95, 0xcf, 0x16, 0x3f,
0x69, 0xcd, 0x82, 0x91,
// start
0x00, 0x00, 0x00, 0x00, 0x61, 0xef, 0x3d, 0x98,
// end
0x00, 0x00, 0x00, 0x00, 0x62, 0x03, 0x03, 0xdf,
// wght
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14,
]
```
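Since every validator field has a fixed size, serialization is a single `struct.pack` call. An illustrative Python sketch with an invented helper name, using the example values above:

```python
import struct

def serialize_validator(node_id: bytes, start: int, end: int, wght: int) -> bytes:
    """20-byte node ID followed by three big-endian uint64 fields (44 bytes total)."""
    assert len(node_id) == 20
    return node_id + struct.pack(">QQQ", start, end, wght)

node_id = bytes.fromhex("aa18d3991cf637aa6c162f5e95cf163f69cd8291")
v = serialize_validator(node_id, 1643068824, 1644364767, 20)
assert len(v) == 44
# start = 1643068824 = 0x61ef3d98, left-padded to 8 bytes
assert v[20:28].hex() == "0000000061ef3d98"
# wght = 20 = 0x14, left-padded to 8 bytes
assert v[-8:].hex() == "0000000000000014"
```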
## Rewards Owner
A rewards owner specifies where to send staking rewards when validation is done.
### What Rewards Owner Contains
A rewards owner contains a `TypeID`, `Locktime`, `Threshold`, and `Addresses`.
* **`TypeID`** is the ID for this type. It is `0x0000000b`.
* **`Locktime`** is a long that contains the Unix timestamp that this output can
be spent after. The Unix timestamp is specific to the second.
* **`Threshold`** is an int that names the number of unique signatures required
to spend the output. Must be less than or equal to the length of
**`Addresses`**. If **`Addresses`** is empty, must be 0.
* **`Addresses`** is a list of unique addresses that correspond to the private
keys that can be used to spend this output. Addresses must be sorted
lexicographically.
### Gantt Rewards Owner Specification
```text
+------------------------+-------------------------------+
| type_id : int | 4 bytes |
+------------------------+-------------------------------+
| locktime : long | 8 bytes |
+------------------------+-------------------------------+
| threshold : int | 4 bytes |
+------------------------+-------------------------------+
| addresses : [][20]byte | 4 + 20 * len(addresses) bytes |
+------------------------+-------------------------------+
|                        | 20 + 20 * len(addresses) bytes |
+------------------------+-------------------------------+
```
### Proto Rewards Owner Specification
```text
message RewardsOwner {
uint32 type_id = 1; // 04 bytes
uint64 locktime = 2; // 08 bytes
uint32 threshold = 3; // 04 bytes
repeated bytes addresses = 4; // 04 bytes + 20 bytes * len(addresses)
}
```
### Rewards Owner Example
Let's make a rewards owner:
* **`TypeID`**: `11`
* **`Locktime`**: `54321`
* **`Threshold`**: `1`
* **`Addresses`**:
* `0x51025c61fbcfc078f69334f834be6dd26d55a955`
* `0xc3344128e060128ede3523a24a461c8943ab0859`
```text
[
TypeID <- 0x0000000b
Locktime <- 0x000000000000d431
Threshold <- 0x00000001
Addresses <- [
0x51025c61fbcfc078f69334f834be6dd26d55a955,
0xc3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// type id
0x00, 0x00, 0x00, 0x0b,
// locktime
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x02,
// addrs[0]:
0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78,
0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2,
0x6d, 0x55, 0xa9, 0x55,
// addrs[1]:
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
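The rewards owner layout above, including the threshold and sorted-addresses constraints, can be sketched in Python. The helper name is invented for this example:

```python
import struct

REWARDS_OWNER_TYPE_ID = 0x0000000B

def serialize_rewards_owner(locktime: int, threshold: int,
                            addresses: list[bytes]) -> bytes:
    """Type ID, locktime, threshold, address count, then the 20-byte addresses."""
    assert addresses == sorted(addresses), "addresses must be sorted lexicographically"
    assert threshold <= len(addresses)
    header = struct.pack(">IQII", REWARDS_OWNER_TYPE_ID, locktime,
                         threshold, len(addresses))
    return header + b"".join(addresses)

addrs = [bytes.fromhex("51025c61fbcfc078f69334f834be6dd26d55a955"),
         bytes.fromhex("c3344128e060128ede3523a24a461c8943ab0859")]
owner = serialize_rewards_owner(54321, 1, addrs)
# 20 bytes of fixed fields + 20 bytes per address.
assert len(owner) == 20 + 20 * len(addrs)
assert owner[:20].hex() == "0000000b000000000000d4310000000100000002"
```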
## Unsigned Convert Subnet To L1 TX
### What Unsigned Convert Subnet To L1 TX Contains
An unsigned convert subnet to L1 TX contains a `BaseTx`, `Subnet`, `ChainID`, `Address`, `Validators`, and `SubnetAuth`. The `TypeID` for this type is `0x00000030`.
* **`BaseTx`**
* **`Subnet`** ID of the Subnet to transform into an L1. Must not be the Primary Network ID.
* **`ChainID`** BlockchainID where the validator manager lives.
* **`Address`** Address of the validator manager.
* **`Validators`** Initial continuous-fee-paying validators for the L1.
* **`SubnetAuth`** Authorizes this conversion. Must be signed by the Subnet's owner.
### Gantt Unsigned Convert Subnet To L1 TX Specification
```text
+------------+------------------+----------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+------------+------------------+----------------------------------+
| subnet : [32]byte | 32 bytes |
+------------+------------------+----------------------------------+
| chain_id : [32]byte | 32 bytes |
+------------+------------------+----------------------------------+
| address : []byte | 4 + len(address) bytes |
+------------+------------------+----------------------------------+
| validators : []L1Validator | 4 + size(validators) bytes |
+------------+------------------+----------------------------------+
| subnet_auth: SubnetAuth | 4 bytes + len(sig_indices) bytes |
+------------+------------------+----------------------------------+
| 76 + size(base_tx) + len(address) + size(validators) + len(sig_indices) bytes |
+----------------------------------------------------------------------------+
```
### Proto Unsigned Convert Subnet To L1 TX Specification
```text
message ConvertSubnetToL1Tx {
BaseTx base_tx = 1; // size(base_tx)
SubnetID subnet = 2; // 32 bytes
ChainID chain_id = 3; // 32 bytes
bytes address = 4; // 4 + len(address) bytes
repeated L1Validator validators = 5; // 4 + size(validators) bytes
SubnetAuth subnet_auth = 6; // 4 bytes + len(sig_indices)
}
```
## Unsigned Register L1 Validator TX
### What Unsigned Register L1 Validator TX Contains
An unsigned register L1 validator TX contains a `BaseTx`, `Balance`, `Signer` and `Message`. The `TypeID` for this type is `0x00000031`.
* **`BaseTx`**
* **`Balance`** is the amount of AVAX being provided for fees, where `Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee`.
* **`Signer`** is a BLS signature proving ownership of the BLS public key specified in the Message for this validator.
* **`Message`** is a RegisterL1ValidatorMessage payload delivered as a Warp Message.
### Gantt Unsigned Register L1 Validator TX Specification
```text
+------------+------------------+----------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+------------+------------------+----------------------------------+
| balance : uint64 | 8 bytes |
+------------+------------------+----------------------------------+
| signer : [96]byte | 96 bytes |
+------------+------------------+----------------------------------+
| message : WarpMessage | size(message) bytes |
+------------+------------------+----------------------------------+
| 104 + size(base_tx) + size(message) bytes |
+------------------------------------------------------------------+
```
### Proto Unsigned Register L1 Validator TX Specification
```text
message RegisterL1ValidatorTx {
BaseTx base_tx = 1; // size(base_tx)
uint64 balance = 2; // 8 bytes
bytes signer = 3; // 96 bytes
WarpMessage message = 4; // size(message) bytes
}
```
## Unsigned Set L1 Validator Weight TX
### What Unsigned Set L1 Validator Weight TX Contains
An unsigned set L1 validator weight TX contains a `BaseTx` and `Message`. The `TypeID` for this type is `0x00000032`.
* **`BaseTx`**
* **`Message`** An L1ValidatorWeightMessage payload delivered as a Warp Message. Contains the validationID, nonce, and new weight for a validator.
### Gantt Unsigned Set L1 Validator Weight TX Specification
```text
+------------+------------------+----------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+------------+------------------+----------------------------------+
| message : WarpMessage | size(message) bytes |
+------------+------------------+----------------------------------+
| size(base_tx) + size(message) bytes |
+------------------------------------------------------------------+
```
### Proto Unsigned Set L1 Validator Weight TX Specification
```text
message SetL1ValidatorWeightTx {
BaseTx base_tx = 1; // size(base_tx)
WarpMessage message = 2; // size(message) bytes
}
```
## Unsigned Disable L1 Validator TX
### What Unsigned Disable L1 Validator TX Contains
An unsigned disable L1 validator TX contains a `BaseTx`, `ValidationID` and `DisableAuth`. The `TypeID` for this type is `0x00000033`.
* **`BaseTx`**
* **`ValidationID`** ID corresponding to the validator to be disabled.
* **`DisableAuth`** Authorizes this validator to be disabled. Must be signed by the DisableOwner specified when the validator was added.
### Gantt Unsigned Disable L1 Validator TX Specification
```text
+----------------+------------------+----------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+----------------+------------------+----------------------------------+
| validation_id : [32]byte | 32 bytes |
+----------------+------------------+----------------------------------+
| disable_auth : Verifiable | size(disable_auth) bytes |
+----------------+------------------+----------------------------------+
| 32 + size(base_tx) + size(disable_auth) bytes |
+----------------------------------------------------------------------+
```
### Proto Unsigned Disable L1 Validator TX Specification
```text
message DisableL1ValidatorTx {
BaseTx base_tx = 1; // size(base_tx)
bytes validation_id = 2; // 32 bytes
Verifiable disable_auth = 3; // size(disable_auth) bytes
}
```
## Unsigned Increase L1 Validator Balance TX
### What Unsigned Increase L1 Validator Balance TX Contains
An unsigned increase L1 validator balance TX contains a `BaseTx`, `ValidationID` and `Balance`. The `TypeID` for this type is `0x00000034`.
* **`BaseTx`**
* **`ValidationID`** ID corresponding to the validator.
* **`Balance`** Additional AVAX amount to add to the validator's balance where `Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee`.
### Gantt Unsigned Increase L1 Validator Balance TX Specification
```text
+----------------+------------------+----------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+----------------+------------------+----------------------------------+
| validation_id : [32]byte | 32 bytes |
+----------------+------------------+----------------------------------+
| balance : uint64 | 8 bytes |
+----------------+------------------+----------------------------------+
| 40 + size(base_tx) bytes |
+----------------------------------------------------------------------+
```
### Proto Unsigned Increase L1 Validator Balance TX Specification
```text
message IncreaseL1ValidatorBalanceTx {
BaseTx base_tx = 1; // size(base_tx)
bytes validation_id = 2; // 32 bytes
uint64 balance = 3; // 8 bytes
}
```
# Deploy a Smart Contract
URL: /docs/avalanche-l1s/add-utility/deploy-smart-contract
Deploy a smart contract on your Avalanche L1.
This tutorial assumes that:
* An [Avalanche L1 and EVM blockchain](/docs/avalanche-l1s/deploy-a-avalanche-l1/fuji-testnet) has been created
* Your node is currently validating your target Avalanche L1
* Your wallet has a balance of the Avalanche L1's native token (specified under *alloc* in your [Genesis File](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#genesis))
## Step 1: Setting up Core[](#step-1-setting-up-core "Direct link to heading")
### **EVM Avalanche L1 Settings**: [(EVM Core Tutorial)](/docs/avalanche-l1s/deploy-a-avalanche-l1/fuji-testnet#connect-with-core)[](#evm-avalanche-l1-settings-evm-core-tutorial "Direct link to heading")
* **`Network Name`**: Custom Subnet-EVM
* **`New RPC URL`**: [http://NodeIPAddress:9650/ext/bc/BlockchainID/rpc](http://NodeIPAddress:9650/ext/bc/BlockchainID/rpc) (Note: the port number should match your local setting which can be different from 9650.)
* **`ChainID`**: Subnet-EVM ChainID
* **`Symbol`**: Subnet-EVM Token Symbol
* **`Explorer`**: N/A
You should see a balance of your Avalanche L1's Native Token in Core.

## Step 2: Connect Core and Deploy a Smart Contract[](#step-2-connect-core-and-deploy-a-smart-contract "Direct link to heading")
### Using Remix[](#using-remix "Direct link to heading")
Open [Remix](https://remix.ethereum.org/) -> Select Solidity.

Create the smart contracts that we want to compile and deploy using the Remix file explorer.
### Using GitHub[](#using-github "Direct link to heading")
In Remix Home *Click* the GitHub button.

Paste the [link to the Smart Contract](https://github.com/ava-labs/avalanche-smart-contract-quickstart/blob/main/contracts/NFT.sol) into the popup and *Click* import.

For this example, we will deploy an ERC721 contract from the [Avalanche Smart Contract Quickstart Repository](https://github.com/ava-labs/avalanche-smart-contract-quickstart).

Navigate to Deploy Tab -> Open the "ENVIRONMENT" drop-down and select Injected Web3 (make sure Core is loaded).

Once web3 is injected, go back to the compiler tab, compile the selected contract, then navigate to the Deploy tab.

Now, the smart contract is compiled, Core is injected, and we are ready to deploy our ERC721. Click "Deploy."

Confirm the transaction on the Core pop up.

Our contract is successfully deployed!

Now, we can expand it by selecting it from the "Deployed Contracts" tab and test it out.

The contract ABI and Bytecode are available on the compiler tab.

If you had any difficulties following this tutorial or simply want to discuss Avalanche with us, you can join our community at [Discord](https://chat.avalabs.org/)!
You can use Subnet-EVM just like you use the C-Chain and EVM tools. The only differences are the `chainId` and the RPC URL. For example, you can deploy your contracts with the [hardhat quick start guide](/docs/dapps/toolchains/hardhat) by changing `url` and `chainId` in the `hardhat.config.ts`.
# Add a Testnet Faucet
URL: /docs/avalanche-l1s/add-utility/testnet-faucet
This guide will help you add a testnet faucet to your Avalanche L1.
There are thousands of networks and chains in the blockchain space, each with its own capabilities and use cases. Every network requires native coins to transact on it, and these coins can carry monetary value; they can be obtained through centralized exchanges, token sales, and similar channels in exchange for monetary assets like USD.
But we cannot risk our funds on a network, or on any application hosted on it, without testing first. So, networks often have test networks, or testnets, where the native coins have no monetary value and can be obtained freely through faucets.
These testnets are often the testbeds for any new native feature of the network itself, or any dapp or [Avalanche L1](/docs/quick-start/avalanche-l1s) that is going live on the main network (Mainnet). For example, [Fuji](/docs/quick-start/networks/fuji-testnet) is the testnet for Avalanche's Mainnet.
Besides Fuji Testnet, the [Avalanche Faucet](https://core.app/tools/testnet-faucet/?avalanche-l1=c\&token=c) can be used to get free test tokens on testnet Avalanche L1s like:
* [WAGMI Testnet](https://core.app/tools/testnet-faucet/?avalanche-l1=wagmi)
* [DeFI Kingdoms Testnet](https://core.app/tools/testnet-faucet/?avalanche-l1=dfk)
* [Beam Testnet](https://core.app/tools/testnet-faucet/?avalanche-l1=beam\&token=beam) and many more.
You can use this [repository](https://github.com/ava-labs/avalanche-faucet) to deploy your faucet or just make a PR with the [configurations](https://github.com/ava-labs/avalanche-faucet/blob/main/config.json) of the Avalanche L1. This faucet comes with many features like multiple chain support, custom rate-limiting per Avalanche L1, CAPTCHA verification, and concurrent transaction handling.
## Summary[](#summary "Direct link to heading")
A [Faucet](https://core.app/tools/testnet-faucet/) powered by Avalanche for the Fuji network and other Avalanche L1s. You can:
* Request test coins for the supported Avalanche L1s
* Integrate your EVM Avalanche L1 with the faucet by making a PR with the [chain configurations](https://github.com/ava-labs/avalanche-faucet/blob/main/config.json)
* Fork the [repository](https://github.com/ava-labs/avalanche-faucet) to deploy your faucet for any EVM chain
## Adding a New Avalanche L1[](#adding-a-new-avalanche-l1 "Direct link to heading")
You can also integrate a new Avalanche L1 on the live [faucet](https://core.app/tools/testnet-faucet/) with just a few lines of configuration parameters. All you have to do is make a PR on the [Avalanche Faucet](https://github.com/ava-labs/avalanche-faucet) git repository with the Avalanche L1's information. The following parameters are required.
```json
{
"ID": string,
"NAME": string,
"TOKEN": string,
"RPC": string,
"CHAINID": number,
"EXPLORER": string,
"IMAGE": string,
"MAX_PRIORITY_FEE": string,
"MAX_FEE": string,
"DRIP_AMOUNT": number,
"RATELIMIT": {
"MAX_LIMIT": number,
"WINDOW_SIZE": number
}
}
```
* `ID` - Each Avalanche L1 chain should have a unique and relatable ID.
* `NAME` - Name of the Avalanche L1 chain that will appear on the site.
* `RPC` - A valid RPC URL for accessing the chain.
* `CHAINID` - ChainID of the chain.
* `EXPLORER` - Base URL of the standard explorer's site.
* `IMAGE` - URL of the icon of the chain that will be shown in the dropdown.
* `MAX_PRIORITY_FEE` - Maximum tip per faucet drop, in **wei** (the 10^-18 unit; for EIP-1559 supported chains)
* `MAX_FEE` - Maximum fee that can be paid for a faucet drop, in **wei** (the 10^-18 unit)
* `DRIP_AMOUNT` - Amount of coins to send per request, in **gwei** (the 10^-9 unit)
* `RECALIBRATE` *(optional)* - Number of seconds after which the nonce and balance will be recalibrated
* `RATELIMIT` - Number of requests (`MAX_LIMIT`) to allow per user within `WINDOW_SIZE` (in minutes)
Add the configuration in the array of `evmchains` inside the [config.json](https://github.com/ava-labs/avalanche-faucet/blob/main/config.json) file and make a PR.
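To double-check the unit mix above (fees in wei, drip amount in gwei), here is a small illustrative conversion sketch; it is not part of the faucet codebase:

```typescript
// Fees (MAX_FEE, MAX_PRIORITY_FEE) are denominated in wei (10^-18 of the
// native token); DRIP_AMOUNT is denominated in gwei/nAVAX (10^-9).
const WEI_PER_TOKEN = 10n ** 18n;
const GWEI_PER_TOKEN = 10n ** 9n;

function weiToToken(wei: bigint): number {
  return Number(wei) / Number(WEI_PER_TOKEN);
}

function gweiToToken(gwei: bigint): number {
  return Number(gwei) / Number(GWEI_PER_TOKEN);
}

// For example, DRIP_AMOUNT = 2000000000 gwei is 2 whole tokens,
// and MAX_FEE = 100000000000 wei is 100 gwei (100 nAVAX) per gas.
console.log(gweiToToken(2_000_000_000n));
console.log(weiToToken(100_000_000_000n));
```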
## Building and Deploying a Faucet[](#building-and-deploying-a-faucet "Direct link to heading")
You can also deploy and build your faucet by using the [Avalanche Faucet](https://github.com/ava-labs/avalanche-faucet) repository.
### Requirements[](#requirements "Direct link to heading")
* [Node](https://nodejs.org/en) >= 17.0 and [npm](https://www.npmjs.com/) >= 8.0
* [Google's reCAPTCHA](https://www.google.com/recaptcha/intro/v3.html) v3 keys
* [Docker](https://www.docker.com/get-started/)
### Installation[](#installation "Direct link to heading")
Clone this repository at your preferred location.
```bash
git clone https://github.com/ava-labs/avalanche-faucet
```
The repository cloning method used is HTTPS, but SSH can be used too:
`git clone git@github.com:ava-labs/avalanche-faucet.git`
You can find more about SSH and how to use it [here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/about-ssh).
### Client-Side Configurations[](#client-side-configurations "Direct link to heading")
We need to configure our application with the server API endpoints and CAPTCHA site keys. All the client-side configurations are there in the `client/src/config.json` file. Since there are no secrets on the client-side, we do not need any environment variables. Update the config files according to your need.
```json
{
"banner": "/banner.png",
"apiBaseEndpointProduction": "/api/",
"apiBaseEndpointDevelopment": "http://localhost:8000/api/",
"apiTimeout": 10000,
"CAPTCHA": {
"siteKey": "6LcNScYfAAAAAJH8fauA-okTZrmAxYqfF9gOmujf",
"action": "faucetdrip"
}
}
```
Set Google's reCAPTCHA site key here; without it, the faucet client can't send the necessary CAPTCHA response to the server. This key is not a secret and can be public.
In the above file, there are 2 base endpoints for the faucet server `apiBaseEndpointProduction` and `apiBaseEndpointDevelopment`.
In production mode, the client-side will be served as static content over the server's endpoint, and hence we do not have to provide the server's IP address or domain.
The URL path should point to where the server's APIs are hosted. For example, if the API endpoints have a leading `/v1/api` and the server is running on localhost at port 3000, use `http://localhost:3000/v1/api` in development or `/v1/api/` in production.
### Server-Side Configurations[](#server-side-configurations "Direct link to heading")
On the server-side, we need to configure 2 files - `.env` for secret keys and `config.json` for chain and API rate limiting configurations.
#### Setup Environment Variables[](#setup-environment-variables "Direct link to heading")
Setup the environment variable with your private key and reCAPTCHA secret. Make a `.env` file in your preferred location with the following credentials, as this file will not be committed to the repository. The faucet server can handle multiple EVM chains, and therefore requires private keys for addresses with funds on each of the chains.
If you have funds on the same address on every chain, then you can specify them with the single variable `PK`. But if you have funds on different addresses on different chains, then you can provide each private key against the ID of the chain, as shown below.
```bash
C="C chain private key"
WAGMI="Wagmi chain private key"
PK="Sender Private Key with Funds in it"
CAPTCHA_SECRET="Google reCAPTCHA Secret"
```
`PK` will act as a fallback private key in case the key for a chain is not provided.
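A minimal sketch of this fallback behavior (illustrative only; not the faucet's actual key-loading code):

```typescript
// Resolve the signing key for a chain: prefer a chain-specific environment
// variable (e.g. WAGMI), and fall back to the generic PK.
function resolvePrivateKey(
  chainId: string,
  env: Record<string, string | undefined>
): string | undefined {
  return env[chainId] ?? env.PK;
}

// resolvePrivateKey("WAGMI", process.env) -> the WAGMI key if set, else PK
```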
#### Setup EVM Chain Configurations[](#setup-evm-chain-configurations "Direct link to heading")
You can create a faucet server for any EVM chain by making changes in the `config.json` file. Add your chain configuration as shown below in the `evmchains` object. Configuration for Fuji's C-Chain and WAGMI chain is shown below for example.
```json
"evmchains": [
{
"ID": "C",
"NAME": "Fuji (C-Chain)",
"TOKEN": "AVAX",
"RPC": "https://api.avax-test.network/ext/C/rpc",
"CHAINID": 43113,
"EXPLORER": "https://testnet.snowtrace.io",
"IMAGE": "/avaxred.png",
"MAX_PRIORITY_FEE": "2000000000",
"MAX_FEE": "100000000000",
"DRIP_AMOUNT": 2000000000,
"RECALIBRATE": 30,
"RATELIMIT": {
"MAX_LIMIT": 1,
"WINDOW_SIZE": 1440
}
},
{
"ID": "WAGMI",
"NAME": "WAGMI Testnet",
"TOKEN": "WGM",
"RPC": "https://subnets.avax.network/wagmi/wagmi-chain-testnet/rpc",
"CHAINID": 11111,
"EXPLORER": "https://subnets.avax.network/wagmi/wagmi-chain-testnet/explorer",
"IMAGE": "/wagmi.png",
"MAX_PRIORITY_FEE": "2000000000",
"MAX_FEE": "100000000000",
"DRIP_AMOUNT": 2000000000,
"RATELIMIT": {
"MAX_LIMIT": 1,
"WINDOW_SIZE": 1440
}
}
]
```
In the above configuration the drip amount is in `nAVAX` or `gwei`, whereas fees are in `wei`. For example, with the above configurations, the faucet will send `2 AVAX` per drop, with the maximum fee per gas being `100 nAVAX` and the priority fee being `2 nAVAX`.
With the rate-limit settings shown above, both the C-Chain and WAGMI configurations accept 1 request (`MAX_LIMIT`) per user within a 1440-minute (24-hour) window (`WINDOW_SIZE`). Failed requests are skipped, so users can request tokens again even if there is some internal error in the application. The global rate limiter, on the other hand, also counts failed requests, so that no one can abuse the APIs.
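The per-chain limit described above behaves like a fixed-window counter. The following is an illustrative sketch of that behavior, not the faucet's actual middleware (which lives in `middlewares/rateLimiter.ts`):

```typescript
// Fixed-window rate limiter: allow at most maxLimit requests per key
// (e.g. per client IP) within each window of windowMs milliseconds.
class FixedWindowLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();
  constructor(private maxLimit: number, private windowMs: number) {}

  allow(key: string, now: number): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request in a fresh window: reset the counter.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    if (entry.count < this.maxLimit) {
      entry.count += 1;
      return true;
    }
    return false; // over the limit for this window
  }
}

// MAX_LIMIT: 1 request per WINDOW_SIZE: 1440 minutes, as in the config above.
const limiter = new FixedWindowLimiter(1, 1440 * 60 * 1000);
```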
### API Endpoints[](#api-endpoints "Direct link to heading")
This server will expose the following APIs
#### Health API[](#health-api "Direct link to heading")
The `/health` API will always return a response with a `200` status code. This endpoint can be used to know the health of the server.
```bash
curl http://localhost:8000/health
```
Response
#### Get Faucet Address[](#get-faucet-address "Direct link to heading")
This API will be used for fetching the faucet address.
```bash
curl http://localhost:8000/api/faucetAddress?chain=C
```
It will give the following response:
```bash
0x3EA53fA26b41885cB9149B62f0b7c0BAf76C78D4
```
#### Get Faucet Balance[](#get-faucet-balance "Direct link to heading")
This API will be used for fetching the faucet balance.
```bash
curl http://localhost:8000/api/getBalance?chain=C
```
#### Send Token[](#send-token "Direct link to heading")
This API endpoint will handle token requests from users. It will return the transaction hash as a receipt of the faucet drip.
```bash
curl -d '{
"address": "0x3EA53fA26b41885cB9149B62f0b7c0BAf76C78D4",
"chain": "C"
}' -H 'Content-Type: application/json' http://localhost:8000/api/sendToken
```
The send token API requires a CAPTCHA response token that is generated using the CAPTCHA site key on the client side.
Since we can't generate and pass this token in a curl request, CAPTCHA verification has to be disabled for testing purposes. You can find the steps to disable it in the next sections. The response is shown below:
```json
{
"message": "Transaction successful on Avalanche C Chain!",
"txHash": "0x3d1f1c3facf59c5cd7d6937b3b727d047a1e664f52834daf20b0555e89fc8317"
}
```
### Rate Limiters[](#rate-limiters-important "Direct link to heading")
The rate limiters are applied on the global (all endpoints) as well as on the `/api/sendToken` API. These can be configured from the `config.json` file. Rate limiting parameters for chains are passed in the chain configuration as shown above.
```json
"GLOBAL_RL": {
"ID": "GLOBAL",
"RATELIMIT": {
"REVERSE_PROXIES": 4,
"MAX_LIMIT": 40,
"WINDOW_SIZE": 1,
"PATH": "/",
"SKIP_FAILED_REQUESTS": false
}
}
```
There could be multiple proxies between the server and the client. The server will see the IP address of the adjacent proxy connected with the server, and this may not be the client's actual IP.
The IPs of all the proxies that the request has hopped through are stuffed inside the header **x-forwarded-for** array. But the proxies in between can easily manipulate these headers to bypass rate limiters. So, we cannot trust all the proxies and hence all the IPs inside the header.
The proxies that are set up by the owner of the server (reverse-proxies) are the trusted proxies on which we can rely and know that they have stuffed the actual IP of the callers in between. Any proxy that is not set up by the server, should be considered an untrusted proxy. So, we can jump to the IP address added by the last proxy that we trust. The number of jumps that we want can be configured in the `config.json` file inside the `GLOBAL_RL` object.
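The jump described above can be sketched as follows. This is an illustrative helper, not the code from `middlewares/rateLimiter.ts`:

```typescript
// Given the x-forwarded-for header value and the number of reverse proxies
// we operate (REVERSE_PROXIES), walk back past the trusted hops; the entry
// just before them is our best guess at the client's IP. Entries further
// left were appended by proxies we do not control and cannot be trusted.
function clientIp(xForwardedFor: string, reverseProxies: number): string {
  const hops = xForwardedFor.split(",").map((ip) => ip.trim());
  const index = Math.max(hops.length - reverseProxies, 0);
  return hops[index];
}

// e.g. header "client, untrustedProxy, trustedProxy" with reverseProxies = 2
// returns "untrustedProxy" — the nearest hop we cannot vouch for.
```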

#### Clients Behind Same Proxy[](#clients-behind-same-proxy "Direct link to heading")
Consider the below diagram. The server is set up with 2 reverse proxies. If the client is behind proxies, then we cannot get the client's actual IP, and instead will consider the proxy's IP as the client's IP. And if some other client is behind the same proxy, then those clients will be considered as a single entity and might get rate-limited faster.

Therefore, users are advised to avoid using any proxy when accessing applications that have strict rate limits, like this faucet.
#### Wrong Number of Reverse Proxies[](#wrong-number-of-reverse-proxies "Direct link to heading")
So, if you want to deploy this faucet, and have some reverse proxies in between, then you should configure this inside the `GLOBAL_RL` key of the `config.json` file. If this is not configured properly, then the users might get rate-limited very frequently, since the server-side proxy's IP addresses are being viewed as the client's IP. You can verify this in the code [here](https://github.com/ava-labs/avalanche-faucet/blob/23eb300635b64130bc9ce10d9e894f0a0b3d81ea/middlewares/rateLimiter.ts#L25).
```json
"GLOBAL_RL": {
"ID": "GLOBAL",
"RATELIMIT": {
"REVERSE_PROXIES": 4,
...
}
}
```

It is also quite common to have Cloudflare as the last reverse proxy or the exposed server. Cloudflare provides a **cf-connecting-ip** header containing the IP of the client that originally connected to Cloudflare. We use this by default.
### CAPTCHA Verification[](#captcha-verification "Direct link to heading")
CAPTCHA is required to prove the user is a human and not a bot. For this purpose, we will use [Google's reCAPTCHA](https://www.google.com/recaptcha/intro/v3.html). The server-side will require `CAPTCHA_SECRET` that should not be exposed. You can set the threshold score to pass the CAPTCHA test by the users [here](https://github.com/ava-labs/avalanche-faucet/blob/23eb300635b64130bc9ce10d9e894f0a0b3d81ea/middlewares/verifyCaptcha.ts#L20).
You can disable CAPTCHA verification and the rate limiters for testing purposes by tweaking the `server.ts` file.
### Disabling Rate Limiters[](#disabling-rate-limiters "Direct link to heading")
Comment or remove these two lines from the `server.ts` file
```ts title="server.ts"
new RateLimiter(app, [GLOBAL_RL]);
new RateLimiter(app, evmchains);
```
### Disabling CAPTCHA Verification[](#disabling-captcha-verification "Direct link to heading")
Remove the `captcha.middleware` from `sendToken` API.
### Starting the Faucet[](#starting-the-faucet "Direct link to heading")
Follow the commands below to start your local faucet.
#### Installing Dependencies[](#installing-dependencies "Direct link to heading")
This will concurrently install dependencies for both client and server.
With the default port configuration, the client starts on port 3000 and the server on port 8000 in development mode.
#### Starting in Development Mode[](#starting-in-development-mode "Direct link to heading")
This will concurrently start the server and client in development mode.
#### Building for Production[](#building-for-production "Direct link to heading")
The following command will build server and client at `build/` and `build/client` directories.
#### Starting in Production Mode[](#starting-in-production-mode "Direct link to heading")
This command should only be run after successfully building the client and server-side code.
### Setting up with Docker[](#setting-up-with-docker "Direct link to heading")
Follow the steps to run this application in a Docker container.
#### Build Docker Image[](#build-docker-image "Direct link to heading")
A Docker image is a built version of our application that can be deployed in a Docker container.
```bash
docker build . -t faucet-image
```
#### Starting Application inside Docker Container[](#starting-application-inside-docker-container "Direct link to heading")
Now we can create any number of containers using the `faucet-image` image built above. We also have to supply the `.env` file, or the environment variables with the secret keys, when creating the container. Once the container is created, these variables and configurations persist, and the container can be started or stopped with a single command.
```bash
docker run -p 3000:8000 --name faucet-container --env-file ../.env faucet-image
```
The server runs on port 8000, and the container exposes this port (declared in the `Dockerfile`) so the outside world can interact with it. Since we cannot interact with the container port directly, we bind it to a host port; here we chose 3000, which is what the `-p 3000:8000` flag does.
This starts the faucet application in a Docker container on port 3000 (port 8000 inside the container). You can interact with the application by visiting [http://localhost:3000](http://localhost:3000) in your browser.
#### Stopping the Container[](#stopping-the-container "Direct link to heading")
You can easily stop the container using the following command
```bash
docker stop faucet-container
```
#### Restarting the Container[](#restarting-the-container "Direct link to heading")
To restart the container, use the following command
```bash
docker start faucet-container
```
## Using the Faucet[](#using-the-faucet "Direct link to heading")
Using the faucet is quite straightforward, but for the sake of completeness, let's go through the steps, to collect your first test coins.
### Visit Avalanche Faucet Site[](#visit-avalanche-faucet-site "Direct link to heading")
Go to [https://core.app/tools/testnet-faucet/](https://core.app/tools/testnet-faucet/). You will see various network parameters like network name, faucet balance, drop amount, drop limit, faucet address, etc.

### Select Network[](#select-network "Direct link to heading")
You can use the dropdown to select the network of your choice and get some free coins (each network may have a different drop amount).

### Put Address and Request Coins[](#put-address-and-request-coins "Direct link to heading")
If you already have an AVAX balance greater than zero on Mainnet, paste your C-Chain address there, and request test tokens. Otherwise, please request a faucet coupon on [Guild](https://guild.xyz/avalanche). Admins and mods on the official [Discord](https://discord.com/invite/RwXY7P6) can provide testnet AVAX if developers are unable to obtain it from the other two options.
Within a second, you will get a **transaction hash** for the processed transaction. The hash is a hyperlink to the Avalanche L1's explorer; click it to see the transaction status.

### More Interactions[](#more-interactions "Direct link to heading")
This is not just it. Using the buttons shown below, you can go to the Avalanche L1 explorer or add the Avalanche L1 to your browser wallet extensions like Core or MetaMask with a single click.

### Probable Errors and Troubleshooting[](#probable-errors-and-troubleshooting "Direct link to heading")
Errors are not expected, but if you are facing some of the errors shown, then you could try troubleshooting as shown below. If none of the troubleshooting works, reach us through [Discord](https://discord.com/channels/578992315641626624/).
1. **Too many requests. Please try again after X minutes**: This is a rate-limiting message. Every Avalanche L1 can set its drop limits. The above message suggests that you have reached your drop limit, that is the number of times you could request coins within the window of X minutes. You should try requesting after X minutes. If you are facing this problem, even when you are requesting for the first time in the window, you may be behind some proxy, Wi-Fi, or VPN service that is also being used by some other user.
2. **CAPTCHA verification failed! Try refreshing**: We are using v3 of [Google's reCAPTCHA](https://developers.google.com/recaptcha/docs/v3). This version uses scores between 0 and 1 to rate the interaction of humans with the site, with 0 being the most suspicious one. You do not have to solve any puzzle or mark the **I am not a Robot** checkbox. The score will be automatically calculated. We want our users to score at least 0.3 to use the faucet. This is configurable, and we will update the threshold after having broader data. But if you are facing this issue, then you can try refreshing your page, disabling ad-blockers, or switching off any VPN. You can follow this [guide](https://2captcha.com/blog/google-doesnt-accept-recaptcha-answers) to get rid of this issue.
3. **Internal RPC error! Please try after sometime**: This is an internal error in the Avalanche L1's node, on which we are making an RPC call for sending transactions. A regular check updates the RPC's health status every 30 seconds (default) or whatever is set in the configuration. This may happen only in rare scenarios, and you cannot do much about it other than waiting.
4. **Timeout of 10000ms exceeded**: There could be many reasons for this message: an internal server error, the request not reaching the server, a slow internet connection, etc. Try again after some time, and if the problem persists, raise the issue on our [Discord](https://discord.com/channels/578992315641626624/) server.
5. **Couldn't see any transaction status on explorer**: The transaction hash that you get for each drop is pre-computed using the expected nonce, amount, and receiver's address. Though transactions on Avalanche are near-instant, the explorer may take time to index those transactions. You should wait for a few more seconds, before raising any issue or reaching out to us.
# X-Chain API
URL: /docs/api-reference/x-chain/api
This page is an overview of the X-Chain API associated with AvalancheGo.
The [X-Chain](https://build.avax.network/docs/quick-start/primary-network#x-chain),
Avalanche's native platform for creating and trading assets, is an instance of the Avalanche Virtual
Machine (AVM). This API allows clients to create and trade assets on the X-Chain and other instances
of the AVM.
## Format
This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see
[here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls).
## Endpoints
`/ext/bc/X` to interact with the X-Chain.
`/ext/bc/blockchainID` to interact with other AVM instances, where `blockchainID` is the ID of a
blockchain running the AVM.
## Methods
### `avm.getAllBalances`
Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
Get the balances of all assets controlled by a given address.
**Signature:**
```sh
avm.getAllBalances({address:string}) -> {
balances: []{
asset: string,
balance: int
}
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" : 1,
"method" :"avm.getAllBalances",
"params" :{
"address":"X-avax1c79e0dd0susp7dc8udq34jgk2yvve7hapvdyht"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"balances": [
{
"asset": "AVAX",
"balance": "102"
},
{
"asset": "2sdnziCz37Jov3QSNMXcFRGFJ1tgauaj6L7qfk7yUcRPfQMC79",
"balance": "10000"
}
]
},
"id": 1
}
```
### `avm.getAssetDescription`
Get information about an asset.
**Signature:**
```sh
avm.getAssetDescription({assetID: string}) -> {
assetID: string,
name: string,
symbol: string,
denomination: int
}
```
* `assetID` is the id of the asset for which the information is requested.
* `name` is the asset’s human-readable, not necessarily unique name.
* `symbol` is the asset’s symbol.
* `denomination` determines how balances of this asset are displayed by user interfaces. If
denomination is 0, 100 units of this asset are displayed as 100. If denomination is 1, 100 units
of this asset are displayed as 10.0. If denomination is 2, 100 units of this asset are displayed as
1.00, and so on.
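As an illustration of this rule, a UI-side formatter might look like the sketch below; `formatUnits` is a hypothetical helper, not an AvalancheGo API:

```typescript
// Render a raw integer balance according to its asset's denomination:
// shift the decimal point `denomination` places to the left.
function formatUnits(raw: bigint, denomination: number): string {
  const s = raw.toString().padStart(denomination + 1, "0");
  if (denomination === 0) return s;
  return `${s.slice(0, -denomination)}.${s.slice(-denomination)}`;
}

// denomination 0: 100 units -> "100"
// denomination 1: 100 units -> "10.0"
// denomination 9 (AVAX): 1000000000 nAVAX -> "1.000000000"
```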
The AssetID for AVAX differs depending on the network you are on:
* Mainnet: `FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z`
* Testnet: `U8iRqJoiJm8xZHAacmvYyZVwqQx6uDNtQeP3CQ6fcgQk3JqnK`
To find the `assetID` of other assets, note that `avm.getUTXOs` returns the `assetID` in its output.
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avm.getAssetDescription",
"params" :{
"assetID" :"FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"name": "Avalanche",
"symbol": "AVAX",
"denomination": "9"
},
"id": 1
}
```
### `avm.getBalance`
Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
Get the balance of an asset controlled by a given address.
**Signature:**
```sh
avm.getBalance({
address: string,
assetID: string
}) -> {balance: int}
```
* `address` owner of the asset
* `assetID` id of the asset for which the balance is requested
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" : 1,
"method" :"avm.getBalance",
"params" :{
"address":"X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
"assetID": "2pYGetDWyKdHxpFxh2LHeoLNCH6H5vxxCxHQtFnnFaYxLsqtHC"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"balance": "299999999999900",
"utxoIDs": [
{
"txID": "WPQdyLNqHfiEKp4zcCpayRHYDVYuh1hqs9c1RqgZXS4VPgdvo",
"outputIndex": 1
}
]
}
}
```
### `avm.getBlock`
Returns the block with the given ID.
**Signature:**
```sh
avm.getBlock({
blockID: string
encoding: string // optional
}) -> {
block: string,
encoding: string
}
```
**Request:**
* `blockID` is the block ID. It should be in cb58 format.
* `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`.
**Response:**
* `block` is the block, encoded in the requested `encoding` format.
* `encoding` is the encoding format used.
#### Hex Example
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "avm.getBlock",
"params": {
"blockID": "tXJ4xwmR8soHE6DzRNMQPtiwQvuYsHn6eLLBzo2moDqBquqy6",
"encoding": "hex"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"block": "0x00000000002000000000641ad33ede17f652512193721df87994f783ec806bb5640c39ee73676caffcc3215e0651000000000049a80a000000010000000e0000000100000000000000000000000000000000000000000000000000000000000000000000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000002e1a2a3910000000000000000000000001000000015cf998275803a7277926912defdf177b2e97b0b400000001e0d825c5069a7336671dd27eaa5c7851d2cf449e7e1cdc469c5c9e5a953955950000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000008908223b680000000100000000000000005e45d02fcc9e585544008f1df7ae5c94bf7f0f2600000000641ad3b600000000642d48b60000005aedf802580000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000005aedf80258000000000000000000000001000000015cf998275803a7277926912defdf177b2e97b0b40000000b000000000000000000000001000000012892441ba9a160bcdc596dcd2cc3ad83c3493589000000010000000900000001adf2237a5fe2dfd906265e8e14274aa7a7b2ee60c66213110598ba34fb4824d74f7760321c0c8fb1e8d3c5e86909248e48a7ae02e641da5559351693a8a1939800286d4fa2",
"encoding": "hex"
},
"id": 1
}
```
### `avm.getBlockByHeight`
Returns block at the given height.
**Signature:**
```sh
avm.getBlockByHeight({
height: string
encoding: string // optional
}) -> {
block: string,
encoding: string
}
```
**Request:**
* `height` is the block height. It should be in `string` format.
* `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`.
**Response:**
* `block` is the block, encoded in the requested `encoding` format.
* `encoding` is the encoding format used.
#### Hex Example
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "avm.getBlockByHeight",
"params": {
"height": "275686313486",
"encoding": "hex"
},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"block": "0x00000000002000000000642f6739d4efcdd07e4d4919a7fc2020b8a0f081dd64c262aaace5a6dad22be0b55fec0700000000004db9e100000001000000110000000100000000000000000000000000000000000000000000000000000000000000000000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000005c6ece390000000000000000000000000100000001930ab7bf5018bfc6f9435c8b15ba2fe1e619c0230000000000000000ed5f38341e436e5d46e2bb00b45d62ae97d1b050c64bc634ae10626739e35c4b00000001c6dda861341665c3b555b46227fb5e56dc0a870c5482809349f04b00348af2a80000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000005c6edd7b40000000010000000000000001000000090000000178688f4d5055bd8733801f9b52793da885bef424c90526c18e4dd97f7514bf6f0c3d2a0e9a5ea8b761bc41902eb4902c34ef034c4d18c3db7c83c64ffeadd93600731676de",
"encoding": "hex"
},
"id": 1
}
```
### `avm.getHeight`
Returns the height of the last accepted block.
**Signature:**
```sh
avm.getHeight() ->
{
height: uint64,
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc": "2.0",
"method": "avm.getHeight",
"params": {},
"id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"height": "5094088"
},
"id": 1
}
```
### `avm.getTx`
Returns the specified transaction. The `encoding` parameter sets the format of the returned
transaction. Can be either `"hex"` or `"json"`. Defaults to `"hex"`.
**Signature:**
```sh
avm.getTx({
txID: string,
encoding: string, //optional
}) -> {
tx: string,
encoding: string,
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avm.getTx",
"params" :{
"txID":"2oJCbb8pfdxEHAf9A8CdN4Afj9VSR3xzyzNkf8tDv7aM1sfNFL",
"encoding": "json"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"result": {
"tx": {
"unsignedTx": {
"networkID": 1,
"blockchainID": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM",
"outputs": [],
"inputs": [
{
"txID": "2jbZUvi6nHy3Pgmk8xcMpSg5cW6epkPqdKkHSCweb4eRXtq4k9",
"outputIndex": 1,
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"input": {
"amount": 2570382395,
"signatureIndices": [0]
}
}
],
"memo": "0x",
"destinationChain": "11111111111111111111111111111111LpoYY",
"exportedOutputs": [
{
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"output": {
"addresses": ["X-avax1tnuesf6cqwnjw7fxjyk7lhch0vhf0v95wj5jvy"],
"amount": 2569382395,
"locktime": 0,
"threshold": 1
}
}
]
},
"credentials": [
{
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"credential": {
"signatures": [
"0x46ebcbcfbee3ece1fd15015204045cf3cb77f42c48d0201fc150341f91f086f177cfca8894ca9b4a0c55d6950218e4ea8c01d5c4aefb85cd7264b47bd57d224400"
]
}
}
],
"id": "2oJCbb8pfdxEHAf9A8CdN4Afj9VSR3xzyzNkf8tDv7aM1sfNFL"
},
"encoding": "json"
},
"id": 1
}
```
Where:
* `credentials` is a list of this transaction's credentials. Each credential proves that this
transaction's creator is allowed to consume one of this transaction's inputs. Each credential is a
list of signatures.
* `unsignedTx` is the non-signature portion of the transaction.
* `networkID` is the ID of the network this transaction happened on. (Avalanche Mainnet is `1`.)
* `blockchainID` is the ID of the blockchain this transaction happened on. (Avalanche Mainnet
X-Chain is `2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM`.)
* Each element of `outputs` is an output (UTXO) of this transaction that is not being exported to
another chain.
* Each element of `inputs` is an input of this transaction which has not been imported from another
chain.
* Import Transactions have additional fields `sourceChain` and `importedInputs`, which specify the
blockchain ID that assets are being imported from, and the inputs that are being imported.
* Export Transactions have additional fields `destinationChain` and `exportedOutputs`, which specify
the blockchain ID that assets are being exported to, and the UTXOs that are being exported.
An output contains:
* `assetID`: The ID of the asset being transferred. (The Mainnet Avax ID is
`FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z`.)
* `fxID`: The ID of the FX this output uses.
* `output`: The FX-specific contents of this output.
Most outputs use the secp256k1 FX and look like this:
```json
{
"assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
"fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
"output": {
"addresses": ["X-avax126rd3w35xwkmj8670zvf7y5r8k36qa9z9803wm"],
"amount": 1530084210,
"locktime": 0,
"threshold": 1
}
}
```
The above output can be consumed after Unix time `locktime` by a transaction that has signatures
from `threshold` of the addresses in `addresses`.
### `avm.getTxFee`
Get the fees of the network.
**Signature**:
```
avm.getTxFee() ->
{
txFee: uint64,
createAssetTxFee: uint64,
}
```
* `txFee` is the default fee for making transactions.
* `createAssetTxFee` is the fee for creating a new asset.
All fees are denominated in nAVAX.
**Example Call**:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" : 1,
"method" :"avm.getTxFee",
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
**Example Response**:
```json
{
"jsonrpc": "2.0",
"result": {
"txFee": "1000000",
"createAssetTxFee": "10000000"
}
}
```
### `avm.getTxStatus`
Deprecated as of **v1.10.0**.
Get the status of a transaction sent to the network.
**Signature:**
```sh
avm.getTxStatus({txID: string}) -> {status: string}
```
`status` is one of:
* `Accepted`: The transaction is (or will be) accepted by every node
* `Processing`: The transaction is being voted on by this node
* `Rejected`: The transaction will never be accepted by any node in the network
* `Unknown`: The transaction hasn’t been seen by this node
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avm.getTxStatus",
"params" :{
"txID":"2QouvFWUbjuySRxeX5xMbNCuAaKWfbk5FeEa2JmoF85RKLk2dD"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"status": "Accepted"
}
}
```
### `avm.getUTXOs`
Gets the UTXOs that reference a given address. If `sourceChain` is specified, then it will retrieve
the atomic UTXOs exported from that chain to the X-Chain.
**Signature:**
```sh
avm.getUTXOs({
addresses: []string,
limit: int, //optional
startIndex: { //optional
address: string,
utxo: string
},
sourceChain: string, //optional
encoding: string //optional
}) -> {
numFetched: int,
utxos: []string,
endIndex: {
address: string,
utxo: string
},
sourceChain: string, //optional
encoding: string
}
```
* `utxos` is a list of UTXOs such that each UTXO references at least one address in `addresses`.
* At most `limit` UTXOs are returned. If `limit` is omitted or greater than 1024, it is set to 1024.
* This method supports pagination. `endIndex` denotes the last UTXO returned. To get the next set of
UTXOs, use the value of `endIndex` as `startIndex` in the next call.
* If `startIndex` is omitted, fetching starts from the first UTXO and returns up to `limit` UTXOs.
* When using pagination (when `startIndex` is provided), UTXOs are not guaranteed to be unique
across multiple calls. That is, a UTXO may appear in the result of the first call, and then again
in the second call.
* When using pagination, consistency is not guaranteed across multiple calls. That is, the UTXO set
of the addresses may have changed between calls.
* `encoding` sets the format for the returned UTXOs. Can only be `hex` when a value is provided.
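The pagination flow described above can be sketched as a small loop. This is an illustrative Python helper, not part of AvalancheGo: `fetch_all_utxos` and the injected `call` function are hypothetical names, with `call` standing in for whatever issues a single `avm.getUTXOs` request and returns its `result` object.

```python
def fetch_all_utxos(call, addresses, limit=1024):
    """Collect every UTXO for `addresses` by paging with startIndex/endIndex.

    `call` is a placeholder for a function that issues one avm.getUTXOs
    request and returns the JSON-RPC `result` object.
    """
    utxos, start_index = [], None
    while True:
        params = {"addresses": addresses, "limit": limit, "encoding": "hex"}
        if start_index is not None:
            params["startIndex"] = start_index
        result = call(params)
        utxos.extend(result["utxos"])
        # numFetched < limit means the final page has been reached
        if int(result["numFetched"]) < limit:
            # UTXOs are not guaranteed unique across pages, so deduplicate
            # while preserving order
            return list(dict.fromkeys(utxos))
        start_index = result["endIndex"]
```

Note the final deduplication: as stated above, a UTXO may appear on more than one page when the set changes between calls.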
#### **Example**
Suppose we want all UTXOs that reference at least one of
`X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5` and `X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6`.
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avm.getUTXOs",
"params" :{
"addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6"],
"limit":5,
"encoding": "hex"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
This gives response:
```json
{
"jsonrpc": "2.0",
"result": {
"numFetched": "5",
"utxos": [
"0x0000a195046108a85e60f7a864bb567745a37f50c6af282103e47cc62f036cee404700000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c1f01765",
"0x0000ae8b1b94444eed8de9a81b1222f00f1b4133330add23d8ac288bffa98b85271100000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216473d042a",
"0x0000731ce04b1feefa9f4291d869adc30a33463f315491e164d89be7d6d2d7890cfc00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21600dd3047",
"0x0000b462030cc4734f24c0bc224cf0d16ee452ea6b67615517caffead123ab4fbf1500000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c71b387e",
"0x000054f6826c39bc957c0c6d44b70f961a994898999179cc32d21eb09c1908d7167b00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f2166290e79d"
],
"endIndex": {
"address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
"utxo": "kbUThAUfmBXUmRgTpgD6r3nLj7rJUGho6xyht5nouNNypH45j"
},
"encoding": "hex"
},
"id": 1
}
```
Since `numFetched` is the same as `limit`, we can tell that there may be more UTXOs that were not
fetched. We call the method again, this time with `startIndex`:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :2,
"method" :"avm.getUTXOs",
"params" :{
"addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"],
"limit":5,
"startIndex": {
"address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
"utxo": "kbUThAUfmBXUmRgTpgD6r3nLj7rJUGho6xyht5nouNNypH45j"
},
"encoding": "hex"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
This gives response:
```json
{
"jsonrpc": "2.0",
"result": {
"numFetched": "4",
"utxos": [
"0x000020e182dd51ee4dcd31909fddd75bb3438d9431f8e4efce86a88a684f5c7fa09300000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21662861d59",
"0x0000a71ba36c475c18eb65dc90f6e85c4fd4a462d51c5de3ac2cbddf47db4d99284e00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21665f6f83f",
"0x0000925424f61cb13e0fbdecc66e1270de68de9667b85baa3fdc84741d048daa69fa00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216afecf76a",
"0x000082f30327514f819da6009fad92b5dba24d27db01e29ad7541aa8e6b6b554615c00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216779c2d59"
],
"endIndex": {
"address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
"utxo": "21jG2RfqyHUUgkTLe2tUp6ETGLriSDTW3th8JXFbPRNiSZ11jK"
},
"encoding": "hex"
},
"id": 1
}
```
Since `numFetched` is less than `limit`, we know that we are done fetching UTXOs and don’t need to
call this method again.
Suppose we want to fetch the UTXOs exported from the P-Chain to the X-Chain in order to build an
ImportTx. Then we need to call `avm.getUTXOs` with the `sourceChain` argument in order to retrieve
the atomic UTXOs:
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" :1,
"method" :"avm.getUTXOs",
"params" :{
"addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6"],
"limit":5,
"sourceChain": "P",
"encoding": "hex"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
This gives response:
```json
{
"jsonrpc": "2.0",
"result": {
"numFetched": "1",
"utxos": [
"0x00001f989ffaf18a18a59bdfbf209342aa61c6a62a67e8639d02bb3c8ddab315c6fa0000000039c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d550088000000070011c304cd7eb5c0000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c83497819"
],
"endIndex": {
"address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
"utxo": "2Sz2XwRYqUHwPeiKoRnZ6ht88YqzAF1SQjMYZQQaB5wBFkAqST"
},
"encoding": "hex"
},
"id": 1
}
```
### `avm.issueTx`
Send a signed transaction to the network. `encoding` specifies the format of the signed transaction.
Can only be `hex` when a value is provided.
**Signature:**
```sh
avm.issueTx({
tx: string,
encoding: string, //optional
}) -> {
txID: string
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" : 1,
"method" :"avm.issueTx",
"params" :{
"tx":"0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0000000005f041280000000005f9ca900000030390000000000000001fceda8f90fcb5d30614b99d79fc4baa29307762668f16eb0259a57c2d3b78c875c86ec2045792d4df2d926c40f829196e0bb97ee697af71f5b0a966dabff749634c8b729855e937715b0e44303fd1014daedc752006011b730",
"encoding": "hex"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"txID": "NUPLwbt2hsYxpQg4H2o451hmTWQ4JZx2zMzM4SinwtHgAdX1JLPHXvWSXEnpecStLj"
}
}
```
### `wallet.issueTx`
Send a signed transaction to the network and assume the TX will be accepted. `encoding` specifies
the format of the signed transaction. Can only be `hex` when a value is provided.
This call is made to the wallet API endpoint:
`/ext/bc/X/wallet`
:::caution
Endpoint deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).
:::
**Signature:**
```sh
wallet.issueTx({
tx: string,
encoding: string, //optional
}) -> {
txID: string
}
```
**Example Call:**
```sh
curl -X POST --data '{
"jsonrpc":"2.0",
"id" : 1,
"method" :"wallet.issueTx",
"params" :{
"tx":"0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0000000005f041280000000005f9ca900000030390000000000000001fceda8f90fcb5d30614b99d79fc4baa29307762668f16eb0259a57c2d3b78c875c86ec2045792d4df2d926c40f829196e0bb97ee697af71f5b0a966dabff749634c8b729855e937715b0e44303fd1014daedc752006011b730",
"encoding": "hex"
}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X/wallet
```
**Example Response:**
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"txID": "NUPLwbt2hsYxpQg4H2o451hmTWQ4JZx2zMzM4SinwtHgAdX1JLPHXvWSXEnpecStLj"
}
}
```
# Transaction Format
URL: /docs/api-reference/x-chain/txn-format
This file is meant to be the single source of truth for how we serialize
transactions in the Avalanche Virtual Machine (AVM). This document uses the
[primitive serialization](/docs/api-reference/standards/serialization-primitives) format for packing and
[secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) for cryptographic
user identification.
## Codec ID
Some data is prepended with a codec ID (a `uint16`) that denotes how the data should
be deserialized. Right now, the only valid codec ID is 0 (`0x00 0x00`).
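As a quick sanity check, the codec ID prefix can be produced in Python (an illustrative sketch, not AvalancheGo code):

```python
import struct

# The codec ID is a big-endian uint16 prepended to the serialized data;
# the only valid value today is 0, which serializes to two zero bytes.
codec_prefix = struct.pack(">H", 0)
```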
## Transferable Output
Transferable outputs wrap an output with an asset ID.
### What Transferable Output Contains
A transferable output contains an `AssetID` and an [`Output`](/docs/api-reference/x-chain/txn-format#outputs).
* **`AssetID`** is a 32-byte array that defines which asset this output references.
* **`Output`** is an output, as defined
[below](/docs/api-reference/x-chain/txn-format#outputs). Outputs have four possible types:
[`SECP256K1TransferOutput`](/docs/api-reference/x-chain/txn-format#secp256k1-transfer-output),
[`SECP256K1MintOutput`](/docs/api-reference/x-chain/txn-format#secp256k1-mint-output),
[`NFTTransferOutput`](/docs/api-reference/x-chain/txn-format#nft-transfer-output)
and [`NFTMintOutput`](/docs/api-reference/x-chain/txn-format#nft-mint-output).
### Gantt Transferable Output Specification
```text
+----------+----------+-------------------------+
| asset_id : [32]byte | 32 bytes |
+----------+----------+-------------------------+
| output : Output | size(output) bytes |
+----------+----------+-------------------------+
| 32 + size(output) bytes |
+-------------------------+
```
### Proto Transferable Output Specification
```text
message TransferableOutput {
bytes asset_id = 1; // 32 bytes
Output output = 2; // size(output)
}
```
### Transferable Output Example
Let's make a transferable output:
* `AssetID`: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f`
* `Output`: `"Example SECP256K1 Transfer Output from below"`
```text
[
AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
Output <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859,
]
=
[
// assetID:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// output:
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
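Since a transferable output is just the 32-byte asset ID followed by the output bytes, the example above can be reproduced by simple concatenation. This Python sketch is illustrative only; the hex strings are the values from the example:

```python
asset_id = bytes.fromhex(
    "000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f"
)
output = bytes.fromhex(
    "000000070000000000003039000000000000d4310000000100000002"
    "51025c61fbcfc078f69334f834be6dd26d55a955"
    "c3344128e060128ede3523a24a461c8943ab0859"
)
# A transferable output is the 32-byte asset ID followed by the output.
transferable_output = asset_id + output
```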
## Transferable Input
Transferable inputs describe a specific UTXO with a provided transfer input.
### What Transferable Input Contains
A transferable input contains a `TxID`, `UTXOIndex`, `AssetID`, and an `Input`.
* **`TxID`** is a 32-byte array that defines which transaction this input is
consuming an output from. Transaction IDs are calculated by taking the SHA-256 hash of
the bytes of the signed transaction.
* **`UTXOIndex`** is an int that defines which UTXO this input is consuming in the specified transaction.
* **`AssetID`** is a 32-byte array that defines which asset this input references.
* **`Input`** is an input, as defined below. This can currently only be a [SECP256K1 transfer input](/docs/api-reference/x-chain/txn-format#secp256k1-transfer-input)
### Gantt Transferable Input Specification
```text
+------------+----------+------------------------+
| tx_id : [32]byte | 32 bytes |
+------------+----------+------------------------+
| utxo_index : int | 04 bytes |
+------------+----------+------------------------+
| asset_id : [32]byte | 32 bytes |
+------------+----------+------------------------+
| input : Input | size(input) bytes |
+------------+----------+------------------------+
| 68 + size(input) bytes |
+------------------------+
```
### Proto Transferable Input Specification
```text
message TransferableInput {
bytes tx_id = 1; // 32 bytes
uint32 utxo_index = 2; // 04 bytes
bytes asset_id = 3; // 32 bytes
Input input = 4; // size(input)
}
```
### Transferable Input Example
Let's make a transferable input:
* `TxID`: `0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000`
* `UTXOIndex`: `5`
* `AssetID`: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f`
* `Input`: `"Example SECP256K1 Transfer Input from below"`
```text
[
TxID <- 0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000
UTXOIndex <- 0x00000005
AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
Input <- 0x0000000500000000075bcd15000000020000000300000007
]
=
[
// txID:
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
// utxoIndex:
0x00, 0x00, 0x00, 0x05,
// assetID:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// input:
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02,
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07
]
```
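The same concatenation works for the transferable input above; the 4-byte `UTXOIndex` is the only field that needs explicit packing. An illustrative Python sketch using the example's values:

```python
import struct

tx_id = bytes.fromhex(
    "f1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000"
)
asset_id = bytes.fromhex(
    "000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f"
)
# The SECP256K1 transfer input from the example, address indices sorted low to high.
tx_input = bytes.fromhex("0000000500000000075bcd15000000020000000300000007")
# tx_id (32 bytes) + utxo_index (4 bytes) + asset_id (32 bytes) + input
transferable_input = tx_id + struct.pack(">I", 5) + asset_id + tx_input
```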
## Transferable Op
Transferable operations describe a set of UTXOs with a provided transfer
operation. Only one asset ID can be referenced per operation.
### What Transferable Op Contains
A transferable operation contains an `AssetID`, `UTXOIDs`, and a `TransferOp`.
* **`AssetID`** is a 32-byte array that defines which asset this operation changes.
* **`UTXOIDs`** is an array of TxID-OutputIndex tuples. This array must be
sorted in lexicographical order.
* **`TransferOp`** is a [transferable operation object](/docs/api-reference/x-chain/txn-format#operations).
### Gantt Transferable Op Specification
```text
+-------------+------------+------------------------------+
| asset_id : [32]byte | 32 bytes |
+-------------+------------+------------------------------+
| utxo_ids : []UTXOID | 4 + 36 * len(utxo_ids) bytes |
+-------------+------------+------------------------------+
| transfer_op : TransferOp | size(transfer_op) bytes |
+-------------+------------+------------------------------+
| 36 + 36 * len(utxo_ids) |
| + size(transfer_op) bytes |
+------------------------------+
```
### Proto Transferable Op Specification
```text
message UTXOID {
bytes tx_id = 1; // 32 bytes
uint32 utxo_index = 2; // 04 bytes
}
message TransferableOp {
bytes asset_id = 1; // 32 bytes
repeated UTXOID utxo_ids = 2; // 4 + 36 * len(utxo_ids) bytes
TransferOp transfer_op = 3; // size(transfer_op)
}
```
### Transferable Op Example
Let's make a transferable operation:
* `AssetID`: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f`
* `UTXOIDs`:
* `UTXOID`:
* `TxID`: `0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000`
* `UTXOIndex`: `5`
* `Op`: `"Example Transfer Op from below"`
```text
[
AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
UTXOIDs <- [
{
TxID:0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000
UTXOIndex:5
}
]
Op <- 0x0000000d0000000200000003000000070000303900000003431100000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859
]
=
[
// assetID:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// number of utxoIDs:
0x00, 0x00, 0x00, 0x01,
// txID:
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
// utxoIndex:
0x00, 0x00, 0x00, 0x05,
// op:
0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x00, 0x02,
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x03,
0x43, 0x11, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00,
0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb,
0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34,
0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3,
0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde,
0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43,
0xab, 0x08, 0x59,
]
```
## Outputs
Outputs have four possible types:
[`SECP256K1TransferOutput`](/docs/api-reference/x-chain/txn-format#secp256k1-transfer-output),
[`SECP256K1MintOutput`](/docs/api-reference/x-chain/txn-format#secp256k1-mint-output),
[`NFTTransferOutput`](/docs/api-reference/x-chain/txn-format#nft-transfer-output) and
[`NFTMintOutput`](/docs/api-reference/x-chain/txn-format#nft-mint-output).
## SECP256K1 Mint Output
A [secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) mint output is
an output that is owned by a collection of addresses.
### What SECP256K1 Mint Output Contains
A secp256k1 Mint output contains a `TypeID`, `Locktime`, `Threshold`, and `Addresses`.
* **`TypeID`** is the ID for this output type. It is `0x00000006`.
* **`Locktime`** is a long that contains the Unix timestamp that this output can
be spent after. The Unix timestamp is specific to the second.
* **`Threshold`** is an int that names the number of unique signatures required
to spend the output. Must be less than or equal to the length of
**`Addresses`**. If **`Addresses`** is empty, must be 0.
* **`Addresses`** is a list of unique addresses that correspond to the private
keys that can be used to spend this output. Addresses must be sorted
lexicographically.
### Gantt SECP256K1 Mint Output Specification
```text
+-----------+------------+--------------------------------+
| type_id : int | 4 bytes |
+-----------+------------+--------------------------------+
| locktime : long | 8 bytes |
+-----------+------------+--------------------------------+
| threshold : int | 4 bytes |
+-----------+------------+--------------------------------+
| addresses : [][20]byte | 4 + 20 * len(addresses) bytes |
+-----------+------------+--------------------------------+
| 20 + 20 * len(addresses) bytes |
+--------------------------------+
```
### Proto SECP256K1 Mint Output Specification
```text
message SECP256K1MintOutput {
uint32 typeID = 1; // 04 bytes
uint64 locktime = 2; // 08 bytes
uint32 threshold = 3; // 04 bytes
repeated bytes addresses = 4; // 04 bytes + 20 bytes * len(addresses)
}
```
### SECP256K1 Mint Output Example
Let's make a SECP256K1 mint output with:
* **`TypeID`**: `6`
* **`Locktime`**: `54321`
* **`Threshold`**: `1`
* **`Addresses`**:
* `0x51025c61fbcfc078f69334f834be6dd26d55a955`
* `0xc3344128e060128ede3523a24a461c8943ab0859`
```text
[
TypeID <- 0x00000006
Locktime <- 0x000000000000d431
Threshold <- 0x00000001
Addresses <- [
0x51025c61fbcfc078f69334f834be6dd26d55a955,
0xc3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// typeID:
0x00, 0x00, 0x00, 0x06,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x02,
// addrs[0]:
0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78,
0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2,
0x6d, 0x55, 0xa9, 0x55,
// addrs[1]:
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
## SECP256K1 Transfer Output
A [secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) transfer output
allows for sending a quantity of an asset to a collection of addresses after a
specified Unix time.
### What SECP256K1 Transfer Output Contains
A secp256k1 transfer output contains a `TypeID`, `Amount`, `Locktime`, `Threshold`, and `Addresses`.
* **`TypeID`** is the ID for this output type. It is `0x00000007`.
* **`Amount`** is a long that specifies the quantity of the asset that this output owns. Must be positive.
* **`Locktime`** is a long that contains the Unix timestamp that this output can
be spent after. The Unix timestamp is specific to the second.
* **`Threshold`** is an int that names the number of unique signatures required
to spend the output. Must be less than or equal to the length of
**`Addresses`**. If **`Addresses`** is empty, must be 0.
* **`Addresses`** is a list of unique addresses that correspond to the private
keys that can be used to spend this output. Addresses must be sorted
lexicographically.
### Gantt SECP256K1 Transfer Output Specification
```text
+-----------+------------+--------------------------------+
| type_id : int | 4 bytes |
+-----------+------------+--------------------------------+
| amount : long | 8 bytes |
+-----------+------------+--------------------------------+
| locktime : long | 8 bytes |
+-----------+------------+--------------------------------+
| threshold : int | 4 bytes |
+-----------+------------+--------------------------------+
| addresses : [][20]byte | 4 + 20 * len(addresses) bytes |
+-----------+------------+--------------------------------+
| 28 + 20 * len(addresses) bytes |
+--------------------------------+
```
### Proto SECP256K1 Transfer Output Specification
```text
message SECP256K1TransferOutput {
uint32 typeID = 1; // 04 bytes
uint64 amount = 2; // 08 bytes
uint64 locktime = 3; // 08 bytes
uint32 threshold = 4; // 04 bytes
repeated bytes addresses = 5; // 04 bytes + 20 bytes * len(addresses)
}
```
### SECP256K1 Transfer Output Example
Let's make a secp256k1 transfer output with:
* **`TypeID`**: `7`
* **`Amount`**: `12345`
* **`Locktime`**: `54321`
* **`Threshold`**: `1`
* **`Addresses`**:
* `0x51025c61fbcfc078f69334f834be6dd26d55a955`
* `0xc3344128e060128ede3523a24a461c8943ab0859`
```text
[
TypeID <- 0x00000007
Amount <- 0x0000000000003039
Locktime <- 0x000000000000d431
Threshold <- 0x00000001
Addresses <- [
0x51025c61fbcfc078f69334f834be6dd26d55a955,
0xc3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// typeID:
0x00, 0x00, 0x00, 0x07,
// amount:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x02,
// addrs[0]:
0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78,
0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2,
0x6d, 0x55, 0xa9, 0x55,
// addrs[1]:
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
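The byte layout above can be reproduced with Python's `struct` module. `pack_secp256k1_transfer_output` is a hypothetical helper for illustration, not an AvalancheGo API; it packs the fields big-endian in the order given by the Gantt specification:

```python
import struct

def pack_secp256k1_transfer_output(amount, locktime, threshold, addresses):
    # typeID (0x00000007), amount, locktime, threshold, all big-endian
    out = struct.pack(">IQQI", 7, amount, locktime, threshold)
    # length-prefixed address list; addresses must be sorted lexicographically
    out += struct.pack(">I", len(addresses))
    for addr in sorted(addresses):
        out += addr
    return out

packed = pack_secp256k1_transfer_output(
    amount=12345,
    locktime=54321,
    threshold=1,
    addresses=[
        bytes.fromhex("51025c61fbcfc078f69334f834be6dd26d55a955"),
        bytes.fromhex("c3344128e060128ede3523a24a461c8943ab0859"),
    ],
)
```

Running this with the example's values yields exactly the byte string shown above.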
## NFT Mint Output
An NFT mint output is an NFT that is owned by a collection of addresses.
### What NFT Mint Output Contains
An NFT Mint output contains a `TypeID`, `GroupID`, `Locktime`, `Threshold`, and `Addresses`.
* **`TypeID`** is the ID for this output type. It is `0x0000000a`.
* **`GroupID`** is an int that specifies the group this NFT is issued to.
* **`Locktime`** is a long that contains the Unix timestamp that this output can
be spent after. The Unix timestamp is specific to the second.
* **`Threshold`** is an int that names the number of unique signatures required
to spend the output. Must be less than or equal to the length of
**`Addresses`**. If **`Addresses`** is empty, must be 0.
* **`Addresses`** is a list of unique addresses that correspond to the private
keys that can be used to spend this output. Addresses must be sorted
lexicographically.
### Gantt NFT Mint Output Specification
```text
+-----------+------------+--------------------------------+
| type_id : int | 4 bytes |
+-----------+------------+--------------------------------+
| group_id : int | 4 bytes |
+-----------+------------+--------------------------------+
| locktime : long | 8 bytes |
+-----------+------------+--------------------------------+
| threshold : int | 4 bytes |
+-----------+------------+--------------------------------+
| addresses : [][20]byte | 4 + 20 * len(addresses) bytes |
+-----------+------------+--------------------------------+
| 24 + 20 * len(addresses) bytes |
+--------------------------------+
```
### Proto NFT Mint Output Specification
```text
message NFTMintOutput {
uint32 typeID = 1; // 04 bytes
uint32 group_id = 2; // 04 bytes
uint64 locktime = 3; // 08 bytes
uint32 threshold = 4; // 04 bytes
repeated bytes addresses = 5; // 04 bytes + 20 bytes * len(addresses)
}
```
### NFT Mint Output Example
Let's make an NFT mint output with:
* **`TypeID`**: `10`
* **`GroupID`**: `12345`
* **`Locktime`**: `54321`
* **`Threshold`**: `1`
* **`Addresses`**:
* `0x51025c61fbcfc078f69334f834be6dd26d55a955`
* `0xc3344128e060128ede3523a24a461c8943ab0859`
```text
[
TypeID <- 0x0000000a
GroupID <- 0x00003039
Locktime <- 0x000000000000d431
Threshold <- 0x00000001
Addresses <- [
0x51025c61fbcfc078f69334f834be6dd26d55a955,
0xc3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// TypeID
0x00, 0x00, 0x00, 0x0a,
// groupID:
0x00, 0x00, 0x30, 0x39,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x02,
// addrs[0]:
0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78,
0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2,
0x6d, 0x55, 0xa9, 0x55,
// addrs[1]:
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
## NFT Transfer Output
An NFT transfer output is an NFT that is owned by a collection of addresses.
### What NFT Transfer Output Contains
An NFT transfer output contains a `TypeID`, `GroupID`, `Payload`, `Locktime`, `Threshold`, and `Addresses`.
* **`TypeID`** is the ID for this output type. It is `0x0000000b`.
* **`GroupID`** is an int that specifies the group this NFT was issued with.
* **`Payload`** is an arbitrary string of bytes no longer than 1024 bytes.
* **`Locktime`** is a long that contains the Unix timestamp that this output can
be spent after. The Unix timestamp is specific to the second.
* **`Threshold`** is an int that names the number of unique signatures required
to spend the output. Must be less than or equal to the length of
**`Addresses`**. If **`Addresses`** is empty, must be 0.
* **`Addresses`** is a list of unique addresses that correspond to the private
keys that can be used to spend this output. Addresses must be sorted
lexicographically.
### Gantt NFT Transfer Output Specification
```text
+-----------+------------+-------------------------------+
| type_id : int | 4 bytes |
+-----------+------------+-------------------------------+
| group_id : int | 4 bytes |
+-----------+------------+-------------------------------+
| payload : []byte | 4 + len(payload) bytes |
+-----------+------------+-------------------------------+
| locktime : long | 8 bytes |
+-----------+------------+-------------------------------+
| threshold : int | 4 bytes |
+-----------+------------+-------------------------------+
| addresses : [][20]byte | 4 + 20 * len(addresses) bytes |
+-----------+------------+-------------------------------+
| 28 + len(payload) |
| + 20 * len(addresses) bytes |
+-------------------------------+
```
### Proto NFT Transfer Output Specification
```text
message NFTTransferOutput {
uint32 typeID = 1; // 04 bytes
uint32 group_id = 2; // 04 bytes
bytes payload = 3; // 04 bytes + len(payload)
uint64 locktime = 4 // 08 bytes
uint32 threshold = 5; // 04 bytes
repeated bytes addresses = 6; // 04 bytes + 20 bytes * len(addresses)
}
```
### NFT Transfer Output Example
Let's make an NFT transfer output with:
* **`TypeID`**: `11`
* **`GroupID`**: `12345`
* **`Payload`**: `NFT Payload`
* **`Locktime`**: `54321`
* **`Threshold`**: `1`
* **`Addresses`**:
* `0x51025c61fbcfc078f69334f834be6dd26d55a955`
* `0xc3344128e060128ede3523a24a461c8943ab0859`
```text
[
TypeID <- 0x0000000b
GroupID <- 0x00003039
Payload <- 0x4e4654205061796c6f6164
Locktime <- 0x000000000000d431
Threshold <- 0x00000001
Addresses <- [
0x51025c61fbcfc078f69334f834be6dd26d55a955,
0xc3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// TypeID:
0x00, 0x00, 0x00, 0x0b,
// groupID:
0x00, 0x00, 0x30, 0x39,
// length of payload:
0x00, 0x00, 0x00, 0x0b,
// payload:
0x4e, 0x46, 0x54, 0x20, 0x50, 0x61, 0x79, 0x6c,
0x6f, 0x61, 0x64,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x02,
// addrs[0]:
0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78,
0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2,
0x6d, 0x55, 0xa9, 0x55,
// addrs[1]:
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
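The layout above is plain big-endian packing with 4-byte length prefixes on the variable-length fields. As a minimal sketch (the helper name is illustrative, not from AvalancheGo), Python's `struct` module can reproduce the worked example exactly:

```python
import struct

def pack_nft_transfer_output(group_id: int, payload: bytes,
                             locktime: int, threshold: int,
                             addresses: list[bytes]) -> bytes:
    """Serialize an NFT transfer output (type ID 0x0000000b).

    All integers are big-endian; payload and the address array carry
    4-byte length prefixes, and each address is 20 bytes.
    """
    out = struct.pack(">II", 0x0000000B, group_id)    # type_id, group_id
    out += struct.pack(">I", len(payload)) + payload  # length-prefixed payload
    out += struct.pack(">QI", locktime, threshold)    # locktime (long), threshold
    out += struct.pack(">I", len(addresses))          # number of addresses
    for addr in sorted(addresses):                    # lexicographic order
        out += addr
    return out
```

Packing the example values (`GroupID` 12345, payload `NFT Payload`, locktime 54321, threshold 1, the two addresses) yields the same 79-byte string shown above, matching the Gantt total of 28 + len(payload) + 20 × len(addresses).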
## Inputs
Inputs have one possible type: `SECP256K1TransferInput`.
## SECP256K1 Transfer Input
A [secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) transfer input
allows for spending an unspent secp256k1 transfer output.
### What SECP256K1 Transfer Input Contains
A secp256k1 transfer input contains an `Amount` and `AddressIndices`.
* **`TypeID`** is the ID for this input type. It is `0x00000005`.
* **`Amount`** is a long that specifies the quantity that this input should be
consuming from the UTXO. Must be positive. Must be equal to the amount
specified in the UTXO.
* **`AddressIndices`** is a list of unique ints that define the private keys
that are being used to spend the UTXO. Each UTXO has an array of addresses
that can spend the UTXO. Each int represents the index in this address array
that will sign this transaction. The array must be sorted low to high.
### Gantt SECP256K1 Transfer Input Specification
```text
+-------------------------+-------------------------------------+
| type_id : int | 4 bytes |
+-----------------+-------+-------------------------------------+
| amount : long | 8 bytes |
+-----------------+-------+-------------------------------------+
| address_indices : []int | 4 + 4 * len(address_indices) bytes |
+-----------------+-------+-------------------------------------+
| 16 + 4 * len(address_indices) bytes |
+-------------------------------------+
```
### Proto SECP256K1 Transfer Input Specification
```text
message SECP256K1TransferInput {
uint32 typeID = 1; // 04 bytes
uint64 amount = 2; // 08 bytes
repeated uint32 address_indices = 3; // 04 bytes + 04 bytes * len(address_indices)
}
```
### SECP256K1 Transfer Input Example
Let's make a transfer input with:
* **`TypeID`**: `5`
* **`Amount`**: `123456789`
* **`AddressIndices`**: \[`3`,`7`]
```text
[
TypeID <- 0x00000005
Amount <- 123456789 = 0x00000000075bcd15,
AddressIndices <- [0x00000003, 0x00000007]
]
=
[
// type id:
0x00, 0x00, 0x00, 0x05,
// amount:
0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15,
// number of address indices:
0x00, 0x00, 0x00, 0x02,
// address_indices[0]:
0x00, 0x00, 0x00, 0x03,
// address_indices[1]:
0x00, 0x00, 0x00, 0x07,
]
```
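The same packing approach covers the transfer input; a short sketch (helper name is illustrative) that also enforces the low-to-high ordering of `AddressIndices`:

```python
import struct

def pack_secp_transfer_input(amount: int, address_indices: list[int]) -> bytes:
    """Serialize a SECP256K1 transfer input (type ID 0x00000005)."""
    out = struct.pack(">IQ", 0x00000005, amount)    # type_id, amount (long)
    out += struct.pack(">I", len(address_indices))  # number of indices
    for idx in sorted(address_indices):             # must be sorted low to high
        out += struct.pack(">I", idx)
    return out
```

With the example values (amount 123456789, indices 3 and 7) this reproduces the 24-byte string above: 16 fixed bytes plus 4 per index.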
## Operations
Operations have three possible types: `SECP256K1MintOperation`, `NFTMintOp`, and `NFTTransferOp`.
## SECP256K1 Mint Operation
A [secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) mint operation
consumes a SECP256K1 mint output, creates a new mint output and sends a transfer
output to a new set of owners.
### What SECP256K1 Mint Operation Contains
A secp256k1 Mint operation contains a `TypeID`, `AddressIndices`, `MintOutput`, and `TransferOutput`.
* **`TypeID`** is the ID for this operation type. It is `0x00000008`.
* **`AddressIndices`** is a list of unique ints that define the private keys
that are being used to spend the
[UTXO](/docs/api-reference/x-chain/txn-format#utxo). Each UTXO has an array of
addresses that can spend the UTXO. Each int represents the index in this
address array that will sign this transaction. The array must be sorted low to
high.
* **`MintOutput`** is a [SECP256K1 Mint output](/docs/api-reference/x-chain/txn-format#secp256k1-mint-output).
* **`TransferOutput`** is a [SECP256K1 Transfer output](/docs/api-reference/x-chain/txn-format#secp256k1-transfer-output).
### Gantt SECP256K1 Mint Operation Specification
```text
+----------------------------------+------------------------------------+
| type_id : int | 4 bytes |
+----------------------------------+------------------------------------+
| address_indices : []int | 4 + 4 * len(address_indices) bytes |
+----------------------------------+------------------------------------+
| mint_output : MintOutput | size(mint_output) bytes |
+----------------------------------+------------------------------------+
| transfer_output : TransferOutput | size(transfer_output) bytes |
+----------------------------------+------------------------------------+
| 8 + 4 * len(address_indices) |
| + size(mint_output) |
| + size(transfer_output) bytes |
+------------------------------------+
```
### Proto SECP256K1 Mint Operation Specification
```text
message SECP256K1MintOperation {
uint32 typeID = 1; // 4 bytes
repeated uint32 address_indices = 2; // 04 bytes + 04 bytes * len(address_indices)
MintOutput mint_output = 3; // size(mint_output)
TransferOutput transfer_output = 4; // size(transfer_output)
}
```
### SECP256K1 Mint Operation Example
Let's make a [secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) mint
operation with:
* **`TypeID`**: `8`
* **`AddressIndices`**:
* `0x00000003`
* `0x00000007`
* **`MintOutput`**: `"Example SECP256K1 Mint Output from above"`
* **`TransferOutput`**: `"Example SECP256K1 Transfer Output from above"`
```text
[
TypeID <- 0x00000008
AddressIndices <- [0x00000003, 0x00000007]
MintOutput <- 0x00000006000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859
TransferOutput <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859
]
=
[
// typeID
0x00, 0x00, 0x00, 0x08,
// number of address_indices:
0x00, 0x00, 0x00, 0x02,
// address_indices[0]:
0x00, 0x00, 0x00, 0x03,
// address_indices[1]:
0x00, 0x00, 0x00, 0x07,
// mint output
0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
// transfer output
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
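Since the mint output and transfer output are embedded verbatim, the operation is just a header (type ID plus the sorted index list) prepended to the two already-serialized outputs. A minimal sketch, assuming the output blobs are serialized elsewhere (helper name is illustrative):

```python
import struct

def pack_secp_mint_operation(address_indices: list[int],
                             mint_output: bytes,
                             transfer_output: bytes) -> bytes:
    """Serialize a SECP256K1 mint operation (type ID 0x00000008)
    by prepending the header to the two serialized outputs."""
    out = struct.pack(">I", 0x00000008)
    out += struct.pack(">I", len(address_indices))
    for idx in sorted(address_indices):  # sorted low to high
        out += struct.pack(">I", idx)
    return out + mint_output + transfer_output
```

Applied to the example blobs above, this yields the 16-byte header followed by the mint output and transfer output bytes unchanged.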
## NFT Mint Op
An NFT mint operation consumes an NFT mint output and sends an unspent output to a new set of owners.
### What NFT Mint Op Contains
An NFT mint operation contains a `TypeID`, `AddressIndices`, `GroupID`, `Payload`, and `Output` of addresses.
* **`TypeID`** is the ID for this operation type. It is `0x0000000c`.
* **`AddressIndices`** is a list of unique ints that define the private keys
that are being used to spend the UTXO. Each UTXO has an array of addresses
that can spend the UTXO. Each int represents the index in this address array
that will sign this transaction. The array must be sorted low to high.
* **`GroupID`** is an int that specifies the group this NFT is issued to.
* **`Payload`** is an arbitrary string of bytes no longer than 1024 bytes.
* **`Output`** is not a `TransferableOutput`, but rather a locktime, a
threshold, and an array of unique addresses that correspond to the private
keys that can be used to spend this output. Addresses must be sorted
lexicographically.
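"Sorted lexicographically" here means a plain byte-wise comparison of the 20-byte addresses. In Python, comparing `bytes` objects already does exactly that, so a sketch of the required ordering is one call:

```python
addresses = [
    bytes.fromhex("c3344128e060128ede3523a24a461c8943ab0859"),
    bytes.fromhex("51025c61fbcfc078f69334f834be6dd26d55a955"),
]
# Python compares bytes objects lexicographically, byte by byte,
# which matches the ordering this format requires for address arrays.
addresses.sort()
```

After sorting, the address starting with `0x51…` precedes the one starting with `0xc3…`, matching the order used in the examples throughout this page.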
### Gantt NFT Mint Op Specification
```text
+------------------------------+------------------------------------+
| type_id : int | 4 bytes |
+-----------------+------------+------------------------------------+
| address_indices : []int | 4 + 4 * len(address_indices) bytes |
+-----------------+------------+------------------------------------+
| group_id : int | 4 bytes |
+-----------------+------------+------------------------------------+
| payload : []byte | 4 + len(payload) bytes |
+-----------------+------------+------------------------------------+
| outputs : []Output | 4 + size(outputs) bytes |
+-----------------+------------+------------------------------------+
| 20 + |
| 4 * len(address_indices) + |
| len(payload) + |
| size(outputs) bytes |
+------------------------------------+
```
### Proto NFT Mint Op Specification
```text
message NFTMintOp {
uint32 typeID = 1; // 04 bytes
repeated uint32 address_indices = 2; // 04 bytes + 04 bytes * len(address_indices)
uint32 group_id = 3; // 04 bytes
bytes payload = 4; // 04 bytes + len(payload)
repeated bytes outputs = 5; // 04 bytes + size(outputs)
}
```
### NFT Mint Op Example
Let's make an NFT mint operation with:
* **`TypeID`**: `12`
* **`AddressIndices`**:
* `0x00000003`
* `0x00000007`
* **`GroupID`**: `12345`
* **`Payload`**: `0x431100`
* **`Locktime`**: `54321`
* **`Threshold`**: `1`
* **`Addresses`**:
* `0xc3344128e060128ede3523a24a461c8943ab0859`
```text
[
TypeID <- 0x0000000c
AddressIndices <- [
0x00000003,
0x00000007,
]
GroupID <- 0x00003039
Payload <- 0x431100
Locktime <- 0x000000000000d431
Threshold <- 0x00000001
Addresses <- [
0xc3344128e060128ede3523a24a461c8943ab0859
]
]
=
[
// Type ID
0x00, 0x00, 0x00, 0x0c,
// number of address indices:
0x00, 0x00, 0x00, 0x02,
// address index 0:
0x00, 0x00, 0x00, 0x03,
// address index 1:
0x00, 0x00, 0x00, 0x07,
// groupID:
0x00, 0x00, 0x30, 0x39,
// length of payload:
0x00, 0x00, 0x00, 0x03,
// payload:
0x43, 0x11, 0x00,
// number of outputs:
0x00, 0x00, 0x00, 0x01,
// outputs[0]
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x01,
// addrs[0]:
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
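The `Output` record an NFT mint op carries (locktime, threshold, addresses, without a type ID) can be sketched the same way (helper name is illustrative):

```python
import struct

def pack_mint_op_output(locktime: int, threshold: int,
                        addresses: list[bytes]) -> bytes:
    """Serialize the (locktime, threshold, addresses) output record
    that an NFT mint op carries in place of a full TransferableOutput."""
    out = struct.pack(">QI", locktime, threshold)  # locktime (long), threshold
    out += struct.pack(">I", len(addresses))       # number of addresses
    for addr in sorted(addresses):                 # lexicographic order
        out += addr
    return out
```

With the example values (locktime 54321, threshold 1, one address) this reproduces the trailing `outputs[0]` bytes shown above.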
## NFT Transfer Op
An NFT transfer operation sends an unspent NFT transfer output to a new set of owners.
### What NFT Transfer Op Contains
An NFT transfer operation contains a `TypeID`, `AddressIndices`, and an untyped `NFTTransferOutput`.
* **`TypeID`** is the ID for this operation type. It is `0x0000000d`.
* **`AddressIndices`** is a list of unique ints that define the private keys
that are being used to spend the UTXO. Each UTXO has an array of addresses
that can spend the UTXO. Each int represents the index in this address array
that will sign this transaction. The array must be sorted low to high.
* **`NFTTransferOutput`** is the output of this operation and must be an [NFT Transfer Output](/docs/api-reference/x-chain/txn-format#nft-transfer-output). This
output doesn't have the **`TypeID`**, because the type is known by the context
of being in this operation.
### Gantt NFT Transfer Op Specification
```text
+------------------------------+------------------------------------+
| type_id : int | 4 bytes |
+-----------------+------------+------------------------------------+
| address_indices : []int | 4 + 4 * len(address_indices) bytes |
+-----------------+------------+------------------------------------+
| group_id : int | 4 bytes |
+-----------------+------------+------------------------------------+
| payload : []byte | 4 + len(payload) bytes |
+-----------------+------------+------------------------------------+
| locktime : long | 8 bytes |
+-----------------+------------+------------------------------------+
| threshold : int | 4 bytes |
+-----------------+------------+------------------------------------+
| addresses : [][20]byte | 4 + 20 * len(addresses) bytes |
+-----------------+------------+------------------------------------+
| 32 + len(payload) |
| + 4 * len(address_indices) |
| + 20 * len(addresses) bytes |
+------------------------------------+
```
### Proto NFT Transfer Op Specification
```text
message NFTTransferOp {
uint32 typeID = 1; // 04 bytes
repeated uint32 address_indices = 2; // 04 bytes + 04 bytes * len(address_indices)
uint32 group_id = 3; // 04 bytes
bytes payload = 4; // 04 bytes + len(payload)
uint64 locktime = 5; // 08 bytes
uint32 threshold = 6; // 04 bytes
repeated bytes addresses = 7; // 04 bytes + 20 bytes * len(addresses)
}
```
### NFT Transfer Op Example
Let's make an NFT transfer operation with:
* **`TypeID`**: `13`
* **`AddressIndices`**:
* `0x00000007`
* `0x00000003`
* **`GroupID`**: `12345`
* **`Payload`**: `0x431100`
* **`Locktime`**: `54321`
* **`Threshold`**: `1`
* **`Addresses`**:
* `0xc3344128e060128ede3523a24a461c8943ab0859`
* `0x51025c61fbcfc078f69334f834be6dd26d55a955`
```text
[
TypeID <- 0x0000000d
AddressIndices <- [
0x00000007,
0x00000003,
]
GroupID <- 0x00003039
Payload <- 0x431100
Locktime <- 0x000000000000d431
Threshold <- 0x00000001
Addresses <- [
0x51025c61fbcfc078f69334f834be6dd26d55a955,
0xc3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// Type ID
0x00, 0x00, 0x00, 0x0d,
// number of address indices:
0x00, 0x00, 0x00, 0x02,
// address index 0:
0x00, 0x00, 0x00, 0x07,
// address index 1:
0x00, 0x00, 0x00, 0x03,
// groupID:
0x00, 0x00, 0x30, 0x39,
// length of payload:
0x00, 0x00, 0x00, 0x03,
// payload:
0x43, 0x11, 0x00,
// locktime:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
// threshold:
0x00, 0x00, 0x00, 0x01,
// number of addresses:
0x00, 0x00, 0x00, 0x02,
// addrs[0]:
0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78,
0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2,
0x6d, 0x55, 0xa9, 0x55,
// addrs[1]:
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
## Initial State
Initial state describes the initial state of an asset when it is created. It
contains the ID of the feature extension that the asset uses, and a variable
length array of outputs that denote the genesis UTXO set of the asset.
### What Initial State Contains
Initial state contains a `FxID` and an array of `Output`.
* **`FxID`** is an int that defines which feature extension this state is part
of. For SECP256K1 assets, this is `0x00000000`. For NFT assets, this is
`0x00000001`.
* **`Outputs`** is a variable length array of
[outputs](/docs/api-reference/x-chain/txn-format#outputs), as defined above.
### Gantt Initial State Specification
```text
+---------------+----------+-------------------------------+
| fx_id : int | 4 bytes |
+---------------+----------+-------------------------------+
| outputs : []Output | 4 + size(outputs) bytes |
+---------------+----------+-------------------------------+
| 8 + size(outputs) bytes |
+-------------------------------+
```
### Proto Initial State Specification
```text
message InitialState {
uint32 fx_id = 1; // 04 bytes
repeated Output outputs = 2; // 04 + size(outputs) bytes
}
```
### Initial State Example
Let's make an initial state:
* `FxID`: `0x00000000`
* `InitialState`: `["Example SECP256K1 Transfer Output from above"]`
```text
[
FxID <- 0x00000000
InitialState <- [
0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// fxID:
0x00, 0x00, 0x00, 0x00,
// num outputs:
0x00, 0x00, 0x00, 0x01,
// output:
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
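An initial state is simply the feature-extension ID followed by a length-prefixed array of serialized outputs; a minimal sketch (helper name is illustrative):

```python
import struct

def pack_initial_state(fx_id: int, outputs: list[bytes]) -> bytes:
    """Serialize an initial state: fx_id plus a length-prefixed
    array of already-serialized outputs."""
    out = struct.pack(">I", fx_id)        # 0 for SECP256K1, 1 for NFT
    out += struct.pack(">I", len(outputs))
    for o in outputs:
        out += o
    return out
```

Feeding it the example transfer output blob with `FxID` 0 reproduces the bytes above.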
## Credentials
Credentials have two possible types: `SECP256K1Credential` and `NFTCredential`.
Each credential is paired with an input or operation. The order of the
credentials matches the order of the inputs or operations.
## SECP256K1 Credential
A [secp256k1](/docs/api-reference/standards/cryptographic-primitives#secp256k1-addresses) credential
contains a list of 65-byte recoverable signatures.
### What SECP256K1 Credential Contains
* **`TypeID`** is the ID for this type. It is `0x00000009`.
* **`Signatures`** is an array of 65-byte recoverable signatures. The order of
the signatures must match the input's signature indices.
### Gantt SECP256K1 Credential Specification
```text
+------------------------------+---------------------------------+
| type_id : int | 4 bytes |
+-----------------+------------+---------------------------------+
| signatures : [][65]byte | 4 + 65 * len(signatures) bytes |
+-----------------+------------+---------------------------------+
| 8 + 65 * len(signatures) bytes |
+---------------------------------+
```
### Proto SECP256K1 Credential Specification
```text
message SECP256K1Credential {
uint32 typeID = 1; // 4 bytes
repeated bytes signatures = 2; // 4 bytes + 65 bytes * len(signatures)
}
```
### SECP256K1 Credential Example
Let's make a secp256k1 credential with:
* **`TypeID`**: `9`
* **`Signatures`**:
* `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00`
* `0x404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00`
```text
[
TypeID <- 0x00000009
Signatures <- [
0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00,
0x404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00,
]
]
=
[
// Type ID
0x00, 0x00, 0x00, 0x09,
// length:
0x00, 0x00, 0x00, 0x02,
// sig[0]
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1e, 0x1d, 0x1f,
0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2e, 0x2d, 0x2f,
0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
0x00,
// sig[1]
0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57,
0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5e, 0x5d, 0x5f,
0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6e, 0x6d, 0x6f,
0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77,
0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f,
0x00,
]
```
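A credential is the type ID, a 4-byte count, and the raw 65-byte signatures in input-index order; a sketch that also enforces the fixed signature length (helper name is illustrative):

```python
import struct

def pack_secp_credential(signatures: list[bytes]) -> bytes:
    """Serialize a SECP256K1 credential (type ID 0x00000009) from
    65-byte recoverable signatures, kept in input-index order."""
    out = struct.pack(">II", 0x00000009, len(signatures))
    for sig in signatures:
        if len(sig) != 65:
            raise ValueError("each signature must be exactly 65 bytes")
        out += sig
    return out
```

Two signatures therefore always serialize to 8 + 65 × 2 = 138 bytes, matching the Gantt total.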
## NFT Credential
An NFT credential is the same as a [secp256k1 credential](/docs/api-reference/x-chain/txn-format#secp256k1-credential) with a
different `TypeID`. The `TypeID` for an NFT credential is `0x0000000e`.
## Unsigned Transactions
Unsigned transactions contain the full content of a transaction with only the
signatures missing. Unsigned transactions have four possible types:
[`CreateAssetTx`](/docs/api-reference/x-chain/txn-format#what-unsigned-create-asset-tx-contains),
[`OperationTx`](/docs/api-reference/x-chain/txn-format#what-unsigned-operation-tx-contains),
[`ImportTx`](/docs/api-reference/x-chain/txn-format#what-unsigned-import-tx-contains),
and
[`ExportTx`](/docs/api-reference/x-chain/txn-format#what-unsigned-export-tx-contains).
They all embed
[`BaseTx`](/docs/api-reference/x-chain/txn-format#what-base-tx-contains), which
contains common fields and operations.
## Unsigned BaseTx
### What Base TX Contains
A base TX contains a `TypeID`, `NetworkID`, `BlockchainID`, `Outputs`, `Inputs`, and `Memo`.
* **`TypeID`** is the ID for this type. It is `0x00000000`.
* **`NetworkID`** is an int that defines which network this transaction is meant
to be issued to. This value is meant to support transaction routing and is not
designed for replay attack prevention.
* **`BlockchainID`** is a 32-byte array that defines which blockchain this
transaction was issued to. This is used for replay attack prevention for
transactions that could potentially be valid across networks or blockchains.
* **`Outputs`** is an array of [transferable output objects](/docs/api-reference/x-chain/txn-format#transferable-output). Outputs must
be sorted lexicographically by their serialized representation. The total
quantity of the assets created in these outputs must be less than or equal to
the total quantity of each asset consumed in the inputs minus the transaction
fee.
* **`Inputs`** is an array of [transferable input objects](/docs/api-reference/x-chain/txn-format#transferable-input). Inputs must be
sorted and unique. Inputs are sorted first lexicographically by their
**`TxID`** and then by the **`UTXOIndex`** from low to high. If there are
inputs that have the same **`TxID`** and **`UTXOIndex`**, then the transaction
is invalid as this would result in a double spend.
* **`Memo`** is a field that contains arbitrary bytes, up to 256 bytes.
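The input-ordering rule above (lexicographic by `TxID`, then `UTXOIndex` low to high, no duplicates) can be sketched with tuple comparison, which compares the `TxID` first and falls back to the index (helper name is illustrative):

```python
def sort_and_check_inputs(inputs: list[tuple[bytes, int]]) -> list[tuple[bytes, int]]:
    """Order (tx_id, utxo_index) pairs as the format requires:
    lexicographically by TxID, then by UTXOIndex, rejecting duplicates."""
    ordered = sorted(inputs)  # tuple comparison: tx_id first, then index
    for prev, cur in zip(ordered, ordered[1:]):
        if prev == cur:
            raise ValueError("duplicate (TxID, UTXOIndex): double spend")
    return ordered
```

A duplicate `(TxID, UTXOIndex)` pair would reference the same UTXO twice, which is why the transaction is invalid rather than merely mis-sorted.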
### Gantt Base TX Specification
```text
+--------------------------------------+-----------------------------------------+
| type_id : int | 4 bytes |
+---------------+----------------------+-----------------------------------------+
| network_id : int | 4 bytes |
+---------------+----------------------+-----------------------------------------+
| blockchain_id : [32]byte | 32 bytes |
+---------------+----------------------+-----------------------------------------+
| outputs : []TransferableOutput | 4 + size(outputs) bytes |
+---------------+----------------------+-----------------------------------------+
| inputs : []TransferableInput | 4 + size(inputs) bytes |
+---------------+----------------------+-----------------------------------------+
| memo : [256]byte | 4 + size(memo) bytes |
+---------------+----------------------+-----------------------------------------+
| 52 + size(outputs) + size(inputs) + size(memo) bytes |
+------------------------------------------------------+
```
### Proto Base TX Specification
```text
message BaseTx {
uint32 typeID = 1; // 04 bytes
uint32 network_id = 2; // 04 bytes
bytes blockchain_id = 3; // 32 bytes
repeated Output outputs = 4; // 04 bytes + size(outs)
repeated Input inputs = 5; // 04 bytes + size(ins)
bytes memo = 6; // 04 bytes + size(memo)
}
```
### Base TX Example
Let's make a base TX that uses the inputs and outputs from the previous examples:
* **`TypeID`**: `0`
* **`NetworkID`**: `4`
* **`BlockchainID`**: `0xffffffffeeeeeeeeddddddddcccccccbbbbbbbbaaaaaaaa9999999988888888`
* **`Outputs`**:
* `"Example Transferable Output as defined above"`
* **`Inputs`**:
* `"Example Transferable Input as defined above"`
* **`Memo`**: `0x00010203`
```text
[
TypeID <- 0x00000000
NetworkID <- 0x00000004
BlockchainID <- 0xffffffffeeeeeeeeddddddddcccccccbbbbbbbbaaaaaaaa9999999988888888
Outputs <- [
0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859
]
Inputs <- [
0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd15000000020000000700000003
]
Memo <- 0x00010203
]
=
[
// typeID
0x00, 0x00, 0x00, 0x00,
// networkID:
0x00, 0x00, 0x00, 0x04,
// blockchainID:
0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee,
0xdd, 0xdd, 0xdd, 0xdd, 0xcc, 0xcc, 0xcc, 0xcc,
0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa,
0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88,
// number of outputs:
0x00, 0x00, 0x00, 0x01,
// transferable output:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
// number of inputs:
0x00, 0x00, 0x00, 0x01,
// transferable input:
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03,
0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,
0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15,
0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x03,
// Memo length:
0x00, 0x00, 0x00, 0x04,
// Memo:
0x00, 0x01, 0x02, 0x03,
]
```
## Unsigned CreateAssetTx
### What Unsigned Create Asset TX Contains
An unsigned create asset TX contains a `BaseTx`, `Name`, `Symbol`,
`Denomination`, and `InitialStates`. The `TypeID` is `0x00000001`.
* **`BaseTx`**
* **`Name`** is a human readable string that defines the name of the asset this
transaction will create. The name is not guaranteed to be unique. The name
must consist of only printable ASCII characters and must be no longer than 128
characters.
* **`Symbol`** is a human readable string that defines the symbol of the asset
this transaction will create. The symbol is not guaranteed to be unique. The
symbol must consist of only printable ASCII characters and must be no longer
than 4 characters.
* **`Denomination`** is a byte that defines the divisibility of the asset this
transaction will create. For example, the AVAX token is divisible into
billionths. Therefore, the denomination of the AVAX token is 9. The
denomination must be no more than 32.
* **`InitialStates`** is a variable length array that defines the feature
extensions this asset supports, and the [initial state](/docs/api-reference/x-chain/txn-format#initial-state) of those feature
extensions.
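The field constraints above (printable ASCII, length limits, denomination at most 32) are easy to check up front; a minimal validation sketch (function name is illustrative):

```python
def validate_asset_metadata(name: str, symbol: str, denomination: int) -> None:
    """Check the CreateAssetTx field constraints described above."""
    if not (name.isascii() and name.isprintable() and len(name) <= 128):
        raise ValueError("name must be printable ASCII, at most 128 characters")
    if not (symbol.isascii() and symbol.isprintable() and len(symbol) <= 4):
        raise ValueError("symbol must be printable ASCII, at most 4 characters")
    if not 0 <= denomination <= 32:
        raise ValueError("denomination must be between 0 and 32")
```

For instance, AVAX itself would pass with denomination 9 (divisible into billionths), while a 5-character symbol is rejected.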
### Gantt Unsigned Create Asset TX Specification
```text
+----------------+----------------+--------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+----------------+----------------+--------------------------------------+
| name : string | 2 + len(name) bytes |
+----------------+----------------+--------------------------------------+
| symbol : string | 2 + len(symbol) bytes |
+----------------+----------------+--------------------------------------+
| denomination : byte | 1 byte |
+----------------+----------------+--------------------------------------+
| initial_states : []InitialState | 4 + size(initial_states) bytes |
+----------------+----------------+--------------------------------------+
| size(base_tx) + size(initial_states) |
| + 9 + len(name) + len(symbol) bytes |
+--------------------------------------+
```
### Proto Unsigned Create Asset TX Specification
```text
message CreateAssetTx {
BaseTx base_tx = 1; // size(base_tx)
string name = 2; // 2 bytes + len(name)
string symbol = 3; // 2 bytes + len(symbol)
uint8 denomination = 4; // 1 byte
repeated InitialState initial_states = 5; // 4 bytes + size(initial_states)
}
```
### Unsigned Create Asset TX Example
Let's make an unsigned create asset TX that uses the inputs and outputs from the previous examples:
* `BaseTx`: `"Example BaseTx as defined above with ID set to 1"`
* `Name`: `Volatility Index`
* `Symbol`: `VIX`
* `Denomination`: `2`
* **`InitialStates`**:
* `"Example Initial State as defined above"`
```text
[
BaseTx <- 0x0000000100000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203
Name <- 0x0010566f6c6174696c69747920496e646578
Symbol <- 0x0003564958
Denomination <- 0x02
InitialStates <- [
0x0000000000000001000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// base tx:
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x04,
0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee,
0xdd, 0xdd, 0xdd, 0xdd,
0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb,
0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x01,
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01,
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03,
0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,
0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15,
0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04,
0x00, 0x01, 0x02, 0x03,
// name length:
0x00, 0x10,
// name:
0x56, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6c, 0x69,
0x74, 0x79, 0x20, 0x49, 0x6e, 0x64, 0x65, 0x78,
// symbol length:
0x00, 0x03,
// symbol:
0x56, 0x49, 0x58,
// denomination:
0x02,
// number of InitialStates:
0x00, 0x00, 0x00, 0x01,
// InitialStates[0]:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
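Note that `Name` and `Symbol` use a 2-byte length prefix, unlike byte arrays elsewhere in the format, which use 4 bytes. A one-function sketch of the string encoding (helper name is illustrative):

```python
import struct

def pack_short_string(s: str) -> bytes:
    """Serialize a string field (name, symbol) with the 2-byte
    big-endian length prefix this format uses for strings."""
    encoded = s.encode("ascii")  # fields must be printable ASCII
    return struct.pack(">H", len(encoded)) + encoded
```

Packing `Volatility Index` and `VIX` reproduces the `Name` and `Symbol` byte strings in the example above.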
## Unsigned OperationTx
### What Unsigned Operation TX Contains
An unsigned operation TX contains a `BaseTx`, and `Ops`. The `TypeID` for this type is `0x00000002`.
* **`BaseTx`**
* **`Ops`** is a variable-length array of [Transferable Ops](/docs/api-reference/x-chain/txn-format#transferable-op).
### Gantt Unsigned Operation TX Specification
```text
+---------+------------------+-------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+---------+------------------+-------------------------------------+
| ops : []TransferableOp | 4 + size(ops) bytes |
+---------+------------------+-------------------------------------+
| 4 + size(ops) + size(base_tx) bytes |
+-------------------------------------+
```
### Proto Unsigned Operation TX Specification
```text
message OperationTx {
BaseTx base_tx = 1; // size(base_tx)
repeated TransferOp ops = 2; // 4 bytes + size(ops)
}
```
### Unsigned Operation TX Example
Let's make an unsigned operation TX that uses the inputs and outputs from the previous examples:
* `BaseTx`: `"Example BaseTx as defined above with TypeID set to 2"`
* **`Ops`**: \[`"Example Transferable Op as defined above"`]
```text
[
BaseTx <- 0x0000000200000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203
Ops <- [
0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f00000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000000000050000000d0000000200000003000000070000303900000003431100000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// base tx:
0x00, 0x00, 0x00, 0x02,
0x00, 0x00, 0x00, 0x04, 0xff, 0xff, 0xff, 0xff,
0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd,
0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb,
0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x01,
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01,
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03,
0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,
0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15,
0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04,
0x00, 0x01, 0x02, 0x03
// number of operations:
0x00, 0x00, 0x00, 0x01,
// transfer operation:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1,
0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41,
0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0,
0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40,
0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x00, 0x02,
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x03,
0x43, 0x11, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00,
0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb,
0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34,
0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3,
0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde,
0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43,
0xab, 0x08, 0x59,
]
```
## Unsigned ImportTx
### What Unsigned Import TX Contains
An unsigned import TX contains a `BaseTx`, `SourceChain`, and `Ins`. The `TypeID` for this type is `0x00000003`.
* **`BaseTx`**
* **`SourceChain`** is a 32-byte source blockchain ID.
* **`Ins`** is a variable-length array of [Transferable Inputs](/docs/api-reference/x-chain/txn-format#transferable-input).
### Gantt Unsigned Import TX Specification
```text
+---------+----------------------+-----------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+-----------------+--------------+-----------------------------+
| source_chain : [32]byte | 32 bytes |
+---------+----------------------+-----------------------------+
| ins : []TransferIn | 4 + size(ins) bytes |
+---------+----------------------+-----------------------------+
| 36 + size(ins) + size(base_tx) bytes |
+--------------------------------------+
```
### Proto Unsigned Import TX Specification
```text
message ImportTx {
BaseTx base_tx = 1; // size(base_tx)
bytes source_chain = 2; // 32 bytes
repeated TransferIn ins = 3; // 4 bytes + size(ins)
}
```
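As a sketch of the layout above, the following hypothetical helper (not AvalancheGo code) appends the 32-byte source chain ID and a length-prefixed input array to the BaseTx bytes:

```python
import struct

def serialize_import_tx(base_tx: bytes, source_chain: bytes, ins: list[bytes]) -> bytes:
    """BaseTx bytes, then the 32-byte source chain ID, then a uint32-prefixed input array."""
    assert len(source_chain) == 32, "source_chain is a 32-byte blockchain ID"
    out = base_tx + source_chain + struct.pack(">I", len(ins))
    for txin in ins:
        out += txin
    return out
```

The total is `36 + size(ins) + size(base_tx)` bytes, as the Gantt diagram states.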
### Unsigned Import TX Example
Let's make an unsigned import TX that uses the inputs from the previous examples:
* `BaseTx`: `"Example BaseTx as defined above"`, but with `TypeID` set to `3`
* `SourceChain`: `0x0000000000000000000000000000000000000000000000000000000000000000`
* `Ins`: `"Example SECP256K1 Transfer Input as defined above"`
```text
[
BaseTx <- 0x0000000300000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203
SourceChain <- 0x0000000000000000000000000000000000000000000000000000000000000000
Ins <- [
f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd15000000020000000300000007,
]
]
=
[
// base tx:
0x00, 0x00, 0x00, 0x03,
0x00, 0x00, 0x00, 0x04, 0xff, 0xff, 0xff, 0xff,
0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd,
0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb,
0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x01,
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01,
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03,
0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,
0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15,
0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04,
0x00, 0x01, 0x02, 0x03
// source chain:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// input count:
0x00, 0x00, 0x00, 0x01,
// txID:
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
// utxoIndex:
0x00, 0x00, 0x00, 0x05,
// assetID:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// input:
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02,
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07,
]
```
## Unsigned ExportTx
### What Unsigned Export TX Contains
An unsigned export TX contains a `BaseTx`, `DestinationChain`, and `Outs`. The `TypeID` for this type is `0x00000004`.
* **`BaseTx`**
* **`DestinationChain`** is the 32-byte ID of the chain to which the funds are being exported.
* **`Outs`** is a variable-length array of [Transferable Outputs](/docs/api-reference/x-chain/txn-format#transferable-output).
### Gantt Unsigned Export TX Specification
```text
+-------------------+---------------+--------------------------------------+
| base_tx : BaseTx | size(base_tx) bytes |
+-------------------+---------------+--------------------------------------+
| destination_chain : [32]byte | 32 bytes |
+-------------------+---------------+--------------------------------------+
| outs : []TransferOut | 4 + size(outs) bytes |
+-------------------+---------------+--------------------------------------+
| 36 + size(outs) + size(base_tx) bytes |
+---------------------------------------+
```
### Proto Unsigned Export TX Specification
```text
message ExportTx {
BaseTx base_tx = 1; // size(base_tx)
bytes destination_chain = 2; // 32 bytes
repeated TransferOut outs = 3; // 4 bytes + size(outs)
}
```
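ExportTx shares the shape of ImportTx: the BaseTx bytes, a 32-byte chain ID, then a length-prefixed array. A hypothetical serializer, for illustration only:

```python
import struct

def serialize_export_tx(base_tx: bytes, destination_chain: bytes, outs: list[bytes]) -> bytes:
    """BaseTx bytes, then the 32-byte destination chain ID, then a uint32-prefixed output array."""
    assert len(destination_chain) == 32, "destination_chain is a 32-byte blockchain ID"
    return base_tx + destination_chain + struct.pack(">I", len(outs)) + b"".join(outs)
```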
### Unsigned Export TX Example
Let's make an unsigned export TX that uses the outputs from the previous examples:
* `BaseTx`: `"Example BaseTx as defined above"`, but with `TypeID` set to `4`
* `DestinationChain`: `0x0000000000000000000000000000000000000000000000000000000000000000`
* `Outs`: `"Example SECP256K1 Transfer Output as defined above"`
```text
[
BaseTx <- 0x0000000400000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203
DestinationChain <- 0x0000000000000000000000000000000000000000000000000000000000000000
Outs <- [
000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859,
]
]
=
[
// base tx:
0x00, 0x00, 0x00, 0x04,
0x00, 0x00, 0x00, 0x04, 0xff, 0xff, 0xff, 0xff,
0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd,
0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb,
0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99,
0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x01,
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01,
0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81,
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01,
0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80,
0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00,
0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03,
0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,
0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05,
0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15,
0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04,
0x00, 0x01, 0x02, 0x03
// destination_chain:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// outs[] count:
0x00, 0x00, 0x00, 0x01,
// assetID:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// output:
0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02,
0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78,
0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2,
0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28,
0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2,
0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59,
]
```
## Signed Transaction
A signed transaction is an unsigned transaction with the addition of an array of [credentials](/docs/api-reference/x-chain/txn-format#credentials).
### What Signed Transaction Contains
A signed transaction contains a `CodecID`, `UnsignedTx`, and `Credentials`.
* **`CodecID`** The only currently valid codec ID is `00 00`.
* **`UnsignedTx`** is an unsigned transaction, as described above.
* **`Credentials`** is an array of
[credentials](/docs/api-reference/x-chain/txn-format#credentials). Each credential
is paired with the input at the same index as that credential.
### Gantt Signed Transaction Specification
```text
+---------------------+--------------+------------------------------------------------+
| codec_id : uint16 | 2 bytes |
+---------------------+--------------+------------------------------------------------+
| unsigned_tx : UnsignedTx | size(unsigned_tx) bytes |
+---------------------+--------------+------------------------------------------------+
| credentials : []Credential | 4 + size(credentials) bytes |
+---------------------+--------------+------------------------------------------------+
                                           | 6 + size(unsigned_tx) + size(credentials) bytes |
                                           +--------------------------------------------------+
```
### Proto Signed Transaction Specification
```text
message Tx {
uint16 codec_id = 1; // 2 bytes
UnsignedTx unsigned_tx = 2; // size(unsigned_tx)
repeated Credential credentials = 3; // 4 bytes + size(credentials)
}
```
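Assembling a signed transaction is then a matter of prepending the 2-byte codec ID and appending a length-prefixed credential array. A minimal sketch, assuming opaque unsigned-transaction and credential bytes (this is an illustrative helper, not AvalancheGo code):

```python
import struct

def serialize_signed_tx(unsigned_tx: bytes, credentials: list[bytes], codec_id: int = 0) -> bytes:
    out = struct.pack(">H", codec_id)           # 2-byte codec ID (currently 0x0000)
    out += unsigned_tx                          # unsigned transaction bytes
    out += struct.pack(">I", len(credentials))  # uint32 credential count
    return out + b"".join(credentials)
```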
### Signed Transaction Example
Let's make a signed transaction that uses the unsigned transaction and
credentials from the previous examples.
* **`CodecID`**: `0`
* **`UnsignedTx`**: `0x0000000100000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203`
* **`Credentials`** `0x0000000900000002000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00`
```text
[
CodecID <- 0x0000
UnsignedTx <- 0x0000000100000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203
Credentials <- [
0x0000000900000002000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00,
]
]
=
[
// Codec ID
0x00, 0x00,
// unsigned transaction:
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x04,
0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee,
0xdd, 0xdd, 0xdd, 0xdd, 0xcc, 0xcc, 0xcc, 0xcc,
0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa,
0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88,
0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x02, 0x03,
0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,
0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x07,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02,
0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78,
0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2,
0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28,
0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2,
0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59,
0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1,
0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41,
0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0,
0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40,
0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05,
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00,
0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x03,
0x00, 0x00, 0x00, 0x04, 0x00, 0x01, 0x02, 0x03
// number of credentials:
0x00, 0x00, 0x00, 0x01,
// credential[0]:
0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x02,
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1e, 0x1d, 0x1f,
0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2e, 0x2d, 0x2f,
0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
0x00, 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46,
0x47, 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e,
0x4f, 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56,
0x57, 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5e, 0x5d,
0x5f, 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66,
0x67, 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6e, 0x6d,
0x6f, 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76,
0x77, 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e,
0x7f, 0x00,
]
```
## UTXO
A UTXO is a standalone representation of a transaction output.
### What UTXO Contains
A UTXO contains a `CodecID`, `TxID`, `UTXOIndex`, `AssetID`, and `Output`.
* **`CodecID`** The only valid `CodecID` is `00 00`.
* **`TxID`** is a 32-byte transaction ID. Transaction IDs are calculated by
taking the SHA-256 hash of the bytes of the signed transaction.
* **`UTXOIndex`** is an int that specifies which output of the transaction
identified by **`TxID`** created this UTXO.
* **`AssetID`** is a 32-byte array that defines which asset this UTXO
references.
* **`Output`** is the output object that created this UTXO. The serialization of
Outputs was defined above. Valid output types are [SECP Mint Output](/docs/api-reference/x-chain/txn-format#secp256k1-mint-output), [SECP Transfer Output](/docs/api-reference/x-chain/txn-format#secp256k1-transfer-output),
[NFT Mint Output](/docs/api-reference/x-chain/txn-format#nft-mint-output), [NFT Transfer Output](/docs/api-reference/x-chain/txn-format#nft-transfer-output).
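Since a TxID is the SHA-256 hash of the signed transaction bytes, it can be computed directly; `tx_id` below is an illustrative helper, not an AvalancheGo API:

```python
import hashlib

def tx_id(signed_tx: bytes) -> bytes:
    """A TxID is the 32-byte SHA-256 hash of the signed transaction bytes."""
    return hashlib.sha256(signed_tx).digest()
```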
### Gantt UTXO Specification
```text
+--------------+----------+-------------------------+
| codec_id : uint16 | 2 bytes |
+--------------+----------+-------------------------+
| tx_id : [32]byte | 32 bytes |
+--------------+----------+-------------------------+
| output_index : int | 4 bytes |
+--------------+----------+-------------------------+
| asset_id : [32]byte | 32 bytes |
+--------------+----------+-------------------------+
| output : Output | size(output) bytes |
+--------------+----------+-------------------------+
| 70 + size(output) bytes |
+-------------------------+
```
### Proto UTXO Specification
```text
message Utxo {
uint16 codec_id = 1; // 02 bytes
bytes tx_id = 2; // 32 bytes
uint32 output_index = 3; // 04 bytes
bytes asset_id = 4; // 32 bytes
Output output = 5; // size(output)
}
```
### UTXO Examples
Let's make a UTXO with a SECP Mint Output:
* **`CodecID`**: `0`
* **`TxID`**: `0x47c92ed62d18e3cccda512f60a0d5b1e939b6ab73fb2d011e5e306e79bd0448f`
* **`UTXOIndex`**: `1` = `0x00000001`
* **`AssetID`**: `0x47c92ed62d18e3cccda512f60a0d5b1e939b6ab73fb2d011e5e306e79bd0448f`
* **`Output`**: `0x00000006000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a`
```text
[
CodecID <- 0x0000
TxID <- 0x47c92ed62d18e3cccda512f60a0d5b1e939b6ab73fb2d011e5e306e79bd0448f
UTXOIndex <- 0x00000001
AssetID <- 0x47c92ed62d18e3cccda512f60a0d5b1e939b6ab73fb2d011e5e306e79bd0448f
Output <- 00000006000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a
]
=
[
// codecID:
0x00, 0x00,
// txID:
0x47, 0xc9, 0x2e, 0xd6, 0x2d, 0x18, 0xe3, 0xcc,
0xcd, 0xa5, 0x12, 0xf6, 0x0a, 0x0d, 0x5b, 0x1e,
0x93, 0x9b, 0x6a, 0xb7, 0x3f, 0xb2, 0xd0, 0x11,
0xe5, 0xe3, 0x06, 0xe7, 0x9b, 0xd0, 0x44, 0x8f,
// utxo index:
0x00, 0x00, 0x00, 0x01,
// assetID:
0x47, 0xc9, 0x2e, 0xd6, 0x2d, 0x18, 0xe3, 0xcc,
0xcd, 0xa5, 0x12, 0xf6, 0x0a, 0x0d, 0x5b, 0x1e,
0x93, 0x9b, 0x6a, 0xb7, 0x3f, 0xb2, 0xd0, 0x11,
0xe5, 0xe3, 0x06, 0xe7, 0x9b, 0xd0, 0x44, 0x8f,
// secp mint output:
0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84,
0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1,
0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c,
0x62, 0x76, 0xaa, 0x2a,
]
```
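Because every field before the output is fixed-width, the example above can be rebuilt with a few lines. This is a hypothetical helper for illustration, not AvalancheGo code:

```python
import struct

def serialize_utxo(tx_id: bytes, utxo_index: int, asset_id: bytes,
                   output: bytes, codec_id: int = 0) -> bytes:
    """codec_id (2B) + tx_id (32B) + utxo_index (4B) + asset_id (32B) + output."""
    assert len(tx_id) == 32 and len(asset_id) == 32
    return (struct.pack(">H", codec_id) + tx_id +
            struct.pack(">I", utxo_index) + asset_id + output)

# Values from the SECP mint output UTXO example above.
tx = bytes.fromhex("47c92ed62d18e3cccda512f60a0d5b1e939b6ab73fb2d011e5e306e79bd0448f")
out = bytes.fromhex("00000006000000000000000000000001"
                    "000000013cb7d3842e8cee6a0ebd09f1"
                    "fe884f6861e1b29c6276aa2a")
utxo = serialize_utxo(tx, 1, tx, out)
assert len(utxo) == 70 + len(out)  # matches the Gantt size formula
```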
Let's make a UTXO with a SECP Transfer Output from the signed transaction created above:
* **`CodecID`**: `0`
* **`TxID`**: `0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7`
* **`UTXOIndex`**: `0` = `0x00000000`
* **`AssetID`**: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f`
* **`Output`**: `"Example SECP256K1 Transferable Output as defined above"`
```text
[
CodecID <- 0x0000
TxID <- 0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7
UTXOIndex <- 0x00000000
AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
Output <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859
]
=
[
// codecID:
0x00, 0x00,
// txID:
0xf9, 0x66, 0x75, 0x0f, 0x43, 0x88, 0x67, 0xc3,
0xc9, 0x82, 0x8d, 0xdc, 0xdb, 0xe6, 0x60, 0xe2,
0x1c, 0xcd, 0xbb, 0x36, 0xa9, 0x27, 0x69, 0x58,
0xf0, 0x11, 0xba, 0x47, 0x2f, 0x75, 0xd4, 0xe7,
// utxo index:
0x00, 0x00, 0x00, 0x00,
// assetID:
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// secp transfer output:
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x00, 0x01, 0x02, 0x03,
0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,
0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f, 0x20, 0x21, 0x22, 0x23,
0x24, 0x25, 0x26, 0x27,
]
```
Let's make a UTXO with an NFT Mint Output:
* **`CodecID`**: `0`
* **`TxID`**: `0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7`
* **`UTXOIndex`**: `1` = `0x00000001`
* **`AssetID`**: `0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7`
* **`Output`**: `0x0000000a00000000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a`
```text
[
CodecID <- 0x0000
TxID <- 0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7
UTXOIndex <- 0x00000001
AssetID <- 0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7
Output <- 0000000a00000000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a
]
=
[
// codecID:
0x00, 0x00,
// txID:
0x03, 0xc6, 0x86, 0xef, 0xe8, 0xd8, 0x0c, 0x51,
0x9f, 0x35, 0x69, 0x29, 0xf6, 0xda, 0x94, 0x5f,
0x7f, 0xf9, 0x03, 0x78, 0xf0, 0x04, 0x4b, 0xb0,
0xe1, 0xa5, 0xd6, 0xc1, 0xad, 0x06, 0xba, 0xe7,
// utxo index:
0x00, 0x00, 0x00, 0x01,
// assetID:
0x03, 0xc6, 0x86, 0xef, 0xe8, 0xd8, 0x0c, 0x51,
0x9f, 0x35, 0x69, 0x29, 0xf6, 0xda, 0x94, 0x5f,
0x7f, 0xf9, 0x03, 0x78, 0xf0, 0x04, 0x4b, 0xb0,
0xe1, 0xa5, 0xd6, 0xc1, 0xad, 0x06, 0xba, 0xe7,
// nft mint output:
0x00, 0x00, 0x00, 0x0a, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01,
0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a,
0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68,
0x61, 0xe1, 0xb2, 0x9c, 0x62, 0x76, 0xaa, 0x2a,
]
```
Let's make a UTXO with an NFT Transfer Output:
* **`CodecID`**: `0`
* **`TxID`**: `0xa68f794a7de7bdfc5db7ba5b73654304731dd586bbf4a6d7b05be6e49de2f936`
* **`UTXOIndex`**: `1` = `0x00000001`
* **`AssetID`**: `0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7`
* **`Output`**: `0x0000000b000000000000000b4e4654205061796c6f6164000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a`
```text
[
CodecID <- 0x0000
TxID <- 0xa68f794a7de7bdfc5db7ba5b73654304731dd586bbf4a6d7b05be6e49de2f936
UTXOIndex <- 0x00000001
AssetID <- 0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7
Output <- 0000000b000000000000000b4e4654205061796c6f6164000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a
]
=
[
// codecID:
0x00, 0x00,
// txID:
0xa6, 0x8f, 0x79, 0x4a, 0x7d, 0xe7, 0xbd, 0xfc,
0x5d, 0xb7, 0xba, 0x5b, 0x73, 0x65, 0x43, 0x04,
0x73, 0x1d, 0xd5, 0x86, 0xbb, 0xf4, 0xa6, 0xd7,
0xb0, 0x5b, 0xe6, 0xe4, 0x9d, 0xe2, 0xf9, 0x36,
// utxo index:
0x00, 0x00, 0x00, 0x01,
// assetID:
0x03, 0xc6, 0x86, 0xef, 0xe8, 0xd8, 0x0c, 0x51,
0x9f, 0x35, 0x69, 0x29, 0xf6, 0xda, 0x94, 0x5f,
0x7f, 0xf9, 0x03, 0x78, 0xf0, 0x04, 0x4b, 0xb0,
0xe1, 0xa5, 0xd6, 0xc1, 0xad, 0x06, 0xba, 0xe7,
// nft transfer output:
0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x0b, 0x4e, 0x46, 0x54, 0x20,
0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x3c,
0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a, 0x0e,
0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68, 0x61,
0xe1, 0xb2, 0x9c, 0x62, 0x76, 0xaa, 0x2a,
]
```
## GenesisAsset
An asset to be issued in an instance of the AVM's Genesis.
### What GenesisAsset Contains
An instance of a GenesisAsset contains an `Alias`, `NetworkID`, `BlockchainID`,
`Outputs`, `Inputs`, `Memo`, `Name`, `Symbol`, `Denomination`, and
`InitialStates`.
* **`Alias`** is the alias for this asset.
* **`NetworkID`** defines which network this transaction is meant to be issued
to. This value is meant to support transaction routing and is not designed for
replay attack prevention.
* **`BlockchainID`** is the ID (32-byte array) that defines which blockchain
this transaction was issued to. This is used for replay attack prevention for
transactions that could potentially be valid across networks or blockchains.
* **`Outputs`** is an array of [transferable output objects](/docs/api-reference/x-chain/txn-format#transferable-output). Outputs must
be sorted lexicographically by their serialized representation. The total
quantity of the assets created in these outputs must be less than or equal to
the total quantity of each asset consumed in the inputs minus the transaction
fee.
* **`Inputs`** is an array of [transferable input objects](/docs/api-reference/x-chain/txn-format#transferable-input). Inputs must be
sorted and unique. Inputs are sorted first lexicographically by their
**`TxID`** and then by the **`UTXOIndex`** from low to high. If there are
inputs that have the same **`TxID`** and **`UTXOIndex`**, then the transaction
is invalid as this would result in a double spend.
* **`Memo`** is a memo field that contains arbitrary bytes, up to 256 bytes.
* **`Name`** is a human-readable string that defines the name of the asset this
transaction will create. The name is not guaranteed to be unique. The name
must consist of only printable ASCII characters and must be no longer than 128
characters.
* **`Symbol`** is a human-readable string that defines the symbol of the asset
this transaction will create. The symbol is not guaranteed to be unique. The
symbol must consist of only printable ASCII characters and must be no longer
than 4 characters.
* **`Denomination`** is a byte that defines the divisibility of the asset this
transaction will create. For example, the AVAX token is divisible into
billionths. Therefore, the denomination of the AVAX token is 9. The
denomination must be no more than 32.
* **`InitialStates`** is a variable length array that defines the feature
extensions this asset supports, and the [initial state](/docs/api-reference/x-chain/txn-format#initial-state) of those feature
extensions.
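The `Denomination` rule above can be sanity-checked with a couple of lines; `to_base_units` is a hypothetical helper used only to illustrate the arithmetic:

```python
def to_base_units(whole_tokens: int, denomination: int) -> int:
    """Convert whole tokens to the asset's smallest indivisible unit."""
    assert 0 <= denomination <= 32, "denomination must be no more than 32"
    return whole_tokens * 10 ** denomination

# 1 AVAX (denomination 9) is one billion base units (nAVAX).
assert to_base_units(1, 9) == 1_000_000_000
```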
### Gantt GenesisAsset Specification
````text
+----------------+----------------------+--------------------------------+
| alias : string | 2 + len(alias) bytes |
+----------------+----------------------+--------------------------------+
| network_id : int | 4 bytes |
+----------------+----------------------+--------------------------------+
| blockchain_id : [32]byte | 32 bytes |
+----------------+----------------------+--------------------------------+
| outputs : []TransferableOutput | 4 + size(outputs) bytes |
+----------------+----------------------+--------------------------------+
| inputs : []TransferableInput | 4 + size(inputs) bytes |
+----------------+----------------------+--------------------------------+
| memo : [256]byte | 4 + size(memo) bytes |
+----------------+----------------------+--------------------------------+
| name : string | 2 + len(name) bytes |
+----------------+----------------------+--------------------------------+
| symbol : string | 2 + len(symbol) bytes |
+----------------+----------------------+--------------------------------+
| denomination : byte | 1 bytes |
+----------------+----------------------+--------------------------------+
| initial_states : []InitialState | 4 + size(initial_states) bytes |
+----------------+----------------------+--------------------------------+
| 59 + size(alias) + size(outputs) + size(inputs) + size(memo) |
| + len(name) + len(symbol) + size(initial_states) bytes |
+------------------------------------------------------------------------+
````
### Proto GenesisAsset Specification
````text
message GenesisAsset {
string alias = 1; // 2 bytes + len(alias)
uint32 network_id = 2; // 04 bytes
bytes blockchain_id = 3; // 32 bytes
repeated Output outputs = 4; // 04 bytes + size(outputs)
repeated Input inputs = 5; // 04 bytes + size(inputs)
bytes memo = 6; // 04 bytes + size(memo)
string name = 7; // 2 bytes + len(name)
string symbol = 8; // 2 bytes + len(symbol)
uint8 denomination = 9; // 1 bytes
repeated InitialState initial_states = 10; // 4 bytes + size(initial_states)
}
````
### GenesisAsset Example
Let's make a GenesisAsset:
* **`Alias`**: `asset1`
* **`NetworkID`**: `12345`
* **`BlockchainID`**: `0x0000000000000000000000000000000000000000000000000000000000000000`
* **`Outputs`**: \[]
* **`Inputs`**: \[]
* **`Memo`**: `2Zc54v4ek37TEwu4LiV3j41PUMRd6acDDU3ZCVSxE7X`
* **`Name`**: `myFixedCapAsset`
* **`Symbol`**: `MFCA`
* **`Denomination`**: `7`
* **`InitialStates`**:
* `"Example Initial State as defined above"`
```text
[
Alias <- 0x617373657431
NetworkID <- 0x00003039
BlockchainID <- 0x0000000000000000000000000000000000000000000000000000000000000000
Outputs <- []
Inputs <- []
Memo <- 0x66726f6d20736e6f77666c616b6520746f206176616c616e636865
Name <- 0x6d7946697865644361704173736574
Symbol <- 0x4d464341
Denomination <- 0x07
InitialStates <- [
0x0000000000000001000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859
]
]
=
[
// asset alias len:
0x00, 0x06,
// asset alias:
0x61, 0x73, 0x73, 0x65, 0x74, 0x31,
// network_id:
0x00, 0x00, 0x30, 0x39,
// blockchain_id:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
// output_len:
0x00, 0x00, 0x00, 0x00,
// input_len:
0x00, 0x00, 0x00, 0x00,
// memo_len:
0x00, 0x00, 0x00, 0x1b,
// memo:
0x66, 0x72, 0x6f, 0x6d, 0x20, 0x73, 0x6e, 0x6f,
0x77, 0x66, 0x6c, 0x61, 0x6b, 0x65, 0x20, 0x74,
0x6f, 0x20, 0x61, 0x76, 0x61, 0x6c, 0x61, 0x6e,
0x63, 0x68, 0x65,
// asset_name_len:
0x00, 0x0f,
// asset_name:
0x6d, 0x79, 0x46, 0x69, 0x78, 0x65, 0x64, 0x43, 0x61, 0x70, 0x41, 0x73, 0x73, 0x65, 0x74,
// symbol_len:
0x00, 0x04,
// symbol:
0x4d, 0x46, 0x43, 0x41,
// denomination:
0x07,
// number of InitialStates:
0x00, 0x00, 0x00, 0x01,
// InitialStates[0]:
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
0x43, 0xab, 0x08, 0x59,
]
```
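The length-prefixed string fields in the dump above can be reproduced from a POSIX shell; a minimal sketch using `printf` and `od` (the `hex` helper is ours, the values are taken from the example):

```sh
# Hex-encode a string (od prints one byte per column; strip the whitespace).
hex() { printf '%s' "$1" | od -An -tx1 | tr -d ' \n'; }

name="myFixedCapAsset"                        # short strings: 2-byte length prefix
printf '%04x%s\n' "${#name}" "$(hex "$name")" # 000f6d7946697865644361704173736574

memo="from snowflake to avalanche"            # byte arrays: 4-byte length prefix
printf '%08x\n' "${#memo}"                    # 0000001b

printf '%08x\n' 12345                         # network_id: 00003039
```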
# Avalanche-CLI Commands
URL: /docs/avalanche-l1s/deploy-a-avalanche-l1/cli_structure
This page documents the Avalanche-CLI `blockchain` command suite, its subcommands, and their flags.
## avalanche blockchain
The blockchain command suite provides a collection of tools for developing
and deploying Blockchains.
To get started, use the blockchain create command wizard to walk through the
configuration of your very first Blockchain. Then, go ahead and deploy it
with the blockchain deploy command. You can use the rest of the commands to
manage your Blockchain configurations and live deployments.
**Usage:**
```bash
avalanche blockchain [subcommand] [flags]
```
**Subcommands:**
* [`addValidator`](#avalanche-blockchain-addvalidator): The blockchain addValidator command adds a node as a validator to
an L1 of the user provided deployed network. If the network is proof of
authority, the owner of the validator manager contract must sign the
transaction. If the network is proof of stake, the node must stake the L1's
staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain.
This command currently only works on Blockchains deployed to either the Fuji
Testnet or Mainnet.
* [`changeOwner`](#avalanche-blockchain-changeowner): The blockchain changeOwner command changes the owner of the subnet of the deployed Blockchain.
* [`changeWeight`](#avalanche-blockchain-changeweight): The blockchain changeWeight command changes the weight of a Subnet Validator.
The Subnet has to be a Proof of Authority Subnet-Only Validator Subnet.
* [`configure`](#avalanche-blockchain-configure): AvalancheGo nodes support several different configuration files. Subnets have their own
Subnet config which applies to all chains/VMs in the Subnet. Each chain within the Subnet
can have its own chain config. A chain can also have special requirements for the AvalancheGo node
configuration itself. This command allows you to set all those files.
* [`create`](#avalanche-blockchain-create): The blockchain create command builds a new genesis file to configure your Blockchain.
By default, the command runs an interactive wizard. It walks you through
all the steps you need to create your first Blockchain.
The tool supports deploying Subnet-EVM, and custom VMs. You
can create a custom, user-generated genesis with a custom VM by providing
the path to your genesis and VM binaries with the --genesis and --vm flags.
By default, running the command with a blockchainName that already exists
causes the command to fail. If you'd like to overwrite an existing
configuration, pass the -f flag.
* [`delete`](#avalanche-blockchain-delete): The blockchain delete command deletes an existing blockchain configuration.
* [`deploy`](#avalanche-blockchain-deploy): The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet.
At the end of the call, the command prints the RPC URL you can use to interact with the Subnet.
Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent
attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't
allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call
avalanche network clean to reset all deployed chain state. Subsequent local deploys
redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks,
so you can take your locally tested Subnet and deploy it on Fuji or Mainnet.
* [`describe`](#avalanche-blockchain-describe): The blockchain describe command prints the details of a Blockchain configuration to the console.
By default, the command prints a summary of the configuration. By providing the --genesis
flag, the command instead prints out the raw genesis file.
* [`export`](#avalanche-blockchain-export): The blockchain export command writes the details of an existing Blockchain deploy to a file.
The command prompts for an output path. You can also provide one with
the --output flag.
* [`import`](#avalanche-blockchain-import): Import blockchain configurations into avalanche-cli.
This command suite supports importing from a file created on another computer,
or importing from blockchains running public networks
(e.g. created manually or with the deprecated subnet-cli)
* [`join`](#avalanche-blockchain-join): The blockchain join command configures your validator node to begin validating a new Blockchain.
To complete this process, you must have access to the machine running your validator. If the
CLI is running on the same machine as your validator, it can generate or update your node's
config file automatically. Alternatively, the command can print the necessary instructions
to update your node manually. To complete the validation process, the Subnet's admins must add
the NodeID of your validator to the Subnet's allow list by calling addValidator with your
NodeID.
After you update your validator's config, you need to restart your validator manually. If
you provide the --avalanchego-config flag, this command attempts to edit the config file
at that path.
This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
* [`list`](#avalanche-blockchain-list): The blockchain list command prints the names of all created Blockchain configurations. Without any flags,
it prints some general, static information about the Blockchain. With the --deployed flag, the command
shows additional information including the VMID, BlockchainID and SubnetID.
* [`publish`](#avalanche-blockchain-publish): The blockchain publish command publishes the Blockchain's VM to a repository.
* [`removeValidator`](#avalanche-blockchain-removevalidator): The blockchain removeValidator command stops a whitelisted, subnet network validator from
validating your deployed Blockchain.
To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass
these prompts by providing the values with flags.
* [`stats`](#avalanche-blockchain-stats): The blockchain stats command prints validator statistics for the given Blockchain.
* [`upgrade`](#avalanche-blockchain-upgrade): The blockchain upgrade command suite provides a collection of tools for
updating your developmental and deployed Blockchains.
* [`validators`](#avalanche-blockchain-validators): The blockchain validators command lists the validators of a blockchain's subnet and provides
several statistics about them.
* [`vmid`](#avalanche-blockchain-vmid): The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain.
**Flags:**
```bash
-h, --help help for blockchain
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### addValidator
The blockchain addValidator command adds a node as a validator to
an L1 of the user provided deployed network. If the network is proof of
authority, the owner of the validator manager contract must sign the
transaction. If the network is proof of stake, the node must stake the L1's
staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain.
This command currently only works on Blockchains deployed to either the Fuji
Testnet or Mainnet.
**Usage:**
```bash
avalanche blockchain addValidator [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string log level to use with signature aggregator (default "Off")
--balance uint set the AVAX balance of the validator that will be used for continuous fee on P-Chain
--blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's registration (blockchain gas token)
--blockchain-key string CLI stored key to use to pay fees for completing the validator's registration (blockchain gas token)
--blockchain-private-key string private key to use to pay fees for completing the validator's registration (blockchain gas token)
--bls-proof-of-possession string set the BLS proof of possession of the validator to add
--bls-public-key string set the BLS public key of the validator to add
--cluster string operate on the given cluster
--create-local-validator create additional local validator and add it to existing running local node
--default-duration (for Subnets, not L1s) set duration so as to validate until primary validator ends its period
--default-start-time (for Subnets, not L1s) use default start time for subnet validator (5 minutes later for fuji & mainnet, 30 seconds later for devnet)
--default-validator-params (for Subnets, not L1s) use default weight/start/duration params for subnet validator
--delegation-fee uint16 (PoS only) delegation fee (in bips) (default 100)
--devnet operate on a devnet network
--disable-owner string P-Chain address that will able to disable the validator with a P-Chain transaction
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet only]
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for addValidator
-k, --key string select the key to use [fuji/devnet only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-endpoint string gather node id/bls from publicly available avalanchego apis on the given endpoint
--node-id string node-id of the validator to add
--output-tx-path string (for Subnets, not L1s) file path of the add validator tx
--partial-sync set primary network partial sync for new validators (default true)
--remaining-balance-owner string P-Chain address that will receive any leftover AVAX from the validator when it is removed from Subnet
--rpc string connect to validator manager at the given rpc endpoint
--stake-amount uint (PoS only) amount of tokens to stake
--staking-period duration how long this validator will be staking
--start-time string (for Subnets, not L1s) UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
--subnet-auth-keys strings (for Subnets, not L1s) control keys that will be used to authenticate add validator tx
-t, --testnet fuji operate on testnet (alias to fuji)
--wait-for-tx-acceptance (for Subnets, not L1s) just issue the add validator tx, without waiting for its acceptance (default true)
--weight uint set the staking weight of the validator to add (default 20)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### changeOwner
The blockchain changeOwner command changes the owner of the subnet of the deployed Blockchain.
**Usage:**
```bash
avalanche blockchain changeOwner [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--control-keys strings addresses that may make subnet changes
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet]
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for changeOwner
-k, --key string select the key to use [fuji/devnet]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--output-tx-path string file path of the transfer subnet ownership tx
-s, --same-control-key use the fee-paying key as control key
--subnet-auth-keys strings control keys that will be used to authenticate transfer subnet ownership tx
-t, --testnet fuji operate on testnet (alias to fuji)
--threshold uint32 required number of control key signatures to make subnet changes
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### changeWeight
The blockchain changeWeight command changes the weight of a Subnet Validator.
The Subnet has to be a Proof of Authority Subnet-Only Validator Subnet.
**Usage:**
```bash
avalanche blockchain changeWeight [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet only]
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for changeWeight
-k, --key string select the key to use [fuji/devnet only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-id string node-id of the validator
-t, --testnet fuji operate on testnet (alias to fuji)
--weight uint set the new staking weight of the validator (default 20)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### configure
AvalancheGo nodes support several different configuration files. Subnets have their own
Subnet config which applies to all chains/VMs in the Subnet. Each chain within the Subnet
can have its own chain config. A chain can also have special requirements for the AvalancheGo node
configuration itself. This command allows you to set all those files.
**Usage:**
```bash
avalanche blockchain configure [subcommand] [flags]
```
**Flags:**
```bash
--chain-config string path to the chain configuration
-h, --help help for configure
--node-config string path to avalanchego node configuration
--per-node-chain-config string path to per node chain configuration for local network
--subnet-config string path to the subnet configuration
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### create
The blockchain create command builds a new genesis file to configure your Blockchain.
By default, the command runs an interactive wizard. It walks you through
all the steps you need to create your first Blockchain.
The tool supports deploying Subnet-EVM, and custom VMs. You
can create a custom, user-generated genesis with a custom VM by providing
the path to your genesis and VM binaries with the --genesis and --vm flags.
By default, running the command with a blockchainName that already exists
causes the command to fail. If you'd like to overwrite an existing
configuration, pass the -f flag.
**Usage:**
```bash
avalanche blockchain create [subcommand] [flags]
```
**Flags:**
```bash
--custom use a custom VM template
--custom-vm-branch string custom vm branch or commit
--custom-vm-build-script string custom vm build-script
--custom-vm-path string file path of custom vm to use
--custom-vm-repo-url string custom vm repository url
--debug enable blockchain debugging (default true)
--evm use the Subnet-EVM as the base template
--evm-chain-id uint chain ID to use with Subnet-EVM
--evm-defaults deprecation notice: use '--production-defaults'
--evm-token string token symbol to use with Subnet-EVM
--external-gas-token use a gas token from another blockchain
-f, --force overwrite the existing configuration if one exists
--from-github-repo generate custom VM binary from github repository
--genesis string file path of genesis to use
-h, --help help for create
--icm interoperate with other blockchains using ICM
--icm-registry-at-genesis setup ICM registry smart contract on genesis [experimental]
--latest use latest Subnet-EVM released version, takes precedence over --vm-version
--pre-release use latest Subnet-EVM pre-released version, takes precedence over --vm-version
--production-defaults use default production settings for your blockchain
--proof-of-authority use proof of authority (PoA) for validator management
--proof-of-stake use proof of stake (PoS) for validator management
--proxy-contract-owner string EVM address that controls ProxyAdmin for TransparentProxy of ValidatorManager contract
--reward-basis-points uint (PoS only) reward basis points for PoS Reward Calculator (default 100)
--sovereign set to false if creating non-sovereign blockchain (default true)
--teleporter interoperate with other blockchains using ICM
--test-defaults use default test settings for your blockchain
--validator-manager-owner string EVM address that controls Validator Manager Owner
--vm string file path of custom vm to use. alias to custom-vm-path
--vm-version string version of Subnet-EVM template to use
--warp generate a vm with warp support (needed for ICM) (default true)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### delete
The blockchain delete command deletes an existing blockchain configuration.
**Usage:**
```bash
avalanche blockchain delete [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for delete
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### deploy
The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet.
At the end of the call, the command prints the RPC URL you can use to interact with the Subnet.
Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent
attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't
allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call
avalanche network clean to reset all deployed chain state. Subsequent local deploys
redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks,
so you can take your locally tested Subnet and deploy it on Fuji or Mainnet.
**Usage:**
```bash
avalanche blockchain deploy [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string log level to use with signature aggregator (default "Off")
--avalanchego-path string use this avalanchego binary path
--avalanchego-version string use this version of avalanchego (ex: v1.17.12) (default "latest-prerelease")
--balance float set the AVAX balance of each bootstrap validator that will be used for continuous fee on P-Chain (default 0.1)
--blockchain-genesis-key use genesis allocated key to fund validator manager initialization
--blockchain-key string CLI stored key to use to fund validator manager initialization
--blockchain-private-key string private key to use to fund validator manager initialization
--bootstrap-endpoints strings take validator node info from the given endpoints
--bootstrap-filepath string JSON file path that provides details about bootstrap validators, leave Node-ID and BLS values empty if using --generate-node-id=true
--cchain-funding-key string key to be used to fund relayer account on cchain
--cchain-icm-key string key to be used to pay for ICM deploys on C-Chain
--change-owner-address string address that will receive change if node is no longer L1 validator
--cluster string operate on the given cluster
--control-keys strings addresses that may make subnet changes
--convert-only avoid node track, restart and poa manager setup
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet deploy only]
-f, --fuji testnet operate on fuji (alias to testnet)
--generate-node-id whether to create new node id for bootstrap validators (Node-ID and BLS values in bootstrap JSON file will be overridden if --bootstrap-filepath flag is used)
-h, --help help for deploy
--icm-key string key to be used to pay for ICM deploys (default "cli-teleporter-deployer")
--icm-version string ICM version to deploy (default "latest")
-k, --key string select the key to use [fuji/devnet deploy only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--mainnet-chain-id uint32 use different ChainID for mainnet deployment
--noicm skip automatic ICM deploy
--num-bootstrap-validators int (only if --generate-node-id is true) number of bootstrap validators to set up in the sovereign L1
--num-local-nodes int number of nodes to be created on local machine
--num-nodes uint32 number of nodes to be created on local network deploy (default 2)
--output-tx-path string file path of the blockchain creation tx
--partial-sync set primary network partial sync for new validators (default true)
--pos-maximum-stake-amount uint maximum stake amount (default 1000)
--pos-maximum-stake-multiplier uint8 maximum stake multiplier (default 1)
--pos-minimum-delegation-fee uint16 minimum delegation fee (default 1)
--pos-minimum-stake-amount uint minimum stake amount (default 1)
--pos-minimum-stake-duration uint minimum stake duration (default 100)
--pos-weight-to-value-factor uint weight to value factor (default 1)
--relay-cchain relay C-Chain as source and destination (default true)
--relayer-allow-private-ips allow relayer to connect to private IPs (default true)
--relayer-amount float automatically fund relayer fee payments with the given amount
--relayer-key string key to be used by default both for rewards and to pay fees
--relayer-log-level string log level to be used for relayer logs (default "info")
--relayer-path string relayer binary to use
--relayer-version string relayer version to deploy (default "latest-prerelease")
-s, --same-control-key use the fee-paying key as control key
--skip-icm-deploy skip automatic ICM deploy
--skip-local-teleporter skip automatic ICM deploy on local networks [to be deprecated]
--skip-relayer skip relayer deploy
--skip-teleporter-deploy skip automatic ICM deploy
--subnet-auth-keys strings control keys that will be used to authenticate chain creation
-u, --subnet-id string do not create a subnet, deploy the blockchain into the given subnet id
--subnet-only only create a subnet
--teleporter-messenger-contract-address-path string path to an ICM Messenger contract address file
--teleporter-messenger-deployer-address-path string path to an ICM Messenger deployer address file
--teleporter-messenger-deployer-tx-path string path to an ICM Messenger deployer tx file
--teleporter-registry-bytecode-path string path to an ICM Registry bytecode file
--teleporter-version string ICM version to deploy (default "latest")
-t, --testnet fuji operate on testnet (alias to fuji)
--threshold uint32 required number of control key signatures to make subnet changes
--use-local-machine use local machine as a blockchain validator
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
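The `--bootstrap-filepath` flag above expects a JSON file describing the bootstrap validators. A hypothetical sketch of such a file follows; the field names are assumptions inferred from the flag descriptions (Node-ID, BLS values, balance, change-owner address), not a confirmed schema, so verify the exact keys against your installed CLI version. Per the flag's note, leave the Node-ID and BLS values empty when using `--generate-node-id=true`.

```json
[
  {
    "NodeID": "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg",
    "Weight": 20,
    "Balance": 1000000000,
    "BLSPublicKey": "0x...",
    "BLSProofOfPossession": "0x...",
    "ChangeOwnerAddr": "P-custom1..."
  }
]
```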
### describe
The blockchain describe command prints the details of a Blockchain configuration to the console.
By default, the command prints a summary of the configuration. By providing the --genesis
flag, the command instead prints out the raw genesis file.
**Usage:**
```bash
avalanche blockchain describe [subcommand] [flags]
```
**Flags:**
```bash
-g, --genesis Print the genesis to the console directly instead of the summary
-h, --help help for describe
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### export
The blockchain export command writes the details of an existing Blockchain deploy to a file.
The command prompts for an output path. You can also provide one with
the --output flag.
**Usage:**
```bash
avalanche blockchain export [subcommand] [flags]
```
**Flags:**
```bash
--custom-vm-branch string custom vm branch
--custom-vm-build-script string custom vm build-script
--custom-vm-repo-url string custom vm repository url
-h, --help help for export
-o, --output string write the export data to the provided file path
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### import
Import blockchain configurations into avalanche-cli.
This command suite supports importing from a file created on another computer,
or importing from blockchains running public networks
(e.g. created manually or with the deprecated subnet-cli)
**Usage:**
```bash
avalanche blockchain import [subcommand] [flags]
```
**Subcommands:**
* [`file`](#avalanche-blockchain-import-file): The blockchain import command will import a blockchain configuration from a file or a git repository.
To import from a file, you can optionally provide the path as a command-line argument.
Alternatively, running the command without any arguments triggers an interactive wizard.
To import from a repository, go through the wizard. By default, an imported Blockchain doesn't
overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
* [`public`](#avalanche-blockchain-import-public): The blockchain import public command imports a Blockchain configuration from a running network.
By default, an imported Blockchain
doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
**Flags:**
```bash
-h, --help help for import
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### import file
The blockchain import command will import a blockchain configuration from a file or a git repository.
To import from a file, you can optionally provide the path as a command-line argument.
Alternatively, running the command without any arguments triggers an interactive wizard.
To import from a repository, go through the wizard. By default, an imported Blockchain doesn't
overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
**Usage:**
```bash
avalanche blockchain import file [subcommand] [flags]
```
**Flags:**
```bash
--branch string the repo branch to use if downloading a new repo
-f, --force overwrite the existing configuration if one exists
-h, --help help for file
--repo string the repo to import (ex: ava-labs/avalanche-plugins-core) or url to download the repo from
--subnet string the subnet configuration to import from the provided repo
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### import public
The blockchain import public command imports a Blockchain configuration from a running network.
By default, an imported Blockchain
doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
**Usage:**
```bash
avalanche blockchain import public [subcommand] [flags]
```
**Flags:**
```bash
--blockchain-id string the blockchain ID
--cluster string operate on the given cluster
--custom use a custom VM template
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--evm import a subnet-evm
--force overwrite the existing configuration if one exists
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for public
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-url string [optional] URL of an already running subnet validator
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### join
The blockchain join command configures your validator node to begin validating a new Blockchain.
To complete this process, you must have access to the machine running your validator. If the
CLI is running on the same machine as your validator, it can generate or update your node's
config file automatically. Alternatively, the command can print the necessary instructions
to update your node manually. To complete the validation process, the Subnet's admins must add
the NodeID of your validator to the Subnet's allow list by calling addValidator with your
NodeID.
After you update your validator's config, you need to restart your validator manually. If
you provide the --avalanchego-config flag, this command attempts to edit the config file
at that path.
This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
**Usage:**
```bash
avalanche blockchain join [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-config string file path of the avalanchego config file
--cluster string operate on the given cluster
--data-dir string path of avalanchego's data dir directory
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force-write if true, skip the prompt to overwrite the config file
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for join
-k, --key string select the key to use [fuji only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-id string set the NodeID of the validator to check
--plugin-dir string file path of avalanchego's plugin directory
--print if true, print the manual config without prompting
--stake-amount uint amount of tokens to stake on validator
--staking-period duration how long validator validates for after start time
--start-time string start time that validator starts validating
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### list
The blockchain list command prints the names of all created Blockchain configurations. Without any flags,
it prints some general, static information about the Blockchain. With the --deployed flag, the command
shows additional information including the VMID, BlockchainID and SubnetID.
**Usage:**
```bash
avalanche blockchain list [subcommand] [flags]
```
**Flags:**
```bash
--deployed show additional deploy information
-h, --help help for list
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
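For example, running the command with and without the documented `--deployed` flag:

```shell
# Print static info for all created configurations, then again with
# additional deploy information (VMID, BlockchainID, SubnetID)
avalanche blockchain list
avalanche blockchain list --deployed
```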
### publish
The blockchain publish command publishes the Blockchain's VM to a repository.
**Usage:**
```bash
avalanche blockchain publish [subcommand] [flags]
```
**Flags:**
```bash
--alias string We publish to a remote repo, but identify the repo locally under a user-provided alias (e.g. myrepo).
--force If true, ignores if the subnet has been published in the past, and attempts a forced publish.
-h, --help help for publish
--no-repo-path string Do not let the tool manage file publishing, but have it only generate the files and put them in the location given by this flag.
--repo-url string The URL of the repo where we are publishing
--subnet-file-path string Path to the Subnet description file. If not given, a prompting sequence will be initiated.
--vm-file-path string Path to the VM description file. If not given, a prompting sequence will be initiated.
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### removeValidator
The blockchain removeValidator command stops a whitelisted Subnet validator from
validating your deployed Blockchain.
To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass
these prompts by providing the values with flags.
**Usage:**
```bash
avalanche blockchain removeValidator [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string log level to use with signature aggregator (default "Off")
--blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's removal (blockchain gas token)
--blockchain-key string CLI stored key to use to pay fees for completing the validator's removal (blockchain gas token)
--blockchain-private-key string private key to use to pay fees for completing the validator's removal (blockchain gas token)
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force force validator removal even if it's not getting rewarded
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for removeValidator
-k, --key string select the key to use [fuji deploy only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-endpoint string remove validator that responds to the given endpoint
--node-id string node-id of the validator
--output-tx-path string (for non-SOV blockchain only) file path of the removeValidator tx
--rpc string connect to validator manager at the given rpc endpoint
--subnet-auth-keys strings (for non-SOV blockchain only) control keys that will be used to authenticate the removeValidator tx
-t, --testnet fuji operate on testnet (alias to fuji)
--uptime uint validator's uptime in seconds. If not provided, it will be automatically calculated
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
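A non-interactive invocation might look like this; the blockchain name and NodeID are hypothetical placeholders:

```shell
# Hypothetical sketch: remove a validator from a local deployment by NodeID,
# bypassing the interactive prompts
avalanche blockchain removeValidator myBlockchain --local \
  --node-id NodeID-...
```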
### stats
The blockchain stats command prints validator statistics for the given Blockchain.
**Usage:**
```bash
avalanche blockchain stats [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for stats
-l, --local operate on a local network
-m, --mainnet operate on mainnet
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
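For instance, to inspect a Fuji deployment (the blockchain name is a hypothetical placeholder):

```shell
# Hypothetical sketch: print validator statistics for a Fuji deployment
avalanche blockchain stats myBlockchain --fuji
```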
### upgrade
The blockchain upgrade command suite provides a collection of tools for
updating your in-development and deployed Blockchains.
**Usage:**
```bash
avalanche blockchain upgrade [subcommand] [flags]
```
**Subcommands:**
* [`apply`](#avalanche-blockchain-upgrade-apply): Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade.
For public networks (Fuji Testnet or Mainnet), to complete this process,
you must have access to the machine running your validator.
If the CLI is running on the same machine as your validator, it can manipulate your node's
configuration automatically. Alternatively, the command can print the necessary instructions
to upgrade your node manually.
After you update your validator's configuration, you need to restart your validator manually.
If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path.
Refer to [https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs](https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs) for related documentation.
* [`export`](#avalanche-blockchain-upgrade-export): Export the upgrade bytes file to a location of choice on disk
* [`generate`](#avalanche-blockchain-upgrade-generate): The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It
guides the user through the process using an interactive wizard.
* [`import`](#avalanche-blockchain-upgrade-import): Import the upgrade bytes file into the local environment
* [`print`](#avalanche-blockchain-upgrade-print): Print the upgrade.json file content
* [`vm`](#avalanche-blockchain-upgrade-vm): The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command
can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet.
The command walks the user through an interactive wizard. The user can skip the wizard by providing
command line flags.
**Flags:**
```bash
-h, --help help for upgrade
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade apply
Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade.
For public networks (Fuji Testnet or Mainnet), to complete this process,
you must have access to the machine running your validator.
If the CLI is running on the same machine as your validator, it can manipulate your node's
configuration automatically. Alternatively, the command can print the necessary instructions
to upgrade your node manually.
After you update your validator's configuration, you need to restart your validator manually.
If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path.
Refer to [https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs](https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs) for related documentation.
**Usage:**
```bash
avalanche blockchain upgrade apply [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-chain-config-dir string avalanchego's chain config file directory (default "$HOME/.avalanchego/chains")
--config create upgrade config for future subnet deployments (same as generate)
--force If true, don't prompt for confirmation of timestamps in the past
--fuji fuji apply upgrade existing fuji deployment (alias for `testnet`)
-h, --help help for apply
--local local apply upgrade existing local deployment
--mainnet mainnet apply upgrade existing mainnet deployment
--print if true, print the manual config without prompting (for public networks only)
--testnet testnet apply upgrade existing testnet deployment (alias for `fuji`)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
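Two typical invocations, sketched with a hypothetical blockchain name: applying the upgrade to a local deployment directly, or printing manual instructions for a public-network validator as described above:

```shell
# Hypothetical sketch: apply previously generated upgrade bytes locally
avalanche blockchain upgrade apply myBlockchain --local

# Print the manual upgrade instructions for a Fuji validator instead
avalanche blockchain upgrade apply myBlockchain --fuji --print
```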
#### upgrade export
Export the upgrade bytes file to a location of choice on disk
**Usage:**
```bash
avalanche blockchain upgrade export [subcommand] [flags]
```
**Flags:**
```bash
--force If true, overwrite a possibly existing file without prompting
-h, --help help for export
--upgrade-filepath string Export upgrade bytes file to location of choice on disk
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade generate
The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It
guides the user through the process using an interactive wizard.
**Usage:**
```bash
avalanche blockchain upgrade generate [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for generate
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade import
Import the upgrade bytes file into the local environment
**Usage:**
```bash
avalanche blockchain upgrade import [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for import
--upgrade-filepath string Import upgrade bytes file into local environment
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade print
Print the upgrade.json file content
**Usage:**
```bash
avalanche blockchain upgrade print [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for print
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade vm
The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command
can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet.
The command walks the user through an interactive wizard. The user can skip the wizard by providing
command line flags.
**Usage:**
```bash
avalanche blockchain upgrade vm [subcommand] [flags]
```
**Flags:**
```bash
--binary string Upgrade to custom binary
--config upgrade config for future subnet deployments
--fuji fuji upgrade existing fuji deployment (alias for `testnet`)
-h, --help help for vm
--latest upgrade to latest version
--local local upgrade existing local deployment
--mainnet mainnet upgrade existing mainnet deployment
--plugin-dir string plugin directory to automatically upgrade VM
--print print instructions for upgrading
--testnet testnet upgrade existing testnet deployment (alias for `fuji`)
--version string Upgrade to custom version
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
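The wizard can be skipped with flags, as noted above. A hypothetical local upgrade to the latest released VM version:

```shell
# Hypothetical sketch: bump a local deployment's VM binary to the latest
# release without going through the interactive wizard
avalanche blockchain upgrade vm myBlockchain --local --latest
```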
### validators
The blockchain validators command lists the validators of a blockchain's subnet and provides
several statistics about them.
**Usage:**
```bash
avalanche blockchain validators [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for validators
-l, --local operate on a local network
-m, --mainnet operate on mainnet
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### vmid
The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain.
**Usage:**
```bash
avalanche blockchain vmid [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for vmid
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
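For example (the blockchain name is a hypothetical placeholder):

```shell
# Hypothetical sketch: print the VMID for a created Blockchain configuration
avalanche blockchain vmid myBlockchain
```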
## avalanche config
Customize configuration for Avalanche-CLI
**Usage:**
```bash
avalanche config [subcommand] [flags]
```
**Subcommands:**
* [`authorize-cloud-access`](#avalanche-config-authorize-cloud-access): set preferences to authorize access to cloud resources
* [`metrics`](#avalanche-config-metrics): set user metrics collection preferences
* [`migrate`](#avalanche-config-migrate): migrate command migrates old \~/.avalanche-cli.json and \~/.avalanche-cli/config to \~/.avalanche-cli/config.json.
* [`snapshotsAutoSave`](#avalanche-config-snapshotsautosave): set user preference between auto saving local network snapshots or not
* [`update`](#avalanche-config-update): set user preference between update check or not
**Flags:**
```bash
-h, --help help for config
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### authorize-cloud-access
set preferences to authorize access to cloud resources
**Usage:**
```bash
avalanche config authorize-cloud-access [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for authorize-cloud-access
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### metrics
set user metrics collection preferences
**Usage:**
```bash
avalanche config metrics [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for metrics
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### migrate
migrate command migrates old \~/.avalanche-cli.json and \~/.avalanche-cli/config to \~/.avalanche-cli/config.json.
**Usage:**
```bash
avalanche config migrate [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for migrate
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### snapshotsAutoSave
set user preference between auto saving local network snapshots or not
**Usage:**
```bash
avalanche config snapshotsAutoSave [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for snapshotsAutoSave
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### update
set user preference between update check or not
**Usage:**
```bash
avalanche config update [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for update
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche contract
The contract command suite provides a collection of tools for deploying
and interacting with smart contracts.
**Usage:**
```bash
avalanche contract [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-contract-deploy): The contract command suite provides a collection of tools for deploying
smart contracts.
* [`initValidatorManager`](#avalanche-contract-initvalidatormanager): Initializes a Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up the initial validator set on the Blockchain. For more info on the Validator Manager, please head to [https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager](https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager)
**Flags:**
```bash
-h, --help help for contract
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### deploy
The contract command suite provides a collection of tools for deploying
smart contracts.
**Usage:**
```bash
avalanche contract deploy [subcommand] [flags]
```
**Subcommands:**
* [`erc20`](#avalanche-contract-deploy-erc20): Deploy an ERC20 token into a given Network and Blockchain
**Flags:**
```bash
-h, --help help for deploy
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### deploy erc20
Deploy an ERC20 token into a given Network and Blockchain
**Usage:**
```bash
avalanche contract deploy erc20 [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string deploy the ERC20 contract into the given CLI blockchain
--blockchain-id string deploy the ERC20 contract into the given blockchain ID/Alias
--c-chain deploy the ERC20 contract into C-Chain
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--funded string set the funded address
--genesis-key use genesis allocated key as contract deployer
-h, --help help for erc20
--key string CLI stored key to use as contract deployer
-l, --local operate on a local network
--private-key string private key to use as contract deployer
--rpc string deploy the contract into the given rpc endpoint
--supply uint set the token supply
--symbol string set the token symbol
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
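A hypothetical local deployment, using only flags documented above (key name, symbol, supply, and funded address are placeholders):

```shell
# Hypothetical sketch: deploy a test ERC20 to the local C-Chain, funding the
# given address with the initial supply
avalanche contract deploy erc20 --local --c-chain \
  --key myKey --symbol TOK --supply 1000000 --funded 0xYourAddress
```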
### initValidatorManager
Initializes a Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up the initial validator set on the Blockchain. For more info on the Validator Manager, please head to [https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager](https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager)
**Usage:**
```bash
avalanche contract initValidatorManager [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string log level to use with signature aggregator (default "Off")
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key as contract deployer
-h, --help help for initValidatorManager
--key string CLI stored key to use as contract deployer
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--pos-maximum-stake-amount uint (PoS only) maximum stake amount (default 1000)
--pos-maximum-stake-multiplier uint8 (PoS only) maximum stake multiplier (default 1)
--pos-minimum-delegation-fee uint16 (PoS only) minimum delegation fee (default 1)
--pos-minimum-stake-amount uint (PoS only) minimum stake amount (default 1)
--pos-minimum-stake-duration uint (PoS only) minimum stake duration (default 100)
--pos-reward-calculator-address string (PoS only) initialize the ValidatorManager with reward calculator address
--pos-weight-to-value-factor uint (PoS only) weight to value factor (default 1)
--private-key string private key to use as contract deployer
--rpc string deploy the contract into the given rpc endpoint
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
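A hypothetical PoA initialization on a local deployment, paying fees with the genesis-allocated key (the blockchain name is a placeholder, and the positional argument may vary by CLI version):

```shell
# Hypothetical sketch: initialize the Validator Manager contract on a local
# blockchain using the genesis-allocated key as deployer
avalanche contract initValidatorManager myBlockchain --local --genesis-key
```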
## avalanche help
Help provides help for any command in the application.
Simply type `avalanche help [path to command]` for full details.
**Usage:**
```bash
avalanche help [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for help
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche icm
The messenger command suite provides a collection of tools for interacting
with ICM messenger contracts.
**Usage:**
```bash
avalanche icm [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-icm-deploy): Deploys ICM Messenger and Registry into a given L1.
* [`sendMsg`](#avalanche-icm-sendmsg): Sends an ICM message between two subnets and waits for its reception.
**Flags:**
```bash
-h, --help help for icm
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### deploy
Deploys ICM Messenger and Registry into a given L1.
**Usage:**
```bash
avalanche icm deploy [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string deploy ICM into the given CLI blockchain
--blockchain-id string deploy ICM into the given blockchain ID/Alias
--c-chain deploy ICM into C-Chain
--cchain-key string key to be used to pay fees to deploy ICM to C-Chain
--cluster string operate on the given cluster
--deploy-messenger deploy ICM Messenger (default true)
--deploy-registry deploy ICM Registry (default true)
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force-registry-deploy deploy ICM Registry even if Messenger has already been deployed
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key to fund ICM deploy
-h, --help help for deploy
--include-cchain deploy ICM also to C-Chain
--key string CLI stored key to use to fund ICM deploy
-l, --local operate on a local network
--messenger-contract-address-path string path to a messenger contract address file
--messenger-deployer-address-path string path to a messenger deployer address file
--messenger-deployer-tx-path string path to a messenger deployer tx file
--private-key string private key to use to fund ICM deploy
--registry-bytecode-path string path to a registry bytecode file
--rpc-url string use the given RPC URL to connect to the subnet
-t, --testnet fuji operate on testnet (alias to fuji)
--version string version to deploy (default "latest")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
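A hypothetical invocation, using only documented flags (the blockchain name is a placeholder):

```shell
# Hypothetical sketch: deploy the ICM Messenger and Registry to a local CLI
# blockchain, and also to the local C-Chain
avalanche icm deploy --blockchain myBlockchain --local --include-cchain
```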
### sendMsg
Sends an ICM message between two subnets and waits for its reception.
**Usage:**
```bash
avalanche icm sendMsg [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--dest-rpc string use the given destination blockchain rpc endpoint
--destination-address string deliver the message to the given contract destination address
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key as message originator and to pay source blockchain fees
-h, --help help for sendMsg
--hex-encoded given message is hex encoded
--key string CLI stored key to use as message originator and to pay source blockchain fees
-l, --local operate on a local network
--private-key string private key to use as message originator and to pay source blockchain fees
--source-rpc string use the given source blockchain rpc endpoint
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
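A hypothetical local send, sketched with placeholder RPC endpoints and key name; the trailing positional message argument is an assumption about the command's shape:

```shell
# Hypothetical sketch: send a test message between two local blockchains
avalanche icm sendMsg --local --key myKey \
  --source-rpc http://127.0.0.1:9650/ext/bc/chainA/rpc \
  --dest-rpc http://127.0.0.1:9650/ext/bc/chainB/rpc \
  "hello from chain A"
```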
## avalanche ictt
The ictt command suite provides tools to deploy and manage Interchain Token Transferrers.
**Usage:**
```bash
avalanche ictt [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-ictt-deploy): Deploys a Token Transferrer into a given Network and Subnets
**Flags:**
```bash
-h, --help help for ictt
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### deploy
Deploys a Token Transferrer into a given Network and Subnets
**Usage:**
```bash
avalanche ictt deploy [subcommand] [flags]
```
**Flags:**
```bash
--c-chain-home set the Transferrer's Home Chain into C-Chain
--c-chain-remote set the Transferrer's Remote Chain into C-Chain
--cluster string operate on the given cluster
--deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token
--deploy-native-home deploy a Transferrer Home for the Chain's Native Token
--deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet
-h, --help help for deploy
--home-blockchain string set the Transferrer's Home Chain into the given CLI blockchain
--home-genesis-key use genesis allocated key to deploy Transferrer Home
--home-key string CLI stored key to use to deploy Transferrer Home
--home-private-key string private key to use to deploy Transferrer Home
--home-rpc string use the given RPC URL to connect to the home blockchain
-l, --local operate on a local network
--remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain
--remote-genesis-key use genesis allocated key to deploy Transferrer Remote
--remote-key string CLI stored key to use to deploy Transferrer Remote
--remote-private-key string private key to use to deploy Transferrer Remote
--remote-rpc string use the given RPC URL to connect to the remote blockchain
--remote-token-decimals uint8 use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)]
--remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis
-t, --testnet fuji operate on testnet (alias to fuji)
--use-home string use the given Transferrer's Home Address
--version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
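A hypothetical flag combination, based only on the flags documented above: bridging the C-Chain's native token to a CLI-created blockchain on a local network (the blockchain name is a placeholder):

```shell
# Hypothetical sketch: deploy a Transferrer Home for the C-Chain's native
# token and a matching Remote on a CLI blockchain
avalanche ictt deploy --local \
  --c-chain-home --deploy-native-home \
  --remote-blockchain myBlockchain --deploy-native-remote
```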
## avalanche interchain
The interchain command suite provides a collection of tools to
set and manage interoperability between blockchains.
**Usage:**
```bash
avalanche interchain [subcommand] [flags]
```
**Subcommands:**
* [`messenger`](#avalanche-interchain-messenger): The messenger command suite provides a collection of tools for interacting
with ICM messenger contracts.
* [`relayer`](#avalanche-interchain-relayer): The relayer command suite provides a collection of tools for deploying
and configuring ICM relayers.
* [`tokenTransferrer`](#avalanche-interchain-tokentransferrer): The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers.
**Flags:**
```bash
-h, --help help for interchain
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### messenger
The messenger command suite provides a collection of tools for interacting
with ICM messenger contracts.
**Usage:**
```bash
avalanche interchain messenger [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-interchain-messenger-deploy): Deploys ICM Messenger and Registry into a given L1.
* [`sendMsg`](#avalanche-interchain-messenger-sendmsg): Sends an ICM message between two subnets and waits for its reception.
**Flags:**
```bash
-h, --help help for messenger
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### messenger deploy
Deploys ICM Messenger and Registry into a given L1.
**Usage:**
```bash
avalanche interchain messenger deploy [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string deploy ICM into the given CLI blockchain
--blockchain-id string deploy ICM into the given blockchain ID/Alias
--c-chain deploy ICM into C-Chain
--cchain-key string key to be used to pay fees to deploy ICM to C-Chain
--cluster string operate on the given cluster
--deploy-messenger deploy ICM Messenger (default true)
--deploy-registry deploy ICM Registry (default true)
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force-registry-deploy deploy ICM Registry even if Messenger has already been deployed
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key to fund ICM deploy
-h, --help help for deploy
--include-cchain deploy ICM also to C-Chain
--key string CLI stored key to use to fund ICM deploy
-l, --local operate on a local network
--messenger-contract-address-path string path to a messenger contract address file
--messenger-deployer-address-path string path to a messenger deployer address file
--messenger-deployer-tx-path string path to a messenger deployer tx file
--private-key string private key to use to fund ICM deploy
--registry-bytecode-path string path to a registry bytecode file
--rpc-url string use the given RPC URL to connect to the subnet
-t, --testnet fuji operate on testnet (alias to fuji)
--version string version to deploy (default "latest")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### messenger sendMsg
Sends an ICM message between two subnets and waits for its reception.
**Usage:**
```bash
avalanche interchain messenger sendMsg [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--dest-rpc string use the given destination blockchain rpc endpoint
--destination-address string deliver the message to the given contract destination address
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key as message originator and to pay source blockchain fees
-h, --help help for sendMsg
--hex-encoded given message is hex encoded
--key string CLI stored key to use as message originator and to pay source blockchain fees
-l, --local operate on a local network
--private-key string private key to use as message originator and to pay source blockchain fees
--source-rpc string use the given source blockchain rpc endpoint
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### relayer
The relayer command suite provides a collection of tools for deploying
and configuring ICM relayers.
**Usage:**
```bash
avalanche interchain relayer [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-interchain-relayer-deploy): Deploys an ICM Relayer for the given Network.
* [`logs`](#avalanche-interchain-relayer-logs): Shows pretty formatted AWM relayer logs
* [`start`](#avalanche-interchain-relayer-start): Starts AWM relayer on the specified network (Currently only for local network).
* [`stop`](#avalanche-interchain-relayer-stop): Stops AWM relayer on the specified network (currently only for local network and cluster).
**Flags:**
```bash
-h, --help help for relayer
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### relayer deploy
Deploys an ICM Relayer for the given Network.
**Usage:**
```bash
avalanche interchain relayer deploy [subcommand] [flags]
```
**Flags:**
```bash
--allow-private-ips allow relayer to connect to private IPs (default true)
--amount float automatically fund l1s fee payments with the given amount
--bin-path string use the given relayer binary
--blockchain-funding-key string key to be used to fund relayer account on all l1s
--blockchains strings blockchains to relay as source and destination
--cchain relay C-Chain as source and destination
--cchain-amount float automatically fund cchain fee payments with the given amount
--cchain-funding-key string key to be used to fund relayer account on cchain
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for deploy
--key string key to be used by default both for rewards and to pay fees
-l, --local operate on a local network
--log-level string log level to use for relayer logs
-t, --testnet fuji operate on testnet (alias to fuji)
--version string version to deploy (default "latest-prerelease")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--skip-update-check skip check for new versions
```
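For instance, deploying a relayer on a local network and auto-funding its fee accounts could look like the sketch below; the blockchain name, funding key, and amount are placeholders:

```bash
# Hypothetical example: relay between the C-Chain and a CLI-managed
# blockchain, auto-funding the relayer's fee account on each chain.
avalanche interchain relayer deploy \
  --local \
  --cchain \
  --blockchains myBlockchain \
  --blockchain-funding-key myFundingKey \
  --amount 1.0
```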
#### relayer logs
Shows pretty formatted AWM relayer logs
**Usage:**
```bash
avalanche interchain relayer logs [subcommand] [flags]
```
**Flags:**
```bash
--endpoint string use the given endpoint for network operations
--first uint output first N log lines
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for logs
--last uint output last N log lines
-l, --local operate on a local network
--raw raw logs output
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### relayer start
Starts AWM relayer on the specified network (Currently only for local network).
**Usage:**
```bash
avalanche interchain relayer start [subcommand] [flags]
```
**Flags:**
```bash
--bin-path string use the given relayer binary
--cluster string operate on the given cluster
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for start
-l, --local operate on a local network
-t, --testnet fuji operate on testnet (alias to fuji)
--version string version to use (default "latest-prerelease")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### relayer stop
Stops AWM relayer on the specified network (currently only for local network and cluster).
**Usage:**
```bash
avalanche interchain relayer stop [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for stop
-l, --local operate on a local network
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### tokenTransferrer
The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers.
**Usage:**
```bash
avalanche interchain tokenTransferrer [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-interchain-tokentransferrer-deploy): Deploys a Token Transferrer into a given Network and Subnets
**Flags:**
```bash
-h, --help help for tokenTransferrer
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### tokenTransferrer deploy
Deploys a Token Transferrer into a given Network and Subnets
**Usage:**
```bash
avalanche interchain tokenTransferrer deploy [subcommand] [flags]
```
**Flags:**
```bash
--c-chain-home set the Transferrer's Home Chain into C-Chain
--c-chain-remote set the Transferrer's Remote Chain into C-Chain
--cluster string operate on the given cluster
--deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token
--deploy-native-home deploy a Transferrer Home for the Chain's Native Token
--deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for deploy
--home-blockchain string set the Transferrer's Home Chain into the given CLI blockchain
--home-genesis-key use genesis allocated key to deploy Transferrer Home
--home-key string CLI stored key to use to deploy Transferrer Home
--home-private-key string private key to use to deploy Transferrer Home
--home-rpc string use the given RPC URL to connect to the home blockchain
-l, --local operate on a local network
--remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain
--remote-genesis-key use genesis allocated key to deploy Transferrer Remote
--remote-key string CLI stored key to use to deploy Transferrer Remote
--remote-private-key string private key to use to deploy Transferrer Remote
--remote-rpc string use the given RPC URL to connect to the remote blockchain
--remote-token-decimals uint8 use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)]
--remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis
-t, --testnet fuji operate on testnet (alias to fuji)
--use-home string use the given Transferrer's Home Address
--version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
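As an illustration, bridging an existing ERC-20 token from a CLI-managed blockchain to the C-Chain might look like the following sketch; the blockchain name, key name, and token address are placeholders:

```bash
# Hypothetical example: deploy the Transferrer Home on myBlockchain for an
# existing ERC-20 token, and the Transferrer Remote on the C-Chain,
# using the same stored key on both sides.
avalanche interchain tokenTransferrer deploy \
  --local \
  --home-blockchain myBlockchain \
  --deploy-erc20-home 0x0000000000000000000000000000000000005678 \
  --c-chain-remote \
  --home-key myKey \
  --remote-key myKey
```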
## avalanche key
The key command suite provides a collection of tools for creating and managing
signing keys. You can use these keys to deploy Subnets to the Fuji Testnet,
but these keys are NOT suitable to use in production environments. DO NOT use
these keys on Mainnet.
To get started, use the key create command.
**Usage:**
```bash
avalanche key [subcommand] [flags]
```
**Subcommands:**
* [`create`](#avalanche-key-create): The key create command generates a new private key to use for creating and controlling
test Subnets. Keys generated by this command are NOT cryptographically secure enough to
use in production environments. DO NOT use these keys on Mainnet.
The command works by generating a secp256k1 key and storing it with the provided keyName. You
can use this key in other commands by providing this keyName.
If you'd like to import an existing key instead of generating one from scratch, provide the
\--file flag.
* [`delete`](#avalanche-key-delete): The key delete command deletes an existing signing key.
To delete a key, provide the keyName. The command prompts for confirmation
before deleting the key. To skip the confirmation, provide the --force flag.
* [`export`](#avalanche-key-export): The key export command exports a created signing key. You can use an exported key in other
applications or import it into another instance of Avalanche-CLI.
By default, the tool writes the hex encoded key to stdout. If you provide the --output
flag, the command writes the key to a file of your choosing.
* [`list`](#avalanche-key-list): The key list command prints information for all stored signing
keys or for the ledger addresses associated to certain indices.
* [`transfer`](#avalanche-key-transfer): The key transfer command allows you to transfer funds between stored keys or ledger addresses.
**Flags:**
```bash
-h, --help help for key
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### create
The key create command generates a new private key to use for creating and controlling
test Subnets. Keys generated by this command are NOT cryptographically secure enough to
use in production environments. DO NOT use these keys on Mainnet.
The command works by generating a secp256k1 key and storing it with the provided keyName. You
can use this key in other commands by providing this keyName.
If you'd like to import an existing key instead of generating one from scratch, provide the
\--file flag.
**Usage:**
```bash
avalanche key create [subcommand] [flags]
```
**Flags:**
```bash
--file string import the key from an existing key file
-f, --force overwrite an existing key with the same name
-h, --help help for create
--skip-balances do not query public network balances for an imported key
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
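For example, generating a fresh test key and importing an existing one could look like this; the key name and file path are placeholders:

```bash
# Generate a new test key stored under the name mytestkey
avalanche key create mytestkey

# Import an existing key file instead, overwriting any key of the same name
avalanche key create mytestkey --file ./mytestkey.pk --force
```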
### delete
The key delete command deletes an existing signing key.
To delete a key, provide the keyName. The command prompts for confirmation
before deleting the key. To skip the confirmation, provide the --force flag.
**Usage:**
```bash
avalanche key delete [subcommand] [flags]
```
**Flags:**
```bash
-f, --force delete the key without confirmation
-h, --help help for delete
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### export
The key export command exports a created signing key. You can use an exported key in other
applications or import it into another instance of Avalanche-CLI.
By default, the tool writes the hex encoded key to stdout. If you provide the --output
flag, the command writes the key to a file of your choosing.
**Usage:**
```bash
avalanche key export [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for export
-o, --output string write the key to the provided file path
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### list
The key list command prints information for all stored signing
keys or for the ledger addresses associated to certain indices.
**Usage:**
```bash
avalanche key list [subcommand] [flags]
```
**Flags:**
```bash
-a, --all-networks list all network addresses
--blockchains strings blockchains to show information about (p=p-chain, x=x-chain, c=c-chain, and blockchain names) (default p,x,c)
-c, --cchain list C-Chain addresses (default true)
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for list
--keys strings list addresses for the given keys
-g, --ledger uints list ledger addresses for the given indices (default [])
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--pchain list P-Chain addresses (default true)
--subnets strings subnets to show information about (p=p-chain, x=x-chain, c=c-chain, and subnet names) (default p,x,c)
-t, --testnet fuji operate on testnet (alias to fuji)
--tokens strings provide balance information for the given token contract addresses (EVM only) (default [Native])
--use-gwei use gwei for EVM balances
-n, --use-nano-avax use nano Avax for balances
--xchain list X-Chain addresses (default true)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
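A possible invocation, shown as a sketch with a placeholder key name, restricts the listing to one stored key on Fuji and reports balances in nAVAX:

```bash
# Hypothetical example: show Fuji addresses and balances (in nAVAX)
# for a single stored key.
avalanche key list --fuji --keys mytestkey --use-nano-avax
```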
### transfer
The key transfer command allows you to transfer funds between stored keys or ledger addresses.
**Usage:**
```bash
avalanche key transfer [subcommand] [flags]
```
**Flags:**
```bash
-o, --amount float amount to send or receive (AVAX or TOKEN units)
--c-chain-receiver receive at C-Chain
--c-chain-sender send from C-Chain
--cluster string operate on the given cluster
-a, --destination-addr string destination address
--destination-key string key associated to a destination address
--destination-subnet string subnet where the funds will be sent (token transferrer experimental)
--destination-transferrer-address string token transferrer address at the destination subnet (token transferrer experimental)
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for transfer
-k, --key string key associated to the sender or receiver address
-i, --ledger uint32 ledger index associated to the sender or receiver address (default 32768)
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--origin-subnet string subnet where the funds belong (token transferrer experimental)
--origin-transferrer-address string token transferrer address at the origin subnet (token transferrer experimental)
--p-chain-receiver receive at P-Chain
--p-chain-sender send from P-Chain
--receiver-blockchain string receive at the given CLI blockchain
--receiver-blockchain-id string receive at the given blockchain ID/Alias
--sender-blockchain string send from the given CLI blockchain
--sender-blockchain-id string send from the given blockchain ID/Alias
-t, --testnet fuji operate on testnet (alias to fuji)
--x-chain-receiver receive at X-Chain
--x-chain-sender send from X-Chain
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
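As a hedged sketch, a P-Chain-to-P-Chain transfer between two stored keys on Fuji might look like the following; the key names and amount are placeholders:

```bash
# Hypothetical example: send 1.5 AVAX between two stored keys
# on the Fuji P-Chain.
avalanche key transfer \
  --fuji \
  --p-chain-sender \
  --p-chain-receiver \
  --key senderKey \
  --destination-key receiverKey \
  --amount 1.5
```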
## avalanche network
The network command suite provides a collection of tools for managing local Subnet
deployments.
When you deploy a Subnet locally, it runs on a local, multi-node Avalanche network. The
subnet deploy command starts this network in the background. This command suite allows you
to shutdown, restart, and clear that network.
This network currently supports multiple, concurrently deployed Subnets.
**Usage:**
```bash
avalanche network [subcommand] [flags]
```
**Subcommands:**
* [`clean`](#avalanche-network-clean): The network clean command shuts down your local, multi-node network. All deployed Subnets
shut down and delete their state. You can restart the network by deploying a new Subnet
configuration.
* [`start`](#avalanche-network-start): The network start command starts a local, multi-node Avalanche network on your machine.
By default, the command loads the default snapshot. If you provide the --snapshot-name
flag, the network loads that snapshot instead. The command fails if the local network is
already running.
* [`status`](#avalanche-network-status): The network status command prints whether or not a local Avalanche
network is running and some basic stats about the network.
* [`stop`](#avalanche-network-stop): The network stop command shuts down your local, multi-node network.
All deployed Subnets shut down gracefully and save their state. If you provide the
\--snapshot-name flag, the network saves its state under this named snapshot. You can
reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the
network saves to the default snapshot, overwriting any existing state. You can reload the
default snapshot with network start.
**Flags:**
```bash
-h, --help help for network
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### clean
The network clean command shuts down your local, multi-node network. All deployed Subnets
shut down and delete their state. You can restart the network by deploying a new Subnet
configuration.
**Usage:**
```bash
avalanche network clean [subcommand] [flags]
```
**Flags:**
```bash
--hard Also clean downloaded avalanchego and plugin binaries
-h, --help help for clean
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### start
The network start command starts a local, multi-node Avalanche network on your machine.
By default, the command loads the default snapshot. If you provide the --snapshot-name
flag, the network loads that snapshot instead. The command fails if the local network is
already running.
**Usage:**
```bash
avalanche network start [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-path string use this avalanchego binary path
--avalanchego-version string use this version of avalanchego (ex: v1.17.12) (default "latest-prerelease")
-h, --help help for start
--num-nodes uint32 number of nodes to be created on local network (default 2)
--relayer-path string use this relayer binary path
--relayer-version string use this relayer version (default "latest-prerelease")
--snapshot-name string name of snapshot to use to start the network from (default "default-1654102509")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
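For instance, starting a larger local network from a previously saved snapshot could look like this sketch; the node count and snapshot name are placeholders:

```bash
# Start a 5-node local network from a previously saved snapshot.
avalanche network start --num-nodes 5 --snapshot-name my-snapshot
```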
### status
The network status command prints whether or not a local Avalanche
network is running and some basic stats about the network.
**Usage:**
```bash
avalanche network status [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for status
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### stop
The network stop command shuts down your local, multi-node network.
All deployed Subnets shut down gracefully and save their state. If you provide the
\--snapshot-name flag, the network saves its state under this named snapshot. You can
reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the
network saves to the default snapshot, overwriting any existing state. You can reload the
default snapshot with network start.
**Usage:**
```bash
avalanche network stop [subcommand] [flags]
```
**Flags:**
```bash
--dont-save do not save snapshot, just stop the network
-h, --help help for stop
--snapshot-name string name of snapshot to use to save network state into (default "default-1654102509")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
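The two stop modes described above can be sketched as follows; the snapshot name is a placeholder:

```bash
# Save the current network state under a named snapshot and stop the network.
avalanche network stop --snapshot-name my-snapshot

# Or stop the network without saving any state.
avalanche network stop --dont-save
```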
## avalanche node
The node command suite provides a collection of tools for creating and maintaining
validators on the Avalanche Network.
To get started, use the node create command wizard to walk through the
configuration to make your node a primary validator on the Avalanche public network. You can use the
rest of the commands to maintain your node and make your node a Subnet Validator.
**Usage:**
```bash
avalanche node [subcommand] [flags]
```
**Subcommands:**
* [`addDashboard`](#avalanche-node-adddashboard): (ALPHA Warning) This command is currently in experimental mode.
The node addDashboard command adds a custom dashboard to the Grafana monitoring dashboard for the
cluster.
* [`create`](#avalanche-node-create): (ALPHA Warning) This command is currently in experimental mode.
The node create command sets up a validator on a cloud server of your choice.
The validator will be validating the Avalanche Primary Network and Subnet
of your choice. By default, the command runs an interactive wizard. It
walks you through all the steps you need to set up a validator.
Once this command is completed, you will have to wait for the validator
to finish bootstrapping on the primary network before running further
commands on it, e.g. validating a Subnet. You can check the bootstrapping
status by running avalanche node status
The created node will be part of a group of validators called `clusterName`,
and users can call node commands with `clusterName` so that the command
applies to all nodes in the cluster.
* [`destroy`](#avalanche-node-destroy): (ALPHA Warning) This command is currently in experimental mode.
The node destroy command terminates all running nodes in the cloud server and deletes all storage disks.
If there is a static IP address attached, it will be released.
* [`devnet`](#avalanche-node-devnet): (ALPHA Warning) This command is currently in experimental mode.
The node devnet command suite provides a collection of commands related to devnets.
You can check the updated status by calling avalanche node status `clusterName`
* [`export`](#avalanche-node-export): (ALPHA Warning) This command is currently in experimental mode.
The node export command exports cluster configuration and its nodes config to a text file.
If no file is specified, the configuration is printed to stdout.
Use --include-secrets to include keys in the export. In this case please keep the file secure as it contains sensitive information.
Exported cluster configuration without secrets can be imported by another user using node import command.
* [`import`](#avalanche-node-import): (ALPHA Warning) This command is currently in experimental mode.
The node import command imports cluster configuration and its nodes configuration from a text file
created from the node export command.
Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by
the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster.
Please note that this imported cluster will be considered EXTERNAL by avalanche-cli, so some commands
affecting cloud nodes, like node create or node destroy, will not be applicable to it.
* [`list`](#avalanche-node-list): (ALPHA Warning) This command is currently in experimental mode.
The node list command lists all clusters together with their nodes.
* [`loadtest`](#avalanche-node-loadtest): (ALPHA Warning) This command is currently in experimental mode.
The node loadtest command suite starts and stops a load test for an existing devnet cluster.
* [`local`](#avalanche-node-local): (ALPHA Warning) This command is currently in experimental mode.
The node local command suite provides a collection of commands related to local nodes
* [`refresh-ips`](#avalanche-node-refresh-ips): (ALPHA Warning) This command is currently in experimental mode.
The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster,
and updates the local node information used by CLI commands.
* [`resize`](#avalanche-node-resize): (ALPHA Warning) This command is currently in experimental mode.
The node resize command can change the amount of CPU, memory and disk space available for the cluster nodes.
* [`scp`](#avalanche-node-scp): (ALPHA Warning) This command is currently in experimental mode.
The node scp command securely copies files to and from nodes. A remote source or destination can be specified using the following format:
\[clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for source files, for example /tmp/\*.txt.
File transfers to the nodes are parallelized. If the source or destination is a cluster, the other should be a local file path.
If both source and destination are remote, they must be nodes in the same cluster, not clusters themselves.
For example:
$ avalanche node scp \[cluster1|node1]:/tmp/file.txt /tmp/file.txt
$ avalanche node scp /tmp/file.txt \[cluster1|NodeID-XXXX]:/tmp/file.txt
$ avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt
* [`ssh`](#avalanche-node-ssh): (ALPHA Warning) This command is currently in experimental mode.
The node ssh command executes a given command \[cmd] using ssh on all nodes in the cluster if a clusterName is given.
If no command is given, it just prints the ssh command to be used to connect to each node in the cluster.
For a provided NodeID, InstanceID, or IP, the command \[cmd] will be executed on that node.
If no \[cmd] is provided for the node, an ssh shell will be opened there.
* [`status`](#avalanche-node-status): (ALPHA Warning) This command is currently in experimental mode.
The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network.
If no cluster is given, defaults to node list behaviour.
To get the bootstrap status of a node with a Blockchain, use the --blockchain flag.
* [`sync`](#avalanche-node-sync): (ALPHA Warning) This command is currently in experimental mode.
The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain.
You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName`
* [`update`](#avalanche-node-update): (ALPHA Warning) This command is currently in experimental mode.
The node update command suite provides a collection of commands for nodes to update
their avalanchego or VM config.
You can check the status after update by calling avalanche node status
* [`upgrade`](#avalanche-node-upgrade): (ALPHA Warning) This command is currently in experimental mode.
The node upgrade command suite provides a collection of commands for nodes to upgrade
their avalanchego or VM version.
You can check the status after upgrade by calling avalanche node status
* [`validate`](#avalanche-node-validate): (ALPHA Warning) This command is currently in experimental mode.
The node validate command suite provides a collection of commands for nodes to join
the Primary Network and Subnets as validators.
If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command
will fail. You can check the bootstrap status by calling avalanche node status `clusterName`
* [`whitelist`](#avalanche-node-whitelist): (ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster.
The command adds an IP to the cloud security access rules if the --ip param is provided, allowing it to access all nodes in the cluster via ssh or http.
It also adds an SSH public key to all nodes in the cluster if the --ssh param is provided.
If no params are provided, it detects the current user's IP automatically and whitelists it.
**Flags:**
```bash
-h, --help help for node
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### addDashboard
(ALPHA Warning) This command is currently in experimental mode.
The node addDashboard command adds a custom dashboard to the Grafana monitoring dashboard for the
cluster.
**Usage:**
```bash
avalanche node addDashboard [subcommand] [flags]
```
**Flags:**
```bash
--add-grafana-dashboard string path to additional grafana dashboard json file
-h, --help help for addDashboard
--subnet string subnet that the dashboard is intended for (if any)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### create
(ALPHA Warning) This command is currently in experimental mode.
The node create command sets up a validator on a cloud server of your choice.
The validator will be validating the Avalanche Primary Network and Subnet
of your choice. By default, the command runs an interactive wizard. It
walks you through all the steps you need to set up a validator.
Once this command is completed, you will have to wait for the validator
to finish bootstrapping on the primary network before running further
commands on it, e.g. validating a Subnet. You can check the bootstrapping
status by running avalanche node status
The created node will be part of a group of validators called `clusterName`,
and users can call node commands with `clusterName` so that the command
applies to all nodes in the cluster.
**Usage:**
```bash
avalanche node create [subcommand] [flags]
```
**Flags:**
```bash
--add-grafana-dashboard string path to additional grafana dashboard json file
--alternative-key-pair-name string key pair name to use if default one generates conflicts
--authorize-access authorize CLI to create cloud resources
--auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found
--avalanchego-version-from-subnet string install latest avalanchego version, that is compatible with the given subnet, on node/s
--aws create node/s in AWS cloud
--aws-profile string aws profile to use (default "default")
--aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000)
--aws-volume-size int AWS volume size in GB (default 1000)
--aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125)
--aws-volume-type string AWS volume type (default "gp3")
--bootstrap-ids stringArray nodeIDs of bootstrap nodes
--bootstrap-ips stringArray IP:port pairs of bootstrap nodes
--cluster string operate on the given cluster
--custom-avalanchego-version string install given avalanchego version on node/s
--devnet operate on a devnet network
--enable-monitoring set up Prometheus monitoring for created nodes. This option creates a separate monitoring cloud instance and incurs additional cost
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--gcp create node/s in GCP cloud
--gcp-credentials string use given GCP credentials
--gcp-project string use given GCP project
--genesis string path to genesis file
--grafana-pkg string use grafana pkg instead of apt repo (by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb
-h, --help help for create
--latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s
--latest-avalanchego-version install latest avalanchego release version on node/s
-m, --mainnet operate on mainnet
--node-type string cloud instance type. Use 'default' to use recommended default instance type
--num-apis ints number of API nodes(nodes without stake) to create in the new Devnet
--num-validators ints number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag
--partial-sync primary network partial sync (default true)
--public-http-port allow public access to avalanchego HTTP port
--region strings create node(s) in given region(s). Use comma to separate multiple regions
--ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used
-t, --testnet fuji operate on testnet (alias to fuji)
--upgrade string path to upgrade file
--use-ssh-agent use ssh agent(ex: Yubikey) for ssh auth
--use-static-ip attach static Public IP on cloud servers (default true)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### destroy
(ALPHA Warning) This command is currently in experimental mode.
The node destroy command terminates all running nodes of a cluster in the cloud and deletes all storage disks.
If there is a static IP address attached, it will be released.
**Usage:**
```bash
avalanche node destroy [subcommand] [flags]
```
**Flags:**
```bash
--all destroy all existing clusters created by Avalanche CLI
--authorize-access authorize CLI to release cloud resources
-y, --authorize-all authorize all CLI requests
--authorize-remove authorize CLI to remove all local files related to cloud nodes
--aws-profile string aws profile to use (default "default")
-h, --help help for destroy
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
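For example, to tear down a cluster and pre-authorize all CLI prompts (the cluster name `testCluster` is illustrative):

```bash
avalanche node destroy testCluster --authorize-all
```

To remove every cluster created by the CLI in one pass, use `--all` instead of a cluster name.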
### devnet
(ALPHA Warning) This command is currently in experimental mode.
The node devnet command suite provides a collection of commands related to devnets.
You can check the updated status by calling avalanche node status `clusterName`
**Usage:**
```bash
avalanche node devnet [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-node-devnet-deploy): (ALPHA Warning) This command is currently in experimental mode.
The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it.
It saves the deploy info both locally and remotely.
* [`wiz`](#avalanche-node-devnet-wiz): (ALPHA Warning) This command is currently in experimental mode.
The node devnet wiz command creates a devnet and deploys, syncs, and validates a subnet into it, creating the subnet if needed.
**Flags:**
```bash
-h, --help help for devnet
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### devnet deploy
(ALPHA Warning) This command is currently in experimental mode.
The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it.
It saves the deploy info both locally and remotely.
**Usage:**
```bash
avalanche node devnet deploy [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for deploy
--no-checks do not check for healthy status or rpc compatibility of nodes against subnet
--subnet-aliases strings additional subnet aliases to be used for RPC calls in addition to subnet blockchain name
--subnet-only only create a subnet
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### devnet wiz
(ALPHA Warning) This command is currently in experimental mode.
The node devnet wiz command creates a devnet and deploys, syncs, and validates a subnet into it, creating the subnet if needed.
**Usage:**
```bash
avalanche node devnet wiz [subcommand] [flags]
```
**Flags:**
```bash
--add-grafana-dashboard string path to additional grafana dashboard json file
--alternative-key-pair-name string key pair name to use if default one generates conflicts
--authorize-access authorize CLI to create cloud resources
--auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found
--aws create node/s in AWS cloud
--aws-profile string aws profile to use (default "default")
--aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000)
--aws-volume-size int AWS volume size in GB (default 1000)
--aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125)
--aws-volume-type string AWS volume type (default "gp3")
--chain-config string path to the chain configuration for subnet
--custom-avalanchego-version string install given avalanchego version on node/s
--custom-subnet use a custom VM as the subnet virtual machine
--custom-vm-branch string custom vm branch or commit
--custom-vm-build-script string custom vm build-script
--custom-vm-repo-url string custom vm repository url
--default-validator-params use default weight/start/duration params for subnet validator
--deploy-icm-messenger deploy Interchain Messenger (default true)
--deploy-icm-registry deploy Interchain Registry (default true)
--deploy-teleporter-messenger deploy Interchain Messenger (default true)
--deploy-teleporter-registry deploy Interchain Registry (default true)
--enable-monitoring set up Prometheus monitoring for created nodes. Please note that this option creates a separate monitoring instance and incurs additional cost
--evm-chain-id uint chain ID to use with Subnet-EVM
--evm-defaults use default production settings with Subnet-EVM
--evm-production-defaults use default production settings for your blockchain
--evm-subnet use Subnet-EVM as the subnet virtual machine
--evm-test-defaults use default test settings for your blockchain
--evm-token string token name to use with Subnet-EVM
--evm-version string version of Subnet-EVM to use
--force-subnet-create overwrite the existing subnet configuration if one exists
--gcp create node/s in GCP cloud
--gcp-credentials string use given GCP credentials
--gcp-project string use given GCP project
--grafana-pkg string use grafana pkg instead of apt repo (by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb
-h, --help help for wiz
--icm generate an icm-ready vm
--icm-messenger-contract-address-path string path to an icm messenger contract address file
--icm-messenger-deployer-address-path string path to an icm messenger deployer address file
--icm-messenger-deployer-tx-path string path to an icm messenger deployer tx file
--icm-registry-bytecode-path string path to an icm registry bytecode file
--icm-version string icm version to deploy (default "latest")
--latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s
--latest-avalanchego-version install latest avalanchego release version on node/s
--latest-evm-version use latest Subnet-EVM released version
--latest-pre-released-evm-version use latest Subnet-EVM pre-released version
--node-config string path to avalanchego node configuration for subnet
--node-type string cloud instance type. Use 'default' to use recommended default instance type
--num-apis ints number of API nodes (nodes without stake) to create in the new Devnet
--num-validators ints number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag
--public-http-port allow public access to avalanchego HTTP port
--region strings create node/s in given region(s). Use comma to separate multiple regions
--relayer run AWM relayer when deploying the vm
--ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used.
--subnet-aliases strings additional subnet aliases to be used for RPC calls in addition to subnet blockchain name
--subnet-config string path to the subnet configuration for subnet
--subnet-genesis string file path of the subnet genesis
--teleporter generate an icm-ready vm
--teleporter-messenger-contract-address-path string path to an icm messenger contract address file
--teleporter-messenger-deployer-address-path string path to an icm messenger deployer address file
--teleporter-messenger-deployer-tx-path string path to an icm messenger deployer tx file
--teleporter-registry-bytecode-path string path to an icm registry bytecode file
--teleporter-version string icm version to deploy (default "latest")
--use-ssh-agent use ssh agent for ssh
--use-static-ip attach static Public IP on cloud servers (default true)
--validators strings deploy subnet into given comma separated list of validators. defaults to all cluster nodes
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
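As a sketch, a one-shot invocation that creates a devnet on AWS and deploys a Subnet-EVM blockchain into it might look like the following; the cluster and subnet names, region, and node counts are all illustrative:

```bash
avalanche node devnet wiz myDevnet mySubnet \
  --aws --region us-east-1 \
  --num-validators 5 --num-apis 1 \
  --evm-subnet --evm-defaults
```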
### export
(ALPHA Warning) This command is currently in experimental mode.
The node export command exports cluster configuration and its nodes config to a text file.
If no file is specified, the configuration is printed to stdout.
Use --include-secrets to include keys in the export; in that case, keep the file secure, as it contains sensitive information.
An exported cluster configuration without secrets can be imported by another user using the node import command.
**Usage:**
```bash
avalanche node export [subcommand] [flags]
```
**Flags:**
```bash
--file string specify the file to export the cluster configuration to
--force overwrite the file if it exists
-h, --help help for export
--include-secrets include keys in the export
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
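For example, to write the configuration of an illustrative cluster `myCluster` to a file, overwriting any previous export:

```bash
avalanche node export myCluster --file cluster.txt --force
```

Add `--include-secrets` only if the recipient actually needs the keys, and treat the resulting file as sensitive.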
### import
(ALPHA Warning) This command is currently in experimental mode.
The node import command imports cluster configuration and its nodes configuration from a text file
created from the node export command.
Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by
the cluster owner. This enables you to use avalanche-cli commands to manage the imported cluster.
Please note that the imported cluster is considered EXTERNAL by avalanche-cli, so commands
affecting cloud nodes, such as node create or node destroy, are not applicable to it.
**Usage:**
```bash
avalanche node import [subcommand] [flags]
```
**Flags:**
```bash
--file string specify the file to import the cluster configuration from
-h, --help help for import
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
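For example, to import a cluster definition previously produced by `node export` (the local cluster name and file path are illustrative):

```bash
avalanche node import importedCluster --file cluster.txt
```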
### list
(ALPHA Warning) This command is currently in experimental mode.
The node list command lists all clusters together with their nodes.
**Usage:**
```bash
avalanche node list [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for list
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### loadtest
(ALPHA Warning) This command is currently in experimental mode.
The node loadtest command suite starts and stops a load test for an existing devnet cluster.
**Usage:**
```bash
avalanche node loadtest [subcommand] [flags]
```
**Subcommands:**
* [`start`](#avalanche-node-loadtest-start): (ALPHA Warning) This command is currently in experimental mode.
The node loadtest command starts load testing for an existing devnet cluster. If the cluster does
not have an existing load test host, the command creates a separate cloud server and builds the load
test binary based on the provided load test Git Repo URL and load test binary build command.
The command will then run the load test binary based on the provided load test run command.
* [`stop`](#avalanche-node-loadtest-stop): (ALPHA Warning) This command is currently in experimental mode.
The node loadtest stop command stops load testing for an existing devnet cluster and terminates the
separate cloud server created to host the load test.
**Flags:**
```bash
-h, --help help for loadtest
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### loadtest start
(ALPHA Warning) This command is currently in experimental mode.
The node loadtest command starts load testing for an existing devnet cluster. If the cluster does
not have an existing load test host, the command creates a separate cloud server and builds the load
test binary based on the provided load test Git Repo URL and load test binary build command.
The command will then run the load test binary based on the provided load test run command.
**Usage:**
```bash
avalanche node loadtest start [subcommand] [flags]
```
**Flags:**
```bash
--authorize-access authorize CLI to create cloud resources
--aws create loadtest node in AWS cloud
--aws-profile string aws profile to use (default "default")
--gcp create loadtest in GCP cloud
-h, --help help for start
--load-test-branch string load test branch or commit
--load-test-build-cmd string command to build load test binary
--load-test-cmd string command to run load test
--load-test-repo string load test repo url to use
--node-type string cloud instance type for loadtest script
--region string create load test node in a given region
--ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used
--use-ssh-agent use ssh agent(ex: Yubikey) for ssh auth
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
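As a sketch, starting a load test against an existing devnet cluster could look like the following; the positional names, repository URL, and build/run commands are all illustrative placeholders:

```bash
avalanche node loadtest start myLoadTest myCluster mySubnet \
  --load-test-repo https://github.com/example/loadtest \
  --load-test-build-cmd "go build -o loadtest ./cmd" \
  --load-test-cmd "./loadtest --duration 5m"
```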
#### loadtest stop
(ALPHA Warning) This command is currently in experimental mode.
The node loadtest stop command stops load testing for an existing devnet cluster and terminates the
separate cloud server created to host the load test.
**Usage:**
```bash
avalanche node loadtest stop [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for stop
--load-test strings stop specified load test node(s). Use comma to separate multiple load test instance names
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### local
(ALPHA Warning) This command is currently in experimental mode.
The node local command suite provides a collection of commands related to local nodes.
**Usage:**
```bash
avalanche node local [subcommand] [flags]
```
**Subcommands:**
* [`destroy`](#avalanche-node-local-destroy): Cleanup local node.
* [`start`](#avalanche-node-local-start): (ALPHA Warning) This command is currently in experimental mode.
The node local start command sets up a validator on a local server.
The validator will be validating the Avalanche Primary Network and Subnet
of your choice. By default, the command runs an interactive wizard. It
walks you through all the steps you need to set up a validator.
Once this command is completed, you will have to wait for the validator
to finish bootstrapping on the primary network before running further
commands on it, e.g. validating a Subnet. You can check the bootstrapping
status by running avalanche node status local
* [`status`](#avalanche-node-local-status): Get status of local node.
* [`stop`](#avalanche-node-local-stop): Stop local node.
* [`track`](#avalanche-node-local-track): (ALPHA Warning) Make the local node in the cluster track the given blockchain.
**Flags:**
```bash
-h, --help help for local
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### local destroy
Cleanup local node.
**Usage:**
```bash
avalanche node local destroy [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for destroy
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### local start
(ALPHA Warning) This command is currently in experimental mode.
The node local start command sets up a validator on a local server.
The validator will be validating the Avalanche Primary Network and Subnet
of your choice. By default, the command runs an interactive wizard. It
walks you through all the steps you need to set up a validator.
Once this command is completed, you will have to wait for the validator
to finish bootstrapping on the primary network before running further
commands on it, e.g. validating a Subnet. You can check the bootstrapping
status by running avalanche node status local
**Usage:**
```bash
avalanche node local start [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-path string use this avalanchego binary path
--bootstrap-id stringArray nodeIDs of bootstrap nodes
--bootstrap-ip stringArray IP:port pairs of bootstrap nodes
--cluster string operate on the given cluster
--custom-avalanchego-version string install given avalanchego version on node/s
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis string path to genesis file
-h, --help help for start
--latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s (default true)
--latest-avalanchego-version install latest avalanchego release version on node/s
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-config string path to common avalanchego config settings for all nodes
--num-nodes uint32 number of nodes to start (default 1)
--partial-sync primary network partial sync (default true)
--staking-cert-key-path string path to provided staking cert key for node
--staking-signer-key-path string path to provided staking signer key for node
--staking-tls-key-path string path to provided staking tls key for node
-t, --testnet fuji operate on testnet (alias to fuji)
--upgrade string path to upgrade file
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
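For example, to start a single local validator node on Fuji using the latest AvalancheGo release (the node name is illustrative):

```bash
avalanche node local start myLocalNode --fuji --latest-avalanchego-version
```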
#### local status
Get status of local node.
**Usage:**
```bash
avalanche node local status [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string specify the blockchain the node is syncing with
-h, --help help for status
--subnet string specify the blockchain the node is syncing with
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### local stop
Stop local node.
**Usage:**
```bash
avalanche node local stop [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for stop
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### local track
(ALPHA Warning) Make the local node in the cluster track the given blockchain.
**Usage:**
```bash
avalanche node local track [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-path string use this avalanchego binary path
--custom-avalanchego-version string install given avalanchego version on node/s
-h, --help help for track
--latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s (default true)
--latest-avalanchego-version install latest avalanchego release version on node/s
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### refresh-ips
(ALPHA Warning) This command is currently in experimental mode.
The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster,
and updates the local node information used by CLI commands.
**Usage:**
```bash
avalanche node refresh-ips [subcommand] [flags]
```
**Flags:**
```bash
--aws-profile string aws profile to use (default "default")
-h, --help help for refresh-ips
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### resize
(ALPHA Warning) This command is currently in experimental mode.
The node resize command can change the amount of CPU, memory and disk space available for the cluster nodes.
**Usage:**
```bash
avalanche node resize [subcommand] [flags]
```
**Flags:**
```bash
--aws-profile string aws profile to use (default "default")
--disk-size string Disk size to resize in GB (e.g. 1000GB)
-h, --help help for resize
--node-type string Node type to resize (e.g. t3.2xlarge)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### scp
(ALPHA Warning) This command is currently in experimental mode.
The node scp command securely copies files to and from nodes. A remote source or destination can be specified in the format
`[clusterName|nodeID|instanceID|IP]:/path/to/file`. Regular expressions are supported for source files, for example `/tmp/*.txt`.
File transfers to the nodes are parallelized. If the source or destination is a cluster, the other side must be a local file path.
If both sides are remote, they must be nodes in the same cluster, not clusters themselves.
For example:
* `avalanche node scp [cluster1|node1]:/tmp/file.txt /tmp/file.txt`
* `avalanche node scp /tmp/file.txt [cluster1|NodeID-XXXX]:/tmp/file.txt`
* `avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt`
**Usage:**
```bash
avalanche node scp [subcommand] [flags]
```
**Flags:**
```bash
--compress use compression for ssh
-h, --help help for scp
--recursive copy directories recursively
--with-loadtest include loadtest node for scp cluster operations
--with-monitor include monitoring node for scp cluster operations
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### ssh
(ALPHA Warning) This command is currently in experimental mode.
The node ssh command executes a given command [cmd] over ssh on all nodes in the cluster if a clusterName is given.
If no command is given, it just prints the ssh command to use to connect to each node in the cluster.
If a NodeID, InstanceID, or IP is provided, [cmd] is executed on that node.
If no [cmd] is provided for the node, an interactive ssh shell is opened there.
**Usage:**
```bash
avalanche node ssh [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for ssh
--parallel run ssh command on all nodes in parallel
--with-loadtest include loadtest node for ssh cluster operations
--with-monitor include monitoring node for ssh cluster operations
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
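For example (the cluster name and remote command are illustrative):

```bash
# print the ssh connection command for every node in the cluster
avalanche node ssh myCluster

# run a command on all nodes in parallel
avalanche node ssh myCluster "df -h" --parallel
```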
### status
(ALPHA Warning) This command is currently in experimental mode.
The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network.
If no cluster is given, it defaults to the node list behaviour.
To get the bootstrap status of the nodes with a specific Blockchain, use the --blockchain flag.
**Usage:**
```bash
avalanche node status [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string specify the blockchain the node is syncing with
-h, --help help for status
--subnet string specify the blockchain the node is syncing with
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
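For example, to check how the nodes of an illustrative cluster are syncing with a specific blockchain:

```bash
avalanche node status myCluster --blockchain mySubnet
```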
### sync
(ALPHA Warning) This command is currently in experimental mode.
The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain.
You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName`
**Usage:**
```bash
avalanche node sync [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for sync
--no-checks do not check for bootstrapped/healthy status or rpc compatibility of nodes against subnet
--subnet-aliases strings subnet alias to be used for RPC calls. defaults to subnet blockchain ID
--validators strings sync subnet into given comma separated list of validators. defaults to all cluster nodes
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### update
(ALPHA Warning) This command is currently in experimental mode.
The node update command suite provides a collection of commands for nodes to update
their avalanchego or VM config.
You can check the status after update by calling avalanche node status
**Usage:**
```bash
avalanche node update [subcommand] [flags]
```
**Subcommands:**
* [`subnet`](#avalanche-node-update-subnet): (ALPHA Warning) This command is currently in experimental mode.
The node update subnet command updates all nodes in a cluster with latest Subnet configuration and VM for custom VM.
You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName`
**Flags:**
```bash
-h, --help help for update
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### update subnet
(ALPHA Warning) This command is currently in experimental mode.
The node update subnet command updates all nodes in a cluster with latest Subnet configuration and VM for custom VM.
You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName`
**Usage:**
```bash
avalanche node update subnet [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for subnet
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### upgrade
(ALPHA Warning) This command is currently in experimental mode.
The node upgrade command suite provides a collection of commands for nodes to upgrade
their avalanchego or VM version.
You can check the status after upgrade by calling avalanche node status
**Usage:**
```bash
avalanche node upgrade [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for upgrade
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### validate
(ALPHA Warning) This command is currently in experimental mode.
The node validate command suite provides a collection of commands for nodes to join
the Primary Network and Subnets as validators.
If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command
will fail. You can check the bootstrap status by calling avalanche node status `clusterName`
**Usage:**
```bash
avalanche node validate [subcommand] [flags]
```
**Subcommands:**
* [`primary`](#avalanche-node-validate-primary): (ALPHA Warning) This command is currently in experimental mode.
The node validate primary command enables all nodes in a cluster to be validators of Primary
Network.
* [`subnet`](#avalanche-node-validate-subnet): (ALPHA Warning) This command is currently in experimental mode.
The node validate subnet command enables all nodes in a cluster to be validators of a Subnet.
If the command is run before the nodes are Primary Network validators, the command will first
make the nodes Primary Network validators before making them Subnet validators.
If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail.
You can check the bootstrap status by calling avalanche node status `clusterName`
If the command is run before the nodes are synced to the subnet, the command will fail.
You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName`
**Flags:**
```bash
-h, --help help for validate
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### validate primary
(ALPHA Warning) This command is currently in experimental mode.
The node validate primary command enables all nodes in a cluster to be validators of Primary
Network.
**Usage:**
```bash
avalanche node validate primary [subcommand] [flags]
```
**Flags:**
```bash
-e, --ewoq use ewoq key [fuji/devnet only]
-h, --help help for primary
-k, --key string select the key to use [fuji only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
--stake-amount uint how many AVAX to stake in the validator
--staking-period duration how long validator validates for after start time
--start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### validate subnet
(ALPHA Warning) This command is currently in experimental mode.
The node validate subnet command enables all nodes in a cluster to be validators of a Subnet.
If the command is run before the nodes are Primary Network validators, the command will first
make the nodes Primary Network validators before making them Subnet validators.
If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail.
You can check the bootstrap status by calling avalanche node status `clusterName`
If the command is run before the nodes are synced to the subnet, the command will fail.
You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName`
**Usage:**
```bash
avalanche node validate subnet [subcommand] [flags]
```
**Flags:**
```bash
--default-validator-params use default weight/start/duration params for subnet validator
-e, --ewoq use ewoq key [fuji/devnet only]
-h, --help help for subnet
-k, --key string select the key to use [fuji/devnet only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
--no-checks do not check for bootstrapped status or healthy status
--no-validation-checks do not check if subnet is already synced or validated (default true)
--stake-amount uint how many AVAX to stake in the validator
--staking-period duration how long validator validates for after start time
--start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
--validators strings validate subnet for the given comma separated list of validators. defaults to all cluster nodes
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
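For example, to make all nodes of a cluster validators of a subnet using the default weight/start/duration parameters (both positional names are illustrative):

```bash
avalanche node validate subnet myCluster mySubnet --default-validator-params
```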
### whitelist
(ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster.
If the --ip parameter is provided, the command adds that IP to the cloud security access rules, allowing it to access all nodes in the cluster via ssh or http.
If the --ssh parameter is provided, it adds that SSH public key to all nodes in the cluster.
If no parameters are provided, it detects the current user's IP automatically and whitelists it.
**Usage:**
```bash
avalanche node whitelist [subcommand] [flags]
```
**Flags:**
```bash
-y, --current-ip whitelist current host ip
-h, --help help for whitelist
--ip string ip address to whitelist
--ssh string ssh public key to whitelist
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
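**Example**: illustrative invocations, assuming the cluster name is passed as a positional argument (all values are placeholders):

```bash
# Whitelist a specific IP address for SSH/HTTP access:
avalanche node whitelist myCluster --ip 203.0.113.10
# Whitelist an SSH public key on all nodes in the cluster:
avalanche node whitelist myCluster --ssh "ssh-ed25519 AAAA... user@host"
# With no flags, the current user's IP is detected and whitelisted:
avalanche node whitelist myCluster
```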
## avalanche primary
The primary command suite provides a collection of tools for interacting with the
Primary Network
**Usage:**
```bash
avalanche primary [subcommand] [flags]
```
**Subcommands:**
* [`addValidator`](#avalanche-primary-addvalidator): The primary addValidator command adds a node as a validator
in the Primary Network
* [`describe`](#avalanche-primary-describe): The primary describe command prints details of the Primary Network configuration to the console.
**Flags:**
```bash
-h, --help help for primary
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### addValidator
The primary addValidator command adds a node as a validator
in the Primary Network
**Usage:**
```bash
avalanche primary addValidator [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--delegation-fee uint32 set the delegation fee (20,000 is equivalent to 2%)
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for addValidator
-k, --key string select the key to use [fuji only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings use the given ledger addresses
-m, --mainnet operate on mainnet
--nodeID string set the NodeID of the validator to add
--proof-of-possession string set the BLS proof of possession of the validator to add
--public-key string set the BLS public key of the validator to add
--staking-period duration how long this validator will be staking
--start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
-t, --testnet fuji operate on testnet (alias to fuji)
--weight uint set the staking weight of the validator to add
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
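**Example**: an illustrative sketch of adding a Primary Network validator on Fuji with a stored key (angle-bracketed values are placeholders you must supply):

```bash
avalanche primary addValidator \
  --fuji \
  --key <keyName> \
  --nodeID <NodeID> \
  --public-key <blsPublicKey> \
  --proof-of-possession <blsProofOfPossession> \
  --staking-period 336h \
  --delegation-fee 20000   # 20,000 corresponds to a 2% delegation fee
```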
### describe
The primary describe command prints details of the Primary Network configuration to the console.
**Usage:**
```bash
avalanche primary describe [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
-h, --help help for describe
-l, --local operate on a local network
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche subnet
The subnet command suite provides a collection of tools for developing
and deploying Blockchains.
To get started, use the subnet create command wizard to walk through the
configuration of your very first Blockchain. Then, go ahead and deploy it
with the subnet deploy command. You can use the rest of the commands to
manage your Blockchain configurations and live deployments.
Deprecation notice: use 'avalanche blockchain' instead.
**Usage:**
```bash
avalanche subnet [subcommand] [flags]
```
**Subcommands:**
* [`addValidator`](#avalanche-subnet-addvalidator): The blockchain addValidator command adds a node as a validator to
an L1 of the user provided deployed network. If the network is proof of
authority, the owner of the validator manager contract must sign the
transaction. If the network is proof of stake, the node must stake the L1's
staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain.
This command currently only works on Blockchains deployed to either the Fuji
Testnet or Mainnet.
* [`changeOwner`](#avalanche-subnet-changeowner): The blockchain changeOwner command changes the owner of the subnet of the deployed Blockchain.
* [`changeWeight`](#avalanche-subnet-changeweight): The blockchain changeWeight command changes the weight of a Subnet Validator.
The Subnet has to be a Proof of Authority Subnet-Only Validator Subnet.
* [`configure`](#avalanche-subnet-configure): AvalancheGo nodes support several different configuration files. Subnets have their own
Subnet config which applies to all chains/VMs in the Subnet. Each chain within the Subnet
can have its own chain config. A chain can also have special requirements for the AvalancheGo node
configuration itself. This command allows you to set all those files.
* [`create`](#avalanche-subnet-create): The blockchain create command builds a new genesis file to configure your Blockchain.
By default, the command runs an interactive wizard. It walks you through
all the steps you need to create your first Blockchain.
The tool supports deploying Subnet-EVM, and custom VMs. You
can create a custom, user-generated genesis with a custom VM by providing
the path to your genesis and VM binaries with the --genesis and --vm flags.
By default, running the command with a blockchainName that already exists
causes the command to fail. If you'd like to overwrite an existing
configuration, pass the -f flag.
* [`delete`](#avalanche-subnet-delete): The blockchain delete command deletes an existing blockchain configuration.
* [`deploy`](#avalanche-subnet-deploy): The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet.
At the end of the call, the command prints the RPC URL you can use to interact with the Subnet.
Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent
attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't
allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call
avalanche network clean to reset all deployed chain state. Subsequent local deploys
redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks,
so you can take your locally tested Subnet and deploy it on Fuji or Mainnet.
* [`describe`](#avalanche-subnet-describe): The blockchain describe command prints the details of a Blockchain configuration to the console.
By default, the command prints a summary of the configuration. By providing the --genesis
flag, the command instead prints out the raw genesis file.
* [`export`](#avalanche-subnet-export): The blockchain export command writes the details of an existing Blockchain deploy to a file.
The command prompts for an output path. You can also provide one with
the --output flag.
* [`import`](#avalanche-subnet-import): Import blockchain configurations into avalanche-cli.
This command suite supports importing from a file created on another computer,
or importing from blockchains running public networks
(e.g. created manually or with the deprecated subnet-cli)
* [`join`](#avalanche-subnet-join): The subnet join command configures your validator node to begin validating a new Blockchain.
To complete this process, you must have access to the machine running your validator. If the
CLI is running on the same machine as your validator, it can generate or update your node's
config file automatically. Alternatively, the command can print the necessary instructions
to update your node manually. To complete the validation process, the Subnet's admins must add
the NodeID of your validator to the Subnet's allow list by calling addValidator with your
NodeID.
After you update your validator's config, you need to restart your validator manually. If
you provide the --avalanchego-config flag, this command attempts to edit the config file
at that path.
This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
* [`list`](#avalanche-subnet-list): The blockchain list command prints the names of all created Blockchain configurations. Without any flags,
it prints some general, static information about the Blockchain. With the --deployed flag, the command
shows additional information including the VMID, BlockchainID and SubnetID.
* [`publish`](#avalanche-subnet-publish): The blockchain publish command publishes the Blockchain's VM to a repository.
* [`removeValidator`](#avalanche-subnet-removevalidator): The blockchain removeValidator command stops a whitelisted subnet network validator from
validating your deployed Blockchain.
To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass
these prompts by providing the values with flags.
* [`stats`](#avalanche-subnet-stats): The blockchain stats command prints validator statistics for the given Blockchain.
* [`upgrade`](#avalanche-subnet-upgrade): The blockchain upgrade command suite provides a collection of tools for
updating your developmental and deployed Blockchains.
* [`validators`](#avalanche-subnet-validators): The blockchain validators command lists the validators of a blockchain's subnet and provides
several statistics about them.
* [`vmid`](#avalanche-subnet-vmid): The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain.
**Flags:**
```bash
-h, --help help for subnet
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### addValidator
The blockchain addValidator command adds a node as a validator to
an L1 of the user provided deployed network. If the network is proof of
authority, the owner of the validator manager contract must sign the
transaction. If the network is proof of stake, the node must stake the L1's
staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain.
This command currently only works on Blockchains deployed to either the Fuji
Testnet or Mainnet.
**Usage:**
```bash
avalanche subnet addValidator [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string log level to use with signature aggregator (default "Off")
--balance uint set the AVAX balance of the validator that will be used for continuous fee on P-Chain
--blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's registration (blockchain gas token)
--blockchain-key string CLI stored key to use to pay fees for completing the validator's registration (blockchain gas token)
--blockchain-private-key string private key to use to pay fees for completing the validator's registration (blockchain gas token)
--bls-proof-of-possession string set the BLS proof of possession of the validator to add
--bls-public-key string set the BLS public key of the validator to add
--cluster string operate on the given cluster
--create-local-validator create additional local validator and add it to existing running local node
--default-duration (for Subnets, not L1s) set duration so as to validate until primary validator ends its period
--default-start-time (for Subnets, not L1s) use default start time for subnet validator (5 minutes later for fuji & mainnet, 30 seconds later for devnet)
--default-validator-params (for Subnets, not L1s) use default weight/start/duration params for subnet validator
--delegation-fee uint16 (PoS only) delegation fee (in bips) (default 100)
--devnet operate on a devnet network
--disable-owner string P-Chain address that will be able to disable the validator with a P-Chain transaction
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet only]
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for addValidator
-k, --key string select the key to use [fuji/devnet only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-endpoint string gather node id/bls from publicly available avalanchego apis on the given endpoint
--node-id string node-id of the validator to add
--output-tx-path string (for Subnets, not L1s) file path of the add validator tx
--partial-sync set primary network partial sync for new validators (default true)
--remaining-balance-owner string P-Chain address that will receive any leftover AVAX from the validator when it is removed from Subnet
--rpc string connect to validator manager at the given rpc endpoint
--stake-amount uint (PoS only) amount of tokens to stake
--staking-period duration how long this validator will be staking
--start-time string (for Subnets, not L1s) UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
--subnet-auth-keys strings (for Subnets, not L1s) control keys that will be used to authenticate add validator tx
-t, --testnet fuji operate on testnet (alias to fuji)
--wait-for-tx-acceptance (for Subnets, not L1s) just issue the add validator tx, without waiting for its acceptance (default true)
--weight uint set the staking weight of the validator to add (default 20)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
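**Example**: an illustrative sketch for a local network, assuming the blockchain name is passed as a positional argument (the name is a placeholder):

```bash
# Spin up an additional local validator and register it on the L1,
# paying registration fees with the genesis-allocated key:
avalanche subnet addValidator myBlockchain \
  --local \
  --create-local-validator \
  --balance 1 \
  --blockchain-genesis-key
```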
### changeOwner
The blockchain changeOwner command changes the owner of the subnet of the deployed Blockchain.
**Usage:**
```bash
avalanche subnet changeOwner [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--control-keys strings addresses that may make subnet changes
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet]
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for changeOwner
-k, --key string select the key to use [fuji/devnet]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--output-tx-path string file path of the transfer subnet ownership tx
-s, --same-control-key use the fee-paying key as control key
--subnet-auth-keys strings control keys that will be used to authenticate transfer subnet ownership tx
-t, --testnet fuji operate on testnet (alias to fuji)
--threshold uint32 required number of control key signatures to make subnet changes
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### changeWeight
The blockchain changeWeight command changes the weight of a Subnet Validator.
The Subnet has to be a Proof of Authority Subnet-Only Validator Subnet.
**Usage:**
```bash
avalanche subnet changeWeight [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet only]
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for changeWeight
-k, --key string select the key to use [fuji/devnet only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-id string node-id of the validator
-t, --testnet fuji operate on testnet (alias to fuji)
--weight uint set the new staking weight of the validator (default 20)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### configure
AvalancheGo nodes support several different configuration files. Subnets have their own
Subnet config which applies to all chains/VMs in the Subnet. Each chain within the Subnet
can have its own chain config. A chain can also have special requirements for the AvalancheGo node
configuration itself. This command allows you to set all those files.
**Usage:**
```bash
avalanche subnet configure [subcommand] [flags]
```
**Flags:**
```bash
--chain-config string path to the chain configuration
-h, --help help for configure
--node-config string path to avalanchego node configuration
--per-node-chain-config string path to per node chain configuration for local network
--subnet-config string path to the subnet configuration
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### create
The blockchain create command builds a new genesis file to configure your Blockchain.
By default, the command runs an interactive wizard. It walks you through
all the steps you need to create your first Blockchain.
The tool supports deploying Subnet-EVM, and custom VMs. You
can create a custom, user-generated genesis with a custom VM by providing
the path to your genesis and VM binaries with the --genesis and --vm flags.
By default, running the command with a blockchainName that already exists
causes the command to fail. If you'd like to overwrite an existing
configuration, pass the -f flag.
**Usage:**
```bash
avalanche subnet create [subcommand] [flags]
```
**Flags:**
```bash
--custom use a custom VM template
--custom-vm-branch string custom vm branch or commit
--custom-vm-build-script string custom vm build-script
--custom-vm-path string file path of custom vm to use
--custom-vm-repo-url string custom vm repository url
--debug enable blockchain debugging (default true)
--evm use the Subnet-EVM as the base template
--evm-chain-id uint chain ID to use with Subnet-EVM
--evm-defaults deprecation notice: use '--production-defaults'
--evm-token string token symbol to use with Subnet-EVM
--external-gas-token use a gas token from another blockchain
-f, --force overwrite the existing configuration if one exists
--from-github-repo generate custom VM binary from github repository
--genesis string file path of genesis to use
-h, --help help for create
--icm interoperate with other blockchains using ICM
--icm-registry-at-genesis setup ICM registry smart contract on genesis [experimental]
--latest use latest Subnet-EVM released version, takes precedence over --vm-version
--pre-release use latest Subnet-EVM pre-released version, takes precedence over --vm-version
--production-defaults use default production settings for your blockchain
--proof-of-authority use proof of authority (PoA) for validator management
--proof-of-stake use proof of stake (PoS) for validator management
--proxy-contract-owner string EVM address that controls ProxyAdmin for TransparentProxy of ValidatorManager contract
--reward-basis-points uint (PoS only) reward basis points for PoS Reward Calculator (default 100)
--sovereign set to false if creating non-sovereign blockchain (default true)
--teleporter interoperate with other blockchains using ICM
--test-defaults use default test settings for your blockchain
--validator-manager-owner string EVM address that controls Validator Manager Owner
--vm string file path of custom vm to use. alias to custom-vm-path
--vm-version string version of Subnet-EVM template to use
--warp generate a vm with warp support (needed for ICM) (default true)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
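**Example**: an illustrative non-interactive sketch creating a Subnet-EVM configuration (the name, chain ID, token symbol, and owner address are placeholders):

```bash
avalanche subnet create myBlockchain \
  --evm \
  --evm-chain-id 12345 \
  --evm-token MYTOK \
  --test-defaults \
  --proof-of-authority \
  --validator-manager-owner <evmAddress>
```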
### delete
The blockchain delete command deletes an existing blockchain configuration.
**Usage:**
```bash
avalanche subnet delete [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for delete
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### deploy
The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet.
At the end of the call, the command prints the RPC URL you can use to interact with the Subnet.
Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent
attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't
allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call
avalanche network clean to reset all deployed chain state. Subsequent local deploys
redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks,
so you can take your locally tested Subnet and deploy it on Fuji or Mainnet.
**Usage:**
```bash
avalanche subnet deploy [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string log level to use with signature aggregator (default "Off")
--avalanchego-path string use this avalanchego binary path
--avalanchego-version string use this version of avalanchego (ex: v1.17.12) (default "latest-prerelease")
--balance float set the AVAX balance of each bootstrap validator that will be used for continuous fee on P-Chain (default 0.1)
--blockchain-genesis-key use genesis allocated key to fund validator manager initialization
--blockchain-key string CLI stored key to use to fund validator manager initialization
--blockchain-private-key string private key to use to fund validator manager initialization
--bootstrap-endpoints strings take validator node info from the given endpoints
--bootstrap-filepath string JSON file path that provides details about bootstrap validators, leave Node-ID and BLS values empty if using --generate-node-id=true
--cchain-funding-key string key to be used to fund relayer account on cchain
--cchain-icm-key string key to be used to pay for ICM deploys on C-Chain
--change-owner-address string address that will receive change if node is no longer L1 validator
--cluster string operate on the given cluster
--control-keys strings addresses that may make subnet changes
--convert-only avoid node track, restart and poa manager setup
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-e, --ewoq use ewoq key [fuji/devnet deploy only]
-f, --fuji testnet operate on fuji (alias to testnet)
--generate-node-id whether to create new node id for bootstrap validators (Node-ID and BLS values in bootstrap JSON file will be overridden if --bootstrap-filepath flag is used)
-h, --help help for deploy
--icm-key string key to be used to pay for ICM deploys (default "cli-teleporter-deployer")
--icm-version string ICM version to deploy (default "latest")
-k, --key string select the key to use [fuji/devnet deploy only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--mainnet-chain-id uint32 use different ChainID for mainnet deployment
--noicm skip automatic ICM deploy
--num-bootstrap-validators int (only if --generate-node-id is true) number of bootstrap validators to set up in the sovereign L1
--num-local-nodes int number of nodes to be created on local machine
--num-nodes uint32 number of nodes to be created on local network deploy (default 2)
--output-tx-path string file path of the blockchain creation tx
--partial-sync set primary network partial sync for new validators (default true)
--pos-maximum-stake-amount uint maximum stake amount (default 1000)
--pos-maximum-stake-multiplier uint8 maximum stake multiplier (default 1)
--pos-minimum-delegation-fee uint16 minimum delegation fee (default 1)
--pos-minimum-stake-amount uint minimum stake amount (default 1)
--pos-minimum-stake-duration uint minimum stake duration (default 100)
--pos-weight-to-value-factor uint weight to value factor (default 1)
--relay-cchain relay C-Chain as source and destination (default true)
--relayer-allow-private-ips allow relayer to connect to private IPs (default true)
--relayer-amount float automatically fund relayer fee payments with the given amount
--relayer-key string key to be used by default both for rewards and to pay fees
--relayer-log-level string log level to be used for relayer logs (default "info")
--relayer-path string relayer binary to use
--relayer-version string relayer version to deploy (default "latest-prerelease")
-s, --same-control-key use the fee-paying key as control key
--skip-icm-deploy skip automatic ICM deploy
--skip-local-teleporter skip automatic ICM deploy on local networks [to be deprecated]
--skip-relayer skip relayer deploy
--skip-teleporter-deploy skip automatic ICM deploy
--subnet-auth-keys strings control keys that will be used to authenticate chain creation
-u, --subnet-id string do not create a subnet, deploy the blockchain into the given subnet id
--subnet-only only create a subnet
--teleporter-messenger-contract-address-path string path to an ICM Messenger contract address file
--teleporter-messenger-deployer-address-path string path to an ICM Messenger deployer address file
--teleporter-messenger-deployer-tx-path string path to an ICM Messenger deployer tx file
--teleporter-registry-bytecode-path string path to an ICM Registry bytecode file
--teleporter-version string ICM version to deploy (default "latest")
-t, --testnet fuji operate on testnet (alias to fuji)
--threshold uint32 required number of control key signatures to make subnet changes
--use-local-machine use local machine as a blockchain validator
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
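**Example**: an illustrative local-then-Fuji workflow, assuming the blockchain name is passed as a positional argument (names are placeholders):

```bash
# First deploy locally for testing:
avalanche subnet deploy myBlockchain --local
# Reset local chain state before a local redeploy:
avalanche network clean
# Then deploy the same blockchain to Fuji with a stored key:
avalanche subnet deploy myBlockchain --fuji --key myFujiKey
```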
### describe
The blockchain describe command prints the details of a Blockchain configuration to the console.
By default, the command prints a summary of the configuration. By providing the --genesis
flag, the command instead prints out the raw genesis file.
**Usage:**
```bash
avalanche subnet describe [subcommand] [flags]
```
**Flags:**
```bash
-g, --genesis Print the genesis to the console directly instead of the summary
-h, --help help for describe
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### export
The blockchain export command writes the details of an existing Blockchain deploy to a file.
The command prompts for an output path. You can also provide one with
the --output flag.
**Usage:**
```bash
avalanche subnet export [subcommand] [flags]
```
**Flags:**
```bash
--custom-vm-branch string custom vm branch
--custom-vm-build-script string custom vm build-script
--custom-vm-repo-url string custom vm repository url
-h, --help help for export
-o, --output string write the export data to the provided file path
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### import
Import blockchain configurations into avalanche-cli.
This command suite supports importing from a file created on another computer,
or importing from blockchains running public networks
(e.g. created manually or with the deprecated subnet-cli)
**Usage:**
```bash
avalanche subnet import [subcommand] [flags]
```
**Subcommands:**
* [`file`](#avalanche-subnet-import-file): The blockchain import command will import a blockchain configuration from a file or a git repository.
To import from a file, you can optionally provide the path as a command-line argument.
Alternatively, running the command without any arguments triggers an interactive wizard.
To import from a repository, go through the wizard. By default, an imported Blockchain doesn't
overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
* [`public`](#avalanche-subnet-import-public): The blockchain import public command imports a Blockchain configuration from a running network.
By default, an imported Blockchain
doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
**Flags:**
```bash
-h, --help help for import
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### import file
The blockchain import command will import a blockchain configuration from a file or a git repository.
To import from a file, you can optionally provide the path as a command-line argument.
Alternatively, running the command without any arguments triggers an interactive wizard.
To import from a repository, go through the wizard. By default, an imported Blockchain doesn't
overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
**Usage:**
```bash
avalanche subnet import file [subcommand] [flags]
```
**Flags:**
```bash
--branch string the repo branch to use if downloading a new repo
-f, --force overwrite the existing configuration if one exists
-h, --help help for file
--repo string the repo to import (ex: ava-labs/avalanche-plugins-core) or url to download the repo from
--subnet string the subnet configuration to import from the provided repo
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### import public
The blockchain import public command imports a Blockchain configuration from a running network.
By default, an imported Blockchain
doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force
flag.
**Usage:**
```bash
avalanche subnet import public [subcommand] [flags]
```
**Flags:**
```bash
--blockchain-id string the blockchain ID
--cluster string operate on the given cluster
--custom use a custom VM template
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--evm import a subnet-evm
--force overwrite the existing configuration if one exists
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for public
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-url string [optional] URL of an already running subnet validator
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### join
The subnet join command configures your validator node to begin validating a new Blockchain.
To complete this process, you must have access to the machine running your validator. If the
CLI is running on the same machine as your validator, it can generate or update your node's
config file automatically. Alternatively, the command can print the necessary instructions
to update your node manually. To complete the validation process, the Subnet's admins must add
the NodeID of your validator to the Subnet's allow list by calling addValidator with your
NodeID.
After you update your validator's config, you need to restart your validator manually. If
you provide the --avalanchego-config flag, this command attempts to edit the config file
at that path.
This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
**Usage:**
```bash
avalanche subnet join [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-config string file path of the avalanchego config file
--cluster string operate on the given cluster
--data-dir string path of avalanchego's data dir directory
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force-write if true, skip the prompt to overwrite the config file
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for join
-k, --key string select the key to use [fuji only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-id string set the NodeID of the validator to check
--plugin-dir string file path of avalanchego's plugin directory
--print if true, print the manual config without prompting
--stake-amount uint amount of tokens to stake on validator
--staking-period duration how long validator validates for after start time
--start-time string start time that validator starts validating
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
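The prompts above can be bypassed with flags for a scripted join. The following is a hypothetical non-interactive invocation on Fuji; the configuration name `mySubnet`, the NodeID, and the file paths are placeholders for your own values:

```bash
avalanche subnet join mySubnet \
  --fuji \
  --node-id NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg \
  --avalanchego-config ~/.avalanchego/configs/node.json \
  --plugin-dir ~/.avalanchego/plugins \
  --force-write
```

After the config file is updated, restart your validator manually as described above.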
### list
The blockchain list command prints the names of all created Blockchain configurations. Without any flags,
it prints some general, static information about the Blockchain. With the --deployed flag, the command
shows additional information including the VMID, BlockchainID and SubnetID.
**Usage:**
```bash
avalanche subnet list [subcommand] [flags]
```
**Flags:**
```bash
--deployed show additional deploy information
-h, --help help for list
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### publish
The blockchain publish command publishes the Blockchain's VM to a repository.
**Usage:**
```bash
avalanche subnet publish [subcommand] [flags]
```
**Flags:**
```bash
--alias string We publish to a remote repo, but identify the repo locally under a user-provided alias (e.g. myrepo).
--force If true, ignores if the subnet has been published in the past, and attempts a forced publish.
-h, --help help for publish
--no-repo-path string Do not let the tool manage file publishing, but have it only generate the files and put them in the location given by this flag.
--repo-url string The URL of the repo where we are publishing
--subnet-file-path string Path to the Subnet description file. If not given, a prompting sequence will be initiated.
--vm-file-path string Path to the VM description file. If not given, a prompting sequence will be initiated.
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### removeValidator
The blockchain removeValidator command stops a whitelisted Subnet validator from
validating your deployed Blockchain.
To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass
these prompts by providing the values with flags.
**Usage:**
```bash
avalanche subnet removeValidator [subcommand] [flags]
```
**Flags:**
```bash
--aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string log level to use with signature aggregator (default "Off")
--blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's removal (blockchain gas token)
--blockchain-key string CLI stored key to use to pay fees for completing the validator's removal (blockchain gas token)
--blockchain-private-key string private key to use to pay fees for completing the validator's removal (blockchain gas token)
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force force validator removal even if it's not getting rewarded
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for removeValidator
-k, --key string select the key to use [fuji deploy only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings use the given ledger addresses
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-endpoint string remove validator that responds to the given endpoint
--node-id string node-id of the validator
--output-tx-path string (for non-SOV blockchain only) file path of the removeValidator tx
--rpc string connect to validator manager at the given rpc endpoint
--subnet-auth-keys strings (for non-SOV blockchain only) control keys that will be used to authenticate the removeValidator tx
-t, --testnet fuji operate on testnet (alias to fuji)
--uptime uint validator's uptime in seconds. If not provided, it will be automatically calculated
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
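The prompts can be bypassed by providing the values with flags. A hypothetical example on Fuji, where `mySubnet`, the stored key name, and the NodeID are placeholders:

```bash
avalanche subnet removeValidator mySubnet \
  --fuji \
  --key mykey \
  --node-id NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg
```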
### stats
The blockchain stats command prints validator statistics for the given Blockchain.
**Usage:**
```bash
avalanche subnet stats [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for stats
-l, --local operate on a local network
-m, --mainnet operate on mainnet
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### upgrade
The blockchain upgrade command suite provides a collection of tools for
updating your developmental and deployed Blockchains.
**Usage:**
```bash
avalanche subnet upgrade [subcommand] [flags]
```
**Subcommands:**
* [`apply`](#avalanche-subnet-upgrade-apply): Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade.
For public networks (Fuji Testnet or Mainnet), to complete this process,
you must have access to the machine running your validator.
If the CLI is running on the same machine as your validator, it can manipulate your node's
configuration automatically. Alternatively, the command can print the necessary instructions
to upgrade your node manually.
After you update your validator's configuration, you need to restart your validator manually.
If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path.
Refer to [https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs](https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs) for related documentation.
* [`export`](#avalanche-subnet-upgrade-export): Export the upgrade bytes file to a location of choice on disk
* [`generate`](#avalanche-subnet-upgrade-generate): The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It
guides the user through the process using an interactive wizard.
* [`import`](#avalanche-subnet-upgrade-import): Import the upgrade bytes file into the local environment
* [`print`](#avalanche-subnet-upgrade-print): Print the upgrade.json file content
* [`vm`](#avalanche-subnet-upgrade-vm): The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command
can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet.
The command walks the user through an interactive wizard. The user can skip the wizard by providing
command line flags.
**Flags:**
```bash
-h, --help help for upgrade
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
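A typical network upgrade chains the subcommands above: generate the upgrade file once, then distribute, import, and apply it on each node. This is a sketch, assuming a Blockchain configuration named `mySubnet` deployed on a local network:

```bash
# Build upgrade.json via the interactive wizard
avalanche subnet upgrade generate mySubnet
# Export the upgrade bytes to a file of your choice
avalanche subnet upgrade export mySubnet --upgrade-filepath ./upgrade.json
# On each validator machine, import the same file...
avalanche subnet upgrade import mySubnet --upgrade-filepath ./upgrade.json
# ...and apply it to the running deployment
avalanche subnet upgrade apply mySubnet --local
```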
#### upgrade apply
Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade.
For public networks (Fuji Testnet or Mainnet), to complete this process,
you must have access to the machine running your validator.
If the CLI is running on the same machine as your validator, it can manipulate your node's
configuration automatically. Alternatively, the command can print the necessary instructions
to upgrade your node manually.
After you update your validator's configuration, you need to restart your validator manually.
If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path.
Refer to [https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs](https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs) for related documentation.
**Usage:**
```bash
avalanche subnet upgrade apply [subcommand] [flags]
```
**Flags:**
```bash
--avalanchego-chain-config-dir string avalanchego's chain config file directory (default "$HOME/.avalanchego/chains")
--config create upgrade config for future subnet deployments (same as generate)
--force If true, don't prompt for confirmation of timestamps in the past
--fuji fuji apply upgrade existing fuji deployment (alias for `testnet`)
-h, --help help for apply
--local local apply upgrade existing local deployment
--mainnet mainnet apply upgrade existing mainnet deployment
--print if true, print the manual config without prompting (for public networks only)
--testnet testnet apply upgrade existing testnet deployment (alias for `fuji`)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade export
Export the upgrade bytes file to a location of choice on disk
**Usage:**
```bash
avalanche subnet upgrade export [subcommand] [flags]
```
**Flags:**
```bash
--force If true, overwrite a possibly existing file without prompting
-h, --help help for export
--upgrade-filepath string Export upgrade bytes file to location of choice on disk
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade generate
The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It
guides the user through the process using an interactive wizard.
**Usage:**
```bash
avalanche subnet upgrade generate [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for generate
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade import
Import the upgrade bytes file into the local environment
**Usage:**
```bash
avalanche subnet upgrade import [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for import
--upgrade-filepath string Import upgrade bytes file into local environment
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade print
Print the upgrade.json file content
**Usage:**
```bash
avalanche subnet upgrade print [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for print
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
#### upgrade vm
The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command
can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet.
The command walks the user through an interactive wizard. The user can skip the wizard by providing
command line flags.
**Usage:**
```bash
avalanche subnet upgrade vm [subcommand] [flags]
```
**Flags:**
```bash
--binary string Upgrade to custom binary
--config upgrade config for future subnet deployments
--fuji fuji upgrade existing fuji deployment (alias for `testnet`)
-h, --help help for vm
--latest upgrade to latest version
--local local upgrade existing local deployment
--mainnet mainnet upgrade existing mainnet deployment
--plugin-dir string plugin directory to automatically upgrade VM
--print print instructions for upgrading
--testnet testnet upgrade existing testnet deployment (alias for `fuji`)
--version string Upgrade to custom version
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### validators
The blockchain validators command lists the validators of a blockchain's subnet and provides
several statistics about them.
**Usage:**
```bash
avalanche subnet validators [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for validators
-l, --local operate on a local network
-m, --mainnet operate on mainnet
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### vmid
The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain.
**Usage:**
```bash
avalanche subnet vmid [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for vmid
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche teleporter
The messenger command suite provides a collection of tools for interacting
with ICM messenger contracts.
**Usage:**
```bash
avalanche teleporter [subcommand] [flags]
```
**Subcommands:**
* [`deploy`](#avalanche-teleporter-deploy): Deploys ICM Messenger and Registry into a given L1.
* [`sendMsg`](#avalanche-teleporter-sendmsg): Sends an ICM message between two subnets and waits for it to be received.
**Flags:**
```bash
-h, --help help for teleporter
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### deploy
Deploys ICM Messenger and Registry into a given L1.
**Usage:**
```bash
avalanche teleporter deploy [subcommand] [flags]
```
**Flags:**
```bash
--blockchain string deploy ICM into the given CLI blockchain
--blockchain-id string deploy ICM into the given blockchain ID/Alias
--c-chain deploy ICM into C-Chain
--cchain-key string key to be used to pay fees to deploy ICM to C-Chain
--cluster string operate on the given cluster
--deploy-messenger deploy ICM Messenger (default true)
--deploy-registry deploy ICM Registry (default true)
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
--force-registry-deploy deploy ICM Registry even if Messenger has already been deployed
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key to fund ICM deploy
-h, --help help for deploy
--include-cchain deploy ICM also to C-Chain
--key string CLI stored key to use to fund ICM deploy
-l, --local operate on a local network
--messenger-contract-address-path string path to a messenger contract address file
--messenger-deployer-address-path string path to a messenger deployer address file
--messenger-deployer-tx-path string path to a messenger deployer tx file
--private-key string private key to use to fund ICM deploy
--registry-bytecode-path string path to a registry bytecode file
--rpc-url string use the given RPC URL to connect to the subnet
-t, --testnet fuji operate on testnet (alias to fuji)
--version string version to deploy (default "latest")
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### sendMsg
Sends an ICM message between two subnets and waits for it to be received.
**Usage:**
```bash
avalanche teleporter sendMsg [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--dest-rpc string use the given destination blockchain rpc endpoint
--destination-address string deliver the message to the given contract destination address
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
--genesis-key use genesis allocated key as message originator and to pay source blockchain fees
-h, --help help for sendMsg
--hex-encoded given message is hex encoded
--key string CLI stored key to use as message originator and to pay source blockchain fees
-l, --local operate on a local network
--private-key string private key to use as message originator and to pay source blockchain fees
--source-rpc string use the given source blockchain rpc endpoint
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
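As a sketch, a message could be sent between two locally deployed blockchains as follows. The RPC URLs, key name, and message text are placeholders, and the trailing message argument is an assumption about the command's positional arguments:

```bash
avalanche teleporter sendMsg \
  --local \
  --key mykey \
  --source-rpc http://127.0.0.1:9650/ext/bc/subnetA/rpc \
  --dest-rpc http://127.0.0.1:9650/ext/bc/subnetB/rpc \
  "hello from subnetA"
```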
## avalanche transaction
The transaction command suite provides all of the utilities required to sign multisig transactions.
**Usage:**
```bash
avalanche transaction [subcommand] [flags]
```
**Subcommands:**
* [`commit`](#avalanche-transaction-commit): The transaction commit command commits a transaction by submitting it to the P-Chain.
* [`sign`](#avalanche-transaction-sign): The transaction sign command signs a multisig transaction.
**Flags:**
```bash
-h, --help help for transaction
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### commit
The transaction commit command commits a transaction by submitting it to the P-Chain.
**Usage:**
```bash
avalanche transaction commit [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for commit
--input-tx-filepath string Path to the transaction signed by all signatories
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### sign
The transaction sign command signs a multisig transaction.
**Usage:**
```bash
avalanche transaction sign [subcommand] [flags]
```
**Flags:**
```bash
-h, --help help for sign
--input-tx-filepath string Path to the transaction file for signing
-k, --key string select the key to use [fuji only]
-g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings use the given ledger addresses
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
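The two subcommands combine into a simple multisig flow. A hypothetical example, where `./partial.tx` is a transaction file produced by an earlier command run with `--output-tx-path` and `signer1` is a stored key name:

```bash
# Each signatory signs the same transaction file in turn
avalanche transaction sign --input-tx-filepath ./partial.tx --key signer1
# Once all signatories have signed, anyone can submit it to the P-Chain
avalanche transaction commit --input-tx-filepath ./partial.tx
```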
## avalanche update
Check if an update is available, and prompt the user to install it
**Usage:**
```bash
avalanche update [subcommand] [flags]
```
**Flags:**
```bash
-c, --confirm Assume yes for installation
-h, --help help for update
-v, --version version for update
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
## avalanche validator
The validator command suite provides a collection of tools for managing validator
balance on the P-Chain.
A validator's balance is used to pay the continuous fee to the P-Chain. When this balance reaches 0,
the validator is considered inactive and no longer participates in validating the L1.
**Usage:**
```bash
avalanche validator [subcommand] [flags]
```
**Subcommands:**
* [`getBalance`](#avalanche-validator-getbalance): This command gets the remaining validator P-Chain balance that is available to pay
the P-Chain continuous fee
* [`increaseBalance`](#avalanche-validator-increasebalance): This command increases the validator P-Chain balance
* [`list`](#avalanche-validator-list): This command gets a list of the validators of the L1
**Flags:**
```bash
-h, --help help for validator
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### getBalance
This command gets the remaining validator P-Chain balance that is available to pay
the P-Chain continuous fee
**Usage:**
```bash
avalanche validator getBalance [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for getBalance
--l1 string name of L1
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-id string node ID of the validator
-t, --testnet fuji operate on testnet (alias to fuji)
--validation-id string validation ID of the validator
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### increaseBalance
This command increases the validator P-Chain balance
**Usage:**
```bash
avalanche validator increaseBalance [subcommand] [flags]
```
**Flags:**
```bash
--balance float amount of AVAX to increase validator's balance by
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for increaseBalance
-k, --key string select the key to use [fuji/devnet deploy only]
--l1 string name of L1 (to increase balance of bootstrap validators only)
-l, --local operate on a local network
-m, --mainnet operate on mainnet
--node-id string node ID of the validator
-t, --testnet fuji operate on testnet (alias to fuji)
--validation-id string validation ID of the validator
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
### list
This command gets a list of the validators of the L1
**Usage:**
```bash
avalanche validator list [subcommand] [flags]
```
**Flags:**
```bash
--cluster string operate on the given cluster
--devnet operate on a devnet network
--endpoint string use the given endpoint for network operations
-f, --fuji testnet operate on fuji (alias to testnet)
-h, --help help for list
-l, --local operate on a local network
-m, --mainnet operate on mainnet
-t, --testnet fuji operate on testnet (alias to fuji)
--config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions
```
# Make Avalanche L1 Permissionless
URL: /docs/avalanche-l1s/elastic-avalanche-l1s/make-avalanche-l1-permissionless
Learn how to transform a Permissioned Avalanche L1 into an Elastic Avalanche L1.
> Elastic L1s / Elastic Subnets have been deprecated. Please check out the PoS Validator Manager instead.
Elastic Avalanche L1s are permissionless Avalanche L1s. More information can be found [here](/docs/avalanche-l1s/elastic-avalanche-l1s/parameters).
This how-to guide focuses on taking an already created permissioned Avalanche L1 and transforming it to an elastic (or permissionless) Avalanche L1.
## Prerequisites
* [Avalanche-CLI installed](/docs/tooling/get-avalanche-cli)
* You have deployed a permissioned Avalanche L1 on [local](/docs/avalanche-l1s/deploy-a-avalanche-l1/local-network), on [Fuji](/docs/avalanche-l1s/deploy-a-avalanche-l1/fuji-testnet) or on [Mainnet](/docs/avalanche-l1s/deploy-a-avalanche-l1/avalanche-mainnet)
## Getting Started[](#getting-started "Direct link to heading")
In the following commands, make sure to substitute the name of your Avalanche L1 configuration for ``.
To transform your permissioned Avalanche L1 into an Elastic Avalanche L1 (NOTE: this action is irreversible), run:
```bash
avalanche blockchain elastic
```
Then select the network on which you want to transform the Avalanche L1. Alternatively, you can bypass this prompt by providing the `--local`, `--fuji`, or `--mainnet` flag.
Provide the name and the symbol for the permissionless Avalanche L1's native token. You can also bypass this prompt by providing the `--tokenName` and `--tokenSymbol` flags.
Next, select the Elastic Avalanche L1 config. You can choose to use the default values detailed [here](/docs/avalanche-l1s/elastic-avalanche-l1s/parameters#primary-network-parameters-on-mainnet) or customize the Elastic Avalanche L1 config. To bypass the prompt, you can use the `--default` flag to apply the default Elastic Avalanche L1 config.
The command may take a couple of minutes to run.
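All of the prompts above can be bypassed at once. A minimal non-interactive sketch on a local network, where the configuration name, token name, and token symbol are placeholders:

```bash
avalanche blockchain elastic mySubnet \
  --local \
  --tokenName "My Token" \
  --tokenSymbol MYT \
  --default
```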
### Elastic Avalanche L1 Transformation on Fuji and Mainnet[](#elastic-avalanche-l1-transformation-on-fuji-and-mainnet "Direct link to heading")
Elastic Avalanche L1 transformation on a public network requires a private key loaded into the tool, or a connected ledger device.
Both stored key usage and ledger usage are enabled on Fuji (see more on creating keys [here](/docs/avalanche-l1s/deploy-a-avalanche-l1/fuji-testnet#private-key)), while only ledger usage is enabled on Mainnet (see more on setting up your ledger [here](/docs/avalanche-l1s/deploy-a-avalanche-l1/avalanche-mainnet#setting-up-your-ledger)).
To transform a permissioned Avalanche L1 into an Elastic Avalanche L1 on public networks, you must provide the keys that control the Avalanche L1, as defined during the Avalanche L1 deployment process (more info on keys on Fuji can be found [here](/docs/avalanche-l1s/deploy-a-avalanche-l1/fuji-testnet#deploy-the-avalanche-l1), while more info on ledger signing on Mainnet can be found [here](/docs/avalanche-l1s/deploy-a-avalanche-l1/avalanche-mainnet#deploy-the-avalanche-l1)).
### Results[](#results "Direct link to heading")
If all works as expected, you then have the option to automatically transform all existing permissioned validators to permissionless validators.
You can also skip automatic transformation at this point and choose to add permissionless validators manually later.
You can use the output details such as the Asset ID and Elastic Avalanche L1 ID (SubnetID) to connect to and interact with your Elastic Avalanche L1.
## Adding Permissionless Validators to Elastic Avalanche L1[](#adding-permissionless-validators-to-elastic-avalanche-l1 "Direct link to heading")
If you are running this command on a local network, you first need to remove permissioned validators (by running `avalanche subnet removeValidator`) so that you have a list of local nodes to choose from when adding a permissionless validator to the Elastic Avalanche L1.
To add permissionless validators to an Elastic Avalanche L1, run:
```bash
avalanche blockchain join --elastic
```
You will be prompted for the node you would like to add as a permissionless validator. You can skip this prompt by using the `--nodeID` flag.
You will then be prompted for the amount of the Avalanche L1 native token that you would like to stake on the validator. Alternatively, you can bypass this prompt by providing the `--stake-amount` flag. Note that choosing to add the maximum validator stake amount (defined during the Elastic Avalanche L1 transformation step above) means that you effectively disable delegation on your validator.
Next, select when the validator will start validating and how long it will be validating for. You can also bypass these prompts by using `--start-time` and `--staking-period` flags.
## Adding Permissionless Delegator to a Permissionless Validator in Elastic Avalanche L1[](#adding-permissionless-delegator-to-a-permissionless-validator-in-elastic-avalanche-l1 "Direct link to heading")
To add permissionless delegators, run:
```bash
avalanche blockchain addPermissionlessDelegator
```
You will be prompted for the Avalanche L1 validator you would like to delegate to. You can skip this prompt by using the `--nodeID` flag.
You will then be prompted for the amount of the Avalanche L1 native token that you would like to stake. Alternatively, you can bypass this prompt by providing the `--stake-amount` flag. The amount that can be delegated to a validator is detailed [here](/docs/avalanche-l1s/elastic-avalanche-l1s/parameters#delegators-weight-checks).
Next, select when you want to start delegating and how long you want to delegate for. You can also bypass these prompts by using `--start-time` and `--staking-period` flags.
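The two flows above can be scripted with the flags mentioned. A hypothetical sequence, where the configuration name, NodeID, stake amounts, and time/duration formats are all illustrative placeholders:

```bash
# Add a permissionless validator with an explicit stake and schedule
avalanche blockchain join mySubnet --elastic \
  --nodeID NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg \
  --stake-amount 2000 \
  --start-time 2024-04-01T15:00:00Z \
  --staking-period 336h

# Delegate to that validator
avalanche blockchain addPermissionlessDelegator mySubnet \
  --nodeID NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg \
  --stake-amount 25 \
  --start-time 2024-04-01T15:05:00Z \
  --staking-period 336h
```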
# Parameters
URL: /docs/avalanche-l1s/elastic-avalanche-l1s/parameters
Learn about the different parameters of Elastic Avalanche L1s.
> Elastic L1s / Elastic Subnets have been deprecated. Please check out the PoS Validator Manager instead.
Permissioned Avalanche L1s can be turned into Elastic Avalanche L1s via the `TransformSubnetTx` transaction. `TransformSubnetTx` specifies a set of structural parameters for the Elastic Avalanche L1.
This reference document describes these structural parameters and illustrates the constraints they must satisfy.
## Elastic Avalanche L1 Parameters
### `Subnet`
`Subnet` has type `ids.ID` and it's the Avalanche L1 ID (SubnetID). `Subnet` is the ID of the `CreateSubnetTx` transaction that created the Avalanche L1 in the first place. The following constraints apply:
* `Subnet` must be different from `PrimaryNetworkID`.
### `AssetID`
`AssetID` has type `ids.ID` and it's the ID of the asset to use when staking on the Avalanche L1. The following constraints apply:
* `AssetID` must not be the `Empty ID`.
* `AssetID` must not be `AVAX ID`, the Primary Network asset.
### `InitialSupply`
`InitialSupply` has type `uint64` and it's the initial amount of `AssetID` transferred in the Elastic Avalanche L1 upon its transformation. Such amount is available for distributing staking rewards. The following constraints apply:
* `InitialSupply` must be larger than zero.
### `MaximumSupply`
`MaximumSupply` has type `uint64` and it's the maximum amount of `AssetID` that Avalanche L1 has available for staking and rewards at any time. The following constraints apply:
* `MaximumSupply` must be larger or equal to `InitialSupply`.
An Avalanche L1's supply can vary over time, but it must never exceed the configured maximum at any point, including at Avalanche L1 creation.
### `MinConsumptionRate`
`MinConsumptionRate` has type `uint64` and it's the minimal rate a validator can earn if the `UptimeRequirement` is satisfied. If `StakingPeriod` == `MinStakeDuration`, the validator will earn the `MinConsumptionRate`.
You can find more details about it in the [Reward Formula section](#reward-formula). The following constraints apply:
* `MinConsumptionRate` must be smaller or equal to `PercentDenominator`.
See [Notes on Percentages](#notes-on-percentages) section to understand `PercentDenominator` role.
### `MaxConsumptionRate`
`MaxConsumptionRate` has type `uint64` and it's the maximum rate a validator can earn if the `UptimeRequirement` is satisfied. If `StakingPeriod` == `MaxStakeDuration` == `MintingPeriod`, the validator will earn the `MaxConsumptionRate`.
You can find more details about it in the [Reward Formula section](#reward-formula). The following constraints apply:
* `MaxConsumptionRate` must be larger or equal to `MinConsumptionRate`.
* `MaxConsumptionRate` must be smaller or equal to `PercentDenominator`.
See [Notes on Percentages](#notes-on-percentages) section to understand `PercentDenominator` role.
### `MinValidatorStake`
`MinValidatorStake` has type `uint64` and it's the minimum amount of funds required to become a validator. The following constraints apply:
* `MinValidatorStake` must be larger than zero.
* `MinValidatorStake` must be smaller or equal to `InitialSupply`.
### `MaxValidatorStake`
`MaxValidatorStake` has type `uint64` and it's the maximum amount of funds a single validator can be allocated, including delegated funds. The following constraints apply:
* `MaxValidatorStake` must be larger or equal to `MinValidatorStake`.
* `MaxValidatorStake` must be smaller or equal to `MaximumSupply`.
### `MinStakeDuration`
`MinStakeDuration` has type `uint32` and it's the minimum number of seconds a staker can stake for. The following constraints apply:
* `MinStakeDuration` must be larger than zero.
### `MaxStakeDuration`
`MaxStakeDuration` has type `uint32` and it's the maximum number of seconds a staker can stake for. The following constraints apply:
* `MaxStakeDuration` must be larger or equal to `MinStakeDuration`.
* `MaxStakeDuration` must be smaller or equal to `GlobalMaxStakeDuration`.
`GlobalMaxStakeDuration` is defined in genesis and applies to both the Primary Network and all Avalanche L1s.
Its Mainnet value is `365 * 24 * time.Hour`, that is, one year.
### `MinDelegationFee`
`MinDelegationFee` has type `uint32` and it's the minimum fee rate a delegator must pay to its validator for delegating. `MinDelegationFee` is a percentage; the actual fee is calculated by multiplying the fee rate by the delegator's reward. The following constraints apply:
* `MinDelegationFee` must be smaller or equal to `PercentDenominator`.
The `MinDelegationFee` rate applies to the Primary Network as well. Its Mainnet value is $2\%$.
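As a rough sketch, the fee taken out of a delegator's reward can be computed as below. The function name and integer arithmetic are illustrative, not AvalancheGo's exact implementation:

```python
PERCENT_DENOMINATOR = 1_000_000

def delegation_fee(delegator_reward: int, fee_rate: int) -> int:
    """Fee paid to the validator, taken out of the delegator's reward.

    `fee_rate` is denominated in PercentDenominator units,
    e.g. 20_000 for the Mainnet minimum of 2%.
    """
    return delegator_reward * fee_rate // PERCENT_DENOMINATOR

# A 1_000-token delegator reward at the 2% Mainnet minimum fee:
# delegation_fee(1_000, 20_000) -> 20
```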
### `MinDelegatorStake`
`MinDelegatorStake` has type `uint64` and it's the minimum amount of funds required to become a delegator. The following constraints apply:
* `MinDelegatorStake` must be larger than zero.
### `MaxValidatorWeightFactor`
`MaxValidatorWeightFactor` has type `uint8` and it's the factor used to calculate the maximum amount of delegation a validator can receive. A value of 1 effectively disables delegation. You can find more details about it in the [Delegators Weight Checks section](#delegators-weight-checks). The following constraints apply:
* `MaxValidatorWeightFactor` must be larger than zero.
### `UptimeRequirement`
`UptimeRequirement` has type `uint32` and it's the minimum percentage of its staking time that a validator must be online and responsive in order to receive a reward. The following constraints apply:
* `UptimeRequirement` must be smaller or equal to `PercentDenominator`.
See [Notes on Percentages](#notes-on-percentages) section to understand `PercentDenominator` role.
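Taken together, the numeric constraints above can be summarized in a short validity check. This is an illustrative sketch (the dict layout and names are assumptions, not AvalancheGo's actual types; the ID-typed checks for `Subnet` and `AssetID` are omitted):

```python
PERCENT_DENOMINATOR = 1_000_000
GLOBAL_MAX_STAKE_DURATION = 365 * 24 * 3600  # Mainnet value, in seconds

def check_transform_params(p: dict) -> None:
    """Assert the numeric constraints listed in this document."""
    assert p["InitialSupply"] > 0
    assert p["MaximumSupply"] >= p["InitialSupply"]
    assert p["MinConsumptionRate"] <= p["MaxConsumptionRate"] <= PERCENT_DENOMINATOR
    assert 0 < p["MinValidatorStake"] <= p["InitialSupply"]
    assert p["MinValidatorStake"] <= p["MaxValidatorStake"] <= p["MaximumSupply"]
    assert 0 < p["MinStakeDuration"] <= p["MaxStakeDuration"] <= GLOBAL_MAX_STAKE_DURATION
    assert p["MinDelegationFee"] <= PERCENT_DENOMINATOR
    assert p["MinDelegatorStake"] > 0
    assert p["MaxValidatorWeightFactor"] > 0
    assert p["UptimeRequirement"] <= PERCENT_DENOMINATOR
```

For example, a parameter set modeled on the Primary Network's Mainnet values passes all checks.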
## Reward Formula
Consider an Elastic Avalanche L1 validator which stakes an amount $Stake$ of `AssetID` for $StakingPeriod$ seconds.
Assume that at the start of the staking period there is a $Supply$ amount of `AssetID` in the Avalanche L1. The maximum amount of Avalanche L1 asset is $MaximumSupply$ `AssetID`.
Then at the end of its staking period, a responsive Elastic Avalanche L1 validator receives a reward calculated as follows:
$$
Reward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{StakingPeriod}{MintingPeriod} \times EffectiveConsumptionRate
$$
where:
$$
MaximumSupply - Supply = \text{the number of tokens left to emit in the Avalanche L1}
$$
$$
\frac{Stake}{Supply} = \text{the individual's stake as a percentage of all available tokens in the network}
$$
$$
\frac{StakingPeriod}{MintingPeriod} = \text{the fraction of the minting period for which the tokens are locked up}
$$
($MintingPeriod$ is one year, as configured by the Primary Network.)
$$
EffectiveConsumptionRate = \frac{MinConsumptionRate}{PercentDenominator} \times \left(1 - \frac{StakingPeriod}{MintingPeriod}\right) + \frac{MaxConsumptionRate}{PercentDenominator} \times \frac{StakingPeriod}{MintingPeriod}
$$
Note that $StakingPeriod$ is the staker's entire staking period, not just the staker's uptime (the aggregate time during which the staker has been responsive). Uptime comes into play only to decide whether a staker should be rewarded; the actual reward is calculated from the full staking period duration.
$EffectiveConsumptionRate$ is the rate at which the validator is rewarded based on $StakingPeriod$ selection.
$MinConsumptionRate$ and $MaxConsumptionRate$ bound $EffectiveConsumptionRate$:
$$
\frac{MinConsumptionRate}{PercentDenominator} \leq EffectiveConsumptionRate \leq \frac{MaxConsumptionRate}{PercentDenominator}
$$
The larger $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MaxConsumptionRate$. The smaller $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MinConsumptionRate$.
A staker achieves the maximum reward for its stake if $StakingPeriod = MintingPeriod$. The reward is:
$$
MaxReward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{MaxConsumptionRate}{PercentDenominator}
$$
Note that this formula is the same as the reward formula at the top of this section because $EffectiveConsumptionRate$ = $MaxConsumptionRate$.
The reward formula above is used in the Primary Network to calculate stakers' rewards. For reference, you can find the Primary Network's parameters in [the section below](#primary-network-parameters-on-mainnet).
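The formula can be sketched in Python for illustration. Floating-point arithmetic is used here for readability; the actual implementation works in fixed-point integer arithmetic, so results may differ slightly at the margins:

```python
PERCENT_DENOMINATOR = 1_000_000

def staking_reward(stake, supply, maximum_supply,
                   staking_period, minting_period,
                   min_consumption_rate, max_consumption_rate):
    """Reward for a responsive staker at the end of its staking period."""
    period_ratio = staking_period / minting_period
    effective_rate = (
        (min_consumption_rate / PERCENT_DENOMINATOR) * (1 - period_ratio)
        + (max_consumption_rate / PERCENT_DENOMINATOR) * period_ratio
    )
    return (maximum_supply - supply) * (stake / supply) * period_ratio * effective_rate

# Staking 2_000 tokens for the full minting period with Primary-Network-like
# parameters (supply 240M, max supply 720M, rates 10%-12%) yields ~480 tokens.
```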
## Delegators Weight Checks
There are bounds on the maximum amount of delegator stake that a validator can receive.
The maximum weight $MaxWeight$ a validator $Validator$ can have is:
$$
MaxWeight = \min(Validator.Weight \times MaxValidatorWeightFactor, MaxValidatorStake)
$$
where $MaxValidatorWeightFactor$ and $MaxValidatorStake$ are the Elastic Avalanche L1 Parameters described above.
A delegator won't be added to a validator if the resulting total weight (the validator's own weight plus all of its delegators' weights, including the new delegation) would be larger than $MaxWeight$. Note that this must hold at any point in time.
Note that setting $MaxValidatorWeightFactor$ to 1 disables delegation, since then $MaxWeight = Validator.Weight$.
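A minimal sketch of this check (function names and argument layout are illustrative):

```python
def max_weight(validator_weight, weight_factor, max_validator_stake):
    # MaxWeight = min(Validator.Weight * MaxValidatorWeightFactor, MaxValidatorStake)
    return min(validator_weight * weight_factor, max_validator_stake)

def can_add_delegation(validator_weight, delegated_so_far, new_delegation,
                       weight_factor, max_validator_stake):
    """A new delegation is allowed only if the validator's total weight
    (own stake plus all delegations) stays within MaxWeight."""
    total = validator_weight + delegated_so_far + new_delegation
    return total <= max_weight(validator_weight, weight_factor, max_validator_stake)
```

With `weight_factor = 1`, `max_weight` equals the validator's own weight, so any delegation at all is rejected.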
## Notes on Percentages
`PercentDenominator = 1_000_000` is the denominator used to calculate percentages.
It allows percentages to be specified with up to 4 decimal places. To denominate your percentage in `PercentDenominator` units, just multiply it by `10_000`. For example:
* `100%` corresponds to `100 * 10_000 = 1_000_000`
* `1%` corresponds to `1 * 10_000 = 10_000`
* `0.02%` corresponds to `0.02 * 10_000 = 200`
* `0.0007%` corresponds to `0.0007 * 10_000 = 7`
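The conversion is a one-liner; this illustrative helper uses `round` to guard against floating-point error:

```python
PERCENT_DENOMINATOR = 1_000_000

def to_denominated(percent) -> int:
    """Convert a human-readable percentage (e.g. 0.02 for 0.02%)
    into PercentDenominator units."""
    return round(percent * 10_000)

# to_denominated(100)  -> 1_000_000  (100%)
# to_denominated(0.02) -> 200
```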
## Primary Network Parameters on Mainnet
An Elastic Avalanche L1 is free to pick any parameters affecting rewards, within the constraints specified above. For reference, we list below the Primary Network's parameters on Mainnet:
* `AssetID = Avax`
* `InitialSupply = 240_000_000 Avax`
* `MaximumSupply = 720_000_000 Avax`.
* `MinConsumptionRate = 0.10 * reward.PercentDenominator`.
* `MaxConsumptionRate = 0.12 * reward.PercentDenominator`.
* `Minting Period = 365 * 24 * time.Hour`.
* `MinValidatorStake = 2_000 Avax`.
* `MaxValidatorStake = 3_000_000 Avax`.
* `MinStakeDuration = 2 * 7 * 24 * time.Hour`.
* `MaxStakeDuration = 365 * 24 * time.Hour`.
* `MinDelegationFee = 20000`, that is `2%`.
* `MinDelegatorStake = 25 Avax`.
* `MaxValidatorWeightFactor = 5`. This is a platformVM parameter rather than a genesis one, so it's shared across networks.
* `UptimeRequirement = 0.8`, that is `80%`.
### Interactive Graph
The graph below demonstrates the reward as a function of the length of time
staked. The x-axis depicts $\frac{StakingPeriod}{MintingPeriod}$ as a percentage
while the y-axis depicts $Reward$ as a percentage of $MaximumSupply - Supply$,
the amount of tokens left to be emitted.
Graph variables correspond to those defined above:
* `h` (high) = $MaxConsumptionRate$
* `l` (low) = $MinConsumptionRate$
* `s` = $\frac{Stake}{Supply}$
# Considerations
URL: /docs/avalanche-l1s/upgrade/considerations
Learn about some of the key considerations while upgrading your Avalanche L1.
In the course of Avalanche L1 operation, you will inevitably need to upgrade or change some part of the software stack that is running your Avalanche L1. If nothing else, you will have to upgrade the AvalancheGo node client. The same goes for the VM plugin binary that is used to run the blockchain on your Avalanche L1, which is most likely [Subnet-EVM](https://github.com/ava-labs/subnet-evm), the Avalanche L1 implementation of the Ethereum Virtual Machine.
Node and VM upgrades usually don't change the way your Avalanche L1 functions; instead, they keep your Avalanche L1 in sync with the rest of the network, bringing security, performance, and feature upgrades. Most upgrades are optional, but all of them are recommended, and you should make optional upgrades part of your routine Avalanche L1 maintenance. Some upgrades will be mandatory, and those will be clearly communicated as such ahead of time; you need to pay special attention to those.
Besides the upgrades due to new releases, you may also want to change the configuration of the VM, to alter the way your Avalanche L1 runs, for various business or operational needs. These upgrades are solely the purview of your team, and you have complete control over the timing of their rollout. Any such change represents a **network upgrade** and needs to be carefully planned and executed.
Network Upgrades Permanently Change the Rules of Your Avalanche L1. Procedural mistakes or a botched upgrade can halt your Avalanche L1 or lead to data loss!
When performing an Avalanche L1 upgrade, every single validator on the Avalanche L1 will need to perform the identical upgrade.
If you are coordinating a network upgrade, you must schedule advance notice to every Avalanche L1 validator so that they have time to perform the upgrade prior to activation. Make sure you have direct line of communication to all your validators!
This tutorial will guide you through the process of doing various Avalanche L1 upgrades and changes. We will point out things to watch out for and precautions you need to be mindful about.
## General Upgrade Considerations[](#general-upgrade-considerations "Direct link to heading")
When operating an Avalanche L1, you should always keep in mind that Proof of Stake networks like Avalanche can only make progress if a sufficient number of validating nodes are connected and processing transactions. Each validator on an Avalanche L1 is assigned a certain `weight`, which is a numerical value representing the significance of the node in consensus decisions. On the Primary Network, weight is equal to the amount of AVAX staked on the node. On Avalanche L1s, weight is currently assigned by the Avalanche L1 owners when they issue the transaction adding a validator to the Avalanche L1.
Avalanche L1s can operate normally only if validators representing 80% or more of the cumulative validator weight are connected. If the amount of connected stake falls close to or below 80%, Avalanche L1 performance (time to finality) will suffer, and ultimately the Avalanche L1 will halt (stop processing transactions).
You as an Avalanche L1 operator need to ensure that whatever you do, at least 80% of the validators' cumulative weight is connected and working at all times.
The cumulative weight of all validators in the Avalanche L1 must be at least the value of [`snow-sample-size`](/docs/nodes/configure/configs-flags#--snow-sample-size-int) (default 20). For example, if there is only one validator in the Avalanche L1, its weight must be at least `snow-sample-size`. Hence, when assigning weight to the nodes, always use values greater than 20. Recall that a validator's weight can't be changed while it is validating, so take care to use an appropriate value.
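The 80% rule above can be illustrated with a small helper. The data layout (a mapping from node ID to weight, plus a set of connected node IDs) is an assumption for this sketch, not AvalancheGo's internal representation:

```python
def connected_stake_fraction(weights: dict, connected: set) -> float:
    """`weights` maps nodeID -> validator weight; `connected` is the set
    of nodeIDs currently connected and responsive."""
    total = sum(weights.values())
    online = sum(w for node, w in weights.items() if node in connected)
    return online / total

def l1_can_progress(weights: dict, connected: set, threshold: float = 0.8) -> bool:
    # An Avalanche L1 needs >= 80% of cumulative validator weight connected.
    return connected_stake_fraction(weights, connected) >= threshold
```

When staggering upgrades, you would take one node offline at a time and confirm the remaining connected weight still clears the threshold.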
## Upgrading Avalanche L1 Validator Nodes[](#upgrading-avalanche-l1-validator-nodes "Direct link to heading")
AvalancheGo, the node client that runs Avalanche validators, is under constant and rapid development. New versions come out often (roughly every two weeks), bringing added capabilities, performance improvements, or security fixes. Updates are usually optional, but from time to time (much less frequently than regular updates) there will be an update that includes a mandatory network upgrade. Those upgrades are **MANDATORY** for every node running the Avalanche L1. Any node that does not perform the update before the activation timestamp will immediately stop working when the upgrade activates.
That's why having a node upgrade strategy is absolutely vital, and you should always update to the latest AvalancheGo client immediately when it is made available.
For a general guide on upgrading AvalancheGo, check out [this tutorial](/docs/nodes/maintain/upgrade). When upgrading Avalanche L1 nodes, and keeping in mind the previous section, make sure to stagger node upgrades and start a new upgrade only once the previous node has successfully upgraded. Use the [Health API](/docs/api-reference/health-api#healthhealth) to check that the `healthy` value in the response is `true` on the upgraded node, and on other Avalanche L1 validators check that [platform.getCurrentValidators()](/docs/api-reference/p-chain/api#platformgetcurrentvalidators) reports `connected: true` for the upgraded node's `nodeID`. Once those two conditions are satisfied, the node is confirmed to be online and validating the Avalanche L1, and you can start upgrading another node.
Continue the upgrade cycle until all the Avalanche L1 nodes are upgraded.
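The two readiness checks above can be expressed as small helpers over the parsed JSON responses. The field names (`healthy`, `validators`, `nodeID`, `connected`) follow the Health API and `platform.getCurrentValidators` documentation linked above; the parsing itself is an illustrative sketch, not an official client:

```python
def node_is_healthy(health_result: dict) -> bool:
    """Check the `result` object returned by health.health."""
    return health_result.get("healthy") is True

def node_is_connected(validators_result: dict, node_id: str) -> bool:
    """Check platform.getCurrentValidators output for the upgraded node."""
    return any(
        v.get("nodeID") == node_id and v.get("connected") is True
        for v in validators_result.get("validators", [])
    )
```

Only when both return `True` for the upgraded node should you move on to the next validator.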
## Upgrading Avalanche L1 VM Plugin Binaries[](#upgrading-avalanche-l1-vm-plugin-binaries "Direct link to heading")
Besides the AvalancheGo client itself, new versions get released for the VM binaries that run the blockchains on the Avalanche L1. On most Avalanche L1s, that is the [Subnet-EVM](https://github.com/ava-labs/subnet-evm), so this tutorial will go through the steps for updating the `subnet-evm` binary. The update process will be similar for updating any VM plugin binary.
All the considerations for doing staggered node upgrades as discussed in previous section are valid for VM upgrades as well.
In the future, VM upgrades will be handled by the [Avalanche-CLI tool](https://github.com/ava-labs/avalanche-cli), but for now we need to do it manually.
Go to the [releases page](https://github.com/ava-labs/subnet-evm/releases) of the Subnet-EVM repository. Locate the latest version, and copy the link that corresponds to the OS and architecture of the machine the node is running on (`darwin` = Mac, `amd64` = Intel/AMD processor, `arm64` = Arm processor). Log into the machine where the node is running and download the archive, using `wget` and the link to the archive, like this:
```bash
wget https://github.com/ava-labs/subnet-evm/releases/download/v0.2.9/subnet-evm_0.2.9_linux_amd64.tar.gz
```
This will download the archive to the machine. Unpack it like this (use the correct filename, of course):
```bash
tar xvf subnet-evm_0.2.9_linux_amd64.tar.gz
```
This will unpack and place the contents of the archive in the current directory; the file `subnet-evm` is the plugin binary. You need to stop the node now (if the node is running as a service, use the `sudo systemctl stop avalanchego` command). You then need to place that file into the plugins directory where the AvalancheGo binary is located. If the node was installed using the install script, the path will be `~/avalanche-node/plugins`. Instead of the `subnet-evm` filename, the VM binary needs to be named after the VM ID of the chain on the Avalanche L1. For example, for the [WAGMI Avalanche L1](/docs/avalanche-l1s/wagmi-avalanche-l1) that VM ID is `srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy`. So, the command to copy the new plugin binary would look like:
```bash
cp subnet-evm ~/avalanche-node/plugins/srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy
```
Make sure you use the correct VM ID, otherwise, your VM will not get updated and your Avalanche L1 may halt.
After you do that, you can start the node back up (if running as a service, use `sudo systemctl start avalanchego`). You can monitor the log output on the node to check that everything is OK, or you can use the [info.getNodeVersion()](/docs/api-reference/info-api#infogetnodeversion) API to check the versions. Example output would look like:
```json
{
"jsonrpc": "2.0",
"result": {
"version": "avalanche/1.7.18",
"databaseVersion": "v1.4.5",
"gitCommit": "b6d5827f1a87e26da649f932ad649a4ea0e429c4",
"vmVersions": {
"avm": "v1.7.18",
"evm": "v0.8.15",
"platform": "v1.7.18",
"sqja3uK17MJxfC7AN8nGadBw9JK5BcrsNwNynsqP5Gih8M5Bm": "v0.0.7",
"srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy": "v0.2.9"
}
},
"id": 1
}
```
Note that the entry next to the VM ID we upgraded correctly says `v0.2.9`. You have successfully upgraded the VM!
Refer to the previous section on how to make sure the node is healthy and connected before moving on to upgrading the next Avalanche L1 validator.
If you don't get the expected result, you can stop AvalancheGo and carefully repeat the steps above. You are free to remove files under `~/avalanche-node/plugins`; however, keep in mind that removing a file there removes an existing VM binary. You must put the correct VM plugin in place before you restart AvalancheGo.
## Network Upgrades[](#network-upgrades "Direct link to heading")
Sometimes you need to do a network upgrade to change the rules, configured in the genesis, under which the chain operates. In regular EVM, network upgrades are a pretty involved process that includes deploying the new EVM binary, coordinating the timed upgrade, and deploying changes to the nodes. But since [Subnet-EVM v0.2.8](https://github.com/ava-labs/subnet-evm/releases/tag/v0.2.8), we introduced the long-awaited feature to perform network upgrades using just a few lines of JSON. Upgrades can consist of enabling/disabling particular precompiles, or changing their parameters. Currently available precompiles allow you to:
* Restrict Smart Contract Deployers
* Restrict Who Can Submit Transactions
* Mint Native Coins
* Configure Dynamic Fees
Please refer to [Customize an Avalanche L1](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#network-upgrades-enabledisable-precompiles) for a detailed discussion of possible precompile upgrade parameters.
## Summary[](#summary "Direct link to heading")
A vital part of Avalanche L1 maintenance is performing timely upgrades at all levels of the software stack running your Avalanche L1. We hope this tutorial gives you enough information and context to do those upgrades with confidence and ease. If you have additional questions or any issues, please reach out to us on [Discord](https://chat.avalabs.org/).
# Customize an Avalanche L1
URL: /docs/avalanche-l1s/upgrade/customize-avalanche-l1
Learn how to customize your EVM-powered Avalanche L1.
All Avalanche L1s can be customized by utilizing [`L1s Configs`](#avalanche-l1-configs).
An Avalanche L1 can have one or more blockchains. For example, the Primary Network, which is a special Avalanche L1, has three blockchains. Each chain can be further customized using a chain-specific configuration file. See [here](/docs/nodes/configure/configs-flags) for a detailed explanation.
An Avalanche L1 created by or forked from [Subnet-EVM](https://github.com/ava-labs/subnet-evm) can be customized by utilizing one or more of the following methods:
* [Genesis](#genesis)
* [Precompile](#precompiles)
* [Upgrade Configs](#network-upgrades-enabledisable-precompiles)
* [Chain Configs](#avalanchego-chain-configs)
## Avalanche L1 Configs[](#avalanche-l1-configs "Direct link to heading")
An Avalanche L1 can be customized by setting parameters for the following:
* [Validator-only communication to create a private Avalanche L1](/docs/nodes/configure/avalanche-l1-configs#validatoronly-bool)
* [Consensus](/docs/nodes/configure/avalanche-l1-configs#consensus-parameters)
* [Gossip](/docs/nodes/configure/avalanche-l1-configs#gossip-configs)
See [here](/docs/nodes/configure/avalanche-l1-configs) for more info.
## Genesis[](#genesis "Direct link to heading")
Each blockchain has some genesis state when it's created. Each Virtual Machine defines the format and semantics of its genesis data.
The default Subnet-EVM genesis provided below has some well-defined parameters:
```json
{
"config": {
"chainId": 43214,
"homesteadBlock": 0,
"eip150Block": 0,
"eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0",
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"muirGlacierBlock": 0,
"feeConfig": {
"gasLimit": 15000000,
"minBaseFee": 25000000000,
"targetGas": 15000000,
"baseFeeChangeDenominator": 36,
"minBlockGasCost": 0,
"maxBlockGasCost": 1000000,
"targetBlockRate": 2,
"blockGasCostStep": 200000
},
"allowFeeRecipients": false
},
"alloc": {
"8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": {
"balance": "0x295BE96E64066972000000"
}
},
"nonce": "0x0",
"timestamp": "0x0",
"extraData": "0x00",
"gasLimit": "0xe4e1c0",
"difficulty": "0x0",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"coinbase": "0x0000000000000000000000000000000000000000",
"number": "0x0",
"gasUsed": "0x0",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
}
```
### Chain Config[](#chain-config "Direct link to heading")
`chainID`: Denotes the ChainID of the chain to be created. It must be picked carefully, since a conflict with other chains can cause issues. One suggestion is to check [chainlist.org](https://chainlist.org/) to avoid an ID collision, and to reserve and publish your ChainID properly.
You can use `eth_getChainConfig` RPC call to get the current chain config. See [here](/docs/api-reference/subnet-evm-api#eth_getchainconfig) for more info.
#### Hard Forks[](#hard-forks "Direct link to heading")
`homesteadBlock`, `eip150Block`, `eip150Hash`, `eip155Block`, `byzantiumBlock`, `constantinopleBlock`, `petersburgBlock`, `istanbulBlock`, `muirGlacierBlock` are EVM hard fork activation times. Changing these may cause issues, so treat them carefully.
#### Fee Config[](#fee-config "Direct link to heading")
`gasLimit`: Sets the max amount of gas consumed per block. This restriction puts a cap on the amount of computation that can be done in a single block, which in turn sets a limit on the maximum gas usage allowed for a single transaction. For reference, the C-Chain value is set to `15,000,000`.
`targetBlockRate`: Sets the target rate of block production in seconds. A target of 2 will target producing a block every 2 seconds. If the network starts producing blocks at a faster rate, it indicates that more blocks than anticipated are being issued to the network, resulting in an increase in base fees. For the C-Chain, this value is set to `2`.
`minBaseFee`: Sets a lower bound on the EIP-1559 base fee of a block. Since the block's base fee sets the minimum gas price for any transaction included in that block, this effectively sets a minimum gas price for any transaction.
`targetGas`: Specifies the targeted amount of gas (including block gas cost) to consume within a rolling 10-second window. When the dynamic fee algorithm observes that network activity is above/below the `targetGas`, it increases/decreases the base fee proportionally to how far above/below the target actual network activity is. If the network starts producing blocks with gas cost higher than this, base fees are increased accordingly.
`baseFeeChangeDenominator`: Divides the difference between actual and target utilization to determine how much to increase/decrease the base fee. A larger denominator indicates a slower changing, stickier base fee, while a lower denominator allows the base fee to adjust more quickly. For reference, the C-Chain value is set to `36`. This value sets the base fee to increase or decrease by a factor of `1/36` of the parent block's base fee.
`minBlockGasCost`: Sets the minimum amount of gas to charge for the production of a block. This value is set to `0` in C-Chain.
`maxBlockGasCost`: Sets the maximum amount of gas to charge for the production of a block.
`blockGasCostStep`: Determines how much to increase/decrease the block gas cost depending on the amount of time elapsed since the previous block.
If the block is produced at the target rate, the block gas cost will stay the same as the block gas cost for the parent block.
If it is produced faster/slower, the block gas cost will be increased/decreased by the step value for each second faster/slower than the target block rate accordingly.
If the `blockGasCostStep` is set to a very large number, it effectively requires block production to go no faster than the `targetBlockRate`. For example, if a block is produced two seconds faster than the target block rate, the block gas cost will increase by `2 * blockGasCostStep`.
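The behavior described above can be sketched as follows. This is a simplification of Subnet-EVM's actual algorithm, with illustrative names; the cost moves by `step` per second of deviation from the target rate and is clamped to the configured bounds:

```python
def next_block_gas_cost(parent_cost, target_block_rate, seconds_elapsed,
                        step, min_cost, max_cost):
    """Raise the block gas cost when blocks arrive faster than the target
    rate, lower it when they arrive slower, clamped to [min_cost, max_cost]."""
    cost = parent_cost + step * (target_block_rate - seconds_elapsed)
    return max(min_cost, min(cost, max_cost))

# With C-Chain-like values (step=200_000, target rate 2s, bounds [0, 1_000_000]),
# a block arriving immediately (0s after its parent) raises the cost by 2 * step:
# next_block_gas_cost(0, 2, 0, 200_000, 0, 1_000_000) -> 400_000
```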
#### Custom Fee Recipients[](#custom-fee-recipients "Direct link to heading")
See section [Setting a Custom Fee Recipient](#setting-a-custom-fee-recipient)
### Alloc[](#alloc "Direct link to heading")
The fields `nonce`, `timestamp`, `extraData`, `gasLimit`, `difficulty`, `mixHash`, `coinbase`, `number`, `gasUsed`, and `parentHash` define the genesis block header. The field `gasLimit` should be set to match the `gasLimit` set in the `feeConfig`. You do not need to change any of the other genesis header fields.
`nonce`, `mixHash` and `difficulty` are remnant parameters from Proof of Work systems. For Avalanche, these don't play any relevant role, so you should just leave them as their default values:
`nonce`: This value is the result of the mining process iteration. It can be any value in the genesis block. Default value is `0x0`.
`mixHash`: The combination of `nonce` and `mixHash` allows one to verify that the block has really been cryptographically mined and is, from this aspect, valid. Default value is `0x0000000000000000000000000000000000000000000000000000000000000000`.
`difficulty`: The difficulty level applied during the nonce discovering process of this block. Default value is `0x0`.
`timestamp`: The timestamp of the creation of the genesis block. This is commonly set to `0x0`.
`extraData`: Optional extra data that can be included in the genesis block. This is commonly set to `0x`.
`gasLimit`: The total amount of gas that can be used in a single block. It should be set to the same value as in the [fee config](#fee-config). The value `e4e1c0` is hexadecimal and is equal to `15,000,000`.
`coinbase`: Refers to the address of the block producers. This also means it represents the recipient of the block reward. It is usually set to `0x0000000000000000000000000000000000000000` for the genesis block. To allow fee recipients in Subnet-EVM, refer to [this section.](#setting-a-custom-fee-recipient)
`parentHash`: This is the Keccak 256-bit hash of the entire parent block's header. It is usually set to `0x0000000000000000000000000000000000000000000000000000000000000000` for the genesis block.
`gasUsed`: This is the amount of gas used by the genesis block. It is usually set to `0x0`.
`number`: This is the number of the genesis block. It is required to be `0x0` for the genesis; otherwise, chain creation will error.
### Genesis Examples[](#genesis-examples "Direct link to heading")
Another example of a genesis file can be found in the [networks folder](https://github.com/ava-labs/public-chain-assets/blob/1951594346dcc91682bdd8929bcf8c1bf6a04c33/chains/11111/genesis.json). Please remove the `airdropHash` and `airdropAmount` fields if you want to start from it.
Here are a few examples on how a genesis file is used: [scripts/run.sh](https://github.com/ava-labs/subnet-evm/blob/master/scripts/run.sh#L99)
### Setting the Genesis Allocation[](#setting-the-genesis-allocation "Direct link to heading")
Alloc defines addresses and their initial balances. This should be changed accordingly for each chain. If you don't provide any genesis allocation, you won't be able to interact with your new chain (all transactions require a fee to be paid from the sender's balance).
The `alloc` field expects key-value pairs. The key of each entry must be a valid `address`. The `balance` field in the value can be either a `hexadecimal` string or a `number` indicating the initial balance of the address. The default value contains `8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` with a balance of `50000000000000000000000000`. Default:
```json
"alloc": {
"8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": {
"balance": "0x295BE96E64066972000000"
}
}
```
To specify a different genesis allocation, populate the `alloc` field in the genesis JSON as follows:
```json
"alloc": {
"8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": {
"balance": "0x52B7D2DCC80CD2E4000000"
},
"Ab5801a7D398351b8bE11C439e05C5B3259aeC9B": {
"balance": "0xa796504b1cb5a7c0000"
}
},
```
The keys in the allocation are [hex](https://en.wikipedia.org/wiki/Hexadecimal) addresses **without the canonical `0x` prefix**. The balances are denominated in Wei ([10^18 Wei = 1 Whole Unit of Native Token](https://eth-converter.com/)) and expressed as hex strings **with the canonical `0x` prefix**. You can use [this converter](https://www.rapidtables.com/convert/number/hex-to-decimal.html) to translate between decimal and hex numbers.
The above example yields the following genesis allocations (denominated in whole units of the native token, that is 1 AVAX/1 WAGMI):
```bash
0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC: 100000000 (0x52B7D2DCC80CD2E4000000=100000000000000000000000000 Wei)
0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B: 49463 (0xa796504b1cb5a7c0000=49463000000000000000000 Wei)
```
### Setting a Custom Fee Recipient[](#setting-a-custom-fee-recipient "Direct link to heading")
By default, all fees are burned (sent to the black hole address with `"allowFeeRecipients": false`). However, it is possible to enable block producers to set a fee recipient (who will get compensated for blocks they produce).
To enable this feature, you'll need to add the following to your genesis file (under the `"config"` key):
```json
{
"config": {
"allowFeeRecipients": true
}
}
```
#### Fee Recipient Address[](#fee-recipient-address "Direct link to heading")
With `allowFeeRecipients` enabled, your validators can specify their addresses to collect fees. They need to update their EVM [chain config](#avalanchego-chain-configs) with the following to specify where the fee should be sent to.
```json
{
"feeRecipient": ""
}
```
If the `allowFeeRecipients` feature is enabled on the Avalanche L1 but a validator doesn't specify a `feeRecipient`, the fees will be burned in the blocks it produces.
This mechanism can also be activated as a precompile. See the [Changing Fee Reward Mechanisms](#changing-fee-reward-mechanisms) section for more details.
## Precompiles[](#precompiles "Direct link to heading")
Subnet-EVM can provide custom functionalities with precompiled contracts. These precompiled contracts can be activated through `ChainConfig` (in genesis or as an upgrade).
### AllowList Interface[](#allowlist-interface "Direct link to heading")
The `AllowList` interface is used by precompiles to check if a given address is allowed to use a precompiled contract. `AllowList` consists of three roles: `Admin`, `Manager`, and `Enabled`. `Admin` can add/remove other `Admin` and `Enabled` addresses. `Manager` was introduced with the Durango upgrade and can add/remove `Enabled` addresses, without the ability to add/remove `Admin` or `Manager` addresses. `Enabled` addresses can use the precompiled contract, but cannot modify other roles.
`AllowList` adds `adminAddresses`, `managerAddresses`, `enabledAddresses` fields to precompile contract configurations. For instance fee manager precompile contract configuration looks like this:
```json
{
"feeManagerConfig": {
"blockTimestamp": 0,
"adminAddresses": [],
"managerAddresses": [],
"enabledAddresses": []
}
}
```
`AllowList` configuration affects only the related precompile. For instance, the admin address in `feeManagerConfig` does not affect admin addresses in other activated precompiles.
The `AllowList` solidity interface is defined as follows, and can be found in [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/contracts/contracts/interfaces/IAllowList.sol):
```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
interface IAllowList {
event RoleSet(
uint256 indexed role,
address indexed account,
address indexed sender,
uint256 oldRole
);
// Set [addr] to have the admin role over the precompile contract.
function setAdmin(address addr) external;
// Set [addr] to be enabled on the precompile contract.
function setEnabled(address addr) external;
// Set [addr] to have the manager role over the precompile contract.
function setManager(address addr) external;
// Set [addr] to have no role for the precompile contract.
function setNone(address addr) external;
// Read the status of [addr].
function readAllowList(address addr) external view returns (uint256 role);
}
```
`readAllowList(addr)` returns a `uint256` with a value of 0, 1, 2, or 3, corresponding to the roles `None`, `Enabled`, `Admin`, and `Manager` respectively.
`RoleSet` is an event that is emitted when a role is set for an address. It includes the role, the modified address, and the sender as indexed parameters, and the old role as a non-indexed parameter. Events in precompiles are activated with the Durango upgrade.
Note: `AllowList` is not an actual contract, just an interface; it is not callable by itself. It is embedded by other precompiles. Check the other precompile sections to see how this works.
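The role encoding can be summarized with a small sketch in Python (the `Role` enum and the `can_modify_enabled` helper are illustrative, not part of Subnet-EVM):

```python
from enum import IntEnum

class Role(IntEnum):
    """Numeric role values returned by readAllowList(addr)."""
    NONE = 0     # no permissions
    ENABLED = 1  # may use the precompile, cannot modify roles
    ADMIN = 2    # may use the precompile and add/remove any role
    MANAGER = 3  # post-Durango: may add/remove Enabled addresses only

def can_modify_enabled(role: Role) -> bool:
    """Only Admin and Manager can add or remove Enabled addresses."""
    return role in (Role.ADMIN, Role.MANAGER)
```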
### Restricting Smart Contract Deployers[](#restricting-smart-contract-deployers "Direct link to heading")
If you'd like to restrict who has the ability to deploy contracts on your Avalanche L1, you can provide an `AllowList` configuration in your genesis or upgrade file:
```json
{
"contractDeployerAllowListConfig": {
"blockTimestamp": 0,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
```
In this example, `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` is named as the `Admin` of the `ContractDeployerAllowList`. This enables it to add other `Admin` addresses or to add `Enabled` addresses. Both `Admin` and `Enabled` addresses can deploy contracts. To provide a good UX with factory contracts, `tx.origin` is checked for being a valid deployer instead of the caller of `CREATE`. This means that factory contracts can still create new contracts as long as the sender of the original transaction is an allow-listed deployer.
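The `tx.origin`-based authorization described above can be modeled in Python (a simplified sketch; the `may_deploy` helper and the role encoding are ours, not the actual Subnet-EVM implementation):

```python
def may_deploy(tx_origin: str, allow_list: dict) -> bool:
    """Deployment is authorized by the role of the transaction's original
    sender (tx.origin), not by the immediate CREATE caller, so factory
    contracts keep working as long as the original sender is allow-listed."""
    # Role encoding: 0 = None, 1 = Enabled, 2 = Admin, 3 = Manager
    return allow_list.get(tx_origin.lower(), 0) != 0

# The admin from the genesis example above:
allow_list = {"0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc": 2}
```

Note that a factory contract that is not itself on the list can still deploy, because only `tx_origin` is checked.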
The `Stateful Precompile` contract powering the `ContractDeployerAllowList` adheres to the [AllowList Solidity interface](#allowlist-interface) at `0x0200000000000000000000000000000000000000` (you can load this interface and interact directly in Remix):
* If you attempt to add an `Enabled` address and you are not an `Admin`, the call will fail.
* If you attempt to deploy a contract but you are neither an `Admin` nor an `Enabled` address, the deployment will fail.
* If you call `readAllowList(addr)`, you can read the current role of `addr`: a `uint256` with a value of 0, 1, 2, or 3, corresponding to the roles `None`, `Enabled`, `Admin`, and `Manager` respectively.
If you remove all of the admins from the allow list, it will no longer be possible to update the allow list without modifying the Subnet-EVM to schedule a network upgrade.
#### Initial Contract Allow List Configuration[](#initial-contract-allow-list-configuration "Direct link to heading")
It's possible to enable this precompile with an initial configuration to activate its effect on activation timestamp. This provides a way to enable the precompile without an admin address to manage the deployer list. With this, you can define a list of addresses that are allowed to deploy contracts. Since there will be no admin address to manage the deployer list, it can only be modified through a network upgrade.
To use initial configuration, you need to specify addresses in `enabledAddresses` field in your genesis or upgrade file:
```json
{
"contractDeployerAllowListConfig": {
"blockTimestamp": 0,
"enabledAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
```
This will allow only `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` to deploy contracts. For further information about precompile initial configurations see [Initial Precompile Configurations](#initial-precompile-configurations).
### Restricting Who Can Submit Transactions[](#restricting-who-can-submit-transactions "Direct link to heading")
Similar to restricting contract deployers, this precompile restricts which addresses may submit transactions on chain. Like the previous section, you can activate the precompile by including an `AllowList` configuration in your genesis file:
```json
{
"config": {
"txAllowListConfig": {
"blockTimestamp": 0,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
}
```
In this example, `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` is named as the `Admin` of the `TxAllowList`. This enables it to add other `Admin`, `Manager`, or `Enabled` addresses. `Admin`, `Manager`, and `Enabled` addresses can submit transactions to the chain.
The `Stateful Precompile` contract powering the `TxAllowList` adheres to the [AllowList Solidity interface](#allowlist-interface) at `0x0200000000000000000000000000000000000002` (you can load this interface and interact directly in Remix):
* If you attempt to add an `Enabled` address and you are not an `Admin`, the call will fail.
* If you attempt to submit a transaction but you are not an `Admin`, `Manager`, or `Enabled` address, you will see something like: `cannot issue transaction from non-allow listed address`
* If you call `readAllowList(addr)`, you can read the current role of `addr`: a `uint256` with a value of 0, 1, 2, or 3, corresponding to the roles `None`, `Enabled`, `Admin`, and `Manager` respectively.
If you remove all of the admins and managers from the allow list, it will no longer be possible to update the allow list without modifying the Subnet-EVM to schedule a network upgrade.
#### Initial TX Allow List Configuration[](#initial-tx-allow-list-configuration "Direct link to heading")
It's possible to enable this precompile with an initial configuration to activate its effect on activation timestamp. This provides a way to enable the precompile without an admin address to manage the TX allow list. With this, you can define a list of addresses that are allowed to submit transactions.
Since there will be no admin address to manage the TX list, it can only be modified through a network upgrade. To use initial configuration, you need to specify addresses in `enabledAddresses` field in your genesis or upgrade file:
```json
{
"txAllowListConfig": {
"blockTimestamp": 0,
"enabledAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
```
This will allow only `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` to submit transactions. For further information about precompile initial configurations see [Initial Precompile Configurations](#initial-precompile-configurations).
### Minting Native Coins[](#minting-native-coins "Direct link to heading")
You can mint native (gas) coins with a precompiled contract. To activate this feature, provide `contractNativeMinterConfig` in genesis:
```json
{
"config": {
"contractNativeMinterConfig": {
"blockTimestamp": 0,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
}
```
`adminAddresses` denotes admin accounts that can add other `Admin`, `Manager`, or `Enabled` accounts. `Admin`, `Manager`, and `Enabled` accounts are all eligible to mint native coins for other addresses. `ContractNativeMinter` uses the same `AllowList` methods as `ContractDeployerAllowList`.
The `Stateful Precompile` contract powering the `ContractNativeMinter` adheres to the following Solidity interface at `0x0200000000000000000000000000000000000001` (you can load this interface and interact directly in Remix):
```solidity
// (c) 2022-2023, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.
pragma solidity ^0.8.0;
import "./IAllowList.sol";
interface INativeMinter is IAllowList {
event NativeCoinMinted(
address indexed sender,
address indexed recipient,
uint256 amount
);
// Mint [amount] number of native coins and send to [addr]
function mintNativeCoin(address addr, uint256 amount) external;
}
```
`mintNativeCoin` takes an address and an amount of native coins to be minted. The amount is denominated in the smallest unit of the native coin (10^-18 of a coin). For example, to mint 1 native coin, pass 1 \* 10^18 as the amount. A `NativeCoinMinted` event is emitted with the sender, recipient, and amount when a native coin is minted.
Note that this uses `IAllowList` interface directly, meaning that it uses the same `AllowList` interface functions like `readAllowList` and `setAdmin`, `setManager`, `setEnabled`, `setNone`. For more information see [AllowList Solidity interface](#allowlist-interface).
The EVM does not prevent overflows when storing an address's balance. Overflows in balance opcodes are handled by clamping the balance to the maximum, but the same does not apply to API calls: if you mint more than the maximum balance, API calls will return the overflowed hex balance, which can break external tooling. Make sure the total supply of native coins always stays below 2^256-1.
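The supply-cap caveat can be checked off-chain before minting (an illustrative helper, not part of any tooling):

```python
MAX_UINT256 = 2**256 - 1

def mint_is_safe(current_total_supply: int, mint_amount: int) -> bool:
    """Reject mints that would push total supply past the uint256 range,
    which would produce overflowed balances in API responses."""
    return current_total_supply + mint_amount <= MAX_UINT256
```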
#### Initial Native Minter Configuration[](#initial-native-minter-configuration "Direct link to heading")
It's possible to enable this precompile with an initial configuration to activate its effect on activation timestamp. This provides a way to enable the precompile without an admin address to mint native coins. With this, you can define a list of addresses that will receive an initial mint of the native coin when this precompile activates. This can be useful for networks that require a one-time mint without specifying any admin addresses. To use initial configuration, you need to specify a map of addresses with their corresponding mint amounts in `initialMint` field in your genesis or upgrade file:
```json
{
"contractNativeMinterConfig": {
"blockTimestamp": 0,
"initialMint": {
"0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": "1000000000000000000",
"0x10037Fb06Ec4aB8c870a92AE3f00cD58e5D484b3": "0xde0b6b3a7640000"
}
}
}
```
In the amount field, you can specify either a decimal or a hex string (note that both are strings). This mints 1000000000000000000 units (1 native coin, with 18 decimals) to both addresses; the hex string `"0xde0b6b3a7640000"` is equivalent to the decimal `1000000000000000000`. For further information about precompile initial configurations see [Initial Precompile Configurations](#initial-precompile-configurations).
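You can mirror the decimal-or-hex parsing in Python to double-check amounts before placing them in the genesis (the `parse_mint_amount` helper is ours):

```python
def parse_mint_amount(value: str) -> int:
    """Accept either a decimal string or a 0x-prefixed hex string,
    as the initialMint field does."""
    return int(value, 16) if value.startswith("0x") else int(value)
```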
### Configuring Dynamic Fees[](#configuring-dynamic-fees "Direct link to heading")
You can configure the parameters of the dynamic fee algorithm on chain using the `FeeConfigManager`. In order to activate this feature, you will need to provide the `FeeConfigManager` in the genesis:
```json
{
"config": {
"feeManagerConfig": {
"blockTimestamp": 0,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
}
```
The precompile implements the `FeeManager` interface which includes the same `AllowList` interface used by ContractNativeMinter, TxAllowList, etc. For an example of the `AllowList` interface, see the [TxAllowList](#allowlist-interface) above.
The `Stateful Precompile` contract powering the `FeeConfigManager` adheres to the following Solidity interface at `0x0200000000000000000000000000000000000003` (you can load this interface and interact directly in Remix). It can be also found in [IFeeManager.sol](https://github.com/ava-labs/subnet-evm/blob/5faabfeaa021a64c2616380ed2d6ec0a96c8f96d/contract-examples/contracts/IFeeManager.sol):
```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
import "./IAllowList.sol";
interface IFeeManager is IAllowList {
struct FeeConfig {
uint256 gasLimit;
uint256 targetBlockRate;
uint256 minBaseFee;
uint256 targetGas;
uint256 baseFeeChangeDenominator;
uint256 minBlockGasCost;
uint256 maxBlockGasCost;
uint256 blockGasCostStep;
}
event FeeConfigChanged(
address indexed sender,
FeeConfig oldFeeConfig,
FeeConfig newFeeConfig
);
// Set fee config fields to contract storage
function setFeeConfig(
uint256 gasLimit,
uint256 targetBlockRate,
uint256 minBaseFee,
uint256 targetGas,
uint256 baseFeeChangeDenominator,
uint256 minBlockGasCost,
uint256 maxBlockGasCost,
uint256 blockGasCostStep
) external;
// Get fee config from the contract storage
function getFeeConfig()
external
view
returns (
uint256 gasLimit,
uint256 targetBlockRate,
uint256 minBaseFee,
uint256 targetGas,
uint256 baseFeeChangeDenominator,
uint256 minBlockGasCost,
uint256 maxBlockGasCost,
uint256 blockGasCostStep
);
// Get the block number at which the fee config was last changed from the contract storage
function getFeeConfigLastChangedAt()
external
view
returns (uint256 blockNumber);
}
```
FeeConfigManager precompile uses `IAllowList` interface directly, meaning that it uses the same `AllowList` interface functions like `readAllowList` and `setAdmin`, `setManager`, `setEnabled`, `setNone`. For more information see [AllowList Solidity interface](#allowlist-interface).
In addition to the `AllowList` interface, the FeeConfigManager adds the following capabilities:
* `getFeeConfig`: retrieves the current dynamic fee config
* `getFeeConfigLastChangedAt`: retrieves the block number of the last block in which the fee config was updated
* `setFeeConfig`: sets the dynamic fee config on chain (see [here](#fee-config) for details on the fee config parameters). This function can only be called by an `Admin`, `Manager` or `Enabled` address.
* `FeeConfigChanged`: an event that is emitted when the fee config is updated. Topics include the sender, the old fee config, and the new fee config.
You can also get the fee configuration at a block with the `eth_feeConfig` RPC method. For more information see [here](/docs/api-reference/subnet-evm-api#eth_feeconfig).
#### Initial Fee Config Configuration[](#initial-fee-config-configuration "Direct link to heading")
It's possible to enable this precompile with an initial configuration to activate its effect on activation timestamp. This provides a way to define your fee structure to take effect at the activation.
To use the initial configuration, you need to specify the fee config in `initialFeeConfig` field in your genesis or upgrade file:
```json
{
"feeManagerConfig": {
"blockTimestamp": 0,
"initialFeeConfig": {
"gasLimit": 20000000,
"targetBlockRate": 2,
"minBaseFee": 1000000000,
"targetGas": 100000000,
"baseFeeChangeDenominator": 48,
"minBlockGasCost": 0,
"maxBlockGasCost": 10000000,
"blockGasCostStep": 500000
}
}
}
```
This will set the fee config to the values specified in the `initialFeeConfig` field. For further information about precompile initial configurations see [Initial Precompile Configurations](#initial-precompile-configurations).
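Before committing an `initialFeeConfig`, a few basic sanity checks can catch typos. This is a minimal sketch; the checks are illustrative, not the full validation Subnet-EVM performs:

```python
def check_fee_config(cfg: dict) -> list:
    """Return a list of problems found in a fee config dict (empty if OK)."""
    required = [
        "gasLimit", "targetBlockRate", "minBaseFee", "targetGas",
        "baseFeeChangeDenominator", "minBlockGasCost",
        "maxBlockGasCost", "blockGasCostStep",
    ]
    problems = [f"missing {k}" for k in required if k not in cfg]
    problems += [f"{k} must be non-negative" for k in required
                 if k in cfg and cfg[k] < 0]
    if not problems and cfg["minBlockGasCost"] > cfg["maxBlockGasCost"]:
        problems.append("minBlockGasCost exceeds maxBlockGasCost")
    return problems

# The initialFeeConfig from the example above:
example = {
    "gasLimit": 20000000, "targetBlockRate": 2, "minBaseFee": 1000000000,
    "targetGas": 100000000, "baseFeeChangeDenominator": 48,
    "minBlockGasCost": 0, "maxBlockGasCost": 10000000,
    "blockGasCostStep": 500000,
}
```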
### Avalanche Warp Messaging[](#avalanche-warp-messaging "Direct link to heading")
The Warp precompile can only be activated on Mainnet after Durango, which activated at 11 AM ET (4 PM UTC) on Wednesday, March 6th, 2024. If you plan to use Warp messaging in your own Subnet-EVM chain on Mainnet, you should upgrade to AvalancheGo 1.11.11 or later and coordinate your precompile upgrade. The Warp config's `blockTimestamp` must be set after `1709740800`, the Durango timestamp.
## Contract Examples[](#contract-examples "Direct link to heading")
Subnet-EVM contains example contracts for precompiles under `/contracts`. It's a hardhat project with tests and tasks. For more information see [contract examples README](https://github.com/ava-labs/subnet-evm/tree/master/contracts#subnet-evm-contracts).
## Network Upgrades: Enable/Disable Precompiles[](#network-upgrades-enabledisable-precompiles "Direct link to heading")
Performing a network upgrade requires coordinating the upgrade network-wide. A network upgrade changes the rule set used to process and verify blocks, such that any node that upgrades incorrectly or fails to upgrade by the time that upgrade goes into effect may become out of sync with the rest of the network.
Any mistakes in configuring network upgrades or coordinating them on validators may cause the network to halt and recovering may be difficult.
In addition to specifying the configuration for each of the above precompiles in the genesis chain config, they can be individually enabled or disabled at a given timestamp as a network upgrade. Disabling a precompile disables calling it and deletes its storage, so it can be enabled again at a later timestamp with a new configuration if desired.
These upgrades must be specified in a file named `upgrade.json` placed in the same directory where [`config.json`](#avalanchego-chain-configs) resides: `{chain-config-dir}/{blockchainID}/upgrade.json`. For example, a `WAGMI Subnet` upgrade should be placed in `~/.avalanchego/configs/chains/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/upgrade.json`.
The content of the `upgrade.json` should be formatted according to the following:
```json
{
"precompileUpgrades": [
{
"[PRECOMPILE_NAME]": {
"blockTimestamp": "[ACTIVATION_TIMESTAMP]", // unix timestamp precompile should activate at
"[PARAMETER]": "[VALUE]" // precompile specific configuration options, eg. "adminAddresses"
}
}
]
}
```
An invalid `blockTimestamp` in an upgrade file results in the upgrade failing. The `blockTimestamp` value should be a valid Unix timestamp that is in the *future* relative to the *head of the chain*. If the node encounters a `blockTimestamp` which is in the past, it will fail on startup.
To disable a precompile, the following format should be used:
```json
{
"precompileUpgrades": [
{
"[PRECOMPILE_NAME]": {
"blockTimestamp": "[DEACTIVATION_TIMESTAMP]", // unix timestamp the precompile should deactivate at
"disable": true
}
}
]
}
```
Each item in `precompileUpgrades` must specify exactly one precompile to enable or disable and the block timestamps must be in increasing order. Once an upgrade has been activated (a block after the specified timestamp has been accepted), it must always be present in `upgrade.json` exactly as it was configured at the time of activation (otherwise the node will refuse to start).
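These two rules (exactly one precompile per entry, timestamps in increasing order) can be checked with a short script before restarting nodes; a sketch under the assumption that enable entries use numeric `blockTimestamp` values:

```python
def validate_precompile_upgrades(upgrades: list) -> bool:
    """Each item must name exactly one precompile, and blockTimestamps
    must not decrease across the whole list."""
    last_ts = None
    for item in upgrades:
        if len(item) != 1:
            return False
        (config,) = item.values()
        ts = config["blockTimestamp"]
        if last_ts is not None and ts < last_ts:
            return False
        last_ts = ts
    return True

# The example from this section:
example_upgrades = [
    {"feeManagerConfig": {"blockTimestamp": 1668950000,
                          "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]}},
    {"txAllowListConfig": {"blockTimestamp": 1668960000,
                           "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]}},
    {"feeManagerConfig": {"blockTimestamp": 1668970000, "disable": True}},
]
```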
Enabling and disabling a precompile is a network upgrade and should always be done with caution.
For safety, you should always treat `precompileUpgrades` as append-only.
As a last resort measure, it is possible to abort or reconfigure a precompile upgrade that has not been activated since the chain is still processing blocks using the prior rule set.
If aborting an upgrade becomes necessary, you can remove the precompile upgrade from the end of the list in `upgrade.json`. As long as the blockchain has not accepted a block with a timestamp past that upgrade's timestamp, removing it will abort the upgrade for that node.
### Example[](#example "Direct link to heading")
```json
{
"precompileUpgrades": [
{
"feeManagerConfig": {
"blockTimestamp": 1668950000,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
},
{
"txAllowListConfig": {
"blockTimestamp": 1668960000,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
},
{
"feeManagerConfig": {
"blockTimestamp": 1668970000,
"disable": true
}
}
]
}
```
This example enables the `feeManagerConfig` at the first block with timestamp >= `1668950000`, enables `txAllowListConfig` at the first block with timestamp >= `1668960000`, and disables `feeManagerConfig` at the first block with timestamp >= `1668970000`.
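To reason about which precompiles are active at a given timestamp, the enable/disable sequence can be replayed. This is a simplified model of the semantics described above, not the node's implementation:

```python
def active_precompiles(upgrades: list, timestamp: int) -> set:
    """Replay upgrade entries in order; an entry activates (or, with
    "disable": true, deactivates) its precompile once the queried
    timestamp reaches its blockTimestamp."""
    active = set()
    for item in upgrades:
        for name, config in item.items():
            if config["blockTimestamp"] <= timestamp:
                if config.get("disable"):
                    active.discard(name)
                else:
                    active.add(name)
    return active

upgrades = [
    {"feeManagerConfig": {"blockTimestamp": 1668950000}},
    {"txAllowListConfig": {"blockTimestamp": 1668960000}},
    {"feeManagerConfig": {"blockTimestamp": 1668970000, "disable": True}},
]
```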
When a precompile disable takes effect (that is, after its `blockTimestamp` has passed), its storage will be wiped. If you want to reenable it, you will need to treat it as a new configuration.
After you have created the `upgrade.json` and placed it in the chain config directory, you need to restart the node for the upgrade file to be loaded (again, make sure you don't restart all Avalanche L1 validators at once!). On node restart, it will print out the configuration of the chain, where you can double-check that the upgrade has loaded correctly. In our example:
```bash
INFO [08-15|15:09:36.772] <2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt Chain>
github.com/ava-labs/subnet-evm/eth/backend.go:155: Initialised chain configuration
config="{ChainID: 11111 Homestead: 0 EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0
Constantinople: 0 Petersburg: 0 Istanbul: 0, Muir Glacier: 0, Subnet EVM: 0, FeeConfig:
{\"gasLimit\":20000000,\"targetBlockRate\":2,\"minBaseFee\":1000000000,\"targetGas\":100000000,\"baseFeeChangeDenominator\":48,\"minBlockGasCost\":0,\"maxBlockGasCost\":10000000,\"blockGasCostStep\":500000},
AllowFeeRecipients: false, NetworkUpgrades: {\"subnetEVMTimestamp\":0}, PrecompileUpgrade: {},
UpgradeConfig: {\"precompileUpgrades\":[{\"feeManagerConfig\":{\"adminAddresses\":[\"0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc\"],\"enabledAddresses\":null,\"blockTimestamp\":1668950000}},{\"txAllowListConfig\":{\"adminAddresses\":[\"0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc\"],\"enabledAddresses\":null,\"blockTimestamp\":1668960000}},{\"feeManagerConfig\":{\"adminAddresses\":null,\"enabledAddresses\":null,\"blockTimestamp\":1668970000,\"disable\":true}}]}, Engine: Dummy Consensus Engine}"
```
Notice that the `precompileUpgrades` entry correctly reflects the changes. You can also check the activated precompiles at a timestamp with the [`eth_getActivePrecompilesAt`](/docs/api-reference/subnet-evm-api#eth_getactiveprecompilesat) RPC method. The [`eth_getChainConfig`](/docs/api-reference/subnet-evm-api#eth_getchainconfig) RPC method will also return the configured upgrades in the response.
That's it, your Avalanche L1 is all set and the desired upgrades will be activated at the indicated timestamp!
### Initial Precompile Configurations[](#initial-precompile-configurations "Direct link to heading")
Precompiles can be managed by some privileged addresses to change their configurations and activate their effects. For example, the `feeManagerConfig` precompile can have `adminAddresses` which can change the fee structure of the network.
```json
{
"precompileUpgrades": [
{
"feeManagerConfig": {
"blockTimestamp": 1668950000,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
]
}
```
In this example, only the address `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` is allowed to change the fee structure of the network. The admin address has to call the precompile to activate its effect; that is, it needs to send a transaction with a new fee config to perform the update. This is a very powerful feature, but it also gives a large amount of power to the admin address: if `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` is compromised, the network is compromised.
With initial configurations, precompiles can apply their effect immediately at the activation timestamp, and admin addresses can be omitted from the precompile configuration. For example, the `feeManagerConfig` precompile can use `initialFeeConfig` to set up the fee configuration on activation:
```json
{
"precompileUpgrades": [
{
"feeManagerConfig": {
"blockTimestamp": 1668950000,
"initialFeeConfig": {
"gasLimit": 20000000,
"targetBlockRate": 2,
"minBaseFee": 1000000000,
"targetGas": 100000000,
"baseFeeChangeDenominator": 48,
"minBlockGasCost": 0,
"maxBlockGasCost": 10000000,
"blockGasCostStep": 500000
}
}
}
]
}
```
Notice that there is no `adminAddresses` field in the configuration. This means that there will be no admin addresses to manage the fee structure with this precompile. The precompile will simply update the fee configuration to the specified fee config when it activates on the `blockTimestamp` `1668950000`.
It's still possible to add `adminAddresses` or `enabledAddresses` along with these initial configurations. In this case, the precompile activates with the initial configuration, and admin/enabled addresses can access the precompiled contract normally.
If you want to change a precompile's initial configuration, you will need to first disable it and then activate the precompile again with the new configuration.
See every precompile initial configuration in their relevant `Initial Configuration` sections under [Precompiles](#precompiles).
## AvalancheGo Chain Configs[](#avalanchego-chain-configs "Direct link to heading")
As described in [this doc](/docs/nodes/configure/configs-flags#avalanche-l1-chain-configs), each blockchain of an Avalanche L1 can have its own custom configuration. If a blockchain's ID is `2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt`, the config file for that chain is located at `{chain-config-dir}/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/config.json`.
For blockchains created by or forked from Subnet-EVM, most [C-Chain configs](/docs/nodes/chain-configs/c-chain) are applicable except [Avalanche Specific APIs](/docs/nodes/chain-configs/c-chain#enabling-avalanche-specific-apis).
### Priority Regossip[](#priority-regossip "Direct link to heading")
A transaction is "regossiped" when the node does not find the transaction in a block after `priority-regossip-frequency` (defaults to `1m`). By default, up to 16 transactions (max 1 per address) are regossiped to validators per minute.
Operators can use "priority regossip" to more aggressively "regossip" transactions for a set of important addresses (like bridge relayers). To do so, you'll need to update your [chain config](/docs/nodes/configure/configs-flags#avalanche-l1-chain-configs) with the following:
```json
{
"priority-regossip-addresses": [""]
}
```
By default, up to 32 transactions from priority addresses (max 16 per address) are regossiped to validators per second. You can override these defaults with the following config:
```json
{
"priority-regossip-frequency": "1s",
"priority-regossip-max-txs": 32,
"priority-regossip-addresses": [""],
"priority-regossip-txs-per-address": 16
}
```
### Fee Recipient[](#fee-recipient "Direct link to heading")
This works together with [`allowFeeRecipients`](#setting-a-custom-fee-recipient) and [RewardManager precompile](/docs/avalanche-l1s/evm-configuration/transaction-fees#reward-manager) to specify where the fees should be sent to.
With `allowFeeRecipients` enabled, validators can specify their addresses to collect fees.
```json
{
"feeRecipient": ""
}
```
If the `allowFeeRecipients` feature or the `RewardManager` precompile is enabled on the Avalanche L1 but a validator doesn't specify a `feeRecipient`, the fees will be burned in the blocks it produces.
### Archival Node Configuration[](#archival-node-configuration "Direct link to heading")
Running an archival node that retains all historical state data requires specific configuration settings. Incorrect configuration can lead to historical data being pruned despite attempts to run in archival mode. Here are the key settings to configure:
#### Disabling Pruning
To retain all historical state, you must disable pruning. For EVM chains (like C-Chain or Subnet-EVM chains), add the following to your chain's `config.json`:
```json
{
"pruning-enabled": false
}
```
#### State Sync Considerations
State sync allows nodes to quickly sync by downloading recent state without processing all historical blocks. This can lead to missing historical data. For archival nodes, either disable state sync or ensure you start from genesis:
```json
{
"state-sync-enabled": false
}
```
#### Transaction History Settings
To maintain access to all historical transactions, you might need to configure these additional settings:
```json
{
"transaction-history": 0
}
```
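Taken together, a minimal archival `config.json` combining the three settings above could be generated like this (a sketch; the comments reflect this section's description):

```python
import json

archival_config = {
    "pruning-enabled": False,     # keep all historical state
    "state-sync-enabled": False,  # sync from genesis, not a recent snapshot
    "transaction-history": 0,     # 0 = retain all transaction indexes
}

config_json = json.dumps(archival_config, indent=2)
print(config_json)
```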
#### Database Considerations
Important: An already synced database cannot be fully converted to an archival node retroactively. The cleanest and most reliable way to set up an archival node is to start from scratch with the proper configuration.
When switching between database types (e.g., from LevelDB to PebbleDB), historical data does not carry over. If you need to change the database type for your archival node, you must start a fresh sync from genesis.
For information about all available configuration options and directory structures, see the [AvalancheGo Config Flags documentation](https://build.avax.network/docs/nodes/configure/configs-flags).
## Network Upgrades: State Upgrades[](#network-upgrades-state-upgrades "Direct link to heading")
Subnet-EVM allows the network operators to specify a modification to state that will take place at the beginning of the first block with a timestamp greater than or equal to the one specified in the configuration.
This provides a last resort path to updating non-upgradeable contracts via a network upgrade (for example, to fix issues when you are running your own blockchain).
This should only be used as a last resort alternative to forking `subnet-evm` and specifying the network upgrade in code.
Using a network upgrade to modify state is not part of normal operations of the EVM. You should ensure the modifications do not invalidate any of the assumptions of deployed contracts or cause incompatibilities with downstream infrastructure such as block explorers.
The timestamps for upgrades in `stateUpgrades` must be in increasing order. `stateUpgrades` can be specified along with `precompileUpgrades` or by itself.
The following three state modifications are supported:
* `balanceChange`: adds a specified amount to the balance of a given account. This amount can be specified as hex or decimal and must be positive.
* `storage`: modifies the specified storage slots to the specified values. Keys and values must be 32 bytes specified in hex, with a `0x` prefix.
* `code`: modifies the code stored in the specified account. The code must *only* be the runtime portion of the code and must start with a `0x` prefix.
If modifying the code, *only* the runtime portion of the bytecode should be provided in `upgrades.json`. Do not use the bytecode that would be used for deploying a new contract, as this includes the constructor code as well. Refer to your compiler's documentation for information on how to find the runtime portion of the contract you wish to modify.
The `upgrades.json` file shown below describes a network upgrade that will make the following state modifications at the first block after (or at) `March 8, 2023 1:30:00 AM GMT`:
* Sets the code for the account at `0x71562b71999873DB5b286dF957af199Ec94617F7`,
* Adds `100` wei to the balance of the account at `0xb794f5ea0ba39494ce839613fffba74279579268`, and
* Sets the storage slot `0x1234` to the value `0x6666` for the account at `0xb794f5ea0ba39494ce839613fffba74279579268`.
```json
{
"stateUpgrades": [
{
"blockTimestamp": 1678239000,
"accounts": {
"0x71562b71999873DB5b286dF957af199Ec94617F7": {
"code": "0x6080604052348015600f57600080fd5b506004361060285760003560e01c80632e64cec114602d575b600080fd5b60336047565b604051603e91906067565b60405180910390f35b60008054905090565b6000819050919050565b6061816050565b82525050565b6000602082019050607a6000830184605a565b9291505056fea26469706673582212209421042a1fdabcfa2486fb80942da62c28e61fc8362a3f348c4a96a92bccc63c64736f6c63430008120033"
},
"0xb794f5ea0ba39494ce839613fffba74279579268": {
"balanceChange": "0x64",
"storage": {
"0x0000000000000000000000000000000000000000000000000000000000001234": "0x0000000000000000000000000000000000000000000000000000000000006666"
}
}
}
}
]
}
```
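The three state modifications can be illustrated with a simplified account-state model in Python (the `apply_state_upgrade` helper is ours, not part of Subnet-EVM):

```python
def apply_state_upgrade(state: dict, accounts: dict) -> dict:
    """Apply code, balanceChange, and storage modifications to a
    minimal account-state model keyed by address."""
    def to_int(v):
        # balanceChange may be a decimal or 0x-prefixed hex string
        return int(v, 16) if isinstance(v, str) and v.startswith("0x") else int(v)

    for addr, mods in accounts.items():
        acct = state.setdefault(addr, {"balance": 0, "code": "0x", "storage": {}})
        if "code" in mods:
            acct["code"] = mods["code"]  # runtime bytecode only
        if "balanceChange" in mods:
            acct["balance"] += to_int(mods["balanceChange"])
        for slot, value in mods.get("storage", {}).items():
            acct["storage"][slot] = value
    return state
```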
## Network Upgrades: Rescheduling Mandatory Network Upgrades[](#network-upgrades-rescheduling-mandatory-network-upgrades "Direct link to heading")
When a network misses a mandatory activation, it typically can no longer operate: validators/nodes running the old version process transactions differently than nodes running the new version and end up with different state. This results in a fork in the network, and new nodes are not able to sync with it. Normally this halts the chain and requires a hard fork to fix. Starting with Subnet-EVM v0.6.3, you can reschedule mandatory activations like Durango via upgrade configs (`upgrade.json` in the chain directory). This is a very advanced operation and should be performed only if your network cannot otherwise operate going forward. The rescheduling must be coordinated with all nodes in your network. Network upgrade overrides can be defined in `upgrade.json` as follows:
```json
{
"networkUpgradeOverrides": {
"{networkUpgrade1}": timestamp1,
"{networkUpgrade2}": timestamp2
}
}
```
The `timestamp` should be a Unix timestamp in seconds.
For instance, if you missed the Durango activation on Fuji (February 13th, 2024, 16:00 UTC) or Mainnet (March 6th, 2024, 16:00 UTC) and your network is having issues, you can reschedule the Durango activation via upgrades. To do this, prepare a new `upgrade.json` including the following:
```json
{
"networkUpgradeOverrides": {
"durangoTimestamp": 1712419200
}
}
```
This reschedules the Durango activation to 2024-04-06 16:00:00 UTC (one month later than the actual Mainnet activation). After preparing the `upgrade.json`, update the chain directory with the new file and restart your nodes. You should see logs similar to the following:
```bash
INFO [03-22|14:04:48.284] github.com/ava-labs/subnet-evm/plugin/evm/vm.go:367: Applying network upgrade overrides overrides="{\"durangoTimestamp\":1712419200}"
...
INFO [03-22|14:04:48.288] github.com/ava-labs/subnet-evm/core/blockchain.go:335: Avalanche Upgrades (timestamp based):
INFO [03-22|14:04:48.288] github.com/ava-labs/subnet-evm/core/blockchain.go:335: - SubnetEVM Timestamp: @0 (https://github.com/ava-labs/avalanchego/releases/tag/v1.10.0)
INFO [03-22|14:04:48.288] github.com/ava-labs/subnet-evm/core/blockchain.go:335: - Durango Timestamp: @1712419200 (https://github.com/ava-labs/avalanchego/releases/tag/v1.11.0)
...
```
This means your node is ready for the new Durango activation. Once the new timestamp is reached, your node will activate Durango and start processing transactions with the new Durango features.
Nodes running an incompatible version (a pre-Durango version after Durango activation) should be updated to the most recent version of Subnet-EVM (v0.6.3+) and must have the new `upgrade.json` to reschedule the Durango activation. Running a new version without the rescheduling `upgrade.json` might create a fork in the network.
All network nodes, even those correctly upgraded to Durango and running the correct version since activation, should be restarted with the new `upgrade.json` to reschedule the Durango activation. This is a network-wide operation and must be coordinated with all network nodes.
# Durango Upgrade
URL: /docs/avalanche-l1s/upgrade/durango-upgrade
Learn how to upgrade your Subnet-EVM and precompiles for the Durango network upgrade.
Durango will be activated on the Avalanche Mainnet at 11 AM ET (4 PM UTC) on Wednesday, March 6th, 2024. Subnet-EVM introduces a set of new features and backwards-incompatible changes with the Durango network upgrade.
This guide will walk you through upgrading your Subnet-EVM and precompiles so they are compatible with the Durango network upgrade.
Note: Subnet-EVM already performs these upgrades in native stateful precompiles. This guide is for users who have custom precompiles and need to upgrade them for Durango.
Durango introduces the following changes to Subnet-EVM:
* Avalanche Warp Messaging
* Events in Precompiles
* Manager Role
* Non-Strict Mode
* [Shanghai Upgrade from ACP-24](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/24-shanghai-eips) will be activated with Durango without any further modification in Subnet-EVM.
## Avalanche Warp Messaging[](#avalanche-warp-messaging "Direct link to heading")
Avalanche Warp Messaging (AWM) is a new feature introduced with the Durango network upgrade. It enables native cross-Avalanche L1 communication and allows [Virtual Machine (VM)](/docs/quick-start/virtual-machines) developers to implement arbitrary communication protocols between any two Avalanche L1s. For more information about AWM see [here](/docs/avalanche-l1s/evm-configuration/warpmessenger).
The Warp Precompile must be enabled for Avalanche L1s after Durango. For more information about the precompile and its activation, see [here](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#avalanche-warp-messaging).
## Events[](#events "Direct link to heading")
Subnet-EVM native precompiles will start emitting events with the Durango network upgrade, allowing you to listen to the events they emit. The following events will be introduced:
* `event RoleSet(uint256 indexed role, address indexed account, address indexed sender, uint256 oldRole)`: This event will be emitted when a role is set for an account. Precompiles that use the `IAllowList` interface will emit this event without requiring any changes. The event contains the role, account, and sender as indexed parameters and the old role as a non-indexed parameter.
* `event FeeConfigChanged(address indexed sender, FeeConfig oldFeeConfig, FeeConfig newFeeConfig)`: This event will be emitted when the fee configuration is changed in the `FeeManager` precompile. The event contains the sender as an indexed parameter and the old and new fee configurations as non-indexed parameters.
* `event NativeCoinMinted(address indexed sender, address indexed recipient, uint256 amount)`: This event will be emitted when new native coins are minted. The event contains the sender and recipient as indexed parameters and the amount as a non-indexed parameter.
* `event RewardAddressChanged(address indexed sender, address indexed oldRewardAddress, address indexed newRewardAddress)`: This event will be emitted when the reward address is changed in the `RewardManager` precompile. The event contains the sender and the old and new reward addresses as indexed parameters.
* `event FeeRecipientsAllowed(address indexed sender)`: This event will be emitted when fee recipients are allowed in the `RewardManager` precompile. The event contains the sender as an indexed parameter.
* `event RewardsDisabled(address indexed sender)`: This event will be emitted when rewards are disabled in the `RewardManager` precompile. The event contains the sender as an indexed parameter.
### Custom Events[](#custom-events "Direct link to heading")
The events above are already introduced and handled in Subnet-EVM native precompiles. If you have a custom precompile, you can start emitting your own custom events upon Durango activation. To do this, define your custom event in your Solidity interface and regenerate the Go bindings using the `precompilegen` tool; for more information see [here](/docs/virtual-machines/custom-precompiles/create-precompile).
Generally this will generate an `event.go` file in addition to your existing precompile files. You need to implement how to emit your events and your events' gas costs, as in the [hello world example](/docs/virtual-machines/custom-precompiles/defining-precompile#event-file). In this guide we will use the hello world example to demonstrate how to emit custom events. The event to be introduced is:
```solidity
event GreetingChanged(address indexed sender, string oldGreeting, string newGreeting)
```
It will be emitted when the greeting is changed in the hello world precompile. You can find the hello world precompile [here](https://github.com/ava-labs/subnet-evm/tree/helloworld-official-tutorial-v2/precompile/contracts/helloworld). We also assume the hello world precompile was already deployed before Durango, and we will be upgrading it for Durango.
#### Adjusting Gas Costs[](#adjusting-gas-costs "Direct link to heading")
Adjusting gas costs for your custom events is very important. Emitted events are written to state and consume resources. You should make sure you're charging the right amount of gas before emitting your event.
The `precompilegen` tool automatically generates a scaffold for your gas calculations; however, you should review and adjust the gas costs according to your needs, especially if your events include data of arbitrary size.
The gas cost function for the event `GreetingChanged(address indexed sender, string oldGreeting, string newGreeting)` looks like this:
```go
func GetGreetingChangedEventGasCost(data GreetingChangedEventData) uint64 {
gas := contract.LogGas // base gas cost
// Add topics gas cost (2 topics)
// Topics always include the signature hash of the event. The rest are the indexed event arguments.
gas += contract.LogTopicGas * 2
// CUSTOM CODE STARTS HERE
// Keep in mind that the data here will be encoded using the ABI encoding scheme.
// So the computation cost might change according to the data type + data size and should be charged accordingly.
// i.e gas += LogDataGas * uint64(len(data.oldGreeting))
gas += contract.LogDataGas * uint64(len(data.OldGreeting)) // * ...
// CUSTOM CODE ENDS HERE
// CUSTOM CODE STARTS HERE
// Keep in mind that the data here will be encoded using the ABI encoding scheme.
// So the computation cost might change according to the data type + data size and should be charged accordingly.
// i.e gas += LogDataGas * uint64(len(data.newGreeting))
gas += contract.LogDataGas * uint64(len(data.NewGreeting)) // * ...
// CUSTOM CODE ENDS HERE
// CUSTOM CODE STARTS HERE
return gas
}
```
We charge the base gas cost, the topics gas cost, and the data gas cost for the old and new greetings. The topics gas cost covers 2 topics: one for the signature hash of the event and one for the indexed sender argument.
The data gas cost is calculated according to the data type and size. If your events do not use data of arbitrary size, you can define it as a constant.
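To make the calculation concrete, here is a self-contained sketch of the same formula. It assumes the standard EVM log gas costs (375 gas base, 375 per topic, 8 per byte of data); the `contract.LogGas`, `contract.LogTopicGas`, and `contract.LogDataGas` constants are expected to carry these values, but verify against your Subnet-EVM version:

```go
package main

import "fmt"

// Standard EVM log gas costs (assumed; check your Subnet-EVM's contract package).
const (
	logGas      uint64 = 375 // base cost of a LOG operation
	logTopicGas uint64 = 375 // cost per topic
	logDataGas  uint64 = 8   // cost per byte of non-indexed data
)

// greetingChangedEventGas mirrors the generated gas function for
// GreetingChanged(address indexed sender, string oldGreeting, string newGreeting):
// base cost + 2 topics (signature hash + indexed sender) + per-byte data cost.
func greetingChangedEventGas(oldGreeting, newGreeting string) uint64 {
	gas := logGas
	gas += logTopicGas * 2
	gas += logDataGas * uint64(len(oldGreeting))
	gas += logDataGas * uint64(len(newGreeting))
	return gas
}

func main() {
	// 375 + 750 + 8*12 + 8*16 = 1349
	fmt.Println(greetingChangedEventGas("Hello World!", "Hello Avalanche!")) // 1349
}
```

Note that the per-byte term is what makes the cost depend on input size; events with only fixed-size data have a constant cost.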
#### Durango Activation Check[](#durango-activation-check "Direct link to heading")
After defining your custom events, you need to pack them, charge the event gas, and emit them in your precompile functions. Since this procedure is not backward compatible, you must ensure it is only activated after the Durango network upgrade.
You can use the `contract.IsDurangoActivated` function to check whether the Durango network upgrade is activated. For the Hello World example, we will change the `setGreeting` function starting [here](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/helloworld/contract.go#L187) as follows:
```go
if contract.IsDurangoActivated(accessibleState) {
...
}
```
Note: this activation check won't be needed if you plan to deploy your custom precompile after Durango.
#### Event Packers[](#event-packers "Direct link to heading")
Event packers and unpackers are generated automatically by the `precompilegen` tool. You can use them to pack and unpack your custom events. For the hello world example:
```go
if remainingGas, err = contract.DeductGas(remainingGas, contract.ReadGasCostPerSlot); err != nil {
return nil, 0, err
}
oldGreeting := GetGreeting(stateDB)
eventData := GreetingChangedEventData{
OldGreeting: oldGreeting,
NewGreeting: inputStruct,
}
topics, data, err := PackGreetingChangedEvent(caller, eventData)
if err != nil {
return nil, remainingGas, err
}
```
This first charges gas for reading the old greeting from state and then fetches it. It then packs the event data, with the old and new greetings as non-indexed event data.
#### Emitting Events[](#emitting-events "Direct link to heading")
After packing the event data, you can charge the event gas and emit your custom events using the `stateDB.AddLog` function. For hello world example it starts [here](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/helloworld/contract.go#L203-L214):
```go
// Charge the gas for emitting the event.
eventGasCost := GetGreetingChangedEventGasCost(eventData)
if remainingGas, err = contract.DeductGas(remainingGas, eventGasCost); err != nil {
return nil, 0, err
}
// Emit the event
stateDB.AddLog(
ContractAddress,
topics,
data,
accessibleState.GetBlockContext().Number().Uint64(),
)
```
## Manager Role[](#manager-role "Direct link to heading")
Durango introduces a new role called the manager role, a mid-level role between `Admin` and `Enabled`. A manager is treated as an `Enabled` account and can perform restricted state-changing operations in precompiles. A manager can also modify `Enabled` accounts: it can appoint new `Enabled` accounts and remove existing ones. A manager cannot modify other `Manager` or `Admin` accounts. For more information about the AllowList see [here](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#allowlist-interface).
If you have a precompile using the AllowList, you can call `setManager` on it to appoint new manager accounts. For upgrades or new precompiles with an AllowList config, you can use `managerAddresses` as follows:
```json
{
"feeManagerConfig": {
"blockTimestamp": 0,
"adminAddresses": [],
"managerAddresses": [],
"enabledAddresses": []
}
}
```
## Non-Strict Mode[](#non-strict-mode "Direct link to heading")
Strict-mode unpacking rejects inputs that contain extra padding bytes. This caused issues in a few legacy contracts, such as the Gnosis MultiSig wallet. For more information about strict mode, see the [solidity docs](https://docs.soliditylang.org/en/latest/abi-spec.html#strict-encoding-mode).
For the Hello World example, we previously used this `UnpackSetGreetingInput` with strict mode enabled:
```go
func UnpackSetGreetingInput(input []byte) (string, error) {
// This function was using strict mode unpacking by default.
res, err := HelloWorldABI.UnpackInput("setGreeting", input)
if err != nil {
return "", err
}
unpacked := *abi.ConvertType(res[0], new(string)).(*string)
return unpacked, nil
}
```
To handle extra padding bytes, Subnet-EVM will start using non-strict mode in input unpackers with Durango. However, since this change is not backward compatible, you must ensure it is only activated after the Durango network upgrade.
You can use the `contract.IsDurangoActivated` function to check whether the Durango network upgrade is activated. We will now use this function to switch to non-strict-mode unpacking:
```go
// UnpackSetGreetingInput attempts to unpack [input] into the string type argument
// assumes that [input] does not include selector (omits first 4 func signature bytes)
// if [useStrictMode] is true, it will return an error if the length of [input] is not [common.HashLength]
func UnpackSetGreetingInput(input []byte, useStrictMode bool) (string, error) {
// Initially we had this check to ensure that the input was the correct length.
// However solidity does not always pack the input to the correct length, and allows
// for extra padding bytes to be added to the end of the input. Therefore, we have removed
// this check with the Durango. We still need to keep this check for backwards compatibility.
if useStrictMode && len(input) > common.HashLength {
return "", ErrInputExceedsLimit
}
res, err := HelloWorldABI.UnpackInput("setGreeting", input, useStrictMode)
if err != nil {
return "", err
}
unpacked := *abi.ConvertType(res[0], new(string)).(*string)
return unpacked, nil
}
```
To call this function in `setGreeting`, we use the Durango activation check as follows (see [here](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/helloworld/contract.go#L159-L167)):
```go
// do not use strict mode after Durango
useStrictMode := !contract.IsDurangoActivated(accessibleState)
// attempts to unpack [input] into the arguments to the SetGreetingInput.
// Assumes that [input] does not include selector
// You can use unpacked [inputStruct] variable in your code
inputStruct, err := UnpackSetGreetingInput(input, useStrictMode)
if err != nil {
return nil, remainingGas, err
}
```
This will ensure that non-strict mode unpacking is used after Durango activation.
This should not cause any critical issues for your custom precompiles. If you want to keep using the old strict mode and preserve backward compatibility, you can pass `true` for the `useStrictMode` parameter.
However, if your precompile is mainly called from other deployed (Solidity) contracts, you should make this transition to increase your precompile's compatibility.
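The strict-mode length guard shown above can be illustrated in isolation with a small Go sketch (illustrative only; `hashLength` stands in for `common.HashLength`):

```go
package main

import "fmt"

const hashLength = 32 // stands in for go-ethereum's common.HashLength

// acceptInputLength mirrors the guard in UnpackSetGreetingInput: in strict
// mode, input longer than one ABI word is rejected; non-strict mode tolerates
// the extra padding bytes some legacy callers append.
func acceptInputLength(input []byte, useStrictMode bool) bool {
	if useStrictMode && len(input) > hashLength {
		return false
	}
	return true
}

func main() {
	padded := make([]byte, 64) // a 32-byte word plus 32 bytes of extra padding
	fmt.Println(acceptInputLength(padded, true))  // false: rejected pre-Durango
	fmt.Println(acceptInputLength(padded, false)) // true: accepted after Durango
}
```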
# Allowlist Interface
URL: /docs/avalanche-l1s/evm-configuration/allowlist
The AllowList interface is used by many default precompiles to permission access to the features they provide.
## Overview
The AllowList is a security feature used by precompiles to manage which addresses have permission to interact with certain contract functionalities. For example, in the Native Minter Precompile, the allow list is used to control who can mint new native tokens.
## Role-Based Permissions
The AllowList implements a consistent role-based permission system:
| Role | Value | Description | Permissions |
| ------- | ----- | ---------------------------- | ------------------------------------------------- |
| Admin | 2 | Can manage all roles | Can add/remove any role (Admin, Manager, Enabled) |
| Manager | 3 | Can manage enabled addresses | Can add/remove only Enabled addresses |
| Enabled | 1 | Basic permissions | Can use the precompile's functionality |
| None | 0 | No permissions | Cannot use the precompile or manage permissions |
Each precompile that uses the AllowList interface follows this permission structure, though the specific actions allowed for "Enabled" addresses vary depending on the precompile's purpose. For example:
* In the Contract Deployer AllowList, "Enabled" addresses can deploy contracts
* In the Transaction AllowList, "Enabled" addresses can submit transactions
* In the Native Minter, "Enabled" addresses can mint tokens
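The management rules from the table can be sketched as a small Go function (an illustrative model, not the actual precompile implementation):

```go
package main

import "fmt"

// Role values as stored by the AllowList precompile.
const (
	None    uint64 = 0
	Enabled uint64 = 1
	Admin   uint64 = 2
	Manager uint64 = 3
)

// canSetRole models the permission rules described above: admins may assign
// any role; managers may only move accounts between None and Enabled;
// everyone else cannot change roles.
func canSetRole(callerRole, targetCurrentRole, newRole uint64) bool {
	switch callerRole {
	case Admin:
		return true
	case Manager:
		return (targetCurrentRole == None || targetCurrentRole == Enabled) &&
			(newRole == None || newRole == Enabled)
	default:
		return false
	}
}

func main() {
	fmt.Println(canSetRole(Manager, None, Enabled)) // true: manager enables a new address
	fmt.Println(canSetRole(Manager, Admin, None))   // false: manager cannot touch admins
	fmt.Println(canSetRole(Enabled, None, Enabled)) // false: enabled accounts cannot manage roles
}
```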
## Interface
The AllowList interface is defined as follows:
```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
interface IAllowList {
event RoleSet(uint256 indexed role, address indexed account, address indexed sender, uint256 oldRole);
// Set [addr] to have the admin role over the precompile contract.
function setAdmin(address addr) external;
// Set [addr] to be enabled on the precompile contract.
function setEnabled(address addr) external;
// Set [addr] to have the manager role over the precompile contract.
function setManager(address addr) external;
// Set [addr] to have no role for the precompile contract.
function setNone(address addr) external;
// Read the status of [addr].
function readAllowList(address addr) external view returns (uint256 role);
}
```
## Implementation
The AllowList interface is implemented by multiple precompiles in the Subnet-EVM. You can find the core implementation in the [subnet-evm repository](https://github.com/ava-labs/subnet-evm/blob/master/precompile/allowlist/allowlist.go).
## Precompiles Using AllowList
Several precompiles in Subnet-EVM use the AllowList interface:
* [Native Minter](/docs/avalanche-l1s/evm-configuration/tokenomics#native-minter)
* [Fee Manager](/docs/avalanche-l1s/evm-configuration/transaction-fees#fee-manager)
* [Reward Manager](/docs/avalanche-l1s/evm-configuration/transaction-fees#reward-manager)
* [Contract Deployer Allow List](/docs/avalanche-l1s/evm-configuration/permissions#contract-deployer-allowlist)
* [Transaction Allow List](/docs/avalanche-l1s/evm-configuration/permissions#transaction-allowlist)
# Introduction
URL: /docs/avalanche-l1s/evm-configuration/evm-l1-customization
Learn how to customize the Ethereum Virtual Machine with EVM and Precompiles.
Welcome to the EVM configuration guide. This documentation explores how to extend and customize your Avalanche L1 using **EVM** and **precompiles**. Building upon the Validator Manager capabilities we discussed in the previous section, we'll now dive into other powerful customization features available in EVM.
## Overview of EVM
EVM is Avalanche's customized version of the Ethereum Virtual Machine, tailored to run on Avalanche L1s. It allows developers to deploy Solidity smart contracts with enhanced capabilities, benefiting from Avalanche's high throughput and low latency. EVM enables more flexibility and performance optimizations compared to the standard EVM.
Beyond the Validator Manager functionality we've covered, EVM provides additional configuration options through precompiles, allowing you to extend your L1's capabilities even further.
## Genesis Configuration
Each blockchain has some genesis state when it's created. Each Virtual Machine defines the format and semantics of its genesis data. The genesis configuration is crucial for setting up your Avalanche L1's initial state and behavior.
### Chain Configuration
The chain configuration section in your genesis file defines fundamental parameters of your blockchain:
```json
{
"config": {
"chainId": 43214,
"homesteadBlock": 0,
"eip150Block": 0,
"eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0",
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"muirGlacierBlock": 0
}
}
```
#### Chain ID
`chainId`: Denotes the ChainID of the chain to be created. It must be picked carefully, since a conflict with other chains can cause issues. One suggestion is to check [chainlist.org](https://chainlist.org/) to avoid an ID collision, and to reserve and publish your ChainID properly.
You can use the `eth_getChainConfig` RPC call to get the current chain config. See [here](/docs/api-reference/subnet-evm-api#eth_getchainconfig) for more info.
#### Hard Forks
The following parameters define EVM hard fork activation times. These should be handled with care as changes may cause compatibility issues:
* `homesteadBlock`
* `eip150Block`
* `eip150Hash`
* `eip155Block`
* `eip158Block`
* `byzantiumBlock`
* `constantinopleBlock`
* `petersburgBlock`
* `istanbulBlock`
* `muirGlacierBlock`
### Genesis Block Header
The genesis block header is defined by several parameters that set the initial state of your blockchain:
```json
{
"nonce": "0x0",
"timestamp": "0x0",
"extraData": "0x00",
"difficulty": "0x0",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"coinbase": "0x0000000000000000000000000000000000000000",
"number": "0x0",
"gasUsed": "0x0",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
}
```
These parameters have specific roles:
* `nonce`, `mixHash`, `difficulty`: These are remnants from Proof of Work systems. For Avalanche, they don't play any relevant role and should be left as their default values.
* `timestamp`: The creation timestamp of the genesis block (commonly set to `0x0`).
* `extraData`: Optional extra data field (commonly set to `0x`).
* `coinbase`: The address of block producers (usually set to zero address for genesis).
* `parentHash`: The hash of the parent block (set to zero hash for genesis).
* `gasUsed`: Amount of gas used by the genesis block (usually `0x0`).
* `number`: The block number (must be `0x0` for genesis).
## Precompiles
Precompiles are specialized smart contracts that execute native Go code within the EVM context. They act as a bridge between Solidity and lower-level functionalities, allowing for performance optimizations and access to features not available in Solidity alone.
### Default Precompiles in EVM
EVM comes with a set of default precompiles that extend the EVM's functionality:
* **AllowList**: Interface that manages access control by allowing or restricting specific addresses, inherited by all precompiles.
* **Deployer AllowList**: Restricts which addresses can deploy smart contracts.
* **Native Minter**: Manages the minting and burning of native tokens.
* **Transaction AllowList**: Controls which addresses can submit transactions.
* **Fee Manager**: Controls gas fee parameters and fee markets.
* **Reward Manager**: Handles the distribution of staking rewards to validators.
* **Warp Messenger**: Enables cross-chain communication between Avalanche L1s.
### Precompile Addresses and Configuration
If a precompile is enabled within the `genesis.json` using the respective `ConfigKey`, you can interact with the precompile using Foundry or other tools such as Remix.
Below are the addresses and `ConfigKey` values of default precompiles available in EVM. The address and `ConfigKey` [are defined in the `module.go` of each precompile contract](https://github.com/ava-labs/subnet-evm/tree/master/precompile/contracts).
| Precompile | ConfigKey | Address |
| --------------------- | --------------------------------- | -------------------------------------------- |
| Deployer AllowList | `contractDeployerAllowListConfig` | `0x0200000000000000000000000000000000000000` |
| Native Minter | `contractNativeMinterConfig` | `0x0200000000000000000000000000000000000001` |
| Transaction AllowList | `txAllowListConfig` | `0x0200000000000000000000000000000000000002` |
| Fee Manager | `feeManagerConfig` | `0x0200000000000000000000000000000000000003` |
| Reward Manager | `rewardManagerConfig` | `0x0200000000000000000000000000000000000004` |
| Warp Messenger | `warpConfig` | `0x0200000000000000000000000000000000000005` |
#### Example Interaction
For example, if `contractDeployerAllowListConfig` is enabled in the `genesis.json`:
```json title="genesis.json"
"contractDeployerAllowListConfig": {
"adminAddresses": [ // Addresses that can manage (add/remove) enabled addresses. They are also enabled themselves for contract deployment.
"0x4f9e12d407b18ad1e96e4f139ef1c144f4058a4e",
"0x4b9e5977a46307dd93674762f9ddbe94fb054def"
],
"blockTimestamp": 0,
"enabledAddresses": [
"0x09c6fa19dd5d41ec6d0f4ca6f923ec3d941cc569" // Addresses that can only deploy contracts
]
},
```
We can then add an `Enabled` address to the Deployer AllowList by interacting with the `IAllowList` interface at `0x0200000000000000000000000000000000000000`:
```bash
cast send 0x0200000000000000000000000000000000000000 "setEnabled(address)" <ADDRESS_TO_ENABLE> --rpc-url $MY_L1_RPC --private-key $ADMIN_PRIVATE_KEY
```
# Interacting with Precompiles
URL: /docs/avalanche-l1s/evm-configuration/interacting-with-precompiles
Learn how to interact with Avalanche L1 precompiles using the Builder Hub Developer Console or Remix IDE.
This guide shows you how to interact with precompiled contracts on your Avalanche L1. For standard precompile implementations, we recommend using the **Builder Hub Developer Console** for the best experience. For custom implementations or advanced use cases, you can use **Remix IDE** with browser wallets.
## Recommended: Using Builder Hub Developer Console
The Builder Hub provides dedicated tools for interacting with standard Avalanche L1 precompiles. These tools offer:
* ✅ **User-friendly interface** - No need to manually enter contract addresses or ABIs
* ✅ **Built-in validation** - Prevents common configuration mistakes
* ✅ **Connected to your Builder account** - Track your L1s and configurations
* ✅ **Visual feedback** - See changes reflected in real-time
### Available Console Tools
| Precompile | Console Tool |
| --------------------------- | ------------------------------------------------------------------------------------ |
| Fee Manager | [Fee Manager Console](/console/l1-tokenomics/fee-manager) |
| Reward Manager | [Reward Manager Console](/console/l1-tokenomics/reward-manager) |
| Native Minter | [Native Minter Console](/console/l1-tokenomics/native-minter) |
| Contract Deployer Allowlist | [Deployer Allowlist Console](/console/l1-access-restrictions/deployer-allowlist) |
| Transaction Allowlist | [Transactor Allowlist Console](/console/l1-access-restrictions/transactor-allowlist) |
### How to Use Console Tools
1. **Navigate** to the appropriate console tool from the table above
2. **Connect** your wallet (Core or MetaMask)
3. **Switch** to your L1 network in your wallet
4. The tool will automatically detect your permissions
5. **Configure** using the visual interface:
* For Fee Manager: Adjust gas limits, base fees, and target rates
* For Native Minter: Mint tokens to specific addresses
* For Allowlists: Add or remove addresses with specific roles
* For Reward Manager: Configure fee distribution settings
6. **Review** the transaction details
7. **Submit** and approve in your wallet
**Why use the Developer Console?**
Using the Builder Hub console tools allows us to:
* Provide better support for your L1
* Track feature usage to improve the platform
* Build your profile in our builders/developers database
* Offer personalized recommendations and resources
### Example Workflows
**Configuring Transaction Fees:**
1. Go to [Fee Manager Console](/console/l1-tokenomics/fee-manager)
2. Connect wallet and switch to your L1
3. Adjust fee parameters using sliders and inputs
4. See real-time preview of how changes affect gas costs
5. Submit transaction to update fees
**Minting Native Tokens:**
1. Go to [Native Minter Console](/console/l1-tokenomics/native-minter)
2. Connect with an admin/manager address
3. Enter recipient address and amount
4. Review the minting transaction
5. Approve to mint tokens instantly
**Managing Permissions:**
1. Go to [Deployer Allowlist](/console/l1-access-restrictions/deployer-allowlist) or [Transactor Allowlist](/console/l1-access-restrictions/transactor-allowlist)
2. Connect with an admin address
3. Add addresses with desired roles (Admin, Manager, Enabled)
4. Remove addresses by changing their role to "None"
5. View current allowlist status
## Alternative: Using Remix IDE
For custom precompile implementations or if you prefer a code-based approach, you can use Remix IDE to interact with precompiles directly.
### When to Use Remix
Use Remix when:
* You have a **custom precompile** implementation (non-standard addresses or interfaces)
* You need to interact with precompiles **programmatically**
* You're **debugging** contract interactions
* The Builder Console doesn't support your specific use case
### Prerequisites
* Access to an Avalanche L1 where you have admin/manager rights for a precompile
* [Core Browser Extension](https://core.app) or MetaMask
* Private key for an admin/manager address on your L1
* Your L1's RPC URL and Chain ID
## Setup Your Wallet
### Using Core
1. Install the [Core Browser Extension](https://core.app)
2. Import or create the account with admin/manager privileges
3. Enable **Testnet Mode** (if using testnet):
* Open Core extension
* Click hamburger menu → **Advanced**
* Toggle **Testnet Mode** on
4. Add your L1 network:
* Click the networks dropdown
* Select **Manage Networks**
* Click **Add Network** and enter:
* **Network Name**: Your L1 name
* **RPC URL**: Your L1's RPC endpoint
* **Chain ID**: Your L1's chain ID
* **Symbol**: Your native token symbol
* **Explorer**: (Optional) Your L1's explorer URL
5. Switch to your L1 network in the dropdown
### Using MetaMask
1. Install MetaMask browser extension
2. Import the account with admin/manager privileges
3. Add your L1 network:
* Click the networks dropdown
* Click **Add Network** → **Add a network manually**
* Enter your L1's network details
* Click **Save**
## Connect Remix to Your L1
1. Open [Remix IDE](https://remix.ethereum.org/) in your browser
2. In the left sidebar, click the **Deploy & run transactions** icon
3. In the **Environment** dropdown, select **Injected Provider - MetaMask** (or Core)
4. Approve the connection request in your wallet extension
5. Verify the connection shows your L1's network (e.g., "Custom (11111) network")
## Load Precompile Interfaces
You need to load the Solidity interfaces for the precompiles you want to interact with.
### Available Precompile Interfaces
From the Remix home screen, use **load from GitHub** to import:
**Required for all precompiles:**
* [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IAllowList.sol)
**Specific precompile interfaces:**
* [IFeeManager.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IFeeManager.sol) - For fee configuration
* [INativeMinter.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/INativeMinter.sol) - For minting native tokens
* [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IAllowList.sol) - For transaction/deployer allowlists
* [IRewardManager.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IRewardManager.sol) - For block rewards
### Compile the Interface
1. In Remix, click the **Solidity Compiler** icon in the left sidebar
2. Select the precompile interface file (e.g., `IFeeManager.sol`)
3. Click **Compile**
## Interact with Precompiles
### Connect to Deployed Precompile
Each precompile is deployed at a fixed address on your L1:
| Precompile | Address |
| ------------------------- | -------------------------------------------- |
| NativeMinter | `0x0200000000000000000000000000000000000001` |
| ContractDeployerAllowList | `0x0200000000000000000000000000000000000000` |
| FeeManager | `0x0200000000000000000000000000000000000003` |
| RewardManager | `0x0200000000000000000000000000000000000004` |
| TransactionAllowList | `0x0200000000000000000000000000000000000002` |
1. In Remix, click **Deploy & run transactions**
2. In the **Contract** dropdown, select your compiled interface
3. Paste the precompile address in the **At Address** field
4. Click **At Address**
The precompile contract will appear in the **Deployed Contracts** section.
## Example: Using Fee Manager
### Read Current Fee Configuration
Anyone can read the current fee configuration (no special permissions required):
1. Expand the FeeManager contract in **Deployed Contracts**
2. Click **getFeeConfig**
3. View the current fee parameters:
* `gasLimit`: Maximum gas per block
* `targetBlockRate`: Target time between blocks (seconds)
* `minBaseFee`: Minimum base fee (wei)
* `targetGas`: Target gas per second
* `baseFeeChangeDenominator`: Rate of base fee adjustment
* `minBlockGasCost`: Minimum gas cost for a block
* `maxBlockGasCost`: Maximum gas cost for a block
* `blockGasCostStep`: Increment for block gas cost
### Update Fee Configuration
Only admin addresses can update the fee configuration:
1. Ensure you're connected with the admin address in your wallet
2. Expand **setFeeConfig** in the FeeManager contract
3. Fill in the new fee parameters:
```
gasLimit: 8000000
targetBlockRate: 2
minBaseFee: 25000000000
targetGas: 15000000
baseFeeChangeDenominator: 36
minBlockGasCost: 0
maxBlockGasCost: 1000000
blockGasCostStep: 200000
```
4. Click **transact**
5. Approve the transaction in your wallet
6. Wait for transaction confirmation
The new fee configuration takes effect immediately after the transaction is accepted.
## Example: Using Native Minter
### Mint Native Tokens
Only admin, manager, or enabled addresses can mint native tokens:
1. Expand the NativeMinter contract in **Deployed Contracts**
2. Click on **mintNativeCoin**
3. Fill in the parameters:
* `addr`: Recipient address (e.g., `0xB78cbAa319ffBD899951AA30D4320f5818938310`)
* `amount`: Amount to mint in wei (e.g., `1000000000000000000` for 1 token)
4. Click **transact**
5. Approve the transaction in your wallet
The minted tokens are credited directly to the recipient's balance by the EVM; no transfer from a sender address occurs.
### Check Minting Permissions
Anyone can check who has minting permissions:
1. Click **readAllowList** with an address parameter
2. Returns:
* `0`: No permission
* `1`: Enabled (can mint)
* `2`: Manager (can mint and manage enabled addresses)
* `3`: Admin (full control)
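`readAllowList` returns the role as a `uint256`. A minimal Python sketch (a hypothetical offline helper, assuming you already have the raw 32-byte hex word returned by an `eth_call`) for decoding that value into the role names above:

```python
# Map the uint256 returned by readAllowList onto role names.
# The numeric values follow the subnet-evm AllowList convention above.
ROLES = {0: "None", 1: "Enabled", 2: "Manager", 3: "Admin"}

def decode_role(call_result: str) -> str:
    """Decode the hex word returned by an eth_call to readAllowList."""
    role = int(call_result, 16)
    if role not in ROLES:
        raise ValueError(f"unexpected role value: {role}")
    return ROLES[role]

print(decode_role("0x" + "0" * 63 + "2"))  # Manager
```

The same decoding applies to every AllowList-style precompile, since they all share the `readAllowList` return convention.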
## Example: Managing Allow Lists
### Add Address to Allow List
Admins can add addresses to transaction or deployer allow lists:
1. Expand the AllowList contract
2. Use **setAdmin**, **setManager**, or **setEnabled**:
```
addr: 0x1234...5678
```
3. Click **transact**
4. Approve in wallet
### Remove Address from Allow List
1. Use **setNone** with the address:
```
addr: 0x1234...5678
```
2. Click **transact**
### Check Address Status
1. Click **readAllowList**:
```
addr: 0x1234...5678
```
2. Returns permission level (0-3)
## Best Practices
### Security
* **Never share private keys** for admin addresses
* **Use hardware wallets** for admin accounts when possible
* **Test on testnet first** before making changes on mainnet
* **Use multi-sig contracts** for critical admin operations
* **Document all changes** and announce them to validators
### Network Upgrades
When enabling precompiles via network upgrades:
1. **Announce upgrades** well in advance on social media and Discord
2. **Coordinate with validators** to ensure they update their nodes
3. **Use upgrade.json** to schedule precompile activation (see [Precompile Upgrades](/docs/avalanche-l1s/evm-configuration/precompile-upgrades))
4. **Test the upgrade** on a testnet first
5. **Monitor** the network after activation
### Troubleshooting
**Connection Issues:**
* Verify your wallet is connected to the correct network
* Check that the RPC URL is accessible
* Ensure you have native tokens for gas fees
**Transaction Failures:**
* Confirm you're using an admin/manager address
* Check that the precompile is enabled on your L1
* Verify parameter formats (addresses must be checksummed)
* Ensure sufficient gas limit
**Precompile Not Found:**
* Verify the precompile address is correct
* Confirm the precompile is activated in your genesis or upgrade.json
* Check that you're on the correct network
## Additional Resources
### Builder Hub Console Tools
* [Fee Manager Console](/console/l1-tokenomics/fee-manager) - Configure transaction fees
* [Reward Manager Console](/console/l1-tokenomics/reward-manager) - Manage fee distribution
* [Native Minter Console](/console/l1-tokenomics/native-minter) - Mint native tokens
* [Deployer Allowlist Console](/console/l1-access-restrictions/deployer-allowlist) - Control contract deployment
* [Transactor Allowlist Console](/console/l1-access-restrictions/transactor-allowlist) - Control transaction submission
### Documentation
* [Precompile Configuration](/docs/avalanche-l1s/evm-configuration/evm-l1-customization) - Overview of precompiles
* [Transaction Fees](/docs/avalanche-l1s/evm-configuration/transaction-fees) - Fee Manager and Reward Manager details
* [Tokenomics](/docs/avalanche-l1s/evm-configuration/tokenomics) - Native Minter details
* [Permissions](/docs/avalanche-l1s/evm-configuration/permissions) - Allowlist precompiles
* [Precompile Upgrades](/docs/avalanche-l1s/evm-configuration/precompile-upgrades) - Network upgrade process
* [AllowList Interface](/docs/avalanche-l1s/evm-configuration/allowlist) - Role-based access control
* [Subnet-EVM Contracts](https://github.com/ava-labs/subnet-evm/tree/master/contracts/contracts/interfaces) - Precompile interfaces
## Conclusion
For standard Avalanche L1 precompiles, **we strongly recommend using the [Builder Hub Developer Console tools](/console)** for the best experience. These tools provide:
* ✅ Guided workflows with validation
* ✅ No need to manage contract addresses or ABIs manually
* ✅ Integration with your Builder Hub account
* ✅ Support from the Builder Hub team
For custom implementations or advanced scenarios, the Remix IDE approach provides flexibility to interact with any contract at any address. This is useful for:
* Custom precompile implementations
* Testing and debugging
* Programmatic interactions
* Non-standard use cases
Whichever method you choose, always test on testnet first and follow security best practices when managing admin keys.
# Permissions
URL: /docs/avalanche-l1s/evm-configuration/permissions
Control access to contract deployment and transaction submission on your Avalanche L1 blockchain.
## Overview
The Subnet-EVM provides two powerful precompiles for managing permissions on your Avalanche L1 blockchain:
* **Contract Deployer Allowlist**: Control which addresses can deploy smart contracts
* **Transaction Allowlist**: Control which addresses can submit transactions
Both precompiles use the [AllowList interface](/docs/avalanche-l1s/evm-configuration/allowlist) to manage permissions with a consistent role-based system.
## Contract Deployer Allowlist
### Purpose
The Contract Deployer Allowlist allows you to maintain a controlled environment where only authorized addresses can deploy new smart contracts. This is particularly useful for:
* Maintaining a curated ecosystem of verified contracts
* Preventing malicious contract deployments
* Implementing KYC/AML requirements for contract deployers
### Configuration
This precompile is located at address `0x0200000000000000000000000000000000000000` and can be activated in your genesis file:
```json
{
"config": {
"contractDeployerAllowListConfig": {
"blockTimestamp": 0,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
}
```
By enabling this feature, you can define which addresses are allowed to deploy smart contracts and manage these permissions over time.
## Transaction Allowlist
### Purpose
The Transaction Allowlist enables you to control which addresses can submit transactions to your network. This is essential for:
* Creating fully permissioned networks
* Implementing KYC/AML requirements for users
* Controlling network access during testing or initial deployment
### Configuration
This precompile is located at address `0x0200000000000000000000000000000000000002` and can be activated in your genesis file:
```json
{
"config": {
"txAllowListConfig": {
"blockTimestamp": 0,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
}
```
By enabling this feature, you can define which addresses are allowed to submit transactions and manage these permissions over time.
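Both activation fragments share the same shape: a config key, an activation timestamp, and a list of initial admins. A small sketch (a hypothetical helper for generating genesis files offline, not part of subnet-evm) that produces either fragment:

```python
def allowlist_genesis_config(precompile_key, admin_addresses, activation_timestamp=0):
    """Build the genesis "config" fragment that activates an AllowList-style
    precompile. `precompile_key` is e.g. "contractDeployerAllowListConfig"
    or "txAllowListConfig"."""
    return {
        precompile_key: {
            "blockTimestamp": activation_timestamp,
            "adminAddresses": admin_addresses,
        }
    }

config = allowlist_genesis_config(
    "txAllowListConfig", ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
)
```

Merging the returned fragment into the genesis `"config"` object yields exactly the JSON shown above.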
## Permissions Management
Both precompiles use the [AllowList interface](/docs/avalanche-l1s/evm-configuration/allowlist) to manage permissions. This provides a consistent way to:
* Assign and revoke permissions
* Manage admin and manager roles
* Control who can deploy contracts or submit transactions
For detailed information about the role-based permission system and available functions, see the [AllowList interface documentation](/docs/avalanche-l1s/evm-configuration/allowlist).
## Implementation
The precompiles implement the following interface:
```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
interface IAllowList {
event RoleSet(uint256 indexed role, address indexed account, address indexed sender, uint256 oldRole);
// Set [addr] to have the admin role over the precompile contract.
function setAdmin(address addr) external;
// Set [addr] to be enabled on the precompile contract.
function setEnabled(address addr) external;
// Set [addr] to have the manager role over the precompile contract.
function setManager(address addr) external;
// Set [addr] to have no role for the precompile contract.
function setNone(address addr) external;
// Read the status of [addr].
function readAllowList(address addr) external view returns (uint256 role);
}
```
## Best Practices
1. **Initial Setup**: Always configure at least one admin address in the genesis file to ensure you can manage permissions after deployment.
2. **Role Management**:
* Use Admin roles sparingly and secure their private keys
* Assign Manager roles to trusted entities who need to manage user access
* Regularly audit the list of enabled addresses
3. **Security Considerations**:
* Keep private keys of admin addresses secure
* Implement a multi-sig wallet as an admin for additional security
* Maintain an off-chain record of role assignments
4. **Monitoring**:
* Monitor the `RoleSet` events to track permission changes
* Regularly audit the enabled addresses list
* Keep documentation of why each address was granted permissions
You can find the implementations in the subnet-evm repository:
* [Contract Deployer Allowlist](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/deployerallowlist/contract.go)
* [Transaction Allowlist](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/txallowlist/contract.go)
# Precompile Upgrades
URL: /docs/avalanche-l1s/evm-configuration/precompile-upgrades
Learn how to enable, disable, and configure precompiles in your Subnet-EVM.
Performing a network upgrade requires coordinating the upgrade network-wide. A network upgrade changes the rule set used to process and verify blocks, such that any node that upgrades incorrectly or fails to upgrade by the time that upgrade goes into effect may become out of sync with the rest of the network.
Any mistakes in configuring network upgrades or coordinating them on validators may cause the network to halt and recovering may be difficult.
Subnet-EVM precompiles can be individually enabled or disabled at a given timestamp as a network upgrade. Disabling a precompile prevents it from being called and destructs its storage, allowing it to be enabled again later with a new configuration if desired.
## Configuration File
These upgrades must be specified in a file named `upgrade.json`, placed in the same directory where `config.json` resides: `{chain-config-dir}/{blockchainID}/upgrade.json`. For example, the upgrade file for the `WAGMI Subnet` should be placed at `~/.avalanchego/configs/chains/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/upgrade.json`.
The content of `upgrade.json` should be formatted as follows:
```json
{
"precompileUpgrades": [
{
"[PRECOMPILE_NAME]": {
"blockTimestamp": "[ACTIVATION_TIMESTAMP]", // unix timestamp precompile should activate at
"[PARAMETER]": "[VALUE]" // precompile specific configuration options, eg. "adminAddresses"
}
}
]
}
```
An invalid `blockTimestamp` in an upgrade file causes the upgrade to fail. The `blockTimestamp` value must be a valid Unix timestamp that is in the *future* relative to the *head of the chain*. If the node encounters a `blockTimestamp` in the past, it will fail on startup.
## Disabling Precompiles
To disable a precompile, use the following format:
```json
{
  "precompileUpgrades": [
    {
      "[PRECOMPILE_NAME]": {
        "blockTimestamp": "[DEACTIVATION_TIMESTAMP]", // unix timestamp the precompile should deactivate at
        "disable": true
      }
    }
  ]
}
```
Each item in `precompileUpgrades` must specify exactly one precompile to enable or disable and the block timestamps must be in increasing order. Once an upgrade has been activated (a block after the specified timestamp has been accepted), it must always be present in `upgrade.json` exactly as it was configured at the time of activation (otherwise the node will refuse to start).
For safety, you should always treat `precompileUpgrades` as append-only.
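The two structural rules above (exactly one precompile per entry, timestamps in increasing order) can be checked offline before restarting a node. A sketch (a hypothetical validation helper, assuming timestamps are plain integers as in the examples below):

```python
import json

def check_precompile_upgrades(upgrade_json: str) -> None:
    """Raise ValueError if an upgrade.json violates the structural rules:
    each entry must name exactly one precompile, and blockTimestamps
    must be in increasing order."""
    upgrades = json.loads(upgrade_json)["precompileUpgrades"]
    previous_ts = None
    for i, entry in enumerate(upgrades):
        if len(entry) != 1:
            raise ValueError(f"entry {i}: must specify exactly one precompile")
        (config,) = entry.values()
        ts = config["blockTimestamp"]
        if previous_ts is not None and ts <= previous_ts:
            raise ValueError(f"entry {i}: timestamp {ts} is not increasing")
        previous_ts = ts
```

A check like this does not replace the node's own validation on startup, but it catches ordering mistakes before they can halt a validator.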
As a last resort, it is possible to abort or reconfigure a precompile upgrade that has not yet been activated, since the chain is still processing blocks using the prior rule set.
If aborting an upgrade becomes necessary, remove that precompile upgrade from the end of the list in `upgrade.json`. As long as the blockchain has not accepted a block with a timestamp past that upgrade's activation timestamp, the node will abort the upgrade.
## Example Configuration
Here's a complete example that demonstrates enabling and disabling precompiles:
```json
{
"precompileUpgrades": [
{
"feeManagerConfig": {
"blockTimestamp": 1668950000,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
},
{
"txAllowListConfig": {
"blockTimestamp": 1668960000,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
},
{
"feeManagerConfig": {
"blockTimestamp": 1668970000,
"disable": true
}
}
]
}
```
This example:
1. Enables the `feeManagerConfig` at timestamp `1668950000`
2. Enables `txAllowListConfig` at timestamp `1668960000`
3. Disables `feeManagerConfig` at timestamp `1668970000`
## Initial Precompile Configurations
Precompiles can be managed by privileged addresses that change their configurations and activate their effects. For example, the Fee Manager precompile (configured via `feeManagerConfig`) can have `adminAddresses` that are allowed to change the fee structure of the network:
```json
{
"precompileUpgrades": [
{
"feeManagerConfig": {
"blockTimestamp": 1668950000,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
]
}
```
In this example, only the specified address can change the network's fee structure. The admin must call the precompile to activate changes by sending a transaction with a new fee config.
### Initial Configurations Without Admin
Precompiles can also take effect immediately at the activation timestamp, without any admin addresses, by supplying an initial configuration. For example:
```json
{
"precompileUpgrades": [
{
"feeManagerConfig": {
"blockTimestamp": 1668950000,
"initialFeeConfig": {
"gasLimit": 20000000,
"targetBlockRate": 2,
"minBaseFee": 1000000000,
"targetGas": 100000000,
"baseFeeChangeDenominator": 48,
"minBlockGasCost": 0,
"maxBlockGasCost": 10000000,
"blockGasCostStep": 500000
}
}
}
]
}
```
It's still possible to add `adminAddresses` or `enabledAddresses` alongside these initial configurations. In this case, the precompile is activated with the initial configuration, and admin/enabled addresses can access the precompile contract normally.
If you want to change a precompile's initial configuration, you will need to first disable it and then activate it again with the new configuration.
## Verifying Upgrades
After creating or modifying `upgrade.json`, restart your node to load the changes. The node will print the chain configuration on startup, allowing you to verify the upgrade configuration:
```bash
INFO [08-15|15:09:36.772] <2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt Chain>
github.com/ava-labs/subnet-evm/eth/backend.go:155: Initialised chain configuration
config="{ChainID: 11111 Homestead: 0 EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0
Constantinople: 0 Petersburg: 0 Istanbul: 0, Muir Glacier: 0, Subnet EVM: 0, FeeConfig:
{\"gasLimit\":20000000,\"targetBlockRate\":2,\"minBaseFee\":1000000000,\"targetGas\
":100000000,\"baseFeeChangeDenominator\":48,\"minBlockGasCost\":0,\"maxBlockGasCost\
":10000000,\"blockGasCostStep\":500000}, AllowFeeRecipients: false, NetworkUpgrades: {\
"subnetEVMTimestamp\":0}, PrecompileUpgrade: {}, UpgradeConfig: {\"precompileUpgrades\":[{\"feeManagerConfig\":{\"adminAddresses\":[\"0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc\"],\"enabledAddresses\":null,\"blockTimestamp\":1668950000}},{\"txAllowListConfig\":{\"adminAddresses\":[\"0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc\"],\"enabledAddresses\":null,\"blockTimestamp\":1668960000}},{\"feeManagerConfig\":{\"adminAddresses\":null,\"enabledAddresses\":null,\"blockTimestamp\":1668970000,\"disable\":true}}]}, Engine: Dummy Consensus Engine}"
```
You can also verify precompile configurations using:
* [`eth_getActiveRulesAt`](/docs/api-reference/subnet-evm-api#eth_getactiverulesat) RPC method to check activated precompiles at a timestamp
* [`eth_getChainConfig`](/docs/api-reference/subnet-evm-api#eth_getchainconfig) RPC method to view the complete configuration including upgrades
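Both methods use the standard `json 2.0` RPC format. A sketch for building the request body to POST to your chain's RPC URL (the method names are taken from the links above; the helper itself is hypothetical):

```python
import json

def rpc_body(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request body for the verification methods above."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params if params is not None else [],
    })

print(rpc_body("eth_getChainConfig"))
```

Send the resulting body with a `content-type: application/json` header to your blockchain's RPC endpoint, as shown in the API reference.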
# Tokenomics
URL: /docs/avalanche-l1s/evm-configuration/tokenomics
Configure and manage the native token supply of your Avalanche L1 blockchain.
## Overview
The tokenomics of your Avalanche L1 blockchain is a crucial aspect that determines how value flows through your network. The Subnet-EVM provides powerful tools to manage your token economy:
* Initial token allocation in genesis
* Dynamic token minting through the Native Minter precompile
* Fee burning or redistribution mechanisms (via [Transaction Fees & Gas](/docs/avalanche-l1s/evm-configuration/transaction-fees))
## Initial Token Supply
When creating your Avalanche L1, you can configure the initial token distribution in the genesis file:
```json
{
"alloc": {
"0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": {
"balance": "0x3635C9ADC5DEA00000" // 1000 tokens (in wei)
},
"0x1234567890123456789012345678901234567890": {
"balance": "0x21E19E0C9BAB2400000" // 10000 tokens (in wei)
}
}
}
```
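The `balance` strings are wei values encoded as hex, and the native token is assumed to use the standard 18 decimals (1 token = 10^18 wei). A small sketch for computing them when writing an `alloc` section:

```python
WEI_PER_TOKEN = 10**18  # assumes the standard 18-decimal native token

def genesis_balance(tokens: int) -> str:
    """Hex-encoded wei balance for a genesis "alloc" entry."""
    return hex(tokens * WEI_PER_TOKEN)

print(genesis_balance(1000))   # 0x3635c9adc5dea00000
print(genesis_balance(10000))  # 0x21e19e0c9bab2400000
```

These match the two balances in the example above (hex digits are case-insensitive).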
Consider the following when planning initial allocation:
* Reserve tokens for validator rewards
* Allocate tokens for development and ecosystem growth
* Set aside tokens for future community initiatives
* Consider vesting schedules for team allocations
## Native Minter
### Purpose
The Native Minter precompile allows authorized addresses to mint additional tokens after network launch. This is useful for:
* Implementing programmatic token emission schedules
* Providing validator rewards
* Supporting ecosystem growth initiatives
* Implementing monetary policy
### Configuration
This precompile is located at address `0x0200000000000000000000000000000000000001` and can be activated in your genesis file:
```json
{
"config": {
"contractNativeMinterConfig": {
"blockTimestamp": 0,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
}
}
}
```
### Interface
```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;
interface INativeMinter {
event NativeCoinMinted(address indexed sender, address indexed recipient, uint256 amount);
// Mint [amount] number of native coins and send to [addr]
function mintNativeCoin(address addr, uint256 amount) external;
// IAllowList
event RoleSet(uint256 indexed role, address indexed account, address indexed sender, uint256 oldRole);
// Set [addr] to have the admin role over the precompile contract.
function setAdmin(address addr) external;
// Set [addr] to be enabled on the precompile contract.
function setEnabled(address addr) external;
// Set [addr] to have the manager role over the precompile contract.
function setManager(address addr) external;
// Set [addr] to have no role for the precompile contract.
function setNone(address addr) external;
// Read the status of [addr].
function readAllowList(address addr) external view returns (uint256 role);
}
```
The Native Minter precompile uses the [AllowList interface](/docs/avalanche-l1s/evm-configuration/allowlist) to restrict access to its minting functionality.
## Tokenomics Best Practices
1. **Initial Distribution**:
* Ensure fair distribution among stakeholders
* Reserve sufficient tokens for network operation
* Consider long-term sustainability
* Document allocation rationale
2. **Minting Policy**:
* Define clear minting guidelines
* Use multi-sig for admin control
* Implement transparent emission schedules
* Monitor total supply changes
3. **Supply Management**:
* Balance minting with burning mechanisms
* Consider implementing supply caps
* Monitor token velocity and distribution
* Plan for long-term sustainability
4. **Security Considerations**:
* Use multi-sig wallets for admin addresses
* Implement time-locks for large mints
* Regular audits of minting activity
* Monitor for unusual minting patterns
5. **Validator Incentives**:
* Design sustainable reward mechanisms
* Balance inflation with network security
* Consider validator stake requirements
* Plan for long-term validator participation
## Example Implementations
### Fixed Supply with Emergency Minting
```json
{
"config": {
"contractNativeMinterConfig": {
"blockTimestamp": 0,
"adminAddresses": ["MULTISIG_ADDRESS"],
"enabledAddresses": []
}
},
"alloc": {
"TREASURY": {"balance": "TOTAL_SUPPLY"},
"VALIDATOR_REWARDS": {"balance": "VALIDATOR_ALLOCATION"},
"ECOSYSTEM_FUND": {"balance": "ECOSYSTEM_ALLOCATION"}
}
}
```
### Programmatic Emission Schedule
```solidity
contract EmissionSchedule {
    INativeMinter public constant NATIVE_MINTER = INativeMinter(0x0200000000000000000000000000000000000001);
    uint256 public constant EMISSION_RATE = 1000 * 1e18; // 1000 tokens per day
    uint256 public constant EMISSION_DURATION = 365 days;
    uint256 public immutable startTime;
    uint256 public nextMintDay; // first day index that has not been minted yet

    constructor() {
        startTime = block.timestamp;
    }

    function mintDailyEmission() external {
        require(block.timestamp < startTime + EMISSION_DURATION, "Emission ended");
        uint256 currentDay = (block.timestamp - startTime) / 1 days;
        require(currentDay >= nextMintDay, "Already minted today"); // limit to one mint per day
        nextMintDay = currentDay + 1;
        NATIVE_MINTER.mintNativeCoin(address(this), EMISSION_RATE);
        // Distribution logic here
    }
}
```
### Validator Reward Contract
```solidity
contract ValidatorRewards {
    INativeMinter public constant NATIVE_MINTER = INativeMinter(0x0200000000000000000000000000000000000001);
    uint256 public constant REWARD_RATE = 10 * 1e18; // 10 tokens distributed per call

    // NOTE: add access control in production; any caller can trigger minting here.
    function distributeRewards(address[] calldata validators) external {
        require(validators.length > 0, "No validators"); // avoid division by zero
        uint256 reward = REWARD_RATE / validators.length;
        for (uint256 i = 0; i < validators.length; i++) {
            NATIVE_MINTER.mintNativeCoin(validators[i], reward);
        }
    }
}
```
You can find the Native Minter implementation in the [subnet-evm repository](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/nativeminter/contract.go).
# Transaction Fees & Validator Rewards
URL: /docs/avalanche-l1s/evm-configuration/transaction-fees
Configure fee parameters and reward mechanisms for your Avalanche L1 blockchain.
## Overview
The Subnet-EVM provides two powerful precompiles for managing transaction fees and rewards:
* **Fee Manager**: Configure dynamic fee parameters and gas costs
* **Reward Manager**: Control how transaction fees are distributed or burned
Both precompiles use the [AllowList interface](/docs/avalanche-l1s/evm-configuration/allowlist) to restrict access to their functionality.
## Fee Manager
### Purpose
The Fee Manager allows you to configure the parameters of the dynamic fee algorithm on-chain. This gives you control over:
* Gas limits and target block rates
* Base fee parameters
* Block gas cost parameters
### Configuration
This precompile is located at address `0x0200000000000000000000000000000000000003` and can be activated in your genesis file:
```json
{
"config": {
"feeManagerConfig": {
"blockTimestamp": 0,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"],
"initialFeeConfig": {
"gasLimit": 20000000,
"targetBlockRate": 2,
"minBaseFee": 1000000000,
"targetGas": 100000000,
"baseFeeChangeDenominator": 48,
"minBlockGasCost": 0,
"maxBlockGasCost": 10000000,
"blockGasCostStep": 500000
}
}
}
}
```
### Fee Parameters
The Fee Manager allows configuration of the following parameters:
| Parameter | Description | Recommended Range |
| ------------------------ | -------------------------------------------- | ----------------- |
| gasLimit | Maximum gas allowed per block | 8M - 100M |
| targetBlockRate | Target time between blocks (seconds) | 2 - 10 |
| minBaseFee | Minimum base fee (in wei) | 25 - 500 gwei |
| targetGas | Target gas spending over the last 10 seconds | 5M - 50M |
| baseFeeChangeDenominator | Controls how quickly base fee changes | 8 - 1000 |
| minBlockGasCost | Minimum gas cost for a block | 0 - 1B |
| maxBlockGasCost | Maximum gas cost for a block | > minBlockGasCost |
| blockGasCostStep | How quickly block gas cost changes | \< 5M |
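The numeric ranges in the table are recommendations, but a few relationships are structural (for example, `maxBlockGasCost` must not be below `minBlockGasCost`). A sketch of offline sanity checks (a hypothetical helper for validating a fee config dict before submitting it):

```python
REQUIRED_KEYS = [
    "gasLimit", "targetBlockRate", "minBaseFee", "targetGas",
    "baseFeeChangeDenominator", "minBlockGasCost",
    "maxBlockGasCost", "blockGasCostStep",
]

def fee_config_problems(cfg: dict) -> list:
    """Return a list of structural problems with a fee config dict."""
    problems = [f"missing {k}" for k in REQUIRED_KEYS if k not in cfg]
    if problems:
        return problems
    if cfg["targetBlockRate"] <= 0:
        problems.append("targetBlockRate must be positive")
    if cfg["baseFeeChangeDenominator"] <= 0:
        problems.append("baseFeeChangeDenominator must be positive")
    if cfg["maxBlockGasCost"] < cfg["minBlockGasCost"]:
        problems.append("maxBlockGasCost must be >= minBlockGasCost")
    return problems

# The genesis example above passes all structural checks.
example = {
    "gasLimit": 20_000_000, "targetBlockRate": 2,
    "minBaseFee": 1_000_000_000, "targetGas": 100_000_000,
    "baseFeeChangeDenominator": 48, "minBlockGasCost": 0,
    "maxBlockGasCost": 10_000_000, "blockGasCostStep": 500_000,
}
assert fee_config_problems(example) == []
```

Checking a config offline is cheaper than discovering a bad parameter set after `setFeeConfig` has taken effect.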
### Access Control and Additional Features
The FeeManager precompile uses the [AllowList interface](/docs/avalanche-l1s/evm-configuration/allowlist) to restrict access to its functionality.
In addition to the AllowList interface, the FeeManager adds the following capabilities:
* `getFeeConfig`: retrieves the current dynamic fee config
* `getFeeConfigLastChangedAt`: retrieves the timestamp of the last block where the fee config was updated
* `setFeeConfig`: sets the dynamic fee config on chain. This function can only be called by an Admin, Manager or Enabled address.
* `FeeConfigChanged`: an event that is emitted when the fee config is updated. Topics include the sender, the old fee config, and the new fee config.
You can also get the fee configuration at a block with the `eth_feeConfig` RPC method. For more information see [here](/docs/api-reference/subnet-evm-api#eth_feeconfig).
## Reward Manager
### Purpose
The Reward Manager allows you to control how transaction fees are handled in your network. You can:
* Send fees to a specific address (e.g., treasury)
* Allow validators to collect fees
* Burn fees entirely
### Configuration
This precompile is located at address `0x0200000000000000000000000000000000000004` and can be activated in your genesis file:
```json
{
"config": {
"rewardManagerConfig": {
"blockTimestamp": 0,
"adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"],
"initialRewardConfig": {
// Choose one of:
"allowFeeRecipients": true, // Allow validators to collect fees
"rewardAddress": "0x...", // Send fees to specific address
// Empty config = burn fees
}
}
}
}
```
### Reward Mechanisms
The Reward Manager supports three mutually exclusive mechanisms:
1. **Validator Fee Collection** (`allowFeeRecipients`)
* Validators can specify their own fee recipient addresses
* Fees go to the block producer's chosen address
* Good for incentivizing network participation
2. **Fixed Reward Address** (`rewardAddress`)
* All fees go to a single specified address
* Can be a contract or EOA
* Useful for treasury or DAO-controlled fee collection
3. **Fee Burning** (default)
* All transaction fees are burned
* Reduces total token supply over time
* Similar to Ethereum's EIP-1559
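Because the three mechanisms are mutually exclusive, any `initialRewardConfig` fragment maps to exactly one of them. A small sketch of that mapping (a hypothetical helper, not part of subnet-evm):

```python
def reward_mechanism(initial_reward_config: dict) -> str:
    """Classify an initialRewardConfig into one of the three
    mutually exclusive mechanisms described above."""
    if initial_reward_config.get("allowFeeRecipients"):
        return "validator fee collection"
    if initial_reward_config.get("rewardAddress"):
        return "fixed reward address"
    return "fee burning"  # empty config: all transaction fees are burned

print(reward_mechanism({}))  # fee burning
```

The same precedence applies at runtime: calling `allowFeeRecipients()`, `setRewardAddress(addr)`, or `disableRewards()` on the Reward Manager switches the network between these three states.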
## Implementation
### Fee Manager Interface
```solidity
interface IFeeManager {
struct FeeConfig {
uint256 gasLimit;
uint256 targetBlockRate;
uint256 minBaseFee;
uint256 targetGas;
uint256 baseFeeChangeDenominator;
uint256 minBlockGasCost;
uint256 maxBlockGasCost;
uint256 blockGasCostStep;
}
event FeeConfigChanged(address indexed sender, FeeConfig oldFeeConfig, FeeConfig newFeeConfig);
function setFeeConfig(
uint256 gasLimit,
uint256 targetBlockRate,
uint256 minBaseFee,
uint256 targetGas,
uint256 baseFeeChangeDenominator,
uint256 minBlockGasCost,
uint256 maxBlockGasCost,
uint256 blockGasCostStep
) external;
function getFeeConfig() external view returns (
uint256 gasLimit,
uint256 targetBlockRate,
uint256 minBaseFee,
uint256 targetGas,
uint256 baseFeeChangeDenominator,
uint256 minBlockGasCost,
uint256 maxBlockGasCost,
uint256 blockGasCostStep
);
function getFeeConfigLastChangedAt() external view returns (uint256 blockNumber);
}
```
### Reward Manager Interface
```solidity
interface IRewardManager {
event RewardAddressChanged(
address indexed sender,
address indexed oldRewardAddress,
address indexed newRewardAddress
);
event FeeRecipientsAllowed(address indexed sender);
event RewardsDisabled(address indexed sender);
function setRewardAddress(address addr) external;
function allowFeeRecipients() external;
function disableRewards() external;
function currentRewardAddress() external view returns (address rewardAddress);
function areFeeRecipientsAllowed() external view returns (bool isAllowed);
}
```
## Best Practices
1. **Reward Management**:
* Choose reward mechanism based on network goals
* Consider using a multi-sig or DAO as reward address
* Monitor fee collection and distribution
* Keep documentation of fee policy changes
2. **Security Considerations**:
* Use multi-sig for admin addresses
* Test fee changes on testnet first
* Monitor events for unauthorized changes
* Have a plan for fee parameter adjustments
You can find the implementations in the subnet-evm repository:
* [Fee Manager](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/feemanager/contract.go)
* [Reward Manager](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/rewardmanager/contract.go)
# WarpMessenger Precompile - Technical Details
URL: /docs/avalanche-l1s/evm-configuration/warpmessenger
Technical documentation for the WarpMessenger precompile implementation in subnet-evm.
# Integrating Avalanche Warp Messaging into the EVM
Avalanche Warp Messaging offers a basic primitive to enable Cross-L1 communication on the Avalanche Network.
It is intended to allow communication between arbitrary Custom Virtual Machines (including, but not limited to, Subnet-EVM and Coreth).
## How does Avalanche Warp Messaging Work?
Avalanche Warp Messaging uses BLS Multi-Signatures with Public-Key Aggregation where every Avalanche validator registers a public key alongside its NodeID on the Avalanche P-Chain.
Every node tracking an Avalanche L1 has read access to the Avalanche P-Chain. This provides weighted sets of BLS Public Keys that correspond to the validator sets of each L1 on the Avalanche Network. Avalanche Warp Messaging provides a basic primitive for signing and verifying messages between L1s: the receiving network can verify whether an aggregation of signatures from a set of source L1 validators represents a threshold of stake large enough for the receiving network to process the message.
For more details on Avalanche Warp Messaging, see the AvalancheGo [Warp README](https://docs.avax.network/build/cross-chain/awm/deep-dive).
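The stake-weight threshold check at the heart of this scheme can be sketched in a few lines. This is a simplified model with made-up names; real verification in AvalancheGo also checks the BLS aggregate signature and the canonical ordering of the validator set.

```python
# Simplified model of the Warp quorum check: a signed message is accepted
# only if the signers' combined stake weight meets a threshold fraction of
# the source validator set's total weight. (Function and parameter names are
# illustrative; real verification also checks the BLS aggregate signature.)
def meets_quorum(signer_weights, total_weight, quorum_num, quorum_den):
    signed_weight = sum(signer_weights)
    # Integer cross-multiplication avoids floating-point error:
    # signed/total >= num/den  <=>  signed*den >= total*num
    return signed_weight * quorum_den >= total_weight * quorum_num

# 3 of 4 equal-weight validators sign, against a 67% threshold:
print(meets_quorum([100, 100, 100], 400, 67, 100))  # True (75% of stake signed)
```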
### Flow of Sending / Receiving a Warp Message within the EVM
The Avalanche Warp Precompile enables this flow to send a message from blockchain A to blockchain B:
1. Call the Warp Precompile `sendWarpMessage` function with the arguments for the `UnsignedMessage`
2. Warp Precompile emits an event / log containing the `UnsignedMessage` specified by the caller of `sendWarpMessage`
3. Network accepts the block containing the `UnsignedMessage` in the log, so that validators are willing to sign the message
4. An off-chain relayer queries the validators for their signatures of the message and aggregates the signatures to create a `SignedMessage`
5. The off-chain relayer encodes the `SignedMessage` as the [predicate](#predicate-encoding) in the AccessList of a transaction to deliver on blockchain B
6. The transaction is delivered on blockchain B, the signature is verified prior to executing the block, and the message is accessible via the Warp Precompile's `getVerifiedWarpMessage` during the execution of that transaction
### Warp Precompile
The Warp Precompile is broken down into three functions defined in the Solidity interface file [here](https://github.com/ava-labs/subnet-evm/blob/1ab7114c339f866b65cc02dfd586b2ed9041dd0b/contracts/contracts/interfaces/IWarpMessenger.sol).
#### sendWarpMessage
`sendWarpMessage` is used to send a verifiable message. Calling this function results in sending a message with the following contents:
* `SourceChainID` - blockchainID of the sourceChain on the Avalanche P-Chain
* `SourceAddress` - the `msg.sender` that calls `sendWarpMessage`, encoded as a 32 byte value
* `Payload` - `payload` argument specified in the call to `sendWarpMessage` emitted as the unindexed data of the resulting log
Calling this function will issue a `SendWarpMessage` event from the Warp Precompile. Since the EVM limits the number of topics to 4 (including the EventID), this event includes only the topics expected to be most useful when filtering messages emitted by the Warp Precompile.
Specifically, the `payload` is not emitted as a topic because each topic must be encoded as a hash. Instead, each available topic is used for values that maximize the filtering possible on emitted Warp Messages.
Additionally, the `SourceChainID` is excluded because anyone parsing the chain can be expected to already know the blockchainID. Therefore, the `SendWarpMessage` event includes the indexable attributes:
* `sender`
* The `messageID` of the unsigned message (sha256 of the unsigned message)
The actual `message` is the entire [Avalanche Warp Unsigned Message](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/unsigned_message.go#L14) including an [AddressedCall](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm/warp/payload#readme). The unsigned message is emitted as the unindexed data in the log.
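As a quick illustration of the `messageID` topic described above: it is simply the sha256 digest of the serialized unsigned message. The byte layout used here is a placeholder, not the real Warp codec.

```python
import hashlib

# Illustrative only: the SendWarpMessage log indexes the messageID, defined
# above as the sha256 of the serialized unsigned message. The placeholder
# bytes below stand in for the real Warp unsigned-message serialization.
def message_id(unsigned_message_bytes: bytes) -> bytes:
    return hashlib.sha256(unsigned_message_bytes).digest()

placeholder = b"\x00\x01" + b"source-chain-id" + b"payload"
print(message_id(placeholder).hex())  # 32-byte digest, hex-encoded
```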
#### getVerifiedWarpMessage
`getVerifiedWarpMessage` is used to read the contents of the delivered Avalanche Warp Message into the expected format.
It returns the message, if present, along with a boolean indicating whether a message is present.
To use this function, the transaction must include the signed Avalanche Warp Message encoded in the [predicate](#predicate-encoding) of the transaction. Prior to executing a block, the VM iterates through transactions and pre-verifies all predicates. If a transaction's predicate is invalid, the transaction is considered invalid to include in the block and is dropped.
This leads to the following advantages:
1. The EVM execution does not need to verify the Warp Message at runtime (no signature verification or external calls to the P-Chain)
2. The EVM can deterministically re-execute and re-verify blocks assuming the predicate was verified by the network (e.g., in bootstrapping)
This pre-verification is performed using the ProposerVM Block header during [block verification](https://github.com/ava-labs/subnet-evm/blob/1ab7114c339f866b65cc02dfd586b2ed9041dd0b/plugin/evm/block.go#L220) and [block building](https://github.com/ava-labs/subnet-evm/blob/1ab7114c339f866b65cc02dfd586b2ed9041dd0b/miner/worker.go#L200).
#### getBlockchainID
`getBlockchainID` returns the blockchainID of the blockchain that the VM is running on.
This is different from the conventional Ethereum ChainID registered to [ChainList](https://chainlist.org/).
The `blockchainID` in Avalanche refers to the txID that created the blockchain on the Avalanche P-Chain ([docs](https://docs.avax.network/specs/platform-transaction-serialization#unsigned-create-chain-tx)).
### Predicate Encoding
Avalanche Warp Messages are encoded as a signed Avalanche [Warp Message](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/message.go) where the [UnsignedMessage](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/unsigned_message.go)'s payload includes an [AddressedPayload](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/payload/payload.go).
Since the predicate is encoded into the [Transaction Access List](https://eips.ethereum.org/EIPS/eip-2930), it is packed into 32 byte hashes intended to declare storage slots that should be pre-warmed into the cache prior to transaction execution.
Therefore, we use the [Predicate Utils](https://github.com/ava-labs/subnet-evm/blob/master/predicate/Predicate.md) package to encode the actual byte slice of size N into the access list.
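The general shape of that packing can be sketched as follows, assuming a delimiter-then-zero-pad scheme; consult the Predicate Utils documentation for the authoritative encoding and delimiter value.

```python
# Sketch of packing an arbitrary byte slice into the 32-byte "storage slot"
# entries of an EIP-2930 access list. Assumed scheme (check Predicate Utils
# for the real one): append a delimiter byte, zero-pad to a 32-byte multiple,
# then split into 32-byte chunks. The delimiter lets the decoder distinguish
# trailing zeros in the data from padding.
CHUNK = 32
DELIMITER = b"\xff"

def pack_predicate(data: bytes) -> list[bytes]:
    padded = data + DELIMITER
    if len(padded) % CHUNK != 0:
        padded += b"\x00" * (CHUNK - len(padded) % CHUNK)
    # Split into the 32-byte entries stored in the access list.
    return [padded[i:i + CHUNK] for i in range(0, len(padded), CHUNK)]

slots = pack_predicate(b"signed-warp-message-bytes")
print(len(slots), len(slots[0]))  # 1 32
```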
### Performance Optimization: Primary Network to Avalanche L1
The Primary Network has a large validator set compared to most Subnets and L1s, which makes Warp signature collection and verification from the entire Primary Network validator set costly. All Subnets and L1s track at least one blockchain of the Primary Network, so we can instead optimize this by using the validator set of the receiving L1 instead of the Primary Network for certain Warp messages.
#### Subnets
Recall that Avalanche Subnet validators must also validate the Primary Network, so they track all of the blockchains in the Primary Network (the X, C, and P-Chains).
When an Avalanche Subnet receives a message from a blockchain on the Primary Network, we use the validator set of the receiving Subnet instead of the entire network when validating the message.
Sending messages from the X, C, or P-Chain remains unchanged.
However, when the Subnet receives the message, the verification semantics change to the following:
1. Read the `SourceChainID` of the signed message
2. Look up the `SubnetID` that validates `SourceChainID`. In this case it will be the Primary Network's `SubnetID`
3. Look up the validator set of the Subnet (instead of the Primary Network) and the registered BLS Public Keys of the Subnet validators at the P-Chain height specified by the ProposerVM header
4. Continue Warp Message verification using the validator set of the Subnet instead of the Primary Network
This means that Primary Network to Subnet communication only requires a threshold of stake on the receiving Subnet to sign the message instead of a threshold of stake for the entire Primary Network.
Since the security of the Subnet is provided by trust in its validator set, requiring a threshold of stake from the receiving Subnet's validator set instead of the whole Primary Network does not meaningfully change the security of the receiving L1.
Note: this special case is ONLY applied during Warp Message verification. The message sent by the Primary Network will still contain the blockchainID of the Primary Network chain that sent the message as the sourceChainID and signatures will be served by querying the source chain directly.
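The validator-set substitution above can be modeled in a few lines. The data structures and the Primary Network `SubnetID` constant here are illustrative placeholders, not AvalancheGo's actual types.

```python
# Pseudo-model of the verification shortcut: when the source chain belongs to
# the Primary Network, the receiving Subnet verifies the message against its
# own validator set instead. (Dict-based lookups stand in for P-Chain state.)
PRIMARY_NETWORK_SUBNET_ID = "PRIMARY"  # placeholder constant

def validator_set_for_verification(source_chain_id, receiving_subnet_id,
                                   chain_to_subnet, validator_sets):
    subnet_id = chain_to_subnet[source_chain_id]   # step 2: chain -> subnet
    if subnet_id == PRIMARY_NETWORK_SUBNET_ID:     # Primary Network source?
        subnet_id = receiving_subnet_id            # steps 3-4: substitute
    return validator_sets[subnet_id]

chain_to_subnet = {"C-Chain": PRIMARY_NETWORK_SUBNET_ID}
sets = {"receiver-subnet": ["NodeID-1", "NodeID-2"]}
print(validator_set_for_verification("C-Chain", "receiver-subnet",
                                     chain_to_subnet, sets))
```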
#### L1s
Avalanche L1s are only required to sync the P-Chain, but are not required to validate the Primary Network. Therefore, **for L1s, this optimization only applies to Warp messages sent by the P-Chain.** The rest of the description of this optimization in the above section applies to L1s.
Note that **in order to properly verify messages from the C-Chain and X-Chain, the Warp precompile must be configured with `requirePrimaryNetworkSigners` set to `true`**. Otherwise, we will attempt to verify the message signature against the receiving L1's validator set, which is not required to track the C-Chain or X-Chain, and therefore will not in general be able to produce a valid Warp message.
## Design Considerations
### Re-Processing Historical Blocks
Avalanche Warp Messaging depends on the Avalanche P-Chain state at the P-Chain height specified by the ProposerVM block header.
Verifying a message requires looking up the validator set of the source L1 on the P-Chain. To support this, Avalanche Warp Messaging uses the ProposerVM header, which includes the P-Chain height it was issued at as the canonical point to lookup the source L1's validator set.
This means verifying the Warp Message and therefore the state transition on a block depends on state that is external to the blockchain itself: the P-Chain.
The Avalanche P-Chain tracks only its current state and reverse diff layers (reversing the changes from past blocks) in order to re-calculate the validator set at a historical height. This means calculating a very old validator set that is used to verify a Warp Message in an old block may become prohibitively expensive.
Therefore, we need a heuristic to ensure that the network can correctly re-process old blocks (note: re-processing old blocks is a requirement to perform bootstrapping and is used in some VMs to serve or verify historical data).
As a result, we require that the block itself provides a deterministic hint which determines which Avalanche Warp Messages were considered valid/invalid during the block's execution. This ensures that we can always re-process blocks and use the hint to decide whether an Avalanche Warp Message should be treated as valid/invalid even after the P-Chain state that was used at the original execution time may no longer support fast lookups.
To provide that hint, we've explored two designs:
1. Include a predicate in the transaction to ensure any referenced message is valid
2. Append the results of checking whether a Warp Message is valid/invalid to the block data itself
The current implementation uses option (1).
The original reason for this was that the notion of predicates for precompiles was designed with Shared Memory in mind. In the case of shared memory, there is no canonical "P-Chain height" in the block which determines whether or not Avalanche Warp Messages are valid.
Instead, the VM interprets a shared memory import operation as valid as soon as the UTXO is available in shared memory. This means that if it were up to the block producer to staple the valid/invalid results of whether or not an attempted atomic operation should be treated as valid, a byzantine block producer could arbitrarily report that such atomic operations were invalid and cause a griefing attack to burn the gas of users that attempted to perform an import.
Therefore, a transaction specified predicate is required to implement the shared memory precompile to prevent such a griefing attack.
In contrast, Avalanche Warp Messages are validated within the context of an exact P-Chain height. Therefore, if a block producer attempted to lie about the validity of such a message, the network would interpret that block as invalid.
### Guarantees Offered by Warp Precompile vs. Built on Top
#### Guarantees Offered by Warp Precompile
The Warp Precompile was designed with the intention of minimizing the trusted computing base for the VM as much as possible. Therefore, it makes several tradeoffs which encourage users to use protocols built ON TOP of the Warp Precompile itself as opposed to directly using the Warp Precompile.
The Warp Precompile itself provides ONLY the following ability:
* Emit a verifiable message from (Address A, Blockchain A) to (Address B, Blockchain B) that can be verified by the destination chain
#### Explicitly Not Provided / Built on Top
The Warp Precompile itself does not provide any guarantees of:
* Eventual message delivery (may require re-send on blockchain A and additional assumptions about off-chain relayers and chain progress)
* Ordering of messages (requires ordering provided by a layer above)
* Replay protection (requires replay protection provided by a layer above)
# Add Validator
URL: /docs/avalanche-l1s/validator-manager/add-validator
Learn how to add validators to your Avalanche L1 blockchain.
### Register a Validator
Validator registration is initiated with a call to `initializeValidatorRegistration`. The sender of this transaction is registered as the validator owner. Churn limitations are checked - only a certain (configurable) percentage of the total weight is allowed to be added or removed in a (configurable) period of time. The `ValidatorManager` then constructs a `RegisterL1ValidatorMessage` Warp message to be sent to the P-Chain. Each validator registration request includes all of the information needed to identify the validator and its stake weight, as well as an `expiry` timestamp before which the `RegisterL1ValidatorMessage` must be delivered to the P-Chain. If the validator is not registered on the P-Chain before the `expiry`, then the validator may be removed from the contract state by calling `completeEndValidation`.
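The churn limitation mentioned above can be sketched as a rolling-period weight tracker. The class, names, and reset-on-period-boundary policy are illustrative, not the contract's exact algorithm.

```python
# Hypothetical sketch of the churn check: track total weight changed within a
# period and reject a change that would push churn past a configured
# percentage of total weight. Both additions and removals count toward churn.
class ChurnTracker:
    def __init__(self, max_churn_pct: int, period_seconds: int, total_weight: int):
        self.max_churn_pct = max_churn_pct
        self.period_seconds = period_seconds
        self.total_weight = total_weight
        self.period_start = 0
        self.churn = 0

    def try_change_weight(self, delta: int, now: int) -> bool:
        if now - self.period_start >= self.period_seconds:
            self.period_start, self.churn = now, 0       # new churn period
        proposed = self.churn + abs(delta)
        # Integer comparison of proposed/total > max_pct/100:
        if proposed * 100 > self.total_weight * self.max_churn_pct:
            return False                                 # would exceed allowed churn
        self.churn = proposed
        self.total_weight += delta
        return True

tracker = ChurnTracker(max_churn_pct=20, period_seconds=3600, total_weight=1000)
print(tracker.try_change_weight(150, now=10))   # True: 15% churn is within 20%
print(tracker.try_change_weight(100, now=20))   # False: cumulative churn too high
```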
The `RegisterL1ValidatorMessage` is delivered to the P-Chain as the Warp message payload of a `RegisterL1ValidatorTx`. Please see the transaction specification for validity requirements. The P-Chain then signs a `L1ValidatorRegistrationMessage` Warp message indicating that the specified validator was successfully registered on the P-Chain.
The `L1ValidatorRegistrationMessage` is delivered to the `ValidatorManager` via a call to `completeValidatorRegistration`. For PoS Validator Managers, staking rewards begin accruing at this time.
### (PoS only) Register a Delegator
`PoSValidatorManager` supports delegation to an actively staked validator as a way for users to earn staking rewards without having to validate the chain. Delegators pay a configurable percentage fee on any earned staking rewards to the host validator. A delegator may be registered by calling `initializeDelegatorRegistration` and providing an amount to stake. The delegator will be registered as long as churn restrictions are not violated. The delegator is reflected on the P-Chain by adjusting the validator's registered weight via a `SetL1ValidatorWeightTx`. The weight change acknowledgement is delivered to the `PoSValidatorManager` via an `L1ValidatorWeightMessage`, which is provided by calling `completeDelegatorRegistration`.
The P-Chain is only willing to sign an `L1ValidatorWeightMessage` for an active validator. Once a validator exit has been initiated (via a call to `initializeEndValidation`), the `PoSValidatorManager` must assume that the validator has been deactivated on the P-Chain, and will therefore not sign any further weight updates. Therefore, it is invalid to initiate adding or removing a delegator when the validator is in this state, though it may be valid to complete an already initiated delegator action, depending on the order of delivery to the P-Chain. If the delegator weight change was submitted (and a Warp signature on the acknowledgement retrieved) before the validator was removed, then the delegator action may be completed. Otherwise, the acknowledgement of the validation end must first be delivered before completing the delegator action.
# Validator Manager Contracts
URL: /docs/avalanche-l1s/validator-manager/contract
This page lists all available contracts for the Validator Manager.
# Validator Manager Contracts
The contracts in this directory define the Validator Manager used to manage Avalanche L1 validators, as defined in [ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets). They comply with [ACP-99](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/99-validatorsetmanager-contract), which specifies the standard minimal functionality that Validator Managers should implement. The contracts in this directory are related as follows:
```mermaid
classDiagram
<<abstract>> ACP99Manager
class ValidatorManager {
    +initializeValidatorSet()
    +completeValidatorRegistration() onlyOwner
    +completeValidatorRemoval() onlyOwner
    +completeValidatorWeightUpdate() onlyOwner
    +initiateValidatorRegistration() onlyOwner
    +initiateValidatorRemoval() onlyOwner
    +initiateValidatorWeightUpdate() onlyOwner
}
class PoAManager {
    +completeValidatorRegistration()
    +completeValidatorRemoval()
    +completeValidatorWeightUpdate()
    +initiateValidatorRegistration() onlyOwner
    +initiateValidatorRemoval() onlyOwner
    +initiateValidatorWeightUpdate() onlyOwner
    +transferValidatorManagerOwnership() onlyOwner
}
class StakingManager {
    +completeValidatorRegistration()
    +initiateValidatorRemoval()
    +completeValidatorRemoval()
    +completeDelegatorRegistration()
    +initiateDelegatorRemoval()
    +completeDelegatorRemoval()
    -_initiateValidatorRegistration()
    -_initiateDelegatorRegistration()
}
<<abstract>> StakingManager
class ERC20TokenStakingManager {
    +initiateValidatorRegistration()
    +initiateDelegatorRegistration()
}
class NativeTokenStakingManager {
    +initiateValidatorRegistration() payable
    +initiateDelegatorRegistration() payable
}
ACP99Manager <|-- ValidatorManager
ValidatorManager --o PoAManager : owner
ValidatorManager --o StakingManager : owner
StakingManager <|-- ERC20TokenStakingManager
StakingManager <|-- NativeTokenStakingManager
```
## A Note on Nomenclature
The contracts in this directory are only useful to L1s that have been converted from Subnets as described in ACP-77. As such, `l1`/`L1` is generally preferred over `subnet`/`Subnet` in the source code. The one major exception is that `subnetID` should be used to refer to both Subnets that have not been converted, and L1s that have. This is because an L1 must first be initialized as a Subnet by issuing a `CreateSubnetTx` on the P-Chain, the transaction hash of which becomes the `subnetID`. Rather than change the name and/or value of this identifier, it is simpler for both to remain static in perpetuity.
## Deploying
The validator manager system consists of a `ValidatorManager` and one of `NativeTokenStakingManager`, `ERC20TokenStakingManager`, or `PoAManager`. `ValidatorManager` is `Ownable`, and its owner should be set to the address of the other contract.
All of these are implemented as [upgradeable](https://github.com/OpenZeppelin/openzeppelin-contracts-upgradeable/blob/3d6a15108b50491ec3c51c03b32802c33e092a0f/contracts/proxy/utils/Initializable.sol#L56) contracts. There are numerous [guides](https://blog.chain.link/upgradable-smart-contracts/) for deploying upgradeable smart contracts, but the general steps are as follows:
1. Deploy the implementation contract
2. Deploy the proxy contract
3. Call the implementation contract's `initialize` function
* Each deployed contract requires different settings. For example, `ValidatorManagerSettings` specifies the churn parameters, while `StakingManagerSettings` specifies the staking and rewards parameters.
4. Initialize the validator set by calling `initializeValidatorSet` on `ValidatorManager`
* When a Subnet is first created on the P-Chain, it must be explicitly converted to an L1 via [`ConvertSubnetToL1Tx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#convertsubnettol1tx). The resulting `SubnetToL1ConversionMessage` ICM [message](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#subnettol1conversionmessage) is provided in the call to `initializeValidatorSet` to specify the starting validator set in the `ValidatorManager`. Regardless of the setup of the overall validator manager system, these initial validators are treated as PoA and are not eligible for staking rewards.
### Proof-of-Authority
PoA validator management is provided by `PoAManager`, which takes an `owner` in the call to `initialize`. Only the `owner` may initiate validator set changes, but anybody can complete a validator set change by providing the corresponding ICM message signed by the P-Chain.
> [!NOTE]
> PoA validator management can also be implemented by `ValidatorManager` on its own, by setting the `owner` to the desired admin address. Unlike `PoAManager`, only the admin is able to initiate or complete validator set changes.
### Proof-of-Stake
PoS validator management is provided by the abstract contract `StakingManager`, which has two concrete implementations: `NativeTokenStakingManager` and `ERC20TokenStakingManager`. `StakingManager` supports uptime-based validation rewards, as well as delegation to a chosen validator. The `uptimeBlockchainID` used to initialize the `StakingManager` **must** be validated by the L1 validator set that the contract manages. **There is no way to verify this from within the contract, so take care when setting this value.** This [state transition diagram](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/StateTransition.md) illustrates the relationship between validators and delegators. After deploying `StakingManager` and a proxy, call the `initialize` function, which takes a `StakingManagerSettings` as well as any implementation-specific arguments.
> [!NOTE]
> The `weightToValueFactor` field of `StakingManagerSettings` sets the factor used to convert between the weight that the validator is registered with on the P-Chain, and the value transferred to the contract as stake. This involves integer division, which may result in loss of precision. When selecting `weightToValueFactor`, it's important to consider the following:
>
> 1. If `weightToValueFactor` is near the denomination of the asset, then staking amounts on the order of 1 unit of the asset may cause the converted weight to round down to 0. This may impose a larger-than-expected minimum stake amount.
> * Ex: If USDC (denomination of 6) is used as the staking token and `weightToValueFactor` is 1e9, then any amount less than 1,000 USDC will round down to 0 and therefore be invalid.
> 2. Staked amounts up to `weightToValueFactor - 1` may be lost in the contract as dust, as the validator's registered weight is used to calculate the original staked amount.
> * Ex: `value=1001` and `weightToValueFactor=1e3`. The resulting weight will be `1`. Converting the weight back to a value results in `value=1000`.
> 3. The validator's weight is represented on the P-Chain as a `uint64`. `StakingManager` restricts values such that the calculated weight does not exceed the maximum value for that type.
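The two rounding pitfalls in the note above come down to plain integer division. The function names below are illustrative; the contract performs the same value-to-weight conversion with unsigned integer arithmetic.

```python
# Pitfalls of weightToValueFactor, reproduced with integer division.
def value_to_weight(value: int, factor: int) -> int:
    return value // factor  # truncating division, as in the contract

def weight_to_value(weight: int, factor: int) -> int:
    return weight * factor

# Pitfall 1: a 6-decimal token (USDC-style) with factor 1e9 means any stake
# under 1,000 tokens converts to weight 0 and is therefore invalid.
print(value_to_weight(999 * 10**6, 10**9))   # 0

# Pitfall 2: dust. value=1001 with factor=1e3 gives weight 1; converting the
# weight back yields only 1000, so 1 unit is stranded in the contract.
w = value_to_weight(1001, 10**3)
print(w, weight_to_value(w, 10**3))          # 1 1000
```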
### Migrating from Proof-of-Authority to Proof-of-Stake
See the [migration guide](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/PoAMigration.md) for details.
#### NativeTokenStakingManager
`NativeTokenStakingManager` allows permissionless addition and removal of validators that post the L1's native token as stake. Staking rewards are minted via the Native Minter Precompile, which is configured with a set of addresses with minting privileges. As such, the address that `NativeTokenStakingManager` is deployed to must be added as an admin to the precompile. This can be done by either calling the precompile's `setAdmin` method from an admin address, or setting the address in the Native Minter precompile settings in the chain's genesis (`config.contractNativeMinterConfig.adminAddresses`). There are a couple of ways to determine this address ahead of time: one is to calculate the resulting deployed address from the deployer's address and account nonce, `keccak256(rlp.encode(address, nonce))`. The second is to manually place the `NativeTokenStakingManager` bytecode at a particular address in the genesis, then set that address as an admin.
```json
{
"config" : {
...
"contractNativeMinterConfig": {
"blockTimestamp": 0,
"adminAddresses": [
"0xffffffffffffffffffffffffffffffffffffffff"
]
}
},
"alloc": {
"0xffffffffffffffffffffffffffffffffffffffff": {
"balance": "0x0",
"code": "",
"nonce": 1
}
}
}
```
#### ERC20TokenStakingManager
`ERC20TokenStakingManager` allows permissionless addition and removal of validators that post an ERC20 token as stake. The ERC20 is specified in the call to `initialize`, and must implement [`IERC20Mintable`](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/interfaces/IERC20Mintable.sol). Care should be taken to enforce that only authorized users are able to `mint` the ERC20 staking token.
## Usage
### Register a Validator
#### PoA
Validator registration is initiated with a call to `PoAManager.initiateValidatorRegistration`. Churn limitations are checked - only a certain (configurable) percentage of the total weight is allowed to be added or removed in a (configurable) period of time. The `ValidatorManager` then constructs a [`RegisterL1ValidatorMessage`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#registerl1validatormessage) ICM message to be sent to the P-Chain. Each validator registration request includes all of the information needed to identify the validator and its stake weight, as well as an `expiry` timestamp before which the `RegisterL1ValidatorMessage` must be delivered to the P-Chain. If the validator is not registered on the P-Chain before the `expiry`, then the validator may be removed from the contract state by calling `completeValidatorRemoval`.
The `RegisterL1ValidatorMessage` is delivered to the P-Chain as the ICM message payload of a `RegisterL1ValidatorTx`. Please see the transaction [specification](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#registerl1validatortx) for validity requirements. The P-Chain then signs a [`L1ValidatorRegistrationMessage`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#l1validatorregistrationmessage) ICM message indicating that the specified validator was successfully registered on the P-Chain.
The `L1ValidatorRegistrationMessage` is delivered by calling `ValidatorManager.completeValidatorRegistration`.
#### PoS
When registering a PoS validator, the same steps as the PoA case apply, with the only difference being that `StakingManager.initiateValidatorRegistration` and `StakingManager.completeValidatorRegistration` must be called instead.
The sender of the transaction that called `StakingManager.initiateValidatorRegistration` is registered as the validator owner. Only this owner can remove the validator.
Staking rewards begin accruing once `StakingManager.completeValidatorRegistration` is called.
### Remove a Validator
#### PoA
Validator exit is initiated with a call to `PoAManager.initiateValidatorRemoval`. The `ValidatorManager` constructs an [`L1ValidatorWeightMessage`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#l1validatorweightmessage) ICM message with the weight set to `0`. This is delivered to the P-Chain as the payload of a [`SetL1ValidatorWeightTx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#setl1validatorweighttx). The P-Chain acknowledges the validator exit by signing an `L1ValidatorRegistrationMessage` with `valid=0`, which is delivered by calling `ValidatorManager.completeValidatorRemoval`. The validation is then removed from the contract's state.
#### PoS
PoS validator removal follows the same flow as the PoA case, except that `StakingManager.initiateValidatorRemoval` and `StakingManager.completeValidatorRemoval` must be called instead.
There are two additional considerations:
* A [`ValidationUptimeMessage`](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/UptimeMessageSpec.md) ICM message may optionally be provided in the call to `StakingManager.initiateValidatorRemoval` in order to calculate the staking rewards; otherwise the latest received uptime will be used (see [(PoS only) Submit an Uptime Proof](#pos-only-submit-an-uptime-proof)). This proof may be requested directly from the L1 validators, which will provide it in a `ValidationUptimeMessage` ICM message. If the uptime is not sufficient to earn validation rewards, the call to `initiateValidatorRemoval` will fail. `forceInitiateValidatorRemoval` acts the same as `initiateValidatorRemoval`, but bypasses the uptime-based rewards check. Once `initiateValidatorRemoval` or `forceInitiateValidatorRemoval` is called, staking rewards cease accruing.
* Unlike with PoA, PoS validators are not able to decrease their weight. This can lead to a scenario in which a PoS validator manager with a high proportion of the L1's weight is not able to exit the validator set due to churn restrictions. Additional validators or delegators will need to first be registered to more evenly distribute weight across the L1's validator set.
Once acknowledgement from the P-Chain has been received via a call to `StakingManager.completeValidatorRemoval`, staking rewards are disbursed and stake is returned.
#### Disable a Validator Directly on the P-Chain
ACP-77 also provides a method to disable a validator without interacting with the L1 directly. The P-Chain transaction [`DisableL1ValidatorTx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#disablel1validatortx) disables the validator on the P-Chain. The disabled validator's weight will still count towards the L1's total weight.
Disabled L1 validators can re-activate at any time by increasing their balance with an `IncreaseBalanceTx`. Anyone can issue an `IncreaseBalanceTx` for any validator on the P-Chain. A disabled validator can only be completely and permanently removed from the validator set by a call to `initiateValidatorRemoval`.
### (PoS only) Register a Delegator
`StakingManager` supports delegation to an actively staked validator as a way for users to earn staking rewards without having to validate the chain. Delegators pay a configurable percentage fee on any earned staking rewards to the host validator. A delegator may be registered by calling `initiateDelegatorRegistration` and providing an amount to stake. The delegator will be registered as long as churn restrictions are not violated. The delegator is reflected on the P-Chain by adjusting the validator's registered weight via a [`SetL1ValidatorWeightTx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#setl1validatorweighttx). The weight change acknowledgement is delivered to the `StakingManager` via an [`L1ValidatorWeightMessage`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#l1validatorweightmessage), which is provided by calling `completeDelegatorRegistration`.
> [!NOTE]
> The P-Chain is only willing to sign an `L1ValidatorWeightMessage` for a validator in the L1's validator set. Once a validator exit has been initiated (via a call to `initiateValidatorRemoval`), the `StakingManager` must assume that the validator has been removed from the L1's validator set on the P-Chain, and therefore that the P-Chain will not sign any further weight updates. The contracts prohibit *initiating* adding or removing a delegator in between calls to `initiateValidatorRemoval` and `completeValidatorRemoval`. However, if the `L1ValidatorWeightMessage` pertaining to an already initiated delegator action is constructed *before* the validator is removed on the P-Chain, then the delegator action may be completed. Otherwise, `completeValidatorRemoval` must be called before completing the delegator action.
### (PoS only) Remove a Delegator
Delegator removal may be initiated by calling `initiateDelegatorRemoval`, as long as churn restrictions are not violated. Similar to `initiateValidatorRemoval`, an uptime proof may be provided to be used to determine delegator rewards eligibility. If no proof is provided, the latest known uptime will be used (see [(PoS only) Submit an Uptime Proof](#pos-only-submit-an-uptime-proof)). The validator's weight is updated on the P-Chain by the same mechanism used to register a delegator. The `L1ValidatorWeightMessage` from the P-Chain is delivered to the `StakingManager` in the call to `completeDelegatorRemoval`.
Either the delegator owner or the validator owner may initiate removing a delegator. This is to prevent the validator from being unable to remove itself due to churn limitations if it has too high a proportion of the Subnet's total weight due to delegator additions. The validator owner may only remove delegators after the minimum stake duration has elapsed.
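The eligibility rule above can be sketched as a simple check: the delegator owner may exit at any time (subject to churn restrictions), while the validator owner must wait out the delegator's minimum stake duration. This is an illustrative sketch only; the function and parameter names below are not part of the contract's ABI.

```typescript
// Sketch of who may initiate delegator removal, and when.
// Names are illustrative, not the StakingManager's actual interface.
function canInitiateDelegatorRemoval(
  caller: "delegator" | "validator",
  delegationStartTime: bigint,  // unix seconds when the delegation began
  minimumStakeDuration: bigint, // configured minimum, in seconds
  now: bigint,                  // current unix seconds
): boolean {
  // The delegator owner may always initiate (churn checks apply separately).
  if (caller === "delegator") return true;
  // The validator owner must wait for the minimum stake duration.
  return now >= delegationStartTime + minimumStakeDuration;
}
```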
### (PoS only) Submit an Uptime Proof
The [rewards calculator](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/interfaces/IRewardCalculator.sol) is a function of uptime seconds since the validator's start time. In addition to the proofs supplied in the calls to `initiateValidatorRemoval` and `initiateDelegatorRemoval` as described above, uptime proofs may also be submitted by calling `submitUptimeProof`. Unlike `initiateValidatorRemoval` and `initiateDelegatorRemoval`, `submitUptimeProof` may be called by anyone, decreasing the likelihood of a validation or delegation being unable to claim rewards that it deserved based on its actual uptime.
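To make the "function of uptime seconds" idea concrete, here is a minimal sketch of an uptime-proportional reward calculation. The linear formula and the `annualRateBips` parameter are assumptions for illustration only; they are not the actual `IRewardCalculator` implementation.

```typescript
// Simplified sketch of an uptime-proportional staking reward.
// The proportional-to-uptime formula and annualRateBips are
// illustrative assumptions, not the contract's implementation.
function calculateReward(
  stakeAmount: bigint,            // tokens staked
  uptimeSeconds: bigint,          // proven uptime since validation start
  stakingDurationSeconds: bigint, // total validation period, in seconds
  annualRateBips: bigint,         // hypothetical annual rate, in basis points
): bigint {
  const SECONDS_PER_YEAR = 31_536_000n;
  const BIPS = 10_000n;
  // Reward that full uptime over the whole duration would earn.
  const baseReward =
    (stakeAmount * annualRateBips * stakingDurationSeconds) /
    (BIPS * SECONDS_PER_YEAR);
  // Scale by the fraction of the duration the validator was actually up.
  return (baseReward * uptimeSeconds) / stakingDurationSeconds;
}
```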
### (PoS only) Collect Staking Rewards
#### Validation Rewards
Validation rewards are distributed in the call to `completeValidatorRemoval` on the `StakingManager`.
#### Delegation Rewards
Delegation rewards are distributed in the call to `completeDelegatorRemoval` on the `StakingManager`.
#### Delegation Fees
Delegation fees owed to validators are *not* distributed when the validation ends, in order to bound the amount of gas consumed in the call to `completeValidatorRemoval`. Instead, `claimDelegationFees` on the `StakingManager` may be called after the validation is completed.
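The fee-and-reward split between a validator and its delegators, expressed in basis points, can be sketched as follows. The 10,000-bips convention matches `MinimumDelegationFeeBips`; the function itself is an illustration, not the contract's code.

```typescript
// Sketch of how a delegation fee in basis points splits a delegator's
// gross reward between the host validator and the delegator.
function splitDelegationReward(
  grossReward: bigint,       // reward earned by the delegation
  delegationFeeBips: bigint, // validator's fee, in basis points (1/10,000)
): { validatorFee: bigint; delegatorReward: bigint } {
  const BIPS = 10_000n;
  const validatorFee = (grossReward * delegationFeeBips) / BIPS;
  return { validatorFee, delegatorReward: grossReward - validatorFee };
}
```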
# Customize Validator Manager
URL: /docs/avalanche-l1s/validator-manager/custom-validator-manager
Learn how to implement a custom Validator Manager on your Avalanche L1 blockchain.
The Validator Manager contracts provide a framework for managing validators on an Avalanche L1 blockchain, as defined in [ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets). `ValidatorManager.sol` is the top-level abstract contract that provides basic functionality. Developers can build upon it to implement custom logic for validator management tailored to their specific requirements.
## Building a Custom Validator Manager
To implement custom validator management logic, you can create a new contract that inherits from `ValidatorManager` or one of its derived contracts (`PoSValidatorManager`, `PoAValidatorManager`, etc.). By extending these contracts, you can override existing functions or add new ones to introduce your custom logic.
**Inherit from the Base Contract**
Decide which base contract suits your needs. If you require Proof-of-Stake functionality, consider inheriting from `PoSValidatorManager`. For Proof-of-Authority, `PoAValidatorManager` might be appropriate. If you need basic functionality, you can inherit directly from `ValidatorManager`.
```solidity
pragma solidity ^0.8.0;
import "./ValidatorManager.sol";
contract CustomValidatorManager is ValidatorManager {
// Your custom logic here
}
```
### Override Functions
Override existing functions to modify their behavior. Ensure that you adhere to the function signatures and access modifiers.
```solidity
function initializeValidatorRegistration() public override {
// Custom implementation
}
```
### Add Custom Functions
Introduce new functions that implement the custom logic required for your blockchain.
```solidity
function customValidatorLogic(address validator) public {
// Implement custom logic
}
```
### Modify Access Control
Adjust access control as needed using modifiers like `onlyOwner`, or by implementing your own access control mechanisms.
```solidity
modifier onlyValidator() {
require(isValidator(msg.sender), "Not a validator");
_;
}
```
### Integrate with the P-Chain
Ensure that your custom contract correctly constructs and handles Warp messages for interaction with the P-Chain, following the specifications in ACP-77.
### Testing
Thoroughly test your custom Validator Manager contract to ensure it behaves as expected and adheres to the required protocols.
### Example: Custom Reward Logic
Suppose you want to implement a custom reward distribution mechanism. You can create a new contract that inherits from `PoSValidatorManager` and override the reward calculation functions.
```solidity
pragma solidity ^0.8.0;
import "./PoSValidatorManager.sol";
contract CustomRewardValidatorManager is PoSValidatorManager {
function calculateValidatorReward(address validator) internal view override returns (uint256) {
// Implement custom reward calculation logic
return super.calculateValidatorReward(validator) * 2; // Example: double the reward
}
function calculateDelegatorReward(address delegator) internal view override returns (uint256) {
// Implement custom delegator reward calculation logic
return super.calculateDelegatorReward(delegator) / 2; // Example: halve the reward
}
}
```
### Considerations
* **Security Audits**: Custom contracts should be audited to ensure security and correctness.
* **Compliance with ACP-77**: Ensure your custom logic complies with the specifications of ACP-77 to maintain compatibility with Avalanche's protocols.
* **Upgradeable Contracts**: If you plan to upgrade your contract in the future, follow best practices for upgradeable contracts.
### Conclusion
Building on top of `ValidatorManager.sol` allows you to customize validator management to fit the specific needs of your Avalanche L1 blockchain. By extending and modifying the base contracts, you can implement custom staking mechanisms, reward distribution, and access control tailored to your application.
# PoA vs PoS
URL: /docs/avalanche-l1s/validator-manager/poa-vs-pos
Learn the differences between Proof of Authority and Proof of Stake Validator Manager contracts.
import { Steps, Step } from 'fumadocs-ui/components/steps';
## Overview
At a high level, the `ValidatorManager` abstract contract can be used to manage the validator set on the P-Chain.
* Proof of Authority networks are secured by validators who can be added or removed from the `PoAValidatorManager` implementation by an owner address.
* Proof of Stake networks are secured by validators who stake some type of tokens for a duration into an implementation of `PoSValidatorManager`.
Once a validator management transaction confirms, the `ValidatorManager` (which both `PoAValidatorManager` and `PoSValidatorManager` inherit from) emits a Warp message.
The Warp message is signed by a quorum of the current validator set and submitted to the P-Chain.
The P-Chain then adds, removes or modifies the validator in the registry using information from the warp message.
## Proof of Authority
There is one Proof of Authority implementation based on the `ValidatorManager` abstract contract: `PoAValidatorManager`.
In the `PoAValidatorManager` implementation, the owner of the contract can add and remove validators from the set. The owner can also set the weight of the validator.
The owner can either be a smart contract or an EOA.
### Rewards
By default, no rewards are distributed to validators in the `PoAValidatorManager` implementation. However, rewards can be distributed to validators by extending the `PoAValidatorManager` contract and adding the desired functionality.
### Parameters
The `PoAValidatorManager` has a couple of parameters that it can be initialized with to fit the needs of the network. These parameters include:
* `ChurnPeriodSeconds`: The length, in seconds, of the churn period over which validator set changes are measured.
* `MaximumChurnPercentage`: The maximum percentage of the validator set's total weight that can be changed within a single churn period.
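The interaction of these two parameters can be sketched as a simple guard: the weight changed within the current churn period, plus the pending change, must stay under the configured percentage of the set's total weight. This is an illustrative sketch assuming churn is tracked as total weight changed per period; it is not the contract's actual accounting logic.

```typescript
// Sketch of a churn guard combining ChurnPeriodSeconds (which defines
// the accounting window) and MaximumChurnPercentage.
function churnAllowed(
  weightChangedThisPeriod: bigint, // weight added/removed so far this period
  requestedChange: bigint,         // weight of the pending add/remove
  totalWeight: bigint,             // current total validator set weight
  maximumChurnPercentage: bigint,  // e.g. 20n for 20%
): boolean {
  const projected = weightChangedThisPeriod + requestedChange;
  // Compare projected churn against the allowed fraction of total weight.
  return projected * 100n <= totalWeight * maximumChurnPercentage;
}
```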
## Proof of Stake
Two implementations of Proof of Stake are offered, based on the `PoSValidatorManager` abstract contract:
* `NativeTokenStakingManager`: This contract is used for staking native tokens.
* `ERC20TokenStakingManager`: This contract is used for staking ERC20 tokens.
These are both permissionless implementations, meaning that anyone can stake tokens and become a validator.
The `PoSValidatorManager` abstract contract also supports delegation. Delegators can delegate their tokens to a validator, which increases the validator's weight, and in return the delegator receives a portion of the rewards.
### Rewards
Rewards are calculated using a `RewardCalculator` contract. The `PoSValidatorManager` distributes rewards to a validator based on its node's liveness in consensus, once the validator is removed from the set.
In the `NativeTokenStakingManager` implementation, rewards are minted through the `NativeMinter` precompile, on which the `NativeTokenStakingManager` address must be enabled.
In the `ERC20TokenStakingManager` implementation, rewards are minted through calling the ERC20 token's `mint` function.
### Parameters
The `PoSValidatorManager` has a number of parameters that it can be initialized with to fit the needs of the network. These parameters include:
* `MinimumStakeAmount`: The minimum amount of tokens required to stake.
* `MaximumStakeAmount`: The maximum amount of tokens that can be staked.
* `MinimumStakeDuration`: The minimum duration that tokens must be staked for.
* `MinimumDelegationFeeBips`: The minimum fee charged to delegators for delegating their tokens.
* `MaximumStakeMultiplier`: The maximum multiplier that can be applied to a validator's weight.
* `WeightToValueFactor`: The factor used to convert a validator's weight to a value.
* `RewardCalculator`: The address of the reward calculator contract.
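Two of these parameters lend themselves to a short sketch: `WeightToValueFactor` converts between a staked token value and the validator's P-Chain weight, and the stake bounds gate registration. The arithmetic below is illustrative; parameter semantics follow the list above, but this is not the contract's code.

```typescript
// Sketch of stake-to-weight conversion and stake-bound validation,
// based on the parameter descriptions above (illustrative only).
function valueToWeight(value: bigint, weightToValueFactor: bigint): bigint {
  // A validator staking `value` tokens is registered on the P-Chain
  // with weight value / WeightToValueFactor.
  return value / weightToValueFactor;
}

function stakeWithinBounds(
  stakeAmount: bigint,
  minimumStakeAmount: bigint,
  maximumStakeAmount: bigint,
): boolean {
  return stakeAmount >= minimumStakeAmount && stakeAmount <= maximumStakeAmount;
}
```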
## Customization
The `ValidatorManager` abstract contract is designed to be flexible and easily extensible to fit the needs of the network.
For example, the `PoSValidatorManager` abstract could be extended to include additional parameters or functionality. This can be done by creating a new contract that inherits from the `PoSValidatorManager` abstract and adding the desired functionality such as NFT staking, slashing, or additional rewards for certain actions.
# Remove Validator
URL: /docs/avalanche-l1s/validator-manager/remove-validator
Learn how to remove validators from your Avalanche L1 blockchain.
### Remove a Validator
Validator exit is initiated with a call to `initializeEndValidation` on the `ValidatorManager`. Only the validator owner may initiate exit. For `PoSValidatorManagers` a `ValidationUptimeMessage` Warp message may optionally be provided in order to calculate the staking rewards; otherwise the latest received uptime will be used (see [(PoS only) Submit an Uptime Proof](/docs/avalanche-l1s/validator-manager/contract#pos-only-submit-an-uptime-proof)). This proof may be requested directly from the L1 validators, which will provide it in a `ValidationUptimeMessage` Warp message. If the uptime is not sufficient to earn validation rewards, the call to `initializeEndValidation` will fail. `forceInitializeEndValidation` acts the same as `initializeEndValidation`, but bypasses the uptime-based rewards check. Once `initializeEndValidation` or `forceInitializeEndValidation` is called, staking rewards cease accruing for `PoSValidatorManagers`.
The `ValidatorManager` constructs an `L1ValidatorWeightMessage` Warp message with the weight set to `0`. This is delivered to the P-Chain as the payload of a `SetL1ValidatorWeightTx`. The P-Chain acknowledges the validator exit by signing an `L1ValidatorRegistrationMessage` with `valid=0`, which is delivered to the `ValidatorManager` by calling `completeEndValidation`. The validation is removed from the contract's state, and for `PoSValidatorManagers`, staking rewards are disbursed and stake is returned.
#### Disable a Validator Directly on the P-Chain
ACP-77 also provides a method to disable a validator without interacting with the L1 directly. The P-Chain transaction `DisableL1ValidatorTx` disables the validator on the P-Chain. The disabled validator's weight will still count towards the L1's total weight.
Disabled L1 validators can re-activate at any time by increasing their balance with an `IncreaseBalanceTx`. Anyone can call `IncreaseBalanceTx` for any validator on the P-Chain. A disabled validator can only be completely and permanently removed from the validator set by a call to `initializeEndValidation`.
### (PoS only) Remove a Delegator
Delegator removal may be initiated by calling `initializeEndDelegation`, as long as churn restrictions are not violated. Similar to `initializeEndValidation`, an uptime proof may be provided to be used to determine delegator rewards eligibility. If no proof is provided, the latest known uptime will be used (see [(PoS only) Submit an Uptime Proof](/docs/avalanche-l1s/validator-manager/contract#pos-only-submit-an-uptime-proof)). The validator's weight is updated on the P-Chain by the same mechanism used to register a delegator. The `L1ValidatorWeightMessage` from the P-Chain is delivered to the `PoSValidatorManager` in the call to `completeEndDelegation`.
Either the delegator owner or the validator owner may initiate removing a delegator. This is to prevent the validator from being unable to remove itself due to churn limitations if it has too high a proportion of the Subnet's total weight due to delegator additions. The validator owner may only remove delegators after the minimum stake duration has elapsed.
### (PoS only) Collect Staking Rewards
#### Submit an Uptime Proof
The rewards calculator is a function of uptime seconds since the validator's start time. In addition to doing so in the calls to `initializeEndValidation` and `initializeEndDelegation` as described above, uptime proofs may also be supplied by calling `submitUptimeProof`. Unlike `initializeEndValidation` and `initializeEndDelegation`, `submitUptimeProof` may be called by anyone, decreasing the likelihood of a validation or delegation not being able to claim rewards that it deserved based on its actual uptime.
#### Validation Rewards
Validation rewards are distributed in the call to `completeEndValidation`.
#### Delegation Rewards
Delegation rewards are distributed in the call to `completeEndDelegation`.
#### Delegation Fees
Delegation fees owed to validators are *not* distributed when the validation ends, in order to bound the amount of gas consumed in the call to `completeEndValidation`. Instead, `claimDelegationFees` may be called after the validation is completed.
# Upgrade Validator Manager
URL: /docs/avalanche-l1s/validator-manager/upgrade
Learn how to upgrade the Validator Manager on your Avalanche L1 blockchain from PoA to PoS.
## Convert PoA to PoS
A `PoAValidatorManager` can later be converted to a `PoSValidatorManager` by upgrading the implementation contract pointed to by the proxy. After performing the upgrade, the `PoSValidatorManager` contract should be initialized by calling `initialize` as described above. The validator set contained in the `PoAValidatorManager` will be tracked by the `PoSValidatorManager` after the upgrade, but these validators will neither be eligible to stake and earn staking rewards, nor support delegation.
# Chain Components
URL: /docs/builderkit/components/chains
Components for displaying and selecting blockchain networks.
# Chain Components
Chain components help you manage network selection and display chain information.
## ChainIcon
The ChainIcon component displays chain logos.
```tsx
import { ChainIcon } from '@avalabs/builderkit';

// Basic usage
<ChainIcon chain_id={43114} />
```
### Props
| Prop | Type | Default | Description |
| ----------- | -------- | ------- | ---------------------- |
| `chain_id` | `number` | - | Chain ID to display |
| `className` | `string` | - | Additional CSS classes |
## ChainDropdown
The ChainDropdown component provides network selection functionality.
```tsx
import { ChainDropdown } from '@avalabs/builderkit';

// Basic usage
<ChainDropdown
  selected={43114}
  list={[43113, 43114]}
  onSelectionChanged={(chainId) => {
    console.log('Selected chain:', chainId);
  }}
/>
```
### Props
| Prop | Type | Default | Description |
| -------------------- | ---------------------------- | ------- | --------------------------- |
| `selected` | `number` | - | Currently selected chain ID |
| `list` | `number[]` | - | List of available chain IDs |
| `onSelectionChanged` | `(chain_id: number) => void` | - | Selection change callback |
| `className` | `string` | - | Additional CSS classes |
## ChainRow
The ChainRow component displays detailed chain information.
```tsx
import { ChainRow } from '@avalabs/builderkit';

// Basic usage
<ChainRow chain_id={43114} name="Avalanche" />
```
### Props
| Prop | Type | Default | Description |
| ----------- | -------- | ------- | ---------------------- |
| `chain_id` | `number` | - | Chain ID |
| `name` | `string` | - | Chain name |
| `className` | `string` | - | Additional CSS classes |
# Control Components
URL: /docs/builderkit/components/control
Interactive control components like buttons and wallet connection interfaces.
# Control Components
Control components provide interactive elements for your Web3 application.
## Button
The Button component is a versatile control that supports multiple states and actions.
```tsx
import { Button } from '@avalabs/builderkit';
// Basic usage