onChain LLMs

Decentralized Large Language Models running directly on the blockchain, ensuring transparency, censorship resistance, and verifiable inference.

Key Features

Why run LLMs on-chain?

Verifiable Inference

Cryptographic proofs ensure that the output was produced by the specified model from the given input, preventing manipulation of results.

Decentralized Compute

Distributed network of nodes executes model inference, eliminating single points of failure and censorship.

Smart Contract Native

Seamlessly integrate AI logic directly into your smart contracts without relying on centralized oracles.

How it Works

1. Request

User or smart contract sends a prompt to the on-chain model registry.

2. Inference

Decentralized nodes pick up the task and run the model inference off-chain (or on-chain for smaller models).

3. Verification

ZK-proofs or optimistic challenges verify the correctness of the computation.

4. Response

The verified result is returned to the blockchain, triggering downstream smart contract actions.
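As an illustration of the optimistic-challenge variant of step 3, the sketch below is hypothetical Python, not part of any real protocol (all names, the bond mechanics, and the stand-in `run_model` are assumptions): a node posts its claimed output with a bond, and any challenger who recomputes a different output gets the bond slashed.

```python
import hashlib
from dataclasses import dataclass

def run_model(model_id: str, prompt: str) -> str:
    # Stand-in for real model inference; deterministic so that any
    # honest party recomputing it gets the same output.
    return hashlib.sha256(f"{model_id}:{prompt}".encode()).hexdigest()

@dataclass
class Claim:
    model_id: str
    prompt: str
    output: str
    bond: int
    slashed: bool = False

def submit(model_id: str, prompt: str, output: str, bond: int) -> Claim:
    # A node posts its claimed output along with a bond.
    return Claim(model_id, prompt, output, bond)

def challenge(claim: Claim) -> bool:
    # A challenger recomputes the inference; on mismatch the bond is slashed.
    expected = run_model(claim.model_id, claim.prompt)
    if claim.output != expected:
        claim.slashed = True
    return claim.slashed

# An honest node survives the challenge window.
honest = submit("Llama-3-8b", "hello", run_model("Llama-3-8b", "hello"), 100)
assert challenge(honest) is False

# A forged output is caught and the bond is slashed.
cheater = submit("Llama-3-8b", "hello", "forged output", 100)
assert challenge(cheater) is True
```

In a real deployment the challenge window, bond sizing, and who receives the slashed stake are the hard design questions; the sketch only shows the recompute-and-compare core.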

// Example Smart Contract Call
function askOracle(string memory prompt) public {
    bytes32 requestId = onChainLLM.request(
        "Llama-3-8b",
        prompt
    );
}
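The example above covers the request side. Step 4 could be completed with a callback that the network invokes once verification passes; the sketch below is hypothetical Solidity (`fulfill`, `answers`, and the `onChainLLM` address check are illustrative assumptions, not a documented API):

```solidity
// Hypothetical fulfillment hook (illustrative names, not a documented API).
mapping(bytes32 => string) public answers;

function fulfill(bytes32 requestId, string memory result) external {
    // Only accept results from the LLM protocol contract.
    require(msg.sender == address(onChainLLM), "only onChainLLM");
    answers[requestId] = result;
    // Downstream smart contract actions would be triggered here.
}
```

This mirrors the familiar oracle request-and-receive pattern: the `requestId` returned by `request` is the key that ties the later callback to the original prompt.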