onChain LLMs
Decentralized Large Language Models coordinated and verified on the blockchain, providing transparency, censorship resistance, and verifiable inference.
Key Features
Why run LLMs on-chain?
Verifiable Inference
Cryptographic proofs ensure that the output was generated by the claimed model from the claimed input, preventing manipulation.
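The binding idea can be illustrated with a plain hash commitment: a minimal sketch, assuming a node publishes a digest over a model-weights hash, the prompt, and the output. (Real systems would use ZK proofs or attested hardware rather than a bare hash; all names here are illustrative.)

```python
import hashlib

def commit(model_hash: str, prompt: str, output: str) -> str:
    """Bind a specific model, input, and output into one digest."""
    payload = f"{model_hash}|{prompt}|{output}".encode()
    return hashlib.sha256(payload).hexdigest()

def verify(commitment: str, model_hash: str, prompt: str, output: str) -> bool:
    """Recompute the digest and compare against the published commitment."""
    return commit(model_hash, prompt, output) == commitment

# A node publishes the commitment alongside its result...
c = commit("sha256:abc123", "What is 2+2?", "4")
assert verify(c, "sha256:abc123", "What is 2+2?", "4")
# ...and tampering with the model, input, or output is detectable.
assert not verify(c, "sha256:abc123", "What is 2+2?", "5")
```

Anyone holding the commitment can check a result without trusting the node that produced it.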
Decentralized Compute
Distributed network of nodes executes model inference, eliminating single points of failure and censorship.
Smart Contract Native
Seamlessly integrate AI logic directly into your smart contracts without relying on centralized oracles.
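The contract-native pattern is essentially request-and-callback. Below is a toy sketch (in Python, standing in for contract logic; `ModelRegistry`, `request`, and `fulfill` are hypothetical names, not a real API) showing how a consumer registers a callback that fires when a verified result lands, with no external oracle in the loop.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class InferenceRequest:
    prompt: str
    callback: Callable[[str], None]  # invoked once a verified result arrives

@dataclass
class ModelRegistry:
    """Toy stand-in for an on-chain model registry contract."""
    pending: List[InferenceRequest] = field(default_factory=list)

    def request(self, prompt: str, callback: Callable[[str], None]) -> None:
        self.pending.append(InferenceRequest(prompt, callback))

    def fulfill(self, result: str) -> None:
        req = self.pending.pop(0)
        req.callback(result)  # downstream consumer logic runs here

# A consumer contract queues a prompt and reacts to the result.
results: List[str] = []
registry = ModelRegistry()
registry.request("Summarize block 123", results.append)
registry.fulfill("Block 123 contains 42 transfers.")
assert results == ["Block 123 contains 42 transfers."]
```

The point of the pattern: the AI call and the downstream action live in the same trust domain, so no centralized middleman sits between them.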
How it Works
Request
User or smart contract sends a prompt to the on-chain model registry.
Inference
Decentralized nodes pick up the task and run the model inference off-chain (or on-chain for smaller models).
Verification
Zero-knowledge proofs or optimistic challenge games verify that the computation was performed correctly.
Response
The verified result is returned to the blockchain, triggering downstream smart contract actions.
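The optimistic variant of the verification step can be sketched as follows: a node submits a result, and any challenger may re-run the inference and slash a mismatched submission. This is a simplified simulation under stated assumptions (deterministic inference, a hash standing in for the model; `run_model`, `Submission`, and `challenge` are hypothetical names).

```python
import hashlib
from dataclasses import dataclass

def run_model(prompt: str) -> str:
    """Deterministic stand-in for model inference (hypothetical)."""
    return hashlib.sha256(prompt.encode()).hexdigest()[:8]

@dataclass
class Submission:
    prompt: str
    output: str
    finalized: bool = False
    slashed: bool = False

def challenge(sub: Submission) -> None:
    """A challenger re-runs inference; a mismatch slashes the submitter,
    otherwise the result finalizes and is returned on-chain."""
    if run_model(sub.prompt) != sub.output:
        sub.slashed = True
    else:
        sub.finalized = True

honest = Submission("What is 2+2?", run_model("What is 2+2?"))
cheater = Submission("What is 2+2?", "bogus")
challenge(honest)
challenge(cheater)
assert honest.finalized and not honest.slashed
assert cheater.slashed and not cheater.finalized
```

In a real deployment the challenge would be bounded by a time window, and unchallenged submissions would finalize automatically when it closes.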