
Development Tools & API

The L1 EVM Chain within OmniTensor serves as the base layer, built on Ethereum’s Virtual Machine (EVM) architecture and tailored specifically for AI workloads. By using Optimistic Rollups, the chain delivers scalability, security and efficient transaction processing, ensuring smooth operation for AI dApps across the ecosystem.

This section details the development tools and APIs available for building, deploying and managing AI-powered dApps on OmniTensor’s EVM Chain. The tools simplify blockchain integration, enabling developers to focus on their AI models while maintaining a scalable, interoperable and secure infrastructure.

Development Tools

OmniTensor’s environment offers a full set of SDKs, APIs and CLI tools that facilitate seamless interaction with the L1 EVM Chain. These resources cater to developers experienced with smart contracts, decentralized applications and AI models.

  1. OmniTensor SDK

     The SDK includes essential libraries for smart contract development, model integration and interaction with the AI OmniChain. It supports Solidity, Rust and JavaScript-based environments, allowing developers to choose the language they are most comfortable with.

    # Install the OmniTensor SDK
    npm install -g omnitensor-sdk
    • Key Features:

      • Pre-built AI model management.

      • Tools for deploying and interacting with smart contracts.

      • Libraries to interact with decentralized compute nodes (AI Compute Nodes).

      • Full integration with MetaMask and other Web3 wallets for transaction signing and account management.

  2. CLI Tools

     The CLI offers a robust interface for interacting with the EVM Chain, deploying contracts and managing AI inference requests. It provides commands for both on-chain operations (e.g., deploying smart contracts) and off-chain compute resource management (e.g., GPU allocation).

    # Example of deploying a smart contract using the CLI
    omnitensor-cli deploy --contract path/to/contract.sol --network mainnet
    • Sample Commands:

      • deploy: Deploy smart contracts to the L1 chain.

      • interact: Execute function calls on deployed contracts.

      • ai-infer: Submit an AI inference task to the decentralized compute network.

      • status: Check the status of nodes, inference tasks and deployed models.

  3. Solidity Templates

     OmniTensor provides Solidity-based smart contract templates specifically designed for AI dApp development. These templates include functions for managing AI model marketplaces, handling token rewards and integrating AI compute tasks.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    /// Minimal marketplace template: list AI models and buy them with Ether.
    contract AIModelMarketplace {
        struct Model {
            string name;
            address owner;
            uint price; // price in wei
        }

        mapping(uint => Model) public models;
        uint public modelCount;

        /// Register a new model owned by the caller.
        function registerModel(string memory _name, uint _price) public {
            modelCount++;
            models[modelCount] = Model(_name, msg.sender, _price);
        }

        /// Buy a registered model; the payment is forwarded to its owner.
        function buyModel(uint _modelId) public payable {
            Model memory _model = models[_modelId];
            require(_model.owner != address(0), "Model does not exist.");
            require(msg.value >= _model.price, "Not enough Ether sent.");
            // Use call instead of transfer to avoid the 2300 gas stipend limit.
            (bool sent, ) = payable(_model.owner).call{value: msg.value}("");
            require(sent, "Payment failed.");
        }
    }
    • Key Integrations:

      • Token management for purchasing AI models using OMNIT tokens.

      • Automatic registration of models within the OmniTensor ecosystem.
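The SDK features above mention integration with MetaMask and other Web3 wallets. Independently of the SDK, any JavaScript dApp can reach an injected wallet through the standard EIP-1193 provider interface; the sketch below is a hedged illustration (the `connectWallet` helper is ours, not an SDK export).

```javascript
// Minimal wallet-connection helper using the EIP-1193 provider API that
// MetaMask and other injected Web3 wallets expose. The provider is passed
// in explicitly so the sketch is not tied to a browser environment.
async function connectWallet(provider) {
  if (!provider || typeof provider.request !== "function") {
    throw new Error("no EIP-1193 provider available");
  }
  // eth_requestAccounts prompts the wallet for access and resolves
  // with the authorized account addresses.
  const accounts = await provider.request({ method: "eth_requestAccounts" });
  return accounts[0]; // active account address
}
```

In a browser dApp the provider would typically be `window.ethereum`, e.g. `const account = await connectWallet(window.ethereum);`.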

API Integration

The API layer enables seamless interaction between off-chain AI compute resources and the L1 EVM Chain. This integration ensures smooth communication between AI models deployed on the decentralized network and on-chain smart contracts.

  1. Smart Contract APIs

     OmniTensor provides a set of APIs to interact with deployed smart contracts, allowing developers to manage on-chain assets and operations related to AI models and compute tasks.

    # Example of calling a smart contract function via the API
    curl -X POST https://api.omnitensor.io/v1/contracts/call \
        -H "Content-Type: application/json" \
        -d '{
            "contract": "0xContractAddress",
            "function": "buyModel",
            "params": ["1"]
        }'
    • API Endpoints:

      • GET /contracts/:address: Retrieve contract details.

      • POST /contracts/call: Invoke a smart contract function.

      • GET /models/:id: Fetch information about registered AI models.

  2. AI Model APIs

     These APIs allow developers to interact with the OmniTensor decentralized AI compute network. Through the API, developers can submit AI inference jobs, retrieve the status of compute tasks and manage the deployment of custom AI models.

    # Submitting an AI inference job
    curl -X POST https://api.omnitensor.io/v1/ai-infer \
        -H "Content-Type: application/json" \
        -d '{
            "model_id": "12345",
            "input_data": "What is the future of AI?",
            "user": "0xYourWalletAddress"
        }'
    • API Endpoints:

      • POST /ai-infer: Submit an inference job to the compute network.

      • GET /ai-infer/status/:job_id: Check the status of an inference task.

      • POST /ai-models/deploy: Deploy a custom AI model to the network.
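The request bodies in the curl examples above can be assembled and sanity-checked client-side before they are sent. A minimal sketch follows; the field names mirror the examples, but the validation rules are our assumptions, not documented API behavior.

```javascript
// Build and validate the JSON bodies for the documented endpoints, so
// malformed requests fail client-side before they reach the API.

// Body for POST /contracts/call.
function buildContractCall(contract, fn, params = []) {
  if (!/^0x[0-9a-fA-F]{40}$/.test(contract)) {
    throw new Error(`invalid contract address: ${contract}`);
  }
  if (!fn || !Array.isArray(params)) {
    throw new Error("function name and params array are required");
  }
  return { contract, function: fn, params };
}

// Body for POST /ai-infer.
function buildInferenceJob(modelId, inputData, user) {
  if (!modelId || !inputData) {
    throw new Error("model_id and input_data are required");
  }
  return { model_id: String(modelId), input_data: inputData, user };
}

// Mirrors the curl examples above, with a well-formed placeholder address.
const call = buildContractCall("0x" + "ab".repeat(20), "buyModel", ["1"]);
const job = buildInferenceJob("12345", "What is the future of AI?", "0xYourWalletAddress");
console.log(JSON.stringify(call));
console.log(JSON.stringify(job));
```

The resulting objects can be passed directly as the request body of `fetch` or `curl` calls against `https://api.omnitensor.io/v1`.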

Smart Contract Deployment Example

Below is a step-by-step guide to deploying a simple AI-powered dApp on OmniTensor’s EVM Chain:

  1. Write the Smart Contract (Solidity)

     Create a smart contract for managing AI model purchases.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    /// Minimal example: record the buyer of each AI model.
    contract AIMarket {
        mapping(uint => address) public owners;

        /// Purchase a model for a flat minimum price of 1 ether.
        function purchaseModel(uint _modelId) public payable {
            require(msg.value >= 1 ether, "Minimum price not met.");
            owners[_modelId] = msg.sender;
        }
    }
  2. Compile and Deploy

     Use the OmniTensor CLI or Truffle to compile and deploy the contract to the EVM Chain.

    truffle compile
    truffle migrate --network omnitensor
  3. Interact with the Contract

     Call functions on the deployed contract, for example to purchase an AI model.

    omnitensor-cli interact --contract 0xContractAddress --function "purchaseModel" --params "1"

