
Computation Rewards

Community members can contribute unused computational resources, such as GPUs or CPUs, to the OmniTensor network in exchange for OMNIT tokens. This process leverages the Decentralized Physical Infrastructure Network (DePIN), where participants offer computational power to handle AI tasks such as inference, model training, and other distributed AI workloads.

Workflow:

  1. Set Up an AI Compute Node:

    • Users install the OmniTensor SDK and configure their system (e.g., GPU, CPU) as an AI Compute Node.

    • Once active, the node is integrated into the decentralized network and starts receiving AI computational tasks.

  2. Task Execution:

    • The node processes AI-related tasks, including training AI models or performing inference. Each task completed contributes to the overall grid’s processing power.

  3. Earning OMNIT Tokens:

    • For every AI task successfully completed, the node operator is rewarded in OMNIT tokens. The reward is calculated based on factors such as the complexity of the task, the processing time, and the node’s computational power.

  4. Terminal Setup:

    # Install the OmniTensor CLI
    $ npm install -g omnitensor-cli
    
    # Register the node with the OmniTensor network
    $ omnitensor node register --gpu [GPU_INFO] --stake [AMOUNT]
    
    # Start processing tasks
    $ omnitensor node start
  5. Monitoring and Reward Accumulation:

    • Use the CLI to track task completion and earned OMNIT tokens.

    # Check the node's current status
    $ omnitensor node status
    
    # View accumulated OMNIT rewards
    $ omnitensor node rewards
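The reward factors named in step 3 (task complexity, processing time, and node compute power) can be illustrated with a minimal sketch. The function name, weights, and the multiplicative form below are illustrative assumptions, not the actual OmniTensor reward formula:

```python
def compute_reward(complexity: float,
                   processing_time_s: float,
                   node_power_tflops: float,
                   base_rate: float = 1.0) -> float:
    """Hypothetical per-task OMNIT reward.

    Assumes the reward scales with how complex the task is, how long
    the node worked on it, and how much compute the node contributed.
    The real network formula and units are not specified here.
    """
    hours = processing_time_s / 3600
    return base_rate * complexity * hours * node_power_tflops

# Example: a complexity-2.0 inference job running 90 s on a 30 TFLOPS GPU
reward = compute_reward(complexity=2.0,
                        processing_time_s=90,
                        node_power_tflops=30)
print(f"{reward:.2f} OMNIT")  # 1.50 OMNIT under these assumed weights
```

Under a formula of this shape, longer tasks and more powerful nodes earn proportionally more, which is why the same task can pay different operators differently.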

By participating in the AI Grid as a Service (AI-GaaS), GPU owners effectively "mine" OMNIT tokens, providing valuable computational power to the network while being compensated for their resources.

