
Sharing Your GPU

Overview

By contributing your GPU to the OmniTensor network, you help strengthen the decentralized AI ecosystem while earning passive income. Sharing unused computing power allows individuals to join a community-driven Decentralized Physical Infrastructure Network (DePIN) designed specifically for AI tasks. This global network of GPUs supports OmniTensor's AI model training and inference, creating a collaborative backbone for decentralized computing. Participants are rewarded with OMNIT tokens for their contributions, promoting a fair and transparent system that encourages broad community involvement in AI advancements.

How It Works

When you share your GPU with OmniTensor, it becomes part of a global network that distributes tasks such as AI model training, inference, and data processing across multiple compute nodes. This setup is flexible, enabling GPU owners to contribute either on-demand or during periods of inactivity, optimizing the use of resources.

The key benefits include:

  • Earning OMNIT tokens

    GPU owners receive rewards based on their contributed computational power, with rewards calculated according to active usage or idle sharing.

  • Decentralized AI Compute

    OmniTensor provides a consumer-friendly decentralized hardware network for AI tasks, allowing businesses, developers, and individuals to tap into a globally distributed resource pool.

  • Secure Participation

    OmniTensor is built on blockchain technology, ensuring secure processing and verification of all tasks and maintaining transparency and fairness in reward distribution.

Setting Up Your GPU

  1. Install the OmniTensor Compute Node Software

    • To participate, you need to install the OmniTensor client software on your machine. The software manages compute task assignments, resource utilization and reward distribution.

    Example (Linux Terminal Setup):

    sudo apt update
    sudo apt install omnitensor-client
    omnitensor-client setup --gpu <GPU-type>

    This command will configure the software to identify and optimize your GPU for AI compute tasks.

  2. Connect Your Wallet

    • Once the software is installed, link your OmniTensor wallet so that OMNIT token rewards can be paid out to it. Keep the wallet and its keys secure.

    Example (CLI Command to Link Wallet):

    omnitensor-client wallet link <wallet-address>

  3. Start Sharing Your GPU

    • After setup, you can start contributing your GPU resources. The software runs in the background and allocates tasks automatically based on network demand.

    Example (Starting GPU Contribution):

    omnitensor-client start --gpu <GPU-type>

    This initiates the sharing process, allowing your GPU to perform compute tasks and generate rewards.
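Taken together, the three steps above form a short, repeatable flow. The sketch below is a hypothetical convenience wrapper: it only assembles the omnitensor-client invocations shown in this guide (no additional flags are assumed) so the sequence can be reviewed or scripted from Python:

```python
import subprocess

def setup_commands(gpu_type, wallet_address):
    """Build the three omnitensor-client invocations from the steps above."""
    return [
        ["omnitensor-client", "setup", "--gpu", gpu_type],
        ["omnitensor-client", "wallet", "link", wallet_address],
        ["omnitensor-client", "start", "--gpu", gpu_type],
    ]

def run_setup(gpu_type, wallet_address, dry_run=True):
    """Print (or, with dry_run=False, execute) each step in order."""
    for cmd in setup_commands(gpu_type, wallet_address):
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```

Calling `run_setup("<GPU-type>", "<wallet-address>")` prints the commands for review; set `dry_run=False` only on a machine where the client is already installed.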

Rewards and Incentives

OmniTensor offers a dual-reward structure for GPU contributors:

  • AI Compute Rewards

    Earn 0.2 OmniPoints per compute task completed, where OmniPoints are convertible 1:1 into OMNIT tokens.

  • Idle Time Rewards

    Earn 1 OmniPoint per 50GB shared per hour during idle periods, translating to substantial monthly rewards for GPUs with high memory capacities.

Example Reward Calculation: A 24GB GPU earns roughly 216 OmniPoints per month from idle sharing alone, and actively taking on compute tasks raises this further. Accumulated OmniPoints can be exchanged for OMNIT tokens during the token generation event (TGE).
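As a sanity check, the dual-reward structure can be expressed directly from the rates stated above (0.2 OmniPoints per completed task; 1 OmniPoint per 50GB shared per hour). The figure of 15 idle hours per day is an assumption chosen here to reproduce the 216-point monthly example for a 24GB GPU, not a documented default:

```python
POINTS_PER_TASK = 0.2     # AI compute reward per completed task
GB_PER_POINT_HOUR = 50    # idle reward: 1 OmniPoint per 50GB shared per hour

def idle_points(vram_gb, idle_hours_per_day, days=30):
    """OmniPoints earned from idle sharing over a billing period."""
    return vram_gb * idle_hours_per_day * days / GB_PER_POINT_HOUR

def compute_points(tasks_completed):
    """OmniPoints earned from completed AI compute tasks."""
    return tasks_completed * POINTS_PER_TASK

# A 24GB GPU idle 15 hours a day over a 30-day month:
print(idle_points(24, 15))  # -> 216.0
```

Since OmniPoints convert 1:1 into OMNIT, a contributor's monthly total is simply `idle_points(...) + compute_points(...)`.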

Supported Hardware and Operating Systems

  • GPU Support

    OmniTensor supports a wide range of consumer and enterprise GPUs, including NVIDIA and AMD models. Cards with more VRAM and compute cores (e.g., CUDA cores on NVIDIA hardware) earn higher rewards.

  • Operating Systems

    OmniTensor supports both Linux (e.g., Ubuntu) and Windows operating systems. The platform provides a lightweight client for easy setup across different hardware configurations.

Example (Checking GPU Compatibility):

omnitensor-client check --gpu <GPU-type>
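Before running the compatibility check, you may want to confirm how much VRAM the client will see. The helper below is a hypothetical convenience, not part of the OmniTensor CLI; it assumes the output format of `nvidia-smi --query-gpu=memory.total --format=csv,noheader` (e.g. `24576 MiB`), and the 8GB minimum is an illustrative assumption rather than a documented requirement:

```python
def vram_gb(nvidia_smi_line):
    """Parse a memory.total line such as '24576 MiB' into gigabytes."""
    value, unit = nvidia_smi_line.strip().split()
    if unit != "MiB":
        raise ValueError(f"unexpected unit: {unit}")
    return float(value) / 1024

def meets_minimum(nvidia_smi_line, min_gb=8):
    """True if reported VRAM meets an assumed 8GB working minimum."""
    return vram_gb(nvidia_smi_line) >= min_gb

print(vram_gb("24576 MiB"))  # -> 24.0
```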

Join the DePIN Revolution

By contributing your GPU to OmniTensor, you become part of a global network that decentralizes AI compute resources. The OmniTensor ecosystem not only rewards you for your participation but also ensures your contribution helps democratize AI development.

