Computation Rewards

Community members can contribute unused computational resources, such as GPUs or CPUs, to the OmniTensor network in exchange for OMNIT tokens. This process leverages the Decentralized Physical Infrastructure Network (DePIN), in which participants supply computational power for AI tasks such as inference and model training, as well as other distributed AI workloads.

Workflow:

  1. Setup AI Compute Node:

    • Users install the OmniTensor SDK and configure their hardware (GPU, CPU, or both) as an AI Compute Node; the corresponding CLI commands are shown under Terminal Setup below.

    • Once active, the node is integrated into the decentralized network and starts receiving AI computational tasks.

  2. Task Execution:

    • The node processes AI-related tasks, including model training and inference. Each completed task adds to the grid’s overall processing power.

  3. Earning OMNIT Tokens:

    • For every successfully completed AI task, the node operator is rewarded in OMNIT tokens. The reward is calculated from factors such as task complexity, processing time, and the node’s computational power; an illustrative sketch of how these factors might combine follows the workflow steps below.

  4. Terminal Setup:

    # Install the OmniTensor CLI
    $ npm install -g omnitensor-cli
    
    # Register the node with the OmniTensor network
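    # [GPU_INFO] and [AMOUNT] are placeholders: supply your GPU details and the amount you intend to stake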
    $ omnitensor node register --gpu [GPU_INFO] --stake [AMOUNT]
    
    # Start processing tasks
    $ omnitensor node start

  5. Monitoring and Reward Accumulation:

    • Use the CLI to track task completion and earned OMNIT tokens.

    $ omnitensor node status
    $ omnitensor node rewards
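
    • To check on the node periodically without rerunning these commands by hand, you can wrap them with the standard watch utility (available on most Linux systems); the 10-minute interval below is only an example.

    # Re-check status and accumulated rewards every 600 seconds
    $ watch -n 600 "omnitensor node status; omnitensor node rewards"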

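As a rough illustration of how the reward factors from step 3 could combine, the sketch below multiplies a placeholder base rate by hypothetical weights for task complexity, processing time, and compute power. Every variable name and value here is an assumption made for illustration; the actual formula is determined by the network and may differ.

    # Hypothetical illustration only -- not the network's published formula.
    # Assume the reward scales with task complexity, processing time, and compute power.
    $ COMPLEXITY=2.0    # relative complexity weight assigned to the task
    $ HOURS=1.5         # processing time in hours
    $ TFLOPS=30         # node compute power in teraFLOPS
    $ BASE_RATE=0.01    # placeholder rate in OMNIT per unit of weighted work
    $ echo "$BASE_RATE * $COMPLEXITY * $HOURS * $TFLOPS" | bc -l    # 0.01 * 2.0 * 1.5 * 30 = 0.9
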
By participating in the AI Grid as a Service (AI-GaaS), GPU and CPU owners effectively "mine" OMNIT tokens, supplying the network with valuable computational power while being compensated for their resources.
