
Installing SDK & CLI Tools

OmniTensor provides a robust SDK and a set of CLI tools that let developers interact with the platform's infrastructure and deploy AI models or decentralized applications (dApps).

  1. Install the OmniTensor SDK

    Prerequisites:

    • Ensure you have Python 3.8+ installed.

    • Install pip (the Python package manager).

    Steps:

    • Install the SDK via pip:

      pip install omnitensor-sdk
    • After installation, configure the SDK with your API key and wallet address:

      omnitensor config set-api-key <Your API Key>
      omnitensor config set-wallet <Your Wallet Address>
  2. Set Up CLI Tools

    The OmniTensor Command Line Interface (CLI) simplifies interaction with OmniTensor’s decentralized infrastructure for tasks such as deploying AI models and managing nodes.

    Steps:

    • Download the OmniTensor CLI binary for your platform.

    • Make the downloaded binary executable:

      chmod +x omnitensor-cli
    • Move the CLI to a directory in your system's PATH:

      sudo mv omnitensor-cli /usr/local/bin/omnitensor

    Usage Example:

    • Authenticate your CLI with the following command:

      omnitensor login --email <Your Email> --password <Your Password>
    • Deploy a pre-trained AI model:

      omnitensor model deploy --model-id <model-id> --dataset <path-to-dataset>
    • Check the status of your node or AI inference tasks:

      omnitensor node status
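Step 1 configures the SDK with an API key and wallet address. Before running the `omnitensor config` commands, it can help to sanity-check the values locally. The sketch below is illustrative and not part of the SDK: it assumes the wallet is a standard EVM address (OmniTensor runs an EVM-compatible L1) and only checks that the API key is a non-empty token.

```python
import re

# Assumption: since OmniTensor's L1 is EVM-compatible, wallet addresses
# should be standard 20-byte hex addresses ("0x" followed by 40 hex chars).
EVM_ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")


def validate_config(api_key, wallet):
    """Return a list of problems; an empty list means the values look sane."""
    problems = []
    if not api_key or api_key.strip() != api_key:
        problems.append("API key is empty or has surrounding whitespace")
    if not EVM_ADDRESS_RE.match(wallet):
        problems.append(f"wallet {wallet!r} is not a valid EVM address")
    return problems
```

Running this before `omnitensor config set-api-key` / `set-wallet` catches copy-paste mistakes (truncated addresses, stray whitespace) before they reach the platform.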
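After moving the binary into a PATH directory in step 2, you can confirm that the shell will actually find it. A minimal check using only the Python standard library; the `omnitensor` name matches the install commands above, while `cli_path` itself is a hypothetical helper, not part of the SDK.

```python
import shutil


def cli_path(name="omnitensor"):
    """Return the absolute path of the named CLI if it is on PATH, else None."""
    return shutil.which(name)


if __name__ == "__main__":
    path = cli_path()
    if path:
        print(f"omnitensor CLI found at {path}")
    else:
        print("omnitensor CLI not found; re-check the chmod/mv steps above")
```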
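The usage examples above can also be scripted. Below is a sketch that wraps the CLI with `subprocess` and polls `omnitensor node status` until it reports a desired state. The subcommands are the ones shown above; `run_cli` and `wait_for` are illustrative helpers, not part of the official tooling, and the "running" substring is an assumed status string.

```python
import subprocess
import time


def run_cli(args, runner=subprocess.run):
    """Invoke the omnitensor CLI and return its stdout.

    Raises CalledProcessError if the CLI exits nonzero. The `runner`
    parameter exists so the call can be stubbed out in tests.
    """
    result = runner(["omnitensor", *args], capture_output=True, text=True, check=True)
    return result.stdout


def wait_for(check, attempts=5, delay=2.0, sleep=time.sleep):
    """Poll `check` until it returns a truthy value or attempts run out."""
    for _ in range(attempts):
        value = check()
        if value:
            return value
        sleep(delay)
    raise TimeoutError("gave up waiting for a truthy status")


# Example (requires the CLI to be installed and authenticated):
# wait_for(lambda: "running" in run_cli(["node", "status"]))
```

Injecting `runner` and `sleep` keeps the helpers testable without the CLI installed, and the retry loop smooths over the brief window after `model deploy` before `node status` reflects the new task.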
