API Usage & Examples

OmniTensor offers a powerful API layer that enables seamless integration with existing Web2 and Web3 infrastructures. This section provides an overview of how businesses can leverage OmniTensor’s API to automate and enhance their operations through decentralized AI solutions.

Overview

The OmniTensor API enables seamless interoperability between AI services and business applications, allowing companies to incorporate AI models, inference capabilities and decentralized infrastructure into their current systems. Its modular design offers the flexibility to deploy AI-powered applications across various industries, including finance, healthcare and logistics.

Supporting both gRPC and HTTP protocols, the API ensures high-performance communication across different environments. These protocols enable businesses to interact with OmniTensor’s decentralized AI infrastructure while maintaining the level of security and reliability essential for enterprise-grade solutions.
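
As a minimal sketch of the HTTP side of this support, an authenticated JSON request can be assembled with nothing but the Python standard library. The endpoint and request shape here follow the illustrative inference examples later on this page:

```python
import json
import urllib.request

# Illustrative endpoint, matching the inference examples later on this page.
API_URL = "https://api.omnitensor.io/v1/inference"

def build_request(token: str, body: dict) -> urllib.request.Request:
    """Assemble an authenticated JSON POST request (built, not yet sent)."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("<your_access_token>",
                    {"model_id": "gpt-3-v1", "input": {"text": "ping"}})
# urllib.request.urlopen(req) would send it; omitted to keep the sketch offline.
```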

Core Features

  • AI Model Access

    Direct access to OmniTensor's pre-trained models (LLMs, Text-to-Speech, etc.), or deployment of custom AI models.

  • Decentralized Inference

    Leverage the global GPU DePIN network for scalable AI inference with real-time API calls.

  • Tokenized AI Computation

    Pay for API usage and AI computation with OMNIT tokens, a decentralized and cost-effective billing model.

  • Data Integration

    Import business-specific data into OmniTensor’s platform for model training or use validated datasets from the community.
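
To make the tokenized billing model concrete, the helper below estimates the OMNIT cost of a single inference call. The per-1,000-token rate is invented for this sketch; actual pricing is determined by the network:

```python
def estimate_omnit_cost(input_tokens: int, max_tokens: int,
                        rate_per_1k_tokens: float = 0.02) -> float:
    """Estimate the OMNIT cost of one inference call.

    Worst case is assumed: the model generates up to max_tokens.
    The default rate is hypothetical, not a published network price.
    """
    billable = input_tokens + max_tokens
    return round(billable / 1000 * rate_per_1k_tokens, 6)

# Example: a 500-token prompt with generation capped at 150 tokens.
cost = estimate_omnit_cost(500, 150)
```

Estimating cost before submitting a request lets a business cap spend per workflow rather than discovering charges after the fact.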

Example: Deploying AI Models via API

The Python example below shows how a business can integrate a custom AI model by calling OmniTensor’s inference API:

import requests

# API endpoint for OmniTensor AI Inference
url = "https://api.omnitensor.io/v1/inference"

# Headers including authentication and content type
headers = {
    "Authorization": "Bearer <your_access_token>",
    "Content-Type": "application/json"
}

# Payload containing the AI model identifier and input data
payload = {
    "model_id": "gpt-3-v1",
    "input": {
        "text": "Summarize the latest financial report for Q3."
    },
    "params": {
        "temperature": 0.7,
        "max_tokens": 150
    }
}

# Make the request to OmniTensor API (timeout guards against a hung connection)
response = requests.post(url, json=payload, headers=headers, timeout=30)

# Check response status and output result
if response.status_code == 200:
    result = response.json()
    print("AI Output:", result['output'])
else:
    print("Error:", response.status_code, response.text)
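
The status check above can be factored into a small helper so every call site fails loudly and uniformly. This is a sketch; the `output` field is taken from the response shape used in the example above:

```python
class InferenceError(Exception):
    """Raised for non-200 responses or unexpected response bodies."""

def extract_output(status_code: int, body: dict) -> str:
    """Return the model output, mirroring the status check above."""
    if status_code != 200:
        raise InferenceError(f"HTTP {status_code}: {body}")
    if "output" not in body:
        raise InferenceError(f"unexpected response shape: {body}")
    return body["output"]

# Usage with the example above: extract_output(response.status_code, response.json())
```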

Example: Integrating OmniTensor with Web2 Systems

For companies using legacy Web2 systems, integrating OmniTensor can be as simple as invoking an HTTP-based API to run machine learning tasks. The following cURL request integrates AI model inference, here image classification, into an existing system:

curl -X POST https://api.omnitensor.io/v1/inference \
-H "Authorization: Bearer <your_access_token>" \
-H "Content-Type: application/json" \
-d '{
  "model_id": "image-classification-v2",
  "input": {
    "image_url": "https://example.com/image.jpg"
  },
  "params": {
    "threshold": 0.8
  }
}'

This integration facilitates fast, efficient image classification that can be directly incorporated into business workflows such as automating quality checks or fraud detection.
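
As a sketch of such a workflow, the helper below routes classification results by confidence, reusing the 0.8 threshold from the request above. The prediction shape (`label`/`confidence` pairs) is an assumption for illustration, not the documented response schema:

```python
def flag_for_review(predictions: list, threshold: float = 0.8):
    """Split predictions into auto-accepted results and items needing
    manual review, based on a confidence threshold."""
    accepted = [p for p in predictions if p["confidence"] >= threshold]
    needs_review = [p for p in predictions if p["confidence"] < threshold]
    return accepted, needs_review

# Example predictions (hypothetical response contents):
accepted, review = flag_for_review([
    {"label": "pass", "confidence": 0.95},
    {"label": "defect", "confidence": 0.62},
])
```

Routing low-confidence results to a human reviewer keeps automated quality checks from silently acting on uncertain classifications.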

Advanced Usage: Web3 Integration

For businesses operating in the Web3 domain, OmniTensor provides seamless integration through smart contract interactions on Ethereum-based Layer 1 and Layer 2 solutions. Using the OmniTensor API, businesses can call AI functions within smart contracts, enabling decentralized applications to access powerful AI models.

pragma solidity ^0.8.0;

interface OmniTensorAPI {
    function inference(string memory modelId, string memory input) external returns (string memory);
}

contract AIEnhancedDApp {
    OmniTensorAPI public omniTensor;

    // The deployed OmniTensor API contract address is supplied at deployment time.
    constructor(address apiContract) {
        omniTensor = OmniTensorAPI(apiContract);
    }

    function runInference() public returns (string memory) {
        return omniTensor.inference("gpt-4-v2", "Provide market predictions for next quarter.");
    }
}
