API Usage & Examples

OmniTensor offers a powerful API layer that enables seamless integration with existing Web2 and Web3 infrastructures. This section provides an overview of how businesses can leverage OmniTensor’s API to automate and enhance their operations through decentralized AI solutions.

Overview

The OmniTensor API enables seamless interoperability between AI services and business applications, allowing companies to incorporate AI models, inference capabilities and decentralized infrastructure into their current systems. Its modular design offers the flexibility to deploy AI-powered applications across various industries, including finance, healthcare and logistics.

Supporting both gRPC and HTTP protocols, the API ensures high-performance communication across different environments. These protocols enable businesses to interact with OmniTensor’s decentralized AI infrastructure while maintaining the level of security and reliability essential for enterprise-grade solutions.

Core Features

  • AI Model Access

    Direct access to OmniTensor's pre-trained models (LLMs, Text-to-Speech, etc.), or deployment of custom AI models.

  • Decentralized Inference

    Leverage the global GPU DePIN network for scalable AI inference with real-time API calls.

  • Tokenized AI Computation

    Pay for API usage and AI computations via OMNIT tokens, offering a decentralized and cost-effective solution.

  • Data Integration

    Import business-specific data into OmniTensor’s platform for model training or use validated datasets from the community.
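Putting these pieces together, a minimal sketch of how a client might assemble an inference request is shown below. The field names (model_id, input, params) and the Bearer-token header mirror the request bodies used in the examples on this page; any detail beyond those should be treated as an assumption rather than a documented contract:

```python
def build_inference_request(model_id, input_data, access_token, params=None):
    """Assemble the headers and JSON payload for an OmniTensor inference call.

    Field names mirror the example request bodies on this page; anything
    beyond those is an assumption, not a documented schema.
    """
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    payload = {"model_id": model_id, "input": input_data}
    if params:
        # Optional tuning parameters (e.g. temperature, max_tokens)
        payload["params"] = params
    return headers, payload
```

For example, `build_inference_request("gpt-3-v1", {"text": "Summarize Q3."}, token, {"temperature": 0.7})` produces a request equivalent to the Python example below.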

Example: Deploying AI Models via API

The following example shows how a business can integrate a custom AI model using OmniTensor’s API. The Python snippet below demonstrates an interaction with the inference endpoint:

import requests

# API endpoint for OmniTensor AI Inference
url = "https://api.omnitensor.io/v1/inference"

# Headers including authentication and content type
headers = {
    "Authorization": "Bearer <your_access_token>",
    "Content-Type": "application/json"
}

# Payload containing the AI model identifier and input data
payload = {
    "model_id": "gpt-3-v1",
    "input": {
        "text": "Summarize the latest financial report for Q3."
    },
    "params": {
        "temperature": 0.7,
        "max_tokens": 150
    }
}

# Make the request to OmniTensor API (a timeout prevents the call from hanging indefinitely)
response = requests.post(url, json=payload, headers=headers, timeout=30)

# Check response status and output result
if response.status_code == 200:
    result = response.json()
    print("AI Output:", result['output'])
else:
    print("Error:", response.status_code, response.text)
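With any distributed inference network, transient failures such as rate limiting or temporary node unavailability should be expected. The sketch below shows one way to wrap the call above with retries and exponential backoff; the set of retryable status codes is an assumption for illustration, not documented OmniTensor behavior:

```python
import time

# Assumed transient statuses; adjust to the API's actual error semantics
RETRYABLE = {429, 500, 502, 503}

def call_with_retries(send, max_attempts=3, base_delay=1.0):
    """Invoke send() until it returns status 200, retrying transient
    failures with exponential backoff.

    send is a zero-argument callable returning (status_code, body),
    e.g. a closure around requests.post as in the example above.
    """
    for attempt in range(max_attempts):
        status, body = send()
        if status == 200:
            return body
        if status not in RETRYABLE:
            raise RuntimeError(f"Non-retryable error {status}: {body}")
        if attempt < max_attempts - 1:
            # Back off 1s, 2s, 4s, ... between attempts
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"Giving up after {max_attempts} attempts")
```

A suitable closure over the earlier example would be `lambda: ((r := requests.post(url, json=payload, headers=headers, timeout=30)).status_code, r.text)`.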

Example: Integrating OmniTensor with Web2 Systems

For companies using legacy Web2 systems, integrating OmniTensor can be as simple as invoking an HTTP-based API to run machine learning tasks. Here is an example of how to integrate AI model inference into an existing system via a web-based API call:

curl -X POST https://api.omnitensor.io/v1/inference \
-H "Authorization: Bearer <your_access_token>" \
-H "Content-Type: application/json" \
-d '{
  "model_id": "image-classification-v2",
  "input": {
    "image_url": "https://example.com/image.jpg"
  },
  "params": {
    "threshold": 0.8
  }
}'

This integration facilitates fast, efficient image classification that can be directly incorporated into business workflows such as automating quality checks or fraud detection.
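The threshold parameter in the request above filters low-confidence labels; the same filtering can also be applied client-side before results enter a business workflow. The helper below assumes the response contains a list of {label, score} predictions, which is an illustrative guess at the response shape rather than a documented schema:

```python
def filter_predictions(predictions, threshold=0.8):
    """Keep only predictions at or above the confidence threshold,
    sorted from most to least confident.

    Assumes each prediction is a dict with "label" and "score" keys;
    this shape is an assumption, not a documented response schema.
    """
    kept = [p for p in predictions if p["score"] >= threshold]
    return sorted(kept, key=lambda p: p["score"], reverse=True)
```

For example, in an automated quality check, only labels surviving the filter would trigger downstream actions such as flagging an item for manual review.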

Advanced Usage: Web3 Integration

For businesses operating in the Web3 domain, OmniTensor provides seamless integration through smart contract interactions on Ethereum-based Layer 1 and Layer 2 solutions. Using the OmniTensor API, businesses can call AI functions within smart contracts, enabling decentralized applications to access powerful AI models.

pragma solidity ^0.8.0;

interface OmniTensorAPI {
    function inference(string memory modelId, string memory input) external returns (string memory);
}

contract AIEnhancedDApp {
    // NOTE: 0xAPIContractAddress is a placeholder; substitute the deployed OmniTensor API contract address
    OmniTensorAPI omniTensor = OmniTensorAPI(0xAPIContractAddress);

    function runInference() public returns (string memory) {
        string memory result = omniTensor.inference("gpt-4-v2", "Provide market predictions for next quarter.");
        return result;
    }
}
