The Vektor network is secured and powered by independent Tier III and Tier IV data centers around the world. By provisioning hardware to the network, Node Operators capture a continuous stream of $VKTR yield generated by global AI inference demand.
Hardware Requirements
Vektor does not support consumer-grade GPUs (e.g., RTX 4090s or Mac Studio clusters). To maintain our strict <50 ms latency and high-throughput SLAs for enterprise clients, the network requires data-center-grade Tensor Core architecture.
Currently, the Vektor Routing Engine only accepts nodes running the following verified hardware:
- NVIDIA H100 (80GB / 96GB) - Tier 1 Multiplier
- NVIDIA A100 (80GB) - Tier 2 Multiplier
- NVIDIA H200 - (Pending Mainnet Integration)
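The hardware gate above can be sketched as a simple lookup. This is an illustrative assumption, not the Routing Engine's actual verification logic: the dictionary layout and the `check_node_hardware` function are hypothetical, while the model names, VRAM configurations, and tiers come from the list above.

```python
# Hypothetical sketch of a hardware-eligibility check based on the
# verified-hardware list. Structure and function names are assumptions.

VERIFIED_HARDWARE = {
    "NVIDIA H100": {"vram_gb": {80, 96}, "tier": 1},  # Tier 1 Multiplier
    "NVIDIA A100": {"vram_gb": {80}, "tier": 2},      # Tier 2 Multiplier
    # NVIDIA H200 is pending mainnet integration and not yet accepted.
}

def check_node_hardware(model: str, vram_gb: int):
    """Return the reward tier for an accepted node, or None if rejected."""
    spec = VERIFIED_HARDWARE.get(model)
    if spec is None or vram_gb not in spec["vram_gb"]:
        return None
    return spec["tier"]
```

For example, an 80 GB H100 would resolve to Tier 1, while a consumer RTX 4090 (or the not-yet-integrated H200) would be rejected.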
Network Connectivity & Uplink
Because AI models and inference outputs can be incredibly data-heavy, Node Operators must meet strict networking thresholds to avoid bottlenecking the mesh:
- Minimum Uplink: 10 Gbps dedicated symmetric fiber.
- Redundancy: Dual-homed network configurations with BGP routing.
- Storage: NVMe SSD arrays for rapid model weight loading into VRAM.
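The thresholds above amount to a preflight checklist. A minimal sketch, assuming a hypothetical `NodeConfig` shape (the field names and the check function are illustrative, not part of any Vektor tooling):

```python
# Hypothetical preflight check against the connectivity thresholds above.
from dataclasses import dataclass

@dataclass
class NodeConfig:
    uplink_gbps: float    # dedicated symmetric fiber bandwidth
    dual_homed_bgp: bool  # redundant, BGP-routed dual-homed uplinks
    nvme_storage: bool    # NVMe arrays for fast model-weight loads into VRAM

def meets_network_requirements(cfg: NodeConfig) -> bool:
    """True only if the node clears every documented networking threshold."""
    return cfg.uplink_gbps >= 10 and cfg.dual_homed_bgp and cfg.nvme_storage
```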
Service Level Agreements (SLAs) & Slashing
The Vektor Protocol requires absolute reliability. Node Operators must stake a minimum of 50,000 $VKTR as collateral to join the active routing pool.
To ensure network health, the protocol enforces strict cryptographic SLAs:
- The 99.9% Uptime Rule: If a node drops offline and fails to respond to health-check pings for more than 4 consecutive epochs (approx. 1 hour), its collateral is subject to a 0.5% slashing penalty.
- Latency Gating: If a node consistently returns inference outputs slower than the regional mesh average, the Routing Engine will temporarily demote the node, reducing its request flow and subsequent yield until the bottleneck is resolved.
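The two SLA mechanisms above can be sketched numerically. The constants follow the documented rules (50,000 $VKTR minimum stake, 0.5% slash after more than 4 missed epochs); the function names and the boolean demotion check are illustrative assumptions, not actual protocol code.

```python
# Illustrative sketch of the Uptime Rule and Latency Gating, not protocol code.

MIN_STAKE_VKTR = 50_000
SLASH_RATE = 0.005       # 0.5% collateral penalty
MISSED_EPOCH_LIMIT = 4   # > 4 consecutive missed epochs (~1 hour) triggers a slash

def slash_amount(stake_vktr: float, missed_epochs: int) -> float:
    """Collateral slashed after a downtime incident under the 99.9% Uptime Rule."""
    return stake_vktr * SLASH_RATE if missed_epochs > MISSED_EPOCH_LIMIT else 0.0

def is_demoted(node_latency_ms: float, regional_avg_ms: float) -> bool:
    """Latency gating: nodes slower than the regional mesh average are demoted."""
    return node_latency_ms > regional_avg_ms

# On the minimum stake, a 5-epoch outage costs 0.5% of 50,000 = 250 $VKTR:
print(slash_amount(MIN_STAKE_VKTR, 5))  # 250.0
```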