FluxEdge GPU Computing

GPUs, Frameworks & Use Cases

Supported GPU models (RTX 4090 to H100), AI/ML frameworks, and real-world use cases from training to rendering.

FluxEdge supports a wide range of GPU hardware, from consumer-grade NVIDIA GeForce cards provided by community members to enterprise-grade H100 and Blackwell GPUs from NVIDIA partner Hyperstack. Because FluxEdge is infrastructure-level (Kubernetes + Docker), any framework that runs in a container works out of the box.
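As a quick sanity check, a containerized workload can verify that it sees the allocated GPU before starting real work. A minimal sketch, assuming a CUDA-enabled PyTorch image such as pytorch/pytorch:latest (listed in the frameworks table below):

```python
import torch

# Minimal check that the container actually sees the rented GPU.
# Assumes a CUDA-enabled image (e.g. pytorch/pytorch:latest).
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1e9:.1f} GB")
else:
    print("No CUDA device visible - check the deployment's GPU allocation.")
```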

Supported GPU Models

Dedicated Machines (Community Providers)

| GPU Family | Notable Models | Typical VRAM | Best For |
| --- | --- | --- | --- |
| NVIDIA GeForce RTX 40-series | RTX 4090, RTX 4080, RTX 4070 Ti | 12-24 GB | AI inference, image generation, rendering |
| NVIDIA GeForce RTX 30-series | RTX 3090, RTX 3080, RTX 3070 | 8-24 GB | General GPU compute, inference, gaming |
| NVIDIA Professional | RTX A6000, A5000, Quadro | 16-48 GB | Professional visualization, large model inference |
| AMD GPUs | Various models | Varies | Compute tasks with ROCm support |

Premium Machines (Hyperstack / NVIDIA)

| GPU | VRAM | Architecture | Best For |
| --- | --- | --- | --- |
| NVIDIA H100 | 80 GB HBM3 | Hopper | Large-scale AI training, enterprise inference |
| NVIDIA H200 | 141 GB HBM3e | Hopper | Ultra-large model training, HPC workloads |
| NVIDIA A100 | 40/80 GB HBM2e | Ampere | Production ML training and inference |
| NVIDIA RTX A6000 | 48 GB GDDR6 | Ampere | Professional visualization, model fine-tuning |
| NVIDIA Blackwell | TBA | Blackwell | Next-gen AI training and inference (planned) |

Premium machines require KYC Level 1 verification. These are sourced from NexGen Cloud's Hyperstack platform, an NVIDIA partner, ensuring enterprise-grade reliability and performance.
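To gauge which tier a given model needs, a rough VRAM estimate helps. The helper below is a back-of-envelope sketch, not FluxEdge tooling; the 2 bytes/parameter figure assumes fp16/bf16 weights, and the 20% overhead margin is an illustrative assumption:

```python
# Back-of-envelope VRAM estimate for holding model weights in half precision.
# Assumptions (illustrative, not FluxEdge tooling): 2 bytes per parameter for
# fp16/bf16 weights, plus a rough 20% margin for KV cache and activations.
def estimate_vram_gb(params_billion: float,
                     bytes_per_param: float = 2.0,
                     overhead: float = 1.2) -> float:
    return params_billion * bytes_per_param * overhead

for label, size in [("7B model", 7), ("70B model", 70)]:
    print(f"{label}: ~{estimate_vram_gb(size):.0f} GB of VRAM")
# 7B  -> ~17 GB  (fits an RTX 4090's 24 GB)
# 70B -> ~168 GB (multi-GPU territory: H100/H200 class)
```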

Supported Frameworks & Tools

Because FluxEdge operates at the infrastructure level, any application that runs in a Docker container works. The following frameworks and tools are commonly used and confirmed on the platform:

| Framework/Tool | Category | Docker Image Example |
| --- | --- | --- |
| PyTorch | Deep Learning | pytorch/pytorch:latest |
| TensorFlow | Deep Learning | tensorflow/tensorflow:latest-gpu |
| ONNX Runtime | Inference | mcr.microsoft.com/onnxruntime/server |
| Jupyter Notebook | Interactive Development | Quick Launch template available |
| Ollama | LLM Inference | Quick Launch template available |
| Stable Diffusion | Image Generation | Quick Launch template available |
| NVIDIA NIM | Optimized Inference | NVIDIA NGC catalog |
| NVIDIA NeMo | AI Agents / RAG | NVIDIA NGC catalog |
| Hugging Face | Model Hub / Transformers | Custom images from HF Docker |
| vLLM | High-throughput LLM Serving | vllm/vllm-openai:latest |
| Blender | 3D Rendering | Used in benchmarks; custom images |
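To illustrate the serving side, the vllm/vllm-openai image exposes an OpenAI-compatible HTTP API (port 8000 by default). A minimal client sketch; the endpoint URL and model name below are placeholders, not FluxEdge-specific values:

```python
import requests

# Placeholder endpoint; substitute the URL FluxEdge assigns to your deployment.
BASE_URL = "http://localhost:8000/v1"

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        # Must match the model the vLLM server was launched with (example name).
        "model": "mistralai/Mistral-7B-Instruct-v0.2",
        "messages": [{"role": "user", "content": "Summarize vLLM in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```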

Use Cases

1. AI/ML Model Training

   Train deep learning models with GPU acceleration. Use Dedicated GPUs (RTX 4090) for smaller models or Premium machines (H100, A100) for large-scale training. FluxEdge is ideal for teams that need burst GPU capacity without long-term cloud commitments. A minimal training-loop sketch follows this list.

2. AI Inference & LLM Hosting

   Run inference servers for production or development. Deploy Ollama, vLLM, NVIDIA NIM, or custom model servers. Zero egress fees make inference serving particularly cost-effective.

3. Image & Video Generation

   Run Stable Diffusion, DALL-E alternatives, or video generation pipelines. Quick Launch templates make this a one-click deployment.

4. 3D Rendering

   Offload Blender, Unreal Engine, or other rendering workloads to GPU machines. Especially useful for animation studios that need burst render capacity.

5. Scientific Computing & HPC

   Run computational fluid dynamics, molecular dynamics, bioinformatics, or other GPU-accelerated scientific workloads.

6. Data Processing

   Use GPU-accelerated data processing tools (RAPIDS, cuDF) for large-scale data analytics. Dedicated CPU machines are also available for non-GPU data workloads.

7. Development & Prototyping

   Spin up Jupyter notebooks with GPU access for quick prototyping and experimentation without setting up local GPU environments.
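As referenced in use case 1, here is a minimal PyTorch training-loop sketch on synthetic data. The model, batch size, and hyperparameters are illustrative; the same loop runs unchanged on a dedicated RTX 4090 or a premium H100 machine:

```python
import torch
import torch.nn as nn

# Illustrative training loop on synthetic data; the model and hyperparameters
# are placeholders, not FluxEdge defaults.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(64, 128, device=device)         # synthetic batch
    y = torch.randint(0, 10, (64,), device=device)  # synthetic labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```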

Crypto mining is also permitted on FluxCore provider machines that opt into it. The Auto-Switch feature seamlessly transitions between mining and rental workloads for maximum utilization.