Deploying Workloads
Quick Launch templates and custom Docker/YAML deployments — ports, domains, GPU selection, and persistent storage.
Deploying Workloads on FluxEdge
FluxEdge provides two ways to deploy workloads on rented machines: Quick Launch for pre-configured templates from the marketplace, and Custom Deployment for running your own Docker images with full Kubernetes YAML control. Any hardened Dockerized application can run on FluxEdge — from Jupyter notebooks to large language models.
Quick Launch
Quick Launch deploys pre-configured projects from the FluxEdge Marketplace with a few clicks. Available templates include popular AI/ML tools ready to use out of the box.
| Template | Description | Use Case |
|---|---|---|
| Jupyter Notebook | Interactive Python environment with GPU support | Data science, experimentation, prototyping |
| Ollama | Local LLM inference server | Running open-source LLMs (Llama, Mistral, etc.) |
| Stable Diffusion | Image generation with GPU acceleration | AI art, image generation workflows |
| Grafana | Monitoring and visualization dashboard | Infrastructure monitoring, data visualization |
| NVIDIA NIM | Optimized inference microservices | Production AI inference at scale |
To use Quick Launch, navigate to your rented machine, click Add New Project, browse the marketplace, select a template, customize settings via the builder interface if needed, and deploy.
Custom Deployment
Custom deployments give you full control over the container configuration. You can use the Builder UI (a user-friendly form) or the YAML editor (direct Kubernetes YAML); the two are synchronized in real time.
Configuration Options
1. Project Name: A descriptive name for your deployment.
2. Docker Image: The container image in namespace/image:tag format. Supports Docker Hub, GitHub Container Registry (GHCR), Google Container Registry, Azure Container Registry, and private registries.
3. Ports: Map container ports to public ports. Select the TCP or HTTP protocol per port.
4. Domains: Attach custom domains to your deployment or to specific ports. Multiple domains can be comma-separated.
5. Environment Variables: KEY=VALUE pairs passed to the container. Use these for API keys, configuration values, and secrets.
6. Commands: Override the container entrypoint and arguments for custom startup behavior (a minimal sketch follows this list).
7. GPU Selection: Enable GPU access, set the GPU count, choose the manufacturer (NVIDIA or AMD), and optionally specify a model.
8. Resources: Allocate CPU cores, RAM, and storage for the Kubernetes pod.
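The Commands option corresponds to the standard Kubernetes command and args fields, which replace the image's ENTRYPOINT and CMD. A minimal sketch of that override, with a hypothetical image and arguments:

```yaml
# Container entry with an entrypoint override (image and arguments are illustrative).
spec:
  containers:
    - name: web
      image: python:3.11-slim                    # hypothetical image for this sketch
      command: ["python", "-m", "http.server"]   # replaces the image ENTRYPOINT
      args: ["8080"]                             # replaces the image CMD
      ports:
        - containerPort: 8080
          protocol: TCP
```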
Example YAML deployment — Ollama with GPU
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ollama-inference
spec:
  containers:
    - name: ollama
      image: ollama/ollama:latest
      ports:
        - containerPort: 11434
          protocol: TCP
      resources:
        limits:
          nvidia.com/gpu: 1
          cpu: "4"
          memory: "16Gi"
        requests:
          cpu: "2"
          memory: "8Gi"
      env:
        - name: OLLAMA_HOST
          value: "0.0.0.0"
```

Deploying a corrupted or invalid YAML will prevent the pod from starting, but the machine rental will continue running and billing. Always validate your YAML before deploying. If a pod fails, fix the configuration and redeploy — or stop the machine if you no longer need it.
Supported Docker Registries
- Docker Hub — public and private images (default registry)
- GitHub Container Registry (GHCR) — ghcr.io/namespace/image:tag
- Google Container Registry — gcr.io/project/image:tag
- Azure Container Registry — myregistry.azurecr.io/image:tag
- Private registries — any OCI-compatible registry with authentication
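In plain Kubernetes YAML, pulling from a private registry is typically authorized with imagePullSecrets; how FluxEdge collects registry credentials may differ, so treat this as an illustrative sketch with hypothetical names:

```yaml
# Pod spec fragment pulling a private image (secret and image names are hypothetical).
spec:
  imagePullSecrets:
    - name: regcred                            # pre-created registry credential secret
  containers:
    - name: app
      image: ghcr.io/my-org/private-app:1.0    # private GHCR image reference
```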
Local Storage (Persistent Volumes)
FluxEdge supports local persistent volumes for workloads that need data persistence during a rental session.
- A volume is created on the provider's disk and mounted to the container at a configurable path (a YAML sketch follows this list)
- Data persists for the duration of the lease only — it does NOT persist across leases
- Each container gets its own unique volume (shared volumes between containers are not supported)
- Each container is limited to one persistent volume
- Use the storage type filter (e.g., NVMe) to select machines with faster drives
- Quick Launch templates have local storage enabled by default
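In raw Kubernetes terms, a local volume appears as a volumeMounts entry on the container plus a matching volumes definition; the volume type FluxEdge actually provisions is platform-managed, so the emptyDir below is only an illustrative stand-in. Reusing the Ollama example, mounting its model cache might look like:

```yaml
# Pod spec fragment mounting node-local storage (volume type is an illustrative stand-in).
spec:
  containers:
    - name: ollama
      image: ollama/ollama:latest
      volumeMounts:
        - name: model-cache          # hypothetical volume name
          mountPath: /root/.ollama   # Ollama's default model directory inside the container
  volumes:
    - name: model-cache
      emptyDir: {}                   # stand-in for the provider-managed local volume
```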
Data stored in persistent volumes is lost when the lease ends or if the workload migrates to a different provider. Always back up important data to external storage (S3, cloud storage, etc.) before ending your rental.
Other articles in FluxEdge GPU Computing
What is FluxEdge?
Overview of the decentralized GPU compute marketplace — value proposition, network scale, and getting started.
Renting GPU Compute
How to rent Dedicated and Premium GPU machines — filtering, provisioning, and machine management.
Becoming a Provider with FluxCore
Install FluxCore, benchmark your GPU, join the marketplace, and earn from rentals with auto-switch mining fallback.
Pricing, Billing & Payments
Dynamic pricing formula, payment methods (fiat + crypto), deposit bonus, provider earnings, and KYC levels.
GPUs, Frameworks & Use Cases
Supported GPU models (RTX 4090 to H100), AI/ML frameworks, and real-world use cases from training to rendering.
Architecture, Security & Networking
Kubernetes orchestration, ArcaneOS chain-of-trust, container isolation, networking, and data encryption.
FluxEdge vs Traditional Cloud
Detailed comparison with AWS, GCP, Azure — egress fees, pricing, vendor lock-in, and migration strategy.