
AI Models
Open-weight inference, fully managed.

Deploy and fine-tune open-source models on IBEE's GPU Cloud — an OpenAI-compatible API on top of India-resident GPUs, with predictable pricing and tenant-level isolation.
Who this is for

For teams shipping AI into real products.

Managed inference and fine-tuning for application engineers. No GPU cluster to run, no runtime to tune, no fleet of containers to keep alive.

Chatbots grounded in your own data

Retrieval, reranking, and generation in one pipeline — your documents stay on IBEE, your users get answers grounded in real context, not hallucinated boilerplate.

Multi-step agent workflows

Function calling, tool use, and conversation memory on a managed runtime. Build agent systems without wiring together five vendors and three SDKs.

Image and multimodal generation

Run generative media workloads on reserved GPU pools — consistent queue depth, no surprise rate limits, no shared throttling windows at peak.

Large-scale batch inference

Embedding, classification, enrichment across millions of records. Spot-priced GPU pools with automatic retry when a node is preempted.
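
From the client side, a batch run is just the embeddings endpoint called in chunks; preemption retries are handled by the platform. A minimal sketch, assuming the endpoint follows the OpenAI embeddings schema, with placeholder base URL and model name:

```python
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.ibee.example/v1",   # placeholder endpoint
                api_key=os.environ["IBEE_API_KEY"])

records = ["invoice 1042: late fee waived", "ticket 88: refund issued"]  # toy data

vectors = []
for start in range(0, len(records), 64):                  # embed in chunks of 64
    batch = records[start:start + 64]
    resp = client.embeddings.create(model="bge-large-en-v1.5", input=batch)  # illustrative model
    vectors.extend(item.embedding for item in resp.data)
print(len(vectors), "embeddings")
```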

Model catalogue

Open-weight models, ready to deploy.

The launch catalogue focuses on the model families teams actually use in production. More coming as GPU capacity expands.

Language models
  • Llama 3.x
  • Mistral
  • Qwen
  • Gemma
  • + more at GA
Embedding & retrieval
  • BGE
  • E5
  • Nomic Embed
  • + more at GA
Code models
  • DeepSeek Coder
  • StarCoder
  • + more at GA

The boring infrastructure work, already done.

IBEE AI Models handles the parts of running inference that stop being interesting after the first production incident — runtime alignment, observability, rollback, tenant isolation, autoscaling.

OpenAI-compatible API

The endpoint speaks the OpenAI Chat Completions schema. Swap your base URL in one line — your existing SDK, tools, and eval harnesses keep working.
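
For a sense of what the swap looks like, here is a minimal sketch using the official openai Python SDK. The base URL, API key variable, and model name are illustrative placeholders, not confirmed values.

```python
import os

from openai import OpenAI

# Same client, same request shape; only the base URL changes.
client = OpenAI(
    base_url="https://api.ibee.example/v1",   # placeholder IBEE endpoint
    api_key=os.environ["IBEE_API_KEY"],       # placeholder key variable
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",            # illustrative catalogue name
    messages=[{"role": "user", "content": "Summarise our refund policy in two lines."}],
)
print(response.choices[0].message.content)
```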

Fine-tuning that doesn't bite

Upload a dataset, pick a base model, get a fine-tuned checkpoint. LoRA, QLoRA, and full fine-tunes with versioned datasets and resume-from-failure built in.
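
As a rough sketch of that flow, assuming the fine-tuning endpoints mirror the OpenAI files and jobs API (IBEE's exact fine-tuning interface isn't documented here), with placeholder URL, file name, and model ID:

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.ibee.example/v1",   # placeholder endpoint
    api_key=os.environ["IBEE_API_KEY"],
)

# Upload a versioned training set, then start a job against a base model.
dataset = client.files.create(file=open("support_chats.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=dataset.id,
    model="llama-3.1-8b-instruct",            # illustrative base model
)
print(job.id, job.status)  # poll until the checkpoint shows up in your registry
```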

Per-tenant model registry

Version, tag, and roll back your own checkpoints. Private to your tenancy by default — never part of a shared fleet, never used to train anything else.

Ready runtimes

vLLM, TGI, Triton, and TensorRT kept current and matched to the right GPU SKU. You pick the runtime; we handle driver alignment and patch cadence.

Token-level observability

Per-request token counts, p50/p95/p99 latency, cost per call, and GPU utilisation — exported straight into the metrics stack you already use.
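
The per-request token counts ride along on the standard response object, so a rough cost-per-call check can also be done client-side. A minimal sketch, with a made-up per-token rate and placeholder endpoint:

```python
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.ibee.example/v1",   # placeholder endpoint
                api_key=os.environ["IBEE_API_KEY"])

resp = client.chat.completions.create(
    model="llama-3.1-8b-instruct",                        # illustrative model
    messages=[{"role": "user", "content": "Classify this ticket: 'card declined twice'."}],
)

usage = resp.usage  # standard OpenAI-schema usage object
PRICE_PER_1K_TOKENS = 0.0004  # placeholder rate, not an IBEE price
print(usage.prompt_tokens, usage.completion_tokens,
      round(usage.total_tokens / 1000 * PRICE_PER_1K_TOKENS, 6))
```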

Tenant isolation by default

Weights, fine-tuning data, and inference traffic never leave your tenancy. No cross-tenant caching. No "improving the base model with your data" clauses.

IBEE AI Cloud

Managed layer, or drop down to raw infra.

AI Models sits on top of IBEE's compute and storage stack. Use the managed layer when it fits; drop down to GPU Cloud or Bare Metal when you need lower-level control.

Ship open-weight AI
without operating GPUs.

IBEE AI Models is entering private beta. Register interest to get a base-URL swap guide, a cost estimate for your traffic shape, and early access to the fine-tuning pipeline.

Frequently Asked Questions

Who is IBEE AI Models for?

Teams shipping AI features inside real products — not ML researchers tuning foundation models. If you're comparing IBEE to the OpenAI API, Bedrock, or a self-hosted vLLM cluster: the managed layer is the right fit. If you need raw GPU shells for R&D, pair it with GPU Cloud or Bare Metal.

How compatible is the API with OpenAI's?

The chat completions, embedding, and streaming endpoints follow the OpenAI schema. In practice, pointing your existing client library at an IBEE base URL is a one-line change. Function calling, tool use, and JSON-mode responses follow the same shape. Where the schemas differ (rate-limit headers, usage objects), the differences are documented.

When will it be available?

Managed inference is entering private beta first, with fine-tuning pipelines following. Access is invite-based during beta — register via the contact form to join the queue. We prioritise teams with defined workloads and clear p50/p95 targets so we can tune the platform against real traffic.

Which models are available at launch?

At launch: Llama 3.x, Mistral, Qwen, and Gemma for language; BGE, E5, and Nomic Embed for retrieval; DeepSeek Coder and StarCoder for code. Image and multimodal models are being added as GPU capacity expands. Model-specific optimisations (speculative decoding, paged attention) are turned on per SKU.

Can I deploy my own fine-tuned checkpoints?

Yes. The Model Registry is per tenant — you upload a checkpoint, tag it, and point a managed endpoint at it. Inference runs on the right GPU SKU for the model size. Your weights stay private; they're never loaded into a shared pool and never used to train a base model.

Where do my data and fine-tuning runs live?

In your IBEE tenancy, under Indian data residency by default. Fine-tuning reads from your storage bucket, writes the checkpoint to your registry, and logs the run in your audit trail. Nothing about your dataset leaves the tenancy or the country unless you explicitly replicate it.

Have more questions?

Contact Our Technical Team