Baseten: Deploy, optimize, and scale open-source and custom AI models in production with Baseten's high-performance inference platform. Cross-cloud, 99.99% uptime, blazing-fast cold starts. (AI Models & Infrastructure · LLM Developer Tools · AI Infrastructure Tools)
Banana AI Inference: Banana provides autoscaling GPU inference hosting with zero-markup, at-cost pricing and full DevOps tooling. Built for AI teams that need to ship and scale fast. (AI Models & Infrastructure · LLM Developer Tools · AI Infrastructure Tools)
BabyAGI: A pioneering open-source framework for self-building autonomous AI agents that plan, prioritize, and execute tasks using large language models. (Automation & Agents · AI Models & Infrastructure · AI Frameworks)
Aporia AI: Aporia provides real-time LLM guardrails and ML model monitoring to help enterprises deploy safe, reliable AI. Now part of Coralogix. (AI Models & Infrastructure · LLM Developer Tools · AI Infrastructure Tools)
Anyscale: Anyscale lets you build, run, and scale all ML and AI workloads on any cloud or on-prem using Ray. Ideal for LLM training, fine-tuning, batch inference, and distributed AI pipelines. (AI Models & Infrastructure · LLM Developer Tools · AI Infrastructure Tools)
Andromeda AI: Andromeda Cluster offers instant access to thousands of H100, H200, and B200 GPUs for AI training and inference. No minimums, zero ingress/egress fees, 24/7 support. (AI Models & Infrastructure · AI Infrastructure Tools)
AIMLAPI: AIMLAPI provides a unified gateway to 400+ AI models including GPT, Claude, Gemini, Sora, and more. Save up to 80% vs. OpenAI with one API key and one bill. (AI Models & Infrastructure · LLM Developer Tools · AI Infrastructure Tools)
OpenWrt: OpenWrt is a free, open-source Linux OS for routers and embedded devices, offering full customization, a rich package manager, and advanced networking capabilities. (DevOps Tools · Command Line Tools · AI Infrastructure Tools)
LiteLLM: LiteLLM is an open-source AI gateway that provides a unified OpenAI-compatible proxy to 100+ LLMs, with spend tracking, rate limiting, fallbacks, and virtual key management. (AI Models & Infrastructure · LLM Developer Tools · AI Infrastructure Tools)
vLLM: vLLM is an open-source high-throughput LLM inference library supporting GPU, CPU, and TPU backends with an OpenAI-compatible API, PagedAttention, and production deployment tools. (AI Models & Infrastructure · LLM Developer Tools · AI Frameworks)
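Several of the entries above (LiteLLM, vLLM, AIMLAPI) expose OpenAI-compatible chat-completions endpoints, which is why they are largely interchangeable behind the same client code. A minimal sketch of the shared request shape, using only the standard library; the endpoint URL, API key, and model name here are placeholders, not real credentials:

```python
import json

# Placeholder endpoint and key for an OpenAI-compatible gateway,
# e.g. a local LiteLLM proxy or a vLLM server (both hypothetical here).
BASE_URL = "http://localhost:4000/v1/chat/completions"
API_KEY = "sk-example-key"

def build_chat_request(model: str, prompt: str) -> tuple[dict, dict]:
    """Build the headers and JSON body of an OpenAI-style chat completion."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body

headers, body = build_chat_request("gpt-4o", "Hello!")
print(json.dumps(body))
```

Because the payload shape is standardized, pointing the same code at a different gateway is just a matter of changing `BASE_URL` (and the key); the actual HTTP call can then be made with any client, e.g. `urllib.request.Request(BASE_URL, data=json.dumps(body).encode(), headers=headers)`.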