AI & LLMs
Practical guides for running AI workloads, from local LLMs on your machine to deploying models on Kubernetes.
Run locally first
The fastest way to get started with LLMs is to run them on your own machine: no API keys, no subscriptions, and full control over the model.
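As a minimal sketch, one common route is Ollama (assuming it is installed from ollama.com and you pick a model available in its library, e.g. llama3.2):

```shell
# Download model weights to your machine (the 3B llama3.2 variant is about 2 GB)
ollama pull llama3.2

# Open an interactive chat session in the terminal
ollama run llama3.2

# The same local server also exposes an HTTP API on port 11434,
# so scripts and apps can use the model without any cloud credentials:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Why run LLMs locally?", "stream": false}'
```

Other local runners (llama.cpp, LM Studio) follow a similar pull-then-run pattern; the model name and sizes above are illustrative.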
Coming soon
The next step: taking these models to Kubernetes. GPU node pools, model serving, and more.