### Task
1. Stand up a local K8s cluster with `kind`, `k3d`, or `minikube`. Document exact versions.
2. Write a Helm chart (or use the upstream vLLM/SGLang chart and extend it) that deploys a small open-weights model — e.g. `Qwen2.5-0.5B-Instruct`, `Llama-3.2-1B-Instruct`, or any model that fits on CPU/small GPU. CPU-only inference is acceptable.
3. Wrap it in Terraform (or OpenTofu) using the `helm` and `kubernetes` providers.
4. Expose an OpenAI-compatible endpoint through a K8s Service / Ingress and prove it works with a `curl` example in the README.
5. Observability: scrape `/metrics` from the inference pod with Prometheus and show at least one dashboard or PromQL query for request latency and GPU/CPU utilization.
6. Two environments — `dev` and `prod` — differ by at least: replica count, resource requests/limits, and model choice. Use Terraform workspaces, tfvars, or environment directories; justify your choice.
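As a sketch of how the two environments in item 6 might diverge, a `prod` values override for the chart could look like this (every key name here is illustrative — the actual schema depends on the chart you write or extend):

```yaml
# values-prod.yaml — illustrative overrides for a hypothetical chart;
# key names are assumptions and depend on the chart you author or extend.
replicaCount: 3

model:
  # prod serves a larger model than dev's Qwen2.5-0.5B
  name: "meta-llama/Llama-3.2-1B-Instruct"

resources:
  requests:
    cpu: "4"
    memory: 8Gi
  limits:
    cpu: "8"
    memory: 16Gi
```

A matching `values-dev.yaml` would drop the replica count to 1 and pin the smaller model, keeping the diff between environments visible in two short files.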
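Item 3 can be wired up with the `helm` provider's `helm_release` resource. A minimal sketch, assuming a local chart at `./chart`, per-workspace values files, and a `replica_count` variable (all names illustrative):

```hcl
# Illustrative Terraform sketch; chart path, namespace, and variable
# names are assumptions, not a prescribed layout.
variable "replica_count" {
  type    = number
  default = 1 # overridden per environment via tfvars or workspaces
}

resource "helm_release" "inference" {
  name      = "inference"
  chart     = "./chart"
  namespace = "llm"

  # pick up environment-specific overrides by workspace name
  values = [file("values-${terraform.workspace}.yaml")]

  set {
    name  = "replicaCount"
    value = var.replica_count
  }
}
```

Workspaces keep a single configuration directory, at the cost of less explicit per-environment state; environment directories make the split obvious but duplicate wiring — either is defensible if you justify it in the README.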
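For item 5, two starting-point PromQL queries. The latency metric name is an assumption — vLLM exposes histograms under a `vllm:` prefix, but exact names vary by server and version, so verify them against your pod's `/metrics` output:

```promql
# p95 end-to-end request latency (metric name is version-dependent; check /metrics)
histogram_quantile(0.95,
  sum(rate(vllm:e2e_request_latency_seconds_bucket[5m])) by (le))

# CPU utilization of the inference pods, via cAdvisor container metrics
sum(rate(container_cpu_usage_seconds_total{pod=~"inference-.*"}[5m])) by (pod)
```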
### Stretch Goals
- Deploy a separate application container running an agentic system that uses the deployed vLLM/SGLang instance as its backend model server. The agent's use case is yours to choose.
- HPA based on a custom metric (e.g. queue depth or tokens/sec)
- Image digest pinning and an `atlantis.yaml` or equivalent GitOps config
- A smoke-test job that runs post-deploy and fails the apply if the endpoint is unhealthy
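The HPA stretch goal can be sketched with an `autoscaling/v2` HorizontalPodAutoscaler targeting a Pods-type custom metric. The metric name below is an assumption; it must be exported to the metrics API through an adapter such as prometheus-adapter:

```yaml
# Illustrative HPA on a queue-depth custom metric; the metric name is an
# assumption and must be exposed via a metrics adapter (e.g. prometheus-adapter).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inference
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inference
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Pods
      pods:
        metric:
          name: vllm_num_requests_waiting
        target:
          type: AverageValue
          averageValue: "5"
```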
You will be assessed on the following criteria:
- the correctness of your solution's output (allowing for the stochastic nature of model responses);
- how reliable, testable, modular, and clean your code is;
- any other interesting add-ons you include.