PUBLIC RESUME PLATFORM

Chase Streicher

Cloud platform and DevOps engineer shipping infrastructure with Terraform, Kubernetes, Prometheus, and Grafana, plus practical local AI operations with Ollama.

Stack: Kubernetes + Ollama + Terraform · Site: https://cstreicher.com

What Runs Where

Wireframe for the current hybrid Proxmox + OCI deployment and service routing.

On-Prem Host: Proxmox (`proxmox1`)

32 GB RAM and 24 TB storage pool

Runs local infrastructure, AI services, and observability workloads

↓

On-Prem VM/Services: LabOps Stack

  • `labops-backend-1` (Resume API + Assistant API)
  • `labops-caddy-1` (local reverse proxy)
  • Grafana container on `:3000`
  • Ollama local inference on `:11434`

Local Platform Functions

  • Resume RAG + source-grounded Q&A
  • Local AI Assistant at `/assistant/`
  • iLO telemetry monitor via Redfish
  • Feature-isolated app routes per service
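The iLO telemetry monitor above polls the server's Redfish API. A minimal sketch of reducing a Redfish `Thermal` payload to readings, using the standard DMTF field names; the host, credentials, and sensor names shown are hypothetical.

```python
def summarize_thermal(payload: dict) -> dict:
    """Reduce a Redfish Thermal resource to name -> reading maps."""
    temps = {t["Name"]: t["ReadingCelsius"] for t in payload.get("Temperatures", [])}
    fans = {f["Name"]: f["Reading"] for f in payload.get("Fans", [])}
    return {"temps": temps, "fans": fans}

# Live usage against iLO would look like (hypothetical host/credentials):
#   r = requests.get("https://ilo-host/redfish/v1/Chassis/1/Thermal",
#                    auth=("user", "pass"), verify=False)
#   summarize_thermal(r.json())
```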
↓ secure tunnel

OCI Edge VM: `ociwk01` (129.80.150.4)

  • NGINX + TLS termination for `cstreicher.com`
  • Routes `/api/*` and feature paths to on-prem
  • Reverse SSH tunnel endpoint at `127.0.0.1:18000`

OCI-Associated VM and Service Connectivity

  • Public edge pattern leaves room for additional OCI-hosted VM and service integrations
  • Grafana endpoint mapped via `grafana.cstreicher.com`
  • SSH key-based administration and controlled proxy routing
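The reverse tunnel in the diagram is a plain `ssh -R` forward from the on-prem backend to the edge VM. A minimal sketch that builds the argv: the `127.0.0.1:18000` bind and `129.80.150.4` host come from the diagram above, while the local backend port and remote username are assumptions.

```python
def tunnel_command(local_port: int = 8000, user: str = "opc",
                   edge_host: str = "129.80.150.4", edge_port: int = 18000) -> list[str]:
    """Build an ssh argv that exposes the local backend on the edge VM's loopback."""
    return [
        "ssh", "-N",  # -N: no remote command, tunnel only
        "-R", f"127.0.0.1:{edge_port}:localhost:{local_port}",
        f"{user}@{edge_host}",
    ]
```

In practice a tunnel like this would run under autossh or a systemd unit so it survives reboots and dropped connections.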
↓ public

Public Endpoints

  • cstreicher.com -> resume website + chat API
  • grafana.cstreicher.com -> Grafana dashboards
  • cstreicher.com/prometheus/ -> Prometheus UI
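On `ociwk01`, the endpoint mapping above is handled by NGINX. A hedged sketch of the server blocks, assuming Let's Encrypt certificate paths and that all feature paths and the Grafana subdomain ride the same tunnel endpoint from the diagram; exact locations and upstreams are assumptions.

```nginx
# cstreicher.com: resume site + chat API, proxied over the reverse SSH tunnel
server {
    listen 443 ssl;
    server_name cstreicher.com;
    ssl_certificate     /etc/letsencrypt/live/cstreicher.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cstreicher.com/privkey.pem;

    location /api/ {
        proxy_pass http://127.0.0.1:18000;  # reverse tunnel to on-prem backend
    }
    location /prometheus/ {
        proxy_pass http://127.0.0.1:18000;  # feature path routed on-prem
    }
    location / {
        proxy_pass http://127.0.0.1:18000;
    }
}

# grafana.cstreicher.com -> Grafana dashboards on-prem
server {
    listen 443 ssl;
    server_name grafana.cstreicher.com;
    location / {
        proxy_pass http://127.0.0.1:18000;
    }
}
```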

Ask Resume AI

Visitors can ask questions about my experience. Answers are grounded against data/resume.md.
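A minimal sketch of the source-grounded retrieval step, assuming `data/resume.md` is split on blank lines and ranked by lexical overlap; the real service may chunk and rank differently (e.g. with embeddings), and the function names here are hypothetical.

```python
def chunk(text: str) -> list[str]:
    """Split resume markdown into paragraph-sized chunks."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def score(query: str, passage: str) -> float:
    """Crude lexical-overlap relevance score."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def top_excerpts(query: str, resume_text: str, k: int = 3) -> list[str]:
    """Return the k most relevant resume excerpts for a question."""
    ranked = sorted(chunk(resume_text), key=lambda c: score(query, c), reverse=True)
    return ranked[:k]

# The excerpts are then handed to the local model as grounding context, e.g.:
#   requests.post("http://localhost:11434/api/generate",
#                 json={"model": "llama3", "prompt": prompt, "stream": False})
```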

Platform Notes

  • IaC: Terraform + Hyper-V
  • Orchestration: k3s Kubernetes
  • Observability: Prometheus + Grafana
  • Local AI: Ollama + resume RAG
  • Storage: 24 TB in-house
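The observability pair above is wired together through a Prometheus scrape config. A sketch assuming a standard node exporter on the Proxmox host; job names and targets are assumptions, not the live config.

```yaml
scrape_configs:
  - job_name: "proxmox-node"
    static_configs:
      - targets: ["proxmox1:9100"]   # node exporter, assumed
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]  # Prometheus self-scrape
```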