From Zero to Production: A Hands-On Golang Tutorial for Building Blazing-Fast Cloud APIs with AI-Powered Dev Tools
What you'll build
In this hands-on tutorial, you'll go from zero to a production-ready Go (Golang) REST API. You'll learn how to scaffold a project, write clean handlers, use AI-powered developer tools, containerize with Docker, deploy to the cloud, and wire up DevOps essentials like CI/CD, logging, and metrics. This guide is beginner-friendly but grounded in full-stack realities and DevOps best practices.
Prerequisites
- Basic terminal knowledge and Git installed
- Go 1.21+ installed
- Docker Desktop (or Docker Engine) installed
- A code editor (VS Code recommended) with the Go extension
- Optional: an OpenAI/Anthropic API key or GitHub Copilot for AI assistance
Project goals
- Fast API: idiomatic Go, low overhead
- Secure defaults: env-based config, minimal attack surface
- Cloud-ready: Dockerized, health checks, metrics
- AI-powered dev: use AI to draft boilerplate, tests, and docs
- SEO-friendly: clear structure for docs and landing pages
Step 1 — Initialize the Go module
Create a folder and initialize your module:
mkdir go-blazing-api && cd go-blazing-api
go mod init github.com/yourname/go-blazing-api
go get github.com/go-chi/chi/v5
go get github.com/joho/godotenv
go get github.com/rs/zerolog
go get github.com/prometheus/client_golang
Why these? chi is a lightweight, fast router; godotenv handles local env files; zerolog provides structured logging; client_golang's promhttp package serves a Prometheus metrics endpoint.
Step 2 — Project structure
.
├─ cmd/
│ └─ api/
│ └─ main.go
├─ internal/
│ ├─ http/
│ │ ├─ router.go
│ │ └─ handlers.go
│ ├─ config/
│ │ └─ config.go
│ ├─ health/
│ │ └─ health.go
│ └─ version/
│ └─ version.go
├─ pkg/
│ └─ middleware/
│ └─ middleware.go
├─ .env.example
├─ Dockerfile
├─ docker-compose.yml
├─ Makefile
└─ README.md
Tip: ask your AI assistant to generate file headers, docstrings, and basic tests for each package to speed up iteration.
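For instance, a generated table-driven test often looks like the sketch below. Here greet is a hypothetical helper standing in for a handler's core logic; in the real project this would be a _test.go file using testing.T, shown here as a self-contained program instead:

```go
package main

import "fmt"

// greet is a hypothetical helper standing in for a handler's core logic.
func greet(name string) string {
	if name == "" {
		name = "world"
	}
	return "hello, " + name + "!"
}

func main() {
	// Table-driven cases: the style of test an AI assistant can draft quickly.
	cases := []struct {
		name, in, want string
	}{
		{"default", "", "hello, world!"},
		{"named", "gopher", "hello, gopher!"},
	}
	for _, tc := range cases {
		if got := greet(tc.in); got != tc.want {
			panic(fmt.Sprintf("%s: got %q, want %q", tc.name, got, tc.want))
		}
	}
	fmt.Println("all cases passed")
}
```

Review anything an assistant generates: table-driven tests are cheap to extend, so ask for edge cases (empty input, unicode, very long strings) explicitly.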
Step 3 — Configuration and environment variables
Create internal/config/config.go:
package config

import (
	"log"
	"os"
	"strconv"

	"github.com/joho/godotenv"
)

type Config struct {
	Env          string
	Port         string
	ReadTimeout  int
	WriteTimeout int
}

func mustAtoi(key, def string) int {
	v := def
	if ev := os.Getenv(key); ev != "" {
		v = ev
	}
	i, err := strconv.Atoi(v)
	if err != nil {
		log.Fatalf("invalid int for %s: %v", key, err)
	}
	return i
}

func Load() Config {
	_ = godotenv.Load() // load .env if present
	cfg := Config{
		Env:          getenv("APP_ENV", "development"),
		Port:         getenv("PORT", "8080"),
		ReadTimeout:  mustAtoi("READ_TIMEOUT_SEC", "5"),
		WriteTimeout: mustAtoi("WRITE_TIMEOUT_SEC", "10"),
	}
	return cfg
}

func getenv(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}
Create .env.example:
APP_ENV=development
PORT=8080
READ_TIMEOUT_SEC=5
WRITE_TIMEOUT_SEC=10
Step 4 — Router and handlers
Add internal/http/router.go:
package http

import (
	"net/http"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"
	"github.com/prometheus/client_golang/prometheus/promhttp"

	pmw "github.com/yourname/go-blazing-api/pkg/middleware"
)

func NewRouter(h *Handlers) http.Handler {
	r := chi.NewRouter()
	r.Use(middleware.RequestID)
	r.Use(middleware.RealIP)
	r.Use(pmw.Logger) // structured logging
	r.Use(middleware.Recoverer)
	r.Get("/healthz", h.Health)
	r.Get("/v1/version", h.Version)
	r.Get("/v1/hello", h.Hello)
	r.Method(http.MethodGet, "/metrics", promhttp.Handler())
	return r
}
Add internal/http/handlers.go:
package http

import (
	"encoding/json"
	"net/http"
	"time"

	"github.com/rs/zerolog/log"

	"github.com/yourname/go-blazing-api/internal/version"
)

type Handlers struct{}

func NewHandlers() *Handlers { return &Handlers{} }

func (h *Handlers) Health(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(map[string]any{
		"status": "ok",
		"ts":     time.Now().UTC(),
	})
}

func (h *Handlers) Version(w http.ResponseWriter, r *http.Request) {
	writeJSON(w, http.StatusOK, version.Info())
}

func (h *Handlers) Hello(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	if name == "" {
		name = "world"
	}
	writeJSON(w, http.StatusOK, map[string]string{"message": "hello, " + name + "!"})
}

func writeJSON(w http.ResponseWriter, status int, v any) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	if err := json.NewEncoder(w).Encode(v); err != nil {
		log.Error().Err(err).Msg("write response")
	}
}
Step 5 — Middleware with structured logging
Add pkg/middleware/middleware.go:
package middleware

import (
	"net/http"
	"time"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

func init() {
	zerolog.TimeFieldFormat = time.RFC3339Nano
}

func Logger(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		ww := &statusWriter{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(ww, r)
		log.Info().
			Str("method", r.Method).
			Str("path", r.URL.Path).
			Int("status", ww.status).
			Dur("duration_ms", time.Since(start)).
			Msg("request")
	})
}

type statusWriter struct {
	http.ResponseWriter
	status int
}

func (sw *statusWriter) WriteHeader(code int) {
	sw.status = code
	sw.ResponseWriter.WriteHeader(code)
}
Step 6 — Version info
Add internal/version/version.go for build metadata (the variables are exported so the linker can set them via -X flags at build time):
package version

var (
	Version   = "dev"
	Commit    = "none"
	BuildDate = "unknown"
)

func Info() map[string]string {
	return map[string]string{
		"version":   Version,
		"commit":    Commit,
		"buildDate": BuildDate,
	}
}
Step 7 — Main application
Create cmd/api/main.go:
package main

import (
	"context"
	"fmt"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/yourname/go-blazing-api/internal/config"
	api "github.com/yourname/go-blazing-api/internal/http"
)

func main() {
	cfg := config.Load()
	h := api.NewHandlers()
	router := api.NewRouter(h)
	srv := &http.Server{
		Addr:         ":" + cfg.Port,
		Handler:      router,
		ReadTimeout:  time.Duration(cfg.ReadTimeout) * time.Second,
		WriteTimeout: time.Duration(cfg.WriteTimeout) * time.Second,
	}
	go func() {
		fmt.Printf("listening on :%s (%s)\n", cfg.Port, cfg.Env)
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			panic(err)
		}
	}()
	// graceful shutdown
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	<-quit
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	_ = srv.Shutdown(ctx)
}
Step 8 — Run and test locally
cp .env.example .env
go run ./cmd/api
# in another terminal (quote URLs with query strings so the shell doesn't mangle them):
curl -s http://localhost:8080/healthz | jq
curl -s "http://localhost:8080/v1/hello?name=gopher" | jq
curl -s http://localhost:8080/v1/version | jq
Step 9 — Containerize with Docker
Create a minimal, production-ready image using a multi-stage build.
# Dockerfile
# ---------- builder ----------
FROM golang:1.22-alpine AS builder
WORKDIR /src
RUN apk add --no-cache git ca-certificates build-base
COPY go.mod go.sum ./
RUN go mod download
COPY . .
ARG VERSION=dev
ARG COMMIT=none
ARG DATE=unknown
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    go build -trimpath -ldflags "-s -w \
    -X github.com/yourname/go-blazing-api/internal/version.Version=$VERSION \
    -X github.com/yourname/go-blazing-api/internal/version.Commit=$COMMIT \
    -X github.com/yourname/go-blazing-api/internal/version.BuildDate=$DATE" \
    -o /bin/api ./cmd/api
# ---------- runtime ----------
FROM gcr.io/distroless/base-debian12
ENV PORT=8080
EXPOSE 8080
COPY --from=builder /bin/api /bin/api
USER nonroot:nonroot
ENTRYPOINT ["/bin/api"]
Optional docker-compose.yml for local stacks:
services:
  api:
    build: .
    ports:
      - "8080:8080"
    environment:
      - APP_ENV=development
      - PORT=8080
Build and run:
docker build -t yourname/go-blazing-api:dev --build-arg VERSION=0.1.0 \
  --build-arg COMMIT=$(git rev-parse --short HEAD) \
  --build-arg DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ) .
docker run --rm -p 8080:8080 yourname/go-blazing-api:dev
Step 10 — Observability: metrics and health
- /healthz for readiness/liveness checks in Kubernetes or managed cloud runtimes
- /metrics exposes Prometheus metrics: scrape with Prometheus or use Grafana Cloud
- Structured logs via zerolog: pipe to CloudWatch, Google Cloud Logging (formerly Stackdriver), or Loki
Step 11 — AI-powered dev workflows
- Generate boilerplate: ask AI to draft handler skeletons, DTOs, and tests
- Explain errors: paste stack traces for quick fixes
- Refactor: request performance or memory optimizations
- Docs/SEO: have AI create user-facing docs and landing copy with clear headings and keywords like "DevOps", "full stack", and "coding"
Prompt example:
Refactor this Go handler for lower allocations and add table-driven tests:
<paste handler.go>
Step 12 — CI/CD pipeline (GitHub Actions)
Add .github/workflows/ci.yml:
name: ci
on:
  push:
    branches: [ main ]
  pull_request:
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.22"
      - run: go mod download
      - run: go vet ./...
      - run: go test ./... -coverprofile=coverage.out
      - run: |
          docker build -t ghcr.io/${{ github.repository }}:sha-${{ github.sha }} \
            --build-arg VERSION=${{ github.ref_name }} \
            --build-arg COMMIT=${{ github.sha }} \
            --build-arg DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ) .
Tip: add a deploy job that pushes to GHCR and triggers your cloud deployment.
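A deploy job might look like the sketch below, appended under the same jobs: key in ci.yml. It assumes you push to GHCR with the built-in GITHUB_TOKEN; the final trigger step is left to your platform's CLI:

```yaml
  deploy:
    needs: build-test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - run: |
          docker build -t ghcr.io/${{ github.repository }}:sha-${{ github.sha }} .
          docker push ghcr.io/${{ github.repository }}:sha-${{ github.sha }}
```

Gating on needs: build-test and the main branch keeps broken or unreviewed images out of the registry.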
Step 13 — Deploy to the cloud (examples)
Option A: Render/Fly.io
Deploy the Docker image with a simple service definition. Configure env vars in the dashboard. Add health checks pointing to /healthz.
Option B: Google Cloud Run
gcloud run deploy go-blazing-api \
  --source . \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars APP_ENV=production
Note: PORT is a reserved variable on Cloud Run; the platform injects it, and your config's PORT lookup picks it up automatically.
Option C: Kubernetes (k8s)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-blazing-api
spec:
  replicas: 2
  selector:
    matchLabels: { app: go-blazing-api }
  template:
    metadata:
      labels: { app: go-blazing-api }
    spec:
      containers:
        - name: api
          image: yourname/go-blazing-api:0.1.0
          ports: [{ containerPort: 8080 }]
          readinessProbe:
            httpGet: { path: /healthz, port: 8080 }
            initialDelaySeconds: 3
          livenessProbe:
            httpGet: { path: /healthz, port: 8080 }
          env:
            - { name: APP_ENV, value: "production" }
            - { name: PORT, value: "8080" }
---
apiVersion: v1
kind: Service
metadata:
  name: go-blazing-api
spec:
  selector: { app: go-blazing-api }
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
Step 14 — Basic security and performance
- Timeouts and the Recoverer middleware to mitigate slowloris attacks and panics
- Distroless image running as a non-root user
- Rate limiting: add a token bucket (e.g., golang.org/x/time/rate)
- CORS if serving browser clients
- Input validation: never trust query/body parameters without checks
Example rate limit middleware:
import "golang.org/x/time/rate"

var limiter = rate.NewLimiter(10, 20) // 10 req/s, burst 20

func RateLimit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}
Step 15 — Add a simple AI endpoint (optional)
This demonstrates integrating AI tools from your API. Use env vars for keys.
# go get github.com/sashabaranov/go-openai
// internal/http/handlers_ai.go
package http

import (
	"net/http"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func (h *Handlers) Complete(w http.ResponseWriter, r *http.Request) {
	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		http.Error(w, "ai disabled", http.StatusServiceUnavailable)
		return
	}
	q := r.URL.Query().Get("q")
	if q == "" {
		http.Error(w, "missing q", http.StatusBadRequest)
		return
	}
	client := openai.NewClient(apiKey)
	resp, err := client.CreateChatCompletion(r.Context(), openai.ChatCompletionRequest{
		Model: "gpt-4o-mini",
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: q},
		},
	})
	if err != nil || len(resp.Choices) == 0 {
		http.Error(w, "ai error", http.StatusBadGateway)
		return
	}
	writeJSON(w, http.StatusOK, map[string]string{"answer": resp.Choices[0].Message.Content})
}

Register the route in router.go:
r.Get("/v1/ai/complete", h.Complete)
SEO tips for your project docs
- Use keyword-rich headings like "Golang DevOps deployment guide" or "full-stack coding with Go APIs"
- Add a concise meta description and an FAQ section
- Structure with H2/H3 headings and bullet lists for readability
- Include code snippets and performance claims backed by metrics
Troubleshooting
- Port already in use: change PORT or stop the other service
- Docker build fails: verify the Go version and run go mod download
- 404s: ensure routes match the path and method
- AI requests failing: check OPENAI_API_KEY and the model name
Next steps
- Add persistence: Postgres with sqlc or GORM, run migrations
- JWT auth and RBAC roles
- More observability: tracing with OpenTelemetry
- Performance tuning: pprof, wrk/hey load tests, caching layers
Summary
You've built a blazing-fast, cloud-ready Golang API with DevOps foundations, AI-assisted development, and production essentials: health checks, metrics, structured logs, Docker, and CI/CD. With this scaffold, you can iterate confidently, scale to production, and document your work with SEO-friendly content that helps users and search engines alike.