This site started as two innocent HTML files and immediately became a deployment problem. I wanted server-side rendering, real templates, translations, HTMX fragments, SQLite, Docker, HTTPS, and automatic deploys to a VPS. This is how a tiny blog turned into a production pipeline with just enough machinery to be respectable and just enough chaos to stay honest.

The premise

The stack is boring on purpose. SlimPHP handles routes. Twig renders templates. Symfony Translation keeps copy out of the markup. HTMX patches fragments. SQLite stores the posts. Composer locks dependencies. FrankenPHP bundles Caddy and PHP into one server, so a single container does both jobs. Docker defines the runtime. GitHub Actions builds, pushes, deploys, and smoke-tests the result.
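The one-container claim is less exotic than it sounds. A minimal sketch of that runtime, assuming the official dunglas/frankenphp base image and an illustrative layout (not this site's actual Dockerfile):

```dockerfile
# Hypothetical sketch. FrankenPHP embeds Caddy, so one process
# terminates HTTPS and executes PHP; no separate web server container.
FROM dunglas/frankenphp

# The base image serves /app/public by default.
COPY . /app/public
```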

warning

None of this is trendy. That is the point. Trendy stacks have trendy failure modes, and I already have enough personal problems.

Why not static HTML

Static index.html and about.html were fine for mockups. They stopped being fine when the site needed routing, shared layout, translations, partial rendering, and real post data. Copy-pasting navigation across pages is not simplicity. It is debt wearing a small hat.

The active pages now live in Twig. Controllers pass locale, translated strings, and database rows into templates. The browser receives HTML, because this is still a website and we are allowed to remember that.
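The shape of those templates is ordinary Twig. A hedged sketch with invented names (and the `trans` filter assumes the Symfony Translation bridge is wired into Twig, which the post does not spell out):

```twig
{# posts.twig — hypothetical template; the controller supplies locale, strings, and rows #}
{% extends "layout.twig" %}
{% block content %}
  <h1>{{ 'posts.title'|trans }}</h1>
  {% for post in posts %}
    <article>
      <h2>{{ post.title }}</h2>
      <p>{{ post.summary }}</p>
    </article>
  {% endfor %}
{% endblock %}
```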

The deployment pipeline

The deploy path is intentionally direct: GitHub Actions checks out the repo, configures SSH, makes sure the VPS registry exists, opens a tunnel to 127.0.0.1:5000, builds the image, pushes it by commit SHA, writes the production .env, runs Compose on the VPS, and smoke-tests /health.

# Tag by commit SHA so every deploy is reproducible and rollbacks are explicit.
IMAGE_TAG=${{ github.sha }}
REGISTRY_IMAGE=localhost:5000/abacaxi/boilerplate

# The registry only listens on the VPS loopback; reach it through an SSH tunnel.
ssh -f -N -L 5000:127.0.0.1:5000 prod-vps

docker build --tag "$REGISTRY_IMAGE:$IMAGE_TAG" .
docker push "$REGISTRY_IMAGE:$IMAGE_TAG"
ssh prod-vps "cd /opt/abacaxi && docker compose pull app && docker compose up -d --remove-orphans app"
curl --fail https://abacaxi.dev/health

The registry is private. GitHub Actions reaches it through SSH. No public registry port, no third-party registry bill, and fewer exposed things for the internet to poke with a stick.
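The registry side of that arrangement is small. A sketch under stated assumptions: the stock registry:2 image, a service name and volume I invented, and the loopback-only port binding that makes the SSH tunnel necessary:

```yaml
# docker-compose.yml on the VPS — hypothetical fragment
services:
  registry:
    image: registry:2
    restart: always
    ports:
      - "127.0.0.1:5000:5000"   # loopback only: reachable via SSH tunnel, never the public internet
    volumes:
      - registry-data:/var/lib/registry

volumes:
  registry-data:
```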

Health checks are not decoration

The app exposes /health and returns 204 No Content. There is no body because there is nothing useful to say when the process is healthy. The only job is to give automation a fast, boring signal.

$app->get('/health', static function ($request, $response) {
    return $response->withStatus(204);
});
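Automation also wants that boring signal with a little patience, because a freshly started container may take a beat before it answers. A sketch of a retry wrapper for the smoke test; `retry`, the attempt count, and the pause are my own invention, not the workflow's actual step:

```shell
#!/usr/bin/env sh
# retry N CMD... : run CMD up to N times, pausing briefly between attempts.
# Returns success on the first passing attempt, failure if all attempts fail.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# In the pipeline this would wrap the smoke test, e.g.:
# retry 5 curl --fail --silent https://abacaxi.dev/health
```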

The production problems

Static files initially fought the templates, and routes looked broken whenever the wrong layer answered the request. Port mappings had to bridge the gap between local development and production HTTPS. Stale images became less confusing once the workflow deployed by commit SHA instead of trusting latest like it was a legally binding statement.
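On the Compose side, pinning by SHA is just variable interpolation. A sketch, assuming the production .env the workflow writes carries IMAGE_TAG and the service is named app:

```yaml
# docker-compose.yml (VPS) — hypothetical fragment; IMAGE_TAG comes from the deployed .env
services:
  app:
    image: localhost:5000/abacaxi/boilerplate:${IMAGE_TAG}
```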

actual horror

A browser saying "connection refused" is not a frontend bug. It is infrastructure whispering that one of your assumptions about ports, containers, DNS, or TLS is false.

Crawl hygiene without CAPTCHA theater

CAPTCHA belongs on forms and write actions, not normal read-only pages. For the public site, the right first layer is robots.txt, sitemap.xml, canonical routes, and later rate limiting if traffic ever becomes real enough to shape.

User-agent: *
Allow: /
Disallow: /health
Disallow: /partials/

Sitemap: https://abacaxi.dev/sitemap.xml
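The sitemap next to it can stay equally small. A sketch with made-up URLs and an illustrative date, following the Sitemaps protocol:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://abacaxi.dev/</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
  <url>
    <loc>https://abacaxi.dev/about</loc>
  </url>
</urlset>
```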

What I would improve next

The next useful steps are a cleaner post editor, RSS at /feed.xml, SQLite backups, external uptime monitoring, and Caddy rate limits when there is actual traffic. Production is not one tool. It is the boring chain between code, build, registry, server, health check, and rollback plan.
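For the SQLite backups, the safe primitive is sqlite3's `.backup` dot-command, which uses SQLite's online backup API to take a consistent snapshot even while the app is writing. A sketch with invented paths and a helper name of my own:

```shell
#!/usr/bin/env sh
# backup_db SRC DEST — hypothetical helper; paths below are illustrative.
# .backup copies the database consistently via SQLite's online backup API,
# unlike a plain cp, which can capture a half-written file.
backup_db() {
  src=$1
  dest=$2
  mkdir -p "$(dirname "$dest")"
  sqlite3 "$src" ".backup '$dest'"
}

# On the VPS this might run from cron, e.g.:
# backup_db /opt/abacaxi/data/app.db /opt/abacaxi/backups/app-$(date +%Y%m%d).db
```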