[{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/","section":"Categories","summary":"","title":"Categories"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/devops/","section":"Tags","summary":"","title":"DevOps"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/","section":"DevOps Noldus","summary":"","title":"DevOps Noldus"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/infisical/","section":"Tags","summary":"","title":"Infisical"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/kubernetes/","section":"Tags","summary":"","title":"Kubernetes"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/platform-engineering/","section":"Categories","summary":"","title":"Platform Engineering"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/platform-engineering/","section":"Tags","summary":"","title":"Platform Engineering"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/posts/","section":"Posts","summary":"","title":"Posts"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/secrets-management/","section":"Tags","summary":"","title":"Secrets Management"},{"content":"It is 2026. There are .env files committed to private repositories right now. There are passwords in Kubernetes Secret objects encoded as base64, which is not encryption, and someone on that team thinks it is. There are production credentials in a shared Bitwarden folder with twelve people\u0026rsquo;s access that nobody has audited since the last two people left.\nSecrets management is a solved problem in the sense that we know what good looks like. It\u0026rsquo;s an unsolved problem in the sense that most teams aren\u0026rsquo;t doing it.\nThis post is a tour of the current landscape, what actually works, and what the traps are.\nThe options #HashiCorp Vault is the industry standard and earns that title. 
Dynamic secrets, fine-grained policies, audit logging, multiple auth backends, a mature operator ecosystem. It\u0026rsquo;s also operationally heavy. Running Vault in HA requires real thought about storage backends, unsealing, and cluster membership. The free tier covers most use cases but the BSL licence change in 2023 left some teams uneasy. OpenBao is the community fork if that matters to you.\nExternal Secrets Operator (ESO) is the Kubernetes-native answer. It doesn\u0026rsquo;t store secrets. It syncs them from wherever you already have them (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, Vault, 1Password, and a dozen others) into Kubernetes Secret objects. If you\u0026rsquo;re on Kubernetes and already using a cloud provider\u0026rsquo;s secret store, ESO is often the right choice: you get GitOps-friendly ExternalSecret manifests, automatic refresh, and you\u0026rsquo;re not adding another stateful thing to run.\nInfisical is the newcomer that\u0026rsquo;s earned a serious look. Open source, self-hostable, with a clean UI, native Kubernetes operator, CLI injection (infisical run --), and a managed cloud tier. It bridges the gap between \u0026ldquo;too simple\u0026rdquo; (Bitwarden) and \u0026ldquo;too much\u0026rdquo; (Vault) for teams that aren\u0026rsquo;t operating at Vault scale but want more than environment variables in a file.\nCloud-native stores (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault) are the right answer if you\u0026rsquo;re already committed to one cloud and don\u0026rsquo;t need portability. Managed, cheap at low scale, IAM-integrated. The friction comes when you\u0026rsquo;re multi-cloud or need to inject secrets into non-cloud workloads.\nSealed Secrets deserves a mention: encrypts Kubernetes secrets so you can commit them to Git. Solves the \u0026ldquo;we use GitOps but secrets can\u0026rsquo;t go in the repo\u0026rdquo; problem specifically. 
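The Sealed Secrets workflow is two commands. A minimal sketch, assuming the kubeseal CLI is installed and the controller is running in the cluster (the secret name and value are illustrative):

```shell
# Build a plain Secret manifest locally -- this file never gets committed
kubectl create secret generic db-creds \
  --from-literal=password=hunter2 \
  --dry-run=client -o yaml > secret.yaml

# Encrypt it against the controller's public key -- the output is safe to commit
kubeseal --format yaml < secret.yaml > sealed-secret.yaml
```

Only the controller holds the private key, so the sealed file can live in the repo alongside the rest of your manifests.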
Narrow tool, does its job well.\nThe pattern that actually works #The principle is the same regardless of tooling: secrets live in one place, everything else references them.\nThat means:\nNo copying secrets between systems\nNo secrets in environment files that get deployed with the app\nNo secrets in CI/CD variables that someone set three years ago and nobody knows what they\u0026rsquo;re for anymore\nApplications get secrets injected at runtime, not baked in at build time\nIn Kubernetes this looks like ESO pulling from your secret store into short-lived Secret objects, with rotation handled by the store. In Compose environments it looks like infisical run -- docker compose up -d. In CI/CD it looks like OIDC-based access to your cloud secret store rather than long-lived tokens stored as pipeline variables.\nThe shape is: identity-based access, not credential-based access. Your workload authenticates as itself (service account, instance profile, OIDC token) and gets secrets it\u0026rsquo;s allowed to have. No pre-shared passwords to rotate. No human in the path.\nThe traps #Base64 is not encryption. Kubernetes Secret objects are base64-encoded by default, which is encoding, not security. If someone can read Secrets in your cluster, they can read your secrets. etcd encryption at rest helps, but the bigger issue is RBAC: most clusters are too permissive about who can list and get secrets across namespaces. Audit this before you deploy anything sensitive.\nSecret sprawl happens faster than you think. You start with one secret store, then someone needs a quick fix and puts a value in a CI variable. Then a contractor needs access and gets a personal API key. Then the key rotation script creates a new key but the old one isn\u0026rsquo;t deleted. Six months later you have secrets in four places and a rotation process that covers two of them. Centralisation is not a one-time migration, it\u0026rsquo;s a discipline.\nRotation is the hard part. 
Storing secrets centrally is relatively easy. Rotating them without downtime is where most systems fall apart. Dynamic secrets (Vault\u0026rsquo;s killer feature) sidestep this by issuing short-lived credentials on demand. No rotation needed because the credential expires on its own. If you\u0026rsquo;re not using dynamic secrets, you need a rotation strategy from day one, not after the first breach.\nThe Vault complexity trap. Teams adopt Vault because it\u0026rsquo;s the right tool, then spend six months configuring auth backends, policies, and namespaces before they\u0026rsquo;ve protected a single production secret. If you\u0026rsquo;re a team of ten, Vault\u0026rsquo;s operational weight might not be worth it yet. Infisical or a cloud-native store with ESO will get you 80% of the benefit at 20% of the overhead. Reach for Vault when you need what Vault specifically provides: dynamic secrets, PKI, SSH signing, complex policy hierarchies.\nOIDC in CI/CD is still underused. Most teams are still passing long-lived tokens into GitHub Actions or GitLab CI as repository secrets. OIDC-based access (your pipeline authenticates with a short-lived token issued by the CI provider, which AWS/GCP/Azure or Vault trusts) eliminates the long-lived credential entirely. The setup takes an afternoon. The payoff is permanent.\nWhere things actually stand #The tooling is good. ESO is mature and well-maintained. Infisical has momentum. Vault is stable. The cloud stores are reliable. There\u0026rsquo;s no excuse for the state most teams are in on the tooling side.\nThe gap is cultural and operational. Secrets management doesn\u0026rsquo;t feel like a feature, so it doesn\u0026rsquo;t get prioritised. The .env file works until it doesn\u0026rsquo;t. The Kubernetes secret with base64 encoding feels fine until there\u0026rsquo;s an incident. 
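The OIDC setup mentioned above is small. A hedged sketch for GitHub Actions and AWS, using the official aws-actions/configure-aws-credentials action; the role ARN and region are placeholders to swap for your own:

```yaml
# Sketch only: job-level OIDC federation to AWS; no long-lived keys stored.
permissions:
  id-token: write   # lets the job request an OIDC token from GitHub
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy  # placeholder role
          aws-region: eu-north-1                                    # placeholder region
      # Later steps hold short-lived credentials; nothing to rotate, nothing to leak.
      - run: aws secretsmanager get-secret-value --secret-id my-app/prod
```

GCP, Azure, and Vault have equivalent federation flows; the shape is the same.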
The CI variable from 2022 keeps working so nobody touches it.\nThe actual work is treating secrets management as infrastructure — something that gets designed, documented, reviewed, and rotated on a schedule. Not a one-time setup, and not something you figure out during a post-mortem.\nThe tooling is the easy part. The discipline is what\u0026rsquo;s missing.\n","date":"5 April 2026","permalink":"https://blog.antnsn.dev/2026-p3-secrets-management-mess/","section":"Posts","summary":"\u003cp\u003eIt is 2026. There are \u003ccode\u003e.env\u003c/code\u003e files committed to private repositories right now. There are passwords in Kubernetes \u003ccode\u003eSecret\u003c/code\u003e objects encoded as base64, which is not encryption, and someone on that team thinks it is. There are production credentials in a shared Bitwarden folder with twelve people\u0026rsquo;s access that nobody has audited since the last two people left.\u003c/p\u003e\n\u003cp\u003eSecrets management is a solved problem in the sense that we know what good looks like. 
It\u0026rsquo;s an unsolved problem in the sense that most teams aren\u0026rsquo;t doing it.\u003c/p\u003e","title":"Secrets management is still a mess in 2026"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/security/","section":"Categories","summary":"","title":"Security"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/security/","section":"Tags","summary":"","title":"Security"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/","section":"Tags","summary":"","title":"Tags"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/vault/","section":"Tags","summary":"","title":"Vault"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/devops/","section":"Categories","summary":"","title":"DevOps"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/docker/","section":"Categories","summary":"","title":"Docker"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/docker/","section":"Tags","summary":"","title":"Docker"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/docker-compose/","section":"Tags","summary":"","title":"Docker Compose"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/gitops/","section":"Tags","summary":"","title":"GitOps"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/homelab/","section":"Tags","summary":"","title":"Homelab"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/self-hosted/","section":"Tags","summary":"","title":"Self-Hosted"},{"content":"Not everything needs to run on Kubernetes.\nstackd is a GitOps daemon for Docker Compose, built for people who don\u0026rsquo;t want cloud platform complexity just to keep a few self-hosted services running. It sits between your Git repo and your Docker host, watches for changes, pulls updates, and applies them automatically with docker compose up -d. 
The point is to make Compose feel operationally mature without turning it into Kubernetes.\nThe problem #If you run homelab or small production-ish stacks, deployment usually means SSHing into a box, pulling the latest code, and restarting containers manually. That works until it doesn\u0026rsquo;t. Until you\u0026rsquo;re managing six stacks across two machines, until you forget which box has the current version of what, until you\u0026rsquo;re trying to remember why you changed something three weeks ago.\nstackd turns that into a repeatable flow driven by Git. Your repo is the source of truth. When you push, things deploy. You don\u0026rsquo;t have to be there.\nThat\u0026rsquo;s the whole pitch. The rest is just implementation.\nWhat it does #stackd is a daemon. You run it in a container, give it access to your Docker socket, point it at one or more Git repositories, and it handles the rest:\nPolls each repo on a configurable interval\nOn SHA change, runs docker compose up -d for each stack in the repo\nSurfaces state (container health, last sync, recent activity) in a live dashboard\nOptionally wraps deploys with infisical run -- so secrets come from Infisical, not .env files\nNo CRDs, no controllers, no cluster. Just Git + Docker Compose + a daemon that connects them.\nGetting started #\nservices:\n  stackd:\n    container_name: stackd\n    image: ghcr.io/antnsn/stackd:latest\n    environment:\n      - SECRET_KEY=your-strong-random-value\n      - DB_URL=sqlite:///data/stackd.db\n      - PORT=8080\n    volumes:\n      - /path/to/stackd-data:/data\n      - /var/run/docker.sock:/var/run/docker.sock\n    ports:\n      - \u0026#34;8080:8080\u0026#34;\n    restart: unless-stopped\nGenerate the key:\nopenssl rand -hex 32\nSECRET_KEY is required. stackd uses it to encrypt SSH keys and tokens at rest. It won\u0026rsquo;t start without one.\nAfter it\u0026rsquo;s running, open http://localhost:8080, go to Settings, add your first repository. 
On the next sync interval it clones the repo, finds your compose stacks, and applies them.\nHow the sync loop works #Polling, not webhooks. That\u0026rsquo;s a deliberate choice. Webhooks require ingress, they require the daemon to be publicly reachable, they add failure modes. A 60-second poll is boring, reliable, and self-healing.\nWhen a sync runs:\nPull the repo\nCompare HEAD SHA against last known SHA\nIf changed: for each compose file in the stacks directory, docker compose up -d\nUpdate state, emit an activity event\nIf a pull fails, stackd backs off exponentially: 2 min, 4, 8, capped at 8× the sync interval. After 10 consecutive failures it suspends the repo entirely. Manual sync from the dashboard resets backoff immediately.\nRepo layout #\nrepo/\n  stacks/\n    postgres/\n      docker-compose.yml\n    grafana/\n      docker-compose.yml\n    jellyfin/\n      docker-compose.yml\nEach subdirectory is a stack with independent state. If one fails to apply, the others still run. You\u0026rsquo;re not blocked on a bad service file bringing down the whole sync.\nSecrets without .env files #The .env antipattern is everywhere. A file full of plaintext secrets, sitting on the filesystem, maybe gitignored if you remembered. Fine until it isn\u0026rsquo;t.\nstackd integrates with Infisical for secrets injection. Configure a global machine token in Settings and docker compose up -d becomes infisical run -- docker compose up -d for any stack whose compose file uses ${} variable substitution. Compose reads them from the environment as normal; they just don\u0026rsquo;t live in a file.\nFor stacks that need their own project or environment, drop an infisical.toml in the stack directory:\n[infisical]\naddress = \u0026#34;https://infisical.example.com\u0026#34;\n\n[auth]\nstrategy = \u0026#34;token\u0026#34;\n\n[project]\nproject_id = \u0026#34;your-project-uuid\u0026#34;\ndefault_environment = \u0026#34;prod\u0026#34;\nPer-stack config takes precedence over the global token. 
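The retry schedule from the sync-loop section above works out as follows; a quick shell sketch, with the 60-second interval as an example value:

```shell
interval=60                # sync interval in seconds (example value)
cap=$((interval * 8))      # backoff is capped at 8x the interval -> 480s
delay=120                  # first backoff after a failed pull: 2 minutes

for failure in 1 2 3 4 5; do
  echo "failure $failure: wait ${delay}s"
  delay=$((delay * 2))
  if [ "$delay" -gt "$cap" ]; then delay=$cap; fi
done
```

With a 60-second interval the delays run 120s, 240s, then stay pinned at 480s until the tenth consecutive failure suspends the repo.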
The dashboard shows which mode each stack is using.\nMigrating is mechanical:\n# before\nenvironment:\n  POSTGRES_PASSWORD: mysecretpassword\n\n# after\nenvironment:\n  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}\nPut the value in Infisical. Remove it from your files. Done.\nThe dashboard #The dashboard is built for fast operator decisions: what\u0026rsquo;s running, what broke, and why.\nIt shows:\nAll repos: sync status, current SHA, last error\nAll stacks: container health per service, Infisical mode, last apply timestamp\nReal-time log streaming via SSE. Inspect a failure without leaving the browser.\nWeb shell: browser-based terminal into any running container via xterm.js\nThe log streaming and web shell are the parts I reach for most. When something fails at 2am you don\u0026rsquo;t want to be assembling SSH commands and docker logs flags. You want to click into the container and see what happened.\nWho it\u0026rsquo;s for #stackd is for homelabbers, small teams, and self-hosters who want the GitOps model without adopting a whole Kubernetes stack. If you\u0026rsquo;re running more than a couple of Compose stacks and your current deployment story involves manual SSH steps, it\u0026rsquo;s worth a look.\nIf you\u0026rsquo;re running 50 services across a production cluster with rolling deploys, health-check gating, and a platform team, you want Kubernetes and ArgoCD. stackd is not that. Scope is intentional.\nThe repo is at github.com/antnsn/stackd. AGPL-3.0, commercial licensing available.\n","date":"1 April 2026","permalink":"https://blog.antnsn.dev/2026-p1-stackd/","section":"Posts","summary":"\u003cp\u003eNot everything needs to run on Kubernetes.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/antnsn/stackd\" target=\"_blank\" rel=\"noreferrer\"\u003estackd\u003c/a\u003e is a GitOps daemon for Docker Compose, built for people who don\u0026rsquo;t want cloud platform complexity just to keep a few self-hosted services running. 
It sits between your Git repo and your Docker host, watches for changes, pulls updates, and applies them automatically with \u003ccode\u003edocker compose up -d\u003c/code\u003e. The point is to make Compose feel operationally mature without turning it into Kubernetes.\u003c/p\u003e","title":"stackd: GitOps for Docker Compose without the Kubernetes tax"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/tools/","section":"Categories","summary":"","title":"Tools"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/cv/","section":"DevOps Noldus","summary":"","title":"CV"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/aks/","section":"Tags","summary":"","title":"AKS"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/aws/","section":"Tags","summary":"","title":"AWS"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/azure/","section":"Tags","summary":"","title":"Azure"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/cloud/","section":"Tags","summary":"","title":"Cloud"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/eks/","section":"Tags","summary":"","title":"EKS"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/endpoint/","section":"Tags","summary":"","title":"Endpoint"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/ingress/","section":"Tags","summary":"","title":"Ingress"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/kubernetes/","section":"Categories","summary":"","title":"Kubernetes"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/kubernetes-proxy/","section":"Tags","summary":"","title":"Kubernetes Proxy"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/kubernetes-tutorial/","section":"Tags","summary":"","title":"Kubernetes 
Tutorial"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/manifest/","section":"Tags","summary":"","title":"Manifest"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/nginx/","section":"Tags","summary":"","title":"Nginx"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/reverse-proxy/","section":"Tags","summary":"","title":"Reverse Proxy"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/service/","section":"Tags","summary":"","title":"Service"},{"content":"Introduction #While discussing reverse proxies with a colleague who was building out his home lab with Docker, the topic of SSL certificates and proxies came up. I mentioned that I use Kubernetes, cert-manager, and Let\u0026rsquo;s Encrypt to manage these components. However, this made me consider the fact that although most of my services are hosted within Kubernetes, there are still some that run on other platforms, including bare-metal.\nAs a result, I began to wonder if I should configure NGINX Proxy Manager or Traefik on Docker to manage certificates for these external services. Alternatively, could it be possible to leverage Kubernetes\u0026rsquo; internal resource management capabilities for external resources as well?\nThe short answer is \u0026ldquo;You bet!\u0026rdquo; Let\u0026rsquo;s explore how we can accomplish this.\nComponents #To get this up and running, you will need a Kubernetes cluster with a working Ingress Controller and Cert-Manager installed.\nA reverse proxy typically requires three manifests: Endpoint, Service, and Ingress. These manifests define the endpoints and services to be exposed by the Ingress Controller and the rules for routing traffic to them.\nAn Endpoint manifest defines the IP address and port of an external service. 
A Service manifest exposes the endpoint as a Kubernetes Service object, which can then be used by the Ingress Controller to route traffic to the endpoint.\nFinally, an Ingress manifest defines the rules for routing traffic to the Service objects. It specifies the hostname, path, and other criteria for routing traffic to the appropriate Service object.\nBelow are example manifests for each of these components. Note that these are not intended to be copy-pasted directly, but rather serve as a starting point for building your own manifests that are specific to your environment and requirements.\nEndpoint #An endpoint is the network address of a pod that provides a specific service in the cluster, like a phone number that connects you to a specific person. Endpoints are dynamically created by Kubernetes based on the state of the pods in the cluster, and they represent the availability of a particular service. When a service is created, it is associated with one or more endpoints. These endpoints are then used to direct traffic to the appropriate pod that is providing the service. In summary, an endpoint is a reference to a specific pod that is providing a service, and it\u0026rsquo;s used by Kubernetes to route traffic to the correct location. For an external service there is no pod, so we create the Endpoints object ourselves and point it at the external IP address.\napiVersion: v1\nkind: Endpoints\nmetadata:\n  name: my-external-service\nsubsets:\n  - addresses:\n      - ip: 10.1.2.3\n    ports:\n      - port: 80\nService #In Kubernetes, a service is like a virtual front door that allows communication between different parts of a software application running on a cluster. Think of it like a receptionist who directs visitors to the right person or department. The service has a name and a fixed IP address that can be used by other parts of the application to communicate with it. It also helps to ensure that communication between different parts of the application is reliable and efficient, even if the underlying physical components change over time. 
In short, a Kubernetes service is an important tool for making sure that different parts of a software application can talk to each other effectively and without interruptions.\napiVersion: v1\nkind: Service\nmetadata:\n  name: my-external-service\nspec:\n  ports:\n    - port: 80\n      targetPort: 80\nNote that this Service deliberately has no selector: if it had one, Kubernetes would manage the Endpoints object itself and overwrite the one we created by hand. The Service and Endpoints objects are matched by name instead.\nIngress #An Ingress manifest in Kubernetes is a configuration file that tells the cluster how to handle incoming internet traffic. It\u0026rsquo;s like a map that tells the cluster which web addresses and web pages to show to people who visit the website. It\u0026rsquo;s used to control how traffic flows into and out of the cluster, making it possible to manage multiple services and web pages from a single point of entry. It\u0026rsquo;s a way to make sure that the right web pages are shown to the right people, and it makes it possible to add things like security features and load balancing to the website.\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: my-ingress\n  annotations:\n    cert-manager.io/cluster-issuer: \u0026#34;letsencrypt-prod\u0026#34;\nspec:\n  tls:\n    - hosts:\n        - my.domain.com\n      secretName: my-tls-secret\n  rules:\n    - host: my.domain.com\n      http:\n        paths:\n          - path: /api\n            pathType: Prefix\n            backend:\n              service:\n                name: my-external-service\n                port:\n                  number: 80\nSummary #In Kubernetes, an Ingress Controller is like a traffic cop at the entrance of a cluster. It controls incoming traffic and directs it to the appropriate services. When a request comes in from outside the cluster, the Ingress Controller uses information from the Ingress Manifest to determine how to route the traffic. The Ingress Manifest specifies the hostnames, paths, and ports that should be exposed, as well as the routing rules for directing traffic to the appropriate services.\nTo route traffic to external services, the Ingress Controller uses Endpoints and Services. 
An Endpoint is the network address of a pod that provides a specific service in the cluster. A Service is a logical entity that groups together a set of pods and provides a stable IP address and DNS name that other pods can use to access the service.\nTo connect to an external service, you first create a Service that points to the external service. The Service has a stable IP address that other pods can use to connect to the external service. Next, you create an Endpoints object that maps the Service to the actual IP address and port of the external service. The Ingress Controller then uses this Endpoint to route traffic to the external service.\nIf you combine Cert-Manager with the Ingress Controller you can secure external traffic by automating the management and issuance of SSL certificates. The Ingress Manifest specifies the TLS configuration, Cert-Manager generates SSL certificates and stores them as Kubernetes Secrets, and the Ingress Controller uses them to terminate SSL connections and route traffic to the correct location within the Kubernetes cluster.\n","date":"5 May 2023","permalink":"https://blog.antnsn.dev/2023-p2-ingress-as-a-reverse-proxy/","section":"Posts","summary":"\u003ch2 id=\"introduction\" class=\"relative group\"\u003eIntroduction \u003cspan class=\"absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100\"\u003e\u003ca class=\"group-hover:text-primary-300 dark:group-hover:text-neutral-700\" style=\"text-decoration-line: none !important;\" href=\"#introduction\" aria-label=\"Anchor\"\u003e#\u003c/a\u003e\u003c/span\u003e\u003c/h2\u003e\u003cp\u003eWhile discussing reverse proxies with a colleague who was building out his home lab with Docker, the topic of SSL certificates and proxies came up. I mentioned that I use Kubernetes, cert-manager, and Let\u0026rsquo;s Encrypt to manage these components. 
However, this made me consider the fact that although most of my services are hosted within Kubernetes, there are still some that run on other platforms, including bare-metal.\u003c/p\u003e","title":"Simplifying Reverse Proxy Management with Kubernetes Ingress Controller and Cert-Manager"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/svc/","section":"Tags","summary":"","title":"Svc"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/terraform/","section":"Tags","summary":"","title":"Terraform"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/traefik/","section":"Tags","summary":"","title":"Traefik"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/tutorial/","section":"Categories","summary":"","title":"Tutorial"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/azure/","section":"Categories","summary":"","title":"Azure"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/blog/","section":"Tags","summary":"","title":"Blog"},{"content":"Introduction #Hugo #Hugo is a popular open-source static site generator that allows you to create fast and flexible websites. It is built with performance in mind and uses Go templates to generate static HTML files from templates and content files.\nOne of the key benefits of using Hugo is its simplicity and ease of use. It has a minimal learning curve and allows you to quickly create and publish content using simple markdown files. It also has a large number of customizable themes and options, allowing you to tailor the appearance and functionality of your website to your specific needs.\nIn addition to its speed and flexibility, Hugo is also highly extensible, with a variety of plugins and integrations available to further customize and enhance your website.\nHugo is a powerful and user-friendly tool that is well-suited for a wide range of websites, from personal blogs to corporate websites. 
I\u0026rsquo;m loving it so far for my personal blog!\nAzure static web app #Azure Static Web Apps is a fully managed service that allows you to host static web applications on the Azure platform. It is designed to make it easy to deploy and host static websites, web APIs, and serverless functions, and provides a variety of features and options to customize and enhance your web applications.\nSome of the key benefits of using Azure Static Web Apps include:\nSimplicity: Azure Static Web Apps is easy to use and requires minimal setup and maintenance. You can deploy your static web application by simply pushing your code to a connected Git repository, and Azure Static Web Apps will automatically build and deploy your application.\nScalability: Azure Static Web Apps is highly scalable and can handle a large amount of traffic without requiring any additional configuration. It also integrates with Azure\u0026rsquo;s Content Delivery Network (CDN) to further enhance performance and availability.\nSecurity: Azure Static Web Apps uses Azure\u0026rsquo;s security infrastructure to protect your web applications from threats and vulnerabilities. It also supports HTTPS out of the box and provides options for customizing your application\u0026rsquo;s security settings.\nIntegration: Azure Static Web Apps integrates with a variety of Azure services, such as Azure Functions, Azure Storage, and Azure CDN, allowing you to enhance the functionality and performance of your static web applications.\nAzure Static Web Apps is a useful and cost-effective solution for hosting static web applications on the Azure platform.\nGetting started #Installation #This guide is based on Linux, but the same commands work on Windows, Linux, and macOS. I use Arch btw :P so let\u0026rsquo;s do that.\nInstall the Hugo package: First, you\u0026rsquo;ll need to install the Hugo package using pacman (or your package manager of choice). 
Open a terminal and run the following command:\nArch Linux #\nsudo pacman -S hugo\nDebian #\nsudo apt install hugo\nFedora #\nsudo dnf install hugo\nThis will install the latest stable release of Hugo on your system.\nCreating the site #Create a new Hugo site: #Next, you\u0026rsquo;ll need to create a new Hugo site. You can do this by running the following command:\nhugo new site my-site\nThis will create a new directory called \u0026quot;my-site\u0026quot; with the basic files and directories needed for a Hugo site. Install a Hugo theme: #Hugo sites use themes to define their appearance and layout. You can browse and install themes from the Hugo Themes website. To install a theme, you can clone the theme\u0026rsquo;s repository into the themes directory of your Hugo site or add a git submodule. For example:\ncd my-site\ngit submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke\nConfigure your site: #You\u0026rsquo;ll need to configure your site by editing the config.toml file in the root directory of your Hugo site. You can set the title, baseURL, and other options for your site in this file. This will vary depending on your chosen theme; a good reference for configuring it is the theme\u0026rsquo;s GitHub site.\nYou can also customize the appearance and layout of your site by modifying the templates and static files in the themes/ananke directory. Create content: #You can create content for your Hugo site by adding Markdown files to the content directory. You can use the hugo new command to create new content, or you can create files manually.\nBuild and serve your site locally: #Once you have created some content, you can build your site by running the hugo command. This will generate the HTML and other static files for your site. You can then serve your site locally by running the hugo server command. 
This will start a local web server that you can access at http://localhost:1313.\nAs a side note, if you are running Hugo from a remote machine or a server within your local network you will need to bind the Hugo server to your local IP with the --bind \u0026lt;ip\u0026gt; flag. You will need to do the same for --baseURL. A custom port is also available through the --port \u0026lt;port\u0026gt; flag.\nhugo server --bind 192.168.1.5 --baseURL http://192.168.1.5 --port 8080\nPublishing to GitHub #Once you are happy with your code and it is ready for the next step of publishing it to the internet, you need to publish it to a private or public GitHub repository. A private repository allows you to keep your code and project information secure and confidential, while still taking advantage of the benefits of version control and collaboration provided by GitHub.\nTo publish your code to a private GitHub repository, you\u0026rsquo;ll need to do the following:\nCreate a private repository: #First, you\u0026rsquo;ll need to create a new private repository on GitHub. You can do this by logging into your GitHub account, clicking the \u0026ldquo;+\u0026rdquo; icon in the top right corner, and selecting \u0026ldquo;New repository\u0026rdquo;. Give your repository a name and select \u0026ldquo;Private\u0026rdquo; as the visibility level.\nInitialize your local repository: #Next, you\u0026rsquo;ll need to initialize a local Git repository for your code. If your code is already tracked by Git, you can skip this step. Otherwise, you can initialize a new repository by running the following commands:\ncd my-project\ngit init\ngit add .\ngit commit -m \u0026#34;Initial commit\u0026#34;\nAdd the remote repository: #You\u0026rsquo;ll need to add the remote repository as a remote for your local repository. 
You can do this by running the following command, replacing \u0026lt;repository-url\u0026gt; with the URL of your repository:\ngit remote add origin \u0026lt;repository-url\u0026gt; Push your code to the repository: #Finally, you can push your code to the repository by running git push -u origin main (adjust the branch name if yours differs; the -u flag sets the upstream on the first push). This will upload your code to the repository, making it available for the next step.\nAzure static web app #To publish a Hugo site to Azure Static Web Apps, you can follow these steps:\nCreate an Azure Static Web Apps resource: # You\u0026rsquo;ll need to create an Azure Static Web Apps resource in your Azure account. You can do this by logging into the Azure portal, clicking the \u0026lsquo;+\u0026rsquo; icon in the top left corner, searching for \u0026ldquo;Static Web Apps\u0026rdquo;, and clicking \u0026ldquo;Create\u0026rdquo; to create a new resource. Connect your repository: #You can do this by following the prompts in the Azure portal, or by using the Azure Static Web Apps CLI. You\u0026rsquo;ll need to provide the URL of your repository, as well as your authentication details.\nConfigure your build and deployment settings: #You\u0026rsquo;ll need to configure your build and deployment settings in your Azure Static Web Apps resource. This includes specifying the build command (e.g. hugo), the base directory for your site (e.g. public), and any environment variables that your site requires. You can also set up automatic deployment triggers, such as when new commits are pushed to your repository.\nBuild and deploy your site: #Once you have configured your build and deployment settings, you can build and deploy your site by committing and pushing changes to your repository. Azure Static Web Apps will automatically build and deploy your site based on your configuration.\nTest and verify your deployment: #Once your deployment is complete, you can test and verify your site by visiting the URL provided by Azure Static Web Apps. 
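When Azure connects your repository, it commits a GitHub Actions workflow to your repo that performs the actual build and deploy on each push. A trimmed sketch of what that generated file typically contains (the branch, paths, and secret name here are placeholders; Azure writes the real file for you):

```yaml
name: Azure Static Web Apps CI/CD
on:
  push:
    branches: [main]
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true   # needed when the theme lives in a git submodule
      - uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          action: upload
          app_location: /          # Hugo source
          output_location: public  # built site
```

The submodules: true checkout option matters if you added your theme as a git submodule, as in the earlier step; without it the theme is missing at build time and the site comes out empty.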
You can also use the Azure Static Web Apps CLI or the Azure portal to view the build and deployment logs and troubleshoot any issues.\nCustom domain #To add a custom domain to your new blog you will need to do a few more steps. A few of these steps are dependent on your domain registrar, but in short you will need a \u0026ldquo;CNAME\u0026rdquo; record to point to the domain provided by Azure.\nSteps # Configure your static web app to use your custom domain. In the Azure portal, go to the \u0026ldquo;Custom domains\u0026rdquo; blade for your static web app and select \u0026ldquo;Add custom domain\u0026rdquo;. Enter your domain name and select \u0026ldquo;Validate\u0026rdquo;. Azure will verify that you own the domain and provide you with DNS records to add to your domain\u0026rsquo;s DNS settings. Update your domain\u0026rsquo;s DNS settings to point to your static web app. Go to your domain name registrar\u0026rsquo;s website and navigate to the DNS settings for your domain. Add the DNS records provided by Azure to your domain\u0026rsquo;s DNS settings. This will allow traffic to your domain to be routed to your static web app. Enter the custom domain in the \u0026ldquo;Domain name\u0026rdquo; field. Choose \u0026ldquo;CNAME\u0026rdquo; as the hostname record type, copy the value from the \u0026ldquo;Value\u0026rdquo; field below and put this into your domain registrar\u0026rsquo;s DNS settings. Click \u0026ldquo;Add\u0026rdquo;. Wait for the DNS changes to propagate. It may take some time for the DNS changes to take effect. 
You can check the status of your custom domain in the Azure portal by going to the \u0026ldquo;Custom domains\u0026rdquo; blade for your static web app and selecting \u0026ldquo;Refresh status\u0026rdquo;.\nOnce the DNS changes have propagated and your custom domain has been configured correctly, traffic to your domain will be routed to your static web app.\n","date":"8 January 2023","permalink":"https://blog.antnsn.dev/2023-p1-host-blog/","section":"Posts","summary":"\u003ch1 id=\"introduction\" class=\"relative group\"\u003eIntroduction \u003cspan class=\"absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100\"\u003e\u003ca class=\"group-hover:text-primary-300 dark:group-hover:text-neutral-700\" style=\"text-decoration-line: none !important;\" href=\"#introduction\" aria-label=\"Anchor\"\u003e#\u003c/a\u003e\u003c/span\u003e\u003c/h1\u003e\u003ch2 id=\"hugo\" class=\"relative group\"\u003eHugo \u003cspan class=\"absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100\"\u003e\u003ca class=\"group-hover:text-primary-300 dark:group-hover:text-neutral-700\" style=\"text-decoration-line: none !important;\" href=\"#hugo\" aria-label=\"Anchor\"\u003e#\u003c/a\u003e\u003c/span\u003e\u003c/h2\u003e\u003cp\u003eHugo is a popular open-source static site generator that allows you to create fast and flexible websites. It is built with performance in mind and uses Go templates to generate static HTML files from templates and content files.\u003c/p\u003e\n\u003cp\u003eOne of the key benefits of using Hugo is its simplicity and ease of use. It has a minimal learning curve and allows you to quickly create and publish content using simple markdown files. 
It also has a large number of customizable themes and options, allowing you to tailor the appearance and functionality of your website to your specific needs.\u003c/p\u003e","title":"Hosting a blog with hugo on Azure - For Free!"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/hugo/","section":"Tags","summary":"","title":"Hugo"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/linux/","section":"Tags","summary":"","title":"Linux"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/microsoft/","section":"Tags","summary":"","title":"Microsoft"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/azure-devops/","section":"Tags","summary":"","title":"Azure DevOps"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/github/","section":"Categories","summary":"","title":"Github"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/github/","section":"Tags","summary":"","title":"Github"},{"content":"Introduction #As a DevOps engineer, managing code repositories and collaborating on projects is a critical part of my day-to-day job.\nBoth GitHub and Azure DevOps are popular platforms that offer a range of tools and features for managing code repositories and collaborating on projects. 
However, for me, GitHub stands out as the better choice for several reasons.\nIn this post, I\u0026rsquo;ll be discussing some of the key features and benefits of using GitHub, as well as how it compares to Azure DevOps, and why I ultimately decided to go with GitHub.\nWhether you\u0026rsquo;re a seasoned developer or just starting out, I hope this post will give you a better understanding of the options available for managing code repositories and collaborating on projects, and help you choose the right platform for your needs.\nSimilarities #Git support: #Both GitHub and Azure DevOps are based on Git, a widely used version control system that allows you to track changes to your code and collaborate with others through branching and merging. However, GitHub was specifically designed for hosting Git repositories, whereas Azure DevOps integrates with Git as one of several tools and services.\nProject management: #Both GitHub and Azure DevOps provide tools for managing your project, such as issue tracking, project boards, and milestones. However, Azure DevOps provides more comprehensive project management features, including support for agile methodologies, portfolio management, and dashboards.\nCollaboration: #Both GitHub and Azure DevOps provide tools for collaborating with others on your project, such as team chat, wikis, and code review. However, GitHub has a larger and more active community of users and developers, and it is particularly popular among open-source projects.\nIntegrations: #Both GitHub and Azure DevOps integrate with a wide range of tools and services, including IDEs, continuous integration and delivery platforms, and project management tools. However, Azure DevOps integrates with a wider range of tools and services, including Azure services and other Microsoft products.\nSo why choose Github? #There are several reasons why someone might choose GitHub over other platforms for version control and code repositories. 
For me, the main reason is simple: community and open source. GitHub has a large and active community of users and developers, particularly in the open-source space. It is a popular platform for hosting open-source projects and has a range of features and tools specifically designed for open-source development, such as licensing and contribution guidelines.\nAnother great reason for me is GitHub Actions, a feature of GitHub that allows you to automate your workflow by creating custom \u0026ldquo;actions\u0026rdquo; that can be triggered by specific events on your repository. With GitHub Actions, you can define a set of tasks that should be performed when certain events occur, such as when code is pushed to a repository or when an issue is opened. You can use GitHub Actions to automate a wide range of tasks, such as building and testing your code, deploying your application, or running automated checks.\nI know, Azure DevOps has Pipelines and they can do mostly the same thing.\nBut for me, these key points make it my favorite and the go-to solution for my projects.\nWidely used and recognized: #GitHub is one of the most widely used and recognized platforms for version control and code repositories. It is particularly popular among open-source projects and has a large and active community of users.\nIntegrations and tools: #GitHub integrates with a wide range of tools and services, including IDEs, project management tools, and continuous integration and delivery platforms. It also provides a variety of tools for managing your project, such as issue tracking, project boards, and code review.\nCollaboration and sharing: #GitHub makes it easy to collaborate with others on your project and share your code with the rest of the world. 
You can invite others to collaborate on your repository, and you can make your code public or private as needed.\nCommunity and resources: #GitHub has a large and active community of users and developers, who share resources and knowledge about using the platform. This can be a valuable resource for learning about GitHub and getting help with your projects.\nAzure DevOps Pipelines and GitHub Actions are both tools for automating your workflow and integrating tasks in your software development process. As always the best tool for you will depend on your specific needs and requirements, such as the type of project you are working on, the tools you are using, and your overall workflow and integration needs.\nIn conclusion #GitHub is a popular and widely used platform for version control and code repositories that offers a range of tools and features for managing and collaborating on software development projects. It is particularly well-suited for open-source projects and has a strong community of users and developers. While there are other platforms available, such as Azure DevOps, GitHub is often a good choice for many developers due to its popularity, integrations, and collaboration features. And for these reasons, it is my go-to platform.\nWhat\u0026rsquo;s your favorite platform, and what makes it perfect for you? 
#","date":"6 January 2023","permalink":"https://blog.antnsn.dev/2023-p05-repos/","section":"Posts","summary":"\u003ch2 id=\"introduction\" class=\"relative group\"\u003eIntroduction \u003cspan class=\"absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100\"\u003e\u003ca class=\"group-hover:text-primary-300 dark:group-hover:text-neutral-700\" style=\"text-decoration-line: none !important;\" href=\"#introduction\" aria-label=\"Anchor\"\u003e#\u003c/a\u003e\u003c/span\u003e\u003c/h2\u003e\u003cp\u003eAs a DevOps engineer, managing code repositories and collaborating on projects is a critical part of my day-to-day job.\u003c/p\u003e\n\u003cp\u003eBoth GitHub and Azure DevOps are popular platforms that offer a range of tools and features for managing code repositories and collaborating on projects. However, for me, GitHub stands out as the better choice for several reasons.\u003c/p\u003e\n\u003cp\u003eIn this post, I\u0026rsquo;ll be discussing some of the key features and benefits of using GitHub, as well as how it compares to Azure DevOps, and why I ultimately decided to go with GitHub.\u003c/p\u003e","title":"Maximizing Productivity and Collaboration with Github"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/post/","section":"Categories","summary":"","title":"Post"},{"content":"I\u0026rsquo;m Marius — engineering manager based in Fredrikstad, Norway. I build systems that are reliable and understandable, and teams that can sustain them.\nEngineering philosophy #I approach engineering pragmatically. Automation over repetition. Transparency over hidden complexity. Long-term stability over short-term hacks.\nMy background is rooted in cloud-native operations: Kubernetes at MSP scale, observability stacks, infrastructure automation. 
Over the years I gravitated toward platform engineering and architectural thinking, creating internal platforms, defining workflows, and improving engineering practices so teams can deliver faster without losing operational confidence.\nI move between deep technical work, operational reality, and leadership without losing the thread of any of them.\nObservability is not tooling #I\u0026rsquo;ve worked extensively with Grafana, Mimir, Loki, and Prometheus, designing monitoring and platform solutions that provide real operational insight rather than dashboards for their own sake.\nTo me, observability is organisational awareness. Systems should explain themselves. Failures should be visible before they become incidents. Engineers should be empowered by clarity instead of overwhelmed by noise.\nGood platforms reduce cognitive load for the humans operating them. That\u0026rsquo;s the goal.\nFrom engineer to manager #I recently moved into engineering management. What drew me to it is the intersection of technology, people, and the work of building teams that are genuinely skilled, motivated, and effective.\nBefore management I spent years as the unofficial architect: translating between technical reality and leadership, owning the problems nobody else scoped properly, and building things that actually held up in production. That background shapes how I lead. I understand the work, and I take seriously the responsibility to make conditions better for the people doing it.\nThe best technical outcomes I\u0026rsquo;ve seen come from teams that trust each other and have the space to do good work. That\u0026rsquo;s what I\u0026rsquo;m focused on building.\nOutside the terminal #I train Brazilian Jiu-Jitsu. There\u0026rsquo;s a lot about the mat that maps to engineering: steady progression, humility, learning through repetition, and the uncomfortable truth that consistency beats intensity every time.\nI\u0026rsquo;m someone who needs people around me to function well. 
Family, friends, the team. That\u0026rsquo;s not a soft skill, it\u0026rsquo;s just honest. The social side of this job isn\u0026rsquo;t overhead for me, it\u0026rsquo;s the part I actually look forward to.\nAt heart I\u0026rsquo;m still a lifelong computer geek who wants to understand how things work and make them better. The motivation is deeply human: leaving things better than I found them, technical or otherwise.\nBuild systems that people can trust, environments where engineers can thrive, and a life where professional ambition and family commitment strengthen rather than compete with each other.\nStack KubernetesGrafanaPrometheusLokiMimirTerraformAzureAWSDockerCI/CD Side project github.com/antnsn/stackd → stackd GitOps daemon for Docker Compose. Watches Git repos, pulls changes, injects secrets via Infisical, and applies your stacks automatically. ArgoCD for Docker Compose, without the YAML sprawl.\nGoDockerGitOpsOpen Source GitHub LinkedIn CV / Resume ","date":null,"permalink":"https://blog.antnsn.dev/about/","section":"DevOps Noldus","summary":"\u003cp\u003eI\u0026rsquo;m Marius — engineering manager based in Fredrikstad, Norway. I build systems that are reliable and understandable, and teams that can sustain them.\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"engineering-philosophy\" class=\"relative group\"\u003eEngineering philosophy \u003cspan class=\"absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100\"\u003e\u003ca class=\"group-hover:text-primary-300 dark:group-hover:text-neutral-700\" style=\"text-decoration-line: none !important;\" href=\"#engineering-philosophy\" aria-label=\"Anchor\"\u003e#\u003c/a\u003e\u003c/span\u003e\u003c/h2\u003e\u003cp\u003eI approach engineering pragmatically. Automation over repetition. Transparency over hidden complexity. 
Long-term stability over short-term hacks.\u003c/p\u003e\n\u003cp\u003eMy background is rooted in cloud-native operations: Kubernetes at MSP scale, observability stacks, infrastructure automation. Over the years I gravitated toward platform engineering and architectural thinking, creating internal platforms, defining workflows, and improving engineering practices so teams can deliver faster without losing operational confidence.\u003c/p\u003e","title":"About"},{"content":"Introduction #Loops are a powerful tool in any programming language, and Terraform is no exception. They allow you to repeat a set of instructions multiple times, potentially with different values each time. This can be very useful for creating multiple similar resources in Terraform, such as a set of identical EC2 instances or S3 buckets.\nTo use loops in Terraform, you can use the count argument, which allows you to specify the number of times a resource should be created. You can also use the for_each argument to iterate over a list or map of values.\nHere\u0026rsquo;s an example of using the count argument to create a set of EC2 instances:\nresource \u0026#34;aws_instance\u0026#34; \u0026#34;example\u0026#34; { count = 3 ami = \u0026#34;ami-123456\u0026#34; instance_type = \u0026#34;t2.micro\u0026#34; } In this example, three EC2 instances will be created with the specified AMI and instance type.\nUsing the for_each argument, you can iterate over a list or map of values and create a resource for each one. 
Here\u0026rsquo;s an example using a list:\nvariable \u0026#34;instance_names\u0026#34; { type = list(string) default = [ \u0026#34;web-server-1\u0026#34;, \u0026#34;web-server-2\u0026#34;, \u0026#34;web-server-3\u0026#34;, ] } resource \u0026#34;aws_instance\u0026#34; \u0026#34;example\u0026#34; { for_each = toset(var.instance_names) ami = \u0026#34;ami-123456\u0026#34; instance_type = \u0026#34;t2.micro\u0026#34; tags = { Name = each.value } } In this example, three EC2 instances will be created, and each one will be given a name based on the corresponding value in the instance_names list. Note that for_each only accepts a set or a map, so the list is wrapped in toset() first.\nUsing loops in Terraform can greatly simplify your code and make it easier to manage and maintain. It\u0026rsquo;s a technique that\u0026rsquo;s well worth learning and incorporating into your Terraform projects.\nReal-world scenario #Here is an example of a real-world scenario using the for_each argument in Terraform to create virtual machines in Azure:\nImagine you are working for a client that needs to create a set of virtual machines for a new project. The client has a list of names and sizes for the virtual machines that they want to create. You can use the for_each argument in Terraform to iterate over this list and create the virtual machines. In this scenario, let\u0026rsquo;s say our client already defined a virtual network and working subnet that we can reference called megaservers.\nVariables / locals #To define a locals file in Terraform, you can use the locals block in your configuration. 
The locals block allows you to define local variables that can be used within the same module.\nHere is an example of how you might define a locals file in Terraform:\nlocals { vm = { server1 = { location = \u0026#34;West Europe\u0026#34; admin_username = \u0026#34;secretsuperadmin\u0026#34; size = \u0026#34;Standard_D2ds_v5\u0026#34; publisher = \u0026#34;Canonical\u0026#34; offer = \u0026#34;UbuntuServer\u0026#34; sku = \u0026#34;18.04-LTS\u0026#34; version = \u0026#34;latest\u0026#34; caching = \u0026#34;ReadWrite\u0026#34; storage_account_type = \u0026#34;StandardSSD_LRS\u0026#34; os_disk_size_gb = \u0026#34;30\u0026#34; } server2 = { location = \u0026#34;West Europe\u0026#34; admin_username = \u0026#34;secretsuperadmin\u0026#34; size = \u0026#34;Standard_D2ds_v5\u0026#34; publisher = \u0026#34;Canonical\u0026#34; offer = \u0026#34;UbuntuServer\u0026#34; sku = \u0026#34;18.04-LTS\u0026#34; version = \u0026#34;latest\u0026#34; caching = \u0026#34;ReadWrite\u0026#34; storage_account_type = \u0026#34;StandardSSD_LRS\u0026#34; os_disk_size_gb = \u0026#34;30\u0026#34; } } } Creating the resource group #Before we go on and create the virtual machines our customer has listed, let\u0026rsquo;s create the resource groups, and let\u0026rsquo;s base them on the values of the locals. For this example, we are using the keys server1 and server2 from the locals to define our resource group names.\nThe each.key expression in Terraform allows you to access the keys of a map in a for_each block. 
The each.key expression is used in combination with the for_each argument to iterate over a map and create resources for each key-value pair in the map.\nresource \u0026#34;azurerm_resource_group\u0026#34; \u0026#34;rg\u0026#34; { for_each = local.vm name = \u0026#34;rg-magical-${each.key}\u0026#34; location = each.value.location } Creating the Virtual Machines #The azurerm_network_interface resource is used to create a network interface in Azure. A network interface is a logical networking component that represents a network card in Azure. It provides the ability to connect a virtual machine or other resources to a virtual network. In this example we create it the same way we did the resource groups, using the keys for naming and to point to the correct resource group.\nresource \u0026#34;azurerm_network_interface\u0026#34; \u0026#34;nic\u0026#34; { for_each = local.vm name = \u0026#34;magical-${each.key}-nic\u0026#34; location = each.value.location resource_group_name = azurerm_resource_group.rg[each.key].name ip_configuration { name = \u0026#34;internal\u0026#34; subnet_id = azurerm_subnet.megaservers.id private_ip_address_allocation = \u0026#34;Dynamic\u0026#34; } } Let\u0026rsquo;s go on to create the virtual machines. In this example we go further into our locals to define the VM; as with previous resources, we create the name using the word magical in combination with the key: magical-${each.key}.\nThe each.value expression in Terraform allows you to access the values of a map or list in a for_each block. 
The each.value expression is used in combination with the for_each argument to iterate over a map or list and create resources for each element in the map or list.\nIn the example we can see that each.value.location refers to the location value in the locals; the same goes for the other values like each.value.size, each.value.caching, etc\u0026hellip;\nresource \u0026#34;azurerm_linux_virtual_machine\u0026#34; \u0026#34;vm\u0026#34; { for_each = local.vm name = \u0026#34;magical-${each.key}\u0026#34; resource_group_name = azurerm_resource_group.rg[each.key].name location = each.value.location size = each.value.size admin_ssh_key { username = each.value.admin_username public_key = file(\u0026#34;~/.ssh/id_rsa.pub\u0026#34;) } network_interface_ids = [ azurerm_network_interface.nic[each.key].id, ] os_disk { name = \u0026#34;magical-${each.key}-osdisk\u0026#34; caching = each.value.caching storage_account_type = each.value.storage_account_type disk_size_gb = each.value.os_disk_size_gb } source_image_reference { publisher = each.value.publisher offer = each.value.offer sku = each.value.sku version = each.value.version } } Conclusion #Using loops in Terraform can be a smart and efficient way to create multiple resources in a repeatable and modular manner. Here are a few reasons why using loops in Terraform can be a timesaver:\nAvoid repetition: Instead of writing separate blocks of code to create multiple resources, loops allow you to create multiple resources using a single block of code. This can help to reduce duplication and make your code more readable and maintainable.\nSimplify configuration: Using loops allows you to define resources in a list or map format, which can be easier to understand and modify than writing out each resource individually. 
You can also use variables and expressions to customize the configuration of each resource, making it easy to adapt your code to different environments or requirements.\nStreamline resource management: Loops make it easy to manage a large number of resources by allowing you to apply changes or updates to all resources in a single operation. This can save time and effort when compared to managing resources individually.\nOverall, using loops in Terraform can help you write more efficient and modular code, which can save time and effort when managing your infrastructure.\nI hope this helps! Let me know if you have any questions.\n","date":"28 December 2022","permalink":"https://blog.antnsn.dev/2022-p1-loops/","section":"Posts","summary":"\u003ch2 id=\"introduction\" class=\"relative group\"\u003eIntroduction \u003cspan class=\"absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100\"\u003e\u003ca class=\"group-hover:text-primary-300 dark:group-hover:text-neutral-700\" style=\"text-decoration-line: none !important;\" href=\"#introduction\" aria-label=\"Anchor\"\u003e#\u003c/a\u003e\u003c/span\u003e\u003c/h2\u003e\u003cp\u003eLoops are a powerful tool in any programming language, and Terraform is no exception. They allow you to repeat a set of instructions multiple times, potentially with different values each time. This can be very useful for creating multiple similar resources in Terraform, such as a set of identical EC2 instances or S3 buckets.\u003c/p\u003e\n\u003cp\u003eTo use loops in Terraform, you can use the \u003ccode\u003ecount\u003c/code\u003e argument, which allows you to specify the number of times a resource should be created. 
You can also use the \u003ccode\u003efor_each\u003c/code\u003e argument to iterate over a list or map of values.\u003c/p\u003e","title":"Loops with Terraform"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/categories/terraform/","section":"Categories","summary":"","title":"Terraform"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/terraform-loops/","section":"Tags","summary":"","title":"Terraform Loops"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/terraform-loops-for-each/","section":"Tags","summary":"","title":"Terraform Loops for Each"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/terraform-loops-tutorial/","section":"Tags","summary":"","title":"Terraform Loops Tutorial"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/terraform-tutorial/","section":"Tags","summary":"","title":"Terraform Tutorial"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/terraform-tutorial-aws/","section":"Tags","summary":"","title":"Terraform Tutorial Aws"},{"content":"","date":null,"permalink":"https://blog.antnsn.dev/tags/terraform-tutorial-azure/","section":"Tags","summary":"","title":"Terraform Tutorial Azure"}]