Build a Golden Image Pipeline with HCP Packer

  • 36min
  • Beta
  • HCP
  • Packer
  • Terraform

A golden image is an image on top of which developers can build applications, letting them focus on the application itself instead of system dependencies and patches. A typical golden image includes common system, logging, and monitoring tools, recent security patches, and application dependencies.

Traditionally, operations and security teams had to cross-reference spreadsheets, personally inform downstream developers, and manually update build files when they released new golden images. Sophisticated organizations automated this process by building and maintaining effort-intensive continuous integration (CI) pipelines. The HCP Packer registry improves this process by tracking images' metadata and storage location, and providing the correct image to developers automatically through Packer and Terraform integrations. It also allows you to revoke images to remove them from circulation if they become stale or have security vulnerabilities.

Diagram showing a workflow with HCP Packer

After you build your image with Packer and push its metadata to HCP Packer, you can reference the image in your Terraform configuration to deploy it. HCP Packer has a Terraform Cloud run task integration, which validates that the machine images in your Terraform configuration are not revoked.

In this tutorial, you will use HCP Packer to define a golden image pipeline and build parent golden and child application images. You will then deploy the application image to AWS using Terraform.

To accomplish this, you will first deploy an EC2 instance running Loki for log aggregation and Grafana for visualization. Next, you will build a golden image whose configuration references the Loki and Grafana instance's details, and then an application image that uses the golden image as a base. Then, you will schedule a revocation and learn how image revocation prevents downstream image consumers from referencing outdated images. Finally, you will use Terraform to deploy an EC2 instance running the application image, and view the application logs in Grafana.

Prerequisites

This tutorial assumes that you are familiar with the standard Packer and HCP Packer workflows. If you are new to Packer, complete the Get Started tutorials first. If you are new to HCP Packer, complete the Get Started HCP Packer tutorials first.

This tutorial assumes that you are familiar with the Terraform and Terraform Cloud workflows. If you are new to Terraform, complete the Get Started tutorials first. If you are new to Terraform Cloud, complete the Terraform Cloud Get Started tutorials first.

To follow along with this tutorial, you will need:

  • Packer 1.7.9 installed locally.
  • Terraform 1.2 or later installed locally.
  • A Terraform Cloud account with a Team & Governance plan and workspace admin permissions.
  • An HCP account.
  • An HCP Packer registry with Plus tier.
    • Create a registry: click Packer > Create a free registry. You only need to do this once.
    • Enable Plus tier: click Manage > Edit registry and select Plus. If you have free-trial credits, HCP will apply them to enable the Plus tier.
  • An AWS account with credentials set as local environment variables.

Create new HCP Packer registry

Clone the example repository

In your terminal, clone the tutorial repository. It contains configuration for building and publishing images with Packer and deploying them to AWS with Terraform.

$ git clone https://github.com/hashicorp/learn-packer-hcp-golden-image

Navigate to the cloned repository.

$ cd learn-packer-hcp-golden-image

Architecture overview

The diagram below shows the infrastructure and services you will deploy in this tutorial. You will provision one instance that runs Loki and Grafana, and two instances for HashiCups — an example application for ordering HashiCorp-branded coffee. You will deploy the HashiCups instances across two AWS regions, us-east-2 and us-west-2. The HashiCups instances contain baseline tools, including Docker and Promtail, which they inherit from the golden image that HashiCups is based on.

Diagram showing interactions between components on the HashiCups and Loki EC2 instances

  1. HashiCups is an application consisting of an API and a database. The components run as separate Docker containers and are provisioned with Docker Compose. Docker stores the logs generated by both the API and database.

  2. Promtail is an agent that sends logs from a local log store to an instance of Loki. In this scenario, Promtail forwards the HashiCups Docker container logs to the Loki instance using a Loki Docker plugin.

  3. Loki is a log aggregation tool that provides log data for querying and runs on port 3100. Grafana visualizes the Loki logs and provides its own web user interface on port 3000.

Review configuration

The example repository contains several directories:

  • The loki directory contains a Packer template file, a Loki configuration file, and scripts that configure and enable Loki and Grafana.
  • The golden directory contains a Packer template file, Docker and Promtail configuration files, and scripts that configure and enable Docker and Promtail.
  • The hashicups directory contains a Packer template file, a Docker Compose file, and the HashiCups start script.
  • The terraform directory contains Terraform configuration files to deploy AWS EC2 instances that run the images for this scenario, and a script to query the HashiCups API.

Warning: This configuration provisions a publicly accessible Loki and Grafana instance, which is not recommended for production services.

Diagram showing golden image pipeline flow

First, you will build the Loki image and deploy it to an EC2 instance. Then, you will build the golden image, which requires the public address of the Loki instance for its configuration. Finally, you will build and provision the HashiCups image, which uses the golden image as a parent image. You will configure and use a Terraform Cloud run task to verify that the images referenced in the Terraform configurations have not been revoked.

The Loki instance simulates an existing implementation of Loki running in your organization's network. In a production scenario, you would configure a DNS entry for your Loki instance(s) rather than the EC2 instance address.
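
In a production configuration, such a DNS entry might look like the following Terraform sketch. This block is not part of the tutorial repository, and the zone and record names are assumptions for illustration.

# Hypothetical Route 53 record that gives Loki a stable DNS name, so that
# downstream images reference the name instead of an instance IP address.
resource "aws_route53_record" "loki" {
  zone_id = aws_route53_zone.internal.zone_id # assumed pre-existing zone
  name    = "loki.example.internal"
  type    = "A"
  ttl     = 300
  records = [aws_instance.loki.private_ip]
}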

Review Loki image configuration

Open loki/start-loki-grafana.sh and note that both Loki and Grafana run on the same instance — Loki as a system process and Grafana as a Docker container.

loki/start-loki-grafana.sh
#!/bin/bash
# Start Loki in background
cd /home/ubuntu
nohup ./loki-linux-amd64 -config.file=loki-local-config.yaml &

# Start Grafana
docker run -d -p 3000:3000 grafana/grafana

Next, open loki/loki.pkr.hcl. Packer uses this file to build an Amazon Machine Image (AMI) that runs Loki and Grafana. This tutorial will refer to this image as "Loki image" even though it contains both Loki and Grafana.

The amazon-ami.ubuntu-focal data block retrieves an Ubuntu 20.04 image from the us-east-2 region to use as a base. The amazon-ebs.base source block then sets source_ami to the image ID returned by the amazon-ami.ubuntu-focal data block.

loki/loki.pkr.hcl
data "amazon-ami" "ubuntu-focal" {
  region = "us-east-2"
  filters = {
    name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
    root-device-type    = "ebs"
    virtualization-type = "hvm"
  }
  most_recent = true
  owners      = ["099720109477"]
}

source "amazon-ebs" "base" {
  ami_name      = "${var.ami_prefix}-${local.timestamp}"
  instance_type = "t2.micro"
  region        = "us-east-2"
  source_ami    = data.amazon-ami.ubuntu-focal.id
  ssh_username = "ubuntu"
}

The build block uses the image retrieved by the amazon-ebs.base source block, and adds an SSH public key, the Loki configuration file, and the startup script to the image.

loki/loki.pkr.hcl
build {
  name = "learn-packer-loki-server"
  sources = [
    "source.amazon-ebs.base"
  ]

  # Add SSH public key
  provisioner "file" {
    source      = "../learn-packer.pub"
    destination = "/tmp/learn-packer.pub"
  }

  # Add Loki configuration file
  provisioner "file" {
    source      = "loki-local-config.yaml"
    destination = "loki-local-config.yaml"
  }

  # Add startup script that will run loki and grafana on instance boot
  provisioner "file" {
    source      = "start-loki-grafana.sh"
    destination = "/tmp/start-loki-grafana.sh"
  }
# …
}

Then, Packer executes the loki-setup.sh script, which sets up sudo and installs dependencies, the SSH public key, and Loki.

loki/loki.pkr.hcl
build {
  # ...
  # Execute setup script
  provisioner "shell" {
    script = "loki-setup.sh"
    # Run script after cloud-init finishes, otherwise you run into race conditions
    execute_command = "/usr/bin/cloud-init status --wait && sudo -E -S sh '{{ .Path }}'"
  }

  # Move temp files to actual destination
  # Must use this method because their destinations are protected
  provisioner "shell" {
    inline = [
      "sudo cp /tmp/start-loki-grafana.sh /var/lib/cloud/scripts/per-boot/start-loki-grafana.sh",
      "rm /tmp/start-loki-grafana.sh",
    ]
  }
  # …
}

Finally, Packer sends the image metadata to the HCP Packer registry so downstream Terraform deployments can use it.

loki/loki.pkr.hcl
build {
  # ...
  hcp_packer_registry {
    bucket_name = "learn-packer-hcp-loki-image"
    description = <<EOT
This is an image for loki built on top of ubuntu 20.04.
    EOT

    bucket_labels = {
      "hashicorp-learn"       = "learn-packer-hcp-loki-image",
      "ubuntu-version"        = "20.04"
    }
  }
}

Review golden image configuration

A golden image typically includes baseline tools, services, and configurations. The golden image for this tutorial contains Docker and Docker Compose for running applications, promtail for log export, grafana/loki-docker-driver:latest for collecting Docker logs, and auditd for securing Docker.

Open golden/golden.pkr.hcl. This configuration defines two amazon-ebs source blocks, each of which references a corresponding amazon-ami data block. There is one pair for each AWS region where you will publish your AMI. AMIs are region-specific, so you must build a separate AMI for each region.

golden/golden.pkr.hcl

data "amazon-ami" "ubuntu-focal-east" {
  region = "us-east-2"
  filters = {
    name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
    root-device-type    = "ebs"
    virtualization-type = "hvm"
  }
  most_recent = true
  owners      = ["099720109477"]
}

source "amazon-ebs" "base_east" {
  ami_name      = "${var.ami_prefix}-${local.timestamp}"
  instance_type = "t2.micro"
  region        = "us-east-2"
  source_ami    = data.amazon-ami.ubuntu-focal-east.id
  ssh_username  = "ubuntu"
  tags = {
    Name        = "learn-hcp-packer-base-east"
    environment = "production"
  }
  snapshot_tags = {
    environment = "production"
  }
}

data "amazon-ami" "ubuntu-focal-west" {
  region = "us-west-2"
  filters = {
    name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
    root-device-type    = "ebs"
    virtualization-type = "hvm"
  }
  most_recent = true
  owners      = ["099720109477"]
}

source "amazon-ebs" "base_west" {
  ami_name      = "${var.ami_prefix}-${local.timestamp}"
  instance_type = "t2.micro"
  region        = "us-west-2"
  source_ami    = data.amazon-ami.ubuntu-focal-west.id
  ssh_username  = "ubuntu"
  tags = {
    Name        = "learn-hcp-packer-base-west"
    environment = "production"
  }
  snapshot_tags = {
    environment = "production"
  }
}

The build block uses the images retrieved by the amazon-ebs.base_east and amazon-ebs.base_west sources. It adds an SSH public key, runs the setup.sh script to install dependencies, and then adds the Docker audit rules, the Docker daemon configuration file, the Promtail configuration file, and the run-promtail.sh startup script.

The two values in the sources attribute let Packer build these two images in parallel, reducing the build time. Refer to the AWS Get Started Tutorial for more details about parallel builds.

golden/golden.pkr.hcl
build {
  name = "learn-packer-golden"
  sources = [
    "source.amazon-ebs.base_east",
    "source.amazon-ebs.base_west"
  ]

  # Add SSH public key
  provisioner "file" {
    source      = "../learn-packer.pub"
    destination = "/tmp/learn-packer.pub"
  }

  # Execute setup script
  provisioner "shell" {
    script = "setup.sh"
    # Run script after cloud-init finishes, otherwise you run into race conditions
    execute_command = "/usr/bin/cloud-init status --wait && sudo -E -S sh '{{ .Path }}'"
  }

  # Add audit rules to temp location
  provisioner "file" {
    source      = "audit.rules"
    destination = "/tmp/audit.rules"
  }

  # Update Docker daemon with Loki logs
  provisioner "file" {
    source      = "docker-daemon.json"
    destination = "/tmp/daemon.json"
  }

  # Add promtail configuration file
  provisioner "file" {
    source      = "promtail.yaml"
    destination = "/tmp/promtail.yaml"
  }

  # Add startup script that will run promtail on instance boot
  provisioner "file" {
    source      = "run-promtail.sh"
    destination = "/tmp/run-promtail.sh"
  }
# …
}

Next, Packer moves the configuration files to their final destinations and runs the setup-promtail.sh script to configure Promtail and its Docker plugin.

golden/golden.pkr.hcl
build {
# …
  # Move temp files to actual destination
  # Must use this method because their destinations are protected
  provisioner "shell" {
    inline = [
      "sudo cp /tmp/audit.rules /etc/audit/rules.d/audit.rules",
      "sudo mkdir /opt/promtail/",
      "sudo cp /tmp/promtail.yaml /opt/promtail/promtail.yaml",
      "sudo cp /tmp/run-promtail.sh /var/lib/cloud/scripts/per-boot/run-promtail.sh",
      "sudo cp /tmp/daemon.json /etc/docker/daemon.json",
    ]
  }

  # Execute setup script
  provisioner "shell" {
    script = "setup-promtail.sh"
  }
# ...
}

Finally, Packer sends the image metadata to the HCP Packer registry so that downstream Packer builds and Terraform deployments can reference it.

golden/golden.pkr.hcl
build {
  # ...
  hcp_packer_registry {
    bucket_name = "learn-packer-hcp-golden-base-image"
    description = <<EOT
This is a golden image built on top of ubuntu 20.04.
    EOT

    bucket_labels = {
      "hashicorp-learn" = "learn-packer-hcp-golden-image",
      "ubuntu-version"  = "20.04"
    }
  }
}

Review HashiCups image configuration

Open hashicups/hashicups.pkr.hcl.

The hcp-packer-iteration data source retrieves information about an iteration from the HCP Packer registry based on the bucket_name and channel. The value of bucket_name matches the one defined in the hcp_packer_registry block of the golden image Packer template (golden/golden.pkr.hcl).

The hcp-packer-image data source uses the iteration details to retrieve the image for the specified cloud_provider and region. This data source is necessary because an iteration can include images from different cloud providers and regions.

The two hcp-packer-image data sources use the same iteration_id but reference different images based on the region value.

hashicups/hashicups.pkr.hcl
data "hcp-packer-iteration" "golden" {
  bucket_name = "learn-packer-hcp-golden-base-image"
  channel = "production"
}

data "hcp-packer-image" "golden_base_east" {
  bucket_name = "learn-packer-hcp-golden-base-image"
  iteration_id = data.hcp-packer-iteration.golden.id
  cloud_provider = "aws"
  region = "us-east-2"
}

data "hcp-packer-image" "golden_base_west" {
  bucket_name = "learn-packer-hcp-golden-base-image"
  iteration_id = data.hcp-packer-iteration.golden.id
  cloud_provider = "aws"
  region = "us-west-2"
}

The source_ami references the hcp-packer-image data source, using the AMI ID stored in the HCP Packer registry.

hashicups/hashicups.pkr.hcl
source "amazon-ebs" "hashicups_east" {
  ami_name      = "${var.ami_prefix}-${local.timestamp}"
  instance_type = "t2.micro"
  region        = "us-east-2"
  source_ami    = data.hcp-packer-image.golden_base_east.id
  ssh_username = "ubuntu"
  tags = {
    Name          = "learn-hcp-packer-hashicups-east"
    environment   = "production"
  }
  snapshot_tags = {
    environment   = "production"
  }
}

source "amazon-ebs" "hashicups_west" {
  ami_name      = "${var.ami_prefix}-${local.timestamp}"
  instance_type = "t2.micro"
  region        = "us-west-2"
  source_ami    = data.hcp-packer-image.golden_base_west.id
  ssh_username = "ubuntu"
  tags = {
    Name          = "learn-hcp-packer-hashicups-west"
    environment   = "production"
  }
  snapshot_tags = {
    environment   = "production"
  }
}

The build block uses the golden images defined in the amazon-ebs.hashicups_east and amazon-ebs.hashicups_west sources and adds an SSH public key, the conf.json file for application configuration, the Docker Compose file to create the HashiCups containers, and the HashiCups start script. Packer then moves the start script to the correct directory.

Like the golden images, Packer builds these images in parallel.

hashicups/hashicups.pkr.hcl
build {
  name = "learn-packer-hashicups"
  sources = [
    "source.amazon-ebs.hashicups_east",
    "source.amazon-ebs.hashicups_west"
  ]


  # Add SSH public key
  provisioner "file" {
    source      = "../learn-packer.pub"
    destination = "/tmp/learn-packer.pub"
  }

  # Add HashiCups configuration file
  provisioner "file" {
    source      = "conf.json"
    destination = "conf.json"
  }

  # Add Docker Compose file
  provisioner "file" {
    source      = "docker-compose.yml"
    destination = "docker-compose.yml"
  }

  # Add startup script that will run hashicups on instance boot
  provisioner "file" {
    source      = "start-hashicups.sh"
    destination = "/tmp/start-hashicups.sh"
  }

  # Move temp files to actual destination
  # Must use this method because their destinations are protected
  provisioner "shell" {
    inline = [
      "sudo cp /tmp/start-hashicups.sh /var/lib/cloud/scripts/per-boot/start-hashicups.sh",
    ]
  }
# …
}

Finally, Packer sends the image metadata to the HCP Packer registry so downstream Terraform deployments can reference it.

hashicups/hashicups.pkr.hcl
build {
  # ...
  hcp_packer_registry {
    bucket_name = "learn-packer-hcp-hashicups-image"
    description = <<EOT
This is an image for HashiCups built on top of a golden parent image.
    EOT

    bucket_labels = {
      "hashicorp-learn"       = "learn-packer-hcp-hashicups-image",
    }
  }
}

Review Terraform configuration

Open terraform/main.tf. This Terraform configuration defines EC2 instances that run the Loki and HashiCups images.

Terraform retrieves image source information from the HCP Packer registry in a similar way to Packer. The hcp_packer_iteration data source gets the latest iteration from the provided bucket and channel.

The hcp_packer_image data sources then use the iteration ID to retrieve and store the AMI IDs of the images in the regions specified. Notice that the ami value of the aws_instance resource references the hcp_packer_image data source's AMI ID.

terraform/main.tf
data "hcp_packer_iteration" "loki" {
  bucket_name = var.hcp_bucket_loki
  channel     = var.hcp_channel
}

data "hcp_packer_image" "loki" {
  bucket_name    = var.hcp_bucket_loki
  cloud_provider = "aws"
  iteration_id   = data.hcp_packer_iteration.loki.ulid
  region         = "us-east-2"
}

resource "aws_instance" "loki" {
  ami           = data.hcp_packer_image.loki.cloud_image_id
  instance_type = "t2.micro"
  # ...
}

The remaining Terraform configuration files define input variables, output values, and network infrastructure that the Loki and HashiCups instances depend on, including a VPC, internet gateway, subnet, route table, and security groups. This Terraform configuration deploys these resources to both the us-east-2 and us-west-2 regions.
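
For reference, deploying to two regions from one configuration implies an aliased AWS provider setup along these lines. This is a sketch of what the repository's provider configuration might look like, not an excerpt from it.

# Default provider: resources without an explicit provider argument
# deploy to us-east-2.
provider "aws" {
  region = "us-east-2"
}

# Aliased provider for us-west-2 resources, referenced with
# provider = aws.west.
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}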

Prepare your environment

The configuration scripts included in the AMIs rely on a user named terraform. Create a local SSH key to pair with the user so that you can securely connect to your instances.

Generate a new SSH key named learn-packer. The -f flag writes the key pair to the current directory as two files, learn-packer and learn-packer.pub. Change the placeholder email address to your email address.

$ ssh-keygen -t rsa -C "your_email@example.com" -f ./learn-packer

When prompted, press enter to leave the passphrase blank on this key.

Set your Terraform Cloud organization

Set the TF_CLOUD_ORGANIZATION environment variable to your Terraform Cloud organization name.

$ export TF_CLOUD_ORGANIZATION=

Log in to Terraform Cloud

In this tutorial, you will use the Terraform CLI to create the Terraform Cloud workspace and trigger remote plan and apply runs.

Log in to your Terraform Cloud account in your terminal.

$ terraform login
Terraform will request an API token for app.terraform.io using your browser.

If login is successful, Terraform will store the token in plain text in
the following file for use by subsequent commands:
    /Users/<USER>/.terraform.d/credentials.tfrc.json

Do you want to proceed?
  Only 'yes' will be accepted to confirm.

  Enter a value:

Confirm with a yes and follow the workflow in the browser window that automatically opens. Paste the generated API token into your terminal when prompted. Review the Authenticate the CLI with Terraform Cloud tutorial for more details about logging in.

Create HCP service principal

A service principal allows Packer and Terraform to interact with HCP Packer to push and reference image metadata.

Log in to HashiCorp Cloud Platform, choose your organization, and click the Access control (IAM) link from the left navigation.

Access Control link on HashiCorp Cloud Platform dashboard

Click on the Service principals link from the left navigation, then Create service principal on the top right of the page.

Create service principal button on Service principals page

Name the service principal learn-hcp-packer, assign the "Contributor" role, then click Save.

Create service principal form with learn-hcp-packer in Name field and contributor selected in Role field

Click Create service principal key at the bottom of the page.

Service principal page for learn-hcp-packer with Create service principal key link highlighted

Record the Client ID and Client secret — HCP only displays these values upon creation.

Create service principal key dialog with Client ID and Client secret

In your terminal, set an environment variable for your client ID.

$ export HCP_CLIENT_ID=

Then, set an environment variable for your client secret.

$ export HCP_CLIENT_SECRET=

Later in this tutorial, you will also create Terraform Cloud environment variables for these values.

Retrieve HCP Packer run task information

On the HCP Packer page, click Integrate with Terraform Cloud.

View and retrieve endpoint URL and HMAC key

This displays the information you need to configure your Terraform Cloud run task.

  1. The Endpoint URL is a unique HCP Packer URL, specific to your HCP organization and HCP Packer registry. The Terraform Cloud run task will send a payload to this URL for image validation.

  2. The HMAC Key is a secret key that lets HCP Packer verify the run task request.

    Warning: Do not share these values. If your HMAC key is compromised, regenerate it and update your Terraform Cloud run task to use the new value.

Leave this tab open to reference the displayed values for the next step.

Set up run task in Terraform Cloud

In Terraform Cloud, go to Settings then click Run tasks on the left sidebar.

Click Create run task.

Terraform Cloud run task page

On the Create a Run Task page:

  1. Verify Enabled is checked.

  2. Set Name to HCP-Packer.

  3. Set Endpoint URL to the endpoint URL you retrieved in the previous step.

  4. Set HMAC key to the HMAC key you retrieved in the previous step.

    Note: Although labeled as optional in the UI, you must enter the HMAC key provided by HCP Packer. The HCP Packer integration requires an HMAC key to authenticate requests.

Click Create run task.

Create Terraform Cloud run task with fields

The Run Tasks page now shows the HCP-Packer run task.

Run task page with HCP Packer run task

Build and deploy the Loki image

Use Packer to build the Loki image. Once you create the image and deploy it to an EC2 instance, you will add the instance IP address to the golden image Packer template.

Navigate to the loki directory.

$ cd loki

Define an HCP_PACKER_BUILD_FINGERPRINT environment variable.

$ export HCP_PACKER_BUILD_FINGERPRINT="$(date +%s)-$(git hash-object loki.pkr.hcl | cut -c -25)"

The fingerprint is a unique identifier for the build iteration in the HCP Packer registry and defaults to the Git HEAD SHA. In this tutorial, you override the build fingerprint with a timestamp and a 25-character hash of the Packer template to ensure that it is unique.

If the build fingerprint matches an existing iteration marked Complete in the HCP Packer registry, Packer will exit without running the builds. Refer to the Packer documentation for more details about the build fingerprint.

Initialize the template file for the Loki image.

$ packer init .

Build the Loki image.

$ packer build .
learn-packer-loki-server.amazon-ebs.base: output will be in this color.

==> learn-packer-loki-server.amazon-ebs.base: Publishing build details for amazon-ebs.base to the HCP Packer registry
==> learn-packer-loki-server.amazon-ebs.base: Prevalidating any provided VPC information
==> learn-packer-loki-server.amazon-ebs.base: Prevalidating AMI Name: learn-packer-hcp-loki-server-20210914145915
    learn-packer-loki-server.amazon-ebs.base: Found Image ID: ami-0a5a9780e8617afe7
## ...
==> learn-packer-loki-server.amazon-ebs.base: Running post-processor:
Build 'learn-packer-loki-server.amazon-ebs.base' finished after 5 minutes 19 seconds.

==> Wait completed after 5 minutes 19 seconds

==> Builds finished. The artifacts of successful builds are:
--> learn-packer-loki-server.amazon-ebs.base: AMIs were created:
us-east-2: ami-0d9356dfdbea0d50b

--> learn-packer-loki-server.amazon-ebs.base: Published metadata to HCP Packer registry packer/learn-packer-hcp-loki-image/iterations/01FFJD3GDKRBW4CZ314EZ467VY

Packer publishes the build's metadata to the HCP Packer registry in the final build step.

Inspect Packer Build on HCP

Visit HCP and click on Packer in the left navigation menu.

This page displays a list of buckets and their latest associated iterations. Click on the Loki bucket, which is named learn-packer-hcp-loki-image.

HashiCorp Cloud Platform Packer artifact registry page displaying buckets

Here, you can find information published to the registry from the Loki Packer build, including the description and labels defined in the hcp_packer_registry block of the loki/loki.pkr.hcl template. The latest image iteration is on the right.

Click on Iterations in the left navigation.

This page displays each build iteration published to the bucket. Click on the iteration version at the top of the list.

Iterations page for bucket named learn-packer-hcp-loki-image

The Builds section lists details about the images published in this iteration. The amazon-ebs.base image matches the image defined in the source block in the Loki Packer template. Click on the us-east-2 link to find information about the image published to the us-east-2 region, including the AMI ID.

Iteration details page showing published image information

Create channel for Loki image

HCP Packer registry channels let you reference a specific build iteration in Packer or Terraform. This reduces errors from hardcoding image IDs and allows both Packer and Terraform to automatically retrieve the most recent image.

Click Channels in the left navigation.

Create a new channel for the Loki bucket by clicking on New Channel.

Enter production for the Channel name, select the v1 iteration from the Assign to an image iteration dropdown, and click the Create channel button.

Initialize the Loki instance with Terraform

Now use Terraform to deploy the Loki image to an AWS instance. First, change into the terraform directory.

$ cd ../terraform

Initialize your Terraform configuration.

$ terraform init
Initializing Terraform Cloud...

Initializing provider plugins...
- Reusing previous version of hashicorp/hcp from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/hcp v0.17.0
- Using previously-installed hashicorp/aws v3.63.0

Terraform Cloud has been successfully initialized!

You may now begin working with Terraform Cloud. Try running "terraform plan" to
see any changes that are required for your infrastructure.

If you ever set or change modules or Terraform Settings, run "terraform init"
again to reinitialize your working directory.

You have initialized your Terraform configuration and created your learn-hcp-packer-golden-image workspace. You will now associate the run task you created earlier with this workspace to verify that images referenced in runs have not been revoked.

Add credentials to workspace

In Terraform Cloud, open the learn-hcp-packer-golden-image workspace.

Go to the Variables page and create the following variables with your specific values.

Variable Name         | Value                      | Category    | Sensitive
----------------------|----------------------------|-------------|----------
AWS_ACCESS_KEY_ID     | Your AWS access key ID     | environment |
AWS_SECRET_ACCESS_KEY | Your AWS secret access key | environment | yes
HCP_CLIENT_ID         | Your HCP client ID         | environment |
HCP_CLIENT_SECRET     | Your HCP client secret     | environment | yes

Set Terraform Cloud variables for AWS and HCP credentials

Note: Set a variable for your AWS_SESSION_TOKEN if your organization requires it.

Enable run tasks in workspace

Click on the workspace Settings, then Run Tasks.

Under Available Run Tasks, click on HCP-Packer.

Run task page shows HCP-Packer under "Available Run Tasks"

Select the Mandatory enforcement level, then click Create.

The Run Task page now displays the run task for HCP Packer. This run task scans your Terraform configuration for resources that use hard-coded machine image IDs and checks if the image is tracked by HCP Packer. If the image is associated with an image iteration, the run task will warn users if it is a revoked iteration. It will also prompt users to use the HCP Packer data sources instead of hard-coded image IDs to better track and manage machine images.
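
For illustration, the run task distinguishes between the two patterns below. These resource names are hypothetical and not part of the tutorial configuration.

# Hard-coded AMI ID: the run task checks whether HCP Packer tracks this
# image and warns if its iteration has been revoked.
resource "aws_instance" "hardcoded_example" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"
}

# Registry-backed reference: Terraform resolves the AMI ID through an
# HCP Packer channel, so revocations surface during the run.
resource "aws_instance" "tracked_example" {
  ami           = data.hcp_packer_image.loki.cloud_image_id
  instance_type = "t2.micro"
}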

Deploy the Loki instance with Terraform

Apply your configuration. Respond yes to the prompt to confirm the operation.

$ terraform apply
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.loki will be created
  + resource "aws_instance" "loki" {
      + ami                                  = "ami-03d45fc3ac1622776"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = true
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
…
Plan: 21 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + loki_ip = (known after apply)

Post-plan Tasks:

All tasks completed! 1 passed, 0 failed           (6s elapsed)

│ HCP-Packer ⸺   Passed
│ Data source image validation results: 1 resource scanned. All resources are compliant.
│
│
│ Overall Result: Passed

------------------------------------------------------------------------


Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

## ...

Apply complete! Resources: 21 added, 0 changed, 0 destroyed.

Outputs:

loki_ip = "18.117.188.90"

Once Terraform builds the Loki instance, it prints the loki_ip output value, the Loki instance's public IP address. You will reference this IP address in your parent image configuration to direct log forwarding to the Loki instance.
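
The loki_ip output is defined in the repository's terraform/outputs.tf. A minimal sketch of such an output, assuming the aws_instance.loki resource shown earlier, looks like this:

output "loki_ip" {
  # Public IP of the instance running Loki and Grafana.
  value       = aws_instance.loki.public_ip
  description = "Public IP address of the Loki instance."
}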

Verify image validation

In Terraform Cloud, open the latest run and expand the Tasks passed box.

View passing run task

The run task passed, which means that HCP Packer is tracking the Loki image you referenced in the Terraform configuration, and that the image is not revoked.

Build golden image

To forward the Docker container logs to the Loki instance, you need to update two files with the Loki instance's IP address.

First, verify that you are in the terraform directory.

In golden/docker-daemon.json, replace LOKI_URL with your Loki public IP address. On macOS, use:

$ sed -i "" "s/LOKI_URL/$(terraform output -raw loki_ip)/g" ../golden/docker-daemon.json

On Linux, omit the empty string after the -i flag:

$ sed -i "s/LOKI_URL/$(terraform output -raw loki_ip)/g" ../golden/docker-daemon.json

In golden/promtail.yaml, replace LOKI_URL with your Loki public IP address. On macOS, use:

$ sed -i "" "s/LOKI_URL/$(terraform output -raw loki_ip)/g" ../golden/promtail.yaml

On Linux:

$ sed -i "s/LOKI_URL/$(terraform output -raw loki_ip)/g" ../golden/promtail.yaml

Change to the golden directory.

$ cd ../golden

Create a new build fingerprint environment variable to use for the golden image.

$ export HCP_PACKER_BUILD_FINGERPRINT="$(date +%s)-$(git hash-object golden.pkr.hcl | cut -c -25)"

Initialize the Packer build.

$ packer init .

Build the golden image with Packer and the golden/golden.pkr.hcl template.

$ packer build .
learn-packer-golden.amazon-ebs.base_east: output will be in this color.
learn-packer-golden.amazon-ebs.base_west: output will be in this color.

==> learn-packer-golden.amazon-ebs.base_east: Publishing build details for amazon-ebs.base_east to the HCP Packer registry
==> learn-packer-golden.amazon-ebs.base_west: Publishing build details for amazon-ebs.base_west to the HCP Packer registry
==> learn-packer-golden.amazon-ebs.base_east: Prevalidating any provided VPC information
==> learn-packer-golden.amazon-ebs.base_east: Prevalidating AMI Name: learn-packer-hcp-golden-image-20210923193639
    learn-packer-golden.amazon-ebs.base_east: Found Image ID: ami-0a5a9780e8617afe7
## …
==> learn-packer-golden.amazon-ebs.base_west: Running post-processor:
Build 'learn-packer-golden.amazon-ebs.base_west' finished after 7 minutes 25 seconds.

==> Wait completed after 7 minutes 25 seconds

==> Builds finished. The artifacts of successful builds are:
--> learn-packer-golden.amazon-ebs.base_east: AMIs were created:
us-east-2: ami-0d9ceb03858f6bbbc

--> learn-packer-golden.amazon-ebs.base_east: Published metadata to HCP Packer registry packer/learn-packer-hcp-golden-base-image/iterations/01FGA2HXKNBQ1V5QFT38A7FH7N
--> learn-packer-golden.amazon-ebs.base_west: AMIs were created:
us-west-2: ami-0a8ec1640700b8ae9

--> learn-packer-golden.amazon-ebs.base_west: Published metadata to HCP Packer registry packer/learn-packer-hcp-golden-base-image/iterations/01FGA2HXKNBQ1V5QFT38A7FH7N

Create channel for golden image

In HCP Packer, navigate to the learn-packer-hcp-golden-base-image bucket page, create a new channel named production, and select the latest iteration.

Creating a channel for the learn-packer-hcp-golden-base-image

Build and deploy HashiCups image

Since the golden image is already configured to send container logs to Loki, and the HashiCups image is built on top of the golden one, you do not need to modify the HashiCups image configuration.

Use Packer to build the HashiCups image. Change to the hashicups directory.

$ cd ../hashicups

Define a new build fingerprint environment variable.

$ export HCP_PACKER_BUILD_FINGERPRINT="$(date +%s)-$(git hash-object hashicups.pkr.hcl | cut -c -25)"

Initialize the Packer build.

$ packer init .

Run the Packer build.

$ packer build .
learn-packer-hashicups.amazon-ebs.hashicups_east: output will be in this color.
learn-packer-hashicups.amazon-ebs.hashicups_west: output will be in this color.

==> learn-packer-hashicups.amazon-ebs.hashicups_west: Publishing build details for amazon-ebs.hashicups_west to the HCP Packer registry
==> learn-packer-hashicups.amazon-ebs.hashicups_east: Publishing build details for amazon-ebs.hashicups_east to the HCP Packer registry
==> learn-packer-hashicups.amazon-ebs.hashicups_east: Prevalidating any provided VPC information
==> learn-packer-hashicups.amazon-ebs.hashicups_east: Prevalidating AMI Name: learn-packer-hcp-hashicups-20210923192120
==> learn-packer-hashicups.amazon-ebs.hashicups_west: Prevalidating any provided VPC information
==> learn-packer-hashicups.amazon-ebs.hashicups_west: Prevalidating AMI Name: learn-packer-hcp-hashicups-20210923192120
    learn-packer-hashicups.amazon-ebs.hashicups_east: Found Image ID: ami-07ede610b9d9d4067
## …
==> learn-packer-hashicups.amazon-ebs.hashicups_west: Running post-processor:
Build 'learn-packer-hashicups.amazon-ebs.hashicups_west' finished after 3 minutes 57 seconds.

==> Wait completed after 3 minutes 57 seconds

==> Builds finished. The artifacts of successful builds are:
--> learn-packer-hashicups.amazon-ebs.hashicups_east: AMIs were created:
us-east-2: ami-000da7f760363d944

--> learn-packer-hashicups.amazon-ebs.hashicups_east: Published metadata to HCP Packer registry packer/learn-packer-hcp-hashicups-image/iterations/01FGA0ZB0M6SRDS36FFR9DE9Z7
--> learn-packer-hashicups.amazon-ebs.hashicups_west: AMIs were created:
us-west-2: ami-00a3b74987d9a4b4c

--> learn-packer-hashicups.amazon-ebs.hashicups_west: Published metadata to HCP Packer registry packer/learn-packer-hcp-hashicups-image/iterations/01FGA0ZB0M6SRDS36FFR9DE9Z7

Create channel for HashiCups image and schedule revocation

In HCP Packer, navigate to the learn-packer-hcp-hashicups-image bucket page.

The Ancestry table shows that this image is up to date with its parent, the learn-packer-hcp-golden-base-image image.

Ancestry information for the learn-packer-hcp-hashicups-image

Now, create a new channel named production, and select the latest iteration.

Creating a channel for the learn-packer-hcp-hashicups-image

Test HCP image validation

If an image becomes outdated or a security risk, you can revoke it to prevent consumers from accessing its metadata and using it to build artifacts. Schedule a revocation for the current iteration.

  1. Go to the Iterations page
  2. Click ...
  3. Click Revoke iteration
  4. Select Revoke at a future date
  5. Enter a time 1 minute from the current time. Note that the time is in UTC. For example, if it is currently 10:00, enter 10:01
  6. Enter Assign image channel to revoked iteration for the revocation reason
  7. Click Revoke Iteration to revoke the iteration

You are setting a short revocation window so that your image channel references a revoked image, letting you test the validation workflow. This is for the educational purposes of this tutorial only.

Schedule a revocation for the first iteration one minute from current time

Next, attempt to deploy the revoked HashiCups image with Terraform. Change to the terraform directory.

$ cd ../terraform

Add the following configuration to the end of terraform/main.tf. This configuration defines EC2 instances in the us-east-2 and us-west-2 regions.

terraform/main.tf
data "hcp_packer_iteration" "hashicups" {
  bucket_name = var.hcp_bucket_hashicups
  channel     = var.hcp_channel
}

data "hcp_packer_image" "hashicups_west" {
  bucket_name    = data.hcp_packer_iteration.hashicups.bucket_name
  iteration_id   = data.hcp_packer_iteration.hashicups.ulid
  cloud_provider = "aws"
  region         = "us-west-2"
}

data "hcp_packer_image" "hashicups_east" {
  bucket_name    = data.hcp_packer_iteration.hashicups.bucket_name
  iteration_id   = data.hcp_packer_iteration.hashicups.ulid
  cloud_provider = "aws"
  region         = "us-east-2"
}

resource "aws_instance" "hashicups_east" {
  ami           = data.hcp_packer_image.hashicups_east.cloud_image_id
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.subnet_public_east.id
  vpc_security_group_ids = [
    aws_security_group.ssh_east.id,
    aws_security_group.allow_egress_east.id,
    aws_security_group.promtail_east.id,
    aws_security_group.hashicups_east.id,
  ]
  associate_public_ip_address = true

  tags = {
    Name = "Learn-Packer-HashiCups"
  }

  depends_on = [
    aws_instance.loki
  ]
}

resource "aws_instance" "hashicups_west" {
  provider      = aws.west
  ami           = data.hcp_packer_image.hashicups_west.cloud_image_id
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.subnet_public_west.id
  vpc_security_group_ids = [
    aws_security_group.ssh_west.id,
    aws_security_group.allow_egress_west.id,
    aws_security_group.promtail_west.id,
    aws_security_group.hashicups_west.id,
  ]
  associate_public_ip_address = true

  tags = {
    Name = "Learn-Packer-HashiCups"
  }

  depends_on = [
    aws_instance.loki
  ]
}

Save your changes.

The ami arguments reference the HCP Packer data sources instead of hard-coded AMI IDs.

Add the following configuration to the bottom of terraform/outputs.tf to display the IP addresses of the provisioned HashiCups instances.

terraform/outputs.tf
output "hashicups_east_ip" {
  value = aws_instance.hashicups_east.public_ip
  description = "Public IP address for the HashiCups instance in us-east-2."
}

output "hashicups_west_ip" {
  value = aws_instance.hashicups_west.public_ip
  description = "Public IP address for the HashiCups instance in us-west-2."
}

Save your changes.

In your terminal, apply your configuration. After Terraform creates the plan, the run will return an error because the run task failed.

In Terraform Cloud, open the latest run to review the details. Click the Tasks failed box.

Data source image validation results: 3 resources scanned. 2 using revoked images. No newer version was found for the revoked images. Use Packer to build compliant images and send information to HCP Packer. When using channels, the channel must be re-assigned to a valid iteration

The run task detected that the aws_instance resources reference the hcp_packer_image data sources. Since the data sources retrieved a revoked iteration, the run task failed.

If the run task had found a newer iteration, it would have suggested that you use it. As an image maintainer, always make sure to replace revoked images in your channels.

Restore image iteration

Click on the Details link in the run task output to visit the HCP Packer dashboard. Click the learn-packer-hcp-hashicups-image bucket and select the revoked iteration. Click Manage, then Restore iteration to restore the revoked iteration.

Restore revoked iteration in the learn-packer-hcp-hashicups-image bucket

Confirm the action by clicking on Restore iteration.

Deploy HashiCups

Apply your configuration.

$ terraform apply
Running apply in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.

Preparing the remote apply...

To view this run in a browser, visit:
https://app.terraform.io/app/hashicorp-training/learn-hcp-packer-golden-image/runs/run-REDACTED

##

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + hashicups_east_ip = (known after apply)
  + hashicups_west_ip = (known after apply)

Run Tasks (post-plan):

##..

All tasks completed! 1 passed, 0 failed           (1m54s elapsed)

│ HCP-Packer ⸺   Passed
│ Data source and resource image validation results: 3 resources scanned. All resources are compliant.
│
│
│ Overall Result: Passed

------------------------------------------------------------------------


Do you want to perform these actions in workspace "learn-hcp-packer-golden-image"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

##..

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

hashicups_east_ip = "3.15.6.10"
hashicups_west_ip = "34.210.58.171"
loki_ip = "3.12.36.235"

Once Terraform finishes provisioning the HashiCups instances, use cURL to query the HashiCups API using the hashicups_east_ip address, port 19090, and the /coffees path.

$ curl $(terraform output -raw hashicups_east_ip):19090/coffees
[{"id":1,"name":"Packer Spiced Latte","teaser":"Packed with goodness to spice up your images","description":"","price":350,"image":"/packer.png","ingredients":[{"ingredient_id":1},{"ingredient_id":2},{"ingredient_id":4}]},{"id":2,"name":"Vaulatte","teaser":"Nothing gives you a safe and secure feeling like a Vaulatte","description":"","price":200,"image":"/vault.png","ingredients":[{"ingredient_id":1},{"ingredient_id":2}]},{"id":3,"name":"Nomadicano","teaser":"Drink one today and you will want to schedule another","description":"","price":150,"image":"/nomad.png","ingredients":[{"ingredient_id":1},{"ingredient_id":3}]},{"id":4,"name":"Terraspresso","teaser":"Nothing kickstarts your day like a provision of Terraspresso","description":"","price":150,"image":"terraform.png","ingredients":[{"ingredient_id":1}]},{"id":5,"name":"Vagrante espresso","teaser":"Stdin is not a tty","description":"","price":200,"image":"vagrant.png","ingredients":[{"ingredient_id":1}]},{"id":6,"name":"Connectaccino","teaser":"Discover the wonders of our meshy service","description":"","price":250,"image":"consul.png","ingredients":[{"ingredient_id":1},{"ingredient_id":5}]}]

The endpoint returns a list of coffees you can order through the HashiCups app, showing that the application is running on the instances Terraform deployed.

Note: If you do not get a similar response, wait a couple of minutes before trying again. It may take several minutes for the EC2 instances to finish running the setup scripts.

Verify HashiCups logs in Grafana

Add Loki as a data source to retrieve logs in Grafana. Since Grafana is running on the Loki instance, you can access it at the same IP, on port 3000.

Use the loki_ip output value to determine the Grafana endpoint.

$ echo "http://$(terraform output -raw loki_ip):3000"
http://18.117.188.90:3000

In your browser, navigate to the Grafana endpoint. Log in with the default credentials of admin:admin, and skip the prompt to update the password by clicking the Skip link at the bottom of the form. Then, click the settings icon in the left navigation menu, then Data sources. Click the Add data source button, then select the Loki option.

Adding the Loki data source in Grafana

In the URL form field, enter the loki_ip address from the Terraform output and port 3100. Scroll down and click the Save & test button. Grafana will display a confirmation message stating that the data source is connected.

Entering the Loki data source URL

To view the HashiCups logs, click on the compass icon in the left navigation and then click Explore.

Opening the explore page in Grafana

From the dropdown menu at the top left of the page, choose Loki and then click on the blue Log browser button below it.

Opening the log browser for the Loki data source

Loki attaches labels to the log data it receives, and you can choose which logs to view by selecting a label and values from the provided list. Select the compose_service label, then both api and db, to see logs from the HashiCups API and database services. Notice that the resulting selector query updates as you make selections. Click the Show logs button to save the query.

Selecting the Loki log data

Click the Live button on the upper right corner to have the output stream automatically.

Enabling the live data output streaming in Grafana

Run the terraform/hashicups-query.sh script to generate requests to HashiCups and watch as the output updates. The latest messages appear at the bottom of the output area.

$ ./hashicups-query.sh
HashiCups address (EAST): 3.139.105.135
HashiCups address (WEST): 54.69.128.234
Making requests to hashicups services every 5 seconds.
Press ctrl+c to quit.


HashiCups (EAST) response:
[{"id":1,"name":"Packer Spiced Latte","teaser":"Packed with goodness to spice up your images","description":"","price":350,"image":"/packer.png","ingredients":[{"ingredient_id":1},{"ingredient_id":2},{"ingredient_id":4}]},{"id":2,"name":"Vaulatte","teaser":"Nothing gives you a safe and secure feeling like a Vaulatte","description":"","price":200,"image":"/vault.png","ingredients":[{"ingredient_id":1},{"ingredient_id":2}]},{"id":3,"name":"Nomadicano","teaser":"Drink one today and you will want to schedule another","description":"","price":150,"image":"/nomad.png","ingredients":[{"ingredient_id":1},{"ingredient_id":3}]},{"id":4,"name":"Terraspresso","teaser":"Nothing kickstarts your day like a provision of Terraspresso","description":"","price":150,"image":"terraform.png","ingredients":[{"ingredient_id":1}]},{"id":5,"name":"Vagrante espresso","teaser":"Stdin is not a tty","description":"","price":200,"image":"vagrant.png","ingredients":[{"ingredient_id":1}]},{"id":6,"name":"Connectaccino","teaser":"Discover the wonders of our meshy service","description":"","price":250,"image":"consul.png","ingredients":[{"ingredient_id":1},{"ingredient_id":5}]}]

HashiCups (WEST) response:
[{"id":1,"name":"Packer Spiced Latte","teaser":"Packed with goodness to spice up your images","description":"","price":350,"image":"/packer.png","ingredients":[{"ingredient_id":1},{"ingredient_id":2},{"ingredient_id":4}]},{"id":2,"name":"Vaulatte","teaser":"Nothing gives you a safe and secure feeling like a Vaulatte","description":"","price":200,"image":"/vault.png","ingredients":[{"ingredient_id":1},{"ingredient_id":2}]},{"id":3,"name":"Nomadicano","teaser":"Drink one today and you will want to schedule another","description":"","price":150,"image":"/nomad.png","ingredients":[{"ingredient_id":1},{"ingredient_id":3}]},{"id":4,"name":"Terraspresso","teaser":"Nothing kickstarts your day like a provision of Terraspresso","description":"","price":150,"image":"terraform.png","ingredients":[{"ingredient_id":1}]},{"id":5,"name":"Vagrante espresso","teaser":"Stdin is not a tty","description":"","price":200,"image":"vagrant.png","ingredients":[{"ingredient_id":1}]},{"id":6,"name":"Connectaccino","teaser":"Discover the wonders of our meshy service","description":"","price":250,"image":"consul.png","ingredients":[{"ingredient_id":1},{"ingredient_id":5}]}]

If you want to update the golden image, rebuild it with Packer and update the bucket channel in HCP to the latest iteration. When you rebuild the HashiCups image, Packer will automatically retrieve the latest golden image as the base.

Similarly, if you want to update the HashiCups image, rebuild it with Packer and update the HashiCups bucket channel to the latest iteration. Then, when you re-run your Terraform configuration, Terraform will automatically deploy an instance with the latest HashiCups image.

Clean up resources

Now that you have completed the tutorial, destroy the resources you created with Terraform. Enter yes when prompted to confirm the operation.

$ terraform destroy
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_instance.hashicups_east will be destroyed
  - resource "aws_instance" "hashicups_east" {
      - ami                                  = "ami-0cf6c50483ef5aa26" -> null
      - arn                                  = "arn:aws:ec2:us-east-2:561656980159:instance/i-0c12e0ed831c32d93" -> null
      - associate_public_ip_address          = true -> null
…
aws_instance.loki: Destruction complete after 2m33s
aws_subnet.subnet_public: Destroying... [id=subnet-0ba2204618dee10bc]
aws_security_group.loki_grafana: Destroying... [id=sg-0864554590b0d7b5d]
aws_security_group.ssh: Destroying... [id=sg-07eb1f573f4701b69]
aws_security_group.allow_egress: Destroying... [id=sg-095ac5c93dacc82bb]
aws_security_group.allow_egress: Destruction complete after 5s
aws_subnet.subnet_public: Destruction complete after 5s
aws_security_group.ssh: Destruction complete after 5s
aws_security_group.loki_grafana: Destruction complete after 5s
aws_vpc.vpc: Destroying... [id=vpc-0073a7f86c6f28cae]
aws_vpc.vpc: Destruction complete after 1s

Destroy complete! Resources: 23 destroyed.

Your AWS account still contains the AMIs and their backing snapshots, which you may incur charges for depending on your usage. Deregister the AMIs and delete their snapshots in both the us-east-2 and us-west-2 regions.

Tip: Remember to delete both the golden and HashiCups images and snapshots.

In the AWS console for each region (us-east-2 and us-west-2), deregister the AMIs by selecting them, clicking the Actions button, and choosing the Deregister option. Delete the snapshots by selecting them, clicking the Actions button, and choosing the Delete option.

Next steps

In this tutorial, you used Packer and the HCP Packer registry to create a golden image pipeline, allowing you to create a reusable parent image on top of which to build other AMIs. You validated the images using a Terraform Cloud run task.

You learned how to use HCP Packer registry buckets and channels to control which parent images downstream applications build upon and how to integrate them into both Packer and Terraform configurations. This workflow lets your organization build machine images for its services while reducing the overhead of managing system requirements and manually tracking image IDs.

For more information on topics covered in this tutorial, check out the following resources.

  • Read more about the HCP Packer announcement
  • Browse the Packer and HCP Packer documentation
  • Browse the HCP Packer API documentation
  • Visit the HCP Discuss forum to leave feedback or engage in discussion