Synchronizing containers from JFrog Artifactory to Google Cloud Artifact Registry

Taneli Leppä
5 min read · Feb 14, 2024


If you are leveraging JFrog Artifactory on-premises, in other clouds, or even in GCP, you may also want to make your containers available via Artifact Registry. This could be for several reasons:

  • No network access from GKE or other Compute-based services to Artifactory
  • Using your containers in Cloud Functions or Cloud Run (as they can only pull from GCR/AR)
  • Scalability or reliability concerns

Having containers in both Artifactory and Artifact Registry can of course be solved by pushing the images from your build pipelines to both registries, but that means extra effort for every pipeline.

In this article, I’ll present an alternative based on Artifactory’s support for webhooks and a Cloud Function. We will be basing this on the trusty Pubsub2Inbox tool (and its sidekick, Json2Pubsub). Let’s start with a quick overview of the architecture:

Architecture of the solution.

The architecture is fairly straightforward. The following flow happens when everything is set up correctly and you push a container to Artifactory:

  1. A container is pushed to Artifactory with a tag.
  2. The push triggers a webhook in Artifactory that targets the Json2Pubsub function (the function can be publicly accessible, as it basically has a single IAM permission, or you can opt for other means, like fronting it with a load balancer or using INTERNAL_ONLY ingress and accessing it via Interconnect directly).
  3. Json2Pubsub receives the webhook payload and verifies that the shared secret is correctly set in the request header. If so, it writes the payload into a Pub/Sub topic (see the example call after this list for a way to exercise this step manually).
  4. The Pub/Sub message triggers the Pubsub2Inbox Cloud Function.
  5. The Cloud Function processes the JSON payload and, based on the event type, performs either a Docker pull from the source in the payload or a delete in Artifact Registry. The Serverless VPC connector provides a private access path to Artifactory.
  6. In case of a new container, a pure Python implementation of Docker downloads the container manifest and layers and then pushes it to Artifact Registry.
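
Once everything is deployed, you can exercise steps 2 and 3 by hand by sending a hand-crafted webhook call with the shared secret. This is purely illustrative: the header name and payload fields below are assumptions of mine, not taken from the example’s configuration, so check your Artifactory version’s webhook documentation and the Json2Pubsub configuration in the example for the exact format.

# NOTE: header name and payload fields are illustrative assumptions only
curl -X POST "https://<your-json2pubsub-url>" \
  -H "Content-Type: application/json" \
  -H "X-JFrog-Event-Auth: <your-webhook-secret>" \
  -d '{"domain": "docker", "event_type": "pushed", "data": {"repo_key": "test-docker-local", "image_name": "hello-world", "tag": "latest"}}'

If the secret matches, the payload should show up in the Pub/Sub topic and trigger the Pubsub2Inbox function.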

Seems simple. Let’s go through the setup process.

Setting up Artifactory

For testing purposes, let’s create a new Docker repository in Artifactory:

Adding a new Docker repository in Artifactory.

Once the repository is set up, we can create a webhook:

Creating a webhook.

When creating the webhook, put any temporary but valid URL as the webhook URL; we will update it later with the proper Cloud Function URL, as the function has not been deployed yet. For events, select the “Docker tag pushed”, “Docker tag was deleted” and “Docker tag was promoted” event types. Then click Select Repositories and select the repository you just created in the previous step:

Selecting the repositories.

For the webhook secret, we can generate a cryptographically secure shared secret using, for example, OpenSSL:

# openssl rand -hex 32
62ed8ed8d0f2dcc4159ae003f342c5a2a14add86214fc283017865b74f150d9d

Take note of the secret, as we will need it later when setting up the Cloud Function. Remember NOT to select “Use secret for payload signing” (how it works is completely undocumented).

While we could set up the synchronization using our own user and password, we’ll do it properly by creating a separate integration user and granting it Read permissions only on selected repositories.

First, create the user in Artifactory:

Creating the user

Remember to remove the user from the default groups, such as readers. Next, we’ll create a new Permission for just the synchronization user:

Creating a permission.

Select the repositories you want to be able to synchronize, then select the synchronization user:

Selecting the user for the permission.

Finally, let’s select the correct Read permission:

Just read.

Next, we will create an access token for the user to be used by the Cloud Function:

Creating the reference access token.

After generating the token, the UI will show you the tokens only once:

Make a note of the reference token (the shorter one), as this will be the “password” that the Cloud Function uses to access Artifactory. Also note that there is a delay before the token starts working.
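
Because of that delay, it can be useful to check that the token is accepted before wiring it into the Cloud Function. A quick way to do that, assuming the reference token can be used as a basic-auth password and using Artifactory’s standard ping endpoint (the hostname is a placeholder), is:

# Assumes reference token works as a basic-auth password; should return "OK" once active
curl -u "gcp-sync:<reference-token>" "https://<your-artifactory-host>/artifactory/api/system/ping"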

The setup on the Artifactory side is now almost complete. Next, we will deploy the Cloud Function.

Setting up the Cloud Function

You can start by cloning the example code (the whole repository) from pubsub2inbox/examples/artifactory-to-artifact-registry/. In that directory, let’s create a terraform.tfvars file to contain our configuration. The contents should be along these lines (check variables.tf for the full customization options):

project_id                 = "<your project id>"
region                     = "europe-west4"
artifactory_webhook_secret = "<the secret you created with openssl>"
artifactory_username       = "gcp-sync"
artifactory_password       = "<the reference token you created>"
vpc_config = {
  network = "<an-existing-vpc>"
  cidr    = "10.8.0.0/28" # CIDR range for the serverless connector
  # network_project  = "<shared-vpc-project>" to use a Shared VPC
  # create_connector = false to use an existing serverless connector
  # connector        = "connector id" for an existing serverless connector
}

After this, we can deploy the function using Terraform by running the following commands:

# terraform init
...
# terraform apply

Plan: 40 to add, 0 to change, 0 to destroy.

Changes to Outputs:
+ instructions = (known after apply)

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

...

Once the deployment is complete (a few minutes), you should see this:

Apply complete! Resources: 40 added, 0 changed, 0 destroyed.

Outputs:

instructions = <<EOT
Please use the following webhook URL for Artifactory webhook:
https://artifactory-to-artifact-registry-json2pubsub-ma123bc44f-ez.a.run.app

EOT

Capture the Cloud Function URL from the output and update it in the Artifactory webhook settings.
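
If you need the URL again later, you can re-read the Terraform output at any time:

# terraform output instructions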

That’s it for the setup. We can now build and push a container image into the Artifactory repository we configured (an example push is shown after the screenshot below), and if all goes according to plan, the container will shortly appear in the Artifact Registry repository that the Terraform code created:

Demonstration of the sync.
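
For reference, the push that triggers the sync could look roughly like this. The hostname, repository key and image name below are placeholders (and assume the repository path method for Docker access in Artifactory), so adapt them to your setup:

# Placeholders only; adjust the host, repository key, image and tag to your environment
docker login <your-artifactory-host>
docker pull hello-world:latest
docker tag hello-world:latest <your-artifactory-host>/test-docker-local/hello-world:1.0
docker push <your-artifactory-host>/test-docker-local/hello-world:1.0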

And that’s it! The code also supports deleting tags (remove a tag in Artifactory with curl -u "user:pw" -X DELETE "https://artifactory.url/artifactory/repository-name/package-name/tag-name").
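
To confirm the result on the Artifact Registry side, for both pushes and deletions, you can list the synced images with gcloud. The repository name depends on what the Terraform code created in your project, so adjust it accordingly:

# Replace the project and repository with the ones created by the Terraform code
gcloud artifacts docker images list \
  europe-west4-docker.pkg.dev/<your-project-id>/<repository-name> --include-tags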

Thanks for reading another installment of “the Pubsub2Inbox series”!

PS. We might revisit the solution for two additional use cases: replication of containers between Artifact Registries in different regions, and multi-cloud.
