Managing 25 GitHub Repos as Terraform
I manage 25 GitHub repositories, and every one of them needs the same merge settings and the same CI variables. Configuring that manually would mean clicking through 25 settings pages every time I want to change something.
Instead, I manage all of it as Terraform. Adding a new repo is 5 lines of HCL. Changing a setting across all repos is one variable update.
The Setup
One Terraform project, flat layout:
tf-github/
├── main.tf # Providers, backend
├── repos.tf # All repositories
├── repo-vars.tf # CI variables and secrets
├── collaborators.tf # Access control
├── variables.tf # Defaults
└── outputs.tf # Clone URLs
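main.tf holds only the providers and the backend. A minimal sketch of what mine looks like — the bucket name and owner variable are placeholders, not my actual values:

```hcl
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 6.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket = "my-terraform-state" # placeholder
    key    = "tf-github/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "github" {
  owner = var.github_owner # org or username; token comes from the GITHUB_TOKEN env var
}
```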
Adding a Repo
Every repo is an entry in a single local.repos map:
locals {
  repos = {
    "my-new-project" = {
      description = "What this project does"
      topics      = ["terraform", "aws", "infrastructure"]
      has_issues  = true
    }
    "another-project" = {
      description = "Another project"
      topics      = ["go", "api"]
      has_issues  = true
    }
    # ... 23 more repos
  }
}
A single for_each resource creates all of them:
resource "github_repository" "repos" {
  for_each = local.repos

  name            = each.key
  description     = each.value.description
  visibility      = var.visibility
  has_issues      = each.value.has_issues
  has_projects    = false
  has_wiki        = false
  has_discussions = false

  allow_merge_commit     = false
  allow_squash_merge     = true
  allow_rebase_merge     = false
  delete_branch_on_merge = true

  squash_merge_commit_title   = "PR_TITLE"
  squash_merge_commit_message = "PR_BODY"

  topics = each.value.topics

  lifecycle {
    prevent_destroy = true
  }
}
Every repo gets the same merge settings: squash-only, delete branch on merge, PR title as commit message. The prevent_destroy lifecycle rule ensures a bad merge can’t accidentally delete repositories. No repo-by-repo configuration. No drift.
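That var.visibility is the lever for fleet-wide changes: flip the default once and every repo follows on the next apply. A sketch of the corresponding variables.tf entry (the validation block is my own illustrative addition):

```hcl
variable "visibility" {
  description = "Visibility applied to every repository"
  type        = string
  default     = "private"

  validation {
    condition     = contains(["private", "public"], var.visibility)
    error_message = "Visibility must be \"private\" or \"public\"."
  }
}
```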
Distributing CI Variables
Different repos need different CI variables. Some need an AWS role ARN. Some need a GitHub PAT for cross-repo access. Some need account IDs for multi-account deployments.
locals {
  terraform_ci_repos = toset([
    "tf-aws-root", "tf-github", "tf-backend",
    "tf-aft", "my-app", "my-other-app",
  ])
}

resource "github_actions_variable" "terraform_ci_role_arn" {
  for_each = local.terraform_ci_repos

  repository    = github_repository.repos[each.value].name
  variable_name = "TERRAFORM_CI_ROLE_ARN"
  value         = "arn:aws:iam::${var.management_account_id}:role/TerraformCIRole"
}
Add a repo to the set, apply, it gets the variable. Remove it, apply, it’s gone.
Secret Distribution
The GitHub PAT is stored in AWS SSM (not in Terraform variables or .tfvars files). At apply time, Terraform reads it from SSM and pushes it to repos that need it:
data "aws_ssm_parameter" "github_pat" {
  name = "/tf-aws-root/github-pat"
}

resource "github_actions_secret" "gh_pat" {
  for_each = local.repos_needing_gh_pat

  repository      = github_repository.repos[each.value].name
  secret_name     = "GH_PAT"
  plaintext_value = data.aws_ssm_parameter.github_pat.value
}
The PAT never touches a file, a .env, or a commit. It flows from SSM → Terraform → GitHub Secrets.
A note on state security: The plaintext_value field means the PAT value exists in Terraform state. Make sure your state backend is encrypted (S3 with SSE) and access is restricted. Anyone who can run terraform state pull can read the secret. This is a known limitation of the GitHub provider — there’s no way around it without a wrapper.
Managing Collaborators
locals {
  collaborators = {
    "their-username" = {
      permission = "push"
      repos      = ["project-a", "project-b"]
    }
  }
}
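The map alone creates nothing — a for_each resource needs one instance per (user, repo) pair. A sketch of how the flattening could look, wired into a github_repository_collaborator resource (the local name collaborator_pairs is illustrative):

```hcl
locals {
  # Flatten {user => {permission, repos}} into one entry per (user, repo) pair
  collaborator_pairs = merge([
    for user, cfg in local.collaborators : {
      for repo in cfg.repos : "${user}/${repo}" => {
        username   = user
        repository = repo
        permission = cfg.permission
      }
    }
  ]...)
}

resource "github_repository_collaborator" "access" {
  for_each = local.collaborator_pairs

  repository = github_repository.repos[each.value.repository].name
  username   = each.value.username
  permission = each.value.permission
}
```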
Removing the entry and applying immediately revokes access. No forgotten permissions lingering for months.
Importing Existing Repos
If you already have repos on GitHub, use Terraform’s native import blocks — no CLI commands needed:
import {
  to = github_repository.repos["my-existing-repo"]
  id = "my-existing-repo"
}
For bulk imports, import blocks support for_each as of Terraform 1.7, so you can generate them from a data source:
data "github_repositories" "existing" {
  query = "org:your-org"
}

import {
  for_each = toset(data.github_repositories.existing.names)
  to       = github_repository.repos[each.value]
  id       = each.value
}
Run terraform plan — Terraform will show what it wants to adopt and what needs to change. No shell scripts, no manual terraform import commands. One caveat: every name in the set must have a matching key in local.repos, so filter out any org repos you don't manage. The import blocks themselves can be removed after the first apply, since the resources are then in state.
This is cleaner than the old CLI approach because:
- It’s declarative and version-controlled
- It runs as part of the normal plan/apply cycle
- It doesn’t require separate shell access or scripting
- Multiple team members can review the import in a PR before it executes
CI/CD for the GitHub Config Itself
The tf-github repo has its own CI pipeline:
- Push to main → terraform plan + terraform apply
- PRs → terraform plan only
So when I add a repo or change a setting, I push to tf-github, CI applies it, and the GitHub config updates. Infrastructure managing infrastructure.
What About Branch Protection?
I’d love to manage branch protection rules here too, but GitHub’s Free plan doesn’t support branch protection on private repos, and most of my repos are private. If you’re on GitHub Team or Enterprise, add github_branch_protection resources with the same for_each pattern and you’ll have consistent rulesets across everything.
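For anyone on a paid plan, a sketch of what that could look like — the pattern and rule values here are illustrative, not something I run:

```hcl
resource "github_branch_protection" "main" {
  for_each = local.repos

  repository_id = github_repository.repos[each.key].node_id
  pattern       = "main"

  required_pull_request_reviews {
    required_approving_review_count = 1
  }

  required_status_checks {
    strict = true
  }
}
```

Because it iterates the same local.repos map, every new repo picks up the rules automatically.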
Why Bother?
At 3 repos, this is overkill. At 10, it saves time. At 25, it’s the only sane option.
- Consistency: Every repo has identical settings. No one-off configurations.
- Auditability: git log shows every change to every repo’s configuration.
- Speed: New repo in 30 seconds, not 5 minutes of clicking through GitHub settings.
- Offboarding: Remove a collaborator entry, apply. Done across all repos instantly.
The setup takes an afternoon. The time savings compound forever.