Basics Lesson 1 of 14

What is CI/CD?

Continuous Integration and Continuous Deployment — the foundation of modern software delivery.

🧒 Simple Explanation (ELI5)

Imagine a busy restaurant kitchen. In a well-run restaurant, there's a prep line — every ingredient is washed, chopped, measured, and quality-checked before it goes near the stove. That prep line is CI (Continuous Integration). Every time a chef (developer) adds a new ingredient (code), the prep line automatically checks: is it fresh? Is it the right size? Does it match the recipe?

Now, once the dish is cooked and passes the head chef's taste test, the delivery service kicks in automatically — the food is plated, boxed, and dispatched to the customer without anyone having to shout "Order up!" That delivery service is CD (Continuous Deployment).

Without CI/CD? The chef does everything manually — preps, cooks, plates, delivers. Sometimes they skip the taste test because they're in a rush. Sometimes the food arrives cold. Sometimes they serve raw chicken. In software terms: code ships without being tested, builds break in production, and Friday deployments become nightmares.

CI/CD is the system that makes the kitchen run like a machine — every commit is prepped, tested, built, and delivered automatically, every single time.

🔧 Technical Explanation

Continuous Integration (CI)

Continuous Integration is the practice where developers merge their code changes into a shared repository frequently — ideally multiple times per day. Each merge triggers an automated pipeline that builds the code, runs the test suite, and reports the results back on the commit or pull request.

The key insight: bugs are caught minutes after they're introduced, not weeks later when someone tries to deploy. The cost of fixing a bug found in CI is orders of magnitude lower than one found in production.

Continuous Delivery vs Continuous Deployment (CD)

CD stands for two related but different practices — and the distinction matters in interviews:

| Aspect | Continuous Integration | Continuous Delivery | Continuous Deployment |
|---|---|---|---|
| Focus | Build & test on every commit | Always-deployable artifact | Auto-deploy every commit |
| Human Step | None (automated) | Manual approval to deploy | None (fully automated) |
| Deploy Frequency | N/A | On-demand (when approved) | Every passing commit |
| Risk Level | Low | Low–Medium | Requires mature testing |
| Best For | All teams | Regulated industries, early teams | Mature teams with strong test suites |
| Example | Run tests on PR | Build Docker image, wait for approval | Merge → tested → live in production |
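The approval gate that separates Continuous Delivery from Continuous Deployment is commonly implemented with a GitHub environment that has required reviewers. A minimal sketch — the environment name `production` and the `./deploy.sh` command are illustrative placeholders, not part of this course's setup:

```yaml
# Continuous Delivery: this job pauses until a reviewer approves, because
# the "production" environment is configured with required reviewers in
# repo Settings -> Environments. Remove the required-reviewers setting and
# the same workflow becomes Continuous Deployment.
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # the approval gate lives on this environment
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh      # placeholder deploy command
```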

The CI/CD Pipeline Stages

A complete CI/CD pipeline follows these stages, each building on the previous:

| Stage | What Happens | Tools / Actions |
|---|---|---|
| 1. Source | Developer pushes code or opens a PR | Git, GitHub, branch protection rules |
| 2. Build | Application is compiled or bundled | npm run build, dotnet build, go build, mvn package |
| 3. Test | Automated tests run against the build | Jest, pytest, JUnit, Cypress, linters |
| 4. Package | Build artifact is packaged for deployment | Docker build, Helm chart, zip archive |
| 5. Deploy | Artifact is deployed to target environment | Kubernetes, AKS, AWS ECS, Azure App Service |
| 6. Monitor | Health checks, logs, and alerts confirm success | Prometheus, Grafana, Azure Monitor, Datadog |
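As a rough sketch, the stages above map onto a single GitHub Actions workflow like this. The job names, Node toolchain, and image name are illustrative assumptions, not prescriptions:

```yaml
name: Pipeline
on: [push]                             # 1. Source: triggered by a push
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build   # 2. Build
      - run: npm test                  # 3. Test
  package:
    needs: build-and-test              # runs only if build and tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:${{ github.sha }} .   # 4. Package
      # 5. Deploy and 6. Monitor would follow here (pushing the image,
      # applying manifests, health checks); omitted because they depend
      # on your target platform.
```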

Why CI/CD Matters

CI/CD addresses the most common failure modes of manual delivery:

- Slow releases — manual deployments are error-prone and infrequent
- Integration hell — long-lived isolated branches make merging painful
- Inconsistency — manual deploys differ every time; automated pipelines are identical every time
- No audit trail — with CI/CD, every build, test, and deployment is logged
- Slow recovery — rollback becomes one command instead of a multi-hour fire drill

📊 CI/CD Pipeline Flow

CI/CD Pipeline — From Code to Production:

Developer Push → CI (Continuous Integration: Build → Unit Tests → Lint & SAST) → Package (Docker / Artifact) → CD (Continuous Deployment: Deploy to Staging → Integration Tests) → Approval Gate → Deploy to Production

Without CI/CD:

Developer → "git push and pray" → Manual Build → "skip tests, it's Friday" → Manual Deploy → 🔥 Production Fire

CI/CD Tools Comparison

GitHub Actions isn't the only CI/CD tool, but it's uniquely positioned for GitHub-hosted projects. Here's how it compares:

| Tool | Where It Runs | Config Format | Pricing | Best For |
|---|---|---|---|---|
| GitHub Actions | GitHub-hosted or self-hosted runners | YAML (in .github/workflows/) | Free for public repos; 2,000 min/month free for private | GitHub-native projects, open source |
| Azure DevOps | Microsoft-hosted or self-hosted agents | YAML or classic editor (GUI) | Free tier: 1 parallel job, 1,800 min/month | Enterprise .NET/Azure workloads |
| Jenkins | Self-hosted only | Groovy (Jenkinsfile) | Free (open source); you pay for infra | Maximum customization, legacy systems |
| GitLab CI | GitLab-hosted or self-hosted runners | YAML (.gitlab-ci.yml) | 400 min/month free; premium tiers available | GitLab-native projects, all-in-one DevOps |
| CircleCI | Cloud-hosted or self-hosted runners | YAML (.circleci/config.yml) | Free tier: 6,000 min/month | Fast builds, Docker-first workflows |

Why GitHub Actions?

Throughout this course, we use GitHub Actions as our CI/CD platform. Here's why:

- It's built into every GitHub repository — no separate CI server to install or maintain
- Workflows are plain YAML files versioned alongside your code in .github/workflows/
- A marketplace of 20k+ reusable actions covers most common tasks
- It's free for public repositories, with a generous free tier for private ones

⌨️ Hands-on: Your First CI Workflow

Let's create a minimal GitHub Actions workflow to see CI in action. This will run every time you push code.

Step 1: Create the Workflow File

In your repository, create the file .github/workflows/hello.yml:

```yaml
name: Hello CI
on: [push]
jobs:
  say-hello:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Hello from GitHub Actions!"
      - run: echo "Triggered by ${{ github.event_name }} on ${{ github.ref }}"
```

Step 2: Push to GitHub

```bash
# Create the workflows directory
mkdir -p .github/workflows

# Create the workflow file (or use your editor)
# ... paste the YAML above into .github/workflows/hello.yml

# Commit and push
git add .github/workflows/hello.yml
git commit -m "ci: add hello world workflow"
git push origin main
```

Step 3: See the Results

Navigate to your repository on GitHub. Click the Actions tab. You'll see your workflow run listed with a green checkmark (or red X if something failed). Click into it to see the logs for each step.

What Just Happened?

When you pushed, GitHub detected the workflow file, provisioned a fresh ubuntu-latest virtual machine (a runner), executed each step in order, and reported the result back on the commit. That loop (push, provision, run, report) is Continuous Integration in miniature.

💡
Tip

You don't need a separate CI server with GitHub Actions — it's built into every GitHub repository. No Jenkins installation, no CircleCI signup. Just add a YAML file and push.

⚠️
Important

CI/CD is not just for large teams. Even solo developers benefit from automated testing and deployment. A one-person project with CI catches bugs before they ship, and CD means you can deploy with confidence at any time — not just when you remember all the manual steps.

🎯 Interview Questions

Beginner

Q: What is Continuous Integration (CI)?

Continuous Integration is a software development practice where developers frequently merge their code changes into a shared repository (ideally multiple times per day). Each merge triggers an automated pipeline that builds the code and runs tests. The goal is to detect integration issues early — within minutes of introducing them — rather than discovering them days or weeks later during a manual integration phase. CI ensures the codebase is always in a buildable, testable state.

Q: What is the difference between Continuous Delivery and Continuous Deployment?

Continuous Delivery: Every code change that passes automated tests is prepared for release and could be deployed to production at any time, but deployment requires a manual approval step. The artifact is always production-ready. Continuous Deployment: Every code change that passes automated tests is automatically deployed to production with zero human intervention. The distinction matters: Continuous Delivery is safer for regulated industries (finance, healthcare) where a human must approve releases, while Continuous Deployment is faster but requires a very mature test suite you trust completely.

Q: What are the main stages of a CI/CD pipeline?

A typical CI/CD pipeline has six stages: 1. Source — developer pushes code or opens a pull request. 2. Build — the application is compiled, transpiled, or bundled. 3. Test — automated tests (unit, integration, linting) run against the build. 4. Package — the build artifact is packaged (Docker image, Helm chart, zip file). 5. Deploy — the artifact is deployed to the target environment (staging, then production). 6. Monitor — health checks, logs, and alerts confirm the deployment is working correctly.

Q: Why does CI/CD matter? What problems does it solve?

CI/CD solves several critical problems: 1. Slow releases — without automation, deployments are manual, error-prone, and infrequent. 2. Integration hell — when developers work in isolation for weeks, merging becomes painful and bug-ridden. 3. Inconsistency — manual deploys are different every time; automated pipelines are identical every time. 4. No audit trail — CI/CD logs every build, test, and deployment. 5. Slow recovery — with CI/CD, rollback is one command, not a multi-hour fire drill. According to the DORA State of DevOps research, elite teams deploy 200x more frequently, have 3x lower change failure rates, and recover from incidents 2,600x faster.

Q: What is a pipeline "artifact"?

A pipeline artifact is the output of a build stage — the thing that gets deployed. It could be a compiled binary (e.g., a Go executable), a Docker image pushed to a container registry, a Helm chart package, a JAR/WAR file for Java apps, a ZIP archive of a static website, or an npm package. The key principle: build once, deploy many. You build the artifact once in CI, then promote the exact same artifact through staging and production — never rebuild for different environments. This ensures what you tested is exactly what you deploy.
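In GitHub Actions, artifacts can be handed between jobs with actions/upload-artifact and actions/download-artifact, so the deploy job receives exactly the files the build job produced. A minimal sketch — the `dist/` path and npm commands assume a hypothetical JavaScript project:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build      # produces dist/ (assumed layout)
      - uses: actions/upload-artifact@v4
        with:
          name: web-dist
          path: dist/
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: web-dist
          path: dist/
      # deploy the downloaded artifact here -- never rebuild at this stage
```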

Intermediate

Q: How does GitHub Actions compare to Jenkins?

GitHub Actions: Cloud-hosted (GitHub manages the runners), YAML configuration, native GitHub integration, marketplace with 20k+ actions, free for public repos. Jenkins: Self-hosted (you manage the server), Groovy-based Jenkinsfile, maximum flexibility and plugin ecosystem (1,800+ plugins), free and open source but you pay for infrastructure and maintenance. Key differences: Jenkins requires you to provision, patch, and scale your own CI servers — GitHub Actions handles this for you. Jenkins has a more mature plugin ecosystem for niche integrations. GitHub Actions is better for greenfield projects on GitHub; Jenkins is better when you need full control or have complex legacy pipelines. Many organizations are migrating from Jenkins to GitHub Actions to reduce operational overhead.

Q: What is a "shift-left" approach and how does CI/CD enable it?

"Shift left" means moving testing, security, and quality checks earlier (to the left) in the development lifecycle — catching problems when they're cheapest to fix. CI/CD enables shift-left by running automated checks on every commit: linting catches style issues, unit tests catch logic bugs, SAST (Static Application Security Testing) catches vulnerabilities, dependency scanning catches vulnerable packages — all before the code is even merged. Without CI/CD, these checks happen manually (if at all) late in the cycle, when fixing issues is 10–100x more expensive.

Q: What is the role of branch protection rules in CI/CD?

Branch protection rules enforce that the CI pipeline must pass before code can be merged into protected branches (like main or production). Key settings: Require status checks to pass — PRs can't merge until CI jobs succeed. Require PR reviews — at least one (or more) approvals needed. Require linear history — enforce rebase or squash merges. Restrict who can push — prevent direct pushes to main. Together, these ensure no untested or unreviewed code reaches production. They're the "guardrails" that make CI/CD trustworthy.

Q: What is OIDC in the context of CI/CD, and why is it better than storing cloud credentials as secrets?

OIDC (OpenID Connect) allows your CI/CD pipeline to authenticate to cloud providers (Azure, AWS, GCP) using short-lived tokens instead of long-lived credentials. How it works: GitHub Actions requests a JWT token from GitHub's OIDC provider → the cloud provider validates the token and issues temporary credentials → the pipeline uses those credentials for the current run only. Why it's better: No long-lived secrets to rotate or leak. Credentials expire automatically after the job finishes. Follows the principle of least privilege. If a secret leaks, the blast radius is limited to that single run. GitHub Actions supports OIDC natively with the azure/login, aws-actions/configure-aws-credentials, and google-github-actions/auth actions.
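A sketch of OIDC login to Azure with azure/login. The three IDs come from the federated-credential setup on the Azure side; storing them as repository secrets or variables is a common convention, though they are identifiers rather than true secrets:

```yaml
permissions:
  id-token: write    # allow this job to request an OIDC token from GitHub
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      # later steps run with short-lived credentials that expire after
      # the job -- no long-lived password or key is stored anywhere
```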

Q: What is the "build once, deploy many" principle?

This principle states that you should build your artifact (Docker image, binary, package) exactly once in the CI stage, then promote that identical artifact through all environments: dev → staging → production. You never rebuild for different environments. Environment-specific configuration is injected at deploy time via environment variables, config maps, or secrets — not baked into the artifact. Why it matters: If you rebuild for each environment, you're not testing what you deploy. A rebuild might produce a different binary (different dependency resolution, different build timestamp, race conditions). "Build once, deploy many" guarantees what was tested in staging is byte-for-byte identical to what runs in production.
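One common way to honor this principle is to tag the image with the commit SHA once, then have every environment deploy that same immutable tag. A sketch — the registry path `ghcr.io/my-org/my-app` and `./deploy.sh` are placeholders, and registry authentication is omitted:

```yaml
jobs:
  package:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # build exactly once, tagged with the immutable commit SHA
      # (a docker login step for the registry would precede the push)
      - run: |
          docker build -t ghcr.io/my-org/my-app:${{ github.sha }} .
          docker push ghcr.io/my-org/my-app:${{ github.sha }}
  deploy-staging:
    needs: package
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh staging ghcr.io/my-org/my-app:${{ github.sha }}
  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    steps:
      # same tag as staging -- the artifact is promoted, never rebuilt
      - run: ./deploy.sh production ghcr.io/my-org/my-app:${{ github.sha }}
```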

Scenario-Based

Q: Your team deploys manually every two weeks. Deployments take 4 hours and break production about once a month. How would you introduce CI/CD?

Start incrementally; don't try to automate everything at once. Week 1–2: Add CI — create a GitHub Actions workflow that runs existing tests (even if there are few) on every PR. Enable branch protection so PRs can't merge without passing CI. Week 3–4: Add linting and code formatting checks to CI. Start writing tests for the most critical paths. Month 2: Add the build + package stage — produce a Docker image or deployable artifact automatically. Month 3: Add CD to a staging environment — auto-deploy on merge to main. Run smoke tests against staging. Month 4: Add production deployment with a manual approval gate (Continuous Delivery). As confidence grows, consider removing the gate (Continuous Deployment). Throughout, track metrics: deployment frequency, lead time, MTTR, and change failure rate.
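The Week 1–2 step can be as small as a single workflow file. A sketch assuming a hypothetical npm project; swap in whatever test command your stack uses:

```yaml
name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci     # reproducible install from the lockfile
      - run: npm test   # whatever tests already exist
```

Pair this with a branch protection rule requiring the `test` check to pass, and no untested code can reach main.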

Q: A developer pushes code and the CI pipeline passes, but the deployment to staging fails. How do you debug this?

Systematic debugging approach: 1. Check the workflow logs — click into the failed step in the Actions tab. Read the error message carefully. 2. Common causes: Missing secrets or environment variables (check Settings → Secrets), expired or incorrect cloud credentials, target environment is down or unreachable, Kubernetes cluster has insufficient resources, Docker image was built but failed to push to registry (auth issue). 3. Reproduce locally — try running the deployment command on your local machine against the staging cluster. 4. Check environment-specific config — staging might have different constraints (smaller node pool, different RBAC permissions, network policies blocking traffic). 5. Review recent changes — did someone change the deployment configuration, secrets, or infrastructure recently? 6. Add logging — temporarily add run: env or echo statements to see what variables are available at deploy time.
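For step 6, a temporary debugging step in the deploy job might look like the fragment below. GitHub redacts registered secrets in logs automatically, but values derived from secrets can still leak, so remove the step once the problem is found:

```yaml
      - name: Debug deploy environment
        run: |
          env | sort      # show which environment variables are actually set
          echo "ref=${{ github.ref }} sha=${{ github.sha }}"
```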

Q: Your company has strict compliance requirements — every production change must be approved by two people and have an audit log. Can you still use CI/CD?

Absolutely — CI/CD is even better for compliance than manual processes. Here's how: CI runs automatically on every PR, producing test reports and security scan results that are attached to the PR. Branch protection rules enforce that at least two reviewers must approve the PR. GitHub Environments with required reviewers serve as approval gates before production deployment — specific people must click "Approve" in the Actions UI. Audit trail is automatic: every workflow run, approval, test result, and deployment is logged with timestamps and the identity of who triggered or approved it. Immutable artifacts: the Docker image digest is logged, proving exactly what was deployed. This is far more auditable than manual deployment where someone SSH'd into a server and ran commands.

Q: You manage a monorepo with frontend, backend, and infrastructure code. How do you structure CI/CD so changes to frontend don't trigger backend tests?

Use path filters in GitHub Actions to scope workflows to specific directories. Create separate workflow files for each component: ci-frontend.yml with on: push: paths: ['frontend/**'], ci-backend.yml with paths: ['backend/**'], and ci-infra.yml with paths: ['infra/**']. Each runs only when its directory changes. For shared code (e.g., a shared/ library used by both), include that path in both workflows. You can also use dorny/paths-filter action within a single workflow to dynamically decide which jobs to run based on changed files. This keeps CI fast (no wasted minutes) and feedback relevant (developers see only the tests that matter for their change).
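A sketch of the frontend workflow's trigger, following the question's hypothetical `frontend/` and `shared/` layout:

```yaml
# .github/workflows/ci-frontend.yml
name: CI (frontend)
on:
  push:
    paths:
      - 'frontend/**'
      - 'shared/**'   # shared library the frontend depends on
  pull_request:
    paths:
      - 'frontend/**'
      - 'shared/**'
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test     # assumed frontend test command
        working-directory: frontend
```

ci-backend.yml and ci-infra.yml follow the same pattern with their own paths, so each component gets fast, relevant feedback.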

Q: A junior developer asks: "Why can't I just FTP my files to the server like we used to?" How do you explain the value of CI/CD?

FTP deployment has no safety net: No tests run — you could upload code with a syntax error. No rollback — if something breaks, you have to manually figure out what the "last good version" was. No audit trail — who deployed what and when? Partial uploads — if your connection drops mid-transfer, the server has half-old, half-new code. No consistency — every developer does it slightly differently. No environment parity — "it works on my machine" is the only testing. CI/CD solves all of these: tests run automatically, deployments are atomic (all or nothing), every deployment is logged, the process is identical every time, and the same pipeline runs in dev, staging, and production. The 30 minutes spent setting up a CI/CD pipeline saves hundreds of hours of debugging "why is production broken" over the lifetime of the project.

🌍 Real-World Use Case

A mid-size SaaS company (20 developers, 8 microservices) was deploying manually for two years. Here's their transformation story:

| Metric | Before CI/CD | After GitHub Actions |
|---|---|---|
| Deployment process | Manual: SSH → git pull → build → restart services (2+ hours) | Automated: merge PR → pipeline runs → deployed in 15 minutes |
| Deploy frequency | Once every 2 weeks (big-bang releases) | 5–10 times per day (small, safe changes) |
| Production incidents | 1–2 per month (bad deploys, missing config) | Zero failed deploys in 6 months |
| Recovery time (MTTR) | 2–4 hours (find the bug, hotfix, manual redeploy) | Under 5 minutes (helm rollback or revert PR → auto-deploy) |
| Developer confidence | "Please don't deploy on Friday" | "Deploy anytime, the pipeline has our back" |
| Onboarding new devs | Week-long deployment training, error-prone | "Open a PR. CI tests it. Merge deploys it." |

The key lesson: they didn't automate everything at once. They started with CI (tests on PRs), then added Docker builds, then staging auto-deploy, and finally production deployment with approval gates. The entire migration took three months — but the payoff was immediate: the first week of CI alone caught 4 bugs that would have reached production.

πŸ“ Summary
