# Test before Build

Sometimes, [[CI - Continuous Integration|CI]] configuration puts **the test stage before the build stage** to run quick checks on the raw source code as early as possible, aiming to catch obvious errors such as syntax mistakes, linting issues, or failing unit tests without spending time building artifacts. Proponents argue that this speeds up feedback in workflows where the build step is slow or produces [[Package|packages]] mainly for deployment rather than [[testing]]. This practice of testing first aims to save time by failing fast before any packaging or bundling happens.

## An anti-pattern

Although this practice appears to shorten the feedback loop compared to the standard build, test, deploy order in a simple and elegant way, it has unforeseen consequences. [[Testing]] should validate the built artifact (the actual code, dependencies, configuration, and packaging) that will run in production. Even when some tests can be safely performed on raw source, it is usually better to integrate them as part of the build or as parallel early steps, rather than reordering the entire [[pipeline]].

### Tests that depend on Build

Many critical tests inherently depend on the build stage because they verify not just the raw code, but the fully assembled, packaged, or containerized artifact.
These include, but are not limited to:

- Container and Image Tests: correct base image, configuration files, permissions, expected binaries
- Integration Tests: verifying interactions between modules, third-party libraries, and runtime configurations
- Deployment Validation and Smoke Tests: validating that built artifacts deploy to a staging environment and running test scenarios
- Security Scanning: container security scanning and [[SCA - Software Composition Analysis]]
- Performance Tests: realistic performance and load testing measuring actual runtime behavior rather than theoretical performance of isolated components

To solve this specific issue, another [[Anti-Pattern]] is often used: [[Pre-build and Post-build test stage]].

### Missing elements post Build

[[Testing]] on source is not enough, as some elements might be present in the repo/sources but missed by the build step, producing a false positive test on source and a broken [[package]]/artifact that would crash in production if not tested afterward.

Example: a [[package]] relies on a text file to run, but is misconfigured and fails to include this file in the final [[package]]. Tests run before the build, against sources, **might pass**, while the same tests against the built artifact would **fail**. Similar issues can occur with configuration files, environment variables, compiled assets, or dependency resolution that only manifests after the build step. Even if you validate all raw source code thoroughly, you cannot guarantee that the packaged artifact faithfully represents what you tested unless you run validation after or alongside the build.

To solve this specific issue, another [[Anti-Pattern]] is often used: [[Pre-build and Post-build test stage]].

### Special cases aren't special enough

> Special cases aren't special enough to break the rules.

Some argue that certain projects have no tests that depend on builds, justifying the use of this design.
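As a concrete illustration, such a reordered pipeline might look like the following sketch in GitLab-CI-style YAML (stage order, job names, and commands are hypothetical, assuming a Node.js project):

```yaml
# Anti-pattern sketch: the test stage is ordered before the build stage
stages:
  - test
  - build
  - deploy

lint:
  stage: test
  script: npm run lint   # checks raw source only

unit-tests:
  stage: test
  script: npm test       # runs before any artifact exists

build-package:
  stage: build
  script: npm run build  # nothing downstream validates what this produces
```

Every check here runs against raw source; nothing in the pipeline exercises the artifact that `build-package` produces.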
But this has multiple consequences:

- **Erosion of consistency**: When teams adopt “test before build” only on select projects, the [[pipeline]] order becomes inconsistent across repositories. This inconsistency increases cognitive load for contributors, who must remember or re-learn each project’s quirks.
- **Risk of Hidden Assumptions**: Even if a project today has no build-dependent tests, this can easily change over time as new integration checks or artifact validations are added. When this happens, teams must re-architect their [[pipeline]] to re-order stages, introducing friction and potential oversights.
- **Illusion of Speed**: In many cases, the perceived time savings are marginal. For example, linting and syntax checks can run in parallel with the build rather than ahead of it. Moving tests before the build may simply increase overall runtime without any real improvement of the feedback loop.
- **Encouragement of Shallow Validation**: Relying solely on tests that run on raw source promotes a bias toward unit-level and static checks, while deferring or neglecting critical validations of what is actually shipped.

## Recommended pattern

> Simple is better than complex and special cases aren't special enough.

The recommended pattern is the following:

- **Follow conventional order**: Always keep the pipeline stages in the standard sequence of `build -> test -> deploy`. This preserves clarity and consistency across projects. This doesn't mean that additional stages between or around those three are forbidden, but the order between those three must be kept.
- **Explicitly mark tests that do not depend on build artifacts:** Identify tests such as linting, static analysis, and pure unit tests that can safely operate on raw source code. Label them clearly so they can be run early.
- **Explicitly mark tests that do depend on build artifacts:** Tag all tests requiring the assembled, packaged, or containerized artifacts, including integration tests, deployment validations, and artifact inspections.
- **Let the CI engine orchestrate jobs based on this configuration:** Modern CI systems can schedule independent jobs in parallel. When tests are properly annotated, the CI engine can start build and “source-only” test jobs concurrently. Tests requiring built artifacts will run automatically as soon as the relevant build completes, without waiting for unrelated jobs to finish.

This approach delivers fast feedback where it is genuinely safe and meaningful, while ensuring no critical validations are skipped or deferred. It keeps pipelines predictable, lowers maintenance burden, and avoids the pitfalls of artificially reordering stages.
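The steps above can be sketched in GitLab-CI-style YAML, where `needs: []` lets source-only jobs start immediately in parallel with the build, and `needs: [build]` starts artifact-dependent jobs as soon as the build finishes (job names, script paths, and commands are hypothetical):

```yaml
# Recommended pattern sketch: conventional build -> test -> deploy order,
# with the CI engine scheduling independent jobs in parallel
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script: npm run build
  artifacts:
    paths: [dist/]

lint:                # source-only: starts in parallel with the build
  stage: test
  needs: []
  script: npm run lint

unit-tests:          # source-only: starts in parallel with the build
  stage: test
  needs: []
  script: npm test

smoke-test:          # artifact-dependent: starts as soon as the build completes
  stage: test
  needs: [build]
  script: ./scripts/smoke-test.sh dist/

deploy:
  stage: deploy
  needs: [smoke-test]
  script: ./scripts/deploy.sh dist/
```

With this layout the stage order stays conventional, yet fast source-level feedback arrives no later than it would in a "test before build" pipeline.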