Automotive Software Verification: CI/CD Diagram Series for VectorCAST + RocqStat

2026-03-08

Step-by-step CI/CD diagrams and gate criteria to integrate VectorCAST + RocqStat for automated timing analysis, static verification, and traceable artifacts.

Stop guessing where timing and static checks belong in your CI/CD — embed them as enforceable gates

Automotive development teams still spend weeks chasing flaky timing results, mismatched static-analysis reports, and unclear artifact lineage. The result: delayed releases, audit friction for functional-safety standards, and wasted engineering cycles. In 2026 the integration of RocqStat into VectorCAST has made it realistic to treat timing analysis and static verification as first-class citizens in CI/CD pipelines. This guide shows concrete, step-by-step pipeline diagrams, gate criteria, and artifact-versioning patterns your team can adopt immediately.

Late 2025 and early 2026 saw three industry moves that change verification strategy for automotive embedded software:

  • Vector Informatik's acquisition of StatInf's RocqStat (Jan 2026) and roadmap to integrate WCET/timing tech into VectorCAST.
  • Stricter traceability expectations in ISO 26262:2018 audits for software components in production electric and ADAS ECUs.
  • Growth of software-defined vehicle platforms forcing faster pipelines and reproducible timing evidence for continuous deliveries.
Vector Informatik will integrate RocqStat into its VectorCAST toolchain to unify timing analysis and software verification — Automotive World, Jan 16, 2026.

Those trends mean teams must stop relying on ad-hoc desktop runs of WCET or static tools. You need to automate, version, and gate timing and static results so CI/CD proves compliance and stability every build.

High-level CI/CD pipeline: where timing and static verification belong

Below is a concise flow to adopt across Git-centric workflows (GitHub/GitLab/Bitbucket) and popular CI runners (Jenkins, GitLab Runner, GitHub Actions). Each stage lists the primary goal, inputs, outputs, and gate criteria.

  [Developer Push] --> [Build & Unit Tests] --> [Static Analysis] --> [Timing Analysis (RocqStat)] --> [Integration Tests & HIL] --> [Artifact Registry / Release]
  

Stage: Build & Unit Tests

  • Goal: Produce deterministic build artifacts and unit test results.
  • Inputs: Git commit, build spec, compiler flags (deterministic), toolchain containers.
  • Outputs: Firmware binary, .map file, unit test JUnit XML.
  • Gate criteria: Build success; 100% of critical unit tests pass (organization-defined); the deterministic build checksum matches the recorded baseline for reproducible builds.
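The reproducible-build gate above can be sketched as a small checksum comparison. This is an illustrative helper, not a vendor tool; the function names and the idea of a recorded baseline digest are assumptions for this example.

```python
# Sketch: verify a deterministic build by comparing the artifact's SHA-256
# against a recorded baseline digest. File paths are illustrative.
import hashlib


def sha256_of(path: str) -> str:
    """Stream the file so large firmware images don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def check_reproducible(artifact: str, baseline_digest: str) -> bool:
    digest = sha256_of(artifact)
    if digest != baseline_digest:
        print(f"checksum mismatch: {digest} != {baseline_digest}")
        return False
    return True
```

In CI, the baseline digest would come from a previous release's manifest; a mismatch on an unchanged source tree indicates a non-deterministic toolchain.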

Stage: Static Analysis (MISRA, Coverity, cppcheck, etc.)

  • Goal: Detect coding standard violations and high/critical defects early.
  • Inputs: Build artifacts, source files, static-config (.json/.yml).
  • Outputs: Static report (SARIF or vendor XML/HTML), policy score.
  • Gate criteria (example): No new Critical or High violations; aggregated policy score >= 95%; each file with changed lines must not introduce new critical violations.
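A static gate of this shape can be implemented by diffing findings against a baseline SARIF file. The following is a minimal sketch in the spirit of the hypothetical scripts/score_static.py used later in the CI example; the severity mapping and SARIF field selection are assumptions you would adapt to your tool's output.

```python
# Sketch: fail the static gate when findings appear that are not in the
# baseline SARIF and carry a blocking severity level.
import json

SEVERITIES_BLOCKING = {"error"}  # map your tool's levels to blocking severities


def load_findings(sarif_path: str) -> set:
    """Reduce a SARIF file to comparable (rule, level, file) tuples."""
    with open(sarif_path) as f:
        sarif = json.load(f)
    findings = set()
    for run in sarif.get("runs", []):
        for res in run.get("results", []):
            locs = res.get("locations") or [{}]
            loc = locs[0].get("physicalLocation", {})
            findings.add((
                res.get("ruleId"),
                res.get("level", "warning"),
                loc.get("artifactLocation", {}).get("uri"),
            ))
    return findings


def gate(current: set, baseline: set) -> bool:
    """Return True when no new blocking findings were introduced."""
    new_blocking = [f for f in current - baseline if f[1] in SEVERITIES_BLOCKING]
    for rule_id, level, uri in new_blocking:
        print(f"NEW {level}: {rule_id} in {uri}")
    return not new_blocking
```

Keying on (rule, level, file) rather than line numbers keeps the diff stable across unrelated edits that shift code downward.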

Stage: Timing Analysis (RocqStat integrated with VectorCAST)

  • Goal: Produce reproducible WCET estimates and path traces for functions and runnables.
  • Inputs: Binary, map, ELF, execution-model configuration, platform timing library, measured execution traces if available.
  • Outputs: WCET per runnable (ms), timing margins, timing trace artifacts (JSON/CSV), formal trace linking to test cases.
  • Gate criteria (example): WCET <= timing budget minus safety margin (e.g., budget - 10%); no unbounded loops or unanalyzed constructs; WCET regression <= 2% vs baseline.
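The timing gate can be expressed as a short budget-and-regression check, in the spirit of the hypothetical scripts/check_wcet.py invoked in the CI example below. The flat function-to-milliseconds report layout here is an assumption for illustration, not the RocqStat output schema.

```python
# Sketch: enforce per-function WCET budgets (with safety margin) and a
# regression threshold against a stored baseline. Report/budget layouts
# are assumed to be simple {function: wcet_ms} dicts.
def check_wcet(report: dict, budgets: dict, baseline: dict,
               margin: float = 0.10, max_regression: float = 0.02) -> list:
    """Return a list of violation messages; an empty list means the gate passes."""
    violations = []
    for fn, wcet_ms in report.items():
        budget = budgets.get(fn)
        # Budget check: WCET must fit inside budget minus the safety margin.
        if budget is not None and wcet_ms > budget * (1 - margin):
            violations.append(
                f"{fn}: WCET {wcet_ms}ms exceeds budget {budget}ms minus {margin:.0%} margin")
        # Regression check: relative growth versus the recorded baseline.
        base = baseline.get(fn)
        if base and (wcet_ms - base) / base > max_regression:
            violations.append(
                f"{fn}: regression {(wcet_ms - base) / base:.1%} > {max_regression:.0%}")
    return violations
```

Printing each violation and exiting non-zero when the list is non-empty is all the CI job needs to turn this into a hard gate.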

Stage: Integration Tests & HIL

  • Goal: Validate end-to-end behavior under real inputs, capturing timing under system load.
  • Inputs: Firmware image, HIL scripts, test vectors, scenario definitions.
  • Outputs: Integration test results (JUnit), end-to-end timing logs, traceability matrices back to requirements.
  • Gate criteria: No failing safety-critical integration tests; timing logs consistent with WCET reports; 100% traceability coverage for release-critical requirements.
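The "timing logs consistent with WCET reports" criterion can be checked mechanically: a measured maximum above the analytical WCET means the platform model is wrong and must fail the gate. A minimal sketch, assuming both inputs are simple {function: milliseconds} dicts:

```python
# Sketch: cross-check measured HIL maxima against analytical WCET estimates.
# A measured value above the estimate, or a function measured on HIL but
# absent from the WCET report, is flagged as an inconsistency.
def measured_exceeds_wcet(measured_max_ms: dict, wcet_ms: dict) -> list:
    issues = []
    for fn, measured in measured_max_ms.items():
        wcet = wcet_ms.get(fn)
        if wcet is None:
            issues.append(f"{fn}: measured on HIL but missing from WCET report")
        elif measured > wcet:
            issues.append(f"{fn}: measured max {measured}ms exceeds WCET {wcet}ms")
    return issues
```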

Detailed pipeline diagram (annotated for automation)

Use this annotated ASCII diagram as a template to draw your internal diagrams (PlantUML, draw.io, Diagrams.net) and embed into docs for auditors.

  +------------------+    +--------------------+    +--------------------+
  |  Git Push / PR   | -> | Build & Unit Tests | -> |  Static Analysis   |
  +------------------+    +--------------------+    +--------------------+
           |                        |                       |
           |                        v                       v
           |                 artifacts: bin,map       SARIF/XML reports
           |                        |                       |
           v                        v                       v
    +-------------------+      +----------------+    +---------------------+
    | Requirement & CI  | <--> | Timing Analysis| -> | Integration / HIL   |
    | Trace Collection  |      |  (RocqStat)    |    | (end-to-end tests)  |
    +-------------------+      +----------------+    +---------------------+
                                  |      |                     |
                                  |      v                     v
                                  |  WCET JSON/CSV         test logs & traces
                                  |      |                     |
                                  v      v                     v
                                +--------------------------------+
                                | Artifact Registry / Release    |
                                | (binary + reports + metadata)  |
                                +--------------------------------+
  

How to version and store verification artifacts

Artifact versioning and metadata are the backbone of traceability. Store the binary next to verification artifacts and a small manifest that ties the whole set to a Git commit. Example manifest fields (JSON):

  {
    "artifact_version": "1.12.3",
    "git_commit": "a1b2c3d4",
    "build_timestamp": "2026-01-10T15:23:12Z",
    "compiler": "gcc-12.2.0",
    "toolchain_container": "registry.example.com/toolchains/gcc:12.2",
    "vectorcast_version": "2026.1",
    "rocqstat_version": "2026.1-rc",
    "static_report": "static-report.sarif",
    "wcet_report": "wcet-report.json",
    "integration_report": "integration-junit.xml",
    "traceability_matrix": "traceability.csv"
  }
  

Best practice: Artifacts and the manifest must be immutable once published to the artifact registry (Artifactory, Nexus, GitLab Package Registry). Use content-addressable storage where possible and attach cryptographic signatures for production releases.
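The manifest-assembly step can be sketched in a few lines, in the spirit of the hypothetical scripts/assemble_manifest.py from the CI example below. The GitLab-predefined variables (CI_COMMIT_TAG, CI_COMMIT_SHORT_SHA) are used here as one plausible metadata source; the report file names match the manifest example above.

```python
# Sketch: gather CI metadata and report paths into the release manifest.
# Defaults are placeholders for local runs outside CI.
import datetime
import json
import os


def assemble_manifest(out_path: str = "manifest.json") -> dict:
    manifest = {
        "artifact_version": os.environ.get("CI_COMMIT_TAG", "0.0.0-dev"),
        "git_commit": os.environ.get("CI_COMMIT_SHORT_SHA", "unknown"),
        "build_timestamp": datetime.datetime.now(datetime.timezone.utc)
            .strftime("%Y-%m-%dT%H:%M:%SZ"),
        "static_report": "static-report.sarif",
        "wcet_report": "wcet-report.json",
        "integration_report": "integration-junit.xml",
        "traceability_matrix": "traceability.csv",
    }
    with open(out_path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```

Once written, the manifest should be published alongside the reports it names and never mutated in place.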

Gate criteria templates and example thresholds

Below are practical gate templates you can implement in CI scripts or policy engines (Open Policy Agent, custom scripts).

  1. Static Analysis Gate
    • Fail the pipeline if new Critical defects > 0.
    • Fail if policy_score drops more than 1% below baseline_score.
    • Allowlist legacy files automatically, but track them separately.
  2. Timing Gate
    • WCET(function) <= assigned_budget(function) * 0.9 (10% safety margin).
    • WCET regression from baseline <= 2% (configurable per component).
    • If RocqStat reports unanalyzed paths, fail the gate, require manual review, and block the release.
  3. Traceability Gate
    • All changed requirements must map to at least one unit, integration test, or timing analysis entry.
    • Missing mappings trigger a required reviewer field before merging.
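The traceability gate reduces to a set difference: every changed requirement must appear in the mapping matrix with at least one test or timing entry. A minimal sketch, assuming the CSV layout shown later in this article (requirement_id, test_id, function_name, ...):

```python
# Sketch: report changed requirements that have no mapped test or
# timing-analysis entry in the traceability matrix.
import csv


def unmapped_requirements(changed_reqs: set, matrix_csv: str) -> set:
    mapped = set()
    with open(matrix_csv, newline="") as f:
        for row in csv.DictReader(f):
            # A row counts as a mapping if it names a test or a function
            # covered by timing analysis.
            if row.get("test_id") or row.get("function_name"):
                mapped.add(row["requirement_id"])
    return changed_reqs - mapped
```

A non-empty result set is what triggers the required-reviewer field before merge.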

Example GitLab CI snippet: integrate VectorCAST + RocqStat runs

This example shows job stages, artifact collection, and a simple timing-gate job. Replace tool invocations with your vendor CLI or Docker wrappers.

  stages:
    - build
    - static
    - timing
    - integration
    - publish

  build:
    stage: build
    script:
      - ./scripts/build.sh --deterministic --output=build/app.elf
    artifacts:
      paths: [build/app.elf, build/app.map]

  static_analysis:
    stage: static
    script:
      - static-tool --input=src --output=static-report.sarif
      - python scripts/score_static.py static-report.sarif --baseline baseline/static.sarif
    artifacts:
      paths: [static-report.sarif]

  timing_analysis:
    stage: timing
    script:
      - docker run --rm -v $(pwd):/work rocqstat:2026.1-rc /work/build/app.elf --map /work/build/app.map --out wcet-report.json
      - python scripts/check_wcet.py wcet-report.json --budgets budgets.json --threshold 0.1
    artifacts:
      paths: [wcet-report.json]
    allow_failure: false

  publish:
    stage: publish
    script:
      - python scripts/assemble_manifest.py --out manifest.json
      - curl -u $ART_USER:$ART_PASS -T build/app.elf "$ARTIFACTORY/repo/app/1.12.3/app.elf"
      - curl -u $ART_USER:$ART_PASS -T wcet-report.json "$ARTIFACTORY/repo/app/1.12.3/wcet-report.json"
    dependencies: [build, static_analysis, timing_analysis]
  

Notes: use a deterministic toolchain container, sign published artifacts, and lock in specific VectorCAST/RocqStat versions in your manifest.

Traceability: linking tests, timing traces, and requirements

Traceability is an audit requirement under ISO 26262 and is increasingly mandated in supplier contracts. Aim to produce three mappings automatically:

  • Requirement <-- unit/integration test (test ID in JUnit XML annotations).
  • Requirement <-- WCET entry (map function to requirement via code annotations or runnables).
  • Test <-- Timing trace (link the test-case ID to runtime traces produced by RocqStat/VectorCAST).

Store these mappings as CSV or as an ALM artifact (Polarion, Jama, DOORS) and keep a local copy in the artifact bundle. Example CSV header:

  requirement_id,test_id,function_name,wcet_ms,trace_file
  REQ-123,UT-456,BrakeCtrl_Update,2.3,wcet-trace-REQ-123-UT-456.json
  

Exports and embedding for auditors and stakeholders

Auditors want readable proofs: include an audit bundle in every release. The bundle should contain:

  • Manifest.json (metadata)
  • Binary (signed)
  • Static analysis SARIF/XML
  • WCET JSON/CSV with path traces
  • Integration test JUnit XML and logs
  • Traceability matrix (CSV/Excel)

Provide two exports from the artifact registry: a machine-readable bundle (zip/tar with JSON manifests) and a human-readable HTML report (generated from the SARIF + WCET JSON) that shows per-requirement pass/fail and timing margins. Many verification teams embed the HTML report into their internal documentation and link it from release notes.
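The human-readable report can start very simply: one table row per function with its timing margin, generated straight from the WCET JSON. This sketch assumes the same flat {function: milliseconds} layout used in the gate examples; a real generator would also fold in SARIF findings and per-requirement status.

```python
# Sketch: render a minimal HTML table from WCET results and budgets,
# showing per-function margin and pass/fail status.
def render_html(wcet_ms: dict, budgets_ms: dict) -> str:
    rows = []
    for fn, wcet in sorted(wcet_ms.items()):
        budget = budgets_ms.get(fn)
        margin = f"{(budget - wcet) / budget:.0%}" if budget else "n/a"
        status = "PASS" if budget and wcet <= budget else "FAIL"
        rows.append(f"<tr><td>{fn}</td><td>{wcet}</td>"
                    f"<td>{margin}</td><td>{status}</td></tr>")
    return ("<table><tr><th>Function</th><th>WCET (ms)</th>"
            "<th>Margin</th><th>Status</th></tr>" + "".join(rows) + "</table>")
```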

Troubleshooting common CI failures

Here are actionable steps for frequent pipeline failures:

  • Deterministic build mismatch: Pin compiler and linker flags; cache containers; include build-id in manifest for easier comparison.
  • Static analysis flakiness: Lock tool versions; run in a container; avoid local tool rules that differ across developers.
  • Timing regressions: Re-run RocqStat with the same platform model; compare map files; collect hardware execution traces to corroborate WCET changes.
  • Traceability gaps: Use pre-merge hooks to require test and requirement tags in commit metadata or PR templates.

Case study: ECU throttle controller (hypothetical)

Scenario: A mid-size OEM must ensure the throttle ECU stays within a 5 ms control-loop budget with a 10% margin and prove compliance for an ASIL-B component. The team adopted the pipeline above and saw measurable improvements.

Before: Teams ran static checks in isolation and did ad-hoc WCET runs on desktops. Releases required a week-long “timing triage” where engineers reproduced results across machines. Traceability was manual.

After implementing VectorCAST + RocqStat in CI (via Docker wrappers) and enforcing timing gates:

  • Automated WCET reports were produced on every push; regressions were detected within minutes.
  • Average time to resolve timing regressions dropped from 4 days to 10 hours due to deterministic artifacts and machine-readable traces.
  • Audits accepted the automated audit bundles; supplier overhead decreased.

Key engineering change: the team tracked a baseline WCET per runnable and enforced a 2% regression threshold in CI. They also required each failing regression to open a ticket linked in the CI job output — improving accountability.

Advanced strategies & future-proofing (2026+)

As Vector continues to merge RocqStat into VectorCAST through 2026, consider these advanced tactics:

  • Hardware-in-the-loop (HIL) combined with analytical WCET: Correlate measured traces with RocqStat estimates automatically and record divergence metrics in the manifest.
  • Policy-as-code for gating: Express gating rules in OPA or a similar policy engine so you can change thresholds without altering CI scripts.
  • SBOM + verification artifact binding: Produce a Software Bill of Materials and bind verification artifacts to SBOM entries for supply-chain verification.
  • ML-assisted anomaly detection: Use analytics on historical WCET and static trends to surface subtle regressions before thresholds are breached.

Checklist: Quick rollout in 6 weeks

  1. Week 1: Containerize deterministic toolchain and VectorCAST/RocqStat CLI (or vendor images).
  2. Week 2: Add build & unit-stage with deterministic checksum artifacts.
  3. Week 3: Add static analysis SARIF generation and enforce static gate.
  4. Week 4: Add RocqStat timing job, collect WCET JSON, and implement timing gate script.
  5. Week 5: Implement artifact registry and manifest assembly; sign artifacts.
  6. Week 6: Add traceability automation and audit bundle generation; run mock audit.

Conclusion — actionable takeaways

  • Embed timing and static checks early: Run them right after build and before integration to fail fast.
  • Make gates explicit and measurable: Use percent-based regression thresholds and absolute budgets for WCET.
  • Version everything: Binary + static report + WCET outputs + manifest = single source for audits.
  • Automate traceability: Map requirements to tests and timing traces programmatically and store the mapping with artifacts.

Vector's acquisition of RocqStat (Jan 2026) accelerates a future where timing analysis is seamlessly integrated into code-testing toolchains. Teams that adopt these CI/CD patterns now will reduce release friction, speed audits, and gain confidence in the timing behavior of safety-critical vehicle software.

Get started (call-to-action)

If you manage automotive verification pipelines, start by producing a repeatable CI job that runs RocqStat (or your timing tool) and emits a single WCET JSON artifact. Use the manifest example above and attach it to a daily build. If you'd like a ready-to-run GitLab CI template and a PlantUML diagram adapted to your repo layout, request our templated bundle — it includes gating scripts and a mock audit report you can use in supplier reviews.

Request the bundle: visit diagrams.us/requests or contact your VectorCAST vendor representative to learn about official RocqStat integrations and supported CI patterns in 2026.


Related Topics

#Automotive #DevOps #Verification