Connector skills

Connectors in this framework are produced by four category-aware skills: analyze-source, provision-source, generate-connector, and validate-implementation. Each connector page includes an Implementation log section recording the skill version (git ref), inputs, and outputs of its generation.

Generated connectors

| Source | Category | analyze-source | provision-source | generate-connector | validate-implementation | Status |
| --- | --- | --- | --- | --- | --- | --- |
| ServiceNow | cmdb | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| OWASP ZAP | dast | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| Dependency-Track | sca | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| Semgrep | sast | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| SonarQube | sast | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| GitHub | scm | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| GitLab | scm | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| TruffleHog | secrets | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| AWS WAF | waf | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |

Each date cell links to the corresponding row of the Implementation log section on that source's connector page. Connectors generated later by the skills add rows here.

The four skills

analyze-source

Overview

This skill produces a six-section documentation page for each connector from the official API documentation of a source. The page lives at mkdocs/docs/connectors/<category>/<source-slug>.md and feeds the downstream provision-source, generate-connector, and validate-implementation skills. The Reference section captures the seven API facts the framework needs to mechanically derive a connector module. The Implementation log section opens the lifecycle audit trail that the three downstream skills extend.

This skill is the analysis stage. It does not write any Python, YAML, or test code. It only writes Markdown. Category-specific facts (applicable REQ-IDs, default severity, HWM preference, dedup key structure, target Silver tables, auth norms, ingestion tooling preference, quirks) live in references/<category>.md and MUST be read for the source's category before drafting Reference.

Inputs

  • Source name and homepage URL.
  • Official API documentation URL (fetched via WebFetch).
  • AppSec category: one of cmdb, scm, sast, sca, secrets, dast, waf. Determines which references/<category>.md to load and the output directory.
  • Optional: live API credentials for fixture generation. Skip if unavailable.

Output

A single Markdown page emitted at mkdocs/docs/connectors/<category>/<source-slug>.md, where <category> matches the input and <source-slug> is a kebab-case slug of the source name. The page has six top-level sections:

  1. Overview: what this connector does, its role in the platform, and which Silver table(s) it populates. For sources outside MVP scope, include the !!! info "Not in MVP scope" admonition.
  2. Prerequisites: how to set up the external service and extract credentials (API keys, OAuth apps, PATs).
  3. Reference: the seven API facts (see the bullet list under Procedure).
  4. Setup: configuration, bundle deployment, first run commands. Stub with !!! info "Not implemented in MVP" if out of scope.
  5. Validation: always stubbed on first emit with !!! info "Pending validation". validate-implementation populates this later.
  6. Implementation log: a Markdown table with four rows. Row 1 is filled by this skill (see Implementation log row template below). Rows 2, 3, and 4 are placeholders marked (pending) for provision-source, generate-connector, and validate-implementation to fill in.
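The <source-slug> in the output path is assumed to be a plain kebab-case transform of the source name, consistent with the Generated connectors table above. A minimal sketch of that derivation (the kebab_slug helper is hypothetical, not part of the framework):

```python
import re

def kebab_slug(source_name: str) -> str:
    """Lower-case the name and collapse runs of non-alphanumerics into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", source_name.lower()).strip("-")

assert kebab_slug("OWASP ZAP") == "owasp-zap"
assert kebab_slug("Dependency-Track") == "dependency-track"
```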

Procedure

  1. Read references/<category>.md for the category-specific facts that influence the seven API facts in the Reference section. These include applicable REQ-IDs, default severity, HWM preference, dedup key structure, target Silver tables, auth norms, ingestion tooling preference, and quirks.
  2. Fetch the API documentation for the source via WebFetch from the input URL. Cache the fetched content for citations.
  3. Identify the authentication mechanism the source supports. Cross-check against the auth norm for the category in references/<category>.md. If the source supports multiple auth modes, select the one matching the category convention.
  4. Enumerate the endpoints required to populate the Silver tables assigned to the category of the source. Cross-reference the Silver Table Ownership table at mkdocs/docs/platform/reference/catalog.md and the standardized schemas at mkdocs/docs/platform/reference/canonical-mapping.md.
  5. Select the incremental strategy per the preference order in references/<category>.md (typical order: webhook > native HWM column > full reload; some categories override).
  6. Extract a consumed field schema excerpt (only fields the connector actually reads) matching the standardized Silver fields from mkdocs/docs/platform/reference/canonical-mapping.md (entities or findings schema, whichever applies to the category).
  7. Produce severity and status lookup proposals per the standardized enumeration models at mkdocs/docs/platform/reference/canonical-mapping.md. For categories where severity or status do not apply (CMDB, secrets-status), record the N/A explicitly.
  8. Document quirks: deviations from category norms, format surprises, handling policies for each source. Cross-check references/<category>.md for category quirks the source may inherit.
  9. Assemble the six section Markdown page and emit to the output path.
  10. Stub the Implementation log section with the row for this skill (date, inputs, outputs, skill repo ref via git rev-parse --short HEAD); a row-assembly sketch follows this list. Leave the rows for provision-source, generate-connector, and validate-implementation marked (pending).
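A minimal sketch of step 10's row assembly against the row template defined below; the two git commands come from the procedure, everything else is an illustrative helper:

```python
import subprocess
from datetime import date

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout.strip()

def implementation_log_row_1(category: str, source: str, doc_url: str, slug: str) -> str:
    sha = git("rev-parse", "--short", "HEAD")          # skill repo ref
    branch = git("rev-parse", "--abbrev-ref", "HEAD")  # current branch
    return (
        f"| Source analysis | analyze-source ({category}) "
        f"| name={source}; url={doc_url}; category={category} "
        f"| mkdocs/docs/connectors/{category}/{slug}.md §1–§3 "
        f"| {date.today().isoformat()} | {sha} ({branch}) |"
    )
```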

The seven API facts captured under Reference are:

  • API: REST / GraphQL / SDK / CLI; endpoints consumed; authentication mechanisms.
  • Pagination and rate limits: strategy and quotas.
  • Incremental hook: webhook, native HWM column, scan-id, or full reload (per references/<category>.md).
  • Resource schema excerpt: Markdown table with columns Field / Type / Meaning, scoped to fields the connector reads.
  • Enumerations: severity and status mappings against the standardized models.
  • Quirks: deviations from category norms; format surprises.

Authentication is folded into the API fact, so only six facts are visible above, but the framework documentation calls it seven. Preserve the seven-fact wording when writing Reference, matching the existing baselines.

Invariants

  • Every official documentation URL used MUST be cited inline or in a References list at the bottom of the page. No fabricated facts: every claim about the API of the source MUST be traceable to fetched documentation or to references/<category>.md.
  • The severity and status lookups MUST cover every documented source value. Undocumented values default to the configured fallback with a data quality warning noted inline (see the sketch after this list).
  • The page slug and category directory MUST match the AppSec category input exactly. Do not invent a new category.
  • The Implementation log table MUST have row 1 populated. Rows 2, 3, and 4 MUST exist with the (pending) marker so downstream skills have a target to overwrite.
  • Output is Markdown only. Do not write Python, YAML, or test files in this skill.
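A sketch of the fallback invariant above, assuming severity.yml is a flat source-value to canonical-severity mapping (the real lookup schema is category-specific; see references/<category>.md):

```python
import logging
import yaml  # PyYAML, assumed available

log = logging.getLogger("connector.dq")

def normalize_severity(raw: str, lookup_path: str, fallback: str = "medium") -> str:
    with open(lookup_path) as fh:
        lookup = yaml.safe_load(fh) or {}
    if raw in lookup:
        return lookup[raw]
    # Undocumented source value: apply the configured fallback and surface the warning.
    log.warning("data quality: undocumented severity %r, defaulting to %r", raw, fallback)
    return fallback
```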

Category-specific invariants (applicable REQ-IDs, default severity convention, HWM preference, dedup key structure, target Silver tables, auth norms, ingestion tooling preference, category quirks) live in references/<category>.md. Read the file matching the input category before drafting Reference.

Implementation log row template

Append exactly one row to the Implementation log table for this invocation of the skill. Use this row structure verbatim, replacing the bracketed placeholders:

| Source analysis | analyze-source ({category}) | name={source}; url={doc_url}; category={category} | mkdocs/docs/connectors/{category}/{slug}.md §1–§3 | {YYYY-MM-DD} | {git_short_sha} ({branch}) |
  • {category}: the AppSec category input (cmdb, scm, sast, sca, secrets, dast, or waf).
  • {source}: the source name input.
  • {doc_url}: the official API documentation URL input.
  • {slug}: the kebab-case slug used in the output filename.
  • {YYYY-MM-DD}: the run date in ISO format.
  • {git_short_sha}: output of git rev-parse --short HEAD on the repo for the skill.
  • {branch}: output of git rev-parse --abbrev-ref HEAD.

Rows 2, 3, and 4 of the Implementation log table must be present and marked (pending) so that provision-source, generate-connector, and validate-implementation can overwrite them on their respective runs. The standard placeholder rows are:

| Source provisioning | provision-source ({category}) | (pending) | (pending) | (pending) | (pending) |
| Implementation | generate-connector ({category}) | (pending) | (pending) | (pending) | (pending) |
| Validation | validate-implementation ({category}) | (pending) | (pending) | (pending) | (pending) |

Rendered from .claude/skills/analyze-source/SKILL.md. Source of truth lives in the skill file.

provision-source

Overview

This skill emits the source-side provisioning artefacts for a single source: the Terraform configuration that stands up the source's data path (e.g. an EKS CronJob for Semgrep, AWS WAF + Firehose-to-S3 for AWS WAF, a GitHub Actions workflow snippet for the CI/CD-step path, or a no-op smoke-test for SaaS-only CMDB sources), a per-runtime operator README, an install.sh that runs terraform apply against this runtime/, and the connector page's §Source provisioning section. The skill is read-only over the page sections produced by analyze-source; it modifies only the source-side runtime/ subtree, the page's §Source provisioning section, and Implementation log row 2.

This skill does NOT emit the contents of runtime/files/*. Bulky operator-authored sidecars (demo target forks, scan helper scripts, IaC fixtures) live there and are authored directly by the operator. The skill emits REFERENCES to those paths in runtime/main.tf (e.g. as local_file data sources, kubernetes_config_map data blocks, or via var.demo_target_paths-style variables) but never their bodies. Category-specific facts that drive the emit (Terraform shape, install.sh shape, runtime/files/* conventions, the operational.yml.source_runtime schema, the page §Source provisioning template) live in references/<category>.md and MUST be read for the source's category before any file is written.

Inputs

  • Source name — determines the runtime path src/connectors/{source}/runtime/.
  • Per-connector page path — mkdocs/docs/connectors/{category}/{slug}.md produced by analyze-source.
  • AppSec category — one of cmdb, scm, sast, sca, secrets, dast, waf. Determines which references/<category>.md to load and the page path.
  • operational.yml path — src/connectors/{source}/operational.yml. The skill reads the source_runtime: sub-block; required fields per category are listed in references/<category>.md.

Preconditions:

  • The connector page exists at mkdocs/docs/connectors/{category}/{slug}.md with Reference §3 populated and an Implementation log table whose row 2 is marked (pending).
  • operational.yml is interactively bootstrapped, or completed when required fields are missing (a sketch of this flow follows these preconditions). If operational.yml is absent, the skill creates it with the two top-level keys source_runtime: and databricks_runtime: empty, then gathers each required source_runtime: field. For every field declared required in references/<category>.md, the skill issues an AskUserQuestion call presenting the schema's recommended default ("(Recommended)"), a "Use a placeholder for deploy-time fill" option (which writes the literal string <your-{field-name}>), and the automatic "Other" option for free-text input. Up to four questions are batched per call. Each answer is written to operational.yml.source_runtime.<field>, preserving file structure and adjacent comments. The skill never overwrites an already-populated field and never touches the databricks_runtime: sub-block.
  • Halt-on-missing remains as a fallback when AskUserQuestion is unavailable (headless or unattended runs). In that fallback, the skill halts without partial-emitting any runtime/ file and reports the structured list of missing fields so the controller can populate them and re-run.
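A sketch of the bootstrap flow, assuming ruamel.yaml for comment-preserving round-trips; ask_user_question below is a hypothetical stand-in for the AskUserQuestion tool:

```python
from pathlib import Path
from ruamel.yaml import YAML

yaml = YAML()  # round-trip mode preserves file structure and adjacent comments

def ask_user_question(fields: list[str]) -> dict[str, str]:
    # Hypothetical stand-in for one AskUserQuestion call (up to four fields per batch).
    return {f: input(f"{f}: ") for f in fields}

def gather_missing(op_yml: Path, required: list[str]) -> None:
    if op_yml.exists():
        data = yaml.load(op_yml)
    else:
        # Bootstrap with the two top-level keys empty.
        data = {"source_runtime": {}, "databricks_runtime": {}}
    runtime = data.setdefault("source_runtime", {})
    missing = [f for f in required if not runtime.get(f)]  # never overwrite populated fields
    for i in range(0, len(missing), 4):                    # at most four questions per call
        for field, answer in ask_user_question(missing[i:i + 4]).items():
            runtime[field] = answer or f"<your-{field}>"   # placeholder for deploy-time fill
    yaml.dump(data, op_yml)
```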

Output

  • src/connectors/{source}/runtime/main.tf — Terraform configuration with provider declarations and per-category resources (interpolated from operational.yml.source_runtime values; runtime/files/* paths referenced via local_file data sources or variables).
  • src/connectors/{source}/runtime/variables.tf — Terraform variable declarations for every value that varies per-deployment.
  • src/connectors/{source}/runtime/outputs.tf — Terraform outputs (e.g. S3 bucket ARN, CronJob name, IAM role ARN) consumed by the Databricks-side install phase.
  • src/connectors/{source}/runtime/versions.tf — Terraform and provider version pinning per category template.
  • src/connectors/{source}/runtime/README.md — per-runtime operator README explaining what this runtime/ provisions, the variables to set, and how to invoke runtime/install.sh.
  • src/connectors/{source}/runtime/install.sh — runs terraform apply against this runtime/. For categories where the source is a SaaS with no infrastructure to provision (e.g. CMDB / ServiceNow), runtime/install.sh may be a no-op or a smoke-test script per references/<category>.md.
  • Connector page §Source provisioning section — operator-facing source-provisioning runbook for this category, referencing runtime/install.sh and any runtime/files/* operator-authored sidecars.
  • Implementation log row 2 — overwrite the (pending) placeholder for provision-source per the row template below.

Procedure

  1. Read references/<category>.md for the category-specific facts: the operational.yml.source_runtime schema (which fields are required), the Terraform shape (provider, modules, variables exposed, outputs), runtime/files/* conventions (which sidecar paths apply for the category and how main.tf references them), the runtime/install.sh shape, the runtime/README.md template, and the page §Source provisioning section template.
  2. Read src/connectors/{source}/operational.yml.source_runtime: sub-block and interactively gather any missing required fields. Identify every source_runtime: field marked required in references/<category>.md. If operational.yml is missing, create it with the two top-level keys (source_runtime:, databricks_runtime:) empty before continuing. For each missing required field, invoke AskUserQuestion (batching up to four questions per call). Write each answer back to operational.yml.source_runtime.<field>, preserving structure and adjacent comments. Re-validate the sub-block; if any required field is still missing AND AskUserQuestion is unavailable, halt and report the structured list of missing fields without partial-emitting any runtime/ file. Otherwise proceed to step 3.
  3. Read connector page §3 Reference to extract any source-side facts the page documents (e.g. authentication mechanism that informs Terraform IAM, S3 prefix structure, webhook endpoint shape). Do not modify the page §3 content.
  4. Emit src/connectors/{source}/runtime/main.tf from the category template, interpolating values from operational.yml.source_runtime (an interpolation sketch follows this list). Emit references to the runtime/files/* paths declared in the operational.yml's demo_target_paths-style fields via local_file data sources or Terraform variables — do NOT generate the file contents.
  5. Emit src/connectors/{source}/runtime/variables.tf, runtime/outputs.tf, runtime/versions.tf per the category template.
  6. Emit src/connectors/{source}/runtime/README.md from the category README template, listing the variables to set and the invocation command for runtime/install.sh.
  7. Emit src/connectors/{source}/runtime/install.sh per the category shape (typical: cd "$(dirname "$0")" && terraform init && terraform apply -auto-approve). For SaaS-only categories where there is no infrastructure to provision, emit the no-op or smoke-test variant per references/<category>.md.
  8. Update the connector page §Source provisioning section: replace the section body (or insert the section if absent) with the operator-facing source-provisioning runbook for this category, referencing runtime/install.sh and any runtime/files/* operator-authored sidecars.
  9. Update the connector page's Implementation log row 2: overwrite the (pending) placeholder for provision-source using the row template below. Run date in ISO format; git rev-parse --short HEAD for the skill repo ref; git rev-parse --abbrev-ref HEAD for the branch.
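A sketch of step 4's interpolation under one possible template mechanism (string.Template with $placeholders); the real category templates and their format live in references/<category>.md:

```python
from pathlib import Path
from string import Template

def emit_main_tf(category_template: Path, runtime_dir: Path, source_runtime: dict[str, str]) -> None:
    # Interpolate operational.yml.source_runtime values into the category template.
    # runtime/files/* bodies are never generated here; main.tf only references those paths.
    body = Template(category_template.read_text()).substitute(source_runtime)
    (runtime_dir / "main.tf").write_text(body)
```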

Invariants

  • No file is written outside src/connectors/{source}/runtime/, the connector page §Source provisioning section, and the Implementation log row 2 cell. The source-provisioning emit is self-contained.
  • src/connectors/{source}/runtime/files/* is NEVER modified by this skill — it is operator-authored sidecar territory. The skill emits REFERENCES to those paths in runtime/main.tf (via local_file data sources, kubernetes_config_map data blocks, or var.demo_target_paths-style variables) but does NOT generate the file contents.
  • Shared files (databricks.yml, mkdocs/mkdocs.yml, pyproject.toml, the aggregator at mkdocs/docs/platform/reference/connector-skills.md, the catalog at mkdocs/docs/platform/reference/catalog.md) are NEVER touched by this skill.
  • The skill modifies src/connectors/{source}/operational.yml ONLY to fill in missing required-by-category source_runtime: fields gathered via AskUserQuestion. It NEVER overwrites existing populated fields, NEVER modifies the databricks_runtime: sub-block (provision-source only writes to source_runtime:), and NEVER removes fields. If AskUserQuestion is unavailable AND any required-by-category source_runtime: field is still missing, the skill halts and reports the structured list — does not partial-emit any runtime/ file.
  • The connector page sections owned by other skills are NEVER touched: §1–§3 (analyze-source), §4–§7 / §Run-the-job / §Verify / §Troubleshooting (generate-connector), §5 Validation (validate-implementation). Only the §Source provisioning section and Implementation log row 2 are written.
  • For SaaS-only categories where there is no infrastructure to provision, runtime/install.sh is still emitted (as a no-op or smoke-test) so the top-level install.sh chain emitted by generate-connector can call it unconditionally.

Category-specific invariants (operational.yml.source_runtime schema, Terraform shape, runtime/files/* conventions, runtime/install.sh shape, runtime/README.md template, page §Source provisioning section template) live in references/<category>.md. Read the file matching the input category before emitting any file.

Implementation log row template

Overwrite the (pending) placeholder for provision-source (row 2) in the connector page's Implementation log table. Use this row shape verbatim, replacing the bracketed placeholders:

| Source provisioning | provision-source ({category}) | source_runtime fields=<comma-separated list> | src/connectors/{source}/runtime/, mkdocs/docs/connectors/{category}/{slug}.md §Source provisioning | {YYYY-MM-DD} | {git_short_sha} ({branch}) |
  • {category} — the AppSec category input (cmdb, scm, sast, sca, secrets, dast, or waf).
  • <comma-separated list> — the names of every source_runtime: field actually read from operational.yml during this run (e.g. cloud_provider, account_id, region, cluster_name, demo_target_paths). Drives the audit trail for which inputs pinned this emit.
  • {source} — the source name input.
  • {slug} — the kebab-case slug used in the connector page filename.
  • {YYYY-MM-DD} — the run date in ISO format.
  • {git_short_sha} — output of git rev-parse --short HEAD on the skill's repo.
  • {branch} — output of git rev-parse --abbrev-ref HEAD.

Rows 1 (analyze-source), 3 (generate-connector), and 4 (validate-implementation) MUST remain untouched.

Rendered from .claude/skills/provision-source/SKILL.md. Source of truth lives in the skill file.

generate-connector

Overview

This skill emits the eight-file connector module for a single source given a reviewed connector page. Category-specific facts that drive code emission (target Silver tables, dedup key tuple, ingestion tooling preference overrides, mapping.yml structure, applicable REQ-IDs to bind tests against, category quirks) live in references/<category>.md and MUST be read for the source's category before any file is written.

Inputs

  • Source name: determines the module path src/connectors/{source}/ and the lookup filenames src/connectors/{source}/severity.yml, src/connectors/{source}/status.yml.
  • Connector page path: mkdocs/docs/connectors/{category}/{slug}.md produced by analyze-source.
  • AppSec category: one of cmdb, scm, sast, sca, secrets, dast, waf. Determines which references/<category>.md to load.

Preconditions:

  • The connector page exists at mkdocs/docs/connectors/{category}/{slug}.md and has been reviewed for completeness (Reference section populated; Implementation log row 1 filled by analyze-source).
  • The shared utilities for the framework at src/platform/ are intact (HTTP client, pagination, HWM state, severity/status normalization, dedup helpers). Generated code references them at this same src/platform/ worktree path.

Output

A connector module composed of exactly the following eight files (per the baseline procedure at mkdocs/docs/connectors/sast/skills.md § "## generate-connector"):

  • src/connectors/{source}/config.yml: base URL, endpoints, pagination, HWM column (or scan-id / commit-SHA / artefact-prefix per category override), target Bronze table, credential reference.
  • src/connectors/{source}/ingest.py: implements ingest(run_id, state) -> batch per the connector contract in mkdocs/docs/platform/reference/catalog.md (a skeleton sketch follows this list).
  • src/connectors/{source}/transform.py: implements transform(bronze_df) -> silver_df per the normalization rules in mkdocs/docs/platform/reference/canonical-mapping.md.
  • src/connectors/{source}/mapping.yml: declarative Bronze to Silver column expressions referencing the severity and status lookups by file path.
  • src/connectors/{source}/severity.yml: severity lookup for the source. For categories where severity is N/A or conventional, see references/<category>.md.
  • src/connectors/{source}/status.yml: status lookup for the source. For categories where status is N/A, see references/<category>.md.
  • resources/{source}-job.yml: standard two-task Lakeflow job bundle fragment per the template at mkdocs/docs/platform/reference/connector-job-template.md.
  • src/connectors/{source}/tests/: pytest suite (test_ingest.py, test_transform.py, fixtures/{endpoint}_{scenario}.json) with one test function per applicable REQ-ID, each marked with @pytest.mark.requirement("REQ-...").
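A skeleton sketch of ingest.py against the ingest(run_id, state) -> batch contract; the fetch helper, batch type, and field names are illustrative, and a real connector imports the shared HTTP, pagination, and HWM utilities from src/platform/ instead of defining its own:

```python
from typing import Any, Iterable

def fetch_pages(since: str | None) -> Iterable[list[dict[str, Any]]]:
    # Stand-in for the shared client/pagination helpers in src/platform/.
    yield []

def ingest(run_id: str, state: dict[str, Any]) -> list[dict[str, Any]]:
    """Fetch records newer than the stored high-water mark and return one batch."""
    hwm = state.get("hwm")  # e.g. the last updated_at seen, per config.yml's HWM column
    batch: list[dict[str, Any]] = []
    for page in fetch_pages(since=hwm):
        batch.extend(page)
    if batch:
        state["hwm"] = max(record["updated_at"] for record in batch)
    return batch
```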

Procedure

  1. Read references/<category>.md for the category-specific facts: target Silver tables, dedup key tuple to encode in transform/dedup logic, ingestion tooling preference overrides (CLI artefact exceptions for SAST CLI, secrets CLI, DAST CLI), mapping.yml structure for this category (entity-only, finding-only, dual entity+finding for SCM, or event structure for WAF), applicable REQ-IDs to bind tests against, and category quirks affecting code emission.
  2. Read the connector page and extract the seven API facts captured by analyze-source: authentication mechanism, pagination style, incremental hook (HWM column / webhook / scan-id / artefact prefix / full reload), endpoints, consumed field schema, severity vocabulary, status vocabulary, quirks.
  3. Emit src/connectors/{source}/config.yml with the extracted parameters. Use the HWM structure per the category reference (record-level updated_at for SAST/SCA/CMDB/SCM; scan-id for DAST server; commit-SHA or scan-start timestamp for full-reload categories such as secrets and the CLI artefact paths).
  4. Select the ingestion tooling per the preference order in references/<category>.md (typical: Lakeflow Connect, then Databricks SDK, then dlt; the CLI artefact path is the documented exception for SAST CLI, secrets CLI, and DAST CLI). Emit src/connectors/{source}/ingest.py against the chosen tool.
  5. Emit src/connectors/{source}/mapping.yml with the structure required by the category reference: entity-only block (CMDB), finding-only block (SAST / SCA / secrets / DAST / WAF), or dual entity+finding blocks (SCM). Reference the severity and status lookup files by path. WAF connectors target the canonical silver.findings table — the previous silver.waf_events carve-out has been collapsed.
  6. Emit src/connectors/{source}/severity.yml and src/connectors/{source}/status.yml. Cover every documented source value, with a configurable default (medium for severity unless the category reference overrides) and a comment flagging the data quality warning path. For CMDB, both files exist but contain # N/A: CMDB sources emit no findings. For secrets, severity is hard-coded high in mapping.yml; the lookup file exists with the comment # default high; per-deployment override permitted for low-entropy detector classes, and the status file carries the literal open (no native lifecycle). For WAF, severity is derived from the action field via an action-keyed lookup, and status follows the TruffleHog convention: the literal open on every row, with status.yml carrying open as its default row.
  7. Emit src/connectors/{source}/transform.py applying the mapping plus the normalization rules from mkdocs/docs/platform/reference/canonical-mapping.md. For categories with a finding structure, encode the dedup key tuple given in references/<category>.md literally; it drives dedup_links linkage in the transform. For DAST, emit the target to silver.deployments join. For SCA, emit the CVE correlation step. For WAF, emit a deterministic SHA-256 finding_id over (webacl_arn, request_id, timestamp_ms) so re-delivered events collapse at the Bronze→Silver MERGE (see the sketch after this list); do NOT emit a dedup_links row, and do NOT emit a transform-time silver.deployments join.
  8. Emit the bundle fragment at resources/{source}-job.yml using the standard two task structure from mkdocs/docs/platform/reference/connector-job-template.md, substituting the source name.
  9. Emit the test suite at src/connectors/{source}/tests/: one test function per REQ-ID applicable to the category (per references/<category>.md), each marked with @pytest.mark.requirement("REQ-..."). Fixtures named {endpoint}_{scenario}.json under src/connectors/{source}/tests/fixtures/. Tests cover the framework contract from src/platform/. Pure Python only. No local SparkSession.
  10. Record the invocation: list the generated file paths, run git rev-parse --short HEAD for the skill repo ref, and compute sha256sum mkdocs/docs/connectors/{category}/{slug}.md for the page hash that pins this generation to a specific page revision.
  11. Update the Implementation log section of the connector page: fill in row 3 (generate-connector) with the run date, inputs (page hash via sha256sum mkdocs/docs/connectors/{category}/{slug}.md), outputs (the eight-file list above), and the skill repo ref via git rev-parse --short HEAD. Leave row 4 marked (pending) so validate-implementation has a target to overwrite.
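A sketch of step 7's deterministic WAF finding_id; the tuple comes from the step above, while the separator choice is an assumption:

```python
import hashlib

def waf_finding_id(webacl_arn: str, request_id: str, timestamp_ms: int) -> str:
    # Same tuple in, same id out, so re-delivered events collapse at the Bronze -> Silver MERGE.
    key = f"{webacl_arn}|{request_id}|{timestamp_ms}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()
```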

Invariants

  • No file is written outside src/connectors/{source}/ (which contains the tests/, severity.yml, and status.yml paths) and resources/{source}-job.yml, apart from the single-row Implementation log update on the connector page. The connector generation is self-contained.
  • Both src/connectors/{source}/severity.yml and src/connectors/{source}/status.yml exist for every connector, even for categories where one or both are N/A. The N/A files carry an explanatory comment per references/<category>.md.
  • All imports in ingest.py and transform.py reference only functions that already exist in src/platform/. New shared helpers are NOT introduced by this skill. If a missing helper is identified, halt and report the gap rather than adding it inline.
  • Every REQ-ID applicable to the category (per references/<category>.md) has at least one bound test function carrying @pytest.mark.requirement("REQ-...") (a marker sketch follows this list). REQ-IDs marked N/A for the category are not bound.
  • The Implementation log section of mkdocs/docs/connectors/{category}/{slug}.md has row 3 filled and row 4 still marked (pending) after this skill runs. Rows 1 (analyze-source) and 2 (provision-source) are not modified.
  • Output is code, configuration, and test fixtures only, plus the single-line Implementation log row update on the connector page. No new Markdown files are created.
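A sketch of the requirement-binding convention named above; the REQ-ID, fixture name, and assertion are placeholders, and real suites bind the REQ-IDs listed in references/<category>.md:

```python
import json
from pathlib import Path

import pytest

FIXTURES = Path(__file__).parent / "fixtures"

@pytest.mark.requirement("REQ-XXX")  # placeholder REQ-ID
def test_ingest_parses_findings_page():
    # Pure Python, no SparkSession; fixtures follow the {endpoint}_{scenario}.json convention.
    payload = json.loads((FIXTURES / "findings_single_page.json").read_text())
    assert isinstance(payload, list)  # illustrative assertion only
```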

Category-specific invariants (target Silver tables, dedup key tuple, ingestion tooling override, severity / status lookup structure, mapping.yml structure, applicable REQ-IDs, category quirks affecting code emission) live in references/<category>.md. Read the file matching the input category before emitting any file.

Implementation log row template

Overwrite the (pending) placeholder for generate-connector (row 3) in the Implementation log table on the connector page. Use this row structure verbatim, replacing the bracketed placeholders:

| Module generation | generate-connector ({category}) | page hash={sha256_of_page} | src/connectors/{source}/, src/connectors/{source}/tests/, src/connectors/{source}/severity.yml, src/connectors/{source}/status.yml | {YYYY-MM-DD} | {git_short_sha} ({branch}) |
  • {category}: the AppSec category input (cmdb, scm, sast, sca, secrets, dast, or waf).
  • {sha256_of_page}: output of sha256sum mkdocs/docs/connectors/{category}/{slug}.md (the full hex digest pins this generation to the page revision read at step 2).
  • {source}: the source name input.
  • {YYYY-MM-DD}: the run date in ISO format.
  • {git_short_sha}: output of git rev-parse --short HEAD on the repo for the skill.
  • {branch}: output of git rev-parse --abbrev-ref HEAD.

Row 4 (validate-implementation) MUST remain marked (pending) so the next skill has a target to overwrite.

Rendered from .claude/skills/generate-connector/SKILL.md. Source of truth lives in the skill file.

validate-implementation

Overview

This skill runs pytest src/connectors/{source}/tests/, summarises outcomes into a per-REQ-ID Validation table, emits a fix list for failing REQs, and updates the Validation and Implementation log sections of the connector page at mkdocs/docs/connectors/{category}/{source}.md. It is observational over the codebase: it modifies only the connector Markdown page.

Inputs

  • Source name: determines the test directory src/connectors/{source}/tests/ and the connector page filename slug.
  • AppSec category: one of cmdb, scm, sast, sca, secrets, dast, waf. Determines which references/<category>.md to load. The applicable REQ-ID set for the category, with explicit N/A reasons drawn from mkdocs/docs/platform/reference/catalog.md, lives in references/<category>.md and is this skill's load-bearing artefact for the category.
  • Connector module path: src/connectors/{source}/. Used to resolve the source code under test for the fix list and as input for the Implementation log row.

Preconditions:

  • The connector module and test suite at src/connectors/{source}/ and src/connectors/{source}/tests/ exist (typically emitted by generate-connector).
  • The connector page at mkdocs/docs/connectors/{category}/{source}.md exists with a stub Validation section (an !!! info "Pending validation" admonition emitted by analyze-source) and an Implementation log table whose row 4 is marked (pending).

Output

  • The Validation section of mkdocs/docs/connectors/{category}/{source}.md is updated. The stub admonition is replaced with a populated REQ-ID table whose rows are exactly the applicable REQ-IDs for the category (in catalog order). Each cell carries PASS, FAIL, or N/A (N/A for REQ-IDs the category excludes per references/<category>.md).
  • A fix list is appended below the table as plain text. For each FAIL row, the failing test file path and a one line summary of the failure.
  • The Implementation log section row 4 (validate-implementation) is updated with the run date, inputs (the connector module path), outputs (the connector page §5), and the skill repo ref (git rev-parse --short HEAD).

No file outside the connector page is modified by this skill.

Procedure

  1. Read references/<category>.md to get the applicable REQ-ID set for the category and the N/A reasons. This is the load-bearing artefact for this skill. It lists which REQ-IDs bind to tests for connectors in this category, in catalog order, with explicit N/A reasons quoted from mkdocs/docs/platform/reference/catalog.md and mkdocs/docs/connectors/<category>/index.md.
  2. Run pytest src/connectors/{source}/tests/ -v --tb=short (see the sketch after this list). Treat timeouts as failures (not skips). Capture stdout/stderr. Preserve the wall-clock duration for the run summary.
  3. Collect every test function carrying a @pytest.mark.requirement("REQ-...") marker and its outcome (passed / failed / skipped / timed-out).
  4. For each REQ-ID in the applicable set for the category (from step 1, in the order they appear in mkdocs/docs/platform/reference/catalog.md), record: is there a bound test? did it pass? what is the test path? For REQ-IDs marked N/A by the category reference, record N/A with no bound test path.
  5. Emit the Markdown table with one row per REQ-ID using PASS, FAIL, or N/A. The table columns match the example at mkdocs/docs/connectors/cmdb/servicenow.md § "## Validation" (Requirement | Bound test | Outcome). For N/A rows, the bound test cell is a dash.
  6. Emit the fix list as plain text below the table. For each FAIL row, one line listing the failing test file path (src/connectors/{source}/tests/test_*.py::test_name) and a one line summary of the failure drawn from the pytest --tb=short output. Omit the fix list entirely if there are no failures.
  7. Replace the stub admonition in the Validation section of mkdocs/docs/connectors/{category}/{source}.md with the table from step 5 and the fix list from step 6 (if any). Append a one line summary noting how many requirement bound tests were collected, the wall clock duration, the pass / fail / N/A split, and the N/A rationale for the category (sourced from references/<category>.md).
  8. Update the Implementation log section row 4 (validate-implementation) on the connector page with the run date, inputs (the connector module path src/connectors/{source}/), outputs (the connector page §5), and the skill repo ref via git rev-parse --short HEAD. Use the row template below.
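A sketch of steps 2 and 5: run the suite with a wall-clock timeout (a timeout counts as FAIL, not skip) and render the Validation table. Collecting per-marker outcomes from the pytest output is elided here; the outcomes shape is an assumption:

```python
import subprocess
import time

def run_suite(test_dir: str, timeout_s: int = 600) -> tuple[bool, float]:
    start = time.monotonic()
    try:
        proc = subprocess.run(
            ["pytest", test_dir, "-v", "--tb=short"],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return proc.returncode == 0, time.monotonic() - start
    except subprocess.TimeoutExpired:
        return False, time.monotonic() - start  # a timed-out run contributes FAIL rows

def render_table(outcomes: dict[str, tuple[str, str]]) -> str:
    # outcomes: REQ-ID -> (bound test path, or "-" for N/A; "PASS" | "FAIL" | "N/A"),
    # keyed in catalog order.
    rows = ["| Requirement | Bound test | Outcome |", "| --- | --- | --- |"]
    rows += [f"| {req} | {test} | {outcome} |" for req, (test, outcome) in outcomes.items()]
    return "\n".join(rows)
```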

Invariants

  • No production code is modified by this skill. It is purely observational. The only file written is the connector page at mkdocs/docs/connectors/{category}/{source}.md.
  • Test timeouts are treated as failures, not skips. A timed-out test contributes a FAIL row.
  • The Validation table always has exactly the REQ-IDs in the applicable set for the category as rows, in the order they appear in mkdocs/docs/platform/reference/catalog.md § "Requirement catalog". REQ-IDs the category excludes per references/<category>.md appear as N/A rows, not as omissions.

Category-specific invariants (which REQ-IDs apply, the standard N/A reason for each excluded REQ-ID, and the test suite assertions the tests for a category must verify) live in references/<category>.md. Read the file matching the input category before drafting the Validation table.

Implementation log row template

Overwrite the (pending) placeholder for validate-implementation (row 4) in the Implementation log table on the connector page. Use this row structure verbatim, replacing the bracketed placeholders:

| Validation | validate-implementation ({category}) | module path=src/connectors/{source}/ | mkdocs/docs/connectors/{category}/{source}.md §5 | {YYYY-MM-DD} | {git_short_sha} ({branch}) |
  • {category}: the AppSec category input (cmdb, scm, sast, sca, secrets, dast, or waf).
  • {source}: the source name input.
  • {YYYY-MM-DD}: the run date in ISO format.
  • {git_short_sha}: output of git rev-parse --short HEAD on the repo for the skill.
  • {branch}: output of git rev-parse --abbrev-ref HEAD.

Rows 1 (analyze-source), 2 (provision-source), and 3 (generate-connector) MUST remain untouched.

Rendered from .claude/skills/validate-implementation/SKILL.md. Source of truth lives in the skill file.