Connector skills¶
Connectors in this framework are produced by four category-aware skills: analyze-source, provision-source, generate-connector, and validate-implementation. Each connector page includes an Implementation log section recording the skill version (git ref), inputs, and outputs of its generation.
Generated connectors¶
| Source | Category | analyze-source | provision-source | generate-connector | validate-implementation | Status |
|---|---|---|---|---|---|---|
| ServiceNow | cmdb | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| OWASP ZAP | dast | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| Dependency-Track | sca | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| Semgrep | sast | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| SonarQube | sast | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| GitHub | scm | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| GitLab | scm | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| TruffleHog | secrets | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
| AWS WAF | waf | 2026-04-25 | 2026-04-25 | 2026-04-25 | 2026-04-25 | Generated |
Each cell links to the corresponding row of the Implementation log section on that source's connector page. Connectors subsequently generated by the skills add rows here.
The four skills¶
analyze-source¶
Overview¶
This skill produces a six-section documentation page for each connector from the official API documentation of a source. The page lives at `mkdocs/docs/connectors/<category>/<source-slug>.md` and feeds the downstream `provision-source`, `generate-connector`, and `validate-implementation` skills. The Reference section captures the seven API facts the framework needs to mechanically derive a connector module. The Implementation log section opens the lifecycle audit trail that the next three skills extend.
This skill is the analysis stage. It does not write any Python, YAML, or test code; it writes Markdown only. Category-specific facts (applicable REQ-IDs, default severity, HWM preference, dedup key structure, target Silver tables, auth norms, ingestion tooling preference, quirks) live in `references/<category>.md` and MUST be read for the source's category before drafting Reference.
Inputs¶
- Source name and homepage URL.
- Official API documentation URL (fetched via WebFetch).
- AppSec category: one of `cmdb`, `scm`, `sast`, `sca`, `secrets`, `dast`, `waf`. Determines which `references/<category>.md` to load and the output directory.
- Optional: live API credentials for fixture generation. Skip if unavailable.
Output¶
A single Markdown page emitted at `mkdocs/docs/connectors/<category>/<source-slug>.md`, where `<category>` matches the input and `<source-slug>` is a kebab-case slug of the source name. The page has six top-level sections:
- Overview: what this connector does, its role in the platform, and which Silver table(s) it populates. For sources outside MVP scope, include the `!!! info "Not in MVP scope"` admonition.
- Prerequisites: how to set up the external service and extract credentials (API keys, OAuth apps, PATs).
- Reference: the seven API facts (see the bullet list under Procedure).
- Setup: configuration, bundle deployment, first-run commands. Stub with `!!! info "Not implemented in MVP"` if out of scope.
- Validation: always stubbed on first emit with `!!! info "Pending validation"`. `validate-implementation` populates this later.
- Implementation log: a Markdown table with four rows. Row 1 is filled by this skill (see Implementation log row template below). Rows 2, 3, and 4 are placeholders marked `(pending)` for `provision-source`, `generate-connector`, and `validate-implementation` to fill in.
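The six-section skeleton described above can be sketched as a small helper. This is an illustrative sketch only; the function name is hypothetical, and the real emit follows the per-section templates in this document.

```python
# Sketch: assemble the six-section connector page skeleton.
# Section names come from the Output list above; the Validation stub
# uses the "Pending validation" admonition described there.

SECTIONS = [
    "Overview",
    "Prerequisites",
    "Reference",
    "Setup",
    "Validation",
    "Implementation log",
]

def page_skeleton(source: str) -> str:
    """Return a Markdown skeleton with one H2 per top-level section."""
    parts = [f"# {source}"]
    for name in SECTIONS:
        parts.append(f"## {name}")
        if name == "Validation":
            # Always stubbed on first emit; validate-implementation fills it.
            parts.append('!!! info "Pending validation"')
    return "\n\n".join(parts) + "\n"
```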
Procedure¶
- Read `references/<category>.md` for category-specific facts that influence the seven API facts in the Reference section: applicable REQ-IDs, default severity, HWM preference, dedup key structure, target Silver tables, auth norms, ingestion tooling preference, quirks.
- Fetch the API documentation for the source via WebFetch from the input URL. Cache the fetched content for citations.
- Identify the authentication mechanism the source supports. Cross-check against the auth norm for the category in `references/<category>.md`. If the source supports multiple auth modes, select the one matching the category convention.
- Enumerate the endpoints required to populate the Silver tables assigned to the category of the source. Cross-reference the Silver Table Ownership table at `mkdocs/docs/platform/reference/catalog.md` and the standardized schemas at `mkdocs/docs/platform/reference/canonical-mapping.md`.
- Select the incremental strategy per the preference order in `references/<category>.md` (typical order: webhook > native HWM column > full reload; some categories override).
- Extract a consumed-field schema excerpt (only fields the connector actually reads) matching the standardized Silver fields from `mkdocs/docs/platform/reference/canonical-mapping.md` (entities or findings schema, whichever applies to the category).
- Produce severity and status lookup proposals per the standardized enumeration models at `mkdocs/docs/platform/reference/canonical-mapping.md`. For categories where severity or status do not apply (CMDB, secrets status), record the N/A explicitly.
- Document quirks: deviations from category norms, format surprises, and handling policies for the source. Cross-check `references/<category>.md` for category quirks the source may inherit.
- Assemble the six-section Markdown page and emit it to the output path.
- Stub the Implementation log section with the row for this skill (date, inputs, outputs, skill repo ref via `git rev-parse --short HEAD`). Leave the rows for `provision-source`, `generate-connector`, and `validate-implementation` marked `(pending)`.
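The last step above can be sketched in Python. The `analysis_row` helper is hypothetical; the row shape follows the Implementation log row template in this section, and `skill_ref` shows the two `git rev-parse` calls the procedure names.

```python
import datetime
import subprocess

def skill_ref() -> tuple[str, str]:
    """Short SHA and branch of the skill repo, per the procedure above."""
    sha = subprocess.run(["git", "rev-parse", "--short", "HEAD"],
                         capture_output=True, text=True, check=True).stdout.strip()
    branch = subprocess.run(["git", "rev-parse", "--abbrev-ref", "HEAD"],
                            capture_output=True, text=True, check=True).stdout.strip()
    return sha, branch

def analysis_row(category, source, doc_url, slug, sha, branch, run_date=None):
    """Row 1 of the Implementation log, following the row template."""
    date = run_date or datetime.date.today().isoformat()
    return (f"| Source analysis | analyze-source ({category}) "
            f"| name={source}; url={doc_url}; category={category} "
            f"| mkdocs/docs/connectors/{category}/{slug}.md §1–§3 "
            f"| {date} | {sha} ({branch}) |")
```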
The seven API facts captured under Reference are:
- API: REST / GraphQL / SDK / CLI; endpoints consumed; authentication mechanisms.
- Pagination and rate limits: strategy and quotas.
- Incremental hook: webhook, native HWM column, scan-id, or full reload (per `references/<category>.md`).
- Resource schema excerpt: Markdown table with columns Field / Type / Meaning, scoped to fields the connector reads.
- Enumerations: severity and status mappings against the standardized models.
- Quirks: deviations from category norms; format surprises.
Authentication is folded into the API fact. That is six visible facts, but the framework documentation calls it seven. Preserve the seven-fact wording when writing Reference, matching the existing baselines.
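A severity lookup proposal can be sketched as below. The vocabulary is hypothetical; the invariant it illustrates is the one stated in the next section, that every documented source value must be covered, with undocumented values falling back to the configured default plus a data-quality warning.

```python
# Sketch: a severity lookup proposal with the configured fallback and a
# data-quality warning for undocumented values. The source vocabulary
# (ERROR/WARNING/INFO) is illustrative, not from a real source.

SEVERITY_LOOKUP = {      # documented source values -> canonical model
    "ERROR": "high",
    "WARNING": "medium",
    "INFO": "low",
}
FALLBACK = "medium"      # configurable default

def map_severity(raw: str) -> tuple[str, list[str]]:
    """Return (canonical severity, data-quality warnings)."""
    warnings = []
    value = SEVERITY_LOOKUP.get(raw)
    if value is None:
        warnings.append(f"undocumented severity {raw!r}; defaulted to {FALLBACK}")
        value = FALLBACK
    return value, warnings
```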
Invariants¶
- Every official documentation URL used MUST be cited inline or in a References list at the bottom of the page. No fabricated facts: every claim about the source's API MUST be traceable to fetched documentation or to `references/<category>.md`.
- The severity and status lookups MUST cover every documented source value. Undocumented values default to the configured fallback with a data-quality warning noted inline.
- The page slug and category directory MUST match the AppSec category input exactly. Do not invent a new category.
- The Implementation log table MUST have row 1 populated. Rows 2, 3, and 4 MUST exist with the `(pending)` marker so downstream skills have a target to overwrite.
- Output is Markdown only. Do not write Python, YAML, or test files in this skill.
Category specific invariants (applicable REQ-IDs, default severity convention, HWM preference, dedup key structure, target Silver tables, auth norms, ingestion tooling preference, category quirks) live in references/<category>.md. Read the file matching the input category before drafting Reference.
Implementation log row template¶
Append exactly one row to the Implementation log table for this invocation of the skill. Use this row structure verbatim, replacing the bracketed placeholders with the four data cells:
| Source analysis | analyze-source ({category}) | name={source}; url={doc_url}; category={category} | mkdocs/docs/connectors/{category}/{slug}.md §1–§3 | {YYYY-MM-DD} | {git_short_sha} ({branch}) |
- `{category}`: the AppSec category input (`cmdb`, `scm`, `sast`, `sca`, `secrets`, `dast`, or `waf`).
- `{source}`: the source name input.
- `{doc_url}`: the official API documentation URL input.
- `{slug}`: the kebab-case slug used in the output filename.
- `{YYYY-MM-DD}`: the run date in ISO format.
- `{git_short_sha}`: output of `git rev-parse --short HEAD` on the skill's repo.
- `{branch}`: output of `git rev-parse --abbrev-ref HEAD`.
Rows 2, 3, and 4 of the Implementation log table must be present and marked `(pending)` so that `provision-source`, `generate-connector`, and `validate-implementation` can overwrite them on their respective runs. The standard placeholder rows are:
| Source provisioning | provision-source ({category}) | (pending) | (pending) | (pending) | (pending) |
| Implementation | generate-connector ({category}) | (pending) | (pending) | (pending) | (pending) |
| Validation | validate-implementation ({category}) | (pending) | (pending) | (pending) | (pending) |
Rendered from .claude/skills/analyze-source/SKILL.md. Source of truth lives in the skill file.
provision-source¶
Overview¶
This skill emits the source-side provisioning artefacts for a single source: the Terraform configuration that stands up the source's data path (e.g. an EKS CronJob for Semgrep, AWS WAF + Firehose-to-S3 for AWS WAF, a GitHub Actions workflow snippet for the CI/CD-step path, or a no-op smoke-test for SaaS-only CMDB sources), a per-runtime operator README, an `install.sh` that runs `terraform apply` against this `runtime/`, and the connector page's §Source provisioning section. The skill treats the page sections produced by `analyze-source` as read-only; it modifies only the source-side `runtime/` subtree, the page's §Source provisioning section, and Implementation log row 2.
This skill does NOT emit the contents of `runtime/files/*`. Bulky operator-authored sidecars (demo target forks, scan helper scripts, IaC fixtures) live there and are authored directly by the operator. The skill emits REFERENCES to those paths in `runtime/main.tf` (e.g. as `local_file` data sources, `kubernetes_config_map` data blocks, or via `var.demo_target_paths`-style variables) but never their bodies. Category-specific facts that drive the emit (Terraform shape, `install.sh` shape, `runtime/files/*` conventions, `operational.yml.source_runtime` schema, page §Source provisioning template) live in `references/<category>.md` and MUST be read for the source's category before any file is written.
Inputs¶
- Source name — determines the runtime path `src/connectors/{source}/runtime/`.
- Per-connector page path — `mkdocs/docs/connectors/{category}/{slug}.md`, produced by `analyze-source`.
- AppSec category — one of `cmdb`, `scm`, `sast`, `sca`, `secrets`, `dast`, `waf`. Determines which `references/<category>.md` to load and the page path.
- operational.yml path — `src/connectors/{source}/operational.yml`. The skill reads the `source_runtime:` sub-block; required fields per category are listed in `references/<category>.md`.
Preconditions:
- The connector page exists at `mkdocs/docs/connectors/{category}/{slug}.md` with Reference §3 populated and an Implementation log table whose row 2 is marked `(pending)`.
- `operational.yml` is interactively bootstrapped or completed when required fields are missing. If `operational.yml` is absent, the skill creates it with the two top-level keys `source_runtime:` and `databricks_runtime:` empty, then proceeds to gather each required `source_runtime:` field. For every field declared required in `references/<category>.md`, the skill issues an `AskUserQuestion` call presenting the schema's recommended default ("(Recommended)"), a "Use a placeholder for deploy-time fill" option (writes the literal string `<your-{field-name}>`), and the automatic "Other" option for free-text input. Up to four questions are batched per call. Each answer is written to `operational.yml.source_runtime.<field>`, preserving file structure and adjacent comments. The skill never overwrites an already-populated field and never touches the `databricks_runtime:` sub-block.
- Halt-on-missing remains the fallback when `AskUserQuestion` is unavailable (headless or unattended runs). In that fallback, the skill halts without partially emitting any `runtime/` file and reports the structured list of missing fields so the controller can populate them and re-run.
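The bootstrap flow above can be sketched as follows. The required field names, the `ask` callback (a stand-in for `AskUserQuestion`), and the plain-dict representation of the YAML file are all illustrative assumptions.

```python
# Sketch of the bootstrap flow: find missing required source_runtime
# fields, batch them into questions of at most four, fill in answers
# (or deploy-time placeholders), and never touch populated fields or
# the databricks_runtime sub-block. Field names are hypothetical.

REQUIRED = ["cloud_provider", "account_id", "region", "cluster_name",
            "demo_target_paths"]

def bootstrap(operational: dict, ask) -> dict:
    operational.setdefault("source_runtime", {})
    operational.setdefault("databricks_runtime", {})
    runtime = operational["source_runtime"]
    missing = [f for f in REQUIRED if not runtime.get(f)]
    for i in range(0, len(missing), 4):       # up to four questions per call
        for field, answer in ask(missing[i:i + 4]).items():
            # None models the "placeholder for deploy-time fill" choice.
            runtime[field] = answer or f"<your-{field.replace('_', '-')}>"
    return operational

def fallback_check(operational: dict) -> list[str]:
    """Headless fallback: report missing fields instead of partial-emitting."""
    runtime = operational.get("source_runtime", {})
    return [f for f in REQUIRED if not runtime.get(f)]
```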
Output¶
- `src/connectors/{source}/runtime/main.tf` — Terraform configuration with provider declarations and per-category resources (interpolated from `operational.yml.source_runtime` values; `runtime/files/*` paths referenced via `local_file` data sources or variables).
- `src/connectors/{source}/runtime/variables.tf` — Terraform variable declarations for every value that varies per deployment.
- `src/connectors/{source}/runtime/outputs.tf` — Terraform outputs (e.g. S3 bucket ARN, CronJob name, IAM role ARN) consumed by the Databricks-side install phase.
- `src/connectors/{source}/runtime/versions.tf` — Terraform and provider version pinning per the category template.
- `src/connectors/{source}/runtime/README.md` — per-runtime operator README explaining what this `runtime/` provisions, the variables to set, and how to invoke `runtime/install.sh`.
- `src/connectors/{source}/runtime/install.sh` — runs `terraform apply` against this `runtime/`. For categories where the source is a SaaS with no infrastructure to provision (e.g. CMDB / ServiceNow), `runtime/install.sh` may be a no-op or a smoke-test script per `references/<category>.md`.
- Connector page §Source provisioning section — operator-facing source-provisioning runbook for this category, referencing `runtime/install.sh` and any `runtime/files/*` operator-authored sidecars.
- Implementation log row 2 — overwrites the `(pending)` placeholder for `provision-source` per the row template below.
Procedure¶
- Read `references/<category>.md` for category-specific facts — the `operational.yml.source_runtime` schema (which fields are required), Terraform shape (provider, modules, variables exposed, outputs), `runtime/files/*` conventions (which sidecar paths apply for the category and how `main.tf` references them), `runtime/install.sh` shape, the `runtime/README.md` template, and the page §Source provisioning section template.
- Read the `source_runtime:` sub-block of `src/connectors/{source}/operational.yml` and interactively gather any missing required fields. Identify every `source_runtime:` field marked required in `references/<category>.md`. If `operational.yml` is missing, create it with the two top-level keys (`source_runtime:`, `databricks_runtime:`) empty before continuing. For each missing required field, invoke `AskUserQuestion` (batching up to four questions per call). Write each answer back to `operational.yml.source_runtime.<field>`, preserving structure and adjacent comments. Re-validate the sub-block; if any required field is still missing AND `AskUserQuestion` is unavailable, halt and report the structured list of missing fields without partially emitting any `runtime/` file. Otherwise proceed to step 3.
- Read connector page §3 Reference to extract any source-side facts the page documents (e.g. the authentication mechanism that informs Terraform IAM, S3 prefix structure, webhook endpoint shape). Do not modify the page §3 content.
- Emit `src/connectors/{source}/runtime/main.tf` from the category template, interpolating values from `operational.yml.source_runtime`. Emit references to the `runtime/files/*` paths declared in the operational.yml's `demo_target_paths`-style fields via `local_file` data sources or Terraform variables — do NOT generate the file contents.
- Emit `src/connectors/{source}/runtime/variables.tf`, `runtime/outputs.tf`, and `runtime/versions.tf` per the category template.
- Emit `src/connectors/{source}/runtime/README.md` from the category README template, listing the variables to set and the invocation command for `runtime/install.sh`.
- Emit `src/connectors/{source}/runtime/install.sh` per the category shape (typical: `cd "$(dirname "$0")" && terraform init && terraform apply -auto-approve`). For SaaS-only categories where there is no infrastructure to provision, emit the no-op or smoke-test variant per `references/<category>.md`.
- Update the connector page §Source provisioning section: replace the section body (or insert the section if absent) with the operator-facing source-provisioning runbook for this category, referencing `runtime/install.sh` and any `runtime/files/*` operator-authored sidecars.
- Update the connector page's Implementation log row 2: overwrite the `(pending)` placeholder for `provision-source` using the row template below. Run date in ISO format; `git rev-parse --short HEAD` for the skill repo ref; `git rev-parse --abbrev-ref HEAD` for the branch.
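The `main.tf` interpolation step can be sketched with a string template. The template body, variable names, and `render_main_tf` helper are illustrative, not the real category template; note how the sketch keeps Terraform's own `${path.module}` interpolation intact while substituting `source_runtime` values, and emits only a reference to the operator-authored sidecar.

```python
import string

# Sketch: interpolate a category main.tf template from source_runtime
# values. Only a *reference* to the operator-authored runtime/files/*
# sidecar is emitted, never its body. "$$" escapes to a literal "$" so
# Terraform's own ${path.module} interpolation survives substitution.

MAIN_TF_TEMPLATE = string.Template("""\
provider "aws" {
  region = "${region}"
}

# Reference only -- file body is operator-authored under runtime/files/.
data "local_file" "scan_helper" {
  filename = "$${path.module}/files/${helper_script}"
}
""")

def render_main_tf(source_runtime: dict) -> str:
    return MAIN_TF_TEMPLATE.substitute(
        region=source_runtime["region"],
        helper_script=source_runtime["helper_script"],
    )
```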
Invariants¶
- No file is written outside `src/connectors/{source}/runtime/`, the connector page §Source provisioning section, and Implementation log row 2. The source-provisioning emit is self-contained.
- `src/connectors/{source}/runtime/files/*` is NEVER modified by this skill — it is operator-authored sidecar territory. The skill emits REFERENCES to those paths in `runtime/main.tf` (via `local_file` data sources, `kubernetes_config_map` data blocks, or `var.demo_target_paths`-style variables) but does NOT generate the file contents.
- Shared files (`databricks.yml`, `mkdocs/mkdocs.yml`, `pyproject.toml`, the aggregator at `mkdocs/docs/platform/reference/connector-skills.md`, the catalog at `mkdocs/docs/platform/reference/catalog.md`) are NEVER touched by this skill.
- The skill modifies `src/connectors/{source}/operational.yml` ONLY to fill in missing required-by-category `source_runtime:` fields gathered via `AskUserQuestion`. It NEVER overwrites existing populated fields, NEVER modifies the `databricks_runtime:` sub-block (provision-source only writes to `source_runtime:`), and NEVER removes fields. If `AskUserQuestion` is unavailable AND any required-by-category `source_runtime:` field is still missing, the skill halts and reports the structured list; it does not partially emit any `runtime/` file.
- The connector page sections owned by other skills are NEVER touched: §1–§3 (analyze-source), §4–§7 / §Run-the-job / §Verify / §Troubleshooting (generate-connector), §5 Validation (validate-implementation). Only the §Source provisioning section and Implementation log row 2 are written.
- For SaaS-only categories where there is no infrastructure to provision, `runtime/install.sh` is still emitted (as a no-op or smoke-test) so the top-level `install.sh` chain emitted by `generate-connector` can call it unconditionally.
Category-specific invariants (operational.yml.source_runtime schema, Terraform shape, runtime/files/* conventions, runtime/install.sh shape, runtime/README.md template, page §Source provisioning section template) live in references/<category>.md. Read the file matching the input category before emitting any file.
Implementation log row template¶
Overwrite the (pending) placeholder for provision-source (row 2) in the connector page's Implementation log table. Use this row shape verbatim, replacing the bracketed placeholders with the four data cells:
| Source provisioning | provision-source ({category}) | source_runtime fields=<comma-separated list> | src/connectors/{source}/runtime/, mkdocs/docs/connectors/{category}/{slug}.md §Source provisioning | {YYYY-MM-DD} | {git_short_sha} ({branch}) |
- `{category}` — the AppSec category input (`cmdb`, `scm`, `sast`, `sca`, `secrets`, `dast`, or `waf`).
- `<comma-separated list>` — the names of every `source_runtime:` field actually read from `operational.yml` during this run (e.g. `cloud_provider, account_id, region, cluster_name, demo_target_paths`). Drives the audit trail for which inputs pinned this emit.
- `{source}` — the source name input.
- `{slug}` — the kebab-case slug used in the connector page filename.
- `{YYYY-MM-DD}` — the run date in ISO format.
- `{git_short_sha}` — output of `git rev-parse --short HEAD` on the skill's repo.
- `{branch}` — output of `git rev-parse --abbrev-ref HEAD`.
Rows 1 (analyze-source), 3 (generate-connector), and 4 (validate-implementation) MUST remain untouched.
Rendered from .claude/skills/provision-source/SKILL.md. Source of truth lives in the skill file.
generate-connector¶
Overview¶
This skill emits the eight-file connector module for a single source given a reviewed connector page. Category-specific facts that drive code emission (target Silver tables, dedup key tuple, ingestion tooling preference overrides, `mapping.yml` structure, applicable REQ-IDs to bind tests against, category quirks) live in `references/<category>.md` and MUST be read for the source's category before any file is written.
Inputs¶
- Source name: determines the module path `src/connectors/{source}/` and the lookup filenames `src/connectors/{source}/severity.yml` and `src/connectors/{source}/status.yml`.
- Connector page path: `mkdocs/docs/connectors/{category}/{slug}.md`, produced by `analyze-source`.
- AppSec category: one of `cmdb`, `scm`, `sast`, `sca`, `secrets`, `dast`, `waf`. Determines which `references/<category>.md` to load.
Preconditions:
- The connector page exists at `mkdocs/docs/connectors/{category}/{slug}.md` and has been reviewed for completeness (Reference section populated; Implementation log row 1 filled by `analyze-source`).
- The framework's shared utilities at `src/platform/` are intact (HTTP client, pagination, HWM state, severity/status normalization, dedup helpers). All generated code imports them from `src/platform/`.
Output¶
A connector module composed of exactly the following eight files (per the baseline procedure at mkdocs/docs/connectors/sast/skills.md § "## generate-connector"):
- `src/connectors/{source}/config.yml`: base URL, endpoints, pagination, HWM column (or scan-id / commit-SHA / artefact-prefix per category override), target Bronze table, credential reference.
- `src/connectors/{source}/ingest.py`: implements `ingest(run_id, state) -> batch` per the connector contract in `mkdocs/docs/platform/reference/catalog.md`.
- `src/connectors/{source}/transform.py`: implements `transform(bronze_df) -> silver_df` per the normalization rules in `mkdocs/docs/platform/reference/canonical-mapping.md`.
- `src/connectors/{source}/mapping.yml`: declarative Bronze-to-Silver column expressions referencing the severity and status lookups by file path.
- `src/connectors/{source}/severity.yml`: severity lookup for the source. For categories where severity is N/A or conventional, see `references/<category>.md`.
- `src/connectors/{source}/status.yml`: status lookup for the source. For categories where status is N/A, see `references/<category>.md`.
- `resources/{source}-job.yml`: standard two-task Lakeflow job bundle fragment per the template at `mkdocs/docs/platform/reference/connector-job-template.md`.
- `src/connectors/{source}/tests/`: pytest suite (`test_ingest.py`, `test_transform.py`, `fixtures/{endpoint}_{scenario}.json`) with one test function per applicable REQ-ID, each marked with `@pytest.mark.requirement("REQ-...")`.
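The contract the two Python files implement can be sketched as stubs. The signatures come from the file list above; the bodies are illustrative pass-throughs, not a real implementation.

```python
from typing import Any, Iterable

def ingest(run_id: str, state: dict[str, Any]) -> Iterable[dict[str, Any]]:
    """Connector contract: ingest(run_id, state) -> batch.

    `state` carries the incremental hook, e.g. an `updated_at` high-water
    mark; the returned batch lands in the Bronze table named in config.yml.
    This stub returns an empty batch; a real connector paginates the
    source API here, filtering on the stored HWM.
    """
    hwm = state.get("hwm")  # None on a first run means full backfill
    _ = hwm
    return []

def transform(bronze_df):
    """Connector contract: transform(bronze_df) -> silver_df.

    A real connector applies the mapping.yml column expressions, the
    severity/status lookups, and the category dedup key here; this stub
    passes the frame through unchanged.
    """
    return bronze_df
```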
Procedure¶
- Read `references/<category>.md` for category-specific facts: target Silver tables, the dedup key tuple to encode in transform/dedup logic, ingestion tooling preference overrides (CLI artefact exceptions for SAST CLI, secrets CLI, DAST CLI), the `mapping.yml` structure for this category (entity-only, finding-only, dual entity+finding for SCM, or event structure for WAF), applicable REQ-IDs to bind tests against, and category quirks affecting code emission.
- Read the connector page and extract the seven API facts captured by `analyze-source`: authentication mechanism, pagination style, incremental hook (HWM column / webhook / scan-id / artefact prefix / full reload), endpoints, consumed field schema, severity vocabulary, status vocabulary, quirks.
- Emit `src/connectors/{source}/config.yml` with the extracted parameters. Use the HWM structure per the category reference (record-level `updated_at` for SAST/SCA/CMDB/SCM; scan-id for DAST server; commit-SHA or scan-start timestamp for full-reload categories such as secrets and CLI artefact paths).
- Select the ingestion tooling per the preference order in `references/<category>.md` (typical: Lakeflow Connect, then Databricks SDK, then dlt; the CLI artefact path is the documented exception for SAST CLI, secrets CLI, and DAST CLI). Emit `src/connectors/{source}/ingest.py` against the chosen tool.
- Emit `src/connectors/{source}/mapping.yml` with the structure required by the category reference: entity-only block (CMDB), finding-only block (SAST / SCA / secrets / DAST / WAF), or dual entity+finding blocks (SCM). Reference the severity and status lookup files by path. WAF connectors target the canonical `silver.findings` table — the previous `silver.waf_events` carve-out has been collapsed.
- Emit `src/connectors/{source}/severity.yml` and `src/connectors/{source}/status.yml`. Cover every documented source value, with a configurable default (`medium` for severity unless the category reference overrides) and a comment flagging the data-quality warning path. For CMDB, both files exist but contain `# N/A: CMDB sources emit no findings`. For secrets, severity is hard-coded `high` in `mapping.yml`; the lookup file exists with the comment `# default high; per-deployment override permitted for low-entropy detector classes`, and the status file carries the literal `open` (no native lifecycle). For WAF, severity is action-keyed (derived from the `action` field via an action-keyed lookup), and status follows the TruffleHog convention: the literal `open` on every row, with `status.yml` carrying the literal as the `default` row.
- Emit `src/connectors/{source}/transform.py` applying the mapping plus the normalization rules from `mkdocs/docs/platform/reference/canonical-mapping.md`. For categories with a finding structure, encode the dedup key tuple given in `references/<category>.md` literally; it drives `dedup_links` linkage in the transform. For DAST, emit the target-to-`silver.deployments` join. For SCA, emit the CVE correlation step. For WAF, emit a deterministic SHA-256 `finding_id` over `(webacl_arn, request_id, timestamp_ms)` so re-delivered events collapse at the Bronze→Silver MERGE; do NOT emit a `dedup_links` row, and do NOT emit a transform-time `silver.deployments` join.
- Emit the bundle fragment at `resources/{source}-job.yml` using the standard two-task structure from `mkdocs/docs/platform/reference/connector-job-template.md`, substituting the source name.
- Emit the test suite at `src/connectors/{source}/tests/`: one test function per REQ-ID applicable to the category (per `references/<category>.md`), each marked with `@pytest.mark.requirement("REQ-...")`. Fixtures are named `{endpoint}_{scenario}.json` under `src/connectors/{source}/tests/fixtures/`. Tests cover the framework contract from `src/platform/`. Pure Python only; no local `SparkSession`.
- Record the invocation: list the generated file paths, run `git rev-parse --short HEAD` for the skill repo ref, and compute `sha256sum mkdocs/docs/connectors/{category}/{slug}.md` for the page hash that pins this generation to a specific page revision.
- Update the Implementation log section of the connector page: fill in row 3 (`generate-connector`) with the run date, inputs (page hash via `sha256sum mkdocs/docs/connectors/{category}/{slug}.md`), outputs (the eight-file list above), and the skill repo ref via `git rev-parse --short HEAD`. Leave row 4 unchanged (`(pending)`) so `validate-implementation` has a target to overwrite.
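The deterministic WAF `finding_id` from the transform step can be sketched directly; the key tuple comes from the procedure above, while the separator and encoding are illustrative choices (the canonical form comes from the category reference).

```python
import hashlib

def waf_finding_id(webacl_arn: str, request_id: str, timestamp_ms: int) -> str:
    """Deterministic SHA-256 finding_id over (webacl_arn, request_id,
    timestamp_ms), so re-delivered events collapse at the Bronze->Silver
    MERGE. The "|" separator and UTF-8 encoding are sketch assumptions.
    """
    key = f"{webacl_arn}|{request_id}|{timestamp_ms}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()
```

Because the hash is a pure function of the key tuple, a re-delivered Firehose record produces the same `finding_id` and merges into the existing Silver row instead of duplicating it.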
Invariants¶
- No file is written outside `src/connectors/{source}/`, `src/connectors/{source}/tests/`, `src/connectors/{source}/severity.yml`, `src/connectors/{source}/status.yml`, or `resources/{source}-job.yml`. The connector generation is self-contained.
- Both `src/connectors/{source}/severity.yml` and `src/connectors/{source}/status.yml` exist for every connector, even for categories where one or both are N/A. The N/A files carry an explanatory comment per `references/<category>.md`.
- All imports in `ingest.py` and `transform.py` reference only functions that already exist in `src/platform/`. New shared helpers are NOT introduced by this skill. If a missing helper is identified, halt and report the gap rather than adding it inline.
- Every REQ-ID applicable to the category (per `references/<category>.md`) has at least one bound test function carrying `@pytest.mark.requirement("REQ-...")`. REQ-IDs marked N/A for the category are not bound.
- The Implementation log section of `mkdocs/docs/connectors/{category}/{slug}.md` has row 3 filled and row 4 still marked `(pending)` after this skill runs. Rows 1 (`analyze-source`) and 2 (`provision-source`) are not modified.
- Output is code, configuration, and test fixtures only, plus the single-line Implementation log row update on the connector page. No new Markdown files are created.
Category specific invariants (target Silver tables, dedup key tuple, ingestion tooling override, severity / status lookup structure, mapping.yml structure, applicable REQ-IDs, category quirks affecting code emission) live in references/<category>.md. Read the file matching the input category before emitting any file.
Implementation log row template¶
Overwrite the `(pending)` placeholder row for `generate-connector` (row 3) in the Implementation log table on the connector page. Use this row structure verbatim, replacing the bracketed placeholders:
| Module generation | generate-connector ({category}) | page hash={sha256_of_page} | src/connectors/{source}/, src/connectors/{source}/tests/, src/connectors/{source}/severity.yml, src/connectors/{source}/status.yml | {YYYY-MM-DD} | {git_short_sha} ({branch}) |
- `{category}`: the AppSec category input (`cmdb`, `scm`, `sast`, `sca`, `secrets`, `dast`, or `waf`).
- `{sha256_of_page}`: output of `sha256sum mkdocs/docs/connectors/{category}/{slug}.md` (the full hex digest pins this generation to the page revision read at step 2).
- `{source}`: the source name input.
- `{YYYY-MM-DD}`: the run date in ISO format.
- `{git_short_sha}`: output of `git rev-parse --short HEAD` on the skill's repo.
- `{branch}`: output of `git rev-parse --abbrev-ref HEAD`.
Row 4 (validate-implementation) MUST remain marked (pending) so the next skill has a target to overwrite.
Rendered from .claude/skills/generate-connector/SKILL.md. Source of truth lives in the skill file.
validate-implementation¶
Overview¶
This skill runs `pytest src/connectors/{source}/tests/`, summarises outcomes into a per-REQ-ID Validation table, emits a fix list for failing REQs, and updates the Validation and Implementation log sections of the connector page at `mkdocs/docs/connectors/{category}/{source}.md`. It is observational over the codebase: it modifies only the connector Markdown page.
Inputs¶
- Source name: determines the test directory `src/connectors/{source}/tests/` and the connector page filename slug.
- AppSec category: one of `cmdb`, `scm`, `sast`, `sca`, `secrets`, `dast`, `waf`. Determines which `references/<category>.md` to load. The applicable REQ-ID set for the category, with explicit N/A reasons drawn from `mkdocs/docs/platform/reference/catalog.md`, lives in `references/<category>.md` and is the load-bearing artefact for this skill in each category.
- Connector module path: `src/connectors/{source}/`. Used to resolve the source code under test for the fix list and as input for the Implementation log row.
Preconditions:
- The connector module and test suite at `src/connectors/{source}/` and `src/connectors/{source}/tests/` exist (typically emitted by `generate-connector`).
- The connector page at `mkdocs/docs/connectors/{category}/{source}.md` exists with a stub Validation section (an `!!! info "Pending validation"` admonition emitted by `analyze-source`) and an Implementation log table whose row 4 is marked `(pending)`.
Output¶
- The Validation section of `mkdocs/docs/connectors/{category}/{source}.md` is updated. The stub admonition is replaced with a populated REQ-ID table whose rows are exactly the applicable REQ-IDs for the category (in catalog order). Each cell carries `PASS`, `FAIL`, or `N/A` (`N/A` for REQ-IDs the category excludes per `references/<category>.md`).
- A fix list is appended below the table as plain text: for each `FAIL` row, the failing test file path and a one-line summary of the failure.
- Implementation log row 4 (`validate-implementation`) is updated with the run date, inputs (the connector module path), outputs (the connector page §5), and the skill repo ref (`git rev-parse --short HEAD`).
No file outside the connector page is modified by this skill.
Procedure¶
- Read `references/<category>.md` to get the applicable REQ-ID set for the category and the N/A reasons. This is the load-bearing artefact for this skill in each category: it lists which REQ-IDs bind to tests for connectors in this category, in catalog order, with explicit N/A reasons quoted from `mkdocs/docs/platform/reference/catalog.md` and `mkdocs/docs/connectors/<category>/index.md`.
- Run `pytest src/connectors/{source}/tests/ -v --tb=short`. Treat timeouts as failures (not skips). Capture stdout/stderr. Preserve the wall-clock duration for the run summary.
- Collect every test function carrying a `@pytest.mark.requirement("REQ-...")` marker and its outcome (passed / failed / skipped / timed out).
- For each REQ-ID in the applicable set for the category (from step 1, in the order they appear in `mkdocs/docs/platform/reference/catalog.md`), record: is there a bound test? did it pass? what is the test path? For REQ-IDs marked N/A by the category reference, record `N/A` with no bound test path.
- Emit the Markdown table with one row per REQ-ID using `PASS`, `FAIL`, or `N/A`. The table columns match the example at `mkdocs/docs/connectors/cmdb/servicenow.md` § "## Validation" (Requirement | Bound test | Outcome). For N/A rows, the bound test cell is a dash.
- Emit the fix list as plain text below the table: for each `FAIL` row, one line listing the failing test file path (`src/connectors/{source}/tests/test_*.py::test_name`) and a one-line summary of the failure drawn from the pytest `--tb=short` output. Omit the fix list entirely if there are no failures.
- Replace the stub admonition in the Validation section of `mkdocs/docs/connectors/{category}/{source}.md` with the table from step 5 and the fix list from step 6 (if any). Append a one-line summary noting how many requirement-bound tests were collected, the wall-clock duration, the pass / fail / N/A split, and the N/A rationale for the category (sourced from `references/<category>.md`).
- Update Implementation log row 4 (`validate-implementation`) on the connector page with the run date, inputs (the connector module path `src/connectors/{source}/`), outputs (the connector page §5), and the skill repo ref via `git rev-parse --short HEAD`. Use the row template below.
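The table-building step above can be sketched as a small fold over the collected outcomes. REQ-IDs, test paths, and the outcome dict are illustrative; the only rule encoded from this section is that timeouts (and missing bound tests) count as `FAIL`, and excluded REQ-IDs appear as `N/A` rows with a dash in the bound-test cell.

```python
# Sketch: fold pytest outcomes into the per-REQ-ID Validation table.

def validation_rows(applicable, na, outcomes):
    """applicable: REQ-IDs in catalog order; na: excluded REQ-IDs;
    outcomes: REQ-ID -> (test_path, result) from the requirement markers."""
    rows = ["| Requirement | Bound test | Outcome |", "|---|---|---|"]
    for req in applicable:
        if req in na:
            rows.append(f"| {req} | - | N/A |")
            continue
        path, result = outcomes.get(req, ("-", "missing"))
        # Anything other than a pass (failed, timed-out, missing) is FAIL.
        verdict = "PASS" if result == "passed" else "FAIL"
        rows.append(f"| {req} | {path} | {verdict} |")
    return rows
```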
Invariants¶
- No production code is modified by this skill; it is purely observational. The only file written is the connector page at `mkdocs/docs/connectors/{category}/{source}.md`.
- Test timeouts are treated as failures, not skips. A timed-out test contributes a `FAIL` row.
- The Validation table always has exactly the REQ-IDs in the applicable set for the category as rows, in the order they appear in `mkdocs/docs/platform/reference/catalog.md` § "Requirement catalog". REQ-IDs the category excludes per `references/<category>.md` appear as `N/A` rows, not as omissions.
Category specific invariants (which REQ-IDs apply, the standard N/A reason for each excluded REQ-ID, and the test suite assertions the tests for a category must verify) live in references/<category>.md. Read the file matching the input category before drafting the Validation table.
Implementation log row template¶
Overwrite the (pending) placeholder for validate-implementation (row 4) in the Implementation log table on the connector page. Use this row structure verbatim, replacing the bracketed placeholders:
| Validation | validate-implementation ({category}) | module path=src/connectors/{source}/ | mkdocs/docs/connectors/{category}/{source}.md §5 | {YYYY-MM-DD} | {git_short_sha} ({branch}) |
- `{category}`: the AppSec category input (`cmdb`, `scm`, `sast`, `sca`, `secrets`, `dast`, or `waf`).
- `{source}`: the source name input.
- `{YYYY-MM-DD}`: the run date in ISO format.
- `{git_short_sha}`: output of `git rev-parse --short HEAD` on the skill's repo.
- `{branch}`: output of `git rev-parse --abbrev-ref HEAD`.
Rows 1 (analyze-source), 2 (provision-source), and 3 (generate-connector) MUST remain untouched.
Rendered from .claude/skills/validate-implementation/SKILL.md. Source of truth lives in the skill file.