
feat(cli): Add missing database integrations, add tests for all CLI integrations add and edit commands #321

Draft
tkislan wants to merge 62 commits into main from tk/cli-integrations-add-edit-full

Conversation

Contributor

@tkislan tkislan commented Feb 23, 2026

Summary by CodeRabbit

  • New Features

    • The CLI integrations add/edit commands now support many more providers (AlloyDB, Athena, BigQuery, ClickHouse, Databricks, Dremio, MariaDB, Materialize, MindsDB, MongoDB, MySQL, PostgreSQL, Redshift, Snowflake, Spanner, SQL Server, Trino). pandas-dataframe is no longer configurable via CLI.
  • User Experience

    • Unified SSH/SSL prompts across providers, expanded per-provider authentication flows, and clearer errors for unsupported types.
  • Tests

    • Extensive end-to-end tests added for add/edit flows across supported integrations; legacy/overly broad test scenarios were consolidated or removed.

…lution; remove unnecessary TypeScript error suppression in integration tests.
…s for @inquirer/testing; modify lint-staged commands to use nvm; remove inline dependency configuration from vitest setup.
Base automatically changed from tk/cli-integrations-add-edit to main February 27, 2026 11:48

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/cli/src/commands/integrations/edit-integration.ts`:
- Around line 293-297: The catch around resolveEnvVarRefsFromMap is swallowing
the underlying error and letting validation run on unresolved env:... tokens;
instead re-throw a contextual error so callers see the real cause: catch the
error from resolveEnvVarRefsFromMap(integrationRawJson, envVars), build and
throw a new Error (or re-throw the original) with context like "Failed to
resolve env var references for integrationRawJson" and include the original
error message, or alternatively append the original error to the thrown Error;
ensure you remove the silent debug-only behavior so downstream schema validation
failures are avoided.
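
The re-throw described above can be sketched as follows. This is a hypothetical sketch, not the file's actual code: `resolveEnvVarRefsFromMap` is stubbed here with a toy resolver, since only its throwing behavior matters for the fix.

```typescript
// Toy stand-in for resolveEnvVarRefsFromMap: resolves "env:NAME" tokens
// against a map and throws when a reference is missing.
function resolveEnvVarRefsFromMap(
  raw: Record<string, string>,
  envVars: Record<string, string>,
): Record<string, string> {
  const out: Record<string, string> = {}
  for (const [key, value] of Object.entries(raw)) {
    if (value.startsWith('env:')) {
      const name = value.slice('env:'.length)
      const resolved = envVars[name]
      if (resolved === undefined) throw new Error(`Missing env var ${name}`)
      out[key] = resolved
    } else {
      out[key] = value
    }
  }
  return out
}

// Instead of swallowing the error and letting schema validation run on
// unresolved env:... tokens, wrap it with context so callers see the cause.
function resolveOrThrow(
  integrationRawJson: Record<string, string>,
  envVars: Record<string, string>,
): Record<string, string> {
  try {
    return resolveEnvVarRefsFromMap(integrationRawJson, envVars)
  } catch (error) {
    const cause = error instanceof Error ? error.message : String(error)
    throw new Error(
      `Failed to resolve env var references for integrationRawJson: ${cause}`,
    )
  }
}
```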

In `@packages/cli/src/commands/integrations/integrations-prompts/mongodb.ts`:
- Around line 16-24: The safeUrlParse function currently checks for
ReferenceError in its catch block but new URL(...) throws TypeError for
malformed URLs; update the catch to detect TypeError (e.g., if (error instanceof
TypeError) return null) so malformed URLs return null instead of rethrowing, and
apply the same change to the second occurrence of this logic elsewhere in the
file (the other try/catch at the later block).
- Around line 153-156: The port prompt should be skipped for SRV URIs: update
the logic around promptForOptionalStringPortField (variable portRaw) to only
call it when prefix !== SRV_PREFIX, and when prefix === SRV_PREFIX set port to
undefined (instead of using defaultValues?.port or
parsedFromConnectionString?.port). Also ensure the code that later builds the
connection string (where port is included) uses this port variable so SRV URIs
are not rendered as host:port; this change applies to both the initial prompt
block (portRaw) and the later port handling used to construct the URI.
- Around line 43-59: Tighten the prefix typing by introducing a literal union
type MongoPrefix = 'mongodb://' | 'mongodb+srv://' and use it instead of plain
string for the prefix parameter in buildMongoConnectionString and any other
functions/signatures referenced at the same spots (the parameter at the function
signature around buildMongoConnectionString and the select call occurrences
around the other locations), and update the select<string> usage to
select<MongoPrefix> (or change the constants to have as const and use typeof
constants union) so only the two valid prefixes are allowed; keep all other
parameter types the same.
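
Taken together, the three mongodb.ts suggestions above can be sketched as follows. This is a hypothetical sketch under the names used in the comments (`safeUrlParse`, `SRV_PREFIX`, `buildMongoConnectionString`), not the file's actual code.

```typescript
// 1) new URL(...) throws TypeError (not ReferenceError) for malformed
//    input, so TypeError is the error type to swallow.
function safeUrlParse(value: string): URL | null {
  try {
    return new URL(value)
  } catch (error) {
    if (error instanceof TypeError) return null
    throw error
  }
}

// 2) Literal union so only the two valid prefixes are accepted at
//    compile time.
const MONGODB_PREFIX = 'mongodb://' as const
const SRV_PREFIX = 'mongodb+srv://' as const
type MongoPrefix = typeof MONGODB_PREFIX | typeof SRV_PREFIX

// 3) SRV URIs must not carry a port, so the port is dropped when the
//    prefix is mongodb+srv://.
function buildMongoConnectionString(
  prefix: MongoPrefix,
  host: string,
  port?: string,
): string {
  const authority = prefix !== SRV_PREFIX && port ? `${host}:${port}` : host
  return `${prefix}${authority}`
}
```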

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 7b16bb8 and fc921ca.

📒 Files selected for processing (3)
  • packages/cli/src/cli.ts
  • packages/cli/src/commands/integrations/edit-integration.ts
  • packages/cli/src/commands/integrations/integrations-prompts/mongodb.ts


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssh-fields.ts`:
- Around line 12-32: The returned SSH object currently has duplicate keys
(sshPort and sshUser twice) and marks fields optional even though
promptForRequiredStringField and promptForRequiredStringPortField guarantee
non-empty strings; fix by removing the conditional spread fragments and only
return a single object with sshEnabled: true and the concrete keys sshHost,
sshPort, sshUser (no conditional spreads), and update the function return type
to Promise<{ sshEnabled: true; sshHost: string; sshPort: string; sshUser: string
} | null> so types reflect required values; reference variables and helpers:
sshEnabled, sshHost, sshPort, sshUser, promptForRequiredStringField,
promptForRequiredStringPortField, and ensure no duplicate keys remain in the
return object.


📥 Commits

Reviewing files that changed from the base of the PR and between fc921ca and 776ce04.

📒 Files selected for processing (15)
  • packages/cli/src/commands/integrations/integrations-prompts/alloydb.ts
  • packages/cli/src/commands/integrations/integrations-prompts/clickhouse.ts
  • packages/cli/src/commands/integrations/integrations-prompts/databricks.ts
  • packages/cli/src/commands/integrations/integrations-prompts/dremio.ts
  • packages/cli/src/commands/integrations/integrations-prompts/mariadb.ts
  • packages/cli/src/commands/integrations/integrations-prompts/materialize.ts
  • packages/cli/src/commands/integrations/integrations-prompts/mindsdb.ts
  • packages/cli/src/commands/integrations/integrations-prompts/mongodb.ts
  • packages/cli/src/commands/integrations/integrations-prompts/mysql.ts
  • packages/cli/src/commands/integrations/integrations-prompts/pgsql.ts
  • packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssh-fields.ts
  • packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssl-fields.ts
  • packages/cli/src/commands/integrations/integrations-prompts/redshift.ts
  • packages/cli/src/commands/integrations/integrations-prompts/sql-server.ts
  • packages/cli/src/commands/integrations/integrations-prompts/trino.ts


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssh-fields.ts (1)

12-30: 🛠️ Refactor suggestion | 🟠 Major

Tighten SSH return shape to required fields and drop redundant spreads.

Line 12 advertises optional SSH fields, but Lines 18-23 collect required values. The conditional spreads on Lines 27-29 only widen the type and add dead branching.

Suggested fix
-export async function promptForSshFields(defaultValues?: {
+export async function promptForSshFields(defaultValues?: {
   sshEnabled?: boolean
   sshHost?: string
   sshPort?: string
   sshUser?: string
-}): Promise<{ sshEnabled: true; sshHost?: string; sshPort?: string; sshUser?: string } | null> {
+}): Promise<{ sshEnabled: true; sshHost: string; sshPort: string; sshUser: string } | null> {
@@
     return {
       sshEnabled: true,
-      ...(sshHost.length > 0 ? { sshHost } : {}),
-      ...(sshPort.length > 0 ? { sshPort } : {}),
-      ...(sshUser.length > 0 ? { sshUser } : {}),
+      sshHost,
+      sshPort,
+      sshUser,
     }
   }

As per coding guidelines, "Use strict type checking in TypeScript files" and "Prefer type safety over convenience in TypeScript".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssh-fields.ts`
around lines 12 - 30, The function currently types its SSH return as optional
fields but always collects required values (sshHost, sshPort, sshUser) and then
uses conditional spreads; update the return type for this function to require
sshHost, sshPort, and sshUser when sshEnabled is true (e.g. { sshEnabled: true;
sshHost: string; sshPort: string; sshUser: string }), and simplify the return in
the block where sshEnabled === true to directly return { sshEnabled: true,
sshHost, sshPort, sshUser } (remove the conditional spread logic and any dead
branches); adjust any callers/types if necessary to match the stricter shape.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In
`@packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssh-fields.ts`:
- Around line 12-30: The function currently types its SSH return as optional
fields but always collects required values (sshHost, sshPort, sshUser) and then
uses conditional spreads; update the return type for this function to require
sshHost, sshPort, and sshUser when sshEnabled is true (e.g. { sshEnabled: true;
sshHost: string; sshPort: string; sshUser: string }), and simplify the return in
the block where sshEnabled === true to directly return { sshEnabled: true,
sshHost, sshPort, sshUser } (remove the conditional spread logic and any dead
branches); adjust any callers/types if necessary to match the stricter shape.


📥 Commits

Reviewing files that changed from the base of the PR and between 776ce04 and 93097ab.

📒 Files selected for processing (3)
  • packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssh-fields.ts
  • packages/cli/src/commands/integrations/integrations-prompts/snowflake.ts
  • packages/cli/src/utils/typescript.ts


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

♻️ Duplicate comments (2)
packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts (1)

233-235: ⚠️ Potential issue | 🔴 Critical

Fix the failing service-account .env snapshot.

This snapshot is currently failing in CI and is likely stale after the private-key placeholder update.

Suggested patch
-    expect(envContent).toMatchInlineSnapshot(`
-      "AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PRIVATEKEY="FAKE_PRIVATE_KEY"
-      "
-    `)
+    expect(envContent).toMatchInlineSnapshot(`
+      "AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PRIVATEKEY=FAKE_PRIVATE_KEY
+      "
+    `)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts`
around lines 233 - 235, The service-account .env inline snapshot in the test
fails because the private-key placeholder changed; update the inline snapshot
for envContent in add-integration.snowflake.test.ts to match the new PRIVATEKEY
placeholder format used by the code (adjust the quoted FAKE_PRIVATE_KEY line and
any surrounding newlines/escaping so the snapshot string exactly matches the
generated env content produced by the service-account flow, referencing the
envContent variable in the test).
packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts (1)

101-118: 🛠️ Refactor suggestion | 🟠 Major

Use dedent for multiline YAML comparisons.

These multiline assertions should use dedent-based exact string checks instead of inline snapshots.

Proposed refactor
+import dedent from 'ts-dedent'
...
-    expect(yamlContent).toMatchInlineSnapshot(`
-      "#yaml-language-server: $schema=https://example.com/schema.json
-
-      integrations:
-        - id: sf-id-001
-          ...
-      "
-    `)
+    expect(yamlContent).toBe(dedent`
+      #yaml-language-server: $schema=https://example.com/schema.json
+
+      integrations:
+        - id: sf-id-001
+          ...
+    `)
#!/bin/bash
# Verify multiline comparisons are using inline snapshots and dedent is not imported.
rg -nP --type=ts 'toMatchInlineSnapshot\(' packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts
rg -nP --type=ts "from 'ts-dedent'|from \"ts-dedent\"" packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts

As per coding guidelines, "Use dedent from ts-dedent for multiline string comparisons in tests".

Also applies to: 168-185, 237-250

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts`
around lines 101 - 118, Replace the inline snapshot assertions in
edit-integration.snowflake.test.ts (the
expect(yamlContent).toMatchInlineSnapshot(...) calls) with exact string
comparisons using dedent: import dedent from 'ts-dedent' at the top, then change
each toMatchInlineSnapshot to expect(yamlContent).toEqual(dedent(`...`)) with
the same YAML text inside; apply the same replacement for the other occurrences
noted (the blocks around the other toMatchInlineSnapshot calls referenced) so
all multiline YAML comparisons use ts-dedent instead of inline snapshots.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@packages/cli/src/commands/integrations/tests/add-integration.mariadb.test.ts`:
- Around line 78-232: Add two new tests alongside the existing ones: 1) an
error/validation flow that simulates entering an invalid port then correcting it
— reuse createIntegration and fillBaseFields, after the SSH/SSL prompts type an
invalid port (e.g., "notaport"), assert the CLI shows a validation/error
message, then type a valid port (e.g., "3306") and finish the flow, finally
assert the written YAML contains the corrected port; and 2) a special-character
case that supplies values with quotes, newlines and shell-sensitive characters
for name/password/caCertificateText (e.g., "My \"Weird\" DB", "p@ss\nword$!",
and multiline cert content), run createIntegration with envFile, finish prompts
(enable SSL to capture cert), then assert the YAML and .env capture those values
correctly (env ref for password and caCertificateText) and that special
characters are properly preserved/escaped. Reference the existing test
helpers/symbols createIntegration, fillBaseFields, screen, and readFile to
implement these checks.
- Around line 120-174: The SSH-enabled test for creates mariadb integration with
SSH tunnel enabled currently asserts only the YAML file; update it to also read
and assert the .env file at envFilePath (used earlier) to ensure secrets were
persisted — specifically check the DB password key
(env:AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PASSWORD) and any SSH-related secret
entries that should be written, using the mocked mockUUID to construct expected
env var names; place these assertions after awaiting promise returned by
createIntegration so both YAML and .env are verified.

In
`@packages/cli/src/commands/integrations/tests/add-integration.mindsdb.test.ts`:
- Around line 120-174: The test case for "creates mindsdb integration with SSH
tunnel enabled" is missing the assertion that the saved password was written to
the .env file; after awaiting promise and before or after asserting yamlContent,
read the envFilePath (using readFile(envFilePath, 'utf-8')) and add an
expectation that its contents include the generated env var key and password
(the same key used in other tests, e.g.
AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PASSWORD) or use toMatchInlineSnapshot to
assert the exact env content; this ensures the createIntegration flow (invoked
via createIntegration and filled by fillBaseFields) actually persisted the
password to the .env file.

In
`@packages/cli/src/commands/integrations/tests/add-integration.mongodb.test.ts`:
- Around line 31-50: The fillMongoCredentials helper currently accepts and
destructures a redundant type parameter (type/default 'mongodb') that is never
used; remove the type property from the inputs type, the default destructuring
(delete "type?: string" and "type = 'mongodb'"), and eliminate any references to
type inside fillMongoCredentials, then update all test callers of
fillMongoCredentials in this file to stop passing a type field (or adjust calls
if they rely on that field); keep the remaining defaults (name, host, port,
user, password, database) intact and run tests to confirm nothing else expects
the removed parameter.

In `@packages/cli/src/commands/integrations/tests/add-integration.mysql.test.ts`:
- Around line 120-174: The test for SSH tunnel creates an envFile but never
asserts its contents; after awaiting promise, read the envFilePath (used in the
test) and assert it contains the expected environment variable line that matches
the UUID-derived key used in the YAML (constructed from mockUUID, e.g.
AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PASSWORD) and the secret value supplied by
fillBaseFields; add this check alongside the existing yamlContent assertion by
calling readFile(envFilePath, 'utf-8') and expect(...) toContain the exact env
var assignment.

In `@packages/cli/src/commands/integrations/tests/add-integration.pgsql.test.ts`:
- Around line 178-236: The test "creates integration with SSH tunnel enabled" is
missing the .env file assertion: after awaiting promise and reading yamlContent,
also read envFilePath (the envFile passed into createIntegration) and assert its
contents include the expected environment variable mapping used in the YAML (the
env var key shown as AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PASSWORD derived from
the mocked UUID 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'), e.g. ensure the .env
file contains a line like
AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PASSWORD=<password> to match the YAML
password reference.

In
`@packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts`:
- Around line 239-284: The test for Native Snowflake OAuth collects a client
secret but never asserts the .env file was written; after awaiting promise in
the "creates snowflake integration with Native Snowflake OAuth auth" test (the
test that calls createIntegration, fillTypeAndName, and fillOptionalFields), add
code to read the env file at envFilePath and assert it contains the expected
CLIENTSECRET entry (the env var name used in the YAML:
AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__CLIENTSECRET) and the correct value
"native-client-secret"; keep the existing YAML assertion but add an assertion
like
expect(envContent).toContain('AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__CLIENTSECRET=native-client-secret')
so the test verifies both the integrations YAML and the .env output.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts`:
- Around line 11-253: The tests in edit-integration.snowflake.test.ts only cover
happy paths; add new test cases using the existing helpers (editIntegration,
screen, EXISTING_PASSWORD_YAML, EXISTING_KEY_PAIR_YAML, envFilePath/tempDir) to
cover (1) submitting empty/invalid required fields (e.g., blank Account Name,
blank Username, missing password/privateKey) and asserting the CLI shows
validation errors and does not write the file, (2) entering
credentials/identifiers containing special characters and verifying they are
preserved in integrations.yaml and .env, and (3) simulating a validation-failure
flow (mock any validator used by editIntegration or trigger an invalid
authMethod) and assert the correct error messages are shown and file remains
unchanged; reuse the same screen interaction pattern (screen.type,
screen.keypress, screen.next) and the final readFile/assert checks used in the
existing it() blocks.
- Around line 56-253: Add assertions to each test ("edits snowflake password
integration keeping all defaults", "updates account name when user types new
value", and "edits snowflake service account key pair integration keeping
defaults") to lock in that editIntegration does not mutate process.env: set a
sentinel env var (e.g., process.env.__EDIT_INTEGRATION_SENTINEL = '1') before
calling editIntegration (the promise assignment that calls editIntegration) and
after awaiting the flow (after await promise) assert the sentinel still equals
'1'. Reference the editIntegration call and corresponding EXISTING_PASSWORD_YAML
/ EXISTING_KEY_PAIR_YAML setup in each test so the checks are added immediately
before the promise is created and immediately after the promise resolves.

---

Duplicate comments:
In
`@packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts`:
- Around line 233-235: The service-account .env inline snapshot in the test
fails because the private-key placeholder changed; update the inline snapshot
for envContent in add-integration.snowflake.test.ts to match the new PRIVATEKEY
placeholder format used by the code (adjust the quoted FAKE_PRIVATE_KEY line and
any surrounding newlines/escaping so the snapshot string exactly matches the
generated env content produced by the service-account flow, referencing the
envContent variable in the test).

In
`@packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts`:
- Around line 101-118: Replace the inline snapshot assertions in
edit-integration.snowflake.test.ts (the
expect(yamlContent).toMatchInlineSnapshot(...) calls) with exact string
comparisons using dedent: import dedent from 'ts-dedent' at the top, then change
each toMatchInlineSnapshot to expect(yamlContent).toEqual(dedent(`...`)) with
the same YAML text inside; apply the same replacement for the other occurrences
noted (the blocks around the other toMatchInlineSnapshot calls referenced) so
all multiline YAML comparisons use ts-dedent instead of inline snapshots.


📥 Commits

Reviewing files that changed from the base of the PR and between 93097ab and fb792c1.

📒 Files selected for processing (8)
  • packages/cli/src/commands/integrations/tests/add-integration.mariadb.test.ts
  • packages/cli/src/commands/integrations/tests/add-integration.mindsdb.test.ts
  • packages/cli/src/commands/integrations/tests/add-integration.mongodb.test.ts
  • packages/cli/src/commands/integrations/tests/add-integration.mysql.test.ts
  • packages/cli/src/commands/integrations/tests/add-integration.pgsql.test.ts
  • packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts
  • packages/cli/src/commands/integrations/tests/add-integration.trino.test.ts
  • packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts
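
Several of the comments above ask for the same .env assertion and derive the expected variable name from the mocked UUID (e.g. 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' becoming AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PASSWORD). That derivation can be sketched with a small helper; this is hypothetical and not a helper that exists in the codebase, only an illustration of the key format shown in the comments.

```typescript
// Builds the env var key the tests expect from the mocked UUID, e.g.
// envVarKeyFromUuid('aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', 'password')
//   -> 'AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PASSWORD'
function envVarKeyFromUuid(uuid: string, field: string): string {
  return `${uuid.replace(/-/g, '_').toUpperCase()}__${field.toUpperCase()}`
}

// In a test, the requested assertion would then read along these lines:
//   const envContent = await readFile(envFilePath, 'utf-8')
//   expect(envContent).toContain(
//     `${envVarKeyFromUuid(mockUUID, 'password')}=secret-password`,
//   )
```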


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts (1)

239-284: ⚠️ Potential issue | 🟡 Minor

Missing .env assertion for Native Snowflake OAuth.

Test collects native-client-secret but never verifies the .env write. Add:

     const yamlContent = await readFile(filePath, 'utf-8')
+    const envContent = await readFile(envFilePath, 'utf-8')

And assert envContent contains the CLIENTSECRET entry.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts`
around lines 239 - 284, The test creates a Native Snowflake OAuth integration
but never asserts that the client secret was written to the env file; after
awaiting promise and before finishing the test, read the envFilePath (using
readFile(envFilePath, 'utf-8')) and add an expectation that the env content
contains the generated env var key with the secret, e.g.
expect(envContent).toContain('AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__CLIENTSECRET=native-client-secret'),
referencing the mockUUID value and the existing
createIntegration/fillTypeAndName/fillOptionalFields flow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In
`@packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts`:
- Around line 239-284: The test creates a Native Snowflake OAuth integration but
never asserts that the client secret was written to the env file; after awaiting
promise and before finishing the test, read the envFilePath (using
readFile(envFilePath, 'utf-8')) and add an expectation that the env content
contains the generated env var key with the secret, e.g.
expect(envContent).toContain('AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__CLIENTSECRET=native-client-secret'),
referencing the mockUUID value and the existing
createIntegration/fillTypeAndName/fillOptionalFields flow.


📥 Commits

Reviewing files that changed from the base of the PR and between fb792c1 and 52ca3ed.

📒 Files selected for processing (1)
  • packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts (2)

80-352: ⚠️ Potential issue | 🟡 Minor

Add a regression assertion that process.env stays untouched.

Given the dotenv handling contract, add sentinel assertions before/after each editIntegration(...) run to lock in no global env mutation.

Suggested minimal pattern
+    process.env.__EDIT_INTEGRATION_SENTINEL = '1'
+    expect(process.env.__EDIT_INTEGRATION_SENTINEL).toBe('1')
     const promise = editIntegration({ file: filePath, envFile: envFilePath, id: 'sf-id-001' })
     // ... prompt steps ...
     await promise
+    expect(process.env.__EDIT_INTEGRATION_SENTINEL).toBe('1')

Based on learnings, "Do not load dotenv into process.env during the CLI integrations edit flow... process.env remains unchanged during edit integration tests."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts`
around lines 80 - 352, Add a regression assertion around each
editIntegration(...) invocation to ensure process.env is not mutated: capture a
shallow snapshot of process.env before calling editIntegration (e.g., copy
keys/values) and after awaiting the editIntegration promise compare the current
process.env to the snapshot and fail the test if any added/changed/deleted keys
are detected; update tests referencing editIntegration (examples: the tests
calling editIntegration({ file, envFile, id: 'sf-id-001' }), editIntegration({
id: 'sf-id-002' }), editIntegration({ id: 'sf-id-003' })) to include this
pre/post assertion.

76-352: ⚠️ Potential issue | 🟠 Major

Coverage gap: missing error/edge/special-character scenarios.

This suite is still mostly happy-path. Please add invalid/empty required inputs and special-character credential cases with failure assertions (no file write / correct validation messaging).

As per coding guidelines, "Test edge cases, error handling, and special characters in test files".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts`
around lines 76 - 352, The tests currently only exercise happy paths; add
negative and special-character cases to the same test file by creating new
it(...) tests that call editIntegration({ file, envFile, id }) (same pattern as
existing tests) and drive the interactive prompts via
screen.type/screen.keypress, then assert validation error messages are shown
(via screen.getScreen()) and that no file writes occur (no changes to
integrations.yaml via readFile) and env file remains unchanged (readFile on
envFilePath). Specifically add tests that: 1) submit empty required fields
(e.g., empty Account Name, Username or Client ID) and expect the CLI to display
a validation error and abort without updating the YAML; 2) provide invalid
characters or malicious input in credentials (passwords, privateKey) and expect
validation failure messages and no writes; and 3) confirm that when validation
passes for special-character credentials the env file is updated correctly (read
envFilePath) and the YAML contains env:VAR references (matching existing
expectations). Use the existing constants EXISTING_PASSWORD_YAML,
EXISTING_KEY_PAIR_YAML, EXISTING_OKTA_YAML and the same promise/await pattern so
tests integrate consistently with the existing screen-driven flow.

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 0e24ecc and d7590c8.

📒 Files selected for processing (1)
  • packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (4)
packages/cli/src/commands/integrations/tests/add-integration.mariadb.test.ts (1)

78-238: ⚠️ Potential issue | 🟠 Major

Add missing error-path and special-character tests.

From Line 78 onward, coverage is still happy-path only. Please add:

  1. an invalid port → correction flow, and
  2. a special-character serialization case (name, password, caCertificateText).
Proposed test additions
+  it('re-prompts on invalid port and persists corrected value', async () => {
+    const filePath = join(tempDir, 'integrations-invalid-port.yaml')
+    const envFilePath = join(tempDir, '.env')
+    vi.spyOn(crypto, 'randomUUID').mockReturnValue('aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee')
+
+    const promise = createIntegration({ file: filePath, envFile: envFilePath })
+    await fillBaseFields({ port: 'notaport' })
+
+    await screen.next()
+    expect(screen.getScreen()).toContain('Port')
+    expect(screen.getScreen().toLowerCase()).toContain('invalid')
+    screen.type('3306')
+    screen.keypress('enter')
+
+    await screen.next()
+    screen.keypress('enter') // SSH: no
+    await screen.next()
+    screen.keypress('enter') // SSL: no
+    await promise
+
+    const yamlContent = await readFile(filePath, 'utf-8')
+    expect(yamlContent).toContain('port: "3306"')
+  })
+
+  it('preserves special characters in env-backed values', async () => {
+    const filePath = join(tempDir, 'integrations-special-chars.yaml')
+    const envFilePath = join(tempDir, '.env')
+    vi.spyOn(crypto, 'randomUUID').mockReturnValue('aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee')
+
+    const promise = createIntegration({ file: filePath, envFile: envFilePath })
+    await fillBaseFields({
+      name: 'My "Weird" DB',
+      password: 'p@ss\\nword$!',
+    })
+
+    await screen.next()
+    screen.keypress('enter') // SSH: no
+    await screen.next()
+    screen.type('y')
+    screen.keypress('enter') // SSL: yes
+    await screen.next()
+    screen.type('ca-"name"')
+    screen.keypress('enter')
+    await screen.next()
+    screen.type('-----BEGIN CERT-----\\nline$1\\n"quoted"\\n-----END CERT-----')
+    screen.keypress('enter')
+    await promise
+
+    const yamlContent = await readFile(filePath, 'utf-8')
+    const envContent = await readFile(envFilePath, 'utf-8')
+    expect(yamlContent).toContain('name: My "Weird" DB')
+    expect(yamlContent).toContain('password: env:AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PASSWORD')
+    expect(yamlContent).toContain('caCertificateText: env:AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__CACERTIFICATETEXT')
+    expect(envContent).toContain('AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PASSWORD=')
+    expect(envContent).toContain('AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__CACERTIFICATETEXT=')
+  })

As per coding guidelines "Test edge cases, error handling, and special characters in test files".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/integrations/tests/add-integration.mariadb.test.ts`
around lines 78 - 238, Add two tests after the existing happy-paths: one
"invalid port → correction" test that uses createIntegration, fillBaseFields and
screen to type a non-numeric port (e.g., "notaport"), assert the prompt shows a
validation error, then type the corrected port "3306" and complete creation,
finally assert the YAML contains port: "3306"; and one "special-character
serialization" test that sets name, password and caCertificateText to values
containing quotes, newlines and unicode via fillBaseFields and the SSL flow (use
screen.type for CA text), complete createIntegration, then assert the written
YAML and .env (readFile) preserve/serialize those special characters correctly
(check env var name generation via the mocked crypto.randomUUID and that env
values contain the original special characters); use the same helpers/variables
from the diff (createIntegration, fillBaseFields, screen, tempDir,
vi.spyOn(crypto,'randomUUID')) to locate where to add the tests.
packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts (3)

11-356: ⚠️ Potential issue | 🟠 Major

Coverage is still missing edge/error/special-character scenarios.

Current cases are mostly successful flows. Add invalid/empty required-field cases and special-character persistence checks to lock down failure handling.

As per coding guidelines, "Test edge cases, error handling, and special characters in test files".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts`
around lines 11 - 356, Add tests exercising edge/error and special-character
scenarios for editIntegration: create new it() cases that call editIntegration({
file, envFile, id }) using the existing fixtures (EXISTING_PASSWORD_YAML,
EXISTING_OKTA_YAML, EXISTING_KEY_PAIR_YAML) to cover (1) submitting empty
required fields (e.g., leave Username or Account Name blank) and asserting the
command fails or prompts with validation error, (2) providing invalid values
(e.g., malformed accountName or whitespace-only) and asserting expected error
handling, and (3) entering values with special characters/unicode for name,
username, and secrets and verifying those characters persist in
integrations.yaml and .env; reference the interactive helpers screen,
screen.type, screen.keypress and verify outputs via readFile/expect like the
other tests to validate behavior.

121-138: ⚠️ Potential issue | 🟠 Major

Use dedent for multiline YAML comparisons.

Line 121, Line 188, Line 259, and Line 334 use inline snapshots for multiline strings, but this suite should use dedent-based explicit string comparisons for readability and consistency.

Suggested refactor pattern
+import dedent from 'ts-dedent'
...
-    expect(yamlContent).toMatchInlineSnapshot(`
-      "#yaml-language-server: $schema=https://example.com/schema.json
-      ...
-      "
-    `)
+    expect(yamlContent).toBe(dedent`
+      #yaml-language-server: $schema=https://example.com/schema.json
+
+      integrations:
+        - id: sf-id-001
+          name: Production Snowflake
+          type: snowflake
+          federated_auth_method: null
+          metadata:
+            accountName: test-account.us-east-1
+            authMethod: password
+            username: snowflake-user
+            password: env:SF_ID_001__PASSWORD
+            warehouse: my_warehouse
+            database: MY_DB
+            role: SYSADMIN
+    `)
#!/bin/bash
# Verify multiline comparison style and dedent usage in this test file.
rg -n 'toMatchInlineSnapshot\(`|toEqual\(`' packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts
rg -n "ts-dedent|dedent" packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts

As per coding guidelines, "Use dedent from ts-dedent for multiline string comparisons in tests".

Also applies to: 188-205, 259-273, 334-350

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts`
around lines 121 - 138, Replace the inline snapshot assertions that use
toMatchInlineSnapshot with dedent-based explicit string comparisons: import
dedent from 'ts-dedent', then change
expect(yamlContent).toMatchInlineSnapshot(`...`) to
expect(yamlContent).toEqual(dedent(`...`)) using the same YAML literal content
(trim any leading/trailing blank lines so dedent produces the exact string) for
the yamlContent assertions and apply the same transformation to the other
multiline assertions in this test file that currently use inline snapshots or
raw multiline toEqual; keep the variable name yamlContent and the assertion
targeting logic unchanged.

76-119: ⚠️ Potential issue | 🟡 Minor

Add process.env non-mutation assertions around each edit flow.

At Line 83, Line 148, Line 215, and Line 284, wrap editIntegration(...) with sentinel checks to prevent regressions where dotenv values leak into global runtime state.

Minimal pattern to apply per test
+    process.env.__EDIT_INTEGRATION_SENTINEL = '1'
+    expect(process.env.__EDIT_INTEGRATION_SENTINEL).toBe('1')
     const promise = editIntegration({ file: filePath, envFile: envFilePath, id: 'sf-id-001' })
     ...
     await promise
+    expect(process.env.__EDIT_INTEGRATION_SENTINEL).toBe('1')
+    delete process.env.__EDIT_INTEGRATION_SENTINEL

Based on learnings, "Do not load dotenv into process.env during the CLI integrations edit flow. dotenv should only be read to resolve environment variable references in the integration metadata, not applied to the global runtime environment. In tests, avoid side effects by mocking dotenv or isolating environment loading so process.env remains unchanged during edit integration tests."

Also applies to: 141-185, 208-254, 277-331

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts`
around lines 76 - 119, Before and after invoking the edit flow (the
editIntegration call that returns promise), capture a shallow snapshot of
process.env (e.g., const envBefore = { ...process.env }) and after await promise
assert the environment was not mutated (e.g.,
expect(process.env).toEqual(envBefore)); apply this sentinel pattern around each
editIntegration invocation in the test (the promise variable created by
editIntegration in this file) so dotenv resolution does not leak into global
process.env during the editIntegration flows.
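The per-test sentinel checks above can also be factored into one reusable guard; a minimal sketch (the `withEnvSnapshot` name is illustrative, not part of the codebase):

```typescript
// Snapshot process.env around an async flow and fail if any key was
// added, removed, or changed while it ran.
async function withEnvSnapshot<T>(fn: () => Promise<T>): Promise<T> {
  const before = { ...process.env }
  const result = await fn()
  const after = { ...process.env }
  const keys = new Set([...Object.keys(before), ...Object.keys(after)])
  const changed = [...keys].filter(key => before[key] !== after[key])
  if (changed.length > 0) {
    throw new Error(`process.env mutated during the flow: ${changed.join(', ')}`)
  }
  return result
}
```

In a test this would wrap the call under inspection, e.g. `await withEnvSnapshot(() => editIntegration({ file, envFile, id }))`.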

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d7590c8 and 6826d4d.

📒 Files selected for processing (2)
  • packages/cli/src/commands/integrations/tests/add-integration.mariadb.test.ts
  • packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts

coderabbitai[bot]
coderabbitai bot previously approved these changes Mar 3, 2026

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/cli/src/commands/integrations/add-integration.test.ts (2)

19-31: ⚠️ Potential issue | 🟠 Major

Add a regression assertion for hidden integration types.

This suite does not verify that non-configurable types (e.g. pandas-dataframe) are excluded from the type picker. That is a core behavior change and should be locked by this shared prompt test.

Suggested test addition
   describe('promptForIntegrationType', () => {
     it('returns the selected integration type', async () => {
       const promise = promptForIntegrationType()

       expect(screen.getScreen()).toContain('Select integration type:')
+      expect(screen.getScreen()).not.toContain('pandas-dataframe')

       screen.type('pgsql')
       screen.keypress('enter')

       const result = await promise
       expect(result).toMatchInlineSnapshot(`"pgsql"`)
     })
   })

As per coding guidelines, **/*.test.{ts,tsx}: Create comprehensive tests for all new features using Vitest.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/integrations/add-integration.test.ts` around lines
19 - 31, The test for promptForIntegrationType is missing a regression assertion
that non-configurable integration types (e.g. "pandas-dataframe") are not shown;
update the 'promptForIntegrationType' test to assert the rendered picker does
NOT contain hidden types before selecting 'pgsql' (use the same screen utilities
used in the test to check absence of "pandas-dataframe"), then proceed with
typing and asserting the selected result as before so the behavior is locked by
the test.

33-62: ⚠️ Potential issue | 🟡 Minor

Missing special-character coverage for integration names.

You test required-field validation, but not special characters in names. Add one case to prevent regressions in CLI input handling.

Suggested test addition
   describe('promptForIntegrationName', () => {
@@
     it('rejects empty name with validation error', async () => {
@@
       const result = await promise
       expect(result).toMatchInlineSnapshot(`"Valid Name"`)
     })
+
+    it('accepts names with special characters', async () => {
+      const promise = promptForIntegrationName()
+
+      expect(screen.getScreen()).toContain('Integration name:')
+
+      screen.type('db-prod_01:eu-west@team')
+      screen.keypress('enter')
+
+      const result = await promise
+      expect(result).toMatchInlineSnapshot(`"db-prod_01:eu-west@team"`)
+    })
   })

As per coding guidelines, Test edge cases, error handling, and special characters in test files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/integrations/add-integration.test.ts` around lines
33 - 62, The tests for promptForIntegrationName cover required-field validation
but lack a case verifying names with special characters; add a new it test under
describe('promptForIntegrationName') that calls promptForIntegrationName(),
asserts the prompt appears via screen.getScreen(), types a name containing a
variety of special characters (e.g. punctuation, symbols, spaces), sends the
enter key, awaits the promise result and asserts the returned string exactly
matches the typed special-character name to prevent regressions in CLI input
handling for the promptForIntegrationName flow.
♻️ Duplicate comments (2)
packages/cli/src/commands/integrations/integrations-prompts/mongodb.ts (1)

19-21: ⚠️ Potential issue | 🟡 Minor

Bare catch swallows all errors.

This silently catches programming errors beyond URL parsing issues. Consider catching TypeError specifically (the actual exception type from new URL()).

Proposed fix
   try {
     return new URL(url)
-  } catch {
-    return null
+  } catch (error) {
+    if (error instanceof TypeError) {
+      return null
+    }
+    throw error
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/integrations/integrations-prompts/mongodb.ts`
around lines 19 - 21, The current bare catch in the URL parsing block swallows
all errors; change it to catch the thrown error object and only handle the URL
parsing TypeError from new URL(), returning null for that specific case, while
re-throwing any other error types so programming errors aren't masked; update
the catch in the mongodb URL parsing section (the try surrounding new URL(...))
to check err instanceof TypeError and return null, otherwise throw err.
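For reference, the narrowed-catch shape the prompt describes, in full (`tryParseUrl` is an assumed helper name):

```typescript
// Parse a connection URL, treating only parse failures (TypeError from
// the WHATWG URL constructor) as "invalid input"; anything else is a
// programming error and is re-thrown.
function tryParseUrl(url: string): URL | null {
  try {
    return new URL(url)
  } catch (error) {
    if (error instanceof TypeError) {
      return null
    }
    throw error
  }
}
```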
packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssh-fields.ts (1)

7-12: 🧹 Nitpick | 🔵 Trivial

Return type should reflect non-optional fields.

promptForRequiredStringField and promptForRequiredStringPortField guarantee non-empty strings. The return type marks sshHost, sshPort, sshUser as optional, but they're always present when sshEnabled is true.

Proposed fix
 export async function promptForSshFields(defaultValues?: {
   sshEnabled?: boolean
   sshHost?: string
   sshPort?: string
   sshUser?: string
-}): Promise<{ sshEnabled: true; sshHost?: string; sshPort?: string; sshUser?: string } | null> {
+}): Promise<{ sshEnabled: true; sshHost: string; sshPort: string; sshUser: string } | null> {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssh-fields.ts`
around lines 7 - 12, The return type of promptForSshFields is incorrect: when
sshEnabled is true the helpers promptForRequiredStringField and
promptForRequiredStringPortField guarantee sshHost, sshPort and sshUser are
non-empty, so update the Promise return type from { sshEnabled: true; sshHost?:
string; sshPort?: string; sshUser?: string } | null to reflect non-optional
fields (e.g., { sshEnabled: true; sshHost: string; sshPort: string; sshUser:
string } | null) and ensure any code paths constructing that object in
promptForSshFields return those required string properties.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 1b9b78ee-d54f-4e6e-bee6-5654a369798f

📥 Commits

Reviewing files that changed from the base of the PR and between 6826d4d and e2a2dcd.

📒 Files selected for processing (7)
  • packages/cli/src/commands/integrations/add-integration.test.ts
  • packages/cli/src/commands/integrations/edit-integration.ts
  • packages/cli/src/commands/integrations/integrations-prompts/mongodb.ts
  • packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssh-fields.ts
  • packages/cli/src/commands/run.test.ts
  • packages/cli/src/utils/env-var-refs.test.ts
  • packages/cli/src/utils/typescript.ts
💤 Files with no reviewable changes (1)
  • packages/cli/src/utils/env-var-refs.test.ts

coderabbitai[bot]
coderabbitai bot previously approved these changes Mar 4, 2026
tkislan added 2 commits March 25, 2026 12:18
…or Redshift and Trino integrations

- Consolidated authentication logic for Redshift and Trino integrations, introducing dedicated functions for handling different auth methods (e.g., IAM Role, OAuth).
- Updated prompts to ensure correct field handling based on selected authentication methods, including clearing defaults when switching methods.
- Added tests to verify integration behavior when changing authentication methods, ensuring proper state management and user experience.
- Improved code organization and readability across integration files.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/cli/src/commands/integrations/edit-integration.ts (2)

305-307: ⚠️ Potential issue | 🟠 Major

Validate the edited config before mutating YAML or .env.

promptForIntegrationConfig() is runtime input, not a schema guarantee. Right now malformed values from prompt handlers can be written straight to disk because only the existing config is parsed.

🔧 Suggested fix
   const configForPrompt = existingConfigResult.data
 
   const newConfig = await promptForIntegrationConfig(configForPrompt)
+  const newConfigResult = databaseIntegrationConfigSchema.safeParse(newConfig)
+  if (!newConfigResult.success) {
+    throw new Error(
+      `Updated integration "${newConfig.id}" failed validation: ${newConfigResult.error.issues.map(i => i.message).join('; ')}`
+    )
+  }
+  const validatedConfig = newConfigResult.data
 
   // Update top-level fields on the integration map
-  found.set('name', newConfig.name)
+  found.set('name', validatedConfig.name)
 
   const secrets = updateIntegrationMetadataMap({
     metadataMap,
-    integrationId: newConfig.id,
-    integrationMetadata: newConfig.metadata,
-    secretPaths: getSecretFieldPaths(newConfig.type),
+    integrationId: validatedConfig.id,
+    integrationMetadata: validatedConfig.metadata,
+    secretPaths: getSecretFieldPaths(validatedConfig.type),
   })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/integrations/edit-integration.ts` around lines 305
- 307, The code currently takes runtime input from promptForIntegrationConfig()
(newConfig) and writes it to disk without validation; validate newConfig against
the same schema/parser used for existingConfigResult.data before mutating any
YAML or .env files. Specifically, after calling
promptForIntegrationConfig(configForPrompt) run the integration config schema
validation (the same parser/validator used to parse existingConfigResult.data)
and only proceed to the YAML/.env writer functions if validation succeeds; on
validation failure reject the change (or re-prompt) and do not call the routines
that update the disk.

321-337: ⚠️ Potential issue | 🟠 Major

Removed secret-backed fields stay behind in .env.

If an edit switches to a config with no current secret fields, secretCount becomes 0 and Line 334 skips .env updates entirely. The YAML key is removed, but the old credential is retained on disk.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/integrations/edit-integration.ts` around lines 321
- 337, The bug is that when secretCount === 0 the code skips calling
updateDotEnv and old secret entries remain in .env; always ensure .env is
updated to remove any previously stored secret keys even when no secrets remain.
Change the call site around secretCount (symbols: secretCount, updateDotEnv,
envFilePath, secrets) to always invoke updateDotEnv and pass a way to remove
absent keys (e.g., call updateDotEnv(envFilePath, secrets, {removeMissing:
true}) or call a new helper to strip known integration-related keys when secrets
is empty), and if needed modify updateDotEnv to delete keys that are not present
in the provided secrets map so deleted secret-backed fields are removed from
disk.
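One way to implement the always-prune behavior, sketched with a hypothetical helper and the `<ID>__FIELD` env-key convention assumed from the surrounding tests:

```typescript
// Rewrite a .env file's contents so that keys belonging to the given
// integration (prefix convention: `<ID>__FIELD`) survive only if they
// appear in the new secrets map. Unrelated entries are kept as-is, and
// missing secrets are appended.
function pruneIntegrationEnv(
  envContent: string,
  integrationPrefix: string,
  secrets: Record<string, string>,
): string {
  const lines = envContent.split('\n').filter(line => {
    const key = line.split('=')[0] ?? ''
    if (!key.startsWith(integrationPrefix)) return true // unrelated entry
    return key in secrets // keep only still-referenced secrets
  })
  for (const [key, value] of Object.entries(secrets)) {
    if (!lines.some(line => line.startsWith(`${key}=`))) {
      lines.push(`${key}=${value}`)
    }
  }
  return lines.join('\n')
}
```

Calling this unconditionally (even when `secrets` is empty) removes stale secret-backed entries instead of leaving them behind.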
♻️ Duplicate comments (2)
packages/cli/src/commands/integrations/edit-integration.ts (2)

291-295: ⚠️ Potential issue | 🟠 Major

Propagate env-ref resolution failures.

Falling through here makes later validation run against unresolved env: tokens, which hides the real problem and can feed raw references back into the prompts. Re-throw with context instead of debug-only logging.

🔧 Suggested fix
   try {
     integrationRawJson = resolveEnvVarRefsFromMap(integrationRawJson, envVars)
   } catch (error) {
-    debug(`Failed to resolve env: refs in integration metadata: ${error}`)
+    const message = error instanceof Error ? error.message : String(error)
+    throw new Error(`Failed to resolve environment variable references for integration "${targetId}": ${message}`)
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/integrations/edit-integration.ts` around lines 291
- 295, The current try/catch around resolveEnvVarRefsFromMap swallows errors
(integrationRawJson, resolveEnvVarRefsFromMap) and only logs via debug, which
masks env-ref resolution failures; update the catch to re-throw a new Error (or
re-throw the original error) with contextual text like "Failed to resolve env:
refs in integration metadata" plus the original error message so
callers/validation see the real failure instead of continuing with unresolved
env: tokens and raw references.
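A minimal resolver sketch shows why re-throwing matters — the names are illustrative, not the actual `resolveEnvVarRefsFromMap` implementation:

```typescript
// Resolve an `env:VAR` reference against a provided map, failing loudly
// with the missing variable's name instead of passing the raw token
// through to later validation or prompts.
function resolveEnvRef(value: string, envVars: Record<string, string>): string {
  if (!value.startsWith('env:')) return value
  const name = value.slice('env:'.length)
  const resolved = envVars[name]
  if (resolved === undefined) {
    throw new Error(`Environment variable "${name}" referenced by the integration is not defined`)
  }
  return resolved
}
```

Swallowing that error would leave `env:VAR` strings in the metadata, which is exactly the failure mode the comment flags.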

247-249: ⚠️ Potential issue | 🟠 Major

Don't stringify the whole config in the default branch.

At this point the object has already had env: refs resolved, so this error path would dump secrets to stderr if it ever fires. After the satisfies never check, use a constant message instead.

🔧 Suggested fix
     default:
       existingConfig satisfies never
-      throw new Error(`Integration type ('${JSON.stringify(existingConfig)}') is not yet implemented.`)
+      throw new Error('Unexpected integration type encountered.')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/integrations/edit-integration.ts` around lines 247
- 249, The default branch currently throws an error that stringifies
existingConfig (after env refs were resolved), risking leaking secrets; replace
that throw with a constant, non-sensitive message. Locate the default case
containing "existingConfig satisfies never" in edit-integration.ts (the code
handling integration types) and change the Error payload to a fixed string like
"Unsupported integration type" or similar, keeping the satisfies never check but
removing JSON.stringify(existingConfig) from the thrown message.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts`:
- Around line 183-356: Add a new test case in the same file following the
pattern of the existing tests (use createIntegration, mock randomUUID via
vi.spyOn(crypto, 'randomUUID'), await fillTypeAndName(), interact with the
prompt to choose 'Key Pair' as the Authentication method via screen.type('Key
Pair') and screen.keypress('enter'), then fill the expected prompts for key-pair
auth (e.g., username, Private Key (PEM), Private Key Passphrase), call await
fillOptionalFields(), await the createIntegration promise, and assert the
resulting YAML and .env contents mirror the other tests but with
metadata.authMethod set to key-pair and the private key stored in env with the
UUID-based variable like ...__PRIVATEKEY; reference the existing helpers and
symbols createIntegration, fillTypeAndName, fillOptionalFields, screen, and
vi.spyOn(crypto, 'randomUUID') so the test structure matches the other snowflake
auth tests.
- Line 2: Replace the Date.now()-based temp directory creation that uses mkdir
with the atomic mkdtemp API: update the import to pull mkdtemp from
'node:fs/promises' (remove or keep mkdir only if still used), then create the
temp directory with mkdtemp(path.join(os.tmpdir(), 'your-prefix-')) instead of
mkdir(...) and Date.now(); keep existing cleanup with rm and file reads via
readFile. Locate the temp-dir creation in add-integration.snowflake.test.ts (the
code currently using mkdir + Date.now()) and swap it to mkdtemp, ensuring you
import os/path helpers if not already present.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts`:
- Line 7: The vi.mock invocation currently uses the wrong relative path
("vi.mock('../../output', ...)") which resolves to a non-existent module; update
that mock call to use the correct relative path "vi.mock('../../../output',
...)" so it matches the actual import in the test subject. Locate the vi.mock
line (e.g., in edit-integration.snowflake.test.ts and the other integration test
suites) and change "../../output" to "../../../output" while keeping the mocked
object ({ debug: vi.fn(), log: vi.fn(), output: vi.fn(), error: vi.fn()})
unchanged.

---

Outside diff comments:
In `@packages/cli/src/commands/integrations/edit-integration.ts`:
- Around line 305-307: The code currently takes runtime input from
promptForIntegrationConfig() (newConfig) and writes it to disk without
validation; validate newConfig against the same schema/parser used for
existingConfigResult.data before mutating any YAML or .env files. Specifically,
after calling promptForIntegrationConfig(configForPrompt) run the integration
config schema validation (the same parser/validator used to parse
existingConfigResult.data) and only proceed to the YAML/.env writer functions if
validation succeeds; on validation failure reject the change (or re-prompt) and
do not call the routines that update the disk.
- Around line 321-337: The bug is that when secretCount === 0 the code skips
calling updateDotEnv and old secret entries remain in .env; always ensure .env
is updated to remove any previously stored secret keys even when no secrets
remain. Change the call site around secretCount (symbols: secretCount,
updateDotEnv, envFilePath, secrets) to always invoke updateDotEnv and pass a way
to remove absent keys (e.g., call updateDotEnv(envFilePath, secrets,
{removeMissing: true}) or call a new helper to strip known integration-related
keys when secrets is empty), and if needed modify updateDotEnv to delete keys
that are not present in the provided secrets map so deleted secret-backed fields
are removed from disk.
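The `removeMissing` behavior described above can be sketched as a pure string transform. The option name and the integration-ID key prefix are assumptions from the comment, not the CLI's actual `updateDotEnv` API:

```typescript
// Hypothetical updateDotEnv: upserts the given secrets into .env content
// and, when removeMissing is set, deletes lines whose keys belong to this
// integration (matched by keyPrefix) but are absent from the secrets map.
function updateDotEnv(
  current: string,
  secrets: Record<string, string>,
  opts: { removeMissing: boolean; keyPrefix: string }
): string {
  const lines = current.split('\n').filter(line => line.trim() !== '')
  const kept = lines.filter(line => {
    const key = line.split('=', 1)[0]
    const owned = key.startsWith(opts.keyPrefix)
    // Drop stale entries for this integration when asked to.
    return !(opts.removeMissing && owned && !(key in secrets))
  })
  const byKey = new Map(kept.map(line => [line.split('=', 1)[0], line]))
  for (const [key, value] of Object.entries(secrets)) {
    byKey.set(key, `${key}=${value}`)
  }
  return [...byKey.values()].join('\n') + '\n'
}
```

Called unconditionally (even with an empty secrets map), this removes previously stored secret keys instead of leaving them on disk.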

---

Duplicate comments:
In `@packages/cli/src/commands/integrations/edit-integration.ts`:
- Around line 291-295: The current try/catch around resolveEnvVarRefsFromMap
swallows errors (integrationRawJson, resolveEnvVarRefsFromMap) and only logs via
debug, which masks env-ref resolution failures; update the catch to re-throw a
new Error (or re-throw the original error) with contextual text like "Failed to
resolve env: refs in integration metadata" plus the original error message so
callers/validation see the real failure instead of continuing with unresolved
env: tokens and raw references.
- Around line 247-249: The default branch currently throws an error that
stringifies existingConfig (after env refs were resolved), risking leaking
secrets; replace that throw with a constant, non-sensitive message. Locate the
default case containing "existingConfig satisfies never" in edit-integration.ts
(the code handling integration types) and change the Error payload to a fixed
string like "Unsupported integration type" or similar, keeping the satisfies
never check but removing JSON.stringify(existingConfig) from the thrown message.
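The fix pairs a compile-time exhaustiveness check with a constant runtime message. A minimal sketch of that pattern (the two-member union and its fields are illustrative, not the CLI's actual config types):

```typescript
// Illustrative config union; the real CLI type has many more members.
type IntegrationConfig =
  | { type: 'snowflake'; accountName: string }
  | { type: 'postgres'; host: string }

function describeIntegration(config: IntegrationConfig): string {
  switch (config.type) {
    case 'snowflake':
      return `Snowflake account ${config.accountName}`
    case 'postgres':
      return `Postgres host ${config.host}`
    default:
      // Compile-time: fails to typecheck if a union member goes unhandled.
      config satisfies never
      // Runtime: constant message, so a config whose env: refs were already
      // resolved to secrets is never interpolated into the error text.
      throw new Error('Unexpected integration type encountered.')
  }
}
```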

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: edc7816e-4c59-40fa-ad9e-125d97abb0b3

📥 Commits

Reviewing files that changed from the base of the PR and between e2a2dcd and 7ebf215.

📒 Files selected for processing (9)
  • packages/cli/src/commands/integrations/edit-integration.ts
  • packages/cli/src/commands/integrations/integrations-prompts/prompt-for-ssh-fields.ts
  • packages/cli/src/commands/integrations/integrations-prompts/redshift.ts
  • packages/cli/src/commands/integrations/integrations-prompts/trino.ts
  • packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts
  • packages/cli/src/commands/integrations/tests/edit-integration.redshift.test.ts
  • packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts
  • packages/cli/src/commands/integrations/tests/edit-integration.trino.test.ts
  • packages/cli/src/commands/run.test.ts

@@ -0,0 +1,357 @@
import crypto from 'node:crypto'
import { mkdir, readFile, rm } from 'node:fs/promises'

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Verify current temp-dir strategy in this file.
rg -n -C2 "Date\\.now\\(\\)|mkdtemp\\(" packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts

# Check nearby integration tests for existing mkdtemp usage pattern consistency.
rg -n -C2 "mkdtemp\\(" packages/cli/src/commands/integrations/tests/*.test.ts

Repository: deepnote/deepnote

Length of output: 251


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find test files that create temp directories
rg -l "tmpdir\\(\\)" packages/cli/src/commands/integrations/tests/ --type ts --type tsx

# Check for mkdtemp usage anywhere in the codebase
rg "mkdtemp" packages/cli/src/ -t ts -t tsx

# Check common temp directory patterns in test files
rg -n "tmp|tempDir|temp_dir" packages/cli/src/commands/integrations/tests/*.test.ts | head -30

Repository: deepnote/deepnote

Length of output: 88


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find test files that create temp directories
rg -l "tmpdir\(\)" packages/cli/src/commands/integrations/tests/ -t ts

# Check for mkdtemp usage anywhere in the codebase
rg "mkdtemp" packages/cli/src/ -t ts

# Check common temp directory patterns in test files
rg -n "tmpdir|tempDir|temp_dir" packages/cli/src/commands/integrations/tests/*.test.ts | head -30

Repository: deepnote/deepnote

Length of output: 7660


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Verify that the Date.now() pattern is widespread in integration tests
rg "Date\.now\(\)" packages/cli/src/commands/integrations/tests/*.test.ts | wc -l

# Check one more integration test file to confirm the pattern
cat -n packages/cli/src/commands/integrations/tests/edit-integration.spanner.test.ts | sed -n '29,35p'

Repository: deepnote/deepnote

Length of output: 314


Use mkdtemp for temp directories, consistent with codebase patterns.

The codebase already uses mkdtemp in test helpers and other test files (packages/cli/src/commands/test-helpers.ts, packages/cli/src/utils/dotenv.test.ts, etc.). The Date.now() approach can collide under parallel test execution; mkdtemp atomically creates the directory and eliminates this risk.

Suggested patch
-import { mkdir, readFile, rm } from 'node:fs/promises'
+import { mkdtemp, readFile, rm } from 'node:fs/promises'
@@
   beforeEach(async () => {
     vi.clearAllMocks()
     vi.restoreAllMocks()
-    tempDir = join(tmpdir(), `add-integration-snowflake-test-${Date.now()}`)
-    await mkdir(tempDir, { recursive: true })
+    tempDir = await mkdtemp(join(tmpdir(), 'add-integration-snowflake-test-'))
   })
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-import { mkdir, readFile, rm } from 'node:fs/promises'
+import { mkdtemp, readFile, rm } from 'node:fs/promises'

Suggested change
   beforeEach(async () => {
     vi.clearAllMocks()
     vi.restoreAllMocks()
-    tempDir = join(tmpdir(), `add-integration-snowflake-test-${Date.now()}`)
-    await mkdir(tempDir, { recursive: true })
+    tempDir = await mkdtemp(join(tmpdir(), 'add-integration-snowflake-test-'))
   })
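The suggested `mkdtemp` swap can be exercised on its own. A small self-contained sketch (prefix borrowed from the test above) showing why the atomic API avoids the `Date.now()` collision:

```typescript
import { mkdtemp, rm } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

// mkdtemp appends six random characters and creates the directory
// atomically, so parallel test workers cannot collide the way
// `${Date.now()}`-suffixed names can within the same millisecond.
async function makeTempDir(prefix: string): Promise<string> {
  return mkdtemp(join(tmpdir(), `${prefix}-`))
}

const dirA = await makeTempDir('add-integration-snowflake-test')
const dirB = await makeTempDir('add-integration-snowflake-test')

// Clean up, mirroring the suite's afterEach rm(tempDir, { recursive: true }).
await rm(dirA, { recursive: true, force: true })
await rm(dirB, { recursive: true, force: true })
```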
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts`
at line 2, Replace the Date.now()-based temp directory creation that uses mkdir
with the atomic mkdtemp API: update the import to pull mkdtemp from
'node:fs/promises' (remove or keep mkdir only if still used), then create the
temp directory with mkdtemp(path.join(os.tmpdir(), 'your-prefix-')) instead of
mkdir(...) and Date.now(); keep existing cleanup with rm and file reads via
readFile. Locate the temp-dir creation in add-integration.snowflake.test.ts (the
code currently using mkdir + Date.now()) and swap it to mkdtemp, ensuring you
import os/path helpers if not already present.

Comment on lines +183 to +356
it('creates snowflake integration with service account key pair auth', async () => {
  const filePath = join(tempDir, 'integrations-key-pair.yaml')
  const envFilePath = join(tempDir, '.env')
  const mockUUID: crypto.UUID = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
  vi.spyOn(crypto, 'randomUUID').mockReturnValue(mockUUID)

  const promise = createIntegration({ file: filePath, envFile: envFilePath })

  await fillTypeAndName()

  await screen.next()
  expect(screen.getScreen()).toContain('Authentication method:')
  screen.type('Service Account')
  screen.keypress('enter')

  await screen.next()
  expect(screen.getScreen()).toContain('Service Account Username:')
  screen.type('svc-account-user')
  screen.keypress('enter')

  await screen.next()
  expect(screen.getScreen()).toContain('Private Key (PEM):')
  screen.type('FAKE_PRIVATE_KEY')
  screen.keypress('enter')

  await screen.next()
  expect(screen.getScreen()).toContain('Private Key Passphrase:')
  screen.keypress('enter')

  await fillOptionalFields()

  await promise

  const yamlContent = await readFile(filePath, 'utf-8')
  const envContent = await readFile(envFilePath, 'utf-8')

  expect(yamlContent).toMatchInlineSnapshot(`
    "#yaml-language-server: $schema=https://raw.githubusercontent.com/deepnote/deepnote/refs/heads/tk/integrations-config-file-schema/json-schemas/integrations-file-schema.json

    integrations:
      - id: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
        type: snowflake
        name: My Snowflake
        metadata:
          accountName: test-account.us-east-1
          authMethod: service-account-key-pair
          username: svc-account-user
          privateKey: env:AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PRIVATEKEY
    "
  `)
  expect(envContent).toMatchInlineSnapshot(`
    "AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__PRIVATEKEY=FAKE_PRIVATE_KEY
    "
  `)
})

it('creates snowflake integration with Azure AD auth', async () => {
  const filePath = join(tempDir, 'integrations-azure.yaml')
  const envFilePath = join(tempDir, '.env')
  const mockUUID: crypto.UUID = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
  vi.spyOn(crypto, 'randomUUID').mockReturnValue(mockUUID)

  const promise = createIntegration({ file: filePath, envFile: envFilePath })

  await fillTypeAndName()

  await screen.next()
  expect(screen.getScreen()).toContain('Authentication method:')
  screen.type('Azure AD')
  screen.keypress('enter')

  await screen.next()
  expect(screen.getScreen()).toContain('Client ID:')
  screen.type('azure-client-id')
  screen.keypress('enter')

  await screen.next()
  expect(screen.getScreen()).toContain('Client Secret:')
  screen.type('azure-client-secret')
  screen.keypress('enter')

  await screen.next()
  expect(screen.getScreen()).toContain('Resource:')
  screen.type('https://mysnowflake.snowflakecomputing.com')
  screen.keypress('enter')

  await screen.next()
  expect(screen.getScreen()).toContain('Tenant:')
  screen.type('my-azure-tenant-id')
  screen.keypress('enter')

  await fillOptionalFields('azure_warehouse', 'AZURE_DB', 'AZURE_ROLE')

  await promise

  const yamlContent = await readFile(filePath, 'utf-8')
  const envContent = await readFile(envFilePath, 'utf-8')

  expect(yamlContent).toMatchInlineSnapshot(`
    "#yaml-language-server: $schema=https://raw.githubusercontent.com/deepnote/deepnote/refs/heads/tk/integrations-config-file-schema/json-schemas/integrations-file-schema.json

    integrations:
      - id: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
        type: snowflake
        name: My Snowflake
        metadata:
          accountName: test-account.us-east-1
          authMethod: azure
          clientId: azure-client-id
          clientSecret: env:AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__CLIENTSECRET
          resource: https://mysnowflake.snowflakecomputing.com
          tenant: my-azure-tenant-id
          warehouse: azure_warehouse
          database: AZURE_DB
          role: AZURE_ROLE
    "
  `)
  expect(envContent).toMatchInlineSnapshot(`
    "AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__CLIENTSECRET=azure-client-secret
    "
  `)
})

it('creates snowflake integration with Native Snowflake OAuth auth', async () => {
  const filePath = join(tempDir, 'integrations-native.yaml')
  const envFilePath = join(tempDir, '.env')
  const mockUUID: crypto.UUID = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
  vi.spyOn(crypto, 'randomUUID').mockReturnValue(mockUUID)

  const promise = createIntegration({ file: filePath, envFile: envFilePath })

  await fillTypeAndName()

  await screen.next()
  expect(screen.getScreen()).toContain('Authentication method:')
  screen.type('Native Snowflake')
  screen.keypress('enter')

  await screen.next()
  expect(screen.getScreen()).toContain('Client ID:')
  screen.type('native-client-id')
  screen.keypress('enter')

  await screen.next()
  expect(screen.getScreen()).toContain('Client Secret:')
  screen.type('native-client-secret')
  screen.keypress('enter')

  await fillOptionalFields()

  await promise

  const yamlContent = await readFile(filePath, 'utf-8')
  const envContent = await readFile(envFilePath, 'utf-8')

  expect(yamlContent).toMatchInlineSnapshot(`
    "#yaml-language-server: $schema=https://raw.githubusercontent.com/deepnote/deepnote/refs/heads/tk/integrations-config-file-schema/json-schemas/integrations-file-schema.json

    integrations:
      - id: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
        type: snowflake
        name: My Snowflake
        metadata:
          accountName: test-account.us-east-1
          authMethod: snowflake
          clientId: native-client-id
          clientSecret: env:AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__CLIENTSECRET
    "
  `)
  expect(envContent).toMatchInlineSnapshot(`
    "AAAAAAAA_BBBB_CCCC_DDDD_EEEEEEEEEEEE__CLIENTSECRET=native-client-secret
    "
  `)
})
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# 1) Check whether key-pair auth is selectable/handled in Snowflake prompts.
rg -n -C4 "SnowflakeAuthMethods\\.(KeyPair|ServiceAccountKeyPair|Password|Okta|AzureAd|NativeSnowflake)|Authentication method|Key Pair|key-pair" \
  packages/cli/src/commands/integrations/integrations-prompts/snowflake.ts

# 2) Check which auth branches are covered by this add-flow test.
rg -n -C2 "creates snowflake integration with .* auth|screen\\.type\\('.*'\\)" \
  packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts

Repository: deepnote/deepnote

Length of output: 10005


Add test for Snowflake "Key Pair" authentication.

The suite tests 5 auth methods (password, Okta, service account key pair, Azure AD, native Snowflake OAuth), but omits "Key Pair"—which is user-selectable in the prompts (packages/cli/src/commands/integrations/integrations-prompts/snowflake.ts line 227). Add a test case for this auth path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/add-integration.snowflake.test.ts`
around lines 183 - 356, Add a new test case in the same file following the
pattern of the existing tests (use createIntegration, mock randomUUID via
vi.spyOn(crypto, 'randomUUID'), await fillTypeAndName(), interact with the
prompt to choose 'Key Pair' as the Authentication method via screen.type('Key
Pair') and screen.keypress('enter'), then fill the expected prompts for key-pair
auth (e.g., username, Private Key (PEM), Private Key Passphrase), call await
fillOptionalFields(), await the createIntegration promise, and assert the
resulting YAML and .env contents mirror the other tests but with
metadata.authMethod set to key-pair and the private key stored in env with the
UUID-based variable like ...__PRIVATEKEY; reference the existing helpers and
symbols createIntegration, fillTypeAndName, fillOptionalFields, screen, and
vi.spyOn(crypto, 'randomUUID') so the test structure matches the other snowflake
auth tests.

import { screen } from '@inquirer/testing/vitest'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'

vi.mock('../../output', () => ({ debug: vi.fn(), log: vi.fn(), output: vi.fn(), error: vi.fn() }))
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

fd '^output\.ts$' packages/cli/src

python - <<'PY'
from pathlib import Path

test = Path('packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts')
edit = Path('packages/cli/src/commands/integrations/edit-integration.ts')

print('snowflake test ../../output   ->', (test.parent / '../../output').resolve())
print('snowflake test ../../../output ->', (test.parent / '../../../output').resolve())
print('edit-integration ../../output ->', (edit.parent / '../../output').resolve())
PY

rg -n "vi\.mock\('\.\./\.\./output'" packages/cli/src/commands/integrations/tests --type ts

Repository: deepnote/deepnote

Length of output: 5226


🏁 Script executed:

cat -n packages/cli/src/commands/integrations/edit-integration.ts | head -20

Repository: deepnote/deepnote

Length of output: 1144


Use ../../../output in the mock path.

The test file imports from packages/cli/src/commands/integrations/tests, so ../../output resolves to packages/cli/src/commands/output (non-existent). The source file imports from ../../output at line 16, which resolves to packages/cli/src/output. Change the mock to ../../../output. This affects all 17 integration test suites in the same directory.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.snowflake.test.ts`
at line 7, The vi.mock invocation currently uses the wrong relative path
("vi.mock('../../output', ...)") which resolves to a non-existent module; update
that mock call to use the correct relative path "vi.mock('../../../output',
...)" so it matches the actual import in the test subject. Locate the vi.mock
line (e.g., in edit-integration.snowflake.test.ts and the other integration test
suites) and change "../../output" to "../../../output" while keeping the mocked
object ({ debug: vi.fn(), log: vi.fn(), output: vi.fn(), error: vi.fn()})
unchanged.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@packages/cli/src/commands/integrations/tests/edit-integration.mongodb.test.ts`:
- Around line 1-15: Move the test so it is colocated with the module under test:
rename/move the test file currently named edit-integration.mongodb.test.ts so it
sits next to the source that exports editIntegration (the test imports
editIntegration and mocks ../../output), ensuring the test file uses the
.test.ts extension and imports remain correct; update any import paths in the
test to reference the local source file instead of the shared tests directory
and run the test suite to confirm paths and mocks (e.g., references to
editIntegration and the mocked output functions) resolve after relocation.
- Around line 395-397: The two long literal strings in the assertions inside
edit-integration.mongodb.test.ts (the expect calls checking envContent for
'MONGO_ID_001__RAWCONNECTIONSTRING=...' and
'MONGO_ID_001__CONNECTION_STRING=...') exceed Biome's line-length rules; break
each long literal across multiple shorter string segments (e.g., concatenate
with '+' or use an array.join('') expression) so the runtime string remains
identical but each source line is under the formatter limit, updating the two
expect(...) comparisons accordingly.
- Around line 317-319: The long string literals in the assertions (the two
expect(envContent).toContain(...) calls that include
'MONGO_ID_001__CONNECTION_STRING=...' and 'MONGO_ID_001__PASSWORD=...') need to
be broken across multiple source lines to satisfy the formatter; split each long
quoted literal into concatenated pieces (e.g. 'mongodb://new-user:new-pass@' +
'new-mongo.example.com:27017/new-db') or use a multi-line template concatenation
so the produced runtime string is identical but the source line length is
reduced, leaving the expect(envContent).toContain(...) calls and their arguments
functionally unchanged.

In `@packages/cli/src/commands/integrations/tests/edit-integration.pgsql.test.ts`:
- Around line 1-15: The test file is misplaced under a tests/ subfolder; move
the test so it's colocated with the source next to the module it exercises (the
editIntegration tests should live alongside the edit-integration module).
Relocate
packages/cli/src/commands/integrations/tests/edit-integration.pgsql.test.ts to
packages/cli/src/commands/integrations/edit-integration.pgsql.test.ts (keeping
the same test contents and imports such as editIntegration and the vi.mock of
../../output), update any relative imports if needed, and ensure the test
filename ends with .test.ts to match the project's colocated test convention.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: e9e546b4-e253-4c5e-a14a-3f2778113f7e

📥 Commits

Reviewing files that changed from the base of the PR and between 7ebf215 and 41067eb.

📒 Files selected for processing (3)
  • packages/cli/src/commands/integrations/edit-integration.test.ts
  • packages/cli/src/commands/integrations/tests/edit-integration.mongodb.test.ts
  • packages/cli/src/commands/integrations/tests/edit-integration.pgsql.test.ts

Comment on lines +1 to +15
import { mkdir, readFile, rm, writeFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { screen } from '@inquirer/testing/vitest'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'

vi.mock('../../output', () => ({
  debug: vi.fn(),
  log: vi.fn(),
  output: vi.fn(),
  error: vi.fn(),
}))

import { editIntegration } from '../edit-integration'

🧹 Nitpick | 🔵 Trivial

Test file should be colocated with source.

Same as pgsql: move to packages/cli/src/commands/integrations/edit-integration.mongodb.test.ts.

As per coding guidelines, "Place test files next to the source files with .test.ts or .test.tsx extension".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.mongodb.test.ts`
around lines 1 - 15, Move the test so it is colocated with the module under
test: rename/move the test file currently named edit-integration.mongodb.test.ts
so it sits next to the source that exports editIntegration (the test imports
editIntegration and mocks ../../output), ensuring the test file uses the
.test.ts extension and imports remain correct; update any import paths in the
test to reference the local source file instead of the shared tests directory
and run the test suite to confirm paths and mocks (e.g., references to
editIntegration and the mocked output functions) resolve after relocation.

Comment on lines +317 to +319
    expect(envContent).toContain('MONGO_ID_001__CONNECTION_STRING=mongodb://new-user:new-pass@new-mongo.example.com:27017/new-db')
    expect(envContent).toContain('MONGO_ID_001__PASSWORD=new-pass')
  })
⚠️ Potential issue | 🟡 Minor

Biome formatter failure — wrap long strings.

Pipeline flagged these lines. Break into multi-line calls.

🔧 Proposed fix
-    expect(envContent).toContain('MONGO_ID_001__CONNECTION_STRING=mongodb://new-user:new-pass@new-mongo.example.com:27017/new-db')
-    expect(envContent).toContain('MONGO_ID_001__PASSWORD=new-pass')
+    expect(envContent).toContain(
+      'MONGO_ID_001__CONNECTION_STRING=mongodb://new-user:new-pass@new-mongo.example.com:27017/new-db'
+    )
+    expect(envContent).toContain('MONGO_ID_001__PASSWORD=new-pass')
📝 Committable suggestion


Suggested change
-    expect(envContent).toContain('MONGO_ID_001__CONNECTION_STRING=mongodb://new-user:new-pass@new-mongo.example.com:27017/new-db')
+    expect(envContent).toContain(
+      'MONGO_ID_001__CONNECTION_STRING=mongodb://new-user:new-pass@new-mongo.example.com:27017/new-db'
+    )
     expect(envContent).toContain('MONGO_ID_001__PASSWORD=new-pass')
   })
🧰 Tools
🪛 GitHub Actions: CI

[error] 317-321: Biome formatter check failed. Formatter would have changed the expect(envContent).toContain('MONGO_ID_001__CONNECTION_STRING=...') assertion to a multi-line call.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.mongodb.test.ts`
around lines 317 - 319, The long string literals in the assertions (the two
expect(envContent).toContain(...) calls that include
'MONGO_ID_001__CONNECTION_STRING=...' and 'MONGO_ID_001__PASSWORD=...') need to
be broken across multiple source lines to satisfy the formatter; split each long
quoted literal into concatenated pieces (e.g. 'mongodb://new-user:new-pass@' +
'new-mongo.example.com:27017/new-db') or use a multi-line template concatenation
so the produced runtime string is identical but the source line length is
reduced, leaving the expect(envContent).toContain(...) calls and their arguments
functionally unchanged.

Comment on lines +395 to +397
    expect(envContent).toContain('MONGO_ID_001__RAWCONNECTIONSTRING=mongodb+srv://atlas-user:[email protected]/prod-db')
    expect(envContent).toContain('MONGO_ID_001__CONNECTION_STRING=mongodb+srv://atlas-user:[email protected]/prod-db')
  })
⚠️ Potential issue | 🟡 Minor

Biome formatter failure — wrap long strings.

Same issue. Format per Biome rules.

🔧 Proposed fix
-    expect(envContent).toContain('MONGO_ID_001__RAWCONNECTIONSTRING=mongodb+srv://atlas-user:[email protected]/prod-db')
-    expect(envContent).toContain('MONGO_ID_001__CONNECTION_STRING=mongodb+srv://atlas-user:[email protected]/prod-db')
+    expect(envContent).toContain(
+      'MONGO_ID_001__RAWCONNECTIONSTRING=mongodb+srv://atlas-user:[email protected]/prod-db'
+    )
+    expect(envContent).toContain(
+      'MONGO_ID_001__CONNECTION_STRING=mongodb+srv://atlas-user:[email protected]/prod-db'
+    )
📝 Committable suggestion


Suggested change
-    expect(envContent).toContain('MONGO_ID_001__RAWCONNECTIONSTRING=mongodb+srv://atlas-user:[email protected]/prod-db')
-    expect(envContent).toContain('MONGO_ID_001__CONNECTION_STRING=mongodb+srv://atlas-user:[email protected]/prod-db')
+    expect(envContent).toContain(
+      'MONGO_ID_001__RAWCONNECTIONSTRING=mongodb+srv://atlas-user:[email protected]/prod-db'
+    )
+    expect(envContent).toContain(
+      'MONGO_ID_001__CONNECTION_STRING=mongodb+srv://atlas-user:[email protected]/prod-db'
+    )
     })
🧰 Tools
🪛 GitHub Actions: CI

[error] 395-402: Biome formatter check failed. Formatter would have changed RAWCONNECTIONSTRING and CONNECTION_STRING expect(envContent).toContain('...') assertions to multi-line calls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/cli/src/commands/integrations/tests/edit-integration.mongodb.test.ts`
around lines 395 - 397, The two long literal strings in the assertions inside
edit-integration.mongodb.test.ts (the expect calls checking envContent for
'MONGO_ID_001__RAWCONNECTIONSTRING=...' and
'MONGO_ID_001__CONNECTION_STRING=...') exceed Biome's line-length rules; break
each long literal across multiple shorter string segments (e.g., concatenate
with '+' or use an array.join('') expression) so the runtime string remains
identical but each source line is under the formatter limit, updating the two
expect(...) comparisons accordingly.

Comment on lines +1 to +15
import { mkdir, readFile, rm, writeFile } from 'node:fs/promises'
import { tmpdir } from 'node:os'
import { join } from 'node:path'
import { screen } from '@inquirer/testing/vitest'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'

vi.mock('../../output', () => ({
  debug: vi.fn(),
  log: vi.fn(),
  output: vi.fn(),
  error: vi.fn(),
}))

import { editIntegration } from '../edit-integration'

🧹 Nitpick | 🔵 Trivial

Test file should be colocated with source.

This file lives under tests/ subfolder. Move to packages/cli/src/commands/integrations/edit-integration.pgsql.test.ts to match the source file location.

As per coding guidelines, "Place test files next to the source files with .test.ts or .test.tsx extension".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/integrations/tests/edit-integration.pgsql.test.ts`
around lines 1 - 15, The test file is misplaced under a tests/ subfolder; move
the test so it's colocated with the source next to the module it exercises (the
editIntegration tests should live alongside the edit-integration module).
Relocate
packages/cli/src/commands/integrations/tests/edit-integration.pgsql.test.ts to
packages/cli/src/commands/integrations/edit-integration.pgsql.test.ts (keeping
the same test contents and imports such as editIntegration and the vi.mock of
../../output), update any relative imports if needed, and ensure the test
filename ends with .test.ts to match the project's colocated test convention.
