feat: add Oracle-compatible ALL_TAB_COLUMNS view #1281
Adria0615 wants to merge 0 commits into IvorySQL:master from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough
The pull request removes the
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
📝 Walkthrough
A new Oracle-compatible view
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Possibly related issues
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Actionable comments posted: 4
🧹 Nitpick comments (3)
src/backend/catalog/ora_all_tab_columns.sql (3)
26-33: Data type mapping is incomplete. The current mapping covers only 4 types. Consider adding mappings for other common PostgreSQL types:
| PostgreSQL | Oracle Equivalent |
| --- | --- |
| int2, int8 | NUMBER |
| float4 | BINARY_FLOAT or NUMBER |
| float8 | BINARY_DOUBLE or NUMBER |
| bpchar | CHAR |
| bytea | BLOB or RAW |
| bool | NUMBER(1) or CHAR(1) |

This can be addressed incrementally, but the current fallback to t.typname will expose PostgreSQL-specific type names to Oracle-expecting clients.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 26 - 33, The CASE expression mapping PostgreSQL types (using t.typname) is incomplete; update the CASE that currently handles 'int4','varchar','text','timestamp' to also map 'int2' and 'int8' => 'NUMBER', 'float4' => 'BINARY_FLOAT' (or 'NUMBER' if preferred), 'float8' => 'BINARY_DOUBLE' (or 'NUMBER'), 'bpchar' => 'CHAR', 'bytea' => 'BLOB' (or 'RAW'), and 'bool' => 'NUMBER(1)' (or 'CHAR(1)'); keep the existing ELSE fallback but prefer to normalize to an Oracle-compatible type where possible so clients expecting Oracle types do not receive raw PostgreSQL type names (update the CASE handling around t.typname to include these new WHEN branches).
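To make the suggested fallback behavior concrete, here is a small illustrative sketch (not IvorySQL code; the Oracle targets for float4/float8/bytea/bool are the options listed in the table above, and the varchar/text/timestamp targets follow the NUMBER/VARCHAR2/CLOB/DATE mapping referenced elsewhere in this review):

```python
# Illustrative sketch only. Several targets (float4, float8, bytea, bool)
# are options named in the review, not settled decisions.
PG_TO_ORACLE = {
    "int2": "NUMBER", "int4": "NUMBER", "int8": "NUMBER",
    "float4": "BINARY_FLOAT",
    "float8": "BINARY_DOUBLE",
    "varchar": "VARCHAR2",
    "text": "CLOB",
    "timestamp": "DATE",
    "bpchar": "CHAR",
    "bytea": "BLOB",
    "bool": "NUMBER(1)",
}

def oracle_type(typname: str) -> str:
    # ELSE fallback: the raw PostgreSQL name leaks through -- the behavior
    # the comment above warns about for unmapped types.
    return PG_TO_ORACLE.get(typname, typname)

print(oracle_type("int8"))   # NUMBER
print(oracle_type("jsonb"))  # jsonb
```

The dict mirrors the SQL CASE expression: each WHEN branch becomes a key, and the `.get` default is the ELSE fallback that exposes PostgreSQL type names.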
18: DBA_COLUMN_ID duplicates COLUMN_ID.
Both COLUMN_ID (line 25) and DBA_COLUMN_ID (line 45) map to a.attnum::numeric. If this is intentional for Oracle compatibility, consider adding a comment explaining the rationale. Otherwise, this column may be unnecessary.
Also applies to: 45-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` at line 18, COLUMN_ID and DBA_COLUMN_ID both map to a.attnum::numeric, creating an unnecessary duplicate; either remove DBA_COLUMN_ID or change its expression to a distinct value for Oracle compatibility, and if kept for compatibility add a concise comment explaining why DBA_COLUMN_ID intentionally mirrors COLUMN_ID; update the select projection that defines COLUMN_ID and DBA_COLUMN_ID (the a.attnum::numeric mappings) accordingly and ensure any dependent code references are adjusted or documented.
52-55: Consider including materialized views and filtering system schemas.
Missing relkind = 'm': Oracle's ALL_TAB_COLUMNS includes materialized views. Add 'm' to the filter if IvorySQL supports them.
No schema filtering: This view exposes columns from all schemas including pg_catalog, pg_toast, information_schema, etc. Oracle's ALL_TAB_COLUMNS typically shows only objects the user can access. Consider adding:
- n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast'), or
- A permission check like has_table_privilege(c.oid, 'SELECT')
🔧 Proposed changes
```diff
 WHERE
-    c.relkind IN ('r', 'v')                   -- tables and views only
+    c.relkind IN ('r', 'v', 'm')              -- tables, views, and materialized views
     AND a.attnum > 0                          -- exclude hidden system columns
-    AND NOT a.attisdropped;                   -- exclude dropped columns
+    AND NOT a.attisdropped                    -- exclude dropped columns
+    AND has_table_privilege(c.oid, 'SELECT'); -- only show objects the user can access
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 52 - 55, The WHERE clause currently filters c.relkind IN ('r','v') and excludes system/hidden/drop columns via a.attnum and a.attisdropped; update it to also include materialized views by adding 'm' to c.relkind, and restrict system schemas by adding a condition on n.nspname (e.g., n.nspname NOT IN ('pg_catalog','information_schema','pg_toast')) or replace/augment that with a permission check like has_table_privilege(c.oid, 'SELECT') to only expose accessible objects; adjust the predicate around c.relkind, n.nspname, and/or has_table_privilege to implement this filtering.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/backend/catalog/ora_all_tab_columns.sql`:
- Line 44: The CASE expression generating CHAR_USED currently returns 'BYTE' for
non-character types; update the CASE in the SELECT (the expression using
t.typname and producing CHAR_USED) so that WHEN t.typname IN ('varchar','char')
THEN 'CHAR' ELSE NULL (cast to the same varchar type if needed) instead of
'BYTE', ensuring the CHAR_USED column returns NULL for non-character types
consistent with Oracle behavior.
- Around line 34-35: The DATA_LENGTH calculation must treat numeric/decimal
specially rather than using atttypmod - 4; change the CASE that computes
DATA_LENGTH to detect numeric/decimal (e.g. via a.atttypid/typname or comparing
to 'numeric'/'decimal') and, when numeric, extract precision from atttypmod as
precision := ((a.atttypmod - 4) >> 16) and use that (or NULL) as the DATA_LENGTH
value; for all other types keep the existing GREATEST(a.atttypmod - 4, 0) logic.
Ensure you reference the DATA_LENGTH expression and a.atttypmod in the updated
CASE.
- Around line 36-37: The precision/scale extraction for numeric/decimal types is
wrong because atttypmod includes a 4-byte header; update the CASE expressions
that reference a.atttypmod (the lines using t.typname IN ('numeric','decimal'))
to subtract VARHDRSZ (4) before bit ops: compute precision as ((a.atttypmod - 4)
>> 16) & 0xffff and compute scale using the sign-extension formula
(((a.atttypmod - 4) & 0x7ff) ^ 1024) - 1024 so negative scales are handled
correctly; adjust the two CASE lines that currently use (a.atttypmod >> 16) &
65535 and a.atttypmod & 65535 to use the corrected expressions.
- Around line 60-61: The Oracle-specific `CREATE OR REPLACE SYNONYM
ALL_TAB_COLUMNS FOR pg_catalog.ALL_TAB_COLUMNS;` statement is unsupported by
IvorySQL's Oracle parser (see ora_gram.y); remove this SYNONYM line and replace
it with a supported construct such as creating a PostgreSQL view or public alias
named ALL_TAB_COLUMNS that selects from pg_catalog.ALL_TAB_COLUMNS (or comment
out the statement if not needed), and ensure any callers expect the view name
rather than a SYNONYM; alternatively, implement SYNONYM parsing in ora_gram.y if
SYNONYM semantics are required.
---
Nitpick comments:
In `@src/backend/catalog/ora_all_tab_columns.sql`:
- Around line 26-33: The CASE expression mapping PostgreSQL types (using
t.typname) is incomplete; update the CASE that currently handles
'int4','varchar','text','timestamp' to also map 'int2' and 'int8' => 'NUMBER',
'float4' => 'BINARY_FLOAT' (or 'NUMBER' if preferred), 'float8' =>
'BINARY_DOUBLE' (or 'NUMBER'), 'bpchar' => 'CHAR', 'bytea' => 'BLOB' (or 'RAW'),
and 'bool' => 'NUMBER(1)' (or 'CHAR(1)'); keep the existing ELSE fallback but
prefer to normalize to an Oracle-compatible type where possible so clients
expecting Oracle types do not receive raw PostgreSQL type names (update the CASE
handling around t.typname to include these new WHEN branches).
- Line 18: COLUMN_ID and DBA_COLUMN_ID both map to a.attnum::numeric, creating
an unnecessary duplicate; either remove DBA_COLUMN_ID or change its expression
to a distinct value for Oracle compatibility, and if kept for compatibility add
a concise comment explaining why DBA_COLUMN_ID intentionally mirrors COLUMN_ID;
update the select projection that defines COLUMN_ID and DBA_COLUMN_ID (the
a.attnum::numeric mappings) accordingly and ensure any dependent code references
are adjusted or documented.
- Around line 52-55: The WHERE clause currently filters c.relkind IN ('r','v')
and excludes system/hidden/drop columns via a.attnum and a.attisdropped; update
it to also include materialized views by adding 'm' to c.relkind, and restrict
system schemas by adding a condition on n.nspname (e.g., n.nspname NOT IN
('pg_catalog','information_schema','pg_toast')) or replace/augment that with a
permission check like has_table_privilege(c.oid, 'SELECT') to only expose
accessible objects; adjust the predicate around c.relkind, n.nspname, and/or
has_table_privilege to implement this filtering.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d9c3b09d-06a6-4abd-9948-089b9da89743
📒 Files selected for processing (1)
src/backend/catalog/ora_all_tab_columns.sql
CASE WHEN t.typname IN ('numeric', 'decimal') THEN (a.atttypmod >> 16) & 65535 ELSE NULL END::numeric,
CASE WHEN t.typname IN ('numeric', 'decimal') THEN a.atttypmod & 65535 ELSE NULL END::numeric,
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify numeric typmod encoding in PostgreSQL source
rg -n "VARHDRSZ|typmod" --type=c -C3 | head -100
# Search for existing numeric precision/scale extraction patterns in IvorySQL
ast-grep --pattern 'atttypmod >> 16'
ast-grep --pattern 'atttypmod & 65535'
Repository: IvorySQL/IvorySQL
Length of output: 6730
🏁 Script executed:
# First, read the file to see the actual context around lines 36-37
head -50 src/backend/catalog/ora_all_tab_columns.sql | tail -30
Repository: IvorySQL/IvorySQL
Length of output: 1409
🏁 Script executed:
# Search for numeric typmod encoding in PostgreSQL source files
rg -n "numeric.*typmod|typmod.*numeric" -A5 -B5 --type=c
# Also search for typmod extraction patterns
rg -n "atttypmod.*>>|atttypmod.*&" --type=c -C3 | head -150
Repository: IvorySQL/IvorySQL
Length of output: 24273
🏁 Script executed:
# Look for numeric-specific typmod handling in numeric.c or similar
fd "numeric" src --type=f -x grep -l "typmod" {}
Repository: IvorySQL/IvorySQL
Length of output: 120
🌐 Web query:
PostgreSQL numeric datatype typmod encoding precision scale
💡 Result:
In PostgreSQL, the numeric datatype (also known as decimal) uses a type modifier (typmod) to encode the precision and scale specified in declarations like NUMERIC(precision, scale). The typmod is an int32 value computed as ((precision << 16) | (scale & 0x7FF)) + VARHDRSZ, where VARHDRSZ is 4. Precision occupies the upper 16 bits, and scale (11-bit signed, range -1024 to 1023) the lower 11 bits after offset and XOR for negative values. To extract: - Precision: numeric_typmod_precision(typmod) = ((typmod - 4) >> 16) + 0 - Scale: numeric_typmod_scale(typmod) = (((typmod - 4) & 0x7FF) ^ 1024) - 1024 Valid typmods are >= 4. If no typmod or unspecified (NUMERIC without parameters), typmod = -1, allowing unlimited precision up to implementation limits (131072 digits before decimal point, 16383 after). Precision is total significant digits (1-1000 max specified). Scale is digits after decimal point (-1000 to 1000). Values exceeding typmod limits raise errors or round as per rules. This encoding is handled by the typmodin/typmodout functions (numeric_typmodin/numeric_typmodout). The actual numeric value storage uses a separate varlena format with sign, weight, dscale, and base-10000 digits, applying typmod constraints via apply_typmod.
Citations:
- 1: https://www.postgresql.org/docs/current/datatype-numeric.html
- 2: http://www.postgresql.org/docs/15/datatype-numeric.html
- 3: https://doxygen.postgresql.org/backend_2utils_2adt_2numeric_8c_source.html
- 4: https://doxygen.postgresql.org/numeric_8h.html
Incorrect precision/scale extraction for numeric/decimal types.
PostgreSQL stores atttypmod for numeric as ((precision << 16) | (scale & 0x7ff)) + VARHDRSZ where VARHDRSZ = 4. The current extraction doesn't subtract the header size before bit manipulation.
- Line 36: Precision should use ((atttypmod - 4) >> 16) & 0xffff
- Line 37: Scale should use (((atttypmod - 4) & 0x7ff) ^ 1024) - 1024
The second formula handles sign extension for negative scales (range -1024 to 1023). Without these fixes, precision will be offset by the header and scale will include garbage from bits 11-15.
🐛 Proposed fix
- CASE WHEN t.typname IN ('numeric', 'decimal') THEN (a.atttypmod >> 16) & 65535 ELSE NULL END::numeric,
- CASE WHEN t.typname IN ('numeric', 'decimal') THEN a.atttypmod & 65535 ELSE NULL END::numeric,
+ CASE WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0 THEN ((a.atttypmod - 4) >> 16) & 0xffff ELSE NULL END::numeric,
+ CASE WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0 THEN (((a.atttypmod - 4) & 0x7ff) ^ 1024) - 1024 ELSE NULL END::numeric,
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- CASE WHEN t.typname IN ('numeric', 'decimal') THEN (a.atttypmod >> 16) & 65535 ELSE NULL END::numeric,
- CASE WHEN t.typname IN ('numeric', 'decimal') THEN a.atttypmod & 65535 ELSE NULL END::numeric,
+ CASE WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0 THEN ((a.atttypmod - 4) >> 16) & 0xffff ELSE NULL END::numeric,
+ CASE WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0 THEN (((a.atttypmod - 4) & 0x7ff) ^ 1024) - 1024 ELSE NULL END::numeric,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 36 - 37, The
precision/scale extraction for numeric/decimal types is wrong because atttypmod
includes a 4-byte header; update the CASE expressions that reference a.atttypmod
(the lines using t.typname IN ('numeric','decimal')) to subtract VARHDRSZ (4)
before bit ops: compute precision as ((a.atttypmod - 4) >> 16) & 0xffff and
compute scale using the sign-extension formula (((a.atttypmod - 4) & 0x7ff) ^
1024) - 1024 so negative scales are handled correctly; adjust the two CASE lines
that currently use (a.atttypmod >> 16) & 65535 and a.atttypmod & 65535 to use
the corrected expressions.
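The typmod arithmetic above is easy to sanity-check outside SQL; this standalone sketch mirrors the encode/decode formulas quoted from the PostgreSQL documentation (VARHDRSZ = 4, 11-bit sign-extended scale):

```python
VARHDRSZ = 4  # varlena header size included in PostgreSQL's numeric typmod

def numeric_typmod(precision: int, scale: int) -> int:
    # Encoding per the docs quoted above:
    # ((precision << 16) | (scale & 0x7ff)) + VARHDRSZ
    return ((precision << 16) | (scale & 0x7FF)) + VARHDRSZ

def typmod_precision(typmod: int) -> int:
    # Subtract the header before bit-shifting, as the review fix requires
    return ((typmod - VARHDRSZ) >> 16) & 0xFFFF

def typmod_scale(typmod: int) -> int:
    # XOR/subtract sign-extends the 11-bit scale field (range -1024..1023)
    return (((typmod - VARHDRSZ) & 0x7FF) ^ 1024) - 1024

tm = numeric_typmod(10, 2)       # NUMERIC(10,2)
print(typmod_precision(tm))      # 10
print(typmod_scale(tm))          # 2
print(tm & 65535)                # 6 -- naive extraction is off by VARHDRSZ

tm_neg = numeric_typmod(5, -2)   # negative scales are allowed in recent PostgreSQL
print(typmod_scale(tm_neg))      # -2
```

Note how the naive `tm & 65535` extraction flagged in the review returns 6 instead of 2 for NUMERIC(10,2), and would mangle negative scales entirely without the sign-extension step.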
CASE c.relkind WHEN 'r' THEN 'TABLE' WHEN 'v' THEN 'VIEW' ELSE NULL END::varchar,
-- Fix: ensure CHAR_LENGTH is non-negative
CASE WHEN t.typname IN ('varchar', 'char') THEN GREATEST(a.atttypmod - 4, 0) ELSE NULL END::numeric,
CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE 'BYTE' END::varchar,
CHAR_USED returns 'BYTE' for non-character types; Oracle returns NULL.
In Oracle's ALL_TAB_COLUMNS, CHAR_USED is NULL for non-character data types, not 'BYTE'. Returning 'BYTE' for numeric, date, or other non-character columns may cause compatibility issues with applications expecting Oracle behavior.
🔧 Proposed fix
- CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE 'BYTE' END::varchar,
+ CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE NULL END::varchar,
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE 'BYTE' END::varchar,
+ CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE NULL END::varchar,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backend/catalog/ora_all_tab_columns.sql` at line 44, The CASE expression
generating CHAR_USED currently returns 'BYTE' for non-character types; update
the CASE in the SELECT (the expression using t.typname and producing CHAR_USED)
so that WHEN t.typname IN ('varchar','char') THEN 'CHAR' ELSE NULL (cast to the
same varchar type if needed) instead of 'BYTE', ensuring the CHAR_USED column
returns NULL for non-character types consistent with Oracle behavior.
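The intended semantics boil down to a two-way predicate; as a minimal sketch (assuming typname is the resolved PostgreSQL type name):

```python
def char_used(typname: str):
    # Oracle's ALL_TAB_COLUMNS sets CHAR_USED only for character columns;
    # every other type gets NULL (None here), never 'BYTE'.
    return "CHAR" if typname in ("varchar", "char") else None

print(char_used("varchar"))  # CHAR
print(char_used("numeric"))  # None
```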
-- Fix: IvorySQL-compatible PUBLIC synonym syntax
CREATE OR REPLACE SYNONYM ALL_TAB_COLUMNS FOR pg_catalog.ALL_TAB_COLUMNS;
No newline at end of file
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for existing SYNONYM usage in IvorySQL codebase
rg -i "CREATE.*SYNONYM" --type=sql -C2
# Check if there's documentation about synonym support
fd -i readme -e md --exec cat {} \; | head -200
Repository: IvorySQL/IvorySQL
Length of output: 12155
🏁 Script executed:
#!/bin/bash
# Search for SYNONYM-related code in Oracle compatibility features
rg -i "synonym" src/backend --type=sql -C1 | head -100
# Look for Oracle feature documentation/declarations
fd -i feature -e txt -e md -e sql | xargs rg -l "oracle" | head -10
Repository: IvorySQL/IvorySQL
Length of output: 251
🏁 Script executed:
#!/bin/bash
# Search for Oracle mode/compatibility module
find src -name "*oracle*" -type f | head -20
# Look for tests that might verify SYNONYM
fd -i "*synonym*" --type f
Repository: IvorySQL/IvorySQL
Length of output: 900
🏁 Script executed:
#!/bin/bash
# Search for SYNONYM keyword in parser and compatibility code
rg "SYNONYM" src/backend/oracle_parser/ src/backend/utils/adt/oracle_compat.c -i
# Look for SQL files with any synonym-related content
fd --glob "*.sql" --type f | xargs rg -l "synonym" -i | head -20
Repository: IvorySQL/IvorySQL
Length of output: 327
🏁 Script executed:
#!/bin/bash
# Check the oracle parser source for syntax definitions
head -200 src/backend/oracle_parser/liboracle_parser.c
# Search for CREATE token handling in oracle parser
rg "CREATE" src/backend/oracle_parser/liboracle_parser.c -C 2 | head -50
Repository: IvorySQL/IvorySQL
Length of output: 5937
🏁 Script executed:
#!/bin/bash
# Search for grammar definition files
find src -name "*.y" -o -name "*grammar*" -o -name "*gramparse*" | head -20
# Look for SYNONYM in all source files
rg "SYNONYM" src --type=c --type=h -i | head -50
Repository: IvorySQL/IvorySQL
Length of output: 1838
🏁 Script executed:
#!/bin/bash
# Search for what CREATE statements are supported in oracle parser
fd -e y -e l src/backend/oracle_parser/ | xargs cat | grep -i "create" | head -50
Repository: IvorySQL/IvorySQL
Length of output: 470
🏁 Script executed:
#!/bin/bash
# Search for CREATE statement patterns in the oracle grammar
rg "CREATE" src/backend/oracle_parser/ora_gram.y -A 5 -B 1 | head -100
Repository: IvorySQL/IvorySQL
Length of output: 2921
🏁 Script executed:
#!/bin/bash
# Look for SYNONYM in the oracle grammar file
rg -i "synonym" src/backend/oracle_parser/ora_gram.y -C 3
# Check if SYNONYM is defined as a keyword
rg -i "\\bsynonym\\b" src/backend/oracle_parser/ora_keywords.h -C 2
Repository: IvorySQL/IvorySQL
Length of output: 141
🏁 Script executed:
#!/bin/bash
# Find keyword definition files
find src/backend/oracle_parser -type f | head -20
# Comprehensive search in ora_gram.y for SYNONYM and FOR
wc -l src/backend/oracle_parser/ora_gram.y
rg "SYNONYM\|\\bFOR\\b" src/backend/oracle_parser/ora_gram.y -i | head -50
Repository: IvorySQL/IvorySQL
Length of output: 415
🏁 Script executed:
#!/bin/bash
# Search for CREATE OR REPLACE in oracle grammar
rg "CREATE.*REPLACE" src/backend/oracle_parser/ora_gram.y -i -A 3 -B 1 | head -150
Repository: IvorySQL/IvorySQL
Length of output: 4836
🏁 Script executed:
#!/bin/bash
# Direct search for SYNONYM in the entire ora_gram.y file
rg -i "synonym" src/backend/oracle_parser/ora_gram.y
# Also check if there's any CREATE ... FOR syntax pattern
rg "CREATE.*FOR" src/backend/oracle_parser/ora_gram.y -i -A 2 -B 2
Repository: IvorySQL/IvorySQL
Length of output: 5994
🏁 Script executed:
#!/bin/bash
# Check what objects can be created in Oracle mode
rg "^CreateStmt|^Create.*Stmt" src/backend/oracle_parser/ora_gram.y | head -30
# Look at complete line context for CREATE statements
grep -n "CREATE " src/backend/oracle_parser/ora_gram.y | wc -l
Repository: IvorySQL/IvorySQL
Length of output: 1339
🏁 Script executed:
#!/bin/bash
# Search for any tests that use the SYNONYM syntax from the problematic file
find src/oracle_test src/test -name "*.sql" -exec grep -l "CREATE.*SYNONYM" {} \;
# Check if ora_all_tab_columns.sql is actually used/executed anywhere
rg "ora_all_tab_columns" --type=sql --type=c --type=h
Repository: IvorySQL/IvorySQL
Length of output: 151
🏁 Script executed:
#!/bin/bash
# Check if this file is included or loaded anywhere
rg "ora_all_tab_columns" --type=sql --type=c --type=h --type=py
# Check the git history of this file to understand its purpose
git log --oneline src/backend/catalog/ora_all_tab_columns.sql | head -10
Repository: IvorySQL/IvorySQL
Length of output: 220
🏁 Script executed:
#!/bin/bash
# Check what the commit that added this file was trying to accomplish
git show a5bb5b8fb --stat
# Look at the full content of the file to understand context
cat src/backend/catalog/ora_all_tab_columns.sql
Repository: IvorySQL/IvorySQL
Length of output: 50374
🏁 Script executed:
#!/bin/bash
# Check if there are any test files that attempt to execute this SQL file
find . -name "*.sql" -o -name "*.sh" | xargs grep -l "ora_all_tab_columns" 2>/dev/null
# Search for any initialization or loading of catalog SQL files
rg "\.sql" src/backend/catalog/ --type=c -B2 -A2 | head -80
Repository: IvorySQL/IvorySQL
Length of output: 106
Remove or replace unsupported CREATE OR REPLACE SYNONYM syntax — this feature is not implemented in IvorySQL's Oracle parser.
The CREATE OR REPLACE SYNONYM statement will fail because SYNONYM is not a recognized object type in the Oracle compatibility parser. The grammar (src/backend/oracle_parser/ora_gram.y) supports CREATE OR REPLACE only for LANGUAGE, TRIGGER, AGGREGATE, FUNCTION, PROCEDURE, PACKAGE, TRANSFORM, RULE, VIEW, and CONVERSION — not SYNONYM. Additionally, this file is not referenced or executed anywhere in the codebase, indicating incomplete integration.
Either implement full SYNONYM support in the Oracle parser or replace this with an alternative approach (e.g., using PostgreSQL views or public aliases) that is actually supported.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 60 - 61, The
Oracle-specific `CREATE OR REPLACE SYNONYM ALL_TAB_COLUMNS FOR
pg_catalog.ALL_TAB_COLUMNS;` statement is unsupported by IvorySQL's Oracle
parser (see ora_gram.y); remove this SYNONYM line and replace it with a
supported construct such as creating a PostgreSQL view or public alias named
ALL_TAB_COLUMNS that selects from pg_catalog.ALL_TAB_COLUMNS (or comment out the
statement if not needed), and ensure any callers expect the view name rather
than a SYNONYM; alternatively, implement SYNONYM parsing in ora_gram.y if
SYNONYM semantics are required.
@Adria0615 Hi Adria, here are three issues to address before merging:
Oracle query results are uppercase by default, whereas IvorySQL uses lowercase by default. To maintain consistency with Oracle, IvorySQL provides the case-conversion function ORA_CASE_TRANS. It is recommended to use this function for automatic case conversion to match Oracle's behavior.
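As a rough illustration of the mismatch this comment describes (this does not reproduce ORA_CASE_TRANS's exact rules, which are IvorySQL-specific): Oracle reports unquoted identifiers in upper case while PostgreSQL folds them to lower case, so results from the new view would differ from Oracle's without a conversion step:

```python
# Hypothetical illustration only; ORA_CASE_TRANS's real behavior may differ.
def oracle_style(identifier: str) -> str:
    # Oracle folds unquoted identifiers to upper case;
    # PostgreSQL folds them to lower case.
    return identifier.upper()

pg_name = "all_tab_columns"   # what an unconverted catalog query returns
print(oracle_style(pg_name))  # ALL_TAB_COLUMNS
```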
a5bb5b8 to dd60c8f
Actionable comments posted: 4
♻️ Duplicate comments (3)
src/backend/catalog/ora_all_tab_columns.sql (3)
60-61: ⚠️ Potential issue | 🔴 Critical
Remove or replace unsupported CREATE OR REPLACE SYNONYM statement. Line 60-61 repeats the earlier blocker: synonym DDL is not supported by the current Oracle parser path in this codebase and can break deployment.
#!/bin/bash
# Verify parser support for SYNONYM in Oracle grammar.
rg -n "SYNONYM" src/backend/oracle_parser/ora_gram.y src/backend/oracle_parser/ora_keywords.h
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 60 - 61, Remove the unsupported Oracle synonym DDL; delete the "CREATE OR REPLACE SYNONYM ALL_TAB_COLUMNS FOR pg_catalog.ALL_TAB_COLUMNS" statement (or replace it with a supported alternative such as a view or schema-qualified reference) so the code no longer relies on the unsupported CREATE OR REPLACE SYNONYM syntax referencing ALL_TAB_COLUMNS/pg_catalog.ALL_TAB_COLUMNS.
44: ⚠️ Potential issue | 🟠 Major
CHAR_USED should be NULL for non-character types. Line 44 returns 'BYTE' for non-char data types, which diverges from Oracle ALL_TAB_COLUMNS semantics. This was already reported previously.
Proposed fix
- CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE 'BYTE' END::varchar,
+ CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE NULL END::varchar,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` at line 44, The CASE expression that populates CHAR_USED (currently "CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE 'BYTE' END::varchar") is wrong for non-character types; change the logic in the expression that computes CHAR_USED (the CASE in the SELECT building CHAR_USED) to return 'CHAR' only for character types and NULL for all other types instead of 'BYTE' so the result matches Oracle ALL_TAB_COLUMNS semantics.
35-37: ⚠️ Potential issue | 🔴 Critical
Fix numeric typmod handling for DATA_LENGTH, DATA_PRECISION, and DATA_SCALE. Line 35-37 still decodes numeric typmod incorrectly (missing -4 header handling and proper scale sign extraction), and DATA_LENGTH still uses generic logic for numeric/decimal. This repeats prior flagged compatibility defects.
Proposed fix
```diff
- CASE WHEN a.atttypmod > 0 THEN GREATEST(a.atttypmod - 4, 0) ELSE NULL END::numeric,
- CASE WHEN t.typname IN ('numeric', 'decimal') THEN (a.atttypmod >> 16) & 65535 ELSE NULL END::numeric,
- CASE WHEN t.typname IN ('numeric', 'decimal') THEN a.atttypmod & 65535 ELSE NULL END::numeric,
+ CASE
+     WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0 THEN ((a.atttypmod - 4) >> 16) & 0xffff
+     WHEN a.atttypmod > 0 THEN GREATEST(a.atttypmod - 4, 0)
+     ELSE NULL
+ END::numeric,
+ CASE
+     WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0
+     THEN ((a.atttypmod - 4) >> 16) & 0xffff
+     ELSE NULL
+ END::numeric,
+ CASE
+     WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0
+     THEN (((a.atttypmod - 4) & 0x7ff) ^ 1024) - 1024
+     ELSE NULL
+ END::numeric,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 35 - 37, The numeric typmod decoding is wrong: update the three CASE expressions so numeric/decimal uses the typmod with the 4-byte header removed before extracting length/precision/scale. Specifically, for DATA_LENGTH use CASE WHEN t.typname IN ('numeric','decimal') THEN GREATEST(a.atttypmod - 4, 0) ELSE (existing generic expression) END::numeric; for DATA_PRECISION use CASE WHEN t.typname IN ('numeric','decimal') THEN ((a.atttypmod - 4) >> 16) & 65535 ELSE NULL END::numeric; and for DATA_SCALE use CASE WHEN t.typname IN ('numeric','decimal') THEN (a.atttypmod - 4) & 65535 ELSE NULL END::numeric, replacing the current a.atttypmod-based extractions in the CASEs referenced above.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/backend/catalog/ora_all_tab_columns.sql`:
- Around line 2-18: The file defines the Oracle-compatible view
pg_catalog.ALL_TAB_COLUMNS and currently contains inline comments in Chinese;
update all inline comments (e.g., the header comment and column annotations next
to OWNER, TABLE_NAME, COLUMN_NAME, COLUMN_ID, DATA_TYPE, etc., and any other
Chinese comments around lines referenced such as the comments after
DATA_DEFAULT, TABLE_TYPE, CHAR_LENGTH, CHAR_USED, DBA_COLUMN_ID) to English
equivalents consistently across the file so the view and column descriptions are
entirely in English before merge; preserve the meaning of each comment while
converting language and keep formatting/comment style unchanged.
- Around line 3-61: The PR added Oracle-compat behavior in the
pg_catalog.ALL_TAB_COLUMNS view but omitted regression tests; add SQL regression
tests that query the view (pg_catalog.ALL_TAB_COLUMNS) and assert expected
values for DATA_LENGTH, DATA_PRECISION, DATA_SCALE (coverage for the atttypmod
bit-extraction logic used in the SELECT expressions), verify CHAR_USED returns
'CHAR' vs 'BYTE' for varchar/char types, confirm DATA_DEFAULT renders as empty
string when NULL (COALESCE path), and check casing/TYPE mapping (e.g. typname ->
'NUMBER'/'VARCHAR2'/'CLOB'/'DATE'); implement these as deterministic
INSERT/CREATE temporary table + ALTER COLUMN DEFAULT cases and SELECT assertions
comparing OWNER, TABLE_NAME, COLUMN_NAME and the target columns to expected
literals so the CI will fail on regressions.
- Around line 22-24: Wrap the identifier outputs n.nspname, c.relname, and
a.attname with ORA_CASE_TRANS(...) so the exported OWNER, TABLE_NAME, and
COLUMN_NAME use Oracle-compatible casing; update the select expressions that
produce OWNER/TABLE_NAME/COLUMN_NAME to call ORA_CASE_TRANS(n.nspname),
ORA_CASE_TRANS(c.relname), and ORA_CASE_TRANS(a.attname) respectively
(preserving the varchar cast if needed).
- Around line 1-3: The new SQL view file
src/backend/catalog/ora_all_tab_columns.sql is not being installed because it is
missing from the catalog install-data target in the Makefile; update the
src/backend/catalog/Makefile to include ora_all_tab_columns.sql in the
install-data list (alongside system_functions.sql, system_views.sql,
information_schema.sql, sql_features.txt) so that the CREATE OR REPLACE VIEW
pg_catalog.ALL_TAB_COLUMNS script is copied/installed during build, or
alternatively move the file into the contrib/ivorysql_ora integration path and
update that path's install rules to deploy the view.
---
Duplicate comments:
In `@src/backend/catalog/ora_all_tab_columns.sql`:
- Around line 60-61: Remove the unsupported Oracle synonym DDL; delete the
"CREATE OR REPLACE SYNONYM ALL_TAB_COLUMNS FOR pg_catalog.ALL_TAB_COLUMNS"
statement (or replace it with a supported alternative such as a view or
schema-qualified reference) so the code no longer relies on the unsupported
CREATE OR REPLACE SYNONYM syntax referencing
ALL_TAB_COLUMNS/pg_catalog.ALL_TAB_COLUMNS.
- Line 44: The CASE expression that populates CHAR_USED (currently "CASE WHEN
t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE 'BYTE' END::varchar") is wrong
for non-character types; change the logic in the expression that computes
CHAR_USED (the CASE in the SELECT building CHAR_USED) to return 'CHAR' only for
character types and NULL for all other types instead of 'BYTE' so the result
matches Oracle ALL_TAB_COLUMNS semantics.
- Around line 35-37: The numeric typmod decoding is wrong: update the three CASE
expressions so numeric/decimal uses the typmod with the 4-byte header removed
before extracting length/precision/scale. Specifically, for DATA_LENGTH use CASE
WHEN t.typname IN ('numeric','decimal') THEN GREATEST(a.atttypmod - 4, 0) ELSE
(existing generic expression) END::numeric; for DATA_PRECISION use CASE WHEN
t.typname IN ('numeric','decimal') THEN ((a.atttypmod - 4) >> 16) & 65535 ELSE
NULL END::numeric; and for DATA_SCALE use CASE WHEN t.typname IN
('numeric','decimal') THEN (a.atttypmod - 4) & 65535 ELSE NULL END::numeric,
replacing the current a.atttypmod-based extractions in the CASEs referenced
above.
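The header-stripping and sign-extension arithmetic in that typmod fix can be sanity-checked outside SQL. Below is a minimal Python sketch; `VARHDRSZ = 4` and the 11-bit signed scale encoding follow PostgreSQL's `numeric.c`, while the function names here are hypothetical helpers, not IvorySQL code:

```python
VARHDRSZ = 4  # PostgreSQL folds a 4-byte header into numeric atttypmod

def make_numeric_typmod(precision: int, scale: int) -> int:
    # Packing scheme from PostgreSQL's numeric.c: precision in the high
    # 16 bits, scale in the low 11 bits (two's complement), plus header.
    return ((precision << 16) | (scale & 0x7FF)) + VARHDRSZ

def typmod_precision(typmod: int) -> int:
    return ((typmod - VARHDRSZ) >> 16) & 0xFFFF

def typmod_scale(typmod: int) -> int:
    # Sign-extend the 11-bit field so negative scales (allowed since
    # PostgreSQL 15) decode correctly -- the xor/subtract trick in the fix.
    return (((typmod - VARHDRSZ) & 0x7FF) ^ 1024) - 1024

tm = make_numeric_typmod(10, 2)          # numeric(10,2)
assert typmod_precision(tm) == 10
assert typmod_scale(tm) == 2

tm = make_numeric_typmod(5, -2)          # numeric(5,-2), negative scale
assert typmod_precision(tm) == 5
assert typmod_scale(tm) == -2

# Decoding without removing the header corrupts the low bits:
assert (make_numeric_typmod(10, 2) & 65535) != 2
```

The last assertion shows concretely why the view's original `a.atttypmod & 65535` expression cannot return the declared scale.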
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: bc89a8b5-c5bd-47e2-b1b1-a19f4d678ce7
📒 Files selected for processing (1)
src/backend/catalog/ora_all_tab_columns.sql
-- Oracle兼容 ALL_TAB_COLUMNS 系统视图定义
CREATE OR REPLACE VIEW pg_catalog.ALL_TAB_COLUMNS
(
    OWNER,          -- 模式名(对应Oracle的所有者)
    TABLE_NAME,     -- 表/视图名
    COLUMN_NAME,    -- 列名
    COLUMN_ID,      -- 列序号
    DATA_TYPE,      -- 数据类型
    DATA_LENGTH,    -- 数据长度
    DATA_PRECISION, -- 数值精度
    DATA_SCALE,     -- 小数位
    NULLABLE,       -- 空值标识(Y/N)
    DATA_DEFAULT,   -- 默认值
    TABLE_TYPE,     -- 表类型(TABLE/VIEW)
    CHAR_LENGTH,    -- 字符长度
    CHAR_USED,      -- 字符使用方式(BYTE/CHAR)
    DBA_COLUMN_ID   -- 兼容Oracle的DBA级列ID
Convert inline comments to English before merge.
Lines in this file still use Chinese comments (for example, Line 2, Line 27, Line 34, Line 57). The review thread marked English-only comments as a blocker.
Also applies to: 27-27, 34-34, 39-39, 42-42, 53-55, 57-57, 60-60
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 2 - 18, The file
defines the Oracle-compatible view pg_catalog.ALL_TAB_COLUMNS and currently
contains inline comments in Chinese; update all inline comments (e.g., the
header comment and column annotations next to OWNER, TABLE_NAME, COLUMN_NAME,
COLUMN_ID, DATA_TYPE, etc., and any other Chinese comments around lines
referenced such as the comments after DATA_DEFAULT, TABLE_TYPE, CHAR_LENGTH,
CHAR_USED, DBA_COLUMN_ID) to English equivalents consistently across the file so
the view and column descriptions are entirely in English before merge; preserve
the meaning of each comment while converting language and keep
formatting/comment style unchanged.
CREATE OR REPLACE VIEW pg_catalog.ALL_TAB_COLUMNS
(
    OWNER,          -- 模式名(对应Oracle的所有者)
    TABLE_NAME,     -- 表/视图名
    COLUMN_NAME,    -- 列名
    COLUMN_ID,      -- 列序号
    DATA_TYPE,      -- 数据类型
    DATA_LENGTH,    -- 数据长度
    DATA_PRECISION, -- 数值精度
    DATA_SCALE,     -- 小数位
    NULLABLE,       -- 空值标识(Y/N)
    DATA_DEFAULT,   -- 默认值
    TABLE_TYPE,     -- 表类型(TABLE/VIEW)
    CHAR_LENGTH,    -- 字符长度
    CHAR_USED,      -- 字符使用方式(BYTE/CHAR)
    DBA_COLUMN_ID   -- 兼容Oracle的DBA级列ID
)
AS
SELECT
    n.nspname::varchar,
    c.relname::varchar,
    a.attname::varchar,
    a.attnum::numeric,
    CASE
        -- Oracle数据类型 → IvorySQL映射(关键兼容点)
        WHEN t.typname = 'int4' THEN 'NUMBER'
        WHEN t.typname = 'varchar' THEN 'VARCHAR2'
        WHEN t.typname = 'text' THEN 'CLOB'
        WHEN t.typname = 'timestamp' THEN 'DATE'
        ELSE t.typname::varchar
    END,
    -- 修正:保证DATA_LENGTH非负
    CASE WHEN a.atttypmod > 0 THEN GREATEST(a.atttypmod - 4, 0) ELSE NULL END::numeric,
    CASE WHEN t.typname IN ('numeric', 'decimal') THEN (a.atttypmod >> 16) & 65535 ELSE NULL END::numeric,
    CASE WHEN t.typname IN ('numeric', 'decimal') THEN a.atttypmod & 65535 ELSE NULL END::numeric,
    CASE WHEN a.attnotnull THEN 'N' ELSE 'Y' END::varchar,
    -- 修正:默认值NULL时显示空字符串,对齐Oracle
    COALESCE(pg_get_expr(ad.adbin, ad.adrelid)::varchar, '')::varchar,
    CASE c.relkind WHEN 'r' THEN 'TABLE' WHEN 'v' THEN 'VIEW' ELSE NULL END::varchar,
    -- 修正:保证CHAR_LENGTH非负
    CASE WHEN t.typname IN ('varchar', 'char') THEN GREATEST(a.atttypmod - 4, 0) ELSE NULL END::numeric,
    CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE 'BYTE' END::varchar,
    a.attnum::numeric
FROM
    pg_namespace n
    JOIN pg_class c ON n.oid = c.relnamespace
    JOIN pg_attribute a ON c.oid = a.attrelid
    JOIN pg_type t ON a.atttypid = t.oid
    LEFT JOIN pg_attrdef ad ON a.attrelid = ad.adrelid AND a.attnum = ad.adnum
WHERE
    c.relkind IN ('r', 'v')   -- 仅表和视图
    AND a.attnum > 0          -- 排除系统隐藏列
    AND NOT a.attisdropped;   -- 排除已删除列

-- 授予所有用户查询权限(对齐Oracle的PUBLIC权限)
GRANT SELECT ON pg_catalog.ALL_TAB_COLUMNS TO PUBLIC;

-- 修正:IvorySQL兼容的PUBLIC同义词语法
CREATE OR REPLACE SYNONYM ALL_TAB_COLUMNS FOR pg_catalog.ALL_TAB_COLUMNS;
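For the `DATA_LENGTH`/`CHAR_LENGTH` expressions in the file above, the `atttypmod` arithmetic can also be checked outside the database. A short Python sketch follows; `VARHDRSZ = 4` and the `-1` "no limit" sentinel are PostgreSQL facts, but the function name is a hypothetical helper:

```python
VARHDRSZ = 4  # varlena header size PostgreSQL folds into varchar atttypmod

def varchar_data_length(atttypmod: int):
    """Mirror the view's CASE: a positive typmod carries the declared
    length plus VARHDRSZ; -1 means no declared length limit (NULL)."""
    if atttypmod > 0:
        return max(atttypmod - VARHDRSZ, 0)
    return None  # unconstrained varchar / text -> NULL in the view

assert varchar_data_length(54) == 50    # varchar(50) stores typmod 54
assert varchar_data_length(-1) is None  # plain varchar, text
assert varchar_data_length(4) == 0      # GREATEST(..., 0) floor
```

This is why the `GREATEST(a.atttypmod - 4, 0)` guard combined with the `a.atttypmod > 0` check keeps `DATA_LENGTH` non-negative without inventing a length for unconstrained columns.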
Add regression tests for this view before merge.
Current change introduces Oracle-compatibility behavior but no accompanying regression tests were included (explicit blocker in PR discussion). Please add tests for at least Line 35-37 numeric handling, Line 44 CHAR_USED, casing behavior, and default rendering.
I can draft a regression test SQL + expected output set for these scenarios if you want me to open a follow-up patch.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 3 - 61, The PR
added Oracle-compat behavior in the pg_catalog.ALL_TAB_COLUMNS view but omitted
regression tests; add SQL regression tests that query the view
(pg_catalog.ALL_TAB_COLUMNS) and assert expected values for DATA_LENGTH,
DATA_PRECISION, DATA_SCALE (coverage for the atttypmod bit-extraction logic used
in the SELECT expressions), verify CHAR_USED returns 'CHAR' vs 'BYTE' for
varchar/char types, confirm DATA_DEFAULT renders as empty string when NULL
(COALESCE path), and check casing/TYPE mapping (e.g. typname ->
'NUMBER'/'VARCHAR2'/'CLOB'/'DATE'); implement these as deterministic
INSERT/CREATE temporary table + ALTER COLUMN DEFAULT cases and SELECT assertions
comparing OWNER, TABLE_NAME, COLUMN_NAME and the target columns to expected
literals so the CI will fail on regressions.
n.nspname::varchar,
c.relname::varchar,
a.attname::varchar,
Apply ORA_CASE_TRANS to identifier outputs for Oracle-compatible casing.
Lines 22-24 currently expose raw nspname/relname/attname, which will default to lowercase in many cases. Wrap these with ORA_CASE_TRANS(...) so OWNER, TABLE_NAME, and COLUMN_NAME match Oracle-style behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 22 - 24, Wrap the
identifier outputs n.nspname, c.relname, and a.attname with ORA_CASE_TRANS(...)
so the exported OWNER, TABLE_NAME, and COLUMN_NAME use Oracle-compatible casing;
update the select expressions that produce OWNER/TABLE_NAME/COLUMN_NAME to call
ORA_CASE_TRANS(n.nspname), ORA_CASE_TRANS(c.relname), and
ORA_CASE_TRANS(a.attname) respectively (preserving the varchar cast if needed).
Force-pushed: dd60c8f to be80a92 (Compare)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/backend/catalog/Makefile`:
- Around line 60-61: Add ora_all_tab_columns.sql to
src/backend/catalog/meson.build so Meson installs the same file as the Makefile:
update the install_data list (or equivalent variable) in meson.build to include
"ora_all_tab_columns.sql" (mirror the Makefile's $(INSTALL_DATA) entry) ensuring
Meson-based builds don't omit it; also remove the trailing whitespace at the end
of the Makefile line that installs ora_all_tab_columns.sql to clean up line 61.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9f59ec29-5300-4c5a-9960-e513eca29241
📒 Files selected for processing (2)
src/backend/catalog/Makefile
src/backend/catalog/ora_all_tab_columns.sql
🚧 Files skipped from review as they are similar to previous changes (1)
- src/backend/catalog/ora_all_tab_columns.sql
$(INSTALL_DATA) $(srcdir)/ora_all_tab_columns.sql '$(DESTDIR)$(datadir)/ora_all_tab_columns.sql'
Missing corresponding update in meson.build will cause Meson-based installations to omit this file.
The Makefile correctly installs ora_all_tab_columns.sql, but the corresponding src/backend/catalog/meson.build file (lines 41-48) is not updated. This inconsistency means:
- Make-based builds: file is installed ✓
- Meson-based builds: file is silently missing ✗
Add the file to meson.build:
 install_data(
   'information_schema.sql',
   'sql_features.txt',
   'system_functions.sql',
   'system_views.sql',
   'ora_sys_schema.sql',
+  'ora_all_tab_columns.sql',
   install_dir: dir_data,
 )

Also, Line 61 has trailing whitespace that should be removed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backend/catalog/Makefile` around lines 60 - 61, Add
ora_all_tab_columns.sql to src/backend/catalog/meson.build so Meson installs the
same file as the Makefile: update the install_data list (or equivalent variable)
in meson.build to include "ora_all_tab_columns.sql" (mirror the Makefile's
$(INSTALL_DATA) entry) ensuring Meson-based builds don't omit it; also remove
the trailing whitespace at the end of the Makefile line that installs
ora_all_tab_columns.sql to clean up line 61.
Force-pushed: be80a92 to c482ae9 (Compare)
♻️ Duplicate comments (6)
src/backend/catalog/ora_all_tab_columns.sql (6)
22-24: ⚠️ Potential issue | 🟠 Major

Apply ORA_CASE_TRANS(...) to identifier columns for Oracle-compatible casing.

OWNER, TABLE_NAME, and COLUMN_NAME are currently emitted from raw names on Lines 22-24, so case behavior won't match Oracle expectations.

🔧 Proposed fix

- n.nspname::varchar,
- c.relname::varchar,
- a.attname::varchar,
+ ORA_CASE_TRANS(n.nspname)::varchar,
+ ORA_CASE_TRANS(c.relname)::varchar,
+ ORA_CASE_TRANS(a.attname)::varchar,

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 22 - 24, Apply ORA_CASE_TRANS(...) to the identifier columns so Oracle-style casing is used: wrap n.nspname, c.relname, and a.attname with ORA_CASE_TRANS(...) where OWNER, TABLE_NAME, and COLUMN_NAME are produced (the expressions producing those columns in the query) so the emitted OWNER/TABLE_NAME/COLUMN_NAME values follow Oracle-compatible casing rules.
2-18: ⚠️ Potential issue | 🟠 Major

Convert all inline comments to English before merge.
Chinese inline comments remain in multiple places (e.g., Line 2, Line 27, Line 34, Line 57), which conflicts with the review requirement.
Also applies to: 27-27, 34-34, 39-39, 42-42, 53-55, 57-57, 60-60
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 2 - 18, The inline comments in the CREATE OR REPLACE VIEW pg_catalog.ALL_TAB_COLUMNS definition (and other comment blocks tied to this view) are written in Chinese; update all inline comments to English so they meet project standards — e.g., translate comments for OWNER, TABLE_NAME, COLUMN_NAME, COLUMN_ID, DATA_TYPE, DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, DATA_DEFAULT, TABLE_TYPE, CHAR_LENGTH, CHAR_USED, and DBA_COLUMN_ID and any other comments within this view definition to concise English equivalents while preserving their original meaning.
3-61: ⚠️ Potential issue | 🟠 Major

Add regression coverage for this view's compatibility behavior.

The file introduces Oracle-compatibility logic (type mapping, typmod decoding, case behavior, default rendering), but no regression tests are included in this change set.

I can draft sql/expected regression cases for DATA_LENGTH/DATA_PRECISION/DATA_SCALE, CHAR_USED, DATA_DEFAULT, and identifier casing if you want a follow-up patch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 3 - 61, Add regression tests that exercise the Oracle-compatibility logic in the ALL_TAB_COLUMNS view: create tests that insert or define sample tables/columns and assert expected values for DATA_LENGTH, DATA_PRECISION, DATA_SCALE (typmod decoding logic in the SELECT for a.atttypmod), CHAR_LENGTH and CHAR_USED (varchar/char handling), DATA_DEFAULT (COALESCE(pg_get_expr...) producing '' when NULL), and mapped DATA_TYPE results (the CASE mapping in the SELECT). Target the view pg_catalog.ALL_TAB_COLUMNS and validate columns OWNER, TABLE_NAME, COLUMN_NAME, COLUMN_ID, DATA_TYPE, DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, DATA_DEFAULT, CHAR_LENGTH, CHAR_USED and DBA_COLUMN_ID; add sql/expected files covering edge cases (negative typmod, NULL defaults, various type names) so the regression suite will catch regressions in those transformations.
44-44: ⚠️ Potential issue | 🟠 Major

Return NULL for CHAR_USED on non-character types.

On Line 44, non-character columns return 'BYTE'; for Oracle compatibility this should be NULL.

🔧 Proposed fix

- CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE 'BYTE' END::varchar,
+ CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE NULL END::varchar,

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` at line 44, The CASE that produces CHAR_USED currently returns 'BYTE' for non-character types; change it so CHAR_USED returns 'CHAR' when t.typname IN ('varchar','char') and NULL otherwise (e.g., CASE WHEN t.typname IN ('varchar','char') THEN 'CHAR' ELSE NULL END::varchar) so non-character columns yield NULL instead of 'BYTE'.
36-37: ⚠️ Potential issue | 🔴 Critical

Fix numeric/decimal typmod decoding for precision/scale.

Lines 36-37 still decode a.atttypmod without removing the 4-byte header, so DATA_PRECISION/DATA_SCALE can be wrong.

🔧 Proposed fix

- CASE WHEN t.typname IN ('numeric', 'decimal') THEN (a.atttypmod >> 16) & 65535 ELSE NULL END::numeric,
- CASE WHEN t.typname IN ('numeric', 'decimal') THEN a.atttypmod & 65535 ELSE NULL END::numeric,
+ CASE WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0
+      THEN ((a.atttypmod - 4) >> 16) & 0xffff ELSE NULL END::numeric,
+ CASE WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0
+      THEN (((a.atttypmod - 4) & 0x7ff) ^ 1024) - 1024 ELSE NULL END::numeric,

#!/bin/bash
# Verify current SQL expression and canonical numeric typmod helpers in source.
rg -n -C2 "a\.atttypmod|DATA_PRECISION|DATA_SCALE" src/backend/catalog/ora_all_tab_columns.sql
rg -n -C2 "numeric_typmod_precision|numeric_typmod_scale|VARHDRSZ" src/backend/utils/adt/numeric.c

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 36 - 37, The CASE that decodes numeric/decimal typmod (using a.atttypmod) must strip the 4-byte varlena header before extracting precision/scale; update the two expressions to use (a.atttypmod - 4) (or subtract VARHDRSZ) before bit-shifting/masking so DATA_PRECISION/DATA_SCALE are correct — e.g., replace uses of a.atttypmod in the CASE for t.typname IN ('numeric','decimal') with (a.atttypmod - 4) (or the equivalent numeric_typmod helper) when computing >> 16 & 65535 and & 65535.
60-61: ⚠️ Potential issue | 🔴 Critical

CREATE OR REPLACE SYNONYM needs verification against supported grammar.

Line 61 uses Oracle synonym syntax that may not be supported by IvorySQL parser paths; this can break execution of the script.

#!/bin/bash
# Verify whether SYNONYM grammar is supported in Oracle parser sources.
rg -n -i -C2 "\bsynonym\b" src/backend/oracle_parser/ora_gram.y src/backend/oracle_parser/ora_keywords.h
rg -n -C2 "CREATE OR REPLACE SYNONYM" src/backend/catalog/ora_all_tab_columns.sql
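Pinning down the CHAR_USED semantics argued in the duplicate comments above, here is a small Python reference function (a sketch of the intended outputs only, not IvorySQL code):

```python
def char_used(typname: str):
    """Oracle-style CHAR_USED: 'CHAR' for character types, NULL (None)
    for everything else -- mirrors the corrected CASE expression."""
    return 'CHAR' if typname in ('varchar', 'char') else None

assert char_used('varchar') == 'CHAR'
assert char_used('char') == 'CHAR'
assert char_used('int4') is None     # was 'BYTE' before the fix
assert char_used('numeric') is None
```

The key behavioral change is the last two assertions: non-character columns must yield NULL rather than a fabricated 'BYTE' marker.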
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/backend/catalog/ora_all_tab_columns.sql`:
- Around line 22-24: Apply ORA_CASE_TRANS(...) to the identifier columns so
Oracle-style casing is used: wrap n.nspname, c.relname, and a.attname with
ORA_CASE_TRANS(...) where OWNER, TABLE_NAME, and COLUMN_NAME are produced (the
expressions producing those columns in the query) so the emitted
OWNER/TABLE_NAME/COLUMN_NAME values follow Oracle-compatible casing rules.
- Around line 2-18: The inline comments in the CREATE OR REPLACE VIEW
pg_catalog.ALL_TAB_COLUMNS definition (and other comment blocks tied to this
view) are written in Chinese; update all inline comments to English so they meet
project standards — e.g., translate comments for OWNER, TABLE_NAME, COLUMN_NAME,
COLUMN_ID, DATA_TYPE, DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE,
DATA_DEFAULT, TABLE_TYPE, CHAR_LENGTH, CHAR_USED, and DBA_COLUMN_ID and any
other comments within this view definition to concise English equivalents while
preserving their original meaning.
- Around line 3-61: Add regression tests that exercise the Oracle-compatibility
logic in the ALL_TAB_COLUMNS view: create tests that insert or define sample
tables/columns and assert expected values for DATA_LENGTH, DATA_PRECISION,
DATA_SCALE (typmod decoding logic in the SELECT for a.atttypmod), CHAR_LENGTH
and CHAR_USED (varchar/char handling), DATA_DEFAULT (COALESCE(pg_get_expr...)
producing '' when NULL), and mapped DATA_TYPE results (the CASE mapping in the
SELECT). Target the view pg_catalog.ALL_TAB_COLUMNS and validate columns OWNER,
TABLE_NAME, COLUMN_NAME, COLUMN_ID, DATA_TYPE, DATA_LENGTH, DATA_PRECISION,
DATA_SCALE, NULLABLE, DATA_DEFAULT, CHAR_LENGTH, CHAR_USED and DBA_COLUMN_ID;
add sql/expected files covering edge cases (negative typmod, NULL defaults,
various type names) so the regression suite will catch regressions in those
transformations.
- Line 44: The CASE that produces CHAR_USED currently returns 'BYTE' for
non-character types; change it so CHAR_USED returns 'CHAR' when t.typname IN
('varchar','char') and NULL otherwise (e.g., CASE WHEN t.typname IN
('varchar','char') THEN 'CHAR' ELSE NULL END::varchar) so non-character columns
yield NULL instead of 'BYTE'.
- Around line 36-37: The CASE that decodes numeric/decimal typmod (using
a.atttypmod) must strip the 4-byte varlena header before extracting
precision/scale; update the two expressions to use (a.atttypmod - 4) (or
subtract VARHDRSZ) before bit-shifting/masking so DATA_PRECISION/DATA_SCALE are
correct — e.g., replace uses of a.atttypmod in the CASE for t.typname IN
('numeric','decimal') with (a.atttypmod - 4) (or the equivalent numeric_typmod
helper) when computing >> 16 & 65535 and & 65535.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: eaa13013-d469-43ae-9790-5e059905d898
📒 Files selected for processing (2)
src/backend/catalog/Makefile
src/backend/catalog/ora_all_tab_columns.sql
✅ Files skipped from review due to trivial changes (1)
- src/backend/catalog/Makefile
Force-pushed: c482ae9 to 018504b (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (5)
src/backend/catalog/ora_all_tab_columns.sql (5)
22-24: ⚠️ Potential issue | 🟠 Major

Apply ORA_CASE_TRANS(...) to OWNER/TABLE_NAME/COLUMN_NAME outputs.

Lines 22-24 currently expose raw identifiers, so casing won't match Oracle-style defaults. Wrap all three with ORA_CASE_TRANS(...).

Suggested patch

- n.nspname::varchar,
- c.relname::varchar,
- a.attname::varchar,
+ ORA_CASE_TRANS(n.nspname)::varchar,
+ ORA_CASE_TRANS(c.relname)::varchar,
+ ORA_CASE_TRANS(a.attname)::varchar,

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 22 - 24, The OWNER/TABLE_NAME/COLUMN_NAME columns currently return raw identifiers; wrap n.nspname, c.relname and a.attname with ORA_CASE_TRANS(...) so results follow Oracle-style casing. Locate the SELECT producing those columns (references: n.nspname, c.relname, a.attname) in the ora_all_tab_columns view/query and replace each with ORA_CASE_TRANS(n.nspname), ORA_CASE_TRANS(c.relname) and ORA_CASE_TRANS(a.attname) respectively, keeping the surrounding casts/aliases intact.
2-18: ⚠️ Potential issue | 🟠 Major

Convert all inline comments to English.
Chinese comments are still present; this is an explicit merge blocker from PR discussion.
Also applies to: 27-27, 34-34, 39-39, 42-42, 53-55, 57-57, 60-60
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 2 - 18, Convert all Chinese inline comments in the CREATE OR REPLACE VIEW pg_catalog.ALL_TAB_COLUMNS definition to English; specifically replace the Chinese descriptions next to the column list (OWNER, TABLE_NAME, COLUMN_NAME, COLUMN_ID, DATA_TYPE, DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, DATA_DEFAULT, TABLE_TYPE, CHAR_LENGTH, CHAR_USED, DBA_COLUMN_ID) and any other Chinese comments in this view (also at the other noted lines) with concise English equivalents so the view header and all inline comments are fully in English.
3-61: ⚠️ Potential issue | 🟠 Major

Add regression coverage for this new view before merge.

This introduces Oracle-compat behavior without test protection for typmod decoding, CHAR_USED, casing conversion, and default-value rendering.

I can draft the regression SQL/expected-output pair for ora_sysview to cover these cases if you want a follow-up patch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 3 - 61, The new pg_catalog.ALL_TAB_COLUMNS view adds Oracle-compat behavior but lacks regression tests; add SQL regression tests that query pg_catalog.ALL_TAB_COLUMNS (or its synonym ALL_TAB_COLUMNS) and assert expected values for typmod decoding (DATA_LENGTH, DATA_PRECISION, DATA_SCALE), CHAR_LENGTH and CHAR_USED for varchar/char types, casing of OWNER/TABLE_NAME/COLUMN_NAME, and DATA_DEFAULT rendering (empty string when NULL, exact expression when present); include rows for a table with numeric(n,p), varchar(n), char(n), text, timestamp and columns with/without defaults and mixed case identifiers to cover those branches and ensure CI will fail on regressions.
44-44: ⚠️ Potential issue | 🟠 Major

Return NULL for CHAR_USED on non-character types.

The current ELSE 'BYTE' is not Oracle-compatible for non-char columns; use NULL instead.

Suggested patch

- CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE 'BYTE' END::varchar,
+ CASE WHEN t.typname IN ('varchar', 'char') THEN 'CHAR' ELSE NULL END::varchar,

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` at line 44, The CHAR_USED expression currently returns 'CHAR' or 'BYTE'; update the CASE in the ora_all_tab_columns.sql select expression (the CASE WHEN t.typname IN ('varchar','char') THEN 'CHAR' ELSE 'BYTE' END::varchar) to return NULL for non-character types instead of 'BYTE' (e.g., ELSE NULL::varchar) so CHAR_USED is NULL for non-char columns.
35-37: ⚠️ Potential issue | 🔴 Critical

Fix numeric typmod decoding for DATA_PRECISION and DATA_SCALE.

a.atttypmod includes the 4-byte header; the current extraction produces incorrect precision/scale (and misses signed scale handling).

Suggested patch

- CASE WHEN t.typname IN ('numeric', 'decimal') THEN (a.atttypmod >> 16) & 65535 ELSE NULL END::numeric,
- CASE WHEN t.typname IN ('numeric', 'decimal') THEN a.atttypmod & 65535 ELSE NULL END::numeric,
+ CASE
+     WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0
+     THEN ((a.atttypmod - 4) >> 16) & 0xffff
+     ELSE NULL
+ END::numeric,
+ CASE
+     WHEN t.typname IN ('numeric', 'decimal') AND a.atttypmod > 0
+     THEN (((a.atttypmod - 4) & 0x7ff) ^ 1024) - 1024
+     ELSE NULL
+ END::numeric,

#!/bin/bash
# Verify PostgreSQL numeric typmod handling and this view expression usage.
rg -n "numeric_typmod|VARHDRSZ|typmod" src/backend/utils/adt/numeric.c src/include/utils/numeric.h
rg -n "DATA_PRECISION|DATA_SCALE|atttypmod" src/backend/catalog/ora_all_tab_columns.sql

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 35 - 37, The current extraction from a.atttypmod doesn't remove the VARHDRSZ header properly and doesn't sign-extend the numeric scale; update the CASEs that compute DATA_PRECISION and DATA_SCALE (the expressions using a.atttypmod and t.typname IN ('numeric','decimal')) to first subtract the 4-byte header (a.atttypmod - 4) and then extract precision from the high 16 bits and scale from the low 16 bits, making sure to cast the low 16 bits to a signed 16-bit type before converting to numeric so negative scales are handled correctly; adjust the CASE conditions that check a.atttypmod (> 0) and the two expressions that produce precision and scale accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/backend/catalog/ora_all_tab_columns.sql`:
- Around line 1-3: The CREATE OR REPLACE VIEW definition for
pg_catalog.ALL_TAB_COLUMNS currently lives in src/backend/catalog (file
src/backend/catalog/ora_all_tab_columns.sql) but needs to be relocated into the
Oracle sysview extension; move the ora_all_tab_columns.sql file into the
contrib/ivorysql_ora/src/sysview/ directory, update the extension's installation
manifest (the sysview install list) so the CREATE VIEW for ALL_TAB_COLUMNS is
applied through the extension pipeline, and remove the original file from
src/backend/catalog to avoid duplicate installation; make sure any references to
pg_catalog.ALL_TAB_COLUMNS in extension control or install scripts point to the
new path so the view is installed by the contrib/ivorysql_ora extension.
---
Duplicate comments:
In `@src/backend/catalog/ora_all_tab_columns.sql`:
- Around line 22-24: The OWNER/TABLE_NAME/COLUMN_NAME columns currently return
raw identifiers; wrap n.nspname, c.relname and a.attname with
ORA_CASE_TRANS(...) so results follow Oracle-style casing. Locate the SELECT
producing those columns (references: n.nspname, c.relname, a.attname) in the
ora_all_tab_columns view/query and replace each with ORA_CASE_TRANS(n.nspname),
ORA_CASE_TRANS(c.relname) and ORA_CASE_TRANS(a.attname) respectively, keeping
the surrounding casts/aliases intact.
- Around line 2-18: Convert all Chinese inline comments in the CREATE OR REPLACE
VIEW pg_catalog.ALL_TAB_COLUMNS definition to English; specifically replace the
Chinese descriptions next to the column list (OWNER, TABLE_NAME, COLUMN_NAME,
COLUMN_ID, DATA_TYPE, DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE,
DATA_DEFAULT, TABLE_TYPE, CHAR_LENGTH, CHAR_USED, DBA_COLUMN_ID) and any other
Chinese comments in this view (also at the other noted lines) with concise
English equivalents so the view header and all inline comments are fully in
English.
- Around line 3-61: The new pg_catalog.ALL_TAB_COLUMNS view adds Oracle-compat
behavior but lacks regression tests; add SQL regression tests that query
pg_catalog.ALL_TAB_COLUMNS (or its synonym ALL_TAB_COLUMNS) and assert expected
values for typmod decoding (DATA_LENGTH, DATA_PRECISION, DATA_SCALE),
CHAR_LENGTH and CHAR_USED for varchar/char types, casing of
OWNER/TABLE_NAME/COLUMN_NAME, and DATA_DEFAULT rendering (empty string when
NULL, exact expression when present); include rows for a table with
numeric(n,p), varchar(n), char(n), text, timestamp and columns with/without
defaults and mixed case identifiers to cover those branches and ensure CI will
fail on regressions.
- Line 44: The CHAR_USED expression currently returns 'CHAR' or 'BYTE'; update
the CASE in the ora_all_tab_columns.sql select expression (the CASE WHEN
t.typname IN ('varchar','char') THEN 'CHAR' ELSE 'BYTE' END::varchar) to return
NULL for non-character types instead of 'BYTE' (e.g., ELSE NULL::varchar) so
CHAR_USED is NULL for non-char columns.
- Around line 35-37: The current extraction from a.atttypmod doesn't remove the
VARHDRSZ header properly and doesn't sign-extend the numeric scale; update the
CASEs that compute DATA_PRECISION and DATA_SCALE (the expressions using
a.atttypmod and t.typname IN ('numeric','decimal')) to first subtract the 4-byte
header (a.atttypmod - 4) and then extract precision from the high 16 bits and
scale from the low 16 bits, making sure to cast the low 16 bits to a signed
16-bit type before converting to numeric so negative scales are handled
correctly; adjust the CASE conditions that check a.atttypmod (> 0) and the two
expressions that produce precision and scale accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 8b144f99-65f5-4e09-8d44-cc345752267f
📒 Files selected for processing (2)
src/backend/catalog/Makefile
src/backend/catalog/ora_all_tab_columns.sql
🚧 Files skipped from review as they are similar to previous changes (1)
- src/backend/catalog/Makefile
-- src/backend/catalog/ora_all_tab_columns.sql
-- Oracle兼容 ALL_TAB_COLUMNS 系统视图定义
CREATE OR REPLACE VIEW pg_catalog.ALL_TAB_COLUMNS
Move this view into the Oracle sysview extension path before merge.
This file is still under src/backend/catalog, but the required PR change asks for Oracle-compatible sysviews to live in contrib/ivorysql_ora/src/sysview/ and be installed through that pipeline.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backend/catalog/ora_all_tab_columns.sql` around lines 1 - 3, The CREATE
OR REPLACE VIEW definition for pg_catalog.ALL_TAB_COLUMNS currently lives in
src/backend/catalog (file src/backend/catalog/ora_all_tab_columns.sql) but needs
to be relocated into the Oracle sysview extension; move the
ora_all_tab_columns.sql file into the contrib/ivorysql_ora/src/sysview/
directory, update the extension's installation manifest (the sysview install
list) so the CREATE VIEW for ALL_TAB_COLUMNS is applied through the extension
pipeline, and remove the original file from src/backend/catalog to avoid
duplicate installation; make sure any references to pg_catalog.ALL_TAB_COLUMNS
in extension control or install scripts point to the new path so the view is
installed by the contrib/ivorysql_ora extension.
Force-pushed: 018504b to b4ace29 (Compare)
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 2
🧹 Nitpick comments (3)
contrib/ivorysql_ora/src/sysview/ora_all_tab_columns.sql (3)
92-97: CHAR_USED value may not match Oracle semantics.

Oracle's CHAR_USED column returns 'B' for byte semantics or 'C' for character semantics. Returning 'CHAR' as a string may confuse applications expecting Oracle-compatible values.

Consider returning 'C' (character semantics) since PostgreSQL varchar/bpchar are character-based, or NULL if semantics are undetermined.

Proposed fix

  /* CHAR_USED: NULL for non-character types (Oracle compatible) */
  CASE
      WHEN t.typname IN ('varchar', 'bpchar') THEN
-         'CHAR'
+         'C'
      ELSE
          NULL
  END::varchar,

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contrib/ivorysql_ora/src/sysview/ora_all_tab_columns.sql` around lines 92 - 97, The CHAR_USED CASE currently returns the string 'CHAR', which is not Oracle-compatible; update the CASE for the CHAR_USED column so that when t.typname IN ('varchar','bpchar') it returns 'C' (character semantics) and otherwise returns NULL, preserving the existing ::varchar cast and keeping the CASE expression tied to the CHAR_USED output.
107-109: Consider including partitioned tables and materialized views.

The filter only includes regular tables (`'r'`) and views (`'v'`). Oracle's `ALL_TAB_COLUMNS` also covers materialized views. Consider adding:

- `'p'` for partitioned tables
- `'m'` for materialized views

Proposed fix

```diff
-WHERE c.relkind IN ('r', 'v')
+WHERE c.relkind IN ('r', 'v', 'p', 'm')
   AND a.attnum > 0
   AND NOT a.attisdropped;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contrib/ivorysql_ora/src/sysview/ora_all_tab_columns.sql` around lines 107 - 109, The WHERE clause currently restricts c.relkind to ('r','v') and thus excludes partitioned tables and materialized views; update the relkind filter in the query that uses c.relkind (the WHERE that also checks a.attnum and a.attisdropped) to include 'p' and 'm' (e.g., change the IN list to include 'p','m') so partitioned tables and materialized views are returned alongside regular tables and views.
42-45: Timestamp types mapped to DATE may lose precision information.

PostgreSQL `timestamp` and `timestamptz` have sub-second precision, but Oracle `DATE` only stores to the second. Oracle's `TIMESTAMP` type would be a more accurate mapping. However, if this is intentional for compatibility with legacy Oracle applications that expect `DATE`, it is acceptable.
Verify each finding against the current code and only fix it if needed. In `@contrib/ivorysql_ora/src/sysview/ora_all_tab_columns.sql` around lines 42 - 45, The type-mapping branch that converts timestamp types to Oracle DATE (the WHEN t.typname = 'timestamp' THEN 'DATE' and WHEN t.typname = 'timestamptz' THEN 'DATE' cases) can drop sub-second precision; change those branches to map to Oracle TIMESTAMP (and TIMESTAMP WITH TIME ZONE for timestamptz if desired) instead of 'DATE' so precision is preserved, or add a configurable/explicit comment documenting the intentional legacy mapping if you must keep 'DATE'; update the WHEN t.typname = 'timestamp' and WHEN t.typname = 'timestamptz' branches accordingly.
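The alternative mapping the comment describes can be sketched as a fragment of the view's type-mapping CASE expression. The two timestamp branches come from the review; the `ELSE` fallback to `t.typname` is an assumption about the surrounding view definition:

```sql
-- Sketch: preserve sub-second precision by mapping to Oracle TIMESTAMP types.
-- The ELSE fallback to t.typname is assumed from the surrounding view definition.
CASE
    WHEN t.typname = 'timestamp'   THEN 'TIMESTAMP'
    WHEN t.typname = 'timestamptz' THEN 'TIMESTAMP WITH TIME ZONE'
    -- keep 'DATE' here only if legacy Oracle applications depend on it
    ELSE t.typname
END
```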
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@conftest.c`:
- Around line 1-54: This file conftest.c is an autogenerated Autoconf probe
(result of AC_SEARCH_LIBS(clock_gettime, rt) in configure.ac) and must be
removed from the PR: delete the committed conftest.c, revert it from the branch
(or remove the file in a follow-up commit), and ensure you do not add generated
configure probes to the repository in future (add a rule to ignore generated
probe files or adjust your commit to exclude them); reference
AC_SEARCH_LIBS(clock_gettime, rt) when locating the source of the probe so you
avoid committing similar artifacts again.
In `@contrib/ivorysql_ora/src/sysview/ora_all_tab_columns.sql`:
- Around line 48-59: The DATA_LENGTH CASE in ora_all_tab_columns.sql omits int2;
update the CASE expression that computes DATA_LENGTH to include a WHEN branch
for t.typname = 'int2' (parallel to the existing WHEN t.typname = 'int4' THEN 4
and WHEN t.typname = 'int8' THEN 8) and return the appropriate storage size (2)
as numeric so DATA_LENGTH matches the DATA_TYPE mapping that already includes
int2.
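The missing branch described above can be sketched as follows; the `int4`/`int8` branches are quoted from the review, while the overall CASE shape and the `NULL` fallback are assumptions about the view:

```sql
-- Sketch: DATA_LENGTH with the missing int2 (smallint, 2 bytes) branch added.
CASE
    WHEN t.typname = 'int2' THEN 2   -- previously missing
    WHEN t.typname = 'int4' THEN 4
    WHEN t.typname = 'int8' THEN 8
    ELSE NULL
END::numeric
```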
---
Nitpick comments:
In `@contrib/ivorysql_ora/src/sysview/ora_all_tab_columns.sql`:
- Around line 92-97: The CHAR_USED CASE currently returns the string 'CHAR',
which is not Oracle-compatible; update the CASE for the CHAR_USED column so that
when t.typname IN ('varchar','bpchar') it returns 'C' (character semantics) and
otherwise returns NULL, preserving the existing ::varchar cast and keeping the
CASE expression tied to the CHAR_USED output.
- Around line 107-109: The WHERE clause currently restricts c.relkind to
('r','v') and thus excludes partitioned tables and materialized views; update
the relkind filter in the query that uses c.relkind (the WHERE that also checks
a.attnum and a.attisdropped) to include 'p' and 'm' (e.g., change the IN list to
include 'p','m') so partitioned tables and materialized views are returned
alongside regular tables and views.
- Around line 42-45: The type-mapping branch that converts timestamp types to
Oracle DATE (the WHEN t.typname = 'timestamp' THEN 'DATE' and WHEN t.typname =
'timestamptz' THEN 'DATE' cases) can drop sub-second precision; change those
branches to map to Oracle TIMESTAMP (and TIMESTAMP WITH TIME ZONE for
timestamptz if desired) instead of 'DATE' so precision is preserved, or add a
configurable/explicit comment documenting the intentional legacy mapping if you
must keep 'DATE'; update the WHEN t.typname = 'timestamp' and WHEN t.typname =
'timestamptz' branches accordingly.
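Once the view is installed through the extension, the mappings flagged above can be spot-checked with an ordinary query. This is a sketch, assuming a sample table named `t1` exists; whether the stored `TABLE_NAME` is upper- or lower-cased depends on the server's case-folding behavior:

```sql
-- Sketch: inspect Oracle-compatible column metadata for one table.
-- Assumes a table t1 exists; checks both case-folding variants of its name.
SELECT table_name, column_name, data_type, data_length, char_used
FROM pg_catalog.all_tab_columns
WHERE table_name IN ('t1', 'T1')
ORDER BY column_name;
```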
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 6623e04d-e807-47f6-9ebb-a10b6274b377
📒 Files selected for processing (19)
- confdefs.h
- conftest.c
- conftest.err
- contrib/Makefile
- contrib/ivorysql_ora/ivorysql_ora--1.0.sql
- contrib/ivorysql_ora/src/sysview/Makefile
- contrib/ivorysql_ora/src/sysview/ora_all_tab_columns.sql
- contrib/ivorysql_ora/src/sysview/sysview--1.0.sql
- contrib/uuid-ossp/Makefile
- contrib/uuid-ossp/expected/ivy_uuid_ossp.out
- contrib/uuid-ossp/expected/uuid_ossp.out
- contrib/uuid-ossp/expected/uuid_ossp_1.out
- contrib/uuid-ossp/meson.build
- contrib/uuid-ossp/sql/ivy_uuid_ossp.sql
- contrib/uuid-ossp/sql/uuid_ossp.sql
- contrib/uuid-ossp/uuid-ossp--1.0--1.1.sql
- contrib/uuid-ossp/uuid-ossp--1.1.sql
- contrib/uuid-ossp/uuid-ossp.c
- contrib/uuid-ossp/uuid-ossp.control
💤 Files with no reviewable changes (11)
- contrib/uuid-ossp/uuid-ossp.control
- contrib/uuid-ossp/meson.build
- contrib/uuid-ossp/expected/uuid_ossp_1.out
- contrib/uuid-ossp/expected/ivy_uuid_ossp.out
- contrib/uuid-ossp/expected/uuid_ossp.out
- contrib/uuid-ossp/sql/uuid_ossp.sql
- contrib/uuid-ossp/uuid-ossp--1.1.sql
- contrib/uuid-ossp/Makefile
- contrib/uuid-ossp/uuid-ossp.c
- contrib/uuid-ossp/uuid-ossp--1.0--1.1.sql
- contrib/uuid-ossp/sql/ivy_uuid_ossp.sql
✅ Files skipped from review due to trivial changes (2)
- contrib/ivorysql_ora/src/sysview/sysview--1.0.sql
- confdefs.h
```c
/* confdefs.h */
#define PACKAGE_NAME "PostgreSQL"
#define PACKAGE_TARNAME "postgresql"
#define PACKAGE_VERSION "19devel"
#define PACKAGE_STRING "PostgreSQL 19devel"
#define PACKAGE_BUGREPORT "[email protected]"
#define PACKAGE_URL "https://www.postgresql.org/"
#define CONFIGURE_ARGS " '--prefix=/usr/local/ivorysql' 'PG_TEST_EXTRA='"
#define PG_MAJORVERSION "19"
#define PG_MAJORVERSION_NUM 19
#define PG_MINORVERSION_NUM 0
#define PG_VERSION "19devel"
#define DEF_PGPORT 5432
#define DEF_PGPORT_STR "5432"
#define DEF_ORAPORT 1521
#define DEF_ORAPORT_STR "1521"
#define BLCKSZ 8192
#define RELSEG_SIZE 131072
#define XLOG_BLCKSZ 8192
#define MAXIMUM_FUNARG 100
#define HAVE_VISIBILITY_ATTRIBUTE 1
#define DLSUFFIX ".so"
#define USE_ICU 1
#define PG_KRB_SRVNAM "postgres"
#define STDC_HEADERS 1
#define HAVE_SYS_TYPES_H 1
#define HAVE_SYS_STAT_H 1
#define HAVE_STDLIB_H 1
#define HAVE_STRING_H 1
#define HAVE_MEMORY_H 1
#define HAVE_STRINGS_H 1
#define HAVE_INTTYPES_H 1
#define HAVE_STDINT_H 1
#define HAVE_UNISTD_H 1
#define HAVE_PTHREAD_PRIO_INHERIT 1
#define HAVE_PTHREAD 1
#define HAVE_STRERROR_R 1
#define HAVE_LIBM 1
/* end confdefs.h. */

/* Override any GCC internal prototype to avoid an error.
   Use char because int might match the return type of a GCC
   builtin and then its argument prototype would still apply. */
#ifdef __cplusplus
extern "C"
#endif
char clock_gettime ();
int
main ()
{
return clock_gettime ();
  ;
  return 0;
}
```
Remove generated Autoconf probe file from the PR
conftest.c is a generated configure probe (from AC_SEARCH_LIBS(clock_gettime, rt) in configure.ac Line 1419), not maintainable source. Committing it adds unrelated, host-specific metadata (CONFIGURE_ARGS, version/header defines) and will drift immediately.
Suggested change
```diff
-<remove this file from the commit entirely>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@conftest.c` around lines 1 - 54, This file conftest.c is an autogenerated
Autoconf probe (result of AC_SEARCH_LIBS(clock_gettime, rt) in configure.ac) and
must be removed from the PR: delete the committed conftest.c, revert it from the
branch (or remove the file in a follow-up commit), and ensure you do not add
generated configure probes to the repository in future (add a rule to ignore
generated probe files or adjust your commit to exclude them); reference
AC_SEARCH_LIBS(clock_gettime, rt) when locating the source of the probe so you
avoid committing similar artifacts again.
Force-pushed from `eed592e` to `f3cf9b3`
Implement an Oracle-compatible ALL_TAB_COLUMNS system view
Summary by CodeRabbit
Release Notes
New Features
- Added an Oracle-compatible `ALL_TAB_COLUMNS` system view for querying table column definitions and properties

Removed Features