Releases: NpgsqlRest/NpgsqlRest
NpgsqlRest v3.11.1
TsClient: proxy Passthrough Endpoint Support
The TypeScript client generator (NpgsqlRest.TsClient) now recognizes proxy passthrough endpoints and generates functions that return the raw Response object, matching the existing proxy_out behavior. Previously, passthrough proxy endpoints (which typically use returns void) would generate Promise<void>, which was incorrect since the actual response comes from the upstream service.
Now, both proxy and proxy_out endpoints generate Promise<Response>:
// Generated for a proxy passthrough endpoint
export async function tsclientTestProxyPassthrough() : Promise<Response> {
const response = await fetch(baseUrl + "/api/tsclient-test/proxy-passthrough", {
method: "GET",
});
return response;
}

This allows callers to handle the upstream response appropriately (.json(), .blob(), .text(), etc.), just like proxy_out endpoints.
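For illustration, a caller can branch on the upstream Content-Type before deciding how to consume the raw Response. The helper below is a hypothetical sketch and is not part of the generated client:

```typescript
// Hypothetical helper: choose how to consume the raw Response returned by a
// generated passthrough function, based on the upstream Content-Type header.
type BodyReader = "json" | "blob" | "text";

function pickBodyReader(contentType: string | null): BodyReader {
  if (contentType === null) return "blob";
  if (contentType.includes("application/json")) return "json";
  if (contentType.startsWith("text/")) return "text";
  return "blob"; // binary payloads (PDF, images, ...) are safest as a Blob
}
```

A caller would then do something like: `const response = await tsclientTestProxyPassthrough(); const reader = pickBodyReader(response.headers.get("content-type"));` and dispatch to `response.json()`, `response.text()`, or `response.blob()` accordingly.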
authorize Annotation Now Matches User ID and User Name Claims
The authorize comment annotation previously only matched against role claims (DefaultRoleClaimType). It now also matches against user ID (DefaultUserIdClaimType) and user name (DefaultNameClaimType) claims, aligning with the behavior that sse_scope authorize already had.
This means you can now restrict endpoint access to specific users, not just roles:
-- Authorize by role (existing behavior)
comment on function get_reports() is 'authorize admin';
-- Authorize by user name (new)
comment on function get_my_profile() is 'authorize john';
-- Authorize by user ID (new)
comment on function get_account() is 'authorize user123';
-- Mix of roles and user identifiers (new)
comment on function get_data() is 'authorize admin, user123, jane';

The SSE matching scope was also aligned to check all three claim types, making authorization behavior consistent across all features.
NpgsqlRest v3.11.0
New Feature: proxy_out Annotation (Post-Execution Proxy)
A new proxy mode that reverses the existing proxy flow: execute the PostgreSQL function first, then forward the function's result body to an upstream service. The upstream response is returned to the client.
This enables a common pattern where business logic in PostgreSQL prepares a payload, and an external service performs processing that PostgreSQL cannot do — PDF rendering, image processing, ML inference, email sending, etc.
Syntax
@proxy_out [ METHOD ] [ host_url ]
Also aliased as forward_proxy (with or without @ prefix).
How It Works
Client Request → NpgsqlRest
→ Execute PostgreSQL function
→ Forward function result as request body to upstream service
→ Forward original query string to upstream URL
→ Return upstream response to client
Unlike the existing proxy annotation (which forwards the incoming request to upstream), proxy_out forwards the outgoing function result. The original request query string is forwarded to the upstream URL as-is. The client-facing HTTP method and the upstream HTTP method are independent — the client can send a GET while the upstream receives a POST.
Basic Usage
create function generate_report(report_id int)
returns json
language plpgsql as $$
begin
return json_build_object(
'title', 'Monthly Report',
'data', (select json_agg(row_to_json(t)) from sales t where month = report_id)
);
end;
$$;
comment on function generate_report(int) is 'HTTP GET
@proxy_out POST https://render-service.internal/render';

The client calls GET /api/generate-report/?reportId=3. The server:
- Executes generate_report(3) in PostgreSQL.
- Takes the returned JSON and POSTs it to https://render-service.internal/render/api/generate-report/?reportId=3 (original query string forwarded).
- Returns the upstream response (e.g., a rendered PDF) directly to the client with the upstream's content-type and status code.
Query String Forwarding
The original client query string is forwarded to the upstream service as-is. This allows the upstream to receive the same parameters that were used to invoke the function:
create function generate_report(p_format text, p_id int)
returns json
language plpgsql as $$
begin
return json_build_object('id', p_id, 'data', 'report');
end;
$$;
comment on function generate_report(text, int) is 'HTTP GET
@proxy_out POST';

Calling GET /api/generate-report/?pFormat=pdf&pId=123 executes the function, then POSTs the result to the upstream with ?pFormat=pdf&pId=123 appended to the URL.
HTTP Method Override
Specify which HTTP method to use for the upstream request:
comment on function my_func() is 'HTTP GET
@proxy_out PUT';

The client sends GET, but the upstream receives PUT with the function's result as the body.
Custom Host
Override the default ProxyOptions.Host per-endpoint:
comment on function my_func() is 'HTTP GET
@proxy_out POST https://my-other-service.internal';

Error Handling
- If the function fails (database error, exception), the error is returned directly to the client — the proxy call is never made.
- If the upstream fails (5xx, timeout, connection error), the upstream's error status and body are forwarded to the client (502 for connection errors, 504 for timeouts).
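The status mapping above can be sketched from the client's perspective. The UpstreamFailure type and clientStatus helper below are illustrative names, not part of NpgsqlRest:

```typescript
// Sketch of the documented error mapping: connection errors surface as 502,
// timeouts as 504, and upstream HTTP errors are forwarded unchanged.
type UpstreamFailure =
  | { kind: "connection" }            // upstream unreachable
  | { kind: "timeout" }               // upstream did not answer in time
  | { kind: "http"; status: number }; // upstream answered with an error status

function clientStatus(failure: UpstreamFailure): number {
  switch (failure.kind) {
    case "connection": return 502;            // Bad Gateway
    case "timeout":    return 504;            // Gateway Timeout
    case "http":       return failure.status; // forwarded as-is
  }
}
```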
Configuration
Uses the same ProxyOptions configuration as the existing proxy annotation. ProxyOptions.Enabled must be true:
{
"NpgsqlRest": {
"ProxyOptions": {
"Enabled": true,
"Host": "https://api.example.com",
"DefaultTimeout": "30 seconds"
}
}
}

Performance
- Zero overhead for non-proxy_out endpoints. The implementation adds only branch-not-taken boolean/null checks on the normal execution path (~4 nanoseconds).
- Efficient byte forwarding. Function output is captured as raw bytes and forwarded directly via ByteArrayContent — no intermediate string allocation or double UTF-8 encoding.
TsClient: proxy_out Endpoint Support
The TypeScript client generator (NpgsqlRest.TsClient) now recognizes proxy_out endpoints and generates functions that return the raw Response object instead of a typed return value. Since the actual response comes from the upstream proxy service (not from the PostgreSQL function's return type), the generated function returns Promise<Response>, allowing the caller to handle the response appropriately (.json(), .blob(), .text(), etc.):
// Generated for a proxy_out endpoint
export async function generateReport() : Promise<Response> {
const response = await fetch(baseUrl + "/api/generate-report", {
method: "GET",
});
return response;
}

NpgsqlRest v3.10.0
New Feature: Resolved Parameter Expressions
When using HTTP Client Types, sensitive values like API tokens or secrets are often needed in outgoing HTTP requests (e.g., in an Authorization header). Previously, these values had to be supplied as regular HTTP parameters — exposing them to the client and requiring an insecure round-trip: database → client → server → external API.
Resolved parameter expressions solve this by allowing function parameters to be resolved server-side via SQL expressions defined in comment annotations. The resolved values are used in HTTP Client Type placeholder substitution (headers, URL, body) and are also passed to the PostgreSQL function — but they never appear in or originate from the client HTTP request.
How It Works
If a comment annotation uses the existing key = value syntax and the key matches an actual function parameter name, the value is treated as a SQL expression to execute at runtime:
create type my_api_response as (body json, status_code int);
comment on type my_api_response is 'GET https://api.example.com/data
Authorization: Bearer {_token}';
create function get_secure_data(
_user_id int,
_req my_api_response,
_token text default null
)
returns table (body json, status_code int)
language plpgsql as $$
begin
return query select (_req).body, (_req).status_code;
end;
$$;
comment on function get_secure_data(int, my_api_response, text) is '
_token = select api_token from user_tokens where user_id = {_user_id}
';

The client calls GET /api/get-secure-data/?user_id=42. The server:
- Fills _user_id from the query string (value 42).
- Executes the resolved expression: select api_token from user_tokens where user_id = $1 (parameterized, with $1 = 42).
- Sets _token to the result (e.g., "secret-abc").
- Substitutes {_token} in the outgoing HTTP request header: Authorization: Bearer secret-abc.
- Makes the HTTP call and returns the response.
The token never leaves the server. The client never sees it.
Behavior
- Server-side only: Resolved parameters cannot be overridden by client input. Even if the client sends &token=hacked, the DB-resolved value is used.
- NULL handling: If the SQL expression returns no rows or NULL, the parameter is set to DBNull.Value (empty string in placeholder substitution).
- Name-based placeholders, parameterized execution: Placeholders like {_user_id} reference other function parameters by name — the value is always looked up by name, regardless of position. Internally, placeholders are converted to positional $N parameters for safe execution (preventing SQL injection).
- Sequential execution: When multiple parameters are resolved, expressions execute one-by-one on the same connection, in annotation order.
- Works with user_params: Resolved expressions can reference parameters auto-filled from JWT claims via user_params, enabling fully zero-parameter authenticated calls.
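The name-to-positional conversion can be sketched as follows. This is an illustrative TypeScript model, not the actual C# implementation, and toPositional is a hypothetical name:

```typescript
// Rewrite name-based {placeholders} in a resolved expression into positional
// $N parameters, so the expression runs parameterized (no string
// interpolation, no SQL injection).
function toPositional(
  expression: string,
  values: Record<string, unknown>,
): { sql: string; parameters: unknown[] } {
  const parameters: unknown[] = [];
  const sql = expression.replace(/\{(\w+)\}/g, (_, name: string) => {
    parameters.push(values[name]);   // looked up by name, regardless of position
    return `$${parameters.length}`;  // emit $1, $2, ... in order of appearance
  });
  return { sql, parameters };
}
```

For example, `toPositional("select api_token from user_tokens where user_id = {_user_id}", { _user_id: 42 })` yields the SQL with `$1` and the parameter list `[42]`.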
Multiple Resolved Parameters
Multiple parameters can each have their own resolved expression:
comment on function my_func(text, my_type, text, text) is '
_token = select api_token from tokens where user_name = {_name}
_api_key = select ''static-key-'' || api_token from tokens where user_name = {_name}
';

Resolved Parameters in URL, Headers, and Body
Resolved values participate in all HTTP Client Type placeholder locations — URL path segments, headers, and request body templates:
-- URL: GET https://api.example.com/resource/{_secret_path}
-- Header: Authorization: Bearer {_token}
-- Body: {"token": "{_token}", "data": "{_payload}"}

New Feature: HTTP Client Type Retry Logic
When using HTTP Client Types, outgoing HTTP requests to external APIs can fail transiently — rate limiting (429), temporary server errors (503), network timeouts. Previously, a single failure was passed directly to the PostgreSQL function with no opportunity to retry.
The new @retry_delay directive adds configurable automatic retries with delays, defined in the HTTP type comment alongside existing directives like timeout.
Syntax
-- Retry on any failure (non-2xx status, timeout, or network error):
comment on type my_api_type is '@retry_delay 1s, 2s, 5s
GET https://api.example.com/data';
-- Retry only on specific HTTP status codes:
comment on type my_api_type is '@retry_delay 1s, 2s, 5s on 429, 503
GET https://api.example.com/data';
-- Combined with timeout:
comment on type my_api_type is 'timeout 10s
@retry_delay 1s, 2s, 5s on 429, 503
GET https://api.example.com/data';

The delay list defines both the number of retries and the delay before each retry. 1s, 2s, 5s means 3 retries with 1-second, 2-second, and 5-second delays respectively. Delay values use the same format as timeout — 100ms, 1s, 5m, 30, 00:00:01, etc.
Behavior
- Without on filter: Retries on any non-success HTTP response, timeout, or network error.
- With on filter: Retries only when the HTTP response status code matches one of the listed codes (e.g., 429, 503). Timeouts and network errors always trigger retry regardless of the filter, since they have no status code.
- Retry exhaustion: If all retries fail, the last error (status code, error message) is passed to the PostgreSQL function — the same as if retries were not configured.
- Unexpected exceptions: Non-HTTP errors (e.g., invalid URL) are never retried.
- Parallel execution: Each HTTP type in a function retries independently within its own parallel task. No changes to the parallel execution model.
- No external dependencies: Built-in retry loop, no Polly or other libraries required. Matches the existing PostgreSQL command retry pattern.
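The directive semantics can be modeled with a small parser. This is an illustrative sketch, not the actual implementation: it handles only the ms/s/m suffix forms, assumes a bare number means seconds, and omits the 00:00:01 timespan form:

```typescript
// Parse a @retry_delay directive body, e.g. "1s, 2s, 5s on 429, 503".
// The delay list gives both the retry count and the per-retry delays; the
// optional "on" clause lists the status codes that trigger a retry.
interface RetrySpec {
  delaysMs: number[];    // one entry per retry
  statusCodes: number[]; // empty = retry on any failure
}

function parseRetryDelay(directive: string): RetrySpec {
  const [delayPart, statusPart] = directive.split(/\s+on\s+/);
  const delaysMs = delayPart.split(",").map((token) => {
    const match = token.trim().match(/^(\d+)(ms|s|m)?$/);
    if (!match) throw new Error(`unsupported delay: ${token}`);
    const value = Number(match[1]);
    const unit = match[2] ?? "s"; // assumption: bare number = seconds
    return unit === "ms" ? value : unit === "s" ? value * 1000 : value * 60_000;
  });
  const statusCodes = statusPart
    ? statusPart.split(",").map((s) => Number(s.trim()))
    : [];
  return { delaysMs, statusCodes };
}
```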
Example
create type rate_limited_api as (body json, status_code int, error_message text);
comment on type rate_limited_api is '@retry_delay 1s, 2s, 5s on 429, 503
GET https://api.example.com/data
Authorization: Bearer {_token}';
create function get_rate_limited_data(
_token text,
_req rate_limited_api
)
returns table (body json, status_code int, error_message text)
language plpgsql as $$
begin
return query select (_req).body, (_req).status_code, (_req).error_message;
end;
$$;

If the external API returns 429 (rate limited), the request is automatically retried after 1s, then 2s, then 5s. If it returns 400 (bad request), no retry occurs and the error is returned immediately.
New Feature: Data Protection Encrypt/Decrypt Annotations
Two new comment annotations — encrypt and decrypt — enable transparent application-level column encryption using ASP.NET Data Protection. Parameter values are encrypted before being sent to PostgreSQL, and result column values are decrypted before being returned to the API client. The database stores ciphertext; the API consumer sees plaintext. No pgcrypto or client-side encryption required.
This is useful for storing PII (SSN, medical records, credit card numbers) or other sensitive data that must be encrypted at rest but is only ever looked up by an unencrypted key (e.g., user_id, patient_id).
Prerequisite: The DataProtection section must be enabled in appsettings.json (it is by default). The DefaultDataProtector is automatically created from Data Protection configuration and passed to the NpgsqlRest authentication options.
Encrypt Parameters
Mark specific parameters to encrypt before they are sent to PostgreSQL:
create function store_patient_ssn(_patient_id int, _ssn text)
returns void
language plpgsql as $$
begin
insert into patients (id, ssn) values (_patient_id, _ssn)
on conflict (id) do update set ssn = excluded.ssn;
end;
$$;
comment on function store_patient_ssn(int, text) is '
HTTP POST
encrypt _ssn
';

The client calls POST /api/store-patient-ssn/ with {"patientId": 1, "ssn": "123-45-6789"}. The server encrypts _ssn using Data Protection before executing the SQL — the database stores ciphertext like CfDJ8N..., never the plaintext SSN.
Use encrypt without arguments to encrypt all text parameters:
comment on function store_all_secrets(text, text) is '
HTTP POST
encrypt
';

Decrypt Result Columns
Mark specific result columns to decrypt before returning to the client:
create function get_patient(_patient_id int)
returns table(id int, ssn text, name text)
language plpgsql as $$
begin
return query select p.id, p.ssn, p.name from patients p where p.id = _patient_id;
end;
$$;
comment on function get_patient(int) is '
decrypt ssn
';

The client calls GET /api/get-patient/?patientId=1. The ssn column is decrypted from ciphertext back to "123-45-6789" before being included in the JSON response. The id and name columns are returned as-is.
Use decrypt without arguments to decrypt all result columns:
comment on function get_all_secrets(text) is '
decrypt
';

Decrypt also works on scalar (single-value) return types:
create function get_secret(_id int) returns text ...
comment on function get_secret(int) is 'decrypt';

Full Roundtrip Example
-- Store with encryption
create function store_secret(_key text, _value text) returns void ...
comment on function store_secret(text, text) is '
HTTP POST
encrypt _value
';
-- Retrieve with decryption
create function get_secret(_key text) returns table(key text, value text) ...
comment on function get_secret(text) is '
decrypt value
';

POST /api/store-secret/ {"key": "api-key", "value": "sk-abc123"}
GET /api/get-secret/?key=api-key → {"key": "api-key", "value": "sk-abc123"}
NpgsqlRest v3.9.0
Commented Configuration Output (--config)
The --config output now includes inline JSONC comments with descriptions for every setting, matching the appsettings.json file exactly. This makes it easy to understand what each setting does without consulting the documentation. The default configuration file can be constructed with:
npgsqlrest --config > appsettings.json
Configuration Search and Filter (--config [filter])
Added an optional filter argument to --config that searches keys, comments, and values (case-insensitive) and returns only matching settings as valid JSONC:
npgsqlrest --config cors
npgsqlrest --config=timeout
npgsqlrest --config minworker
Output preserves the full section hierarchy so it can be copy-pasted directly into appsettings.json. When a key inside a section matches, the parent section wrapper is included. When a section name or its comment matches, the entire section is shown. Matched terms are highlighted with inverted colors in the terminal; piped output is plain text.
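The hierarchy-preserving behavior can be sketched as follows. This hypothetical filterConfig matches key names only; the real command also searches comments and values:

```typescript
// Keep a key when its name matches the filter, and keep parent section
// wrappers so the output can be pasted back into appsettings.json.
type Config = { [key: string]: unknown };

function filterConfig(config: Config, filter: string): Config {
  const needle = filter.toLowerCase();
  const result: Config = {};
  for (const [key, value] of Object.entries(config)) {
    if (key.toLowerCase().includes(needle)) {
      result[key] = value; // key or section name matches: keep it whole
    } else if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      const nested = filterConfig(value as Config, filter);
      if (Object.keys(nested).length > 0) result[key] = nested; // keep wrapper
    }
  }
  return result;
}
```

For example, filtering `{ NpgsqlRest: { Cors: { Enabled: true } } }` by "cors" keeps the NpgsqlRest wrapper around the matching Cors section.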
CLI Improvements
- Case-insensitive config overrides: Command-line config overrides like --Applicationname=test now correctly update the existing ApplicationName key instead of creating a duplicate entry with different casing.
- Config validation on --config: The --config command now validates configuration keys before dumping. Unknown keys (e.g., --xxx=test) produce an error on stderr and exit with code 1.
- Redirected output fix: Formatted CLI output (--help, --version) no longer crashes when stdout is redirected (e.g., piped or captured by a parent process).
- CLI test suite: Added process-based tests for all CLI commands (--help, --version, --hash, --basic_auth, --config-schema, --annotations, --config, --config [filter], invalid args).
NpgsqlRest v3.8.0
New Feature: Configuration Key Validation
Added startup validation that checks all configuration keys in appsettings.json against the known defaults schema. This catches typos and unknown keys that would otherwise be silently ignored (e.g., LogCommand instead of LogCommands).
Controlled by the new Config:ValidateConfigKeys setting with three modes:
- "Warning" (default) — logs warnings for unknown keys, startup continues.
- "Error" — logs errors for unknown keys and exits the application.
- "Ignore" — no validation.
"Config": {
"ValidateConfigKeys": "Warning"
}

Example output:
[12:34:56 WRN] Unknown configuration key: NpgsqlRest:KebabCaselUrls
Removed
- Removed the Config:ExposeAsEndpoint option. Use the --config CLI switch to inspect configuration instead.
Kestrel Configuration Validation
Configuration key validation also covers the Kestrel section, checking against the known Kestrel schema including Limits, Http2, Http3, and top-level flags like DisableStringReuse and AllowSynchronousIO. User-defined endpoint and certificate names under Endpoints and Certificates remain open-ended and won't trigger warnings.
Syntax Highlighted --config Output
The --config CLI switch now outputs JSON with syntax highlighting (keys, strings, numbers/booleans, and structural characters in distinct colors). When output is redirected to a file, plain JSON is emitted without color codes. The --config switch can now appear anywhere in the argument list and be combined with config files and --key=value overrides.
Improved CLI Error Handling
Unknown command-line parameters now display a clear error message with a --help hint instead of an unhandled exception stack trace.
Universal fallback_handler for All Upload Handlers
The fallback_handler parameter, previously Excel-only, is now available on all upload handlers via BaseUploadHandler. When a handler's format validation fails and a fallback_handler is configured, processing is automatically delegated to the named fallback handler.
This enables scenarios like: CSV format check fails on a binary file → fall back to large_object or file_system to save the raw file for analysis.
comment on function my_csv_upload(json) is '
@upload for csv
@check_format = true
@fallback_handler = large_object
@row_command = select process_row($1,$2)
';Optional Path Parameters
Path parameters now support the ASP.NET Core optional parameter syntax {param?}. When a path parameter is marked as optional and the corresponding PostgreSQL function parameter has a default value, omitting the URL segment will use the PostgreSQL default:
create function get_item(p_id int default 42) returns text ...
comment on function get_item(int) is '
HTTP GET /items/{p_id?}
';

GET /items/5 → uses the provided value 5
GET /items/ → uses the PostgreSQL default 42
This also works with query_string_null_handling null_literal to pass NULL via the literal string "null" in the path for any parameter type:
create function get_item(p_id int default null) returns text ...
comment on function get_item(int) is '
HTTP GET /items/{p_id}
query_string_null_handling null_literal
';

GET /items/null → passes SQL NULL to the function
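From the client side, both behaviors can be sketched with a hypothetical URL builder (itemUrl is not generated code):

```typescript
// Build the /items/ URL: omit the segment to use the PostgreSQL default,
// or (with query_string_null_handling null_literal) pass the literal "null"
// segment to send SQL NULL.
function itemUrl(baseUrl: string, id?: number | null): string {
  if (id === undefined) return `${baseUrl}/items/`;    // PostgreSQL default applies
  if (id === null) return `${baseUrl}/items/null`;     // null_literal → SQL NULL
  return `${baseUrl}/items/${id}`;
}
```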
Fixes
- Fixed query string overload resolution not accounting for path parameters. GET endpoints with path parameters and overloaded functions (same name, different signatures) would resolve to the wrong function. The body JSON overload resolution already handled this correctly.
- Added missing QueryStringNullHandling and TextResponseNullHandling entries to ConfigDefaults, which caused them to be absent from --config output.
- Added missing Pattern, MinLength, and MaxLength properties to default validation rule schemas in ConfigDefaults.
Machine-Readable CLI Commands for Tool Integration
Added new CLI commands designed for programmatic consumption by tools like pgdev. All JSON-outputting commands use syntax highlighting when run in a terminal and emit plain JSON when piped or redirected.
--version --json
Outputs version information as structured JSON including all assembly versions, runtime, platform RID, and directories:
npgsqlrest --version --json
--validate [--json]
Pre-flight check that validates configuration keys against known defaults and tests the database connection, then exits with code 0 (success) or 1 (failure):
npgsqlrest --validate
npgsqlrest --validate --json
--config-schema
Outputs a JSON Schema (draft-07) describing the full appsettings.json configuration structure — types, defaults, and enum constraints. Can be used for IDE autocomplete via the $schema property or as the foundation for config editing UIs:
npgsqlrest --config-schema
--annotations
Outputs all 44 supported SQL comment annotations as a JSON array with name, aliases, syntax, and description for each:
npgsqlrest --annotations
--endpoints
Connects to the database, discovers all generated REST endpoints, outputs full metadata (method, path, routine info, parameters, return columns, authorization, custom parameters), then exits. Logging is suppressed to keep output clean:
npgsqlrest --endpoints
--config (updated)
The --config --json flag has been removed. The --config command now always uses automatic detection: syntax highlighted in terminal, plain JSON when output is piped or redirected.
Stats Endpoints: format Query String Override
Stats endpoints now accept an optional format query string parameter that overrides the configured Stats:OutputFormat setting per-request. Valid values are html and json.
GET /api/stats/routines?format=json
GET /api/stats/tables?format=html
NpgsqlRest v3.7.0
Fixes
- Fixed comma separator bug in Excel Upload Handler error response when processing multiple files. The fileId counter was not incremented on error, causing malformed JSON output when an invalid file was followed by additional files.
- Fixed CustomHost configuration in ClientCodeGen not accepting an empty string value. Setting "CustomHost": "" was treated the same as null (triggering host auto-detection) because GetConfigStr uses string.IsNullOrEmpty. Now an explicit empty string correctly produces const baseUrl = ""; in generated TypeScript, which is useful for relative URL paths.
New Features
- Added fallback_handler parameter to the Excel Upload Handler. When set (e.g., fallback_handler = csv), if ExcelDataReader fails to parse an uploaded file (invalid Excel format), the handler automatically delegates processing to the named fallback handler. This allows a single upload endpoint to accept both Excel and CSV files transparently:
comment on function my_upload(json) is '
@upload for excel
@fallback_handler = csv
@row_command = select process_row($1,$2)
';

New Feature: Pluggable Table Format Renderers
Added a pluggable table format rendering system that allows PostgreSQL function results to be rendered as HTML tables or Excel spreadsheet downloads instead of JSON, controlled by the @table_format annotation.
HTML Table Format
Renders results as a styled HTML table suitable for browser viewing and copy-paste into Excel:
comment on function get_report() is '
HTTP GET
@table_format = html
';

Configuration options in TableFormatOptions: HtmlEnabled, HtmlKey, HtmlHeader, HtmlFooter.
Excel Table Format
Renders results as an .xlsx Excel spreadsheet download using the SpreadCheetah library (streaming, AOT-compatible):
comment on function get_report() is '
HTTP GET
@table_format = excel
';

Configuration options in TableFormatOptions: ExcelEnabled, ExcelKey, ExcelSheetName, ExcelDateTimeFormat, ExcelNumericFormat.
- ExcelDateTimeFormat — Excel Format Code for DateTime cells (default: yyyy-MM-dd HH:mm:ss). Examples: yyyy-mm-dd, dd/mm/yyyy hh:mm.
- ExcelNumericFormat — Excel Format Code for numeric cells (default: General). Examples: #,##0.00, 0.00.
Per-Endpoint Custom Parameters
The download filename and worksheet name can be overridden per-endpoint via custom parameter annotations:
comment on function get_report() is '
HTTP GET
@table_format = excel
@excel_file_name = monthly_report.xlsx
@excel_sheet = Report Data
';

These also support dynamic placeholders resolved from function parameters:
comment on function get_report(_format text, _file_name text, _sheet_name text) is '
HTTP GET
@table_format = {_format}
@excel_file_name = {_file_name}
@excel_sheet = {_sheet_name}
';

TsClient: Per-Endpoint URL Export Control
Added two new custom parameter annotations to control TypeScript client code generation per-endpoint:
tsclient_export_url
Overrides the global ExportUrls configuration setting for a specific endpoint:
comment on function login(_username text, _password text) is '
HTTP POST
@login
@tsclient_export_url = true
';

When enabled, the generated TypeScript exports a URL constant for that endpoint:
export const loginUrl = () => baseUrl + "/api/login";

tsclient_url_only
When set, only the URL constant is exported — the fetch function and response type interface are skipped entirely. Implies tsclient_export_url = true:
comment on function get_data(_format text) is '
HTTP GET
@table_format = {_format}
@tsclient_url_only = true
';

This generates only the URL constant and request interface, which is useful for endpoints consumed via browser navigation (e.g., table format downloads) rather than fetch calls.
NpgsqlRest v3.6.3
Fixes
- Fixed ParseEnvironmentVariables feature not working for Kestrel configuration values. Previously, environment variable placeholders (e.g., {MY_HOST}) in Kestrel settings like Endpoints URLs, Certificate paths/passwords, and Limits were not being replaced because Kestrel uses ASP.NET Core's direct binding, which bypassed the custom placeholder processing. Now all Kestrel configuration values properly support environment variable replacement when ParseEnvironmentVariables is enabled.
NpgsqlRest v3.6.2
Fixes
- Fixed NestedJsonForCompositeTypes option from RoutineOptions not being applied to endpoints. Previously, only the nested comment annotation could enable nested JSON serialization for composite types. Now the global configuration option is properly applied as the default for all endpoints.
- Fixed TypeScript client (NpgsqlRest.TsClient) generating incorrect types for composite columns when NestedJsonForCompositeTypes is false (the default). The client now correctly generates flat field types matching the actual JSON response structure, instead of always generating nested interfaces.
Breaking Changes
- Added NestedJsonForCompositeTypes property to the IRoutineSource interface. Custom implementations of IRoutineSource will need to add this property.
NpgsqlRest v3.6.1
Fixes
- Fixed RequireAuthorization on Stats and Health endpoints to use a manual authorization check consistent with NpgsqlRest endpoints.
- Fixed ActivityQuery in Stats endpoints.
- Fixed OutputFormat default value in Stats endpoints.
NpgsqlRest v3.6.0
New Feature: Security Headers Middleware
Added configurable security headers middleware to protect against common web vulnerabilities. The middleware adds HTTP security headers to all responses:
- X-Content-Type-Options - Prevents MIME-sniffing attacks (default: nosniff)
- X-Frame-Options - Prevents clickjacking attacks (default: DENY, skipped if Antiforgery is enabled)
- Referrer-Policy - Controls referrer information (default: strict-origin-when-cross-origin)
- Content-Security-Policy - Defines approved content sources (configurable)
- Permissions-Policy - Controls browser feature access (configurable)
- Cross-Origin-Opener-Policy - Controls document sharing with popups
- Cross-Origin-Embedder-Policy - Controls cross-origin resource loading
- Cross-Origin-Resource-Policy - Controls resource sharing cross-origin
Configuration:
//
// Security Headers: Adds HTTP security headers to all responses to protect against common web vulnerabilities.
// These headers instruct browsers how to handle your content securely.
// Note: X-Frame-Options is automatically handled by the Antiforgery middleware when enabled (see Antiforgery.SuppressXFrameOptionsHeader).
// Reference: https://owasp.org/www-project-secure-headers/
//
"SecurityHeaders": {
//
// Enable security headers middleware. When enabled, configured headers are added to all HTTP responses.
//
"Enabled": false,
//
// X-Content-Type-Options: Prevents browsers from MIME-sniffing a response away from the declared content-type.
// Recommended value: "nosniff"
// Set to null to not include this header.
//
"XContentTypeOptions": "nosniff",
//
// X-Frame-Options: Controls whether the browser should allow the page to be rendered in a <frame>, <iframe>, <embed> or <object>.
// Values: "DENY" (never allow), "SAMEORIGIN" (allow from same origin only)
// Note: This header is SKIPPED if Antiforgery is enabled (Antiforgery already sets X-Frame-Options: SAMEORIGIN by default).
// Set to null to not include this header.
//
"XFrameOptions": "DENY",
//
// Referrer-Policy: Controls how much referrer information should be included with requests.
// Values: "no-referrer", "no-referrer-when-downgrade", "origin", "origin-when-cross-origin",
// "same-origin", "strict-origin", "strict-origin-when-cross-origin", "unsafe-url"
// Recommended: "strict-origin-when-cross-origin" (send origin for cross-origin requests, full URL for same-origin)
// Set to null to not include this header.
//
"ReferrerPolicy": "strict-origin-when-cross-origin",
//
// Content-Security-Policy: Defines approved sources of content that the browser may load.
// Helps prevent XSS, clickjacking, and other code injection attacks.
// Example: "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'"
// Reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
// Set to null to not include this header (recommended to configure based on your application needs).
//
"ContentSecurityPolicy": null,
//
// Permissions-Policy: Controls which browser features and APIs can be used.
// Example: "geolocation=(), microphone=(), camera=()" disables these features entirely.
// Example: "geolocation=(self), microphone=()" allows geolocation only from same origin.
// Reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Permissions-Policy
// Set to null to not include this header.
//
"PermissionsPolicy": null,
//
// Cross-Origin-Opener-Policy: Controls how your document is shared with cross-origin popups.
// Values: "unsafe-none", "same-origin-allow-popups", "same-origin"
// Set to null to not include this header.
//
"CrossOriginOpenerPolicy": null,
//
// Cross-Origin-Embedder-Policy: Prevents a document from loading cross-origin resources that don't explicitly grant permission.
// Values: "unsafe-none", "require-corp", "credentialless"
// Required for SharedArrayBuffer and high-resolution timers (along with COOP: same-origin).
// Set to null to not include this header.
//
"CrossOriginEmbedderPolicy": null,
//
// Cross-Origin-Resource-Policy: Indicates how the resource should be shared cross-origin.
// Values: "same-site", "same-origin", "cross-origin"
// Set to null to not include this header.
//
"CrossOriginResourcePolicy": null
}
New Feature: Forwarded Headers Middleware
Added support for processing proxy headers when running behind a reverse proxy (nginx, Apache, Azure App Service, AWS ALB, Cloudflare, etc.). This is critical for getting the correct client IP address and protocol.
- X-Forwarded-For - Gets real client IP instead of proxy IP
- X-Forwarded-Proto - Gets original protocol (http/https)
- X-Forwarded-Host - Gets original host header
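To illustrate the forwarding semantics, here is a minimal Python sketch of how a `ForwardLimit` resolves the client IP from `X-Forwarded-For`. This is only an illustration of the behavior, not the actual middleware (which is ASP.NET Core's `ForwardedHeadersMiddleware` and additionally validates each hop against `KnownProxies`/`KnownNetworks`); the function name is hypothetical.

```python
def resolve_client_ip(remote_addr, x_forwarded_for, forward_limit=1):
    """Sketch of X-Forwarded-For processing with a forward limit.

    Entries are ordered client-first, proxy-last; the middleware walks
    them right to left, consuming at most `forward_limit` entries.
    Known-proxy validation is omitted here for brevity.
    """
    if not x_forwarded_for:
        return remote_addr  # no proxy headers: use the socket peer address
    entries = [e.strip() for e in x_forwarded_for.split(",")]
    # Take at most forward_limit entries from the right (nearest proxies first).
    consumed = entries[-forward_limit:]
    # The leftmost consumed entry becomes the effective client address.
    return consumed[0]

# With the default ForwardLimit of 1, only the entry appended by the
# immediate proxy is trusted:
print(resolve_client_ip("10.0.0.1", "203.0.113.7, 198.51.100.2"))
# 198.51.100.2
print(resolve_client_ip("10.0.0.1", "203.0.113.7, 198.51.100.2", forward_limit=2))
# 203.0.113.7
```

This is why the default of 1 is the safer setting: a malicious client can prepend arbitrary addresses to the header, but only the entry written by your own trusted proxy is consumed.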
Configuration:
//
// Forwarded Headers: Enables the application to read proxy headers (X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Host).
// CRITICAL: Required when running behind a reverse proxy (nginx, Apache, Azure App Service, AWS ALB, Cloudflare, etc.)
// Without this, the application sees the proxy's IP instead of the client's real IP, and HTTP instead of HTTPS.
// Security Warning: Only enable if you're behind a trusted proxy. Malicious clients can spoof these headers.
// Reference: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer
//
"ForwardedHeaders": {
//
// Enable forwarded headers middleware. Must be placed FIRST in the middleware pipeline.
//
"Enabled": false,
//
// Limits the number of proxy entries that will be processed from X-Forwarded-For.
// Default is 1 (trust only the immediate proxy). Increase if you have multiple proxies in a chain.
// Set to null to process all entries (not recommended for security).
//
"ForwardLimit": 1,
//
// List of IP addresses of known proxies to accept forwarded headers from.
// Example: ["10.0.0.1", "192.168.1.1"]
// If empty and KnownNetworks is also empty, forwarded headers are accepted from any source (less secure).
//
"KnownProxies": [],
//
// List of CIDR network ranges of known proxies.
// Example: ["10.0.0.0/8", "192.168.0.0/16", "172.16.0.0/12"] for private networks
// Useful when proxy IPs are dynamically assigned within a known range.
//
"KnownNetworks": [],
//
// List of allowed values for the X-Forwarded-Host header.
// Example: ["example.com", "www.example.com"]
// If empty, any host is allowed (less secure). Helps prevent host header injection attacks.
//
"AllowedHosts": []
}
New Feature: Health Check Endpoints
Added health check endpoints for container orchestration (Kubernetes, Docker Swarm) and monitoring systems:
- /health - Overall health status (combines all checks)
- /health/ready - Readiness probe with optional PostgreSQL connectivity check
- /health/live - Liveness probe (always returns healthy if app is running)
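The combining behavior of the main endpoint can be sketched in a few lines: the overall status is the worst individual check status, and Healthy/Degraded map to HTTP 200 while Unhealthy maps to HTTP 503. This is an illustrative Python sketch of that aggregation, not NpgsqlRest code; the function names are hypothetical.

```python
# Severity ordering used to pick the "worst" status among all checks.
SEVERITY = {"Healthy": 0, "Degraded": 1, "Unhealthy": 2}

def overall_status(checks):
    """Combine named check results (e.g. {"postgresql": "Degraded"})
    into the overall status reported by /health."""
    if not checks:
        return "Healthy"
    return max(checks.values(), key=SEVERITY.__getitem__)

def status_code(status):
    """Healthy and Degraded return HTTP 200; Unhealthy returns 503."""
    return 503 if status == "Unhealthy" else 200

checks = {"postgresql": "Degraded", "self": "Healthy"}
print(overall_status(checks), status_code(overall_status(checks)))
# Degraded 200
```

A Degraded database check therefore still answers 200, so load balancers keep routing traffic, while monitoring systems can inspect the body to see that something needs attention.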
Configuration:
//
// Health Checks: Provides endpoints for monitoring application health, used by container orchestrators (Kubernetes, Docker Swarm),
// load balancers, and monitoring systems to determine if the application is running correctly.
// Three types of checks are supported:
// - /health: Overall health status (combines all checks)
// - /health/ready: Readiness probe - is the app ready to accept traffic? (includes database connectivity)
// - /health/live: Liveness probe - is the app process running? (always returns healthy if app responds)
// Reference: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/health-checks
//
"HealthChecks": {
//
// Enable health check endpoints.
//
"Enabled": false,
//
// Cache health check responses server-side in memory for the specified duration.
// Cached responses are served without re-executing the endpoint.
// Value is in PostgreSQL interval format (e.g., '5 seconds', '1 minute', '30s', '1min').
// Set to null to disable caching. Query strings are ignored to prevent cache-busting.
//
"CacheDuration": "5 seconds",
//
// Path for the main health check endpoint that reports overall status.
// Returns "Healthy", "Degraded", or "Unhealthy" with HTTP 200 (healthy/degraded) or 503 (unhealthy).
//
"Path": "/health",
//
// Path for the readiness probe endpoint.
// Kubernetes uses this to know when a pod is ready to receive traffic.
// Includes database connectivity check when IncludeDatabaseCheck is true.
// Returns 503 Service Unavailable if database is unreachable.
//
"ReadyPath": "/health/ready",
//
// Path for the liveness probe endpoint.
// Kubernetes uses this to know when to restart a pod.
// Always returns Healthy (200) if the application process is responding.
// Does NOT check database - a slow database shouldn't trigger a container restart.
//
"LivePath": "/health/live",
//
// Include PostgreSQL database connectivity in health checks.
// When true, the readiness probe will fail if the database is unreachable.
//
"IncludeDatabaseCheck": true,
//
// Name for the database health check (appears in detailed health reports).
//
"DatabaseCheckName": "postgresql",
//
// Require authentication for health check endpoints.
// When true, all health endpoints require a valid authenticated user.
// Security Consideration: Health endpoints can reveal information about your infrastructure
// (database connectivity, service status). Enable this if your health endpoints are publicly accessible.
// Note: Kubernetes/Docker health probes may need to authenticate if this is enabled.
//
"RequireAuthorization": false,
//
// Apply a rate limiter policy to health check endpoints.
// Specify the name of a policy defined in RateLimiterOptions.Policies.
// Security Consideration: Prevents denial-of-service attac...