CockroachDB v26.2 is a required Regular release. This page contains a complete list of features and changes in v26.2.
- For a summary of the most significant changes in v26.2, refer to Feature highlights.
- Before upgrading to CockroachDB v26.2, review the backward-incompatible changes, including key cluster setting changes and deprecations.
- For details about the support window for this release type, review the Release Support Policy.
- For details about all supported releases, the release schedule, and licenses, refer to CockroachDB Releases Overview.
v26.2.0
Release Date: April 28, 2026
This version is currently available only for select CockroachDB Cloud clusters. Binaries for self-hosted clusters will be available on May 13, 2026.
Feature highlights
This section summarizes the most significant user-facing changes in SQL, security, observability, and performance.
You can also search the docs for sections labeled New in v26.2.
SQL highlights
| Feature | Availability | Self-hosted | Basic | Standard | Advanced |
|---|---|---|---|---|---|
| **SQL triggers**: SQL triggers are now generally available. CockroachDB supports PostgreSQL-compatible triggers and trigger functions. | GA | | | | |
| **PostgreSQL-compatible `fuzzystrmatch` functions**: CockroachDB now supports PostgreSQL-compatible `fuzzystrmatch` built-in functions, including `dmetaphone()`, `dmetaphone_alt()`, and `daitch_mokotoff()`. | GA | | | | |
| **PostgreSQL-compatible TCP keepalive session variables**: CockroachDB now supports the PostgreSQL-compatible TCP keepalive session variables `tcp_keepalives_idle`, `tcp_keepalives_interval`, `tcp_keepalives_count`, and `tcp_user_timeout`. | GA | | | | |
| **Schema lock enforcement**: The cluster setting `sql.schema.auto_unlock.enabled` controls whether DDL operations automatically unlock `schema_locked` tables. | GA | | | | |
| **Hash-sharded indexes with prefix columns**: Hash-sharded indexes now support computing the shard value from a subset of index key columns rather than all of them. This gives you finer control over how data is distributed across shards and significantly improves query performance when filtering on only a prefix of the indexed columns. | GA | | | | |
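As a sketch of the prefix-sharding highlight above: the table and column names below are hypothetical, and the `shard_columns` storage parameter (described under SQL language changes later on this page) is shown with an assumed syntax that may differ in the final docs.

```sql
-- Hypothetical table for illustration.
CREATE TABLE events (
    tenant_id INT NOT NULL,
    ts TIMESTAMPTZ NOT NULL,
    payload JSONB
);

-- Compute the shard value from tenant_id only, a prefix of the index key
-- columns, rather than from (tenant_id, ts); exact syntax is an assumption.
CREATE INDEX ON events (tenant_id, ts)
    USING HASH WITH (bucket_count = 8, shard_columns = 'tenant_id');
```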
Security highlights
| Feature | Availability | Self-hosted | Basic | Standard | Advanced |
|---|---|---|---|---|---|
| **Certificate-based authentication using the X.509 Subject field**: CockroachDB now supports mapping SQL user roles to distinguished name attributes in the Subject field of X.509 certificates. | Preview | | | | |
| **Post-quantum cryptography support**: CockroachDB now supports post-quantum cryptographic algorithms for TLS connections. This applies to both client-to-node and inter-node communication. | Preview | | | | |
Observability highlights
| Feature | Availability | Self-hosted | Basic | Standard | Advanced |
|---|---|---|---|---|---|
| **Active Session History**: Active Session History (ASH) tracks CPU usage, I/O activity, wait events, and contention for session activity including SQL statements and background jobs. Samples are captured at regular intervals, enabling faster diagnosis of performance bottlenecks by correlating session activity with resource consumption. | Preview | | | | |
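ASH samples can be queried with SQL. A minimal sketch: the `crdb_internal.cluster_active_session_history` table and its `workload_type` column are named in the SQL language changes later on this page; other columns are not shown because their names are not listed here.

```sql
-- Inspect recent Active Session History samples cluster-wide,
-- restricted to samples attributed to SQL statements.
SELECT *
FROM crdb_internal.cluster_active_session_history
WHERE workload_type = 'STATEMENT'
LIMIT 20;
```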
Performance highlights
| Feature | Availability | Self-hosted | Basic | Standard | Advanced |
|---|---|---|---|---|---|
| **Leader leases**: Leader leases are now generally available. This feature maintains more stable leadership across nodes by reducing unnecessary lease transfers, resulting in more consistent query response times and fewer latency spikes. | GA | | | | |
| **Buffered writes**: Buffered writes are now generally available. This feature improves throughput and reduces tail latency under heavy write workloads by batching writes efficiently before flushing to disk. | GA | | | | |
Features that require upgrade finalization
This section summarizes the features that are not available until you finalize the v26.2 upgrade.
- Views now support the PostgreSQL-compatible `security_invoker` option. When set via `CREATE VIEW ... WITH (security_invoker)` or `ALTER VIEW SET (security_invoker = true)`, privilege checks on the underlying tables are performed as the querying user rather than the view owner. The `security_invoker` option can be reset with `ALTER VIEW ... RESET (security_invoker)`. #164184
- Added support for `ALTER TABLE ENABLE TRIGGER` and `ALTER TABLE DISABLE TRIGGER` syntax. This allows users to temporarily disable triggers without dropping them, and later re-enable them. The syntax supports disabling/enabling individual triggers by name, or all triggers on a table using the `ALL` or `USER` keywords. #161924
- Added an index storage parameter `skip_unique_checks` that can be used to disable unique constraint checks for indexes with implicit partition columns, including indexes in `REGIONAL BY ROW` tables. This should only be used if the application can guarantee uniqueness, for example, by using external UUID values or relying on a `unique_rowid()` default value. Incorrectly applying this setting when uniqueness is not guaranteed by the application could result in logically duplicate keys in different partitions of a unique index. #163378
- `ALTER TABLE ... DROP CONSTRAINT` can now be used to drop `UNIQUE` constraints. The backing `UNIQUE` index will also be dropped, as CockroachDB treats the constraint and index as the same thing. #162345
- `EXPLAIN` and `EXPLAIN ANALYZE` now display a `table stats mode` field (`canary` or `stable`) when the `sql.stats.canary_fraction` cluster setting is greater than 0, indicating which table statistics were used for query planning. Scan nodes for tables with active canary stats also show the configured canary window duration. #166129
- When selecting from a view, the view owner's privileges on the underlying tables are now checked. Previously, no privilege checks were performed on the underlying tables, so a view would continue to work even after the owner lost access to the underlying tables. This also affects row-level security (RLS): the view owner's RLS policies are now enforced instead of the invoker's. If this causes issues, you can restore the previous behavior by setting the cluster setting `sql.auth.skip_underlying_view_privilege_checks.enabled` to `true`. #164664
- During an `INSPECT` run, a new check validates unique column values in `REGIONAL BY ROW` tables. #164449
- The `bulkio.import.row_count_validation.mode` cluster setting controls whether row count validation runs after `IMPORT` operations. When enabled, a background `INSPECT` job validates that the imported row count matches expectations after an `IMPORT` completes. The `IMPORT` result includes an `inspect_job_id` column so the `INSPECT` job can be viewed separately. Valid values are `off` (default), `async`, and `sync`. #168403
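The `security_invoker` view option listed above can be exercised as follows; the table and view names are hypothetical.

```sql
CREATE TABLE accounts (id INT PRIMARY KEY, balance DECIMAL);

-- Privilege checks on accounts run as the querying user, not the view owner.
CREATE VIEW visible_accounts WITH (security_invoker) AS
    SELECT id, balance FROM accounts;

-- Switch back to owner-based privilege checks.
ALTER VIEW visible_accounts RESET (security_invoker);
```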
Backward-incompatible changes
This section summarizes changes that can cause applications, scripts, or manual workflows to fail or behave differently than in previous releases. This includes key cluster setting changes and deprecations.
- The `TG_ARGV` trigger function parameter now uses 0-based indexing to match PostgreSQL behavior. Previously, `TG_ARGV[1]` returned the first argument; now `TG_ARGV[0]` returns the first argument and `TG_ARGV[1]` returns the second argument. Additionally, usage of `TG_ARGV` no longer requires setting the `allow_create_trigger_function_with_argv_references` session variable. #161925
- The session variable `distsql_prevent_partitioning_soft_limited_scans` is now enabled by default. This prevents scans with soft limits from being planned as multiple TableReaders, which decreases the initial setup costs of some fully-distributed query plans. #160051
- Creating or altering a changefeed or Kafka/Pub/Sub external connection now returns an error when the `topic_name` query parameter is explicitly set to an empty string in the sink URI, rather than silently falling back to using the table name as the topic name. Existing changefeeds with an empty `topic_name` are not affected. #164225
- TTL jobs are now owned by the schedule owner instead of the `node` user. This allows users with the `CONTROLJOB` privilege to cancel TTL jobs, provided the schedule owner is not an admin (`CONTROLJOB` does not grant control over admin-owned jobs). #161226
- Calling `information_schema.crdb_rewrite_inline_hints` now requires the `REPAIRCLUSTER` privilege. #160716
- The Statement Details page URL format has changed from `/statement/{implicitTxn}/{statementId}` to `/statement/{statementId}`. As a result, bookmarks using the old URL structure will no longer work. #159558
- Changed the unit of measurement for admission control duration metrics from microseconds to nanoseconds. The following metrics are affected: `admission.granter.slots_exhausted_duration.kv`, `admission.granter.cpu_load_short_period_duration.kv`, `admission.granter.cpu_load_long_period_duration.kv`, `admission.granter.io_tokens_exhausted_duration.kv`, `admission.granter.elastic_io_tokens_exhausted_duration.kv`, and `admission.elastic_cpu.nanos_exhausted_duration`. Note that dashboards displaying these metrics will show a discontinuity at upgrade time, with pre-upgrade values appearing much lower due to the unit change. #160956
- Renamed the builtin function `crdb_internal.inject_hint` (introduced in v26.1.0-alpha.2) to `information_schema.crdb_rewrite_inline_hints`. #160716
- Removed the `incremental_location` option from `BACKUP` and `CREATE SCHEDULE FOR BACKUP`. #159189
- Removed the `incremental_location` option from `SHOW BACKUP` and `RESTORE`. #160416
- When selecting from a view, the view owner's privileges on the underlying tables are now checked. Previously, no privilege checks were performed on the underlying tables, so a view would continue to work even after the owner lost access to the underlying tables. This also affects row-level security (RLS): the view owner's RLS policies are now enforced instead of the invoker's. If this causes issues, you can restore the previous behavior by setting the cluster setting `sql.auth.skip_underlying_view_privilege_checks.enabled` to `true`. #164664
- Using `ALTER CHANGEFEED ADD ...` for a table that is already watched will now return an error: `target already watched by changefeed`. #164433
- Explicit `AS OF SYSTEM TIME` queries are no longer allowed on a Physical Cluster Replication (PCR) reader virtual cluster, unless the `bypass_pcr_reader_catalog_aost` session variable is set to `true`. This session variable should only be used during investigation or for changing cluster settings specific to the reader virtual cluster. #165382
- Added the `TEMPORARY` database privilege, which controls whether users can create temporary tables and views. On new databases, this privilege is granted to the `public` role by default, matching PostgreSQL behavior. #165992
- Statement diagnostics requests with `sampling_probability` and `expires_at` now collect up to 10 bundles (configurable via `sql.stmt_diagnostics.max_bundles_per_request`) instead of a single bundle. Set the cluster setting to `1` to restore single-bundle behavior. #166159
- User-defined views that reference `crdb_internal` virtual tables now enforce unsafe access checks. To restore the previous behavior, set the session variable `allow_unsafe_internals` or the cluster setting `sql.override.allow_unsafe_internals.enabled` to `true`. #167023
- `REFRESH MATERIALIZED VIEW` now evaluates row-level security (RLS) policies using the view owner's identity instead of the invoker's, matching PostgreSQL's definer semantics. #167419
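The 0-based `TG_ARGV` change can be seen in a trigger function like the following sketch; the table, function, and trigger names are hypothetical.

```sql
CREATE TABLE t (k INT PRIMARY KEY);

CREATE FUNCTION echo_args() RETURNS TRIGGER AS $$
BEGIN
    -- In v26.2, TG_ARGV[0] is 'first' and TG_ARGV[1] is 'second';
    -- earlier releases returned 'first' from TG_ARGV[1].
    RAISE NOTICE 'args: % %', TG_ARGV[0], TG_ARGV[1];
    RETURN NEW;
END
$$ LANGUAGE PLpgSQL;

CREATE TRIGGER echo_trg BEFORE INSERT ON t
    FOR EACH ROW EXECUTE FUNCTION echo_args('first', 'second');
```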
Key cluster setting changes
Review the following changes before upgrading. New default values will be used unless you have manually set a cluster setting value. To view the non-default settings on your cluster, run the SQL statement `SELECT * FROM system.settings`.
| Setting | Description | Previous default | New default | Backported to versions |
|---|---|---|---|---|
| `bulkio.import.elastic_control.enabled` | The `bulkio.import.elastic_control.enabled` cluster setting is now enabled by default, allowing import operations to integrate with elastic CPU control and automatically throttle based on available resources. #163867 | `false` | `true` | None |
| `bulkio.index_backfill.elastic_control.enabled` | The `bulkio.index_backfill.elastic_control.enabled` cluster setting is now enabled by default, allowing index backfill operations to integrate with elastic CPU control and automatically throttle based on available resources. #163866 | `false` | `true` | None |
| `bulkio.ingest.sst_batcher_elastic_control.enabled` | The `bulkio.ingest.sst_batcher_elastic_control.enabled` cluster setting is now enabled by default, allowing SST batcher operations to integrate with elastic CPU control and automatically throttle based on available resources. #163868 | `false` | `true` | None |
| `changefeed.max_retry_backoff` | Lowered the default value of the `changefeed.max_retry_backoff` cluster setting from `10m` to `30s` to reduce changefeed lag during rolling restarts. #164874 | `10m` | `30s` | v25.4, v26.1 |
| `kv.range_split.load_sample_reset_duration` | The `kv.range_split.load_sample_reset_duration` cluster setting now defaults to `30m`. This should improve load-based splitting in rare edge cases. #159499 | `0` | `30m` | None |
| `sql.catalog.allow_leased_descriptors.enabled` | Changed the default value of the `sql.catalog.allow_leased_descriptors.enabled` cluster setting to `true`. This setting allows introspection tables like `information_schema` and `pg_catalog` to use cached descriptors when building the table results, which improves the performance of introspection queries when there are many tables in the cluster. #159162 | `false` | `true` | v26.1 |
| `sql.guardrails.max_row_size_err` | Lowered the default value of the `sql.guardrails.max_row_size_log` cluster setting from 64 MiB to 16 MiB, and the default value of `sql.guardrails.max_row_size_err` from 512 MiB to 80 MiB. These settings control the maximum size of a row (or column family) that SQL can write before logging a warning or returning an error, respectively. The previous defaults were high enough that large rows would hit other limits first (such as the Raft command size limit or the backup SST size limit), producing confusing errors. The new defaults align with existing system limits to provide clearer diagnostics. If your workload legitimately writes rows larger than these new defaults, you can restore the previous behavior by increasing these settings. #164468 | 512 MiB | 80 MiB | None |
| `sql.guardrails.max_row_size_log` | Lowered the default value of the `sql.guardrails.max_row_size_log` cluster setting from 64 MiB to 16 MiB, and the default value of `sql.guardrails.max_row_size_err` from 512 MiB to 80 MiB. These settings control the maximum size of a row (or column family) that SQL can write before logging a warning or returning an error, respectively. The previous defaults were high enough that large rows would hit other limits first (such as the Raft command size limit or the backup SST size limit), producing confusing errors. The new defaults align with existing system limits to provide clearer diagnostics. If your workload legitimately writes rows larger than these new defaults, you can restore the previous behavior by increasing these settings. #164468 | 64 MiB | 16 MiB | None |
| `sql.stats.automatic_full_concurrency_limit` | Increased the default value of `sql.stats.automatic_full_concurrency_limit` (which controls the maximum number of concurrent full statistics collections) from `1` to the number of vCPUs divided by 2 (e.g., 4-vCPU nodes will have the value `2`). #161806 | `1` | Number of vCPUs / 2 | None |
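If your workload depends on any of the previous defaults, you can pin them explicitly; for example, to keep the old row-size guardrails from before v26.2:

```sql
-- Show settings that differ from their defaults on this cluster.
SELECT * FROM system.settings;

-- Restore the pre-v26.2 row-size guardrail defaults.
SET CLUSTER SETTING sql.guardrails.max_row_size_log = '64MiB';
SET CLUSTER SETTING sql.guardrails.max_row_size_err = '512MiB';
```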
Deprecations
| Deprecated | Description |
|---|---|
| `enable_inspect_command` session variable | `INSPECT` is now a generally available (GA) feature. The `enable_inspect_command` session variable has been deprecated, and is now effectively always set to `true`. #159659 |
| `enable_super_regions` session variable and `sql.defaults.super_regions.enabled` cluster setting | The `enable_super_regions` session variable and the `sql.defaults.super_regions.enabled` cluster setting are no longer required to use super regions. Super region DDL operations (`ADD`, `DROP`, and `ALTER SUPER REGION`) now work without any experimental flag. The session variable and cluster setting are deprecated, and existing scripts that set them will continue to work without error. #165227 |
| `cockroach encode-uri` command | The `cockroach encode-uri` command has been merged into the `cockroach convert-url` command, and `encode-uri` has been deprecated. As a result, the flags `--inline`, `--database`, `--user`, `--password`, `--cluster`, `--certs-dir`, `--ca-cert`, `--cert`, and `--key` have been added to `convert-url`. #164561 |
Security updates
- LDAP authentication for the DB Console now supports automatic user provisioning. When the cluster setting `security.provisioning.ldap.enabled` is set to `true`, users who authenticate successfully via LDAP will be automatically created in CockroachDB if they do not already exist. #163199
- The new cluster setting `security.client_cert.san_required.enabled` enables Subject Alternative Name (SAN) based authentication for client certificates. When enabled, CockroachDB validates client identities using SAN attributes (URIs, DNS names, or IP addresses) from X.509 certificates instead of, or in addition to, the certificate's Common Name field. Key capabilities include:
    - For privileged users (`root` and `node`): SAN identities are validated against values configured via the `--root-cert-san` and `--node-cert-san` startup flags, with automatic fallback to Distinguished Name validation when both methods are configured.
    - For database users: SAN identities are extracted from client certificates and mapped to database usernames using Host-Based Authentication (HBA) identity mapping rules, allowing a single certificate with multiple SAN entries to authenticate as different database users based on context.
    - Multiple identity attributes: A single certificate can contain multiple SAN entries (e.g., URI for service identity, DNS for hostname, IP for network location), providing flexible authentication options.

  This authentication method works across both SQL client connections and internal RPC communication between cluster nodes, ensuring consistent identity verification throughout the system. Organizations using modern certificate management systems and service identity frameworks can now leverage their existing infrastructure for database authentication without requiring certificate reissuance or CN-based naming conventions. #162583
- When the `security.provisioning.ldap.enabled` cluster setting is enabled, LDAP-authenticated DB Console logins now update the `estimated_last_login_time` column in the `system.users` table. #163400
- When the `security.provisioning.oidc.enabled` cluster setting is enabled, OIDC-authenticated DB Console logins now populate the `estimated_last_login_time` column in `system.users`, allowing administrators to track when OIDC users last accessed the DB Console. #164129
- Removed an overly restrictive TLS curve preference that limited FIPS mode to P-256. CockroachDB now uses Go's native FIPS curve selection, improving interoperability with clients that prefer other FIPS curves. #166793
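A minimal sketch of enabling LDAP provisioning and auditing last logins, using the cluster setting and `system.users` column named above:

```sql
-- Allow LDAP-authenticated DB Console users to be created on first login.
SET CLUSTER SETTING security.provisioning.ldap.enabled = true;

-- Audit when provisioned users last logged in to the DB Console.
SELECT username, estimated_last_login_time FROM system.users;
```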
Enterprise edition changes
- Added a new cluster setting, `security.provisioning.oidc.enabled`, to allow automatic provisioning of users when they log in for the first time via OIDC. When enabled, a new user will be created in CockroachDB upon their first successful OIDC authentication. This feature is disabled by default. #159787
- LDAP authentication for the DB Console now additionally supports role-based access control (RBAC) through LDAP group membership. To use this feature, an administrator must first create roles in CockroachDB with names that match the Common Names (CN) of their LDAP groups. These roles should then be granted the desired privileges for DB Console access. When a user who is a member of a corresponding LDAP group logs into the DB Console, they will be automatically granted the role and its associated privileges, creating consistent behavior with SQL client connections. #162302
SQL language changes
- Added cluster settings to control the number of concurrent automatic statistics collection jobs:
    - `sql.stats.automatic_full_concurrency_limit` controls the maximum number of concurrent full statistics collections. The default is `1`.
    - `sql.stats.automatic_extremes_concurrency_limit` controls the maximum number of concurrent partial statistics collections using extremes. The default is `128`.

  Note that at most one statistics collection job can run on a single table at a time. #158835
- Added a new cluster setting `bulkio.import.distributed_merge.mode` to enable distributed merge support for `IMPORT` operations. When enabled (default: `false`), `IMPORT` jobs will use a two-phase approach where import processors first write SST files to local storage, then a coordinator merges and ingests them. This can improve performance for large imports by reducing L0 file counts and enabling merge-time optimizations. This feature requires all nodes to be running v26.1 or later. #159330
- Added a new cluster setting, `sql.schema.auto_unlock.enabled`, that controls whether DDL operations automatically unlock `schema_locked` tables. When set to `false`, DDL on schema-locked tables is blocked unless the user manually unlocks the table first. This allows customers using LDR to enforce `schema_locked` as a hard lock that prevents user-initiated DDL. The default is `true`, preserving existing behavior. #166471
- Added a new cluster setting `sql.prepared_transactions.unsafe.enabled` (default: `false`) that controls whether `PREPARE TRANSACTION` statements are accepted. This setting is marked unsafe and requires the unsafe setting interlock to change. When disabled, attempting to prepare a transaction returns an error. `COMMIT PREPARED` and `ROLLBACK PREPARED` remain available regardless of this setting to allow cleanup of existing prepared transactions. #166855
- Users can now set the `use_backups_with_ids` session setting to enable a new `SHOW BACKUPS IN` experience. When enabled, `SHOW BACKUPS IN {collection}` displays all backups in the collection. Results can be filtered by backup end time using `OLDER THAN {timestamp}` or `NEWER THAN {timestamp}` clauses. Example usage: `SET use_backups_with_ids = true; SHOW BACKUPS IN '{collection}' OLDER THAN '2026-01-09 12:13:14' NEWER THAN '2026-01-04 15:16:17';` #160137
- If the new `SHOW BACKUP` experience is enabled by setting the `use_backups_with_ids` session variable to `true`, `SHOW BACKUP` will parse the IDs provided by `SHOW BACKUPS` and display contents for single backups. #160812
- If the new `RESTORE` experience is enabled by setting the `use_backups_with_ids` session variable to `true`, `RESTORE` will parse the IDs provided by `SHOW BACKUPS` and will restore the specified backup without the use of `AS OF SYSTEM TIME`. #161294
- `SHOW BACKUP` and `RESTORE` now allow backup IDs even if the `use_backups_with_ids` session variable is not set. Setting the variable only configures whether `LATEST` is resolved using the new or legacy path. #162329
- Added the `REVISION START TIME` option to the new `SHOW BACKUPS` experience enabled via the `use_backups_with_ids` session variable. Use the `REVISION START TIME` option to view the revision start times of revision history backups. #161328
- Added the `STRICT` option for locality-aware backups. When enabled, backups fail if data from a KV node with one locality tag would be backed up to a bucket with a different locality tag, ensuring data domiciling compliance. #158999
- `RESTORE TABLE/DATABASE` now supports the `WITH GRANTS` option, which restores grants on restore targets for users in the restoring cluster. Note that using this option with `new_db_name` will cause the new database to inherit the privileges in the backed-up database. #164444
- Added support for `SHOW STATEMENT HINTS`, which displays information about the statement hints (if any) associated with the given statement fingerprint string. The fingerprint is normalized in the same way as `EXPLAIN (FINGERPRINT)` before hints are matched. Example usage: `SHOW STATEMENT HINTS FOR 'SELECT * FROM xy WHERE x = 10'` or `SHOW STATEMENT HINTS FOR $$ SELECT * FROM xy WHERE x = 10 $$ WITH DETAILS`. #159231
- Added support for a new statement hint used to change session variable values for the duration of a single statement without application changes. The new hint type can be created using the `information_schema.crdb_set_session_variable_hint` built-in function. The override applies only when executing a statement matching the given fingerprint and does not persist on the session or surrounding transaction. #164909
- Introduced the `information_schema.crdb_delete_statement_hints` built-in function, which accepts two kinds of payload: `row_id` (int), the primary key of `system.statement_hints`; or `fingerprint` (string). The function returns the number of rows deleted. #163891
- CockroachDB now includes `information_schema.crdb_rewrite_inline_hints` statements in the `schema.sql` file of a statement diagnostics bundle for re-creating all the statement hints bound to the statement. The hint recreation statements are sorted in ascending order of the original hint creation time. #164164
- Rewrite-inline-hints rules can now be scoped to a specific database, and will only apply to matching statements when the current database also matches. This database can be specified with an optional third argument to `information_schema.crdb_rewrite_inline_hints`. #165457
- `SHOW STATEMENT HINTS` now includes `database` and `enabled` columns in its output. The `database` column indicates which database the hint applies to, and the `enabled` column indicates whether the hint is active. #165712
- The `information_schema.crdb_delete_statement_hints` built-in function now accepts an optional second `database` argument to delete only hints scoped to a specific database. #167192
- `CREATE OR REPLACE TRIGGER` is now supported. If a trigger with the same name already exists on the same table, it is replaced with the new definition. If no trigger with that name exists, a new trigger is created. #162633
- Updated `DROP TRIGGER` to accept the `CASCADE` option for PostgreSQL compatibility. Since triggers in CockroachDB cannot have dependents, `CASCADE` behaves the same as `RESTRICT` or omitting the option entirely. #161915
- `DROP COLUMN` and `DROP INDEX` with `CASCADE` now properly drop dependent triggers. Previously, these operations would fail with an unimplemented error when a trigger depended on the column or index being dropped. #163296
- `CREATE OR REPLACE FUNCTION` now works on trigger functions that have active triggers. Previously, this was blocked with an unimplemented error, requiring users to drop and recreate triggers. The replacement now atomically updates all dependent triggers to execute the new function body. #163348
- Added support for the `pg_trigger_depth()` builtin function, which returns the current nesting level of PostgreSQL triggers (0 if not called from inside a trigger). #162286
- Added the `pg_get_triggerdef` builtin function, which returns the `CREATE TRIGGER` statement for a given trigger OID. This improves PostgreSQL compatibility for databases that contain triggers. #165849
- A database-level changefeed with no tables will periodically poll to check for tables added to the database. The new option `hibernation_polling_frequency` sets the frequency at which the polling occurs, until a table is found, at which point polling ceases. #156771
- `CREATE CHANGEFEED FOR DATABASE` now returns an error stating that the feature is not implemented. #166920
- Added the `MAINTAIN` privilege, which can be granted on tables and materialized views. Users with the `MAINTAIN` privilege on a materialized view can execute `REFRESH MATERIALIZED VIEW` without being the owner. Users with the `MAINTAIN` privilege on a table can execute `ANALYZE` without needing `SELECT`. This aligns with PostgreSQL 17 behavior. #164236
- Added support for the `aclitem` type and the `makeaclitem` and `acldefault` built-in functions for PostgreSQL compatibility. The existing `aclexplode` function, which previously always returned no rows, now correctly parses ACL strings and returns the individual privilege grants they contain. #165744
- CockroachDB now supports the PostgreSQL session variables `tcp_keepalives_idle`, `tcp_keepalives_interval`, `tcp_keepalives_count`, and `tcp_user_timeout`. These allow per-session control over TCP keepalive behavior on each connection. A value of `0` (the default) uses the corresponding cluster setting. Non-zero values override the cluster setting for that session only. Units match PostgreSQL: seconds for keepalive settings, milliseconds for `tcp_user_timeout`. #164369
- Added support for the `dmetaphone()`, `dmetaphone_alt()`, and `daitch_mokotoff()` built-in functions, completing CockroachDB's implementation of the PostgreSQL `fuzzystrmatch` extension. `dmetaphone` and `dmetaphone_alt` return Double Metaphone phonetic codes for a string, and `daitch_mokotoff` returns an array of Daitch-Mokotoff soundex codes. These functions are useful for fuzzy string matching based on phonetic similarity. #163430
- Added `to_date(text, text)` and `to_timestamp(text, text)` SQL functions that parse dates and timestamps from formatted strings using PostgreSQL-compatible format patterns. For example, `to_date('2023-03-15', 'YYYY-MM-DD')` returns a date, and `to_timestamp('2023-03-15 14:30:45', 'YYYY-MM-DD HH24:MI:SS')` returns a `timestamptz`. #164672
- `SHOW ALL` now returns a third column, `description`, containing a human-readable description of each session variable. This matches the PostgreSQL behavior of `SHOW ALL`. #165397
- The `tableoid` system column is now supported on virtual tables such as those in `pg_catalog` and `information_schema`. This improves compatibility with PostgreSQL tools like `pg_dump` that reference `tableoid` in their introspection queries. #165727
- Added the `ST_AsMVT` aggregate function to generate Mapbox Vector Tile (MVT) binary format from geospatial data, providing PostgreSQL/PostGIS compatibility for web mapping applications. #150663
- The aggregation function `ST_AsMVT` can now also be used as a window function. #166860
- Updated CockroachDB to allow a prefix of index key columns to be used for the shard column in a hash-sharded index. The `shard_columns` storage parameter may be used to override the default, which uses all index key columns in the shard column. #161422
- Queries executed via the vectorized engine now display their progress in the `phase` column of `SHOW QUERIES`. Previously, this feature was only available in the row-by-row engine. #158029
- CockroachDB now shows execution statistics (like `execution time`) on `EXPLAIN ANALYZE` output for `render` nodes, which often handle built-in functions. #161509
- The output of `EXPLAIN [ANALYZE]` in non-`VERBOSE` mode is now more succinct. #153361
- `crdb_internal.datums_to_bytes` is now available in the `information_schema` system catalog as `information_schema.crdb_datums_to_bytes`. #156963
- The `information_schema.crdb_datums_to_bytes` built-in function is now documented. #160486
- Active Session History tables are now accessible via `information_schema.crdb_node_active_session_history` and `information_schema.crdb_cluster_active_session_history`, in addition to the existing `crdb_internal` tables. This improves discoverability when browsing `information_schema` for available metadata. #164969
- Added a `workload_type` column to the `crdb_internal.node_active_session_history` and `crdb_internal.cluster_active_session_history` virtual tables, as well as the corresponding `information_schema` views. The column exposes the type of workload being sampled, with possible values `STATEMENT`, `JOB`, `SYSTEM`, or `UNKNOWN`. #165866
- Added the `optimizer_inline_any_unnest_subquery` session setting to enable/disable the optimizer rule `InlineAnyProjectSet`. The setting is on by default in v26.2 and later. #161880
- Exposed the following settings for canary table statistics:
- Cluster setting
sql.stats.canary_fraction: probability that table statistics will use canary mode (i.e., always use the freshest stats) instead of stable mode (i.e., use the second-freshest stats) for query planning [0.0-1.0]. - Session variable
canary_stats_mode: Whensql.stats.canary_fractionis greater than0, controls which table statistics are used for query planning on the current session:onalways uses the newest (canary) stats immediately when they are collected,offdelays using new stats until they outlive the canary window, andautoselects probabilistically based on the canary fraction. Has no effect whensql.stats.canary_fractionis0. #167944
- Cluster setting
CockroachDB now supports
COMMIT AND CHAINandROLLBACK AND CHAIN(as well asEND AND CHAINandABORT AND CHAIN). These statements finish the current transaction and immediately start a new explicit transaction with the same isolation level, priority, and read/write mode as the previous transaction.AND NO CHAINis also accepted for PostgreSQL compatibility but behaves identically to a plainCOMMITorROLLBACK. #164403Added support for importing Parquet files using the
IMPORTstatement. Parquet files can be imported from cloud storage URLs (s3://,gs://,azure://) or HTTP servers that support range requests (Accept-Ranges: bytes). This feature supports column-level compression formats (Snappy, GZIP, ZSTD, Brotli, etc.) as specified in the Parquet file format, but does not support additional file-level compression (e.g.,.parquet.gzfiles). Nested Parquet types (lists, maps, structs) are not supported; only flat schemas with primitive types are supported at this time. #163991ALTER TABLE ... SET LOCALITYis now fully executed using the declarative schema changer, improving reliability and consistency with other schema change operations. #161763Setting
skip_unique_checks = trueon an index now emits a notice warning that unique constraint enforcement is bypassed, with a pointer to theINSPECTdocumentation. #167405
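The `AND CHAIN` support noted above lets you run a series of transactions with identical settings without restating them. A minimal sketch, assuming a hypothetical table `t(k INT PRIMARY KEY)`:

```sql
BEGIN TRANSACTION PRIORITY HIGH;
INSERT INTO t VALUES (1);
COMMIT AND CHAIN;   -- commits, then opens a new txn with the same priority
INSERT INTO t VALUES (2);
COMMIT;             -- a plain COMMIT ends the chain
```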
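As a sketch of how the canary statistics settings above fit together (the values here are illustrative, not recommendations):

```sql
-- Opt roughly 10% of statistics collections into canary mode cluster-wide.
SET CLUSTER SETTING sql.stats.canary_fraction = 0.1;

-- In a session, always use the freshest (canary) stats as soon as
-- they are collected, regardless of the probabilistic selection.
SET canary_stats_mode = 'on';
```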
Operational changes
- Changefeeds now support the `partition_alg` option for specifying a Kafka partitioning algorithm. Currently `fnv-1a` (default) and `murmur2` are supported. The option is only valid on Kafka v2 sinks. This is protected by the cluster setting `changefeed.partition_alg.enabled`. An example usage: `SET CLUSTER SETTING changefeed.partition_alg.enabled=true; CREATE CHANGEFEED ... INTO 'kafka://...' WITH partition_alg='murmur2';`. Note that if a changefeed is created using the `murmur2` algorithm, and then the cluster setting is disabled, the changefeed will continue using the `murmur2` algorithm unless the changefeed is altered to use a different `partition_alg`. #161265
- Added the `server.sql_tcp_user.timeout` cluster setting, which specifies the maximum amount of time transmitted data can remain unacknowledged before the underlying TCP connection is forcefully closed. This setting is enabled by default with a value of 30 seconds and is supported on Linux and macOS (Darwin). #164037
- Introduced a new cluster setting `kvadmission.store.snapshot_ingest_bandwidth_control.min_rate.enabled`. When this setting is enabled and disk bandwidth-based admission control is active, snapshot ingestion will be admitted at a minimum rate. This prevents snapshot ingestion from being starved by other elastic work. #159436
- Added periodic ASH workload summary logging to the `OPS` channel. Two new cluster settings, `obs.ash.log_interval` (default: `10m`) and `obs.ash.log_top_n` (default: `10`), control how often and how many entries are emitted. Each summary reports the most frequently sampled workloads grouped by event type, event name, and workload ID, providing visibility into workload patterns that previously existed only in memory. #165093
- Added the opt-in cluster setting `server.oidc_authentication.tls_insecure_skip_verify.enabled` to skip TLS certificate verification for OIDC provider connections. #164514
- A new cluster setting, `server.gc_assist.enabled`, allows operators to dynamically disable GC assist in CockroachDB's forked Go runtime. By default, it follows the `GODEBUG=gcnoassist` flag. A new metric, `sys.gc.assist.enabled`, reports the current state (`1` = enabled, `0` = disabled). #166555
- Added a new cluster setting `changefeed.kafka.max_request_size` and a per-changefeed `Flush.MaxBytes` option in the Kafka sink config to control the maximum size of record batches sent to Kafka by the v2 sink. Lowering this from the default of 256 MiB can prevent spurious message-too-large errors when multiple batches are coalesced into a single broker request. #166740
- The new `cockroach gen dashboard` command generates standardized monitoring dashboards from an embedded configuration file. It outputs a dashboard JSON file for either Datadog (`--tool=datadog`) or Grafana (`--tool=grafana`), with Grafana dashboards using Prometheus queries. The generated dashboards include metrics across Overview, Hardware, Runtime, Networking, SQL, and Storage categories. Use `--output` to set the output file path and `--rollup-interval` to control metric aggregation. #161050
- The `build.timestamp` Prometheus metric now carries `major` and `minor` labels identifying the release series of the running CockroachDB binary (e.g., `major="26", minor="1"` for any v26.1.x build). #163834
- Added the `kv.protectedts.protect`, `kv.protectedts.release`, `kv.protectedts.update_timestamp`, `kv.protectedts.get_record`, and `kv.protectedts.mark_verified` metrics to track protected timestamp storage operations. These metrics help diagnose issues with excessive protected timestamp churn and operational errors. Each operation tracks both successful completions (`.success`) and failures (`.failed`, such as `ErrExists` or `ErrNotExists`). Operators can monitor these metrics to understand PTS system behavior and identify performance issues related to backups, changefeeds, and other features that use protected timestamps. #160129
- Added a new metric `sql.rls.policies_applied.count` that tracks the number of SQL statements where row-level security (RLS) policies were applied during query planning. #164405
- RPC connection metrics now include a `protocol` label. The following metrics are affected: `rpc.connection.avg_round_trip_latency`, `rpc.connection.failures`, `rpc.connection.healthy`, `rpc.connection.healthy_nanos`, `rpc.connection.heartbeats`, `rpc.connection.tcp_rtt`, `rpc.connection.tcp_rtt_var`, `rpc.connection.unhealthy`, `rpc.connection.unhealthy_nanos`, and `rpc.connection.inactive`. In v26.2, the label value is always `grpc`. For example: `rpc_connection_healthy{node_id="1",remote_node_id="0",remote_addr="localhost:26258",class="system",protocol="grpc"} 1` #162528
- Added a new metric `sql.query.with_statement_hints.count` that is incremented whenever a statement is executed with one or more external statement hints applied. An example of an external statement hint is an inline-hints rewrite rule added by calling `information_schema.crdb_rewrite_inline_hints`. #161043
- Promoted the following admission control metrics to `ESSENTIAL` status, making them more discoverable in monitoring dashboards and troubleshooting workflows: `admission.wait_durations.*` (`sql-kv-response`, `sql-sql-response`, `elastic-stores`, `elastic-cpu`), `admission.granter.*_exhausted_duration.kv` (`slots`, `io_tokens`, `elastic_io_tokens`), `admission.elastic_cpu.nanos_exhausted_duration`, `kvflowcontrol.eval_wait.*.duration` (`elastic`, `regular`), and `kvflowcontrol.send_queue.bytes`. These metrics track admission control wait times, resource exhaustion, and replication flow control, providing visibility into cluster health and performance throttling. #164827
- Added two new metrics, `auth.cert.san.conn.total` and `auth.cert.san.conn.success`, to track SAN-based certificate authentication attempts and successes. #166829
- Logical Data Replication (LDR) now supports hash-sharded indexes and secondary indexes with virtual computed columns. Previously, tables with these index types could not be replicated using LDR. #161062
- External connections can now be used with online restore. #159090
- Backup schedules that utilize the `revision_history` option now apply that option only to incremental backups triggered by that schedule, rather than duplicating the revision history in the full backups as well. #162105
- Changefeed ranges are now more accurately reported as lagging. #163427
- Jobs now clear their running status messages upon successful completion. #163765
- Added a new structured event of type `rewrite_inline_hints` that is emitted when an inline-hints rewrite rule is added using `information_schema.crdb_rewrite_inline_hints`. This event is written to both the event log and the `OPS` channel. #160901
- When hash-based redaction is enabled in the logging configuration, usernames in authentication logs now produce deterministic hashes instead of being fully redacted. This lets support engineers correlate the same user across multiple log entries without revealing the actual values. #165804
- Changed goroutine profile dumps from human-readable `.txt.gz` files to binary proto `.pb.gz` files. This improves the performance of the goroutine dumper by eliminating brief in-process pauses that occurred when collecting goroutine stacks. #160798
- Red Hat certified CockroachDB container images are now published as multi-arch manifests supporting `linux/amd64`, `linux/arm64`, and `linux/s390x`. Previously only `linux/amd64` was published to the Red Hat registry. #165725
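As an illustration of the ASH summary-logging settings above (the interval and count are examples only, not recommendations):

```sql
-- Emit workload summaries to the OPS channel every 5 minutes,
-- reporting the top 20 workloads per summary.
SET CLUSTER SETTING obs.ash.log_interval = '5m';
SET CLUSTER SETTING obs.ash.log_top_n = 20;
```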
Command-line changes
- The `cockroach debug tsdump` command now defaults to `--format=raw` instead of `--format=text`. The `raw` (gob) format is optimized for Datadog ingestion. A new `--output` flag lets you write output directly to a file, avoiding potential file corruption that can occur with shell redirection. If `--output` is not specified, output is written to `stdout`. #160538
- The `cockroach debug tsdump` command now supports ZSTD encoding via `--format=raw --encoding=zstd`. This generates compressed tsdump files that are approximately 85% smaller than raw format. The `tsdump upload` command automatically detects and decompresses ZSTD files, allowing direct upload without manual decompression. #161998
- The `cockroach debug zip` command's `--include-files` and `--exclude-files` flags now support full zip path patterns. Patterns containing `/` are matched against the full path within the zip archive (e.g., `--include-files='debug/nodes/1/*.json'`). Patterns without `/` continue to match the base file name as before. #163266
- Added the `--exclude-log-severities` flag to `cockroach debug zip` that filters log entries by severity server-side. For example, `--exclude-log-severities=INFO` excludes all `INFO`-level log entries from the collected log files, which can significantly reduce zip file size for large clusters. Valid severity names are `INFO`, `WARNING`, `ERROR`, and `FATAL`. The flag accepts a comma-delimited list or can be specified multiple times. #165802
- Added a `--list-dbs` flag to `workload init workload_generator` that lists all user databases found in debug logs without initializing tables. This helps users discover which databases are available in the debug zip before running the full init command. #163930
DB Console changes
- Added a new time-series bar graph called Plan Distribution Over Time to the Statement Fingerprint page, on the Explain Plans tab. It shows which execution plans were used in each time interval, helping detect shifts in query plan distributions. #161011
- The SQL Activity > Sessions page now defaults the Session Status filter to Active, Idle to exclude closed sessions. #160576
Bug fixes
- The fix for `node descriptor not found` errors for changefeeds with `execution_locality` filters in CockroachDB Basic and Standard clusters is now controlled by the cluster setting `sql.instance_info.use_instance_resolver.enabled` (default: `true`). #163947
- Statistics histogram collection is now skipped for JSON columns referenced in partial index predicates, except when `sql.stats.non_indexed_json_histograms.enabled` is true (default: false). #164477
- CockroachDB could previously encounter internal errors like `column statistics cannot be determined for empty column set` and `invalid union` in some edge cases with `UNION`, `EXCEPT`, and `INTERSECT`. This has now been fixed. #150706
- Fixed a bug that could cause a scan over a secondary index to read significantly more KVs than necessary in order to satisfy a limit when the scanned index had more than one column family. #156672
- Fixed a bug where a query predicate could be ignored when all of the following conditions were met: the query used a lookup join to an index, the predicate constrained a column to multiple values (e.g., `column IN (1, 2)`), and the constrained column followed one or more columns with optional multi-value constraints in the index. This bug was introduced in v24.3.0. #159722
- Fixed an error that occurred when using generic query plans that generate a lookup join on indexes containing identity computed columns. #162036
- Fixed a bug that prevented the `optimizer_min_row_count` setting from applying to anti-join expressions, which could lead to bad query plans. The fix is gated behind `optimizer_use_min_row_count_anti_join_fix`, which is on by default in v26.2 and later, and off by default in earlier versions. #163244
- Fixed an optimizer limitation that prevented index usage on computed columns when querying through views or subqueries containing JSON fetch expressions (such as `->`, `->>`, `#>`, or `#>>`). Queries that project JSON expressions matching indexed computed column definitions now correctly use indexes instead of performing full table scans, significantly improving performance for JSON workloads. #163395
- Fixed a bug affecting statements within a UDF or stored procedure where the `LIMIT` or `OFFSET` is a reference to an argument of the UDF or stored procedure. #163500
- Fixed an issue where `ORDER BY` expressions containing subqueries with non-default `NULLS` ordering (e.g., `NULLS LAST` for `ASC`, `NULLS FIRST` for `DESC`) could cause an error during query planning. #163230
- Fixed a bug where CockroachDB did not always promptly respond to the statement timeout when performing a hash join with an `ON` filter that is mostly `false`. #164879
- Fixed a bug that caused a routine with an `INSERT` statement to unnecessarily block dropping a hash-sharded index or computed column on the target table. This fix applies only to newly created routines. In releases prior to v25.3, the fix must be enabled by setting the session variable `use_improved_routine_dependency_tracking` to `on`. #146250
- Fixed a bug where creating a routine could create unnecessary column dependencies when the routine references columns through CHECK constraints (including those for RLS policies and hash-sharded indexes) or partial index predicates. These unnecessary dependencies prevented dropping the column without first dropping the routine. The fix is gated behind the session setting `use_improved_routine_deps_triggers_and_computed_cols`, which is off by default prior to v26.1. #159126
- Fixed a bug that allowed columns to be dropped despite being referenced by a routine. This could occur when a column was only referenced as a target column in the `SET` clause of an `UPDATE` statement within the routine. This fix only applies to newly-created routines. In versions prior to v26.1, the fix must be enabled by setting the session variable `prevent_update_set_column_drop`. #158935
- Fixed a bug that caused `SHOW CREATE FUNCTION` to error when the function body contained casts from columns to user-defined types. #159642
- Fixed a bug in which PL/pgSQL UDFs with many `IF` statements would cause a timeout and/or OOM when executed from a prepared statement. This bug was introduced in v23.2.22, v24.1.15, v24.3.9, v25.1.2, and v25.2.0. #162512
- Fixed a bug where running `EXPLAIN ANALYZE (DEBUG)` on a query that invokes a UDF with many blocks could cause out-of-memory errors (OOMs). #166132
- Fixed a bug where `ALTER FUNCTION ... RENAME TO` and `ALTER PROCEDURE ... RENAME TO` could create duplicate functions in non-public schemas. #166681
- Fixed a race condition between concurrent `ALTER FUNCTION ... SET SCHEMA` and `DROP SCHEMA` operations. #164043
- Fixed a bug where schema changes could fail after a `RESTORE` due to missing session data. #159176
- Fixed a bug where schema changes adding a `NOT NULL` constraint could enter an infinite retry loop if a row violated the constraint and contained certain content (e.g., `"EOF"`). Such errors are now correctly classified and don't cause retries. #160780
- Fixed a bug where `CREATE INDEX` on a table with `PARTITION ALL BY` would fail if the partition columns were explicitly included in the primary key definition. #161083
- Fixed a bug that caused `ALTER INDEX ... PARTITION BY` statements to fail on a nonexistent index even if `IF EXISTS` was used. #163378
- `ALTER TABLE ... ALTER PRIMARY KEY USING COLUMNS (col) USING HASH` is now correctly treated as a no-op when the table already has a matching hash-sharded primary key, instead of attempting an unnecessary schema change. #164557
- Fixed a bug where `ALTER TABLE ... ALTER COLUMN ... SET DATA TYPE` from an unbounded string or bit type to a bounded type with a length `>= 64` (for example, `STRING` to `STRING(100)`) would skip validating existing data against the new length constraint. This could leave rows in the table that violate the column's type, with values longer than the specified limit. #164739
- Context cancellation is now surfaced if a `statement_timeout` occurs while waiting for a schema change. #167112
- Fixed a bug that could cause a panic during changefeed startup if an error occurred while initializing the metrics controller. #159431
- Fixed a bug that could cause changefeeds using Kafka v1 sinks to hang when the changefeed was cancelled. #162058
- Fixed an issue where changefeeds with `execution_locality` filters could fail in multi-tenant clusters with `node descriptor not found` errors. #163507
- Fixed a bug where running changefeeds with `envelope=enriched` and `enriched_properties` containing `source` would cause failures during a cluster upgrade. #163885
- Fixed a bug introduced in v25.4+ where setting `min_checkpoint_frequency` to `0` prevented changefeeds from advancing their resolved timestamp (high-water mark) and emitting resolved messages. Note that setting `min_checkpoint_frequency` lower than `500ms` is not recommended, as it may cause degraded changefeed performance. #164765
- Changefeed retry backoff now resets when the changefeed's resolved timestamp (high-water mark) advances between retries, in addition to the existing time-based reset (configured by `changefeed.retry_backoff_reset`). This prevents transient rolling restarts from causing changefeeds to fall behind because of excessive backoff. #164933
- Fixed a bug where `RESTORE` with `skip_missing_foreign_keys` could fail with an internal error if the restored table had an in-progress schema change that added a foreign key constraint whose referenced table was not included in the restore. #164757
- Fixed a bug where incremental backups taken after downgrading a mixed-version cluster to v25.4 could result in inconsistent backup indexes. #164301
- Fixed a bug where restoring a database backup containing default privileges that referenced non-existent users would leave dangling user references in the restored database descriptor. #166183
- Fixed a bug where AVRO file imports of data with JSON or binary records could hang indefinitely when encountering stream errors from cloud storage (such as HTTP/2 `CANCEL` errors). Import jobs will now properly fail with an error instead of hanging. #161290
- Fixed a bug where `IMPORT` with AVRO data using OCF format could silently lose data if the underlying storage (e.g., S3) returned an error during read. Such errors are now properly reported. Other formats (specified via the `data_as_binary_records` and `data_as_json_records` options) are unaffected. The bug has been present since about v20.1. #161318
- Fixed a bug where import rollback could incorrectly revert data in a table that was already online. This could only occur if an import job was cancelled or failed after the import had already succeeded and the table was made available for use. #159627
- Invalid `avro_schema_prefix` values are now caught at statement time. The prefix must start with `[A-Za-z_]` and subsequently contain only `[A-Za-z0-9_]`, as specified in the Avro specification. #159869
- Fixed a bug where `IMPORT` error messages could include unredacted cloud storage credentials from the source URI. Credentials are now stripped from URIs before they appear in error messages. #164881
- Reduced contention when dropping descriptors or running concurrent imports. #161941
- Fixed a bug where rolling back a transaction that had just rolled back a savepoint would block other transactions accessing the same rows for five seconds. #160346
- Fixed a bug where multi-statement explicit transactions using `SAVEPOINT` to recover from certain errors (like duplicate key-value violations) could lose writes performed before the savepoint was created, in rare cases when buffered writes were enabled (off by default). This bug was introduced in v25.2. #161972
- Fixed a race condition that could occur during context cancellation of an incoming snapshot. #159403
- Fixed a bug which could cause prepared statements to fail with the error message `non-const expression` when they contained filters with stable functions. This bug has been present since v25.4.0. #159201
- Fixed prepared statements failing with `version mismatch` errors when user-defined types are modified between preparation and execution. Prepared statements now automatically detect UDT changes and re-parse to use current type definitions. #161827
- Fixed an internal error `could not find format code for column N` that occurred when executing `EXPLAIN ANALYZE EXECUTE` statements via JDBC or other clients using the PostgreSQL binary protocol. #162115
- Fixed a bug where CockroachDB returned `cached plan must not change result type` errors during the `Execute` phase instead of the `Bind` phase of the extended pgwire protocol. This caused compatibility issues with drivers like pgx that expect the error before `BindComplete` is sent, particularly when using batch operations with prepared statements after schema changes. #164406
- Fixed a bug where CockroachDB could crash when handling decimals with negative scales via the extended pgwire protocol. An error is now returned instead, matching PostgreSQL behavior. #160499
- Fixed a bug where the index definition shown in `pg_indexes` for hash-sharded indexes with `STORING` columns was not valid SQL. The `STORING` clause now appears in the correct position. #161882
- Fixed a bug where concurrent updates to a table using multiple column families during a partial index creation could result in data loss, incorrect `NULL` values, or validation failures in the resulting index. #166325
- Fixed a bug where statement bundles were missing `CREATE TYPE` statements for user-defined types used as array column types. #162357
- Fixed a rare data race during parallel constraint checks where a fresh descriptor collection could resolve a stale enum type version. This bug was introduced in v26.1.0. #163883
- Fixed a bug where creating a table with a user-defined type column failed when the user had `USAGE` privilege on the base type but not on its implicit array type. The array type now inherits privileges from the base type, matching PostgreSQL behavior. #164471
- Fixed a bug where rolling back a `CREATE TABLE` that referenced user-defined types or sequences would leave orphaned back-references on the type and sequence descriptors, causing them to appear in `crdb_internal.invalid_objects` after the table was GC'd. #166223
- Fixed a race condition where queries run after revoking `BYPASSRLS` could return wrong results because cached plans failed to notice the change immediately. #159354
- Fixed a bug where `DROP TABLE ... CASCADE` would incorrectly drop tables that had triggers or row-level security (RLS) policies referencing the dropped table. Now only the triggers/policies are dropped, and the tables owning them remain intact. #161914
- Fixed a bug where `EXPLAIN ANALYZE (DEBUG)` statement bundles did not include triggers, their functions, or tables modified by those triggers. The bundle's `schema.sql` file now contains the `CREATE TRIGGER`, `CREATE FUNCTION`, and `CREATE TABLE` statements needed to fully reproduce the query environment when triggers are involved. #163584
- Fixed a bug where dropped columns appeared in `pg_catalog.pg_attribute` with the `atttypid` column equal to 2283 (`anyelement`). Now this column will be `0` for dropped columns. This matches PostgreSQL behavior, where `atttypid=0` is used for dropped columns. #163950
- Fixed a bug where temporary tables created in one session could fail to appear in `pg_catalog` queries from another session because the parent temporary schema could not be resolved by ID. #165395
- The `information_schema.crdb_node_active_session_history` and `information_schema.crdb_cluster_active_session_history` views now include the `app_name` column, matching the underlying `crdb_internal` tables. #165367
- An error will now be reported when the database provided as the argument to a `SHOW REGIONS` or `SHOW SUPER REGIONS` statement does not exist. This bug had been present since v21.1. #161014
- Dropping a region from the system database no longer leaves `REGIONAL BY TABLE` system tables referencing the removed region, preventing descriptor validation errors. #163503
- Fixed a bug where super region zone configurations did not constrain all replicas to regions within the super region. #164285
- Fixed a bug that previously allowed the primary and secondary regions to be in separate super regions. #164943
- Fixed a bug where converting a table from `REGIONAL BY ROW` to `GLOBAL` would not clear the `skip_unique_checks` storage parameter on the primary key, even though implicit partitioning was removed. #167484
- Fixed a bug where `TRUNCATE` did not behave correctly with respect to the `schema_locked` storage parameter, and was not blocked when Logical Data Replication (LDR) was in use. #159378
- The PCR job now switches into the cutover phase more promptly after a failover is requested, terminating the replication phase more quickly and more reliably when components of the ingestion process are hung due to network errors. #166778
- Fixed an issue where long-running transactions with many statements could cause unbounded memory growth in the SQL statistics subsystem. When a transaction includes a large number of statements, the SQL statistics ingester now automatically flushes buffered statistics before the transaction commits. As a side effect, the flushed statement statistics might not have an associated transaction fingerprint ID because the transaction has not yet completed. In such cases, the transaction fingerprint ID cannot be backfilled after the fact. #158527
- Fixed a deadlock that could occur when a statistics creation task panicked. #160348
- Fixed a bug that could cause row sampling for table statistics to crash a node due to a data race when processing a collated string column with values larger than 400 bytes. This bug has existed since before v23.1. #165260
- Fixed a bug where CockroachDB might not have respected the table-level parameters `sql_stats_automatic_full_collection_enabled` and `sql_stats_automatic_partial_collection_enabled` and defaulted to using the corresponding cluster settings when deciding whether to perform automatic statistics collection on a table. #167681
- Previously, v26.1.0-beta.1 and v26.1.0-beta.2 could encounter a rare process crash when running TTL jobs. This has been fixed. #160674
- Fixed a bug introduced in v26.1.0-beta.1 in which row-level TTL jobs could encounter GC threshold errors if each node had a large number of spans to process. #161979
- Fixed a bug where an error would occur when defining a foreign key on a hash-sharded primary key without explicitly providing the primary key columns. #162608
- Fixed a rare race condition where `SHOW CREATE TABLE` could fail with a `relation does not exist` error if a table referenced by a foreign key was being concurrently dropped. #164942
- Fixed a bug in the legacy schema changer where rolling back a `CREATE TABLE` with inline `FOREIGN KEY` constraints could leave orphaned foreign key back-references on the referenced table, causing descriptor validation errors. #165551
- Fixed a bug where the `lock_timeout` and `deadlock_timeout` session settings were not honored by FK existence checks performed during insert fast path execution. This could cause inserts to block indefinitely on conflicting locks instead of returning a timeout error. #167532
- JWT authentication now returns a clear error when HTTP requests to fetch JWKS or OpenID configuration return non-`2xx` status codes, instead of silently passing the response body to the JSON parser. #158294
- Fixed a data race that could cause certificate expiration metrics (`security.certificate.expiration.node-client`, `security.certificate.expiration.client-tenant`, `security.certificate.expiration.ca-client-tenant`, and their TTL counterparts) to not update after certificate rotation via `SIGHUP`. #166664
- Fixed a bug in which inline-hints rewrite rules created with `information_schema.crdb_rewrite_inline_hints` were not correctly applied to statements run with `EXPLAIN ANALYZE`. This bug was introduced in v26.1.0-alpha.2. #161273
- Fixed a bug that prevented successfully injecting hints using `information_schema.crdb_rewrite_inline_hints` for `INSERT`, `UPSERT`, `UPDATE`, and `DELETE` statements. This bug had existed since hint injection was introduced in v26.1.0-alpha.2. #161773
- The `ascii` built-in function now returns `0` when the input is the empty string instead of an error. #159178
- Previously, CockroachDB could hit an internal error when evaluating built-in functions with `'{}'` as an argument (without explicit type casts, such as in a query like `SELECT cardinality('{}');`). This is now fixed and a regular error is returned instead (matching PostgreSQL behavior). #161835
- Fixed a bug where comments associated with constraints were left behind after the column and constraint were dropped. #159180
- Fixed a memory accounting issue that could occur when a lease expired due to a SQL liveness session-based timeout. #159527
- Fixed a bug where the `pprof` UI endpoints for allocs, heap, block, and mutex profiles ignored the `seconds` parameter and returned immediate snapshots instead of delta profiles. #160608
- Fixed a bug where generating a debug zip could trigger an out-of-memory (OOM) condition on a node if malformed log entries were present in logs using `json` or `json-compact` formatting. This bug was introduced in v24.1. #163224
- Fixed a bug in the TPC-C workload where long-duration runs (>= 4 days or indefinite) would experience periodic performance degradation every 24 hours due to excessive concurrent `UPDATE` statements resetting warehouse and district year-to-date values. #159286
- Fixed a bug in `appBatchStats.merge` where the `numEmptyEntries` field was not being properly accumulated when merging statistics. This could result in incorrect statistics tracking for empty Raft log entries. #164671
- Fixed a bug where descriptor version fetching could be incorrectly throttled by the elastic CPU limiter, potentially leading to increased query latency or timeouts under high CPU load. #166810
- Fixed a crash (`traceRegion: alloc too large`) that could occur when Go's execution tracer was enabled and a range cache lookup used a key longer than about 64 KB. #166705
- Fixed a bug where transient I/O errors (such as cloud storage network timeouts) during split or merge trigger evaluation were misidentified as replica corruption, causing the node to crash. These errors now correctly fail the operation, which is retried automatically. #167377
- Fixed a bug where executing a mutation in a subquery (e.g., as a CTE) could cause the "rows written" metrics like `sql.statements.index_rows_written.count` and `sql.statements.index_bytes_written.count` to not be incremented correctly. #167432
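For example, the `ascii` built-in fix above changes the empty-string case from an error to a zero result:

```sql
SELECT ascii('');   -- now returns 0; previously this errored
SELECT ascii('A');  -- still returns 65
```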
Performance improvements
- Database- and table-level backups no longer fetch all object descriptors from disk in order to resolve the backup targets. Now only the objects that are referenced by the targeted objects will be fetched. This improves performance when there are many tables in the cluster. #157790
- The optimizer now better optimizes query plans of statements within UDFs and stored procedures that have `IN` subqueries. #160503
- The optimizer can now better handle filters that redundantly `unnest()` an array placeholder argument within an `IN` or `ANY` filter. Previously, this pattern could prevent the filters from being used to constrain a table scan. Example: `SELECT k FROM a WHERE k = ANY(SELECT * FROM unnest($1:::INT[]))` #161816
- The query optimizer now eliminates redundant filter and projection operators over inputs with zero cardinality, even when the filter or projection expressions are not leakproof. This produces simpler, more efficient query plans in cases where joins or other operations fold to zero rows. #164212
- Improved changefeed performance when filtering unwatched column families and offline tables by replacing expensive error chain traversal with direct status enum comparisons. #159745
- Improved changefeed checkpointing performance when changefeeds are lagging. Previously, checkpoint updates could be redundantly applied multiple times per checkpoint operation. #162546
- Queries that have comparison expressions with the `levenshtein` built-in are now up to 30% faster. #160394
- Fixed a performance regression in `pg_catalog.pg_roles` and `pg_catalog.pg_authid` by avoiding privilege lookups for each row in the table. #160121
- Significantly reduced WAL write latency when using encryption at rest by properly recycling WAL files instead of deleting and recreating them. #160784
- Optimized the logic that applies zone config constraints so it no longer fetches all descriptors in the cluster during background constraint reconciliation. #160966
- Various background tasks and jobs now more actively yield to foreground work when that work is waiting to run. #159205
- Statement executions using canary stats will no longer use cached plans, which prevents cache thrashing but causes a slight increase in planning time over statement executions using stable stats. #167503
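The `levenshtein` speedup above applies to comparison filters such as the following (the table and values are hypothetical):

```sql
-- Fuzzy-match lookup; the comparison against the levenshtein() result
-- is the pattern that benefits from the faster code path.
SELECT name FROM users WHERE levenshtein(name, 'smith') <= 2;
```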
Known limitations
This section describes newly identified limitations in CockroachDB v26.2.
- Statements within views do not currently respect hint injections. The workaround is to modify the inline hints directly in the body by replacing the view. #166782
- Statements within routines do not currently respect hint injections. The workaround is to modify the inline hints directly in the body by replacing the routine. #162627