Release Notes for Couchbase Server 7.1
Release 7.1.6 (November 2023)
Couchbase Server 7.1.6 was released in November 2023. This release contains fixes to issues.
Fixed Issues
This release contains the fixes listed below.
Cluster Manager
Description | Resolution
---|---
When memcached restarted (for example, due to an OOM kill), the cluster configuration was not uploaded to memcached. As a result, SDKs could not bootstrap. | When the terse_cluster_info_uploader starts again, the cluster configuration is now refreshed in memcached.
Cross Datacenter Replication (XDCR)
Description | Resolution
---|---
Checkpoint Manager could get stuck while stopping if it had never been started. This caused a memory leak. | Checkpoint Manager can now be stopped correctly, even when it has not been started.
When stopped during connectivity issues, Checkpoint Manager could hold onto resources longer than usual. This caused temporary memory bloat. | Checkpoint Manager no longer holds on to resources when it stops during connectivity issues.
A rare timing issue during a pipeline restart prevented the XDCR DCP client from closing. This caused a memory leak. | To prevent the memory leak, the XDCR DCP client no longer starts when the pipeline is stopped.
Data Service
Description | Resolution
---|---
Computing the items-remaining DCP/Checkpoint stats exposed to Prometheus was an O(N) operation, where N is the number of items in a checkpoint. This caused various performance issues, including Prometheus stats timeouts, when checkpoints accumulated a high number of items. | The computation has been optimized to O(1).
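The general technique behind such a fix is to maintain a running counter that is updated as items are added to or removed from the checkpoint, so that reading the stat no longer requires walking the item list. The following Go sketch illustrates the idea; the types and method names are hypothetical and are not taken from the Couchbase codebase.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// checkpoint tracks queued items along with a running count, so that
// stats reads are O(1) instead of O(N) over the item list.
type checkpoint struct {
	items     []string     // queued mutations (simplified)
	remaining atomic.Int64 // maintained incrementally on enqueue/dequeue
}

func (c *checkpoint) enqueue(item string) {
	c.items = append(c.items, item)
	c.remaining.Add(1) // O(1) bookkeeping at write time
}

func (c *checkpoint) dequeue() (string, bool) {
	if len(c.items) == 0 {
		return "", false
	}
	item := c.items[0]
	c.items = c.items[1:]
	c.remaining.Add(-1)
	return item, true
}

// itemsRemaining is what a stats scrape would call: O(1), no iteration.
func (c *checkpoint) itemsRemaining() int64 {
	return c.remaining.Load()
}

func main() {
	var c checkpoint
	c.enqueue("set:doc1")
	c.enqueue("set:doc2")
	c.dequeue()
	fmt.Println("items remaining:", c.itemsRemaining()) // prints 1
}
```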
Release 7.1.5 (August 2023)
Couchbase Server 7.1.5 was released in August 2023. This release contains a new XDCR feature and fixes to issues.
New XDCR Feature
XDCR replications, specified by means of the REST API, can now use the filterBinary flag, which specifies whether binary documents should be replicated. Detailed information on the filterBinary flag is provided on the REST reference page Creating a Replication.
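For illustration, the following Go sketch creates a replication with filterBinary set to true by calling the createReplication REST endpoint. The host, credentials, remote-cluster reference, and bucket names are placeholders; consult the Creating a Replication reference page for the authoritative parameter list.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// Placeholder connection details -- substitute your own.
	endpoint := "http://127.0.0.1:8091/controller/createReplication"

	form := url.Values{}
	form.Set("fromBucket", "source-bucket")   // placeholder
	form.Set("toCluster", "remote-cluster")   // placeholder remote reference
	form.Set("toBucket", "target-bucket")     // placeholder
	form.Set("replicationType", "continuous") // the only supported value
	form.Set("filterBinary", "true")          // replicate binary documents

	req, err := http.NewRequest(http.MethodPost, endpoint,
		strings.NewReader(form.Encode()))
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("Administrator", "password") // placeholder credentials
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```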
Fixed Issues
This release contains the fixes listed below.
Cluster Manager
Description | Resolution
---|---
A janitor run during failover could corrupt the configuration by leaving failed nodes in the bucket server configuration list. | Failed nodes are now filtered out of the bucket server configuration list.
An ns_server process crash meant the Delete Bucket memcached command was not always called before bucket files were deleted later in rebalance. This caused the memcached process to crash repeatedly, causing Data Service downtime. | The Delete Bucket command is now called on memcached before a file is deleted during rebalance. This ensures memcached does not attempt to read the files.
When the REST API attempted to access unsupported stats, such as meta_latency_wt, "stats_reader" crashed on the ns_server side. | ns_server now returns a "not_found" status when the REST API is called with unsupported stats.
Cross Datacenter Replication (XDCR)
Description | Resolution
---|---
XDCR did not process documents whose body was a JSON array and which carried Extended Attributes (XATTRs): when such a document contained XATTRs and transaction filters were enabled, the transaction XATTRs were not checked. | When documents contain arrays, the transaction XATTRs are now checked, and the document is no longer prevented from being parsed as an array.
Data streamed from the Data Service over XDCR should always be streamed in order by mutation ID. However, in some scenarios, for efficiency, the Data Service streamed records that were not ordered by mutation ID. In certain situations, this out-of-sequence order (OSO) caused performance issues. | OSO mode is now available as a global override, and can be switched off for any currently deployed replications to avoid these performance issues.
Checkpoint Manager created checkpoint records out of sequence when many target nodes ran slowly. | Checkpoint Manager now creates checkpoints in sequence when target nodes are slow.
There was an issue in the ns_server stats collector when the Resource Manager checked pipeline statuses to obtain replication priorities, in order to provide more resources to higher-priority pipelines. | Resource Manager no longer checks replication pipeline statuses when all replication pipelines have the same high or low priority.
XDCR on a non-KV node froze in the UI when a replication setting was changed multiple times, causing the replication reload channel to fill up. | The replication reload channel no longer fills up when changes are made multiple times on non-KV nodes.
When target data nodes were undersized or consistently overwhelmed, XDCR memory usage could increase as it retried. | Reallocated memory is now reused, instead of generating garbage for the GC to clean up.
A Checkpoint Manager initialization error caused two types of memory leak: one in the backfill pipeline and one in the main pipeline. | The Pipeline Manager and the backfill pipeline have both been modified to prevent these memory leaks.
XDCR Checkpoint Manager instances were not cleaned up in certain circumstances, due to timing and networking issues when contacting the target, or when an invalid backfill task was fed in as input. | Checkpoint Manager instances are now cleaned up, and a flag has been added to check for invalid backfill tasks.
When a replication spec change was made on a node not running the Data Service, delete-replication hung, causing the node to return an incorrect replication configuration. | XDCR now checks whether the node is running the Data Service, and handles it correctly.
Running in IPv6-only mode with a non-encrypted remote resulted in invalid IP addresses being returned, leading to connection issues. | A valid IP address is now returned.
StatsMgr could hang while stopping, because it was still watching for notifications; this resulted in stranded goroutines. (A sketch of the underlying shutdown pattern follows this table.) | Goroutines are now stopped correctly.
In IPv4-only mode with full encryption, when only an alternate address was configured and the internal address was unresolvable, XDCR produced an error when it contacted the target data nodes. | This scenario has been fixed, so that replication can now proceed.
A legacy race condition, in which the metadata store could cause a conflict, was exposed by the binary filter improvements. | The legacy race conditions have been resolved.
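The stranded-goroutine problem described above is a common Go pitfall: a watcher goroutine blocks forever on a notification channel and never observes shutdown. The sketch below shows the standard fix, a select on both the notification source and a done channel. It is a generic illustration of the pattern, not XDCR's actual code.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// watcher consumes notifications until done is closed. Selecting on
// both channels guarantees the goroutine exits at shutdown instead of
// blocking forever on an idle notification channel.
func watcher(notifications <-chan string, done <-chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case n := <-notifications:
			fmt.Println("got notification:", n)
		case <-done:
			fmt.Println("watcher: shutting down")
			return
		}
	}
}

func main() {
	notifications := make(chan string)
	done := make(chan struct{})
	var wg sync.WaitGroup

	wg.Add(1)
	go watcher(notifications, done, &wg)

	notifications <- "stats updated"
	time.Sleep(10 * time.Millisecond)

	close(done) // signal shutdown; without this, the goroutine is stranded
	wg.Wait()   // blocks until the watcher actually exits
}
```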
Query Service
Description | Resolution
---|---
Due to how nested dependencies were handled, a sudden rise in memory utilization of the Query Service on a node caused a memory alert. The node did not recover correctly following a restart. | Nested dependencies are now handled appropriately in the ADVISE statement.
A query with multiple filters on an index key, one of which was a parameter, could produce incorrect results. This was caused by incorrectly composing the exact index spans to support the query. | The way in which exact spans are set has been modified to correct this issue.
Covering FLATTEN_KEYS() on an array index generated incorrect results. This was because a modified version of the ANY clause was applied after the index scan, which meant false positives were retained and Distinct scan rows were eliminated. | The ANY filter is now applied on the index scan itself when covering an index scan with flattened keys.
Inter-service read timeout errors were not detected or handled accordingly, so user requests failed with timeout errors without retrying on a new connection. | The error handling and retry mechanism has been modified to handle these types of timeout errors.
Under certain circumstances, a query with UNNEST used a covering index scan and returned incorrect results. The reference to the UNNEST expression should have prevented the covering index from being used, as the index did not contain the entire array. | The logic that determines covering UNNEST scans has been changed, so that such queries no longer use a covering index scan.
When an index scan had multiple spans, index selectivity was incorrectly calculated. | Index selectivity for multiple index spans is now correctly calculated.
Incorrect results were returned when a filter contained conditional query parameters. This was due to a problem in an OR clause that depended on a named parameter rather than a document. | Constant filters in the subterms of an OR clause are now detected and marked, and the extra check prevents index aggregation pushdown. When classifying expressions, any constant subterms of an OR clause under an AND are removed.
A query plan changed between Server releases, meaning the filter did not update the index when an OR clause pushed variable spans. | The OR clause has been modified to correct this issue.
Incorrect results were returned for a non-IndexScan on a constant false condition, due to incorrect handling of a FALSE WHERE clause. | The FALSE WHERE clause is now handled correctly.
Querying system:functions_cache in a cluster with multiple query nodes returned incomplete results with warnings: the result included entries from the local query node, but none from remote query nodes. This was due to a typographical error. | The typographical error has been corrected.
A panic in go_json.stateInString under parsed-value functions, caused by incorrect concurrent access, resulted in state being freed while still in use. | The concurrent access issue has been resolved.
Values returned by the OBJECT_ functions were erroneously pooled and reused by subsequent invocations. Depending on when a value was reused, the original results could be overwritten. | Pooling has been removed, eliminating the chance that values are overwritten.
cbq required a client authentication key file whenever a certificate authority file was used. | cbq now accepts a certificate authority file without a client key file, enabling use with username and password credentials.
When appropriate optimizer statistics were available to the Cost-Based Optimizer (CBO), for a query with ORDER BY and multiple candidate indexes, CBO unconditionally favored an index that provided ordering. Such indexes were not always the best ones to use. | CBO now performs cost-based comparison of such indexes.
Backup Service
Description | Resolution
---|---
Backup and restore did not complete successfully when bucket names contained a period (.). This was due to a filtering issue in which the character was not correctly validated. | Backup and restore have been fixed to handle period characters in bucket names correctly.
Index Service
Description | Resolution
---|---
For each bucket, the indexer queued a minimum of 256K mutations before throttling for memory. When the indexer was already under high memory pressure, queuing these mutations added to the pressure. | The number of queued mutations has been reduced, so that the indexer handles high-memory-pressure situations much better.
During scaling, a GSI indexer rebalance froze and did not complete successfully. This was because an index snapshot was not correctly deleted and recreated. | A flag now handles snapshots, ensuring they are correctly deleted or recreated when indexes are updated during rebalancing.
When the indexer was slow to process mutations, a rare race condition resulted in incorrect book-keeping for the indexer, so index builds did not complete. | The race condition has been fixed.
When a partition key was based on a secondary field of a document and a delete mutation occurred, the indexer could not determine which partition the document belonged to. This resulted in delete operations on all partitions. | For partitioned indexes with the document ID as the only partition key, delete mutations are now routed only to the partition where the document belongs. This improves the performance of delete and expiration mutations. (A sketch of this routing idea follows this table.)
The indexer.settings.rebalance.redistribute_indexes flag did not affect partitioned indexes, which were by default considered for movement during rebalance. | The indexer.settings.rebalance.redistribute_indexes flag now considers both partitioned and non-partitioned indexes when restricting the number of index movements during a rebalance.
When ALTER INDEX updated the replica count, new replicas were not built immediately if the original index definition was deferred. | New replicas are now built when the replica count is updated for deferred indexes: the status of existing index instances is checked, and if they are ready, a build of the new instance is triggered.
Watcher threads created by metadata_provider during rebalance were not cleaned up. | The threads are now closed after rebalance finishes.
The indexer contained incorrect logic for identifying active indexer nodes during a multi-service rebalance. This caused potential downtime and failures in index creation, builds, and other DDL operations. | The information used by TranslatePort has been updated to use the node Services endpoint, correcting this issue.
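The routing improvement above relies on the fact that, when the document ID is the only partition key, the owning partition can be computed directly from the ID carried by a delete mutation. The following Go sketch illustrates a generic hash-mod scheme; the hash function and routing logic are illustrative assumptions, not the Index Service's actual partition-mapping code.

```go
package main

import (
	"fmt"
	"hash/crc32"
)

const numPartitions = 8

// partitionForDocID maps a document ID to a partition. When the
// partition key is the document ID itself, this can be computed for
// any mutation -- including deletes, which carry no document body.
func partitionForDocID(docID string) int {
	return int(crc32.ChecksumIEEE([]byte(docID))) % numPartitions
}

func main() {
	// A delete mutation only carries the document ID, so routing by
	// ID avoids broadcasting the delete to every partition.
	for _, id := range []string{"user::1001", "user::1002", "order::77"} {
		fmt.Printf("delete of %q routed to partition %d\n",
			id, partitionForDocID(id))
	}
}
```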
Data Service
Description | Resolution
---|---
When expired documents were identified during compaction, the Data Service queued a read of the documents' metadata as part of expiry processing. No upper bound was imposed on the size of this queue, so the Bucket quota could be exceeded for workloads in which large numbers of documents expired in a short time. | Metadata reads for TTL processing are no longer queued; instead, they are processed inline. Consequently, the Bucket quota is no longer exceeded.
A shared allocation cache (tcache) between buckets resulted in stats drift. This caused higher-than-normal memory fragmentation. | Dedicated tcaches are now used for buckets, and jemalloc has been changed to support an increased number of tcaches.
Workloads involving bulk data ingestion, or Time-To-Live (TTL) values expiring at the same time, caused a sudden increase in memory fragmentation. | The defragmenter now runs more frequently, to better cope with sudden increases in fragmentation.
A rollback loop affected legacy clients when collections were used and a tombstone newer than the last mutation in the default collection was purged. | The lastReadSeqno is now incremented when the client is not collection-aware.
In rare cases, after a failover or memcached restart, a replica rollback under memory pressure could cause a crash in the Data Service. | Memory-pressure recovery logic (item expelling) is now skipped while a replica rollback is in progress.
XDCR, or a restore from backup, entered an endless loop when attempting to overwrite a document that had been deleted or had expired some time ago with a deleteWithMeta operation. This was due to a specific unanticipated state in memory; CPU usage increased, and the connection became unusable for further operations. | deleteWithMeta is now resilient to temporarily non-existent values with the xattr datatype.
When a .NET SDK client on Windows 10 was used and client certificates were enabled on Couchbase Server, the Data Service did not establish a connection, and client bootstrap failed with an OpenSSL "session id context uninitialized" error. | The Data Service has been updated to disable TLS session resumption. (A sketch of disabling session resumption follows this table.)
GET_META requests for deleted items fetched metadata into memory, which was not evicted in value-eviction buckets. | These metadata items are now cleaned up when the expiry pager runs.
DCP clients streaming OSO backfill snapshots under Magma could observe duplicate documents in a disk snapshot. This happened when the stream was paused and resumed, and the resume point was wrongly set to a key already processed in the stream. | OSO backfill in Magma now sets the correct resume point after a pause.
A spurious auto-failover could occur when Magma compaction visited a TTL'd document that had already been deleted. | A document-not-found result no longer increments the number of read failures.
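For context on the session-resumption fix, the sketch below shows how a TLS server can disable session resumption in Go: session tickets are turned off, so every client must perform a full handshake. This is only an illustration of the concept; the Data Service itself runs on OpenSSL, so the actual change differs, and the certificate paths here are placeholders.

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12,
			// Disable session resumption: no session tickets are
			// issued, so clients cannot resume and must complete a
			// full handshake on every connection.
			SessionTicketsDisabled: true,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello over TLS\n"))
		}),
	}

	// cert.pem and key.pem are placeholder paths to a server
	// certificate and its private key.
	log.Fatal(server.ListenAndServeTLS("cert.pem", "key.pem"))
}
```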
Eventing Service
Description | Resolution
---|---
A server regression in version 7.1.2 might have caused a cURL request-encoding issue. | The default behavior has been reverted to that of version 7.1.0. In addition, there is now an optional argument, "url_encode_version", with possible values 6.6.2, 7.1.0, and 7.2.0. This argument facilitates the selection of an encoding scheme during upgrades, if necessary.
The Eventing producer process terminated the Eventing consumer process when it did not receive a heartbeat from the consumer process. | The message-receiver loop now exits only upon receiving a proper termination command. (A sketch of this loop pattern follows this table.)
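The shape of that fix, a receiver loop that treats heartbeats as routine traffic and exits only on an explicit termination command, is sketched below in Go. The message types and loop are hypothetical illustrations, not the Eventing Service's actual implementation.

```go
package main

import "fmt"

type msgKind int

const (
	heartbeat msgKind = iota
	work
	terminate
)

type message struct {
	kind    msgKind
	payload string
}

// receive processes messages until an explicit terminate command
// arrives. A missed or delayed heartbeat no longer ends the loop;
// only the termination command does.
func receive(msgs <-chan message) {
	for m := range msgs {
		switch m.kind {
		case heartbeat:
			fmt.Println("heartbeat ok")
		case work:
			fmt.Println("processing:", m.payload)
		case terminate:
			fmt.Println("termination command received; exiting")
			return
		}
	}
}

func main() {
	msgs := make(chan message, 4)
	msgs <- message{kind: heartbeat}
	msgs <- message{kind: work, payload: "timer event"}
	msgs <- message{kind: terminate}
	receive(msgs)
}
```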
Analytics Service
Description | Resolution
---|---
External collections could not be created using Azure Managed Identity. | The Azure dependencies have been updated to correct this issue.
Query results could be unnecessarily converted to JSON twice when documents were large. | The query result is now converted to JSON once for all documents.
When the Prometheus stats returned from Analytics exceeded four kibibytes, the status code was inadvertently set to 500 (Internal Error), resulting in a large number of warnings in the Analytics warning log; Couchbase Server discarded these statistics. | A 200 (OK) status code is now correctly returned when the size of the Prometheus stats exceeds 4KiB, allowing the stats to be recorded properly. The warning is no longer displayed.
Storage
Description | Resolution
---|---
Inconsistencies were observed in which a single Magma bucket in a database took a long time to warm up. | The sequence-index scan has been optimized for tombstones with zero-size values. The optimization covers lookup by key, sequence iteration, and key iteration; documents with zero-size values are now placed in both the key index and the sequence index.
Release 7.1.4 (March 2023)
Couchbase Server 7.1.4 was released in March 2023. This release contains fixes to issues.
This release contains the fixes listed below.
Cluster Manager
- Alerts reports "IP address seems to have changed" for nxdomain errors
XDCR
- XDCR panic when filtering
- Backfill Request Handler deadlock
- CheckpointMgr hang on P2P RespCh
- Bucket topology service concurrent map iteration and map write
Query Service
- Query using IntersectScan vs UnionScan
- FTS SEARCH() with memory_quota fails
- INSERT/UPSERT options should not be shared
Index Service
- Log flooded with "FlushTs Not Snapshot Aligned"
- Address plasma rpVersion (uint16) overflow
- Panic in NodeTable::Get - logging improvements
- Change log level for watcher connection terminations
- Rebalance hung on a dataplane for more than an hour
- Optimise projector CPU during XATTR processing
- Perf tests stuck due to failed cbindex
- Use streamId instead of index.Stream to determine stream catchup pending
- Index build stuck on "Check pending stream" during shard rebalance testing
- Index build can hang in mixed mode due to projector skipping transaction records
Release 7.1.3 (November 2022)
Couchbase Server 7.1.3 was released in November 2022. This release contains fixes to issues.
XDCR
- XDCR - Unable to create replications
- AdvFilter upgrade happens pre-emptively, leading to missed documents
- Inter-cluster XDCR failing
- XDCR Metakv callbacks racing when remote cluster ref is added/changed
Query Service
- LIMIT clause is not working properly with ORDER BY clause
- Potential for request stall if stream operator fails to notify request that it has terminated
Release 7.1.2 (October 2022)
Couchbase Server 7.1.2 was released in October 2022. This release contains new features and fixes to issues.
Features
The following new features are provided.
- The Search, Eventing, and Analytics Services now support the Magma storage engine. See Storage Engines.
- The Search Service now supports a Hebrew analyzer. See Understanding Analyzers.
- The Analytics Service now supports the Parquet file format for external datasets. See Creating a Collection on an External Link and Analytics Collection Specification.
- A REST API is now provided to ensure that only nodes with conformant FQDN and CIDR patterns can be added to the cluster. See Restrict Node-Addition.
- A user's password can now be changed by means of the REST API, without roles needing to be specified. See Create a Local User and Assign Roles.
- The Search Service now supports higher-dimensional spatial structures via GeoJSON, for both query requests and documents. See Geospatial Queries.
- The Index Service can now optionally create indexes on missing leading keys (see the sketch after this list). See Index Key Attributes.
- Couchbase Server now provides configurable alerts, to be triggered when memory thresholds are exceeded. See Setting Alerts.
- The Eventing Service now allows multiple collections to be listened to. See Eventing Keyspaces.
- Direct backup to the Azure blob store, using the cbbackupmgr CLI or the Backup Service, is GA in 7.1.2. See Cloud Backup.
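As a sketch of the missing-leading-key feature, the statement below applies the INCLUDE MISSING modifier to the leading index key, submitted here through the Query Service REST endpoint from Go. The host, credentials, index name, and keyspace are placeholders; see Index Key Attributes for the authoritative syntax.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// INCLUDE MISSING on the leading key means documents without the
	// "type" field are still indexed (keyspace names are placeholders).
	stmt := `CREATE INDEX idx_type_name
	         ON demo.inventory.hotel(type INCLUDE MISSING, name)`

	payload, _ := json.Marshal(map[string]string{"statement": stmt})

	req, err := http.NewRequest(http.MethodPost,
		"http://127.0.0.1:8093/query/service", // placeholder query node
		bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("Administrator", "password") // placeholder credentials
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```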
New Supported Platforms
This release adds support for the following platforms:
- ARM v8 now supported on Ubuntu 20.04 (AWS Graviton)
See Supported Platforms for the complete list of supported platforms, and notes on deprecated platforms.
Deprecated Platforms and Procedures
The following platforms and procedures are deprecated:
- SUSE Linux Enterprise Server 12 versions earlier than SP2 are deprecated; in the future, they will no longer be supported.
See Supported Platforms for the complete list of supported platforms, and notes on deprecated platforms.
Data Service
- Wait for seqno persistence won't time out on an idle vbucket
- Memcached crashes in 20-bucket throughput test due to exception
- Vbucket stats call to KV can time out during delta node recovery preparation
Query Service
- Improve pagination queries with fetch
- Race condition between stop signal and timeout
- Push filters to index scan to potentially reduce fetch size
- UNNEST query 'usedMemory' issue when using Query Memory Quota
- Include missing key Index syntax on Index workbench doesn't show include keyword in definition
- Subqueries should be advised, explained and monitored
Index Service
- With collections, the indexer should index leading MISSING entries
- Indexer blocked during storage warmup on MOI storage, causing rebalance failure
- Include missing key Index syntax on Index workbench doesn't show include keyword in definition
Analytics Service
- SELECT * query throws "Failure contacting server" for Parquet files
Eventing Service
- Eventing function deployment taking a long time
- appcode REST API returns bytes instead of string
- Log function scope for lifecycle operation audit logs
- Unable to modify function settings when user has only the eventing_manage_functions role
- Eventing multi-collection: inter-function recursion not detected in the case of an SBM handler
- Eventing Service should honor the CPU and memory limits set in cgroups
- Memory limits are not checked while setting eventingMemoryQuota via the REST API
- Multi-collection Eventing: Eventing leaks source bucket mutation to Eventing consumer
- Function app-log Write hangs when called after Close
- Incorrect query param encoding for curl binding, and path param is not encoded
- Change in error returned when a non-existent bucket is used in function creation
- Eventing function should be able to listen to multiple collections in a bucket at the same time
- Timers handler stuck in deploying state after offline upgrade from 6.6.5 to 7.1.1
- Log a system event when an Eventing function is auto-undeployed due to RBAC changes
- Number of CPU cores mentioned in UI warning does not take into account container limits
- LCB_ERR_TIMEOUT thrown when keyspace for a bucket binding does not exist
- Eventing multi-collection: function deployment successful for a function listening at scope level, even though the scope does not exist
Release 7.1.1 (July 2022)
Couchbase Server 7.1.1 was released in July 2022. This maintenance release contains fixes to issues.
Data Service
- Memcached hangs when no passphrase is passed for encrypted private key
XDCR
- XDCR does not update memcached flag/body after txn xattribute removal if user xattr is not found
- Add authType back to bucket properties in pools/default/buckets/bucket-name
Query Service
- Negative integer in the 64-bit range causes rounding
- Stop session hangs
- IN/NOT IN filters not using hash for evaluation (continued)
- LEFT JOIN breaks with BETWEEN operator on a non-existent attribute
- WITH clause distribution over union queries deviates from the SQL standard
- Ad hoc query index selection issue with LIKE as index condition and query parameters
- Refresh_cluster_map fails with "ERROR 199 : N1QL: Invalid query service endpoint"
Eventing Service
- Function causing recursion is missing from ERR_INTER_BUCKET_RECURSION error description
Release 7.1 (May 2022)
Couchbase Server 7.1 was released in May 2022. This release contains new features, enhancements, and fixes.
New Features
This section highlights the notable new features and improvements in this release.
- Analytics shadow data may now be replicated up to 3 times, to ensure high availability. Refer to General Settings.
- Analytics now supports Analytics views and tabular Analytics views. Refer to Analytics Views.
- The new Tableau Connector provides integration between tabular Analytics views and the Tableau interactive data visualization platform. Refer to Introduction.
- The Analytics Service now supports external datasets on Azure Blob storage. Refer to Managing Links and Analytics Links REST API.
- Analytics now supports array indexes. Refer to Using Indexes and Data Definition Language (DDL).
- The cost-based optimizer may now consider different join orders, and can choose the optimal join order based on cost information. Refer to Join Enumeration.
- The Query Service now supports optimizer hints within queries, using a specially formatted hint comment (see the sketch after this list). Refer to Optimizer Hints.
- Couchbase Server now permits multiple root certificates to be maintained in a trust store for the cluster. See Using Multiple Root Certificates.
- Couchbase Server now supports PKCS #1 and PKCS #8, in each case only for use with private keys. See Private Key Formats.
- Use of encrypted private keys is now supported for certificate management. Registration procedures are provided for encrypted private keys associated with node certificates. See JSON Passphrase Registration.
- System Events are now provided, to record significant events on the cluster. See System Events.
- New roles are provided for the administration of Sync Gateway, especially in the context of Couchbase Capella. These roles are listed at Roles.
- TLS 1.3 cipher-suites can now be used by all services, and by the Cluster Manager, XDCR, and Views. See On the Wire Security.
- Heightened security is now provided for adding nodes to clusters. Once a cluster is using uploaded certificates, a node that is to be added must itself be provisioned with conformant certificates before addition can be successfully performed. The new node is now always added over an encrypted connection. See Adding and Joining New Nodes.
- The scalability of indexing is enhanced by the flattening of arrays. See Format of Query Predicate.
- Automatic failover can now fail over more than three nodes concurrently. See Automatic Failover. This improvement has permitted the removal of pre-7.1 interfaces that were specific to triggering auto-failover for server groups. Consequently, in order to ensure successful auto-failover of a server group, the maximum count for auto-failover must be established by the administrator as a value equal to or greater than the number of nodes in the server group.
  Note that the pre-7.1 interfaces for triggering auto-failover for server groups have been removed from 7.1: programs that attempt to use the pre-7.1 interfaces with 7.1+ will fail.
  Note also that in 7.1, automatic failover of the Index Service is supported.
  Updated interfaces for 7.1+ are documented in Node Availability, Enabling and Disabling Auto-Failover, and Retrieving Auto-Failover Settings.
- Improvements have been made to rebalancing algorithms, so that active buckets, services, and replicas are spread across different server groups, even when server groups are unequal. See Server Group Awareness.
- The Magma Storage Engine has been added to 7.1 as an Enterprise Edition feature, allowing for higher performance with very large datasets. Magma is a disk-based engine, and so is highly suited to datasets that will not fit in available memory. You can find more details on Magma in Storage Engines.
- The Eventing Service now has full RBAC support, allowing non-administrative users to create and manage Eventing Functions, subject to the user's assigned resource privileges. You can find more details in Eventing Role-Based Access Control.
- The Index Service now uses smart batching to reduce the time and resources required to move index metadata, and to rebuild indexes at their new locations during rebalance. See Smart Batching.
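To illustrate the hint-comment syntax from the optimizer-hints item above, the statements below embed hints in a /*+ ... */ comment placed immediately after the SELECT keyword. INDEX, ORDERED, and USE_HASH are documented hints; the alias, index, and keyspace names here are placeholder assumptions. Refer to Optimizer Hints for the full list of supported hints.

```go
package main

import "fmt"

func main() {
	// A single index hint: ask the optimizer to use idx_city for the
	// keyspace aliased as h (names are placeholders).
	hinted := `SELECT /*+ INDEX(h idx_city) */ h.name, h.city
	           FROM demo.inventory.hotel AS h
	           WHERE h.city = "Paris"`

	// Multiple hints can appear in one comment block: ORDERED fixes
	// the join order to match the FROM clause, and USE_HASH requests
	// a hash join for the alias r.
	joined := `SELECT /*+ ORDERED USE_HASH(r) */ h.name, r.rating
	           FROM demo.inventory.hotel AS h
	           JOIN demo.inventory.review AS r ON r.hotel_id = META(h).id`

	fmt.Println(hinted)
	fmt.Println(joined)
}
```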
Enhancements
The following enhancements are provided in this release:
- The Analytics function object_concat has been updated to support dynamic uses, similar to the more general OBJECT constructor functionality that is available in the Query Service. Refer to object_concat.
- XDCR checkpointing is now entirely persistent through topology changes on the source cluster. This provides improved performance when failover and rebalance occur on the source cluster.
- The Plasma Storage Engine has been enhanced with per-page Bloom filters and in-memory compression. For information, see Plasma Memory Enhancements.
- Root and intermediate certificates can now be managed while node-to-node encryption is enabled. See Certificate Management and Node-to-Node Encryption.
New Supported Platforms
This release adds support for the following platforms:
- Apple macOS v11.6 (Big Sur), for development only
- Apple macOS v12.x (Monterey), for development only
- Amazon Linux (ARM)
- Debian 11.x
- Microsoft Windows Server 2022
See Supported Platforms for the complete list of supported platforms, and notes on deprecated platforms.
Deprecated Features and Platforms
Deprecated and Removed Platforms
The following platforms are deprecated and will be removed in a future release:
- Apple macOS v10.14 (Mojave) – removed
- Apple macOS v10.15 (Catalina) – deprecated
- CentOS 7.x – deprecated
- CentOS 8.x – removed
- Debian 9.x – removed
- Microsoft Windows Server 2016 – removed
- Microsoft Windows Server 2016 (64-bit, DataCenter Edition) – removed
- Oracle Linux 7.x – deprecated
- Red Hat Enterprise Linux (RHEL) 7.x – deprecated
- Ubuntu 18.x – deprecated
Deprecation of Certificate Upload API
The POST method and /controller/uploadClusterCA URI, which historically have been used to upload an appropriately configured certificate to the cluster, so that it becomes the root certificate for the cluster, are deprecated in 7.1.
For security reasons, in versions 7.1 and after, by default, this method and URI can continue to be used on localhost only. However, this default setting can be changed, if required. For details, see Deprecated Certificate Management APIs.
Note that new methods and URIs for certificate management are summarized on the page Certificate Management API.
Removal of pre-7.1 Server-Group Auto-Failover Interfaces
Automatic failover can now fail over more than three nodes concurrently: this improvement has permitted the removal of pre-7.1 interfaces that were specific to triggering auto-failover for server groups. Consequently, in 7.1+, in order to ensure successful auto-failover of a server group, the maximum count for auto-failover must be established by the administrator as a value equal to or greater than the number of nodes in the server group.
Note that the pre-7.1 interfaces for triggering auto-failover for server groups have been removed from 7.1: therefore, programs that attempt to use the pre-7.1 interfaces with 7.1+ will fail.
An overview of auto-failover is provided in Automatic Failover. Updated interfaces for 7.1+ are documented in Node Availability, Enabling and Disabling Auto-Failover, and Retrieving Auto-Failover Settings.
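As an illustration of configuring the new behavior, the Go sketch below sets the auto-failover maximum count via the settings/autoFailover REST endpoint, so that it equals or exceeds the server-group size. The host, credentials, and values are placeholders; Enabling and Disabling Auto-Failover documents the endpoint's parameters.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	form := url.Values{}
	form.Set("enabled", "true")
	form.Set("timeout", "120") // seconds before failover is triggered
	// With a three-node server group (placeholder size), maxCount must
	// be at least 3 for the whole group to be failed over.
	form.Set("maxCount", "3")

	req, err := http.NewRequest(http.MethodPost,
		"http://127.0.0.1:8091/settings/autoFailover", // placeholder host
		strings.NewReader(form.Encode()))
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("Administrator", "password") // placeholder credentials
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```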
Fixed Issues
This release contains the fixes listed below.
Installation
- Fix cbupgrade for single-node IPv6 clusters
- Windows installer always rolls back during install
Cluster Manager
- The old bucket 'sasl_password' should be effectively removed
- The versions REST API should be authenticated
Storage
- Cleaning up of the cluster fails with "Rebalance exited with reason {buckets_shutdown_wait_failed"
Data Service
- Limit the Checkpoint memory usage
- Cannot make persistent change to num nonio/auxio threads
- Align roles to updated permissions in memcached
Views
- ViewEngine doesn't handle the case of an empty default collection
- Views 8092 REST API leaking version info
Analytics Service
- On corrupt remote link details in metakv, the Analytics cluster becomes permanently unusable on restart
Query Service
- Query log format
- Support FTS's docid_regexp mode for N1QL
- Mutation failure may not report the error
- Public interface documentation on parsing the 12009 DML error
- LIKE functions' escape character should be optional
Index Service
- Smart batching of index builds during rebalance
- Rebalance button not enabled after quorum-loss failover, even when indexing has partitioned indexes
- Internal server error raised while performing a backup on an index node using cbbackupmgr
- Index build stuck during rebalance due to a large number of pending items
Search Service
- Rebalance optimisations via index file transfer across nodes
- Bind only to IPv4 addresses when invoked with the IPv4-only cluster-wide setting
- FTS: apply RBAC only for target collections in a multi-collection index
- n1fty to upgrade to blevesearch/sear for the verification phase
- Support encrypted certificate/key/password in Search
- System Event Log for Search
- Multiple Root CA certificates for FTS
- Search UI should be able to accept queries as objects
- Add the Croatian language (hr) to the list of supported languages
- SEARCH_META().score behaves differently from SEARCH_SCORE() in some N1QL queries
Known Issue
This release contains the following known issue.
Query Service
- Implement defs.CheckMixedModeCallback for mixed-mode checks. Summary: Any attempt to execute a function with N1QL UDFs replicated from a 7.1 node will fail with "no library found in worker" on a 7.0.x node. Workaround: If possible, all nodes in the cluster should be running version 7.1 or higher.