Query
You can query for documents in Couchbase using the SQL++ query language, a language based on SQL but designed for structured and flexible JSON documents. Querying can solve typical programming tasks such as finding a user profile by email address, Facebook login, or user ID.
Our query service uses SQL++ (formerly N1QL), which will be fairly familiar to anyone who has used any dialect of SQL. Further resources for learning about SQL++ are listed at the bottom of the page. Before you get started you may wish to check out the SQL++ intro page, or just dive in with a query against our "travel-sample" data set. In that case, note that before you can query a bucket you must define at least one index. Defining a primary index on a bucket lets you issue non-covered queries against it.
Use cbq, our interactive Query shell. Open it, and enter the following:
CREATE PRIMARY INDEX ON `travel-sample`
or replace travel-sample with a different Bucket name to build an index on a different dataset.
The default installation places cbq in /opt/couchbase/bin/ on Linux, /Applications/Couchbase Server.app/Contents/Resources/couchbase-core/bin/cbq on OS X, and C:\Program Files\Couchbase\Server\bin\cbq.exe on Microsoft Windows.
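If you prefer to stay in Python, the same statement can also be run through the SDK; a minimal sketch, assuming the connected cluster object from the Getting Started example below:
# A minimal sketch: create the primary index from the SDK instead of cbq.
# Assumes `cluster` is the connected Cluster object from the Getting Started example below.
from couchbase.exceptions import CouchbaseException

try:
    cluster.query("CREATE PRIMARY INDEX ON `travel-sample`").execute()
except CouchbaseException as ex:
    # fails if, for example, the primary index already exists
    print(f"Index creation failed: {ex}")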
Note that building indexes is covered in more detail on the Query concept page — and in the API Reference.
Getting Started
After familiarizing yourself with the basics of how the SQL++ query language works and how to run queries from the UI, you can use it from the Python SDK. Here’s a complete example of doing a query and handling the results:
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions, QueryOptions
from couchbase.auth import PasswordAuthenticator
from couchbase.exceptions import CouchbaseException

cluster = Cluster.connect(
    "couchbase://your-ip",
    ClusterOptions(PasswordAuthenticator("Administrator", "password")))
bucket = cluster.bucket("travel-sample")
collection = bucket.default_collection()

try:
    result = cluster.query(
        "SELECT * FROM `travel-sample`.inventory.airport LIMIT 10",
        QueryOptions(metrics=True))

    for row in result.rows():
        print(f"Found row: {row}")

    print(f"Report execution time: {result.metadata().metrics().execution_time()}")
except CouchbaseException as ex:
    import traceback
    traceback.print_exc()
When using Couchbase Server versions earlier than 6.5, you must create a valid Bucket connection using cluster.bucket(name) before you can execute queries.
Let’s break it down. A query is always performed at the Cluster level, using the query method. It takes the statement as a required argument and then allows you to provide additional options if needed. Once a result returns, you can iterate over the returned rows and/or access the QueryMetaData associated with the query.
Queries & Placeholders
Placeholders allow you to specify variable constraints for an otherwise constant query. There are two variants of placeholders: positional and named parameters. Positional parameters use an ordinal placeholder for substitution, while named parameters use variables. A named or positional parameter is a placeholder for a value in the WHERE, LIMIT, or OFFSET clause of a query. Note that both parameters and options are optional.
result = cluster.query(
    "SELECT ts.* FROM `travel-sample`.inventory.airport WHERE city=$1",
    "San Jose")

result = cluster.query(
    "SELECT ts.* FROM `travel-sample`.inventory.airport WHERE city=$1",
    QueryOptions(positional_parameters=["San Jose"]))

result = cluster.query(
    "SELECT ts.* FROM `travel-sample`.inventory.airport WHERE city=$city",
    city='San Jose')

result = cluster.query(
    "SELECT ts.* FROM `travel-sample`.inventory.airport WHERE city=$city",
    QueryOptions(named_parameters={"city": "San Jose"}))
The complete code for this page’s example can be found at n1ql_ops.py.
Which style you choose is up to you; for readability in more complex queries we generally recommend using named parameters.
Note that you cannot use parameters in all positions. If you put one in an unsupported place, the server will respond with a ParsingFailedException or similar.
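For instance, identifiers such as keyspace or field names cannot be supplied as parameters; only values can. A minimal sketch of what this looks like, assuming the cluster connection from above:
from couchbase.exceptions import ParsingFailedException

try:
    # keyspace names cannot be parameterized, so this parse fails on execution
    result = cluster.query(
        "SELECT * FROM $1 LIMIT 10",
        QueryOptions(positional_parameters=["`travel-sample`.inventory.airport"]))
    for row in result:
        print(row)
except ParsingFailedException as ex:
    print(f"Query could not be parsed: {ex}")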
The Query Result
When performing a query, the response you receive is a QueryResult. If no error is returned, the request succeeded and the result provides access to both the rows returned and the associated QueryMetaData.
result = cluster.query(
    "SELECT * FROM `travel-sample`.inventory.airline LIMIT 10")

# iterate over rows
for row in result:
    # each row is a dict; with SELECT * the document is nested under the keyspace name
    try:
        name = row["airline"]["name"]
        callsign = row["airline"]["callsign"]
        print(f"Airline name: {name}, callsign: {callsign}")
    except KeyError:
        print("Row does not contain 'name' key")
The SDK executes queries lazily, meaning the query is only run against the cluster when you try to use the QueryResult object.
You therefore cannot iterate over the QueryResult multiple times; attempting to do so raises an AlreadyQueriedException.
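If you need to traverse the rows more than once, buffer them first; a minimal sketch:
result = cluster.query(
    "SELECT * FROM `travel-sample`.inventory.airline LIMIT 10")

# materialize the rows into a list so they can be traversed repeatedly;
# iterating the QueryResult itself a second time raises AlreadyQueriedException
rows = list(result)
names = [row["airline"]["name"] for row in rows]
print(f"Fetched {len(rows)} rows")
print(f"Airline names: {names}")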
The QueryMetaData provides insight into some basic profiling/timing information, as well as information like the ClientContextID.
Name | Description |
---|---|
request_id() | Returns the request identifier of this request. |
client_context_id() | Returns the context ID either generated by the SDK or supplied by the user. |
status() | An enum simply representing the state of the result. |
metrics() | Returns metrics provided by the query for the request, if enabled. |
signature() | If a signature is present, it will be available to consume in a generic fashion. |
warnings() | Non-fatal errors are available to consume as warnings on this method. |
profile() | If enabled, returns additional profiling information of the query. |
For example, here is how you can print the execution_time of a query:
result = cluster.query("SELECT 1=1", QueryOptions(metrics=True))
for row in result:
print(f"Result: {row}")
print(f"Execution time: {result.metadata().metrics().execution_time()}")
Query Options
The query service provides an array of options to customize your query. The following table lists them all:
Name | Description |
---|---|
client_context_id | Sets a context ID returned by the service for debugging purposes. |
positional_parameters | Allows you to set positional arguments for a parameterized query. |
named_parameters | Allows you to set named arguments for a parameterized query. |
priority | Assigns a different server-side priority to the query. |
raw | Escape hatch to add arguments that are not covered by these options. |
read_only | Tells the client and server that this query is read-only. |
adhoc | If set to false, will prepare the query and later execute the prepared statement. |
consistent_with | Allows the query to be consistent with previously written mutations ("read your own writes"). |
max_parallelism | Tunes the maximum parallelism on the server. |
metrics | Enables the server to send metrics back to the client as part of the response. |
pipeline_batch | Sets the batch size for the query pipeline. |
pipeline_cap | Sets the cap for the query pipeline. |
profile | Allows you to enable additional query profiling as part of the response. |
scan_wait | Allows you to specify a maximum scan wait time. |
scan_cap | Specifies a maximum cap on the query scan size. |
scan_consistency | Sets a different scan consistency for this query. |
query_context | Allows you to set the target bucket and/or scope. |
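As an illustration, several options can be combined on a single query; a minimal sketch using metrics, read_only, and adhoc:
# A minimal sketch combining several options on one query.
# adhoc=False asks the SDK to prepare the statement and reuse the prepared form.
result = cluster.query(
    "SELECT ts.* FROM `travel-sample`.inventory.hotel LIMIT 10",
    QueryOptions(metrics=True, read_only=True, adhoc=False))

for row in result:
    print(f"Found row: {row}")
print(f"Execution time: {result.metadata().metrics().execution_time()}")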
Scan Consistency
By default, the query engine will return whatever is currently in the index at the time of query (this mode is also called QueryScanConsistency.NOT_BOUNDED). If you need to include everything that has just been written, a different scan consistency must be chosen. If QueryScanConsistency.REQUEST_PLUS is chosen, it will likely take a bit longer to return the results, but the query engine will make sure that it is as up-to-date as possible.
result = cluster.query(
    "SELECT * FROM `travel-sample`.inventory.airline LIMIT 10",
    QueryOptions(scan_consistency=QueryScanConsistency.REQUEST_PLUS))
You can also use consistent_with=MutationState for a more narrowed-down scan consistency. Construct the MutationState from individual `MutationToken`s that are returned from KV `MutationResult`s to make sure at least those mutations are visible. Depending on the index update rate this might provide a speedier response.
new_airline = {
    "callsign": None,
    "country": "United States",
    "iata": "TX",
    "icao": "TX99",
    "id": 123456789,
    "name": "Howdy Airlines",
    "type": "airline"
}

res = collection.upsert(
    "airline_{}".format(new_airline["id"]), new_airline)

ms = MutationState(res)

result = cluster.query(
    "SELECT ts.* FROM `travel-sample`.inventory.airline LIMIT 10",
    QueryOptions(consistent_with=ms))
Client Context ID
The SDK will always send a client context ID with each query, even if none is provided by the user. By default a UUID will be generated that is mirrored back from the query engine and can be used for debugging purposes. A custom string can always be provided if you want to introduce application-specific semantics into it (so that for example in a network dump it shows up with a certain identifier). Whatever is chosen, we recommend making sure it is unique so different queries can be distinguished during debugging or monitoring.
import uuid

result = cluster.query(
    "SELECT ts.* FROM `travel-sample`.inventory.hotel LIMIT 10",
    QueryOptions(client_context_id="user-44{}".format(uuid.uuid4())))
ReadOnly
If the query is marked as read-only, both the server and the SDK can improve processing of the operation. On the client side, the SDK can be more liberal with retries because it can be sure that there are no state-mutating side-effects happening. The query engine will ensure that no data is actually mutated when parsing and planning the query.
result = cluster.query(
    "SELECT ts.* FROM `travel-sample`.inventory.hotel LIMIT 10",
    QueryOptions(read_only=True))
Streaming Large Result Sets
By default, the Python SDK streams the result set from the server: the client opens a persistent connection and reads only the header until the rows are enumerated; then each row (a JSON object) is deserialized one at a time.
This decreases pressure on Garbage Collection and helps to prevent OutOfMemory errors.
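In practice this means you can walk a large result set row by row without buffering it all in memory; a minimal sketch, where process() is a hypothetical per-row handler:
# A minimal sketch of streaming a large result set.
# Rows are deserialized one at a time as the iterator advances,
# so the full result set is never held in memory.
result = cluster.query("SELECT ts.* FROM `travel-sample`.inventory.hotel")

count = 0
for row in result:
    process(row)  # hypothetical per-row handler
    count += 1
print(f"Processed {count} rows")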
Async APIs
In addition to the blocking API on Cluster, the SDK provides asyncio and Twisted APIs on ACluster or TxCluster respectively. If you are unsure which API to use, we recommend looking at the asyncio API first. Simple queries with both asyncio and Twisted APIs look similar to the blocking one:
from acouchbase.cluster import Cluster, get_event_loop
from couchbase.options import ClusterOptions, QueryOptions
from couchbase.auth import PasswordAuthenticator
from couchbase.exceptions import ParsingFailedException


async def get_couchbase():
    cluster = Cluster(
        "couchbase://your-ip",
        ClusterOptions(PasswordAuthenticator("Administrator", "password")))
    bucket = cluster.bucket("travel-sample")
    await bucket.on_connect()
    collection = bucket.default_collection()
    return cluster, bucket, collection


async def simple_query(cluster):
    try:
        result = cluster.query(
            "SELECT ts.* FROM `travel-sample` ts WHERE ts.`type`=$type LIMIT 10",
            QueryOptions(named_parameters={"type": "hotel"}))
        async for row in result:
            print("Found row: {}".format(row))
    except ParsingFailedException as ex:
        print(ex)


loop = get_event_loop()
cluster, bucket, collection = loop.run_until_complete(get_couchbase())
loop.run_until_complete(simple_query(cluster))
# **IMPORTANT** need to do this import prior to importing the reactor (new to the Python 4.x SDK)
import txcouchbase
from twisted.internet import reactor

from txcouchbase.cluster import TxCluster
from couchbase.options import ClusterOptions, QueryOptions
from couchbase.auth import PasswordAuthenticator


def handle_query_results(result):
    for r in result.rows():
        print("query row: {}".format(r))
    reactor.stop()


def on_streaming_error(error):
    print("Streaming operation had an error.\nError: {}".format(error))
    reactor.stop()


def on_connect_ok(result, cluster):
    # create a bucket object
    bucket = cluster.bucket("travel-sample")
    # create a collection object
    cb = bucket.default_collection()

    d = cluster.query("SELECT ts.* FROM `travel-sample` ts WHERE ts.`type`=$type LIMIT 10",
                      QueryOptions(named_parameters={"type": "hotel"}))
    d.addCallback(handle_query_results).addErrback(on_streaming_error)


def on_connect_err(error):
    print("Unable to connect.\n{}".format(error))


cluster = TxCluster("couchbase://your-ip",
                    ClusterOptions(PasswordAuthenticator("Administrator", "password")))

# wait for connect
cluster.on_connect().addCallback(on_connect_ok, cluster).addErrback(on_connect_err)

reactor.run()
Querying at Scope Level
With Couchbase Server 7.0 and later, it is possible to query at the Scope level, using the scope.query() method. It takes the statement as a required argument, and then allows additional options if needed.
agent_scope = bucket.scope("inventory")

result = agent_scope.query(
    "SELECT a.* FROM `airline` a WHERE a.country=$country LIMIT 10",
    country='France')
Additional Resources
SQL++ is not the only query option in Couchbase. Be sure to check that your use case fits your selection of query service.
- For a deeper dive into SQL++ from the SDK, refer to our SQL++ SDK concept doc.
- The Server doc SQL++ intro provides a complete guide to the SQL++ language, including all of the latest additions.
- The SQL++ interactive tutorial is a good introduction to the basics of SQL++ use.
- For scaling up queries, be sure to read up on Indexes.
- The Query Service is for operational queries; for analytical workloads, read more on when to choose Analytics Service.