This section describes:
functions and operators for processing and creating JSON data
the SQL/JSON path language
To learn more about the SQL/JSON standard, see [sqltr-19075-6]. For details on JSON types supported in PostgreSQL, see Section 8.14.
Table 9.45 shows the operators that
are available for use with JSON data types (see Section 8.14).
In addition, the usual comparison operators shown in Table 9.1 are available for
jsonb, though not for json. The comparison
operators follow the ordering rules for B-tree operations outlined in
Section 8.14.4.
Table 9.45. json and jsonb Operators
json -> integer → json
jsonb -> integer → jsonb
Extracts the n'th element of a JSON array (array elements are indexed from zero, but negative integers count from the end).
'[{"a":"foo"},{"b":"bar"},{"c":"baz"}]'::json -> 2 → {"c":"baz"}

json -> text → json
jsonb -> text → jsonb
Extracts JSON object field with the given key.
'{"a": {"b":"foo"}}'::json -> 'a' → {"b":"foo"}

json ->> integer → text
jsonb ->> integer → text
Extracts the n'th element of a JSON array, as text.
'[1,2,3]'::json ->> 2 → 3

json ->> text → text
jsonb ->> text → text
Extracts JSON object field with the given key, as text.
'{"a":1,"b":2}'::json ->> 'b' → 2

json #> text[] → json
jsonb #> text[] → jsonb
Extracts JSON sub-object at the specified path, where path elements can be either field keys or array indexes.
'{"a": {"b": ["foo","bar"]}}'::json #> '{a,b,1}' → "bar"

json #>> text[] → text
jsonb #>> text[] → text
Extracts JSON sub-object at the specified path as text.
'{"a": {"b": ["foo","bar"]}}'::json #>> '{a,b,1}' → bar
The field/element/path extraction operators return NULL, rather than failing, if the JSON input does not have the right structure to match the request; for example if no such key or array element exists.
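For instance, a brief sketch of this behavior (results shown in comments):

SELECT '{"a": {"b": ["x", "y"]}}'::jsonb -> 'a';            -- {"b": ["x", "y"]}
SELECT '{"a": {"b": ["x", "y"]}}'::jsonb #>> '{a,b,1}';     -- y
SELECT '{"a": {"b": ["x", "y"]}}'::jsonb -> 'no_such_key';  -- NULL rather than an error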
Some further operators exist only for jsonb, as shown
in Table 9.46.
Section 8.14.4
describes how these operators can be used to effectively search indexed
jsonb data.
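As a brief, hypothetical sketch of such a search (the table, column, and index names here are illustrative and not part of the text above; see Section 8.14.4 for the supported index operator classes):

CREATE TABLE api (id serial PRIMARY KEY, jdoc jsonb);
CREATE INDEX api_jdoc_idx ON api USING GIN (jdoc);

-- Containment queries such as this one can use the GIN index:
SELECT id FROM api WHERE jdoc @> '{"company": "Magnafone"}';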
Table 9.46. Additional jsonb Operators
jsonb @> jsonb → boolean
Does the first JSON value contain the second? (See Section 8.14.3 for details about containment.)
'{"a":1, "b":2}'::jsonb @> '{"b":2}'::jsonb → t

jsonb <@ jsonb → boolean
Is the first JSON value contained in the second?
'{"b":2}'::jsonb <@ '{"a":1, "b":2}'::jsonb → t

jsonb ? text → boolean
Does the text string exist as a top-level key or array element within the JSON value?
'{"a":1, "b":2}'::jsonb ? 'b' → t

jsonb ?| text[] → boolean
Do any of the strings in the text array exist as top-level keys or array elements?
'{"a":1, "b":2, "c":3}'::jsonb ?| array['b', 'd'] → t

jsonb ?& text[] → boolean
Do all of the strings in the text array exist as top-level keys or array elements?
'["a", "b", "c"]'::jsonb ?& array['a', 'b'] → t

jsonb || jsonb → jsonb
Concatenates two jsonb values. Concatenating two arrays generates an array containing all the elements of each input. Concatenating two objects generates an object containing the union of their keys, taking the second object's value when there are duplicate keys. All other cases are treated by converting a non-array input into a single-element array, and then proceeding as for two arrays.
To append an array to another array as a single entry, wrap it in an additional layer of array, for example:
'[1, 2]'::jsonb || '[[3, 4]]'::jsonb → [1, 2, [3, 4]]

jsonb - text → jsonb
Deletes a key (and its value) from a JSON object, or matching string value(s) from a JSON array.
'{"a": "b", "c": "d"}'::jsonb - 'a' → {"c": "d"}

jsonb - text[] → jsonb
Deletes all matching keys or array elements from the left operand.
'{"a": "b", "c": "d"}'::jsonb - '{a,c}'::text[] → {}

jsonb - integer → jsonb
Deletes the array element with specified index (negative integers count from the end). Throws an error if the JSON value is not an array.
'["a", "b"]'::jsonb - 1 → ["a"]

jsonb #- text[] → jsonb
Deletes the field or array element at the specified path, where path elements can be either field keys or array indexes.
'["a", {"b":1}]'::jsonb #- '{1,b}' → ["a", {}]

jsonb @? jsonpath → boolean
Does JSON path return any item for the specified JSON value?
'{"a":[1,2,3,4,5]}'::jsonb @? '$.a[*] ? (@ > 2)' → t

jsonb @@ jsonpath → boolean
Returns the result of a JSON path predicate check for the specified JSON value. Only the first item of the result is taken into account. If the result is not Boolean, then NULL is returned.
'{"a":[1,2,3,4,5]}'::jsonb @@ '$.a[*] > 2' → t
The jsonpath operators @?
and @@ suppress the following errors: missing object
field or array element, unexpected JSON item type, datetime and numeric
errors. The jsonpath-related functions described below can
also be told to suppress these types of errors. This behavior might be
helpful when searching JSON document collections of varying structure.
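As a minimal sketch of how these operators are typically used in a query (reusing the hypothetical api table and jdoc column from the sketch above):

SELECT jdoc FROM api WHERE jdoc @? '$.tags[*] ? (@ == "qui")';
SELECT jdoc FROM api WHERE jdoc @@ '$.tags[*] == "qui"';

Both queries select documents whose tags array contains the string "qui"; a document without a tags field simply does not match, rather than raising an error.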
Table 9.47 shows the functions that are
available for constructing json and jsonb values.
Table 9.47. JSON Creation Functions
to_json ( anyelement ) → json
to_jsonb ( anyelement ) → jsonb
Converts any SQL value to json or jsonb. Arrays and composites are converted recursively to arrays and objects; for any scalar other than a number, a Boolean, or a null value, the text representation is used, with escaping as necessary to make it a valid JSON string value.
to_json('Fred said "Hi."'::text) → "Fred said \"Hi.\""

array_to_json ( anyarray [, boolean ] ) → json
Converts an SQL array to a JSON array. The behavior is the same as to_json except that line feeds will be added between top-level array elements if the optional boolean parameter is true.
array_to_json('{{1,5},{99,100}}'::int[]) → [[1,5],[99,100]]

row_to_json ( record [, boolean ] ) → json
Converts an SQL composite value to a JSON object. The behavior is the same as to_json except that line feeds will be added between top-level elements if the optional boolean parameter is true.
row_to_json(row(1,'foo')) → {"f1":1,"f2":"foo"}

json_build_array ( VARIADIC "any" ) → json
jsonb_build_array ( VARIADIC "any" ) → jsonb
Builds a possibly-heterogeneously-typed JSON array out of a variadic argument list. Each argument is converted as per to_json or to_jsonb.
json_build_array(1, 2, 'foo', 4, 5) → [1, 2, "foo", 4, 5]

json_build_object ( VARIADIC "any" ) → json
jsonb_build_object ( VARIADIC "any" ) → jsonb
Builds a JSON object out of a variadic argument list. By convention, the argument list consists of alternating keys and values. Key arguments are coerced to text; value arguments are converted as per to_json or to_jsonb.
json_build_object('foo', 1, 2, row(3,'bar')) → {"foo" : 1, "2" : {"f1":3,"f2":"bar"}}

json_object ( text[] ) → json
jsonb_object ( text[] ) → jsonb
Builds a JSON object out of a text array. The array must have either exactly one dimension with an even number of members, in which case they are taken as alternating key/value pairs, or two dimensions such that each inner array has exactly two elements, which are taken as a key/value pair. All values are converted to JSON strings.
json_object('{a, 1, b, "def", c, 3.5}') → {"a" : "1", "b" : "def", "c" : "3.5"}

json_object ( keys text[], values text[] ) → json
jsonb_object ( keys text[], values text[] ) → jsonb
This form of json_object takes keys and values pairwise from two separate arrays. In all other respects it is identical to the one-argument form.
json_object('{a,b}', '{1,2}') → {"a" : "1", "b" : "2"}
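A brief sketch showing that these creation functions nest, so arbitrary JSON documents can be assembled from SQL values:

SELECT jsonb_build_object(
         'name', 'carrot',
         'tags', jsonb_build_array('vegetable', 'orange'),
         'stock', 42);
-- {"name": "carrot", "tags": ["vegetable", "orange"], "stock": 42}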
Table 9.48 shows the functions that
are available for processing json and jsonb values.
Table 9.48. JSON Processing Functions
json_array_elements ( json ) → setof json
jsonb_array_elements ( jsonb ) → setof jsonb
Expands the top-level JSON array into a set of JSON values.
select * from json_array_elements('[1,true, [2,false]]') →

   value
-----------
 1
 true
 [2,false]

json_array_elements_text ( json ) → setof text
jsonb_array_elements_text ( jsonb ) → setof text
Expands the top-level JSON array into a set of text values.
select * from json_array_elements_text('["foo", "bar"]') →

   value
-----------
 foo
 bar

json_array_length ( json ) → integer
jsonb_array_length ( jsonb ) → integer
Returns the number of elements in the top-level JSON array.
json_array_length('[1,2,3,{"f1":1,"f2":[5,6]},4]') → 5

json_each ( json ) → setof record ( key text, value json )
jsonb_each ( jsonb ) → setof record ( key text, value jsonb )
Expands the top-level JSON object into a set of key/value pairs.
select * from json_each('{"a":"foo", "b":"bar"}') →

 key | value
-----+-------
 a   | "foo"
 b   | "bar"

json_each_text ( json ) → setof record ( key text, value text )
jsonb_each_text ( jsonb ) → setof record ( key text, value text )
Expands the top-level JSON object into a set of key/value pairs.
The returned values will be of type text.
select * from json_each_text('{"a":"foo", "b":"bar"}') →

 key | value
-----+-------
 a   | foo
 b   | bar

json_extract_path ( from_json json, VARIADIC path_elems text[] ) → json
jsonb_extract_path ( from_json jsonb, VARIADIC path_elems text[] ) → jsonb
Extracts JSON sub-object at the specified path.
(This is functionally equivalent to the #> operator, but writing the path out as a variadic list can be more convenient in some cases.)
json_extract_path('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}', 'f4', 'f6') → "foo"

json_extract_path_text ( from_json json, VARIADIC path_elems text[] ) → text
jsonb_extract_path_text ( from_json jsonb, VARIADIC path_elems text[] ) → text
Extracts JSON sub-object at the specified path as text.
(This is functionally equivalent to the #>> operator.)
json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}', 'f4', 'f6') → foo

json_object_keys ( json ) → setof text
jsonb_object_keys ( jsonb ) → setof text
Returns the set of keys in the top-level JSON object.
select * from json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}') →

 json_object_keys
------------------
 f1
 f2
json_populate_record ( base anyelement, from_json json ) → anyelement
jsonb_populate_record ( base anyelement, from_json jsonb ) → anyelement
Expands the top-level JSON object to a row having the composite type of the base argument. The JSON object is scanned for fields whose names match column names of the output row type, and their values are inserted into those columns; fields that do not correspond to any output column are ignored.
To convert a JSON value to the SQL type of an output column, the following rules are applied in sequence:
A JSON null value is converted to an SQL null in all cases.
If the output column is of type json or jsonb, the JSON value is just reproduced exactly.
If the output column is a composite (row) type, and the JSON value is a JSON object, the fields of the object are converted to columns of the output row type by recursive application of these rules.
Likewise, if the output column is an array type and the JSON value is a JSON array, the elements of the JSON array are converted to elements of the output array by recursive application of these rules.
Otherwise, if the JSON value is a string, the contents of the string are fed to the input conversion function for the column's data type.
Otherwise, the ordinary text representation of the JSON value is fed to the input conversion function for the column's data type.
While the example below uses a constant JSON value, typical use would
be to reference a json or jsonb column laterally from another table in
the query's FROM clause.
create type subrowtype as (d int, e text);
create type myrowtype as (a int, b text[], c subrowtype);

select * from json_populate_record(null::myrowtype, '{"a": 1, "b": ["2", "a b"], "c": {"d": 4, "e": "a b c"}, "x": "foo"}') →

 a |     b     |      c
---+-----------+-------------
 1 | {2,"a b"} | (4,"a b c")

json_populate_recordset ( base anyelement, from_json json ) → setof anyelement
jsonb_populate_recordset ( base anyelement, from_json jsonb ) → setof anyelement
Expands the top-level JSON array of objects to a set of rows having the composite type of the base argument. Each element of the JSON array is processed as described above for json_populate_record.
create type twoints as (a int, b int);

select * from json_populate_recordset(null::twoints, '[{"a":1,"b":2}, {"a":3,"b":4}]') →

 a | b
---+---
 1 | 2
 3 | 4

json_to_record ( json ) → record
jsonb_to_record ( jsonb ) → record
Expands the top-level JSON object to a row having the composite type defined by an AS clause. (As with all functions returning record, the calling query must explicitly define the structure of the record with an AS clause.) The output record is filled from fields of the JSON object, in the same way as described above for json_populate_record.
create type myrowtype as (a int, b text);

select * from json_to_record('{"a":1,"b":[1,2,3],"c":[1,2,3],"e":"bar","r": {"a": 123, "b": "a b c"}}') as x(a int, b text, c int[], d text, r myrowtype) →

 a |    b    |    c    | d |       r
---+---------+---------+---+---------------
 1 | [1,2,3] | {1,2,3} |   | (123,"a b c")

json_to_recordset ( json ) → setof record
jsonb_to_recordset ( jsonb ) → setof record
Expands the top-level JSON array of objects to a set of rows having the composite type defined by an AS clause. Each element of the JSON array is processed as described above for json_populate_record.
select * from json_to_recordset('[{"a":1,"b":"foo"}, {"a":"2","c":"bar"}]') as x(a int, b text) →

 a |  b
---+-----
 1 | foo
 2 |
jsonb_set ( target jsonb, path text[], new_value jsonb [, create_if_missing boolean ] ) → jsonb
Returns target with the item designated by path replaced by new_value, or with new_value added if create_if_missing is true (which is the default) and the item designated by path does not exist.
jsonb_set('[{"f1":1,"f2":null},2,null,3]', '{0,f1}', '[2,3,4]') → [{"f1": [2, 3, 4], "f2": null}, 2, null, 3]

jsonb_set_lax ( target jsonb, path text[], new_value jsonb [, create_if_missing boolean [, null_value_treatment text ]] ) → jsonb
If new_value is not NULL, behaves identically to jsonb_set. Otherwise behaves according to the value of null_value_treatment, which must be one of 'raise_exception', 'use_json_null', 'delete_key', or 'return_target'. The default is 'use_json_null'.
jsonb_set_lax('[{"f1":1,"f2":null},2,null,3]', '{0,f1}', null) → [{"f1": null, "f2": null}, 2, null, 3]

jsonb_insert ( target jsonb, path text[], new_value jsonb [, insert_after boolean ] ) → jsonb
Returns target with new_value inserted. If the item designated by the path is an array element, new_value will be inserted before that item if insert_after is false (which is the default), or after it if insert_after is true.
jsonb_insert('{"a": [0,1,2]}', '{a, 1}', '"new_value"') → {"a": [0, "new_value", 1, 2]}

json_strip_nulls ( json ) → json
jsonb_strip_nulls ( jsonb ) → jsonb
Deletes all object fields that have null values from the given JSON value, recursively. Null values that are not object fields are untouched.
json_strip_nulls('[{"f1":1, "f2":null}, 2, null, 3]') → [{"f1":1},2,null,3]

jsonb_path_exists ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → boolean
Checks whether the JSON path returns any item for the specified JSON value.
If the vars argument is specified, it must be a JSON object, and its fields provide named values to be substituted into the jsonpath expression. If the silent argument is specified and is true, the function suppresses the same errors as the @? and @@ operators do.
jsonb_path_exists('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2, "max":4}') → t

jsonb_path_match ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → boolean
Returns the result of a JSON path predicate check for the
specified JSON value. Only the first item of the result is taken into
account. If the result is not Boolean, then NULL is returned.
The optional vars and silent arguments act the same as for jsonb_path_exists.
jsonb_path_match('{"a":[1,2,3,4,5]}', 'exists($.a[*] ? (@ >= $min && @ <= $max))', '{"min":2, "max":4}') → t

jsonb_path_query ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → setof jsonb
Returns all JSON items returned by the JSON path for the specified
JSON value.
The optional vars and silent arguments act the same as for jsonb_path_exists.
select * from jsonb_path_query('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2, "max":4}') →

 jsonb_path_query
------------------
 2
 3
 4

jsonb_path_query_array ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → jsonb
Returns all JSON items returned by the JSON path for the specified
JSON value, as a JSON array.
The optional vars and silent arguments act the same as for jsonb_path_exists.
jsonb_path_query_array('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2, "max":4}') → [2, 3, 4]

jsonb_path_query_first ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → jsonb
Returns the first JSON item returned by the JSON path for the
specified JSON value. Returns NULL if there are no results.
The optional vars and silent arguments act the same as for jsonb_path_exists.
jsonb_path_query_first('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2, "max":4}') → 2

jsonb_path_exists_tz ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → boolean
jsonb_path_match_tz ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → boolean
jsonb_path_query_tz ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → setof jsonb
jsonb_path_query_array_tz ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → jsonb
jsonb_path_query_first_tz ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → jsonb
These functions act like their counterparts described above without
the _tz suffix, except that these functions support comparisons of date/time values that require timezone-aware conversions.
jsonb_pretty ( jsonb ) → text
Converts the given JSON value to pretty-printed, indented text.
jsonb_pretty('[{"f1":1,"f2":null}, 2]') →

[
    {
        "f1": 1,
        "f2": null
    },
    2
]

json_typeof ( json ) → text
jsonb_typeof ( jsonb ) → text
Returns the type of the top-level JSON value as a text string.
Possible types are object, array, string, number, boolean, and null. (The null result should not be confused with an SQL NULL.)
json_typeof('-123.4') → number
json_typeof('null'::json) → null
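The set-returning processing functions are typically used laterally in the FROM clause. A brief, hypothetical sketch (the posts table and its jsonb array column tags are assumptions, not part of the text above):

SELECT p.id, tag
FROM posts AS p,
     jsonb_array_elements_text(p.tags) AS tag;

Each row of posts is expanded into one output row per element of its tags array.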
See also Section 9.21 for the aggregate
function json_agg which aggregates record
values as JSON, the aggregate function
json_object_agg which aggregates pairs of values
into a JSON object, and their jsonb equivalents,
jsonb_agg and jsonb_object_agg.
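A brief sketch of these aggregates, using the films table that appears in the examples later in this section (did = 103 selects the Paramount films):

SELECT json_agg(f.title) FROM films AS f WHERE f.did = 103;
-- a JSON array of the three titles, e.g. ["Vertigo", "Becket", "48 Hrs"]

SELECT json_object_agg(f.title, f.did) FROM films AS f WHERE f.did = 103;
-- a single JSON object mapping each title to 103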
SQL/JSON path expressions specify the items to be retrieved
from the JSON data, similar to XPath expressions used
for SQL access to XML. In PostgreSQL,
path expressions are implemented as the jsonpath
data type and can use any elements described in
Section 8.14.7.
JSON query functions and operators pass the provided path expression to the path engine for evaluation. If the expression matches the queried JSON data, the corresponding JSON item, or set of items, is returned. Path expressions are written in the SQL/JSON path language and can include arithmetic expressions and functions.
A path expression consists of a sequence of elements allowed
by the jsonpath data type.
The path expression is normally evaluated from left to right, but
you can use parentheses to change the order of operations.
If the evaluation is successful, a sequence of JSON items is produced,
and the evaluation result is returned to the JSON query function
that completes the specified computation.
To refer to the JSON value being queried (the
context item), use the $ variable
in the path expression. It can be followed by one or more
accessor operators,
which go down the JSON structure level by level to retrieve sub-items
of the context item. Each operator that follows deals with the
result of the previous evaluation step.
For example, suppose you have some JSON data from a GPS tracker that you would like to parse, such as:
{
"track": {
"segments": [
{
"location": [ 47.763, 13.4034 ],
"start time": "2018-10-14 10:05:14",
"HR": 73
},
{
"location": [ 47.706, 13.2635 ],
"start time": "2018-10-14 10:39:21",
"HR": 135
}
]
}
}
To retrieve the available track segments, you need to use the
. accessor
operator to descend through surrounding JSON objects:
key
$.track.segments
To retrieve the contents of an array, you typically use the
[*] operator. For example,
the following path will return the location coordinates for all
the available track segments:
$.track.segments[*].location
To return the coordinates of the first segment only, you can
specify the corresponding subscript in the []
accessor operator. Recall that JSON array indexes are 0-relative:
$.track.segments[0].location
The result of each path evaluation step can be processed
by one or more jsonpath operators and methods
listed in Section 9.16.2.2.
Each method name must be preceded by a dot. For example,
you can get the size of an array:
$.track.segments.size()
More examples of using jsonpath operators
and methods within path expressions appear below in
Section 9.16.2.2.
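A brief sketch, assuming the GPS document shown above is stored in a jsonb column js of a hypothetical table trips:

SELECT jsonb_path_query(js, '$.track.segments.size()') FROM trips;       -- 2
SELECT jsonb_path_query(js, '$.track.segments[0].location') FROM trips;  -- [47.763, 13.4034]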
When defining a path, you can also use one or more
filter expressions that work similarly to the
WHERE clause in SQL. A filter expression begins with
a question mark and provides a condition in parentheses:
? (condition)
Filter expressions must be written just after the path evaluation step
to which they should apply. The result of that step is filtered to include
only those items that satisfy the provided condition. SQL/JSON defines
three-valued logic, so the condition can be true, false,
or unknown. The unknown value
plays the same role as SQL NULL and can be tested
for with the is unknown predicate. Further path
evaluation steps use only those items for which the filter expression
returned true.
The functions and operators that can be used in filter expressions are
listed in Table 9.50. Within a
filter expression, the @ variable denotes the value
being filtered (i.e., one result of the preceding path step). You can
write accessor operators after @ to retrieve component
items.
For example, suppose you would like to retrieve all heart rate values higher than 130. You can achieve this using the following expression:
$.track.segments[*].HR ? (@ > 130)
To get the start times of segments with such values, you have to filter out irrelevant segments before returning the start times, so the filter expression is applied to the previous step, and the path used in the condition is different:
$.track.segments[*] ? (@.HR > 130)."start time"
You can use several filter expressions in sequence, if required. For example, the following expression selects start times of all segments that contain locations with relevant coordinates and high heart rate values:
$.track.segments[*] ? (@.location[1] < 13.4) ? (@.HR > 130)."start time"
Using filter expressions at different nesting levels is also allowed. The following example first filters all segments by location, and then returns high heart rate values for these segments, if available:
$.track.segments[*] ? (@.location[1] < 13.4).HR ? (@ > 130)
You can also nest filter expressions within each other:
$.track ? (exists(@.segments[*] ? (@.HR > 130))).segments.size()
This expression returns the size of the track if it contains any segments with high heart rate values, or an empty sequence otherwise.
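Continuing the hypothetical trips sketch, the filter examples above can be evaluated with jsonb_path_query:

SELECT jsonb_path_query(js, '$.track.segments[*] ? (@.HR > 130)."start time"')
FROM trips;
-- "2018-10-14 10:39:21"

SELECT jsonb_path_query(js, '$.track ? (exists(@.segments[*] ? (@.HR > 130))).segments.size()')
FROM trips;
-- 2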
PostgreSQL's implementation of the SQL/JSON path language has the following deviations from the SQL/JSON standard:
A path expression can be a Boolean predicate, although the SQL/JSON
standard allows predicates only in filters. This is necessary for
implementation of the @@ operator. For example,
the following jsonpath expression is valid in
PostgreSQL:
$.track.segments[*].HR < 70
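A brief sketch: this predicate form is exactly what the @@ operator expects.

SELECT '{"track": {"segments": [{"HR": 73}, {"HR": 135}]}}'::jsonb
       @@ '$.track.segments[*].HR < 70';
-- f (no segment has an HR value below 70)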
There are minor differences in the interpretation of regular
expression patterns used in like_regex filters, as
described in Section 9.16.2.3.
When you query JSON data, the path expression may not match the actual JSON data structure. An attempt to access a non-existent member of an object or element of an array results in a structural error. SQL/JSON path expressions have two modes of handling structural errors:
lax (default) — the path engine implicitly adapts the queried data to the specified path. Any remaining structural errors are suppressed and converted to empty SQL/JSON sequences.
strict — if a structural error occurs, an error is raised.
The lax mode facilitates matching of a JSON document structure and path expression if the JSON data does not conform to the expected schema. If an operand does not match the requirements of a particular operation, it can be automatically wrapped as an SQL/JSON array or unwrapped by converting its elements into an SQL/JSON sequence before performing this operation. In addition, comparison operators automatically unwrap their operands in the lax mode, so you can compare SQL/JSON arrays out-of-the-box. An array of size 1 is considered equal to its sole element. Automatic unwrapping is not performed only when:
The path expression contains type() or
size() methods that return the type
and the number of elements in the array, respectively.
The queried JSON data contain nested arrays. In this case, only the outermost array is unwrapped, while all the inner arrays remain unchanged. Thus, implicit unwrapping can only go one level down within each path evaluation step.
For example, when querying the GPS data listed above, you can abstract from the fact that it stores an array of segments when using the lax mode:
lax $.track.segments.location
In the strict mode, the specified path must exactly match the structure of
the queried JSON document to return an SQL/JSON item, so using this
path expression will cause an error. To get the same result as in
the lax mode, you have to explicitly unwrap the
segments array:
strict $.track.segments[*].location
The .** accessor can lead to surprising results
when using the lax mode. For instance, the following query selects every
HR value twice:
lax $.**.HR
This happens because the .** accessor selects both
the segments array and each of its elements, while
the .HR accessor automatically unwraps arrays when
using the lax mode. To avoid surprising results, we recommend using
the .** accessor only in the strict mode. The
following query selects each HR value just once:
strict $.**.HR
Table 9.49 shows the operators and
methods available in jsonpath. Note that while the unary
operators and methods can be applied to multiple values resulting from a
preceding path step, the binary operators (addition etc.) can only be
applied to single values.
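A minimal sketch of this restriction: addition works on a single value, but applying it to the multiple values produced by a wildcard step raises an error.

SELECT jsonb_path_query('{"x": [2, 3, 4]}', '$.x[0] + 1');   -- 3
SELECT jsonb_path_query('{"x": [2, 3, 4]}', '$.x[*] + 1');   -- error: the left operand is not a single value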
Table 9.49. jsonpath Operators and Methods
number + number → number
Addition
jsonb_path_query('[2]', '$[0] + 3') → 5

+ number → number
Unary plus (no operation); unlike addition, this can iterate over multiple values
jsonb_path_query_array('{"x": [2,3,4]}', '+ $.x') → [2, 3, 4]

number - number → number
Subtraction
jsonb_path_query('[2]', '7 - $[0]') → 5

- number → number
Negation; unlike subtraction, this can iterate over multiple values
jsonb_path_query_array('{"x": [2,3,4]}', '- $.x') → [-2, -3, -4]

number * number → number
Multiplication
jsonb_path_query('[4]', '2 * $[0]') → 8

number / number → number
Division
jsonb_path_query('[8.5]', '$[0] / 2') → 4.2500000000000000

number % number → number
Modulo (remainder)
jsonb_path_query('[32]', '$[0] % 10') → 2

value . type() → string
Type of the JSON item (see json_typeof)
jsonb_path_query_array('[1, "2", {}]', '$[*].type()') → ["number", "string", "object"]

value . size() → number
Size of the JSON item (number of array elements, or 1 if not an array)
jsonb_path_query('{"m": [11, 15]}', '$.m.size()') → 2

value . double() → number
Approximate floating-point number converted from a JSON number or string
jsonb_path_query('{"len": "1.9"}', '$.len.double() * 2') → 3.8

number . ceiling() → number
Nearest integer greater than or equal to the given number
jsonb_path_query('{"h": 1.3}', '$.h.ceiling()') → 2

number . floor() → number
Nearest integer less than or equal to the given number
jsonb_path_query('{"h": 1.7}', '$.h.floor()') → 1

number . abs() → number
Absolute value of the given number
jsonb_path_query('{"z": -0.3}', '$.z.abs()') → 0.3

string . datetime() → datetime_type (see note)
Date/time value converted from a string
jsonb_path_query('["2015-8-1", "2015-08-12"]', '$[*] ? (@.datetime() < "2015-08-2".datetime())') → "2015-8-1"

string . datetime(template) → datetime_type (see note)
Date/time value converted from a string using the specified template
jsonb_path_query_array('["12:30", "18:40"]', '$[*].datetime("HH24:MI")') → ["12:30:00", "18:40:00"]

object . keyvalue() → array
The object's key-value pairs, represented as an array of objects
containing three fields: "key", "value", and "id"; "id" is a unique identifier of the object the key-value pair belongs to
jsonb_path_query_array('{"x": "20", "y": 32}', '$.keyvalue()') → [{"id": 0, "key": "x", "value": "20"}, {"id": 0, "key": "y", "value": 32}]
The result type of the datetime() and
datetime(template) methods can be date, timetz, time,
timestamptz, or timestamp.
Both methods determine their result type dynamically.
The datetime() method sequentially tries to
match its input string to the ISO formats
for date, timetz, time,
timestamptz, and timestamp. It stops on
the first matching format and emits the corresponding data type.
The datetime(template)
method determines the result type according to the fields used in the
provided template string.
The datetime() and
datetime(template) methods
use the same parsing rules as the to_timestamp SQL
function does (see Section 9.8), with three
exceptions. First, these methods don't allow unmatched template
patterns. Second, only the following separators are allowed in the
template string: minus sign, period, solidus (slash), comma, apostrophe,
semicolon, colon and space. Third, separators in the template string
must exactly match the input string.
If different date/time types need to be compared, an implicit cast is
applied. A date value can be cast to timestamp
or timestamptz, timestamp can be cast to
timestamptz, and time to timetz.
However, all but the first of these conversions depend on the current
TimeZone setting, and thus can only be performed
within timezone-aware jsonpath functions.
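A brief sketch: comparing a timestamptz item with a plain date forces a timezone-dependent cast, so one of the timezone-aware function variants is required (the non-_tz variants report an error for such comparisons):

SELECT jsonb_path_query_tz(jsonb '["2015-08-01 12:00:00-05"]',
                           '$[*] ? (@.datetime() < "2015-08-02".datetime())');
-- "2015-08-01 12:00:00-05"

The result depends on the current TimeZone setting.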
Table 9.50 shows the available filter expression elements.
Table 9.50. jsonpath Filter Expression Elements
value == value → boolean
Equality comparison (this, and the other comparison operators, work on all JSON scalar values)
jsonb_path_query_array('[1, "a", 1, 3]', '$[*] ? (@ == 1)') → [1, 1]

value != value → boolean
value <> value → boolean
Non-equality comparison
jsonb_path_query_array('[1, 2, 1, 3]', '$[*] ? (@ != 1)') → [2, 3]

value < value → boolean
Less-than comparison
jsonb_path_query_array('[1, 2, 3]', '$[*] ? (@ < 2)') → [1]

value <= value → boolean
Less-than-or-equal-to comparison
jsonb_path_query_array('["a", "b", "c"]', '$[*] ? (@ <= "b")') → ["a", "b"]

value > value → boolean
Greater-than comparison
jsonb_path_query_array('[1, 2, 3]', '$[*] ? (@ > 2)') → [3]

value >= value → boolean
Greater-than-or-equal-to comparison
jsonb_path_query_array('[1, 2, 3]', '$[*] ? (@ >= 2)') → [2, 3]

true → boolean
JSON constant true
jsonb_path_query('[{"name": "John", "parent": false}, {"name": "Chris", "parent": true}]', '$[*] ? (@.parent == true)') → {"name": "Chris", "parent": true}

false → boolean
JSON constant false
jsonb_path_query('[{"name": "John", "parent": false}, {"name": "Chris", "parent": true}]', '$[*] ? (@.parent == false)') → {"name": "John", "parent": false}

null → value
JSON constant null
jsonb_path_query('[{"name": "Mary", "job": null}, {"name": "Michael", "job": "driver"}]', '$[*] ? (@.job == null) .name') → "Mary"

boolean && boolean → boolean
Boolean AND
jsonb_path_query('[1, 3, 7]', '$[*] ? (@ > 1 && @ < 5)') → 3

boolean || boolean → boolean
Boolean OR
jsonb_path_query('[1, 3, 7]', '$[*] ? (@ < 1 || @ > 5)') → 7

! boolean → boolean
Boolean NOT
jsonb_path_query('[1, 3, 7]', '$[*] ? (!(@ < 5))') → 7

boolean is unknown → boolean
Tests whether a Boolean condition is unknown.
jsonb_path_query('[-1, 2, 7, "foo"]', '$[*] ? ((@ > 0) is unknown)') → "foo"

string like_regex string [ flag string ] → boolean
Tests whether the first operand matches the regular expression
given by the second operand, optionally with modifications
described by a string of flag characters (see Section 9.16.2.3).
jsonb_path_query_array('["abc", "abd", "aBdC", "abdacb", "babc"]', '$[*] ? (@ like_regex "^ab.*c")') → ["abc", "abdacb"]

string starts with string → boolean
Tests whether the second operand is an initial substring of the first operand.
jsonb_path_query('["John Smith", "Mary Stone", "Bob Johnson"]', '$[*] ? (@ starts with "John")') → "John Smith"

exists ( path_expression ) → boolean
Tests whether a path expression matches at least one SQL/JSON item.
Returns unknown if the path expression would result in an error; the second example uses this to avoid a no-such-key error in strict mode.
jsonb_path_query('{"x": [1, 2], "y": [2, 4]}', 'strict $.* ? (exists (@ ? (@[*] > 2)))') → [2, 4]
jsonb_path_query_array('{"value": 41}', 'strict $ ? (exists (@.name)) .name') → []
SQL/JSON path expressions allow matching text to a regular expression
with the like_regex filter. For example, the
following SQL/JSON path query would case-insensitively match all
strings in an array that start with an English vowel:
$[*] ? (@ like_regex "^[aeiou]" flag "i")
The optional flag string may include one or more of
the characters
i for case-insensitive match,
m to allow ^
and $ to match at newlines,
s to allow . to match a newline,
and q to quote the whole pattern (reducing the
behavior to a simple substring match).
The SQL/JSON standard borrows its definition for regular expressions
from the LIKE_REGEX operator, which in turn uses the
XQuery standard. PostgreSQL does not currently support the
LIKE_REGEX operator. Therefore,
the like_regex filter is implemented using the
POSIX regular expression engine described in
Section 9.7.3. This leads to various minor
discrepancies from standard SQL/JSON behavior, which are cataloged in
Section 9.7.3.8.
Note, however, that the flag-letter incompatibilities described there
do not apply to SQL/JSON, as it translates the XQuery flag letters to
match what the POSIX engine expects.
Keep in mind that the pattern argument of like_regex
is a JSON path string literal, written according to the rules given in
Section 8.14.7. This means in particular that any
backslashes you want to use in the regular expression must be doubled.
For example, to match string values of the root document that contain
only digits:
$.* ? (@ like_regex "^\\d+$")
To provide native support for JSON data types within the SQL environment, PostgreSQL implements the SQL/JSON data model. This model comprises sequences of items. Each item can hold SQL scalar values, with an additional SQL/JSON null value, and composite data structures that use JSON arrays and objects. The model is a formalization of the implied data model in the JSON specification RFC 7159.
SQL/JSON allows you to handle JSON data alongside regular SQL data, with transaction support, including:
Uploading JSON data into the database and storing it in regular SQL columns as character or binary strings.
Generating JSON objects and arrays from relational data.
Querying JSON data using SQL/JSON query functions and SQL/JSON path language expressions.
All SQL/JSON functions fall into one of two groups. Constructor functions generate JSON data from values of SQL types. Query functions evaluate SQL/JSON path language expressions against JSON values and produce values of SQL/JSON types, which are converted to SQL types.
PostgreSQL provides several functions
that generate JSON data. Taking values of SQL types as input, these
functions construct JSON objects, JSON arrays or JSON scalars represented
as the json or jsonb types, or as
SQL character or binary strings.
JSON

JSON ( expression [ FORMAT JSON [ ENCODING UTF8 ] ] [ { WITH | WITHOUT } UNIQUE [ KEYS ] ] [ RETURNING json_data_type ] )
The JSON function generates JSON
from text data.
expression [ FORMAT JSON [ ENCODING UTF8 ] ]
The string expression provides the
JSON text data.
It can be any character string (text,
char, etc.) or binary string (bytea)
in UTF8 encoding.
If the expression is NULL, an
SQL null value is returned.
The optional FORMAT clause is provided to conform
to the SQL/JSON standard.
{ WITH | WITHOUT } UNIQUE [ KEYS ]
Defines whether duplicate keys are allowed:
WITHOUT
Default. The constructed JSON object can contain duplicate keys.
WITH
Duplicate keys are not allowed. If the input data contains duplicate keys, an error is returned.
Optionally, you can add the KEYS keyword for
semantic clarity.
RETURNING json_data_type
The output clause that specifies the type (json or
jsonb) of the generated JSON.
The default is json.
Alternatively, you can construct JSON values simply
using PostgreSQL-specific casts to
json and jsonb types.
Construct JSON using the provided strings:
SELECT JSON('{ "a" : 123, "b": [ true, "foo" ], "a" : "bar" }');
json
--------------------------------------------------
{ "a" : 123, "b": [ true, "foo" ], "a" : "bar" }
(1 row)
SELECT JSON('{"a": 123, "b": [true, "foo"], "a": "bar"}' RETURNING jsonb);
json
----------------------------------
{"a": "bar", "b": [true, "foo"]}
(1 row)
SELECT JSON('{"a": 123, "b": [true, "foo"], "a": "bar"}' WITH UNIQUE KEYS);
ERROR: duplicate JSON object key value
JSON_SCALAR

JSON_SCALAR ( expression [ RETURNING json_data_type ] )
The JSON_SCALAR function generates a
JSON scalar value from SQL data.
expression
The expression provides the data for constructing a
JSON value.
For null input, SQL null
(not a JSON null) value is returned.
For any scalar other than a number or a Boolean, the text
representation will be used, with escaping as necessary to make
it a valid JSON string value.
For details, see
to_json()/to_jsonb()
in Table 9.47.
RETURNING json_data_type
The output clause that specifies the type (json or
jsonb) of the generated JSON scalar.
The default is json.
Alternatively, you can construct JSON objects by
using the PostgreSQL-specific
to_json()/to_jsonb() functions.
See Table 9.47 for details.
Construct JSON scalars from the provided values of various types:
SELECT JSON_SCALAR(123.45);
json_scalar
-------------
123.45
(1 row)
SELECT JSON_SCALAR('123');
json_scalar
-------------
"123"
(1 row)
SELECT JSON_SCALAR(true);
json_scalar
-------------
true
(1 row)
JSON_OBJECT

JSON_OBJECT ( [ { key_expression { VALUE | ':' } value_expression [ FORMAT JSON [ ENCODING UTF8 ] ] } [, ...] ] [ { NULL | ABSENT } ON NULL ] [ { WITH | WITHOUT } UNIQUE [ KEYS ] ] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ] ] )
The JSON_OBJECT function generates a
JSON object from SQL or
JSON data.
key_expression { VALUE | ':' }
value_expression [ FORMAT JSON [ ENCODING UTF8 ] ]
The input clause that provides the data for constructing a JSON object:
key_expression is a scalar expression
defining the JSON key, which is implicitly
converted to the text type.
The provided expression cannot be NULL or
belong to a type that has a cast to json.
value_expression is an expression
that provides the input for the JSON value.
The optional FORMAT clause is provided to
conform to the SQL/JSON standard.
You must use a colon or the VALUE keyword as a
separator between the key and the value. Multiple key/value pairs are
separated by commas.
{ NULL | ABSENT } ON NULL
Defines whether NULL values are allowed in the constructed
JSON object:
NULL
Default. NULL values are allowed.
ABSENT
If the value is NULL,
the corresponding key/value pair is omitted from the generated
JSON object.
{ WITH | WITHOUT } UNIQUE [ KEYS ]
Defines whether duplicate keys are allowed:
WITHOUT
Default. The constructed JSON object can contain duplicate keys.
WITH
Duplicate keys are not allowed. If the input data contains duplicate keys, an error is returned. This check is performed before removing JSON items with NULL values.
Optionally, you can add the KEYS keyword for semantic clarity.
RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ]
The output clause that specifies the type of the generated JSON object. For details, see Section 9.16.3.4.2.
Alternatively, you can construct JSON objects by using the
PostgreSQL-specific json_build_object()/
jsonb_build_object() functions.
See Table 9.47 for details.
Construct a JSON object from the provided key/value pairs of various types:
SELECT JSON_OBJECT(
-- scalar JSON types
'key1': 'string',
'key2': '[1, 2]',
'key3' VALUE 123, -- alternative syntax for key-value delimiter
'key4': NULL,
-- other types
'key5': ARRAY[1, 2, 3], -- postgres array
'key6': jsonb '{"a": ["b", 1]}', -- composite json/jsonb
'key7': date '2017-09-30', -- datetime type
'key8': row(1, 'a'), -- row type
'key9': '[1, 2]' FORMAT JSON, -- same value as for key2, but with FORMAT
-- key can be an expression
'key' || 'last' : TRUE
ABSENT ON NULL) AS json;
json
----------------------------------------------------
{"key1" : "string", "key2" : "[1, 2]", "key3" : 123,
"key5" : [1,2,3], "key6" : {"a": ["b", 1]},
"key7" : "2017-09-30", "key8" : {"f1":1,"f2":"a"},
"key9" : [1, 2], "keylast" : true}
(1 row)
From the films table, select some data
about the films distributed by Paramount Pictures
(did = 103) and return JSON objects:
SELECT
JSON_OBJECT(
'code' VALUE f.code,
'title' VALUE f.title,
'did' VALUE f.did
) AS paramount
FROM films AS f
WHERE f.did = 103;
paramount
----------------------------------------------------
{"code" : "P_301", "title" : "Vertigo", "did" : 103}
{"code" : "P_302", "title" : "Becket", "did" : 103}
{"code" : "P_303", "title" : "48 Hrs", "did" : 103}
(3 rows)
JSON_OBJECTAGG

JSON_OBJECTAGG ( [ { key_expression { VALUE | ':' } value_expression } ] [ { NULL | ABSENT } ON NULL ] [ { WITH | WITHOUT } UNIQUE [ KEYS ] ] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ] ] )
The JSON_OBJECTAGG function aggregates the provided data
into a JSON object. You can use this function to combine values
stored in different table columns into pairs.
key_expression { VALUE | ':' } value_expression
The input clause that provides the data to be aggregated as a JSON object:
key_expression is a scalar expression
defining the JSON key, which is implicitly
converted to the text type.
The provided expression cannot be NULL or
belong to a type that has a cast to json.
value_expression is an expression that
provides the input for the JSON value preceded
by its type.
For JSON scalar types, you can omit the type.
The input value of the bytea type must be stored in UTF8
and contain a valid UTF8 string. Otherwise, an error occurs.
PostgreSQL currently supports only UTF8.
You must use a colon or the VALUE keyword as a separator between
keys and values. Multiple key/value pairs are separated by commas.
{ NULL | ABSENT } ON NULL
Defines whether NULL values are allowed in the constructed
JSON object:
NULL
Default. NULL values are allowed.
ABSENT
If the value is NULL,
the corresponding key/value pair is omitted from the generated
JSON object.
{ WITH | WITHOUT } UNIQUE [ KEYS ]
Defines whether duplicate keys are allowed:
WITHOUT
Default. The constructed JSON object can contain duplicate keys.
WITH
Duplicate keys are not allowed. If the input data contains duplicate keys, an error is returned. This check is performed before removing JSON items with NULL values.
Optionally, you can add the KEYS keyword for semantic clarity.
RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ]
The output clause that specifies the type of the generated JSON object. For details, see Section 9.16.3.4.2.
Alternatively, you can create JSON objects by using
PostgreSQL-specific json_object_agg()/
jsonb_object_agg() aggregate functions.
See Section 9.21 for details.
For films with did = 103, aggregate key/value pairs
of film genre (f.kind) and title (f.title)
into a single object:
SELECT
JSON_OBJECTAGG(
f.kind VALUE f.title)
AS films_list
FROM films AS f
where f.did = 103;
films_list
----------------------------------------------------
{ "Action" : "Vertigo", "Drama" : "Becket", "Action" : "48 Hrs" }
Return the same object as jsonb. Note that only a single film of
the action genre is included as the jsonb type does not allow duplicate keys.
SELECT
JSON_OBJECTAGG(
f.kind VALUE f.title
RETURNING jsonb)
AS films_list
FROM films AS f
where f.did = 103;
films_list
----------------------------------------------------
{"Drama": "Becket", "Action": "48 Hrs"}
Return objects of film titles and length, grouped by the film genre:
SELECT
f.kind,
JSON_OBJECTAGG(
f.title VALUE f.len
) AS films_list
FROM films AS f
GROUP BY f.kind;
kind | films_list
-------------+----------------------------------
Musical | { "West Side Story" : "02:32:00", "The King and I" : "02:13:00", "Bed Knobs and Broomsticks" : "01:57:00" }
Romantic | { "The African Queen" : "01:43:00", "Une Femme est une Femme" : "01:25:00", "Storia di una donna" : "01:30:00" }
Comedy | { "Bananas" : "01:22:00", "There's a Girl in my Soup" : "01:36:00" }
Drama | { "The Third Man" : "01:44:00", "Becket" : "02:28:00", "War and Peace" : "05:57:00", "Yojimbo" : "01:50:00", "Das Boot" : "02:29:00" }
Action | { "Vertigo" : "02:08:00", "48 Hrs" : "01:37:00", "Taxi Driver" : "01:54:00", "Absence of Malice" : "01:55:00" }
(5 rows)
JSON_ARRAY

JSON_ARRAY ( [ { value_expression [ FORMAT JSON ] } [, ...] ] [ { NULL | ABSENT } ON NULL ] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ] ] )

JSON_ARRAY ( [ query_expression ] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ] ] )
The JSON_ARRAY function constructs a JSON array from
the provided SQL or JSON data.
value_expression
The input clause that provides the data for constructing a JSON array.
The value_expression is an expression
that provides the input for the JSON value preceded by its type.
For JSON scalar types, you can omit the type.
The input value of the bytea type must be stored in UTF8
and contain a valid UTF8 string. Otherwise, an error occurs.
PostgreSQL currently supports only UTF8.
query_expression
An SQL query that provides the data for constructing a JSON array. The query must return a single column that holds the values to be used in the array.
{ NULL | ABSENT } ON NULL
Defines whether NULL values are allowed in the generated JSON array:
NULL
NULL values are allowed.
ABSENT
Default. If the value is NULL,
it is omitted from the generated JSON array.
This clause is only supported for arrays built from an explicit list of values. If you are using an SQL query to generate an array, NULL values are always omitted.
RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ]
The output clause that specifies the return type of the constructed JSON array. For details, see Section 9.16.3.4.2.
Alternatively, you can create JSON arrays by using
PostgreSQL-specific json_build_array()/
jsonb_build_array() functions.
See Section 9.16 for details.
From the films table, select some data
about the films distributed by Paramount Pictures
(did = 103) and return JSON arrays:
SELECT
JSON_ARRAY(
f.code,
f.title,
f.did
) AS films
FROM films AS f
WHERE f.did = 103;
films
----------------------------------------------------
["P_301", "Vertigo", 103]
["P_302", "Becket", 103]
["P_303", "48 Hrs", 103]
(3 rows)
Construct a JSON array from the list of film titles returned from the
films table by a subquery:
SELECT
JSON_ARRAY(
SELECT
f.title
FROM films AS f
where f.did = 103)
AS film_titles;
film_titles
----------------------------------------------------
["Vertigo", "Becket", "48 Hrs"]
(1 row)
JSON_ARRAYAGG

JSON_ARRAYAGG ( [ value_expression ] [ ORDER BY sort_expression ] [ { NULL | ABSENT } ON NULL ] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ] ] )
The JSON_ARRAYAGG function aggregates the provided SQL
or JSON data into a JSON array.
value_expression
The input clause that provides the input data to be aggregated as
a JSON array.
The value_expression can be a value or a query
returning the values to be used as input in array construction.
You can provide multiple input values separated by commas.
ORDER BY
Sorts the input data to be aggregated as a JSON array.
For details on the exact syntax of the ORDER BY clause, see ORDER BY Clause.
{ NULL | ABSENT } ON NULL
Defines whether NULL values are allowed in the constructed array:
NULL — NULL values are allowed.
ABSENT (default) — NULL
values are omitted from the generated array.
RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ]
The output clause that specifies the return type of the constructed JSON array. For details, see Section 9.16.3.4.2.
Alternatively, you can create JSON arrays by using
PostgreSQL-specific json_agg()/
jsonb_agg() functions.
See Section 9.21 for details.
Construct an array of film titles sorted in alphabetical order:
SELECT
JSON_ARRAYAGG(
f.title
ORDER BY f.title ASC) AS film_titles
FROM films AS f;
film_titles
----------------------------------------------------
["48 Hrs", "Absence of Malice", "Bananas", "Becket", "Bed Knobs and Broomsticks", "Das Boot", "Storia di una donna", "Taxi Driver", "The African Queen", "The King and I", "There's a Girl in my Soup", "The Third Man", "Une Femme est une Femme", "Vertigo", "War and Peace", "West Side Story", "Yojimbo"]
(1 row)
SQL/JSON query functions evaluate SQL/JSON path language expressions against JSON values, producing values of SQL/JSON types, which are converted to SQL types. All SQL/JSON query functions accept several common clauses described in Section 9.16.3.4. For details on the SQL/JSON path language, see Section 9.16.2.
In some usage examples for these functions, the following small table storing some JSON data will be used:
CREATE TABLE my_films (
js text );
INSERT INTO my_films VALUES (
'{ "favorites" : [
{ "kind" : "comedy", "films" : [
{ "title" : "Bananas",
"director" : "Woody Allen"},
{ "title" : "The Dinner Game",
"director" : "Francis Veber" } ] },
{ "kind" : "horror", "films" : [
{ "title" : "Psycho",
"director" : "Alfred Hitchcock" } ] },
{ "kind" : "thriller", "films" : [
{ "title" : "Vertigo",
"director" : "Alfred Hitchcock" } ] },
{ "kind" : "drama", "films" : [
{ "title" : "Yojimbo",
"director" : "Akira Kurosawa" } ] }
] }');
JSON_EXISTS

JSON_EXISTS ( context_item, path_expression [ PASSING { value AS varname } [, ...] ] [ RETURNING data_type ] [ { TRUE | FALSE | UNKNOWN | ERROR } ON ERROR ] )
The JSON_EXISTS function checks whether the provided
JSON path expression can return any SQL/JSON items.
context_item, path_expression [ PASSING { value AS varname } [, ...]]
The input data to query, the JSON path expression defining the query, and an optional PASSING clause.
See Section 9.16.3.4.1 for details.
RETURNING data_type
The output clause that specifies the data type of the returned value.
The specified data type should have a cast from boolean,
which is the type returned by default.
{ TRUE | FALSE | UNKNOWN | ERROR } ON ERROR
Defines the return value if an error occurs. The default value is FALSE.
Check whether the provided jsonb data contains a
key/value pair with the key1 key, and its value
contains an array with one or more elements greater than 2:
SELECT JSON_EXISTS(jsonb '{"key1": [1,2,3]}', 'strict $.key1[*] ? (@ > 2)');
json_exists
-------------
t
(1 row)
Note the difference between strict and lax modes if the required item does not exist:
-- Strict mode with ERROR on ERROR clause
SELECT JSON_EXISTS(jsonb '{"a": [1,2,3]}', 'strict $.a[5]' ERROR ON ERROR);
ERROR: Invalid SQL/JSON subscript
-- Lax mode
SELECT JSON_EXISTS(jsonb '{"a": [1,2,3]}', 'lax $.a[5]' ERROR ON ERROR);
json_exists
-------------
f
(1 row)
-- Strict mode using the default value for the ON ERROR clause
SELECT JSON_EXISTS(jsonb '{"a": [1,2,3]}', 'strict $.a[5]');
json_exists
-------------
f
(1 row)
JSON_VALUE

JSON_VALUE ( context_item, path_expression [ PASSING { value AS varname } [, ...] ] [ RETURNING data_type ] [ { ERROR | NULL | DEFAULT expression } ON EMPTY ] [ { ERROR | NULL | DEFAULT expression } ON ERROR ] )
The JSON_VALUE function extracts a value from the provided
JSON data and converts it to an SQL scalar.
If the specified JSON path expression returns more than one
SQL/JSON item, an error occurs. To extract
an SQL/JSON array or object, use JSON_QUERY (Section 9.16.3.2.3).
context_item, path_expression [ PASSING { value AS varname } [, ...]]
The input data to query, the JSON path expression defining the query, and an optional PASSING clause.
For details, see Section 9.16.2.
RETURNING data_type
The output clause that specifies the data type of the returned value.
Out of the box, PostgreSQL
supports the following types: json, jsonb,
bytea, and character string types (text, char,
varchar, and nchar).
The extracted value must be a single SQL/JSON scalar item
and have a cast to the specified type. Otherwise, an error occurs.
By default, JSON_VALUE returns a string
of the text type.
{ ERROR | NULL | DEFAULT expression } ON EMPTY
Defines the return value if no JSON value is found. The default is
NULL. If you use
DEFAULT expression,
the provided expression is
evaluated and cast to the type specified in the RETURNING clause.
{ ERROR | NULL | DEFAULT expression } ON ERROR
Defines the return value if an unhandled error occurs. The default is
NULL. If you use
DEFAULT expression,
the provided expression is
evaluated and cast to the type specified in the RETURNING clause.
Extract an SQL/JSON value and return it as an SQL
scalar of the specified type. Note that
JSON_VALUE can only return a
single scalar, and the returned value must have a
cast to the specified return type:
SELECT JSON_VALUE('"123.45"', '$' RETURNING float);
json_value
------------
123.45
(1 row)
SELECT JSON_VALUE('123.45', '$' RETURNING int ERROR ON ERROR);
json_value
------------
123
(1 row)
SELECT JSON_VALUE('"03:04 2015-02-01"', '$.datetime("HH24:MI YYYY-MM-DD")' RETURNING date);
json_value
------------
2015-02-01
(1 row)
SELECT JSON_VALUE('"123.45"', '$' RETURNING int ERROR ON ERROR);
ERROR: invalid input syntax for integer: "123.45"
If the path expression returns an array, an object, or
multiple SQL/JSON items, an error is returned, as specified
in the ON ERROR clause:
SELECT JSON_VALUE(jsonb '[1]', 'strict $' ERROR ON ERROR);
ERROR: SQL/JSON scalar required
SELECT JSON_VALUE(jsonb '{"a": 1}', 'strict $' ERROR ON ERROR);
ERROR: SQL/JSON scalar required
SELECT JSON_VALUE(jsonb '[1,2]', 'strict $[*]' ERROR ON ERROR);
ERROR: more than one SQL/JSON item
SELECT JSON_VALUE(jsonb '[1,2]', 'strict $[*]' DEFAULT 1 ON ERROR);
1
JSON_QUERY

JSON_QUERY ( context_item, path_expression [ PASSING { value AS varname } [, ...] ] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ] ] [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ] WRAPPER ] [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ] [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY ] [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON ERROR ] )
The JSON_QUERY function extracts an SQL/JSON
array or object from JSON data. This function must return
a JSON string, so if the path expression returns a scalar or multiple SQL/JSON
items, you must wrap the result using the WITH WRAPPER clause.
To extract a single SQL/JSON value, you can use JSON_VALUE (Section 9.16.3.2.2).
context_item, path_expression [ PASSING { value AS varname } [, ...]]
The input data to query, the JSON path expression defining the query, and an optional PASSING clause.
For details, see Section 9.16.2.
RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ]
The output clause that specifies the data type of the returned value. For details, see Section 9.16.3.4.2.
{ WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ] WRAPPER
Defines whether to wrap a returned sequence of SQL/JSON items into an SQL/JSON array.
WITHOUT WRAPPER
Do not wrap the result.
This is the default behavior if the WRAPPER
clause is omitted.
WITH [UNCONDITIONAL] WRAPPER
Always wrap the result.
WITH CONDITIONAL WRAPPER
Wrap the result if the path expression returns anything other than a single SQL/JSON array or object.
Optionally, you can add the ARRAY keyword for semantic clarity.
You cannot use this clause together with the ON EMPTY clause.
{ KEEP | OMIT } QUOTES [ ON SCALAR STRING ]
Defines whether to keep or omit quotes if a scalar string is returned.
By default, scalar strings are returned with quotes. Using this
clause together with the WITH WRAPPER clause is not allowed.
Optionally, you can add the ON SCALAR STRING keywords for semantic clarity.
{ ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY
Defines the return value if no JSON value is found. The default is NULL.
If you use EMPTY [ARRAY] or EMPTY OBJECT,
an empty JSON array [] or object {} is returned, respectively.
If you use DEFAULT expression,
the provided expression is evaluated and cast
to the type specified in the RETURNING clause.
You cannot use this clause together with the WRAPPER clause.
{ ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON ERROR
Defines the return value if an unhandled error occurs. The default is NULL.
If you use EMPTY [ARRAY] or EMPTY OBJECT,
an empty JSON array [] or object {} is returned, respectively.
If you use DEFAULT expression,
the provided expression is evaluated and cast
to the type specified in the RETURNING clause.
Extract all film genres listed in the my_films table:
SELECT
JSON_QUERY(js, '$.favorites[*].kind' WITH WRAPPER ERROR ON ERROR)
FROM my_films;
json_query
------------
["comedy", "horror", "thriller", "drama"]
(1 row)
Note that the same query will result in an error if you omit the
WITH WRAPPER clause, as it returns multiple SQL/JSON items:
SELECT
JSON_QUERY(js, '$.favorites[*].kind' ERROR ON ERROR)
FROM my_films;
ERROR: more than one SQL/JSON item
Compare the effect of different WRAPPER clauses:
SELECT
js,
JSON_QUERY(js, 'lax $[*]') AS "without",
JSON_QUERY(js, 'lax $[*]' WITH WRAPPER) AS "with uncond",
JSON_QUERY(js, 'lax $[*]' WITH CONDITIONAL WRAPPER) AS "with cond"
FROM
(VALUES (jsonb '[]'), ('[1]'), ('[[1,2,3]]'), ('[{"a": 1}]'), ('[1, null, "2"]')) foo(js);
js | without | with uncond | with cond
----------------+-----------+----------------+----------------
[] | (null) | (null) | (null)
[1] | 1 | [1] | [1]
[[1, 2, 3]] | [1, 2, 3] | [[1, 2, 3]] | [1, 2, 3]
[{"a": 1}] | {"a": 1} | [{"a": 1}] | {"a": 1}
[1, null, "2"] | (null) | [1, null, "2"] | [1, null, "2"]
(5 rows)
Compare quote handling for scalar types with and without the OMIT QUOTES clause:
SELECT JSON_QUERY(jsonb '"aaa"', '$' RETURNING text);
 json_query
------------
 "aaa"
(1 row)

SELECT JSON_QUERY(jsonb '"aaa"', '$' RETURNING text OMIT QUOTES);
 json_query
------------
 aaa
(1 row)
IS JSON

expression IS [ NOT ] JSON
    [ { VALUE | SCALAR | ARRAY | OBJECT } ]
    [ { WITH | WITHOUT } UNIQUE [ KEYS ] ]
The IS JSON predicate tests whether the provided value is valid
JSON data. If you provide a specific JSON data type as a parameter,
you can check whether the value belongs to this type.
You can also use this predicate in the IS NOT JSON form.
The return values are:
t if the value satisfies the specified condition.
f if the value does not satisfy the specified condition.
expression
The input clause defining the value to test. You can provide the values
of json, jsonb,
bytea, or character string types.
VALUE | SCALAR | ARRAY | OBJECT
Specifies the JSON data type to test for:
VALUE (default) — any JSON type.
SCALAR — JSON number, string, or boolean.
ARRAY — JSON array.
OBJECT — JSON object.
{ WITH | WITHOUT } UNIQUE [ KEYS ]
Defines whether duplicate keys are allowed:
WITHOUT (default) — the
JSON object can contain duplicate keys.
WITH — duplicate keys are not allowed.
If the input data contains duplicate keys, it is considered to be invalid JSON.
Optionally, you can add the KEYS keyword for semantic clarity.
Compare the result returned by the IS JSON
predicate for different data types:
SELECT
js,
js IS JSON "is json",
js IS NOT JSON "is not json",
js IS JSON SCALAR "is scalar",
js IS JSON OBJECT "is object",
js IS JSON ARRAY "is array"
FROM
(VALUES ('123'), ('"abc"'), ('{"a": "b"}'), ('[1,2]'), ('abc')) foo(js);
js | is json | is not json | is scalar | is object | is array
------------+---------+-------------+-----------+-----------+----------
123 | t | f | t | f | f
"abc" | t | f | t | f | f
{"a": "b"} | t | f | f | t | f
[1,2] | t | f | f | f | t
abc | f | t | f | f | f
(5 rows)
JSON_TABLE

JSON_TABLE (
    context_item, path_expression [ AS json_path_name ] [ PASSING { value AS varname } [, ...] ]
    COLUMNS ( json_table_column [, ...] )
    [ PLAN ( json_table_plan ) |
      PLAN DEFAULT ( { INNER | OUTER } [ , { CROSS | UNION } ] | { CROSS | UNION } [ , { INNER | OUTER } ] ) ]
)

where json_table_column is:

    name type [ PATH json_path_specification ]
        [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ] WRAPPER ]
        [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ]
        [ { ERROR | NULL | DEFAULT expression } ON EMPTY ]
        [ { ERROR | NULL | DEFAULT expression } ON ERROR ]
  | name type FORMAT json_representation [ PATH json_path_specification ]
        [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ] WRAPPER ]
        [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ]
        [ { ERROR | NULL | EMPTY { ARRAY | OBJECT } | DEFAULT expression } ON EMPTY ]
        [ { ERROR | NULL | EMPTY { ARRAY | OBJECT } | DEFAULT expression } ON ERROR ]
  | name type EXISTS [ PATH json_path_specification ]
        [ { ERROR | TRUE | FALSE | UNKNOWN } ON ERROR ]
  | NESTED PATH json_path_specification [ AS path_name ] COLUMNS ( json_table_column [, ...] )
  | name FOR ORDINALITY

json_table_plan is:

    json_path_name [ { OUTER | INNER } json_table_plan_primary ]
  | json_table_plan_primary { UNION json_table_plan_primary } [...]
  | json_table_plan_primary { CROSS json_table_plan_primary } [...]

json_table_plan_primary is:

    json_path_name | ( json_table_plan )
The JSON_TABLE function queries JSON data
and presents the results as a relational view, which can be accessed as a
regular SQL table. You can only use JSON_TABLE inside the
FROM clause of the SELECT statement
for an SQL table.
Taking JSON data as input, JSON_TABLE uses
a path expression to extract a part of the provided data that
will be used as a row pattern for the
constructed view. Each SQL/JSON item at the top level of the row pattern serves
as the source for a separate row in the constructed relational view.
To split the row pattern into columns, JSON_TABLE
provides the COLUMNS clause that defines the
schema of the created view. For each column to be constructed,
this clause provides a separate path expression that evaluates
the row pattern, extracts a JSON item, and returns it as a
separate SQL value for the specified column. If the required value
is stored in a nested level of the row pattern, it can be extracted
using the NESTED PATH subclause. Joining the
columns returned by NESTED PATH can add multiple
new rows to the constructed view. Such rows are called
child rows, as opposed to the parent row
that generates them.
The rows produced by JSON_TABLE are laterally
joined to the row that generated them, so you do not have to explicitly join
the constructed view with the original table holding JSON
data. Optionally, you can specify how to join the columns returned
by NESTED PATH using the PLAN clause.
Each NESTED PATH clause can generate one or more
columns, which are considered to be siblings
to each other. In relation to the columns returned directly from the row
expression or by the NESTED PATH clause of a
higher level, these columns are child columns.
Sibling columns are always joined first. Once they are processed,
the resulting rows are joined to the parent row.
context_item, path_expression [ AS json_path_name ] [ PASSING { value AS varname } [, ...]]
The input data to query, the JSON path expression defining the query,
and an optional PASSING clause, as described in
Section 9.16.3.4.1. The result of the input data
evaluation is called the row pattern. The row
pattern is used as the source for row values in the constructed view.
COLUMNS( json_table_column [, ...] )
The COLUMNS clause defining the schema of the
constructed view. In this clause, you must specify all the columns
to be filled with SQL/JSON items.
The json_table_column
expression has the following syntax variants:
name type
[ PATH json_path_specification ]
Inserts a single SQL/JSON item into each row of the specified column.
The provided PATH expression parses the
row pattern defined by json_api_common_syntax
and fills the column with produced SQL/JSON items, one for each row.
If the PATH expression is omitted,
JSON_TABLE uses the
$.name path expression,
where name is the provided column name.
In this case, the column name must correspond to one of the
keys within the SQL/JSON item produced by the row pattern.
Internally, JSON_VALUE (Section 9.16.3.2.2) and JSON_QUERY (Section 9.16.3.2.3) are used to produce the resulting values: JSON_QUERY is used for JSON, array, and composite column types, and JSON_VALUE is used for other types.
Optionally, you can add ON EMPTY and
ON ERROR clauses to define how to handle missing values
or structural errors.
WRAPPER and QUOTES clauses can only
be used with JSON, array, and composite types.
These clauses have the same syntax and semantics as in
Section 9.16.3.2.2 and
Section 9.16.3.2.3.
name type FORMAT json_representation
[ PATH json_path_specification ]
Generates a column and inserts a composite SQL/JSON item into each row of this column.
The provided PATH expression parses the
row pattern defined by json_api_common_syntax
and fills the column with produced SQL/JSON items, one for each row.
If the PATH expression is omitted,
JSON_TABLE uses the
$.name path expression,
where name is the provided column name.
In this case, the column name must correspond to one of the
keys within the SQL/JSON item produced by the row pattern.
Internally, JSON_QUERY (Section 9.16.3.2.3) is used to produce the resulting values.
Optionally, you can add WRAPPER, QUOTES,
ON EMPTY and ON ERROR clauses
to define additional settings for the returned SQL/JSON items.
These clauses have the same syntax and semantics as
in Section 9.16.3.2.3.
name type
EXISTS [ PATH json_path_specification ]
Generates a column and inserts a boolean item into each row of this column.
The provided PATH expression parses the
row pattern defined by json_api_common_syntax,
checks whether any SQL/JSON items were returned, and fills the column with
resulting boolean value, one for each row.
The specified type should have a cast from
boolean.
If the PATH expression is omitted,
JSON_TABLE uses the
$.name path expression,
where name is the provided column name.
Optionally, you can add the ON ERROR clause to define
the error behavior. This clause has the same syntax and semantics as in
Section 9.16.3.2.1.
NESTED PATH json_path_specification [ AS json_path_name ]
COLUMNS ( json_table_column [, ...] )
Extracts SQL/JSON items from nested levels of the row pattern,
generates one or more columns as defined by the COLUMNS
subclause, and inserts the extracted SQL/JSON items into each row of these columns.
The json_table_column expression in the
COLUMNS subclause uses the same syntax as in the
parent COLUMNS clause.
The NESTED PATH syntax is recursive,
so you can go down multiple nested levels by specifying several
NESTED PATH subclauses within each other.
This allows you to unnest the hierarchy of JSON objects and arrays
in a single function invocation rather than chaining several
JSON_TABLE expressions in an SQL statement.
You can use the PLAN clause to define how
to join the columns returned by NESTED PATH clauses.
name FOR ORDINALITY
Adds an ordinality column that provides sequential row numbering.
You can have only one ordinality column per table. Row numbering
is 1-based. For child rows that result from the NESTED PATH
clauses, the parent row number is repeated.
AS json_path_name
The optional json_path_name serves as an
identifier of the provided json_path_specification.
The path name must be unique and distinct from the column names.
When using the PLAN clause, you must specify the names
for all the paths, including the row pattern. Each path name can appear in
the PLAN clause only once.
PLAN ( json_table_plan )
Defines how to join the data returned by NESTED PATH
clauses to the constructed view.
To join columns with parent/child relationship, you can use:
INNER
Use INNER JOIN, so that the parent row
is omitted from the output if it does not have any child rows
after joining the data returned by NESTED PATH.
OUTER
Use LEFT OUTER JOIN, so that the parent row
is always included in the output even if it does not have any child rows
after joining the data returned by NESTED PATH, with NULL values
inserted into the child columns if the corresponding
values are missing.
This is the default option for joining columns with parent/child relationship.
To join sibling columns, you can use:
UNION
Use FULL OUTER JOIN ON FALSE, so that both parent and child
rows are included in the output, with NULL values inserted
into both child and parent columns for all missing values.
This is the default option for joining sibling columns.
CROSS
Use CROSS JOIN, so that the output includes
a row for every possible combination of rows from the left-hand
and the right-hand columns.
PLAN DEFAULT ( option [, ... ] )
Overrides the default joining plans. The INNER and
OUTER options define the joining plan for parent/child
columns, while UNION and CROSS
affect the sibling columns. You can override the default plans for all columns at once.
Even though the path names are not included in the PLAN DEFAULT
clause, they must be provided for all the paths to conform to
the SQL/JSON standard.
Query the my_films table holding
some JSON data about the films and create a view that
distributes the film genre, title, and director between separate columns:
SELECT jt.* FROM
my_films,
JSON_TABLE ( js, '$.favorites[*]' COLUMNS (
id FOR ORDINALITY,
kind text PATH '$.kind',
NESTED PATH '$.films[*]' COLUMNS (
title text PATH '$.title',
director text PATH '$.director'))) AS jt;
id | kind | title | director
----+----------+------------------+-------------------
1 | comedy | Bananas | Woody Allen
1 | comedy | The Dinner Game | Francis Veber
2 | horror | Psycho | Alfred Hitchcock
3 | thriller | Vertigo | Alfred Hitchcock
4 | drama | Yojimbo | Akira Kurosawa
(5 rows)
Find a director that has done films in two different genres:
SELECT
director1 AS director, title1, kind1, title2, kind2
FROM
my_films,
JSON_TABLE ( js, '$.favorites' AS favs COLUMNS (
NESTED PATH '$[*]' AS films1 COLUMNS (
kind1 text PATH '$.kind',
NESTED PATH '$.films[*]' AS film1 COLUMNS (
title1 text PATH '$.title',
director1 text PATH '$.director')
),
NESTED PATH '$[*]' AS films2 COLUMNS (
kind2 text PATH '$.kind',
NESTED PATH '$.films[*]' AS film2 COLUMNS (
title2 text PATH '$.title',
director2 text PATH '$.director'
)
)
)
PLAN (favs OUTER ((films1 INNER film1) CROSS (films2 INNER film2)))
) AS jt
WHERE kind1 > kind2 AND director1 = director2;
JSON_SERIALIZE

JSON_SERIALIZE ( expression [ FORMAT JSON [ ENCODING UTF8 ] ] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ] ] )
The JSON_SERIALIZE function transforms an SQL/JSON value
into a character or binary string.
expression [ FORMAT JSON [ ENCODING UTF8 ] ]
An expression that provides the data for serialization.
Accepted types are the JSON types (json and
jsonb), any character string type (text,
char, etc.), and binary strings (bytea) in
UTF8 encoding.
For null input, a null value is returned.
The optional FORMAT clause is provided to conform
to the SQL/JSON standard.
RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ]
The output clause that specifies the target character or binary string
type (text, char, bytea, etc.).
Alternatively, you can construct JSON values simply
using PostgreSQL-specific casts to
json and jsonb types.
Construct serialized JSON using the provided strings:
SELECT JSON_SERIALIZE(JSON_SCALAR('foo'));
json_serialize
----------------
"foo"
(1 row)
SELECT JSON_SERIALIZE('{"foo": "bar", "baz": [1, 2]}' RETURNING bytea);
json_serialize
--------------------------------------------------------------
\x7b22666f6f223a2022626172222c202262617a223a205b312c20325d7d
(1 row)
context_item, path_expression
[ PASSING { value AS varname } [, ...]]
The input clause specifies the JSON data to query and the exact query path to be passed to SQL/JSON query functions:
The context_item is the JSON data to query.
Currently for functions JSON_VALUE,
JSON_EXISTS, and JSON_QUERY
this must be a value of type jsonb.
The path_expression is an SQL/JSON path
expression that specifies the items to be retrieved from the JSON
data. For details on path expression syntax, see
Section 9.16.2.
The optional PASSING clause provides the values for
the named variables used in the SQL/JSON path expression.
The input clause is common for all SQL/JSON query functions.
RETURNING data_type [ FORMAT JSON [ ENCODING UTF8 ] ]
The output clause that specifies the return type of the generated
JSON object. Out of the box, PostgreSQL
supports the following types: json, jsonb,
bytea, and character string types (text, char,
varchar, and nchar).
To use another type, you must create a cast from json to that type.
By default, the json type is returned.
The optional FORMAT clause is provided to conform to the SQL/JSON standard.
The output clause is common for both constructor and query SQL/JSON functions.