Reorganize Tips & Tricks after #94
This commit is contained in:
parent
80936d9886
commit
4d7940a5d3
5 changed files with 216 additions and 212 deletions
@@ -6,6 +6,7 @@ Only accretive/fixative changes will be made from now on.

The following changes have been committed to the **master** branch since the 1.0.384 release:

* Split **Tips & Tricks** into its own page, with a whole new section on using JSON data types with PostgreSQL (#94 -- thank you @vharmain).
* Bump dependencies to latest.

## Stable Builds
@@ -231,213 +231,4 @@ By default, for compatibility with their default adapter (`clojure.java.jdbc`),

;;=> #:CHARACTERS{:ID 1, :NAME "Westley", :SPECIALTY "love", :CREATED_AT #inst "2019-09-27T18:52:54.413000000-00:00"}
```

## Tips & Tricks

This section will accrue various tips and tricks that make it easier to use `next.jdbc` with a variety of databases. It will be mostly organized by database, but there are a few that are cross-database and those are listed first.

### CLOB & BLOB SQL Types

Columns declared with the `CLOB` or `BLOB` SQL types are typically rendered into Clojure result sets as database-specific custom types but they will implement `java.sql.Clob` or `java.sql.Blob` (as appropriate). In general, you can only read the data out of those Java objects during the current transaction, which effectively means that you need to do it either inside the reduction (for `plan`) or inside the result set builder (for `execute!` or `execute-one!`). If you always treat these types the same way for all columns across the whole of your application, you could simply extend `next.jdbc.result-set/ReadableColumn` to `java.sql.Clob` (and/or `java.sql.Blob`). Here's an example for reading `CLOB` into a `String`:

```clojure
(extend-protocol rs/ReadableColumn
  java.sql.Clob
  (read-column-by-label ^String [^java.sql.Clob v _]
    (with-open [rdr (.getCharacterStream v)] (slurp rdr)))
  (read-column-by-index ^String [^java.sql.Clob v _2 _3]
    (with-open [rdr (.getCharacterStream v)] (slurp rdr))))
```

There is a helper in `next.jdbc.result-set` to make this easier -- `clob->string`:

```clojure
(extend-protocol rs/ReadableColumn
  java.sql.Clob
  (read-column-by-label ^String [^java.sql.Clob v _]
    (clob->string v))
  (read-column-by-index ^String [^java.sql.Clob v _2 _3]
    (clob->string v)))
```

As noted in [Result Set Builders](/doc/result-set-builders.md), there is also `clob-column-reader` that can be used with the `as-*-adapter` result set builder functions.

No helper or column reader is provided for `BLOB` data since it is expected that the semantics of any given binary data will be application-specific. For a raw `byte[]` you could probably use:

```clojure
(.getBytes v 1 (.length v)) ; BLOB has 1-based byte index!
```

Consult the [java.sql.Blob documentation](https://docs.oracle.com/javase/8/docs/api/java/sql/Blob.html) for more ways to process it.

### MS SQL Server

By default, you won't get table names as qualifiers with Microsoft's JDBC driver (you might with the jTDS driver -- I haven't tried that recently). See this [MSDN forum post about `.getTableName()`](https://social.msdn.microsoft.com/Forums/sqlserver/en-US/55e8cbb2-b11c-446e-93ab-dc30658caf99/resultsetmetadatagettablename-returns-instead-of-table-name) for details. According to one of the answers posted there, if you specify `:result-type` and `:concurrency` in the options for `execute!`, `execute-one!`, `plan`, or `prepare`, that will cause SQL Server to return table names for columns. `:result-type` needs to be `:scroll-sensitive` or `:scroll-insensitive` for this to work. `:concurrency` can be `:read-only` or `:updatable`.

### MySQL

MySQL generally stores tables as files so they are case-sensitive if your O/S is (Linux) or case-insensitive if your O/S is not (Mac, Windows), but column names are generally case-insensitive. This can matter if you use `next.jdbc.result-set/as-lower-maps` because that will lower-case the table names (as well as the column names), so if you are round-tripping based on the keys you get back, you may produce an incorrect table name in terms of case. You'll also need to be careful about `:table-fn`/`:column-fn` because of this.

It's also worth noting that column comparisons are case-insensitive so `WHERE foo = 'BAR'` will match `"bar"` or `"BAR"` etc.

### Oracle

Ah, dear old Oracle! Over the years of maintaining `clojure.java.jdbc` and now `next.jdbc`, I've had all sorts of bizarre and non-standard behavior reported from Oracle users. The main issue I'm aware of with `next.jdbc` is that Oracle's JDBC drivers all return an empty string from `ResultSetMetaData.getTableName()` so you won't get qualified keywords in the result set hash maps. Sorry!

### PostgreSQL

When you use `:return-keys true` with `execute!` or `execute-one!` (or you use `insert!`), PostgreSQL returns the entire inserted row (unlike nearly every other database, which just returns any generated keys!).

If you have a query where you want to select where a column is `IN` a sequence of values, you can use `col = ANY(?)` with a native array of the values instead of `IN (?,?,?,,,?)` and a sequence of values.

What does this mean for your use of `next.jdbc`? In `plan`, `execute!`, and `execute-one!`, you can use `col = ANY(?)` in the SQL string and a single primitive array parameter, such as `(int-array [1 2 3 4])`. That means that in `next.jdbc.sql`'s functions that take a where clause (`find-by-keys`, `update!`, and `delete!`) you can specify `["col = ANY(?)" (int-array data)]` for what would otherwise be a `col IN (?,?,?,,,?)` where clause requiring multiple values.

#### Streaming Result Sets

You can get PostgreSQL to stream very large result sets (when you are reducing over `plan`) by setting the following options:

* `:auto-commit false` -- when opening the connection
* `:fetch-size 4000, :concurrency :read-only, :cursors :close, :result-type :forward-only` -- when running `plan` (or when creating a `PreparedStatement`).

#### Working with json and jsonb

Postgres has [good support](https://www.postgresql.org/docs/current/datatype-json.html) for storing, querying and manipulating JSON data. Basic Clojure data structures (lists, vectors, and maps) transform pretty well to JSON data. With a little help, `next.jdbc` can automatically convert Clojure data to JSON and back for us.

First we define functions for JSON encoding and decoding. `:decode-key-fn` transforms all JSON keys (strings) to keywords. metosin/jsonista is used here but cheshire would work as well.

``` clojure
(require '[jsonista.core :as json])

(def mapper (json/object-mapper {:decode-key-fn keyword}))
(def ->json json/write-value-as-string)
(def <-json #(json/read-value % mapper))
```

Then we create helper functions to transform Clojure data to PGobjects containing JSON and vice versa.

``` clojure
(import '[org.postgresql.util PGobject])

(defn ->pgobject
  "Transforms Clojure data to a PGobject that contains the data as
  JSON. PGobject type defaults to `jsonb` but can be changed via
  metadata key `:pgtype`."
  [x]
  (let [pgtype (or (:pgtype (meta x)) "jsonb")]
    (doto (PGobject.)
      (.setType pgtype)
      (.setValue (->json x)))))

(defn <-pgobject
  "Transform PGobject containing `json` or `jsonb` value to Clojure
  data."
  [^org.postgresql.util.PGobject v]
  (let [type  (.getType v)
        value (.getValue v)]
    (if (#{"jsonb" "json"} type)
      (with-meta (<-json value) {:pgtype type})
      value)))
```

Finally we extend the `next.jdbc.prepare/SettableParameter` and `next.jdbc.result-set/ReadableColumn` protocols to make the conversion between Clojure data and PGobject JSON automatic.

``` clojure
(require '[next.jdbc.prepare :as prepare])
(require '[next.jdbc.result-set :as result-set])

;; if a SQL parameter is a Clojure map...
(extend-protocol prepare/SettableParameter
  clojure.lang.IPersistentMap
  (set-parameter [m s i]
    (.setObject s i (->pgobject m))))

;; ...or a vector, it'll be transformed to a PGobject for Postgres.
(extend-protocol prepare/SettableParameter
  clojure.lang.IPersistentVector
  (set-parameter [v s i]
    (.setObject s i (->pgobject v))))

;; if the result set contains PGobjects we'll convert them to Clojure data
;; while reading (if the column is either json or jsonb type)
(extend-protocol result-set/ReadableColumn
  org.postgresql.util.PGobject
  (read-column-by-label [^org.postgresql.util.PGobject v _]
    (<-pgobject v))
  (read-column-by-index [^org.postgresql.util.PGobject v _2 _3]
    (<-pgobject v)))
```

##### Inserting and querying json

Let's assume we have the following table:

``` sql
create table demo (
  id serial primary key,
  doc_jsonb jsonb,
  doc_json json
)
```

We can now insert Clojure data into json and jsonb fields:

``` clojure
(require '[next.jdbc :as jdbc])
(require '[next.jdbc.sql :as sql])

(def db { ...db-spec here... })
(def ds (jdbc/get-datasource db))

(def test-map
  {:some-key "some val" :nested {:a 1} :null-val nil :vector [1 2 3]})

(def data1
  {:doc_jsonb test-map
   :doc_json  (with-meta test-map {:pgtype "json"})})

(sql/insert! ds :demo data1)

(def test-vector
  [{:a 1} nil 2 "lalala" []])

(def data2
  {:doc_jsonb test-vector
   :doc_json  (with-meta test-vector {:pgtype "json"})})

(sql/insert! ds :demo data2)
```

And the result-set data is nicely transformed into Clojure data when querying:

``` clojure
(sql/get-by-id ds :demo 1)
=> #:demo{:id 1,
          :doc_json
          {:some-key "some val",
           :nested {:a 1},
           :vector [1 2 3],
           :null-val nil},
          :doc_jsonb
          {:some-key "some val",
           :nested {:a 1},
           :vector [1 2 3],
           :null-val nil}}

(sql/get-by-id ds :demo 2)
=> #:demo{:id 2,
          :doc_json [{:a 1} nil 2 "lalala" []],
          :doc_jsonb [{:a 1} nil 2 "lalala" []]}

;; Query by value of JSON field 'some-key'
(sql/query ds ["select id, doc_jsonb::json->'nested' as foo from demo where doc_jsonb::json->>'some-key' = ?" "some val"])
=> [{:demo/id 1, :foo {:a 1}}]
```

##### json or jsonb?

* `json` column stores JSON data as strings (reading and writing is fast but manipulation is slow, field order is preserved)
* `jsonb` column stores JSON data in binary format (manipulation is significantly faster but reading and writing is a little slower)

If you're unsure whether you want to use json or jsonb, use jsonb.

[<: Getting Started](/doc/getting-started.md) | [Result Set Builders :>](/doc/result-set-builders.md)
[<: Getting Started](/doc/getting-started.md) | [Tips & Tricks :>](/doc/tips-and-tricks.md)

@@ -116,7 +116,7 @@ user=>
```

Relying on the default result set builder -- and table-qualified column names -- is the recommended approach to take, if possible, with a few caveats:
* MS SQL Server produces unqualified column names by default (see [**Tips & Tricks**](/doc/getting-started/friendly-sql-functions.md#tips--tricks) for how to get table names back from MS SQL Server),
* MS SQL Server produces unqualified column names by default (see [**Tips & Tricks**](/doc/tips-and-tricks.md) for how to get table names back from MS SQL Server),
* Oracle's JDBC driver doesn't support `.getTableName()` so it will only produce unqualified column names (also mentioned in **Tips & Tricks**),
* If your SQL query joins tables in a way that produces duplicate column names, and you use unqualified column names, then those duplicated column names will conflict and you will get only one of them in your result -- use aliases in SQL (`as`) to make the column names distinct,
* If your SQL query joins a table to itself under different aliases, the _qualified_ column names will conflict because they are based on the underlying table name provided by the JDBC driver rather than the alias you used in your query -- again, use aliases in SQL to make those column names distinct.

@@ -115,4 +115,4 @@ Remember that a protocol extension will apply to all code running in your application

Note that the converse, converting Clojure values to database-specific types, is handled by the `SettableParameter` protocol, discussed in the next section ([Prepared Statements](/doc/prepared-statements.md#prepared-statement-parameters)).

[<: Friendly SQL Functions](/doc/friendly-sql-functions.md) | [Prepared Statements :>](/doc/prepared-statements.md)
[<: Tips & Tricks](/doc/tips-and-tricks.md) | [Prepared Statements :>](/doc/prepared-statements.md)

212 doc/tips-and-tricks.md Normal file

@@ -0,0 +1,212 @@
# Tips & Tricks

This page contains various tips and tricks that make it easier to use `next.jdbc` with a variety of databases. It is mostly organized by database, but there are a few that are cross-database and those are listed first.

## CLOB & BLOB SQL Types

Columns declared with the `CLOB` or `BLOB` SQL types are typically rendered into Clojure result sets as database-specific custom types but they will implement `java.sql.Clob` or `java.sql.Blob` (as appropriate). In general, you can only read the data out of those Java objects during the current transaction, which effectively means that you need to do it either inside the reduction (for `plan`) or inside the result set builder (for `execute!` or `execute-one!`). If you always treat these types the same way for all columns across the whole of your application, you could simply extend `next.jdbc.result-set/ReadableColumn` to `java.sql.Clob` (and/or `java.sql.Blob`). Here's an example for reading `CLOB` into a `String`:

```clojure
(extend-protocol rs/ReadableColumn
  java.sql.Clob
  (read-column-by-label ^String [^java.sql.Clob v _]
    (with-open [rdr (.getCharacterStream v)] (slurp rdr)))
  (read-column-by-index ^String [^java.sql.Clob v _2 _3]
    (with-open [rdr (.getCharacterStream v)] (slurp rdr))))
```

There is a helper in `next.jdbc.result-set` to make this easier -- `clob->string`:

```clojure
(extend-protocol rs/ReadableColumn
  java.sql.Clob
  (read-column-by-label ^String [^java.sql.Clob v _]
    (clob->string v))
  (read-column-by-index ^String [^java.sql.Clob v _2 _3]
    (clob->string v)))
```

As noted in [Result Set Builders](/doc/result-set-builders.md), there is also `clob-column-reader` that can be used with the `as-*-adapter` result set builder functions.

No helper or column reader is provided for `BLOB` data since it is expected that the semantics of any given binary data will be application-specific. For a raw `byte[]` you could probably use:

```clojure
(.getBytes v 1 (.length v)) ; BLOB has 1-based byte index!
```

Consult the [java.sql.Blob documentation](https://docs.oracle.com/javase/8/docs/api/java/sql/Blob.html) for more ways to process it.
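
If a raw `byte[]` happens to be the right representation for every `BLOB` column in your application, a sketch of such an extension (not something `next.jdbc` provides; `rs` is assumed to be an alias for `next.jdbc.result-set`, as in the `CLOB` examples above) might look like:

```clojure
(require '[next.jdbc.result-set :as rs])

;; Hypothetical: materialize every BLOB column as a byte array.
;; Only do this if byte[] semantics suit ALL your BLOB columns.
(extend-protocol rs/ReadableColumn
  java.sql.Blob
  (read-column-by-label [^java.sql.Blob v _]
    (.getBytes v 1 (.length v)))
  (read-column-by-index [^java.sql.Blob v _2 _3]
    (.getBytes v 1 (.length v))))
```

As with the `CLOB` examples, the read happens inside the result set builder, so it is safe even though the `Blob` itself is only valid during the current transaction.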

## MS SQL Server

By default, you won't get table names as qualifiers with Microsoft's JDBC driver (you might with the jTDS driver -- I haven't tried that recently). See this [MSDN forum post about `.getTableName()`](https://social.msdn.microsoft.com/Forums/sqlserver/en-US/55e8cbb2-b11c-446e-93ab-dc30658caf99/resultsetmetadatagettablename-returns-instead-of-table-name) for details. According to one of the answers posted there, if you specify `:result-type` and `:concurrency` in the options for `execute!`, `execute-one!`, `plan`, or `prepare`, that will cause SQL Server to return table names for columns. `:result-type` needs to be `:scroll-sensitive` or `:scroll-insensitive` for this to work. `:concurrency` can be `:read-only` or `:updatable`.

## MySQL

MySQL generally stores tables as files so they are case-sensitive if your O/S is (Linux) or case-insensitive if your O/S is not (Mac, Windows), but column names are generally case-insensitive. This can matter if you use `next.jdbc.result-set/as-lower-maps` because that will lower-case the table names (as well as the column names), so if you are round-tripping based on the keys you get back, you may produce an incorrect table name in terms of case. You'll also need to be careful about `:table-fn`/`:column-fn` because of this.

It's also worth noting that column comparisons are case-insensitive so `WHERE foo = 'BAR'` will match `"bar"` or `"BAR"` etc.

## Oracle

Ah, dear old Oracle! Over the years of maintaining `clojure.java.jdbc` and now `next.jdbc`, I've had all sorts of bizarre and non-standard behavior reported from Oracle users. The main issue I'm aware of with `next.jdbc` is that Oracle's JDBC drivers all return an empty string from `ResultSetMetaData.getTableName()` so you won't get qualified keywords in the result set hash maps. Sorry!

## PostgreSQL

When you use `:return-keys true` with `execute!` or `execute-one!` (or you use `insert!`), PostgreSQL returns the entire inserted row (unlike nearly every other database, which just returns any generated keys!).

If you have a query where you want to select where a column is `IN` a sequence of values, you can use `col = ANY(?)` with a native array of the values instead of `IN (?,?,?,,,?)` and a sequence of values.

What does this mean for your use of `next.jdbc`? In `plan`, `execute!`, and `execute-one!`, you can use `col = ANY(?)` in the SQL string and a single primitive array parameter, such as `(int-array [1 2 3 4])`. That means that in `next.jdbc.sql`'s functions that take a where clause (`find-by-keys`, `update!`, and `delete!`) you can specify `["col = ANY(?)" (int-array data)]` for what would otherwise be a `col IN (?,?,?,,,?)` where clause requiring multiple values.
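
As a sketch of the above (the datasource and the `address` table here are hypothetical -- substitute your own):

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

;; hypothetical PostgreSQL datasource:
(def ds (jdbc/get-datasource {:dbtype "postgresql" :dbname "example"}))

;; instead of building "id IN (?,?,?)" and passing three parameters,
;; pass one primitive array parameter with ANY(?):
(sql/find-by-keys ds :address ["id = ANY(?)" (int-array [1 2 3])])

;; the same trick works with execute! and plan:
(jdbc/execute! ds ["select * from address where id = ANY(?)"
                   (int-array [1 2 3])])
```

A nice property of this approach is that the SQL string stays constant regardless of how many values you match against, which is friendlier to prepared statement caching.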

### Streaming Result Sets

You can get PostgreSQL to stream very large result sets (when you are reducing over `plan`) by setting the following options:

* `:auto-commit false` -- when opening the connection
* `:fetch-size 4000, :concurrency :read-only, :cursors :close, :result-type :forward-only` -- when running `plan` (or when creating a `PreparedStatement`).
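
Putting those options together, a sketch might look like this (the db-spec, table, and `weight` column are hypothetical placeholders):

```clojure
(require '[next.jdbc :as jdbc])

;; hypothetical db-spec -- adjust for your environment:
(def db-spec {:dbtype "postgresql" :dbname "example"})

;; open the connection with :auto-commit false, then pass the
;; streaming-friendly options to plan and reduce over the rows:
(with-open [con (jdbc/get-connection db-spec {:auto-commit false})]
  (reduce (fn [total row] (+ total (:weight row)))
          0
          (jdbc/plan con
                     ["select weight from massive_table"]
                     {:fetch-size  4000
                      :concurrency :read-only
                      :cursors     :close
                      :result-type :forward-only})))
```

Because `plan` is reducible, only `:fetch-size` rows at a time are held in memory, rather than the whole result set.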

### Working with JSON and JSONB

PostgreSQL has good support for [storing, querying and manipulating JSON data](https://www.postgresql.org/docs/current/datatype-json.html). Basic Clojure data structures (lists, vectors, and maps) transform pretty well to JSON data. With a little help, `next.jdbc` can automatically convert Clojure data to JSON and back for us.

First we define functions for JSON encoding and decoding. We're using [metosin/jsonista](https://github.com/metosin/jsonista) in these examples but you could use any JSON library, such as [Cheshire](https://github.com/dakrone/cheshire) or [clojure.data.json](https://github.com/clojure/data.json).

```clojure
(require '[jsonista.core :as json])

;; :decode-key-fn here specifies that JSON keys will become keywords:
(def mapper (json/object-mapper {:decode-key-fn keyword}))
(def ->json json/write-value-as-string)
(def <-json #(json/read-value % mapper))
```
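
A quick sanity check of these helpers at the REPL (assuming `jsonista` is on your classpath and the definitions above have been evaluated):

```clojure
(->json {:a 1})
;;=> "{\"a\":1}"

(<-json "{\"a\":1}")
;;=> {:a 1}
```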

Next we create helper functions to transform Clojure data to and from PostgreSQL objects containing JSON:

```clojure
(import '(org.postgresql.util PGobject))

(defn ->pgobject
  "Transforms Clojure data to a PGobject that contains the data as
  JSON. PGobject type defaults to `jsonb` but can be changed via
  metadata key `:pgtype`."
  [x]
  (let [pgtype (or (:pgtype (meta x)) "jsonb")]
    (doto (PGobject.)
      (.setType pgtype)
      (.setValue (->json x)))))

(defn <-pgobject
  "Transform PGobject containing `json` or `jsonb` value to Clojure
  data."
  [^org.postgresql.util.PGobject v]
  (let [type  (.getType v)
        value (.getValue v)]
    (if (#{"jsonb" "json"} type)
      (with-meta (<-json value) {:pgtype type})
      value)))
```

Finally we extend the `next.jdbc.prepare/SettableParameter` and `next.jdbc.result-set/ReadableColumn` protocols to make the conversion between Clojure data and PGobject JSON automatic:

```clojure
(require '[next.jdbc.prepare :as prepare])
(require '[next.jdbc.result-set :as rs])

;; if a SQL parameter is a Clojure hash map or vector, it'll be transformed
;; to a PGobject for JSON/JSONB:
(extend-protocol prepare/SettableParameter
  clojure.lang.IPersistentMap
  (set-parameter [m s i]
    (.setObject s i (->pgobject m)))

  clojure.lang.IPersistentVector
  (set-parameter [v s i]
    (.setObject s i (->pgobject v))))

;; if a row contains a PGobject then we'll convert it to Clojure data
;; while reading (if the column is either "json" or "jsonb" type):
(extend-protocol rs/ReadableColumn
  org.postgresql.util.PGobject
  (read-column-by-label [^org.postgresql.util.PGobject v _]
    (<-pgobject v))
  (read-column-by-index [^org.postgresql.util.PGobject v _2 _3]
    (<-pgobject v)))
```

#### Inserting and Querying JSON

Let's assume we have the following table:

``` sql
create table demo (
  id serial primary key,
  doc_jsonb jsonb,
  doc_json json
)
```

We can now insert Clojure data into json and jsonb fields:

```clojure
(require '[next.jdbc :as jdbc])
(require '[next.jdbc.sql :as sql])

(def db { ...db-spec here... })
(def ds (jdbc/get-datasource db))

(def test-map
  {:some-key "some val" :nested {:a 1} :null-val nil :vector [1 2 3]})

(def data1
  {:doc_jsonb test-map
   :doc_json  (with-meta test-map {:pgtype "json"})})

(sql/insert! ds :demo data1)

(def test-vector
  [{:a 1} nil 2 "lalala" []])

(def data2
  {:doc_jsonb test-vector
   :doc_json  (with-meta test-vector {:pgtype "json"})})

(sql/insert! ds :demo data2)
```

And those columns are nicely transformed into Clojure data when querying:

```clojure
(sql/get-by-id ds :demo 1)
=> #:demo{:id 1,
          :doc_json
          {:some-key "some val",
           :nested {:a 1},
           :vector [1 2 3],
           :null-val nil},
          :doc_jsonb
          {:some-key "some val",
           :nested {:a 1},
           :vector [1 2 3],
           :null-val nil}}

(sql/get-by-id ds :demo 2)
=> #:demo{:id 2,
          :doc_json [{:a 1} nil 2 "lalala" []],
          :doc_jsonb [{:a 1} nil 2 "lalala" []]}

;; Query by value of JSON field 'some-key'
(sql/query ds [(str "select id, doc_jsonb::json->'nested' as foo"
                    " from demo where doc_jsonb::json->>'some-key' = ?")
               "some val"])
=> [{:demo/id 1, :foo {:a 1}}]
```

#### JSON or JSONB?

* A `json` column stores JSON data as strings (reading and writing is fast but manipulation is slow, field order is preserved)
* A `jsonb` column stores JSON data in binary format (manipulation is significantly faster but reading and writing is a little slower)

If you're unsure whether you want to use json or jsonb, use jsonb.

[<: Friendly SQL Functions](/doc/friendly-sql-functions.md) | [Result Set Builders :>](/doc/result-set-builders.md)