Improve docs for column name conflicts

Sean Corfield 2019-11-13 10:04:52 -08:00
parent 6450bfa0f2
commit 5e52d30927
2 changed files with 7 additions and 1 deletions

@@ -11,7 +11,7 @@ The following changes have been committed to the **master** branch since the 1.0
* Clarify what `execute!` and `execute-one!` produce when the result set is empty (`[]` and `nil` respectively, and there are now tests for this). Similarly for `find-by-keys` and `get-by-id` (see the sketch after this list).
* Add **MS SQL Server** section to **Tips & Tricks** to note that it returns an empty string for table names by default (so table-qualified column names are not available). Using the `:result-type` (scroll) and `:concurrency` options will cause table names to be returned.
* Clarify that **Friendly SQL Functions** are deliberately simple (hint: they will not be enhanced or expanded -- use `plan`, `execute!`, and `execute-one!` instead, with a DSL library if you want!).
* Improve migration docs: explicitly recommend the use of a datasource for code that needs to work with both `clojure.java.jdbc` and `next.jdbc`; add caveats about column name conflicts (in several places).
* Improve `datafy`/`nav` documentation around `:schema`.
* Update `org.clojure/java.data` to `"0.1.4"` (0.1.2 fixes a number of reflection warnings).
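
A minimal sketch of that empty-result behavior, assuming a `ds` datasource and an `address` table (as in the Getting Started examples) with no row whose `id` is -1:

```clojure
(require '[next.jdbc :as jdbc])

;; assumes `ds` is a datasource and `address` has no row with id -1

;; execute! returns an empty vector when no rows match:
(jdbc/execute! ds ["select * from address where id = ?" -1])
;;=> []

;; execute-one! returns nil when no rows match:
(jdbc/execute-one! ds ["select * from address where id = ?" -1])
;;=> nil
```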

@@ -115,6 +115,12 @@ user=> (jdbc/execute-one! ds ["select * from address where id = ?" 2]
user=>
```
Relying on the default result set builder -- and table-qualified column names -- is the recommended approach to take, if possible, with a few caveats:
* MS SQL Server produces unqualified column names by default (see [**Tips & Tricks**](/doc/getting-started/friendly-sql-functions#tips--tricks) for how to get table names back from MS SQL Server),
* Oracle's JDBC driver doesn't support `.getTableName()` so it will only produce unqualified column names (also mentioned in **Tips & Tricks**),
* If your SQL query joins tables in a way that produces duplicate column names, and you use unqualified column names, then those duplicated column names will conflict and you will get only one of them in your result -- use aliases in SQL (`as`) to make the column names distinct,
* If your SQL query joins a table to itself under different aliases, the _qualified_ column names will conflict because they are based on the underlying table name provided by the JDBC driver rather than the alias you used in your query -- again, use aliases in SQL to make those column names distinct (see the sketch below).
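
As a sketch of those last two caveats, here is how SQL column aliases might be used to keep the results distinct; `ds` is a datasource as in the examples above, and the `contact` table and `parent_id` column are hypothetical:

```clojure
(require '[next.jdbc :as jdbc])

;; hypothetical schema: `address` and `contact` tables, each with a `name` column

;; Two unqualified `name` columns would collide and only one would survive in
;; each result map; aliasing keeps both:
(jdbc/execute! ds
  ["select a.name as address_name, c.name as contact_name
      from address a
      join contact c on c.address_id = a.id"])

;; A self-join (hypothetical `parent_id` column): both sides report the same
;; underlying table name, so even qualified keys would collide without aliases:
(jdbc/execute! ds
  ["select p.id as parent_id, ch.id as child_id
      from address p
      join address ch on ch.parent_id = p.id"])
```
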
### `plan` & Reducing Result Sets

While those functions are fine for retrieving result sets as data, most of the time you want to process that data efficiently, so `next.jdbc` provides a SQL execution function that works with `reduce` and with transducers to consume the result set without the intermediate overhead of creating Clojure data structures for every row:
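
For instance, a minimal sketch of reducing over `plan`, assuming the `ds` datasource and `address` table from the examples above:

```clojure
(require '[next.jdbc :as jdbc])

;; assumes `ds` is a datasource and `address` has `id` and `name` columns

;; Sum a column by reducing directly over the result set; each row supports
;; keyword access without being realized as a full Clojure hash map:
(reduce (fn [total row] (+ total (:id row)))
        0
        (jdbc/plan ds ["select id from address"]))

;; Or use a transducer to collect just one column:
(into [] (map :name) (jdbc/plan ds ["select name from address"]))
```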