A response to Lukas Eder's "JDBC improvements and convenience method suggestions" post.
Before tackling each point individually, the theme of them all is 'make JDBC easier for end users', and I'm not so sure this is worthwhile, both design-wise and pragmatically.
JDBC is what it is. It contains a bunch of dubious design choices that cannot be revisited without breaking backwards compatibility for callers (JDBC as an interface has given itself a free pass to break compat for implementers).
The problem with shipping really nice APIs for moving targets in the core is that the core is generally not the place to start adding dueling implementations, not the place for code that may want to (lightly) break backwards compatibility, and not the place for implementations that trade expressive power away in favour of being nicer. Core libraries have to expose as much as possible, even things that almost nobody needs. A library can make a different tradeoff; the few for whom the tradeoff is a step too far simply use a different library or, worst case, write against the core layer directly, write their own library, or even fork it.
These attempts to make JDBC nicer to use really want to be opinionated, and quite a few of them require a design choice that doesn't feel universal to me. And, of course, the 'old' setX methods have to stay: not just because removing them would be backwards incompatible, but because they also expose certain DB engine features that could plausibly be desired. I'll cover those when I tackle the individual points.
Look at JOOQ and JDBI - these are very complex pieces of work that probably have no business being in the core. I remember the sheer amount of grey hairs that turning joda-time into JSR 310 gave Stephen Colebourne, and that job seems a few ballparks simpler than this one.
I'd instead venture in quite a different direction: update the JDBC docs to further highlight that it really isn't meant to be used directly. You can, but that is like writing machine code for a CPU. You can do that, but the API is really written for middleware authors. The target audience for JDBC is you, and the hibernate team, and so on. Right now I see a ton of java users (e.g. via StackOverflow) use JDBC without any additional libraries. It would help if the docs were clearer in spelling out that you probably don't wanna do that. Neatly solves the problem and makes your entire post irrelevant.
- Add java.sql.Freeable
Yeah, sure. First, though, I'd like to see whether that close() business in the oracle driver has been addressed by now.
- Add PreparedStatement.set(Object...)
- Add Connection.executeXYZ() methods
You're attempting to reduce the number of method calls required to run a query, but this is only going halfway.
The rest of your post sort of half assumes that it goes further, so now I'm not sure what you are proposing. Instead of:
try (var ps = con.prepareStatement(...)) {
  ps.set(1, 2, 3);
  ps.executeUpdate();
}

why not:
try (var ps = con.prepareStatement(...)) {
  ps.executeUpdate(1, 2, 3);
}

Nevermind the set method. This saves you a call.
You could go even further, as you do later on:
try (var rs = con.executeQuery("SQL HERE", 1, 2, 3)) {
  // query rs (ResultSet) here
}

But there are reasons to go via the intermediary of a PreparedStatement (re-use, for one), and it would be odd if it were an all-or-nothing affair (either this last snippet, or old-style - nothing in between).
Hence: executeUpdate(paramVarArgs) on PreparedStatement, and also on Connection itself (executeUpdate(String, Object...)), but no set. set unlocks nothing fundamental, and I think it's worthwhile to guard against the PreparedStatement type growing even more methods; it already has far too many.
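For what it's worth, the varargs variant could plausibly ship as a default method on top of the existing setObject, so drivers wouldn't need to change. A minimal sketch under that assumption, using a hypothetical stand-in interface and an in-memory fake instead of a real driver (the real method would also declare throws SQLException):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the relevant slice of PreparedStatement.
interface VarargsStatement {
    void setObject(int index, Object value);
    int executeUpdate();

    // The proposed convenience method, expressible as a default method.
    default int executeUpdate(Object... params) {
        for (int i = 0; i < params.length; i++) {
            setObject(i + 1, params[i]); // JDBC parameters are 1-indexed
        }
        return executeUpdate();
    }
}

// In-memory fake that just records the calls, to show the call order.
class RecordingStatement implements VarargsStatement {
    final List<String> calls = new ArrayList<>();
    public void setObject(int i, Object v) { calls.add("set " + i + "=" + v); }
    public int executeUpdate() { calls.add("execute"); return 1; }
}
```

Of course, leaning on setObject already smuggles in runtime typing, which is exactly the tension with JDBC's compile-time-typed design.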
But this gets to a more fundamental issue, which is that this doesn't fit with the spirit of JDBC.
JDBC is fundamentally designed around compile-time typing. You don't call .set(5, someString); you call .setString(). This goes to ridiculous lengths: .setNull requires that you tell the system what type you want that null to represent. The postgres driver, at least, is picky about this too: you have to pass the right constant from java.sql.Types.
Experience from writing middle layers (not sure how JOOQ does it): I have something like this in my JDBI-esque library, and it 'fixes' the null issue by replacing the ? in the SQL string with the literal NULL. For the rest, a giant chain of if (x instanceof ...) blocks: if x is a CharSequence, we go for ps.setString(), and so on.
I like this (otherwise I wouldn't have written it), but this 'simple' change request is fundamentally rewriting what JDBC is supposed to be: It changes things from being 'write-typed' to being 'run-typed'.
With a middleware layer I can stomach it (the middleware layer simply says it is 'runtime typed', and the fact that it does this by doing a boatload of instanceof checks / runtime Map<Class<?>, Something> lookups is an implementation detail the user of the middleware needn't be concerned with). But adding it directly to the core JDBC classes (Connection, PreparedStatement, and ResultSet) means the API becomes schizophrenic (because the old set methods can't go away).
We can't even deprecate those set methods: Plausibly, a db engine can give you better performance if you use the compile-time-typed variants.
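To make the 'run-typed' dispatch concrete: the chain described above boils down to something like the following. It is reduced here to a pure classifier that names the setter it would pick, so the sketch runs without a driver; the real code calls ps.setString() and friends instead, and the set of branches is my assumption.

```java
import java.time.LocalDate;

// Hypothetical sketch of a middleware's runtime type dispatch. A real
// binder would invoke the matching PreparedStatement setter; this version
// only reports which one it would choose.
class Binder {
    static String setterFor(Object v) {
        if (v == null) return "splice literal NULL into the SQL";
        if (v instanceof CharSequence) return "setString";
        if (v instanceof Integer) return "setInt";
        if (v instanceof Long) return "setLong";
        if (v instanceof Boolean) return "setBoolean";
        if (v instanceof LocalDate) return "setObject"; // driver maps it to DATE
        return "setObject"; // last resort: let the driver figure it out
    }
}
```

The whole point is that this decision now happens at run time, per value, instead of being pinned down by the method name at compile time.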
One answer could be to move the 'typing' of the parameters into the SQL:
try (var ps = con.prepareStatement("SELECT * FROM foo WHERE x = ?bool AND y = ?string")) {}

But this is hairy for backwards compatibility (maybe some code out there really wanted bool in the SQL for some reason).
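Mechanically, that idea means a pre-processing pass over the SQL string: record each placeholder's declared type and hand the driver plain ?s. A sketch under my own assumptions (the regex and names are made up), which also shows where the compatibility worry lives, since ?bool may be legitimate SQL today:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical pre-processing pass for typed placeholders like ?bool.
class TypedPlaceholders {
    private static final Pattern P = Pattern.compile("\\?([a-z]+)");

    // Returns [rewritten SQL, type1, type2, ...] for simplicity.
    static List<String> parse(String sql) {
        List<String> out = new ArrayList<>();
        List<String> types = new ArrayList<>();
        Matcher m = P.matcher(sql);
        StringBuilder rewritten = new StringBuilder();
        while (m.find()) {
            types.add(m.group(1));
            m.appendReplacement(rewritten, "?"); // driver only understands bare ?
        }
        m.appendTail(rewritten);
        out.add(rewritten.toString());
        out.addAll(types);
        return out;
    }
}
```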
A better take would be to make a second API and let the caller pick which API they want. Each API is 'clean', or as clean as it can be, exposing only its 'vision' of how DB interaction is supposed to work. The runtime-typed version wouldn't have a .setInt method in the first place; at best it has a .set(idx, int value) overload, solely to avoid a box-and-unbox step for performance.
- Add Result.get():Object[]
- Let ResultSet extend Iterable<Object[]>
This again comes across as going halfway, and is part of the reason why I think the better answer is to accept that the model should fundamentally be 'JDBC is the low-level, some middleware layer is what you actually use'. Trying to skip the ResultSet step by instead just getting the whole row really screams: "... just let me pass in a mapper that maps a resultset row straight into some type of my choosing, and then let me just for (Student s : ps.executeByMapping(rsToStudentMapper))", or similar.
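What that mapped endpoint might look like, with everything hypothetical and java.sql.ResultSet swapped for a minimal Row interface so the sketch runs without a driver: the library owns the iteration and the untyped access, and the caller only supplies row -> domain object.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mapped endpoint. Row stands in for one ResultSet row;
// a real version would iterate rs.next() and hand the mapper the ResultSet.
interface Row { Object get(int idx); }
interface RowMapper<T> { T map(Row row); }

record Student(String name, int year) {}

class Mapped {
    static <T> List<T> mapAll(List<Row> rows, RowMapper<T> mapper) {
        List<T> out = new ArrayList<>();
        for (Row r : rows) out.add(mapper.map(r));
        return out;
    }
}
```

The casting and the Object[] juggling still happen, but they happen once, inside the mapper the caller wrote, rather than being smeared across every call site.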
Possibly your intent here is that "just get me the entire resultset row via an Object[]" is solely for middleware purposes: someone writing actual raw JDBC would never want to use this, I'd think. Object[] is clichéd in how annoying it is to use - you have to cast/instanceof everything, and the array behaves bizarrely (equals, hashCode, and toString are all non-idiomatic).
Going down the rabbit hole (which various responses to your post did: why not a List, etc.) doesn't end until you get to a mapped concept, where the 'return type' of whatever you use to iterate through it is a reified model type, i.e. mapped behaviour.
At some point you're just rebuilding JOOQ in the java.sql package. As awesome as JOOQ is, that's a bad idea: JOOQ is too complicated and too end-user/stylistically opinionated. This is a more fundamental issue (how important is the 'batteries included' aspect of a core library, vs. how important is it that these libraries aren't weighed down too much by the requirement to be incredibly stable and generalized, due to their hallowed status of living in the java.* package structure?)
I strongly trend towards: this sort of thing should not be in the core package. npm went too far with 'dependency hell' - something like left-pad should not be a dependency - but java doesn't go far enough. At least, java-the-language does not have sufficient features available to it to start putting such things in the core. If there were a way to import a version, or to make a breaking update whilst exposing a 'mirror' API that can still serve code expecting an old version, maybe such things could be in the core. Unfortunately, whilst I've mentioned some ideas on how to do this to various OpenJDK committers, so far I don't think there's any serious movement on the idea.
- Add SQLInput.getConnection() and SQLOutput.getConnection()
I lack the experience to give you meaningful feedback on this one. It sounds good.
- Add Connection.transaction() overloads for functional transaction usage
This one really drives the point home. The spring JDBC middleware layer does something that is, to me, crazy: If a method marked as @Transactional throws an unchecked exception, the spring middleware layer assumes the code failed to execute properly and will rollback the transaction. However, if it throws a checked exception (but not SQLException, if memory serves), the middleware layer assumes that's an alternate, intentional return, and therefore that the code did exactly what it wanted to: The transaction is committed.
Spring is rather popular, so unfortunately this has poisoned the community somewhat, who may now be assuming that throwing checked exceptions is one way to force commits. However, I would hope that those not poisoned by this idea would find that highly surprising behaviour.
A second open question is retry: currently JDBC as a spec inherently knows about SERIALIZABLE isolation, but has no idea what retry is - even though just about every DB I know of implements SERIALIZABLE via optimistic locking. I don't know about JOOQ, but my middleware layer catches SQLExceptions, has db-engine-dependent code to check whether the exception indicates a retry, and if so, will just run the lambda from the top. It also has testing tooling so you can intentionally trigger a random number of fake retries, as you'll need to ensure the code in the lambda block is idempotent. Programming against this library is fantastic (or I wouldn't have set things up this way, of course), but I'm pretty sure this is opinionated enough to be seriously problematic for a core library, as well as leaning heavily into db engine impl details that JDBC probably shouldn't presume, such as that SERIALIZABLE transactions are implemented with optimistic locking, and that you're using SERIALIZABLE in the first place.
Attempting to deliver a lambda-based API for transactions without making a call on how to deal with idempotency concerns, and what constitutes a rollback vs a commit, sounds rather tricky.
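To illustrate the retry half of that, the policy in such a library is roughly the following loop. This is my own sketch, reduced to just the retry decision: the commit/rollback plumbing around a real Connection is elided, and "40001" is the standard SQLSTATE for serialization_failure (real drivers vary, which is why the check is db-engine-dependent).

```java
import java.sql.SQLException;

// Hypothetical sketch of a retry-on-serialization-failure policy. The body
// must be idempotent, because it can run more than once.
class Retry {
    interface TxBody<T> { T run() throws SQLException; }

    static <T> T withRetry(TxBody<T> body, int maxAttempts) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                return body.run();
            } catch (SQLException e) {
                boolean retryable = "40001".equals(e.getSQLState());
                if (!retryable || attempt >= maxAttempts) throw e;
                // serialization failure: rerun the lambda from the top
            }
        }
    }
}
```

Every line of this encodes an opinion (what counts as retryable, how many attempts, the idempotency requirement), which is the argument for keeping it out of the core.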
My implementation suggestion on GitHub includes:
All the links are broken, unfortunately.
c.executeUpdate("INSERT INTO t VALUES (?, ?, ?)", 1, 2, 3);
Half-hearted, again. You suggest we write it like so:
try (var rs = c.executeUpdate("INSERT INTO t SELECT birthdate < ? AS adult, name, category FROM foo WHERE id > ? AND foobar BETWEEN ? AND ?",
  LocalDate.now().minusYears(18),
  0,
  5,
  12)) {
  ...
}

This seems significantly worse than a simple update that is 'append' based:
try (var rs = c.update()
  .append("INSERT INTO t SELECT")
  .append("birthdate < ? AS adult, ", LocalDate.now().minusYears(18))
  .append("name, category FROM foo ")
  .append("WHERE id > ? ", 0)
  .append("AND foobar BETWEEN ? AND ?", 5, 12)
  .query()) {
}

The above is still a bit half-hearted; this kind of 'append' style is lightly annoying in insert/update statements due to having to mess with the comma, and for query clauses, having to swap WHERE and AND is similarly convoluted. You possibly want the 'comma counting' to be done by the API and not by your string literals. Something like:
try (var rs = c.update()
  .append("INSERT INTO t SELECT ")
  .field("birthdate < ? AS adult", LocalDate.now().minusYears(18))
  .fields("name", "category") // API figures out commas.
  .append("FROM foo ")
  .condition("id > ?", 0) // API figures out 'WHERE' and space at end
  .condition("foobar BETWEEN ? AND ?", 5, 12)
  .query()) {
}

But this is now very opinionated and starting to drift away from the nature of JDBC (which is, amongst other things, that JDBC as a library has no idea what SQL is: it's just strings passed straight to the db engine for further processing).
I think the last snippet is closer to the right answer than any other, but it can be improved a lot, too. I'd say it is far too opinionated as is - and this kind of thing is exactly where it should live: in JOOQ, JDBI, ORMs, spring-jdbctemplate, and so on.
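For concreteness, the bookkeeping behind such a 'the API counts the commas' design could look something like this. Every name here is hypothetical, and only the string/parameter assembly is shown - no actual JDBC:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringJoiner;

// Hypothetical sketch of a builder that owns the commas and the WHERE.
class QuerySketch {
    private final List<Object> params = new ArrayList<>();
    private final StringJoiner fields = new StringJoiner(", ");
    private final StringJoiner conds = new StringJoiner(" AND ");
    private String head = "", from = "";

    QuerySketch head(String sql) { head = sql; return this; }
    QuerySketch from(String sql) { from = sql; return this; }

    QuerySketch field(String sql, Object... ps) {
        fields.add(sql);               // API inserts the commas
        params.addAll(List.of(ps));
        return this;
    }

    QuerySketch condition(String sql, Object... ps) {
        conds.add(sql);                // API inserts WHERE and AND
        params.addAll(List.of(ps));
        return this;
    }

    String sql() {
        String where = conds.length() == 0 ? "" : " WHERE " + conds;
        return head + " " + fields + " " + from + where;
    }

    List<Object> params() { return params; }
}
```

Even at this size it is making calls (parameter order must match clause order, conditions are AND-only) that a core API probably shouldn't lock in.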
The simple end-run around all these ideas is to stop considering JDBC as having the goal of being 'a nice API' for direct SQL interactions. The goal of JDBC is to expose everything any major DB engine wants to expose, so that middleware can deliver on the job of making a nice API, and that any exotic needs can still be filled at some cost (by writing to the underlying raw API).