import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.Function;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

import org.springframework.jdbc.support.rowset.ResultSetWrappingSqlRowSet;
import org.springframework.jdbc.support.rowset.SqlRowSet;

public <T> T streamQuery(String sql, Function<Stream<SqlRowSet>, ? extends T> streamer, Object... args) {
    return jdbcTemplate.query(sql, resultSet -> {
        final SqlRowSet rowSet = new ResultSetWrappingSqlRowSet(resultSet);
        final boolean parallel = false;
        // The ResultSet API has a slight impedance mismatch with Iterators, so this conditional
        // simply returns an empty stream if there are no results.
        if (!rowSet.next()) {
            return streamer.apply(StreamSupport.stream(Spliterators.emptySpliterator(), parallel));
        }
        Spliterator<SqlRowSet> spliterator = Spliterators.spliteratorUnknownSize(new Iterator<SqlRowSet>() {
            private boolean first = true;

            @Override
            public boolean hasNext() {
                // The first row was already fetched by the rowSet.next() call above, so it is
                // still pending even when it is also the last row.
                return first || !rowSet.isLast();
            }

            @Override
            public SqlRowSet next() {
                // Only advance once the pre-fetched first row has been handed out.
                if (!first && !rowSet.next()) {
                    throw new NoSuchElementException();
                }
                first = false;
                return rowSet;
            }
        }, Spliterator.IMMUTABLE);
        return streamer.apply(StreamSupport.stream(spliterator, parallel));
    }, args);
}
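For illustration, here is one way the method above might be called. The table and column names are hypothetical, and the terminal operation has to finish inside streamQuery, while the connection (and its cursor) are still open:

// Hypothetical usage: extract one column per row without holding the
// whole result in memory. The SqlRowSet is a single mutable object that
// advances between elements, so each value must be read before the
// stream moves on to the next row.
long active = streamQuery(
        "SELECT name FROM users WHERE active = ?",
        rows -> rows.map(r -> r.getString("name"))
                    .peek(System.out::println)
                    .count(),
        true);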
@sabirove It's been a while since I looked at this. This actually does work lazily: the row set fetches on demand using a database cursor, which is the whole point of this approach. You do indeed have to keep the connection open and process everything within that single connection, because otherwise the database closes the cursor.
I used this to stream-export tables that definitely would not fit in memory on the small VM I used to run this on. You need some additional Spring hackery to be able to write to the response directly, of course (it would normally buffer everything in memory and then write it to the response). That's not trivial with Spring WebFlux, but I managed to do it. Basically, this allows you to stream many gigabytes of data to a file with a simple curl command: it starts streaming right away while still fetching from the database, and it writes as fast as it can fetch. Nice if you want your data in CSV or NDJSON format via a REST API, which was in fact my use case.
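The comment above mentions WebFlux; as a rough sketch of the same idea on plain Spring MVC (not the commenter's actual code), StreamingResponseBody hands you the servlet output stream so rows can be written out as they arrive from the cursor. The endpoint, table, and columns here are hypothetical:

// Hypothetical controller method: stream a table out as CSV while the
// cursor is still being consumed. The StreamingResponseBody lambda runs
// on a separate thread after the controller returns, and the response
// is written row by row instead of being buffered in memory.
@GetMapping(value = "/export", produces = "text/csv")
public StreamingResponseBody export() {
    return out -> {
        PrintWriter writer = new PrintWriter(out);
        streamQuery("SELECT id, name FROM users", rows -> {
            rows.forEach(r -> writer.println(r.getLong("id") + "," + r.getString("name")));
            return null;
        });
        writer.flush();
    };
}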
@wakedeer Nice that Spring added proper support for this. Makes this a bit easier.
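For reference, the built-in support presumably being referred to is JdbcTemplate.queryForStream, added in Spring Framework 5.3. The returned Stream pins the underlying connection, so it has to be closed; a minimal sketch with a hypothetical query:

// Assumes Spring Framework 5.3+. The Stream holds the JDBC connection
// open until it is closed, hence the try-with-resources block.
try (Stream<String> names = jdbcTemplate.queryForStream(
        "SELECT name FROM users WHERE active = ?",
        (rs, rowNum) -> rs.getString("name"),
        true)) {
    names.forEach(System.out::println);
}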