Is there a way to limit the number of records fetched from a JDBC source using Spark SQL 2.2.0?
I am dealing with the task of moving (and transforming) a large number of records.
To limit the number of downloaded rows, you can pass a SQL query instead of a table name as the "dbtable" option; this is described in the Spark SQL JDBC data source documentation.
In that query's WHERE clause you can use server-specific features to limit the number of rows (for example ROWNUM in Oracle).
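As a minimal sketch of this approach (the table name, connection URL, and credentials below are hypothetical, and the actual `spark.read` call is commented out because it needs a live database and the matching JDBC driver on the classpath):

```python
# Limiting rows pulled from a JDBC source in Spark 2.2.0:
# instead of a bare table name, "dbtable" accepts a parenthesized
# subquery with an alias, so the database applies the row limit
# before any data is shipped to Spark.
limit = 1000

# Oracle-style row limiting via ROWNUM (hypothetical table name).
dbtable = "(SELECT * FROM big_table WHERE ROWNUM <= {}) t".format(limit)

# df = (spark.read
#       .format("jdbc")
#       .option("url", "jdbc:oracle:thin:@//db-host:1521/orcl")  # hypothetical
#       .option("dbtable", dbtable)
#       .option("user", "scott")          # hypothetical credentials
#       .option("password", "tiger")
#       .load())

print(dbtable)
```

The whole subquery runs on the database server, so only the limited result set crosses the network into Spark.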