I have a prepared statement
INSERT INTO mst(time) VALUES (?);
where time is of type Timestamp in a PostgreSQL database.
I am inserting
JodaTime's DateTime constructor can handle this for you now. (I'm not sure if that was true when the question was posted, but it's a top Google result so I figured I'd add a newer solution.)
There are a few API options:
public DateTime(Object instant);
public DateTime(Object instant, DateTimeZone zone);
Both options accept java.sql.Timestamp because it extends java.util.Date, but the nanoseconds will be ignored (floored), because DateTime and Date only have millisecond resolution*. Without an explicit zone the DateTime is created in the JVM's default time zone (DateTimeZone.getDefault()), so pass DateTimeZone.UTC if that's what you want.
"Resolution" is how many digits are provided. "Precision" is how accurate the representation is. For example, MSSQL's DateTime has millisecond resolution, but only ~1/3 of a second precision (DateTime2 has variable resolution and higher precision).
UTC Timestamp with Millisecond Resolution Example:
new DateTime(resultSet.getTimestamp(1));
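If you also need the write direction from the question, a DateTime converts back to java.sql.Timestamp through its millisecond instant. Below is a minimal sketch of both directions, assuming a JDBC Connection named conn and the mst(time) table from the question; the class and method names are purely illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

class MstDao {
    // Read: pass an explicit zone instead of relying on the JVM default.
    static DateTime readFirst(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("SELECT time FROM mst LIMIT 1");
             ResultSet rs = ps.executeQuery()) {
            if (!rs.next()) {
                throw new SQLException("mst is empty");
            }
            return new DateTime(rs.getTimestamp(1), DateTimeZone.UTC);
        }
    }

    // Write: a java.sql.Timestamp is just the DateTime's millisecond instant.
    static void insert(Connection conn, DateTime value) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("INSERT INTO mst(time) VALUES (?)")) {
            ps.setTimestamp(1, new Timestamp(value.getMillis()));
            ps.executeUpdate();
        }
    }
}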
If you're using TIMESTAMP WITH TIME ZONE in your database then you can't use java.sql.Timestamp, because it doesn't support time zones. You'll have to use ResultSet#getString and parse the string.
Timestamp without Time Zone with Second Resolution Example**:
LocalDateTime dt = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")
.parseLocalDateTime(resultSet.getString(1));
UTC Timestamp with Second Resolution Example**:
DateTime dt = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")
.parseDateTime(resultSet.getString(1));
Timestamp with Time Zone (offset format) with Second Resolution Example**:
DateTime dt = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss Z")
.parseDateTime(resultSet.getString(1));
Bonus: DateTimeFormat#forPattern statically caches parsers by pattern, so you don't have to cache them yourself.
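In practice that means an inline forPattern call and a hoisted constant behave the same; hoisting just makes the reuse visible. A small sketch (the class and field names are made up), relying on Joda formatters being immutable and thread-safe:

import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

class FormatterCacheDemo {
    // Optional: hoist the formatter; forPattern already caches by pattern string.
    private static final DateTimeFormatter TS_FORMAT =
            DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss");

    public static void main(String[] args) {
        DateTime inline = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")
                .parseDateTime("2013-11-14 09:55:25");
        DateTime hoisted = TS_FORMAT.parseDateTime("2013-11-14 09:55:25");
        System.out.println(inline.equals(hoisted)); // true: same instant, same chronology
    }
}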
I generally recommend using a String in your DBO model to make the resolution explicit and avoid generating intermediate objects. (Is 2013-11-14 09:55:25 equal to 2013-11-14 09:55:25.000?)

I try to distinguish between "database model objects", which optimize for data preservation, and "business model objects", which optimize for service-level usage, with a transformation/mapping layer in between. CRUD-based DAOs that generate business objects directly tend to mix up those priorities and optimize for neither, throwing exceptions from unexpected places because of missed edge cases. An explicit transformation layer also gives you somewhere to add validation if necessary (for example, when you don't control the data source), and separating the concerns makes each layer easier to test independently.
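As a rough sketch of that split (the class names and the timestamp format here are invented for illustration):

import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

// Database model object: preserves exactly what the database returned.
class MeasurementRecord {
    final String time; // e.g. "2013-11-14 09:55:25.000"
    MeasurementRecord(String time) { this.time = time; }
}

// Business model object: uses the types the service layer actually wants.
class Measurement {
    final DateTime time;
    Measurement(DateTime time) { this.time = time; }
}

// Explicit transformation layer: validation and edge cases live here, not in the DAO.
class MeasurementMapper {
    private static final DateTimeFormatter FORMAT =
            DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss.SSS");

    Measurement toBusiness(MeasurementRecord record) {
        if (record.time == null || record.time.isEmpty()) {
            throw new IllegalArgumentException("Missing timestamp in database record");
        }
        return new Measurement(FORMAT.parseDateTime(record.time));
    }
}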
* If you need nanosecond resolution in your business model, you'll have to use a different library.
** The timestamp string format may vary between databases; check what yours actually returns.