Question
I am using the Spring JdbcUtils.extractDatabaseMetaData()
method to analyze the database. The method calls a callback and hands it a DatabaseMetaData
object. This object provides the method getColumns(String catalog, String schemaPattern, String tableNamePattern, String columnNamePattern).
I call it like this: getColumns("", TABLE_OWNER_USERNAME, null, null)
and get 400 columns as a result. These are exactly the results that I want, but the call takes over one minute.
Can I somehow optimize this query to be fast? Pulling 400 rows should take about one second, not one minute.
EDIT: I don't suspect the Spring part of being slow. Closer analysis showed that fetching the DatabaseMetaData
object takes a few seconds, but executing getColumns()
is what takes really long.
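For reference, the call is set up roughly like this (a simplified sketch of what is described above, written against the Spring version I have; the schema constant and the counting logic are placeholders, not my actual code):
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.support.DatabaseMetaDataCallback;
import org.springframework.jdbc.support.JdbcUtils;
import org.springframework.jdbc.support.MetaDataAccessException;

public class ColumnMetaDataReader {
    // Placeholder for the schema/user that owns the tables.
    private static final String TABLE_OWNER_USERNAME = "MYSCHEMA";

    public int countColumns(DataSource dataSource) throws MetaDataAccessException {
        return (Integer) JdbcUtils.extractDatabaseMetaData(dataSource,
                new DatabaseMetaDataCallback() {
                    @Override
                    public Object processMetaData(DatabaseMetaData dbmd)
                            throws SQLException, MetaDataAccessException {
                        // The call from the question: all tables and columns of one schema.
                        ResultSet rs = dbmd.getColumns("", TABLE_OWNER_USERNAME, null, null);
                        int count = 0;
                        try {
                            while (rs.next()) {
                                count++; // each row describes one column
                            }
                        } finally {
                            rs.close();
                        }
                        return count;
                    }
                });
    }
}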
Answer 1:
It may be a better approach to query ALL_TAB_COLUMNS directly. Here is an example:
public final List<Column> getColumnsByOwner(final String owner) {
    final String sql = "SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH, "
            + " DATA_PRECISION, DATA_SCALE, NULLABLE, DATA_DEFAULT"
            + " FROM ALL_TAB_COLUMNS"
            + " WHERE OWNER = ? ORDER BY COLUMN_ID";
    // jdbcTemplate is an injected org.springframework.jdbc.core.JdbcTemplate
    return jdbcTemplate.query(sql,
            new Object[] { owner },
            new RowMapper<Column>() {
                @Override
                public Column mapRow(final ResultSet res, final int rowNum)
                        throws SQLException {
                    final Column reg = new Column(); // Column is a simple value object
                    reg.setColumnName(res.getString("COLUMN_NAME"));
                    // Read the other properties the same way
                    reg.setNullable("Y".equals(res.getString("NULLABLE")));
                    return reg;
                }
            });
}
If you need to filter by a single table, simply add " AND TABLE_NAME = ?" to the SQL and pass the table name as another parameter, as in the sketch below.
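A minimal sketch of that variant (tableName is the caller-supplied table name, and rowMapper stands in for the same RowMapper shown above):
final String sql = "SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH, "
        + " DATA_PRECISION, DATA_SCALE, NULLABLE, DATA_DEFAULT"
        + " FROM ALL_TAB_COLUMNS"
        + " WHERE OWNER = ? AND TABLE_NAME = ? ORDER BY COLUMN_ID";
return jdbcTemplate.query(sql,
        new Object[] { owner, tableName }, // tableName supplied by the caller
        rowMapper);                        // same RowMapper as in the example above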
Hope it helps.
Answer 2:
Having reverse engineered the actual communication between client and server, I can reveal that Oracle's DatabaseMetaData.getColumns() method sends the following SQL query (though this may change with JDBC driver versions and settings):
declare
in_owner varchar2(128);
in_name varchar2(128);
in_column varchar2(128);
xyzzy SYS_REFCURSOR;
begin
in_owner := :1;  -- Which resolves to the schema (user) name supplied
in_name := :2;   -- Which resolves to the table name supplied
in_column := :3; -- Which gets set to '%'
open xyzzy for
SELECT NULL AS table_cat,
t.owner AS table_schem,
t.table_name AS table_name,
t.column_name AS column_name,
DECODE( (SELECT a.typecode
FROM ALL_TYPES A
WHERE a.type_name = t.data_type),
'OBJECT', 2002,
'COLLECTION', 2003,
DECODE(substr(t.data_type, 1, 9),
'TIMESTAMP',
DECODE(substr(t.data_type, 10, 1),
'(',
DECODE(substr(t.data_type, 19, 5),
'LOCAL', -102, 'TIME ', -101, 93),
DECODE(substr(t.data_type, 16, 5),
'LOCAL', -102, 'TIME ', -101, 93)),
'INTERVAL ',
DECODE(substr(t.data_type, 10, 3),
'DAY', -104, 'YEA', -103),
DECODE(t.data_type,
'BINARY_DOUBLE', 101,
'BINARY_FLOAT', 100,
'BFILE', -13,
'BLOB', 2004,
'CHAR', 1,
'CLOB', 2005,
'COLLECTION', 2003,
'DATE', 93,
'FLOAT', 6,
'LONG', -1,
'LONG RAW', -4,
'NCHAR', -15,
'NCLOB', 2011,
'NUMBER', 2,
'NVARCHAR', -9,
'NVARCHAR2', -9,
'OBJECT', 2002,
'OPAQUE/XMLTYPE', 2009,
'RAW', -3,
'REF', 2006,
'ROWID', -8,
'SQLXML', 2009,
'UROWI', -8,
'VARCHAR2', 12,
'VARRAY', 2003,
'XMLTYPE', 2009,
1111)))
AS data_type,
t.data_type AS type_name,
DECODE (t.data_precision, null,
DECODE(t.data_type, 'NUMBER',
DECODE(t.data_scale, null, 0 , 38),
DECODE (t.data_type, 'CHAR', t.char_length, 'VARCHAR', t.char_length, 'VARCHAR2', t.char_length, 'NVARCHAR2', t.char_length, 'NCHAR', t.char_length, 'NUMBER', 0, t.data_length) ), t.data_precision)
AS column_size,
0 AS buffer_length,
DECODE (t.data_type, 'NUMBER', DECODE(t.data_precision, null, DECODE(t.data_scale, null, -127 , t.data_scale), t.data_scale), t.data_scale) AS decimal_digits,
10 AS num_prec_radix,
DECODE (t.nullable, 'N', 0, 1) AS nullable,
NULL AS remarks,
t.data_default AS column_def,
0 AS sql_data_type,
0 AS sql_datetime_sub,
t.data_length AS char_octet_length,
t.column_id AS ordinal_position,
DECODE (t.nullable, 'N', 'NO', 'YES') AS is_nullable,
null as SCOPE_CATALOG,
null as SCOPE_SCHEMA,
null as SCOPE_TABLE,
null as SOURCE_DATA_TYPE,
'NO' as IS_AUTOINCREMENT,
t.virtual_column as IS_GENERATEDCOLUMN
FROM all_tab_cols t
WHERE t.owner LIKE in_owner ESCAPE '/'
AND t.table_name LIKE in_name ESCAPE '/'
AND t.column_name LIKE in_column ESCAPE '/'
AND t.user_generated = 'YES'
ORDER BY table_schem, table_name, ordinal_position;
end;
You can appreciate why that might be a bit slow, especially as the ALL_TAB_COLS and ALL_TYPES views can each contain thousands of rows. Nevertheless, while Oracle struggles to execute the first invocation (minutes), subsequent calls return results almost instantly. This is a classic join performance issue: even though only a subset of the data is required, the engine joins the whole dataset before calculating and delivering the required subset. Data/result caching then improves the performance of subsequent queries.
A better solution might be to use DBMS_METADATA.GET_DDL() and parse the returned table definition, as per this thread.
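For instance, a sketch of that approach using plain JDBC (the table and owner names are placeholders; parsing the returned DDL is left out):
// Fetch the full CREATE TABLE statement for one table and parse it yourself.
Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery(
        "SELECT DBMS_METADATA.GET_DDL('TABLE', 'MYTABLE', 'MYOWNER') FROM DUAL");
if (rs.next()) {
    String ddl = rs.getString(1); // the CREATE TABLE ... statement as text
    // parse the column definitions out of ddl here
}
rs.close();
stmt.close();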
Alternatively, you may be able to query the metadata of a table faster by executing a dummy query and then using ResultSetMetaData, as follows (note: column remarks metadata may not be immediately available):
ResultSet rs = connection.createStatement()
        .executeQuery("SELECT * FROM MyTable WHERE 1=0"); // returns no rows, only metadata
ResultSetMetaData md = rs.getMetaData();
for (int ix = 1; ix <= md.getColumnCount(); ix++) // JDBC column indexes are 1-based
{
    int colSize = md.getPrecision(ix);
    String nativeType = md.getColumnTypeName(ix);
    int jdbcType = md.getColumnType(ix);
    // and so on....
}
Source: https://stackoverflow.com/questions/8792890/jdbc-with-spring-slow-metadata-fetch-oracle