CASE 1: I have a table with 30 columns and I query using 4 columns in the where clause.
CASE 2: I have a table with 6 columns and I query using 4 columns in the where clause.
There will be no performance difference based on the column position. Now the construction of the table is a different story e.g. number of rows, indexes, number of columns etc.
The scenario you are talking about where you are comparing the position of the column in the two tables is like comparing apples to oranges almost, because there are so many different variables besides the column position.
Test it and see!
There will be a performance difference, however 99% of the time you won't notice it - usually you won't even be able to detect it!
You can't even guarantee that the table with fewer columns will be quicker - if it's bothering you, then try it and see.
Technical rubbish: (from the perspective of Microsoft SQL Server)
With the assumption that in all other respects (indexes, row counts, the data contained in the 6 common columns etc...) the tables are identical, then the only real difference will be that the larger table is spread over more pages on disk / in memory.
SQL Server only attempts to read the data it absolutely requires, but it always loads an entire page at a time (8 KB). Even if the exact same amount of data is required as the output of the query, if that data is spread over more pages then more IO is required.
That said, SQL Server is incredibly efficient with its data access, so you are very unlikely to see a noticeable impact on performance except in extreme circumstances.
Besides, it is also likely that your query will be run against the index rather than the table anyway, and so with indexes exactly the same size the change is likely to be 0.
Since you specified you are using the WHERE clause, it will depend on how many rows are returned. If the value in your WHERE clause is UNIQUE or a PRIMARY KEY, then the difference is almost non-existent. You can put EXPLAIN ANALYZE in front of your SELECT statement to view the planning time and execution time, and then you can compare your queries.
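For example, in PostgreSQL (where EXPLAIN ANALYZE is available) the comparison might look like this - the table names here are placeholders standing in for the 30-column and 6-column tables from the question:

```sql
-- PostgreSQL: run the query and report the planner's estimates plus actual timings.
EXPLAIN ANALYZE
SELECT b, c, d
FROM table_30_cols
WHERE f = 'foo';

EXPLAIN ANALYZE
SELECT b, c, d
FROM table_6_cols
WHERE f = 'foo';
```

Compare the "Planning Time" and "Execution Time" lines at the bottom of each output.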
Unless you have a very wide column set difference with no index being used (thus a table scan), you should see little difference in performance. That being said, it is always useful/beneficial to return as few columns as possible to satisfy your needs. The catch here is that a greater benefit can be had by returning all the columns you need in one query rather than making a second database fetch for the other columns.
Depends on the width of the table (bytes per row), how many rows are in the table, and whether there are indices on the columns used by the query. There is no definitive answer without that info. However, the more columns a table has, the wider it is likely to be. But the effect of a proper index is much more significant than the effect of the table width.
Does the total number of columns in a table impact performance (if the same subset of columns is selected, and if there are no indices on the table)?
Yes, marginally. With no indexes at all, both queries (Table A and Table B) will do table scans. Given that Table B has fewer columns than Table A, the rows per page (density) will be higher on B, and so B will be marginally quicker as fewer pages need to be fetched.
However, given that your queries are of the form:
SELECT b,c,d
FROM X
WHERE f='foo';
the performance of the query will be dominated by the indexing on column f, rather than by the number of columns in the underlying tables.
For the OP's exact queries, the fastest performance will result from the following indexing:
A(f) INCLUDE (b,c,d)
B(f) INCLUDE (b,c,d)
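In T-SQL, assuming A and B are the actual table names and the index names are placeholders, those covering indexes would be created along these lines:

```sql
-- Covering indexes: f is the key column used by the WHERE clause;
-- b, c and d are stored as included columns at the leaf level only.
CREATE NONCLUSTERED INDEX IX_A_f_covering
    ON A (f) INCLUDE (b, c, d);

CREATE NONCLUSTERED INDEX IX_B_f_covering
    ON B (f) INCLUDE (b, c, d);
```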
Irrespective of the number of columns in Table A or Table B, with the above indexes in place, performance should be identical for both queries (assuming the same number of rows and similar data in both tables), given that SQL will hit the indexes which are now of similar column widths and row densities, without needing any additional data from the original table.
Does the number of columns in the select affect query performance?
The main benefit of returning fewer columns in a SELECT is that SQL Server might be able to avoid reading from the table / cluster entirely, and instead retrieve all the selected data from an index (either as indexed columns and/or as included columns in the case of a covering index).
Obviously, the columns used in the predicate (the WHERE filter), i.e. f in your example, MUST be in the indexed columns of the index, and the data distribution must be sufficiently selective, in order for an index to be used in the first place.
There is also a secondary benefit in returning fewer columns from a SELECT, as this will reduce any I/O overhead, especially if there is a slow network between the database server and the app consuming the data - i.e. it is good practice to only ever return the columns you actually need, and to avoid using SELECT *.
Edit
Some other plans:
B(f) with no other key or INCLUDE columns, or with an incomplete set of INCLUDE columns (i.e. one or more of b, c or d are missing): SQL Server will likely need to do a Key or RID Lookup, as even if the index is used, there will be a need to "join" back to the table to retrieve the missing columns in the select clause. (The lookup type depends on whether the table has a clustered PK or not.)
B(f,b,c,d): This will still be very performant, as the index will be used and the table avoided, but it won't be quite as good as the covering index, because the density of the index tree will be lower due to the additional key columns in the index.
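As a sketch, the difference between the two index shapes discussed above is only where b, c and d sit (index names here are placeholders):

```sql
-- Option 1: b, c, d are key columns, so they widen every level of the B-tree.
CREATE NONCLUSTERED INDEX IX_B_all_key
    ON B (f, b, c, d);

-- Option 2: covering index - b, c, d live only at the leaf level,
-- keeping the intermediate index pages denser and the tree shallower.
CREATE NONCLUSTERED INDEX IX_B_covering
    ON B (f) INCLUDE (b, c, d);
```

Both shapes let the query be answered entirely from the index; the INCLUDE form simply packs more keys per intermediate page.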