I have a query whose results are stored in a GTT (Global Temporary Table) and in a Collection.
Selecting the data from the GTT again, I get a very small cost: 103.
The most important difference between collections and GTTs in SQL is that the CBO (cost-based optimizer) has limitations for TABLE() functions (the internal kokbf$ pseudo-tables); for example, JPPD (join predicate push-down) does not work with TABLE() functions. Some workarounds: http://orasql.org/2019/05/30/workarounds-for-jppd-with-view-and-tablekokbf-xmltable-or-json_table-functions/
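For illustration, a minimal sketch (using the built-in SYS.ODCIVARCHAR2LIST collection type just as an example): the TABLE() call is represented internally by a KOKBF$ alias, which you can see in the plan output, and that is the row source the CBO limitations apply to.

-- query a collection through TABLE() and display the actual plan;
-- the KOKBF$ alias appears in the query block / object alias section
SELECT /*+ GATHER_PLAN_STATISTICS */ COLUMN_VALUE
FROM   TABLE(SYS.ODCIVARCHAR2LIST('a', 'b', 'c'));

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST +ALIAS'));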
Global temporary tables can have statistics like any other table. In fact, they are like any other table: they have data segments, just in a temporary tablespace.
In 11g the statistics are shared across sessions, so they sometimes cause issues with execution plans. In 12c they can be session-based, so each session gets its own proper statistics (if available).
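As a sketch (assuming the TMP table created in the tests below), statistics on a GTT are gathered like on any other table; in 12c the GLOBAL_TEMP_TABLE_STATS preference controls whether they are shared or session-private:

BEGIN
  -- 12c and later: keep the GTT statistics session-private (SESSION is the default there)
  DBMS_STATS.SET_TABLE_PREFS(USER, 'TMP', 'GLOBAL_TEMP_TABLE_STATS', 'SESSION');
  -- gather statistics on the GTT like on any other table
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'TMP');
END;
/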
The default cardinality estimate for a collection is based on the DB block size; for the default 8 kB block it is 8168. Collection content is stored in the PGA. It is quite common to use a cardinality hint when using collection types in complex queries, to guide the optimizer. You can also use the extensible optimizer interface to implement your own way of calculating the cost.
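For example, a sketch of the (undocumented but widely used) CARDINALITY hint, telling the optimizer the real number of elements instead of the 8168 default (again using SYS.ODCIVARCHAR2LIST just for illustration):

-- t is the alias of the TABLE() row source; 3 is the real element count here
SELECT /*+ CARDINALITY(t 3) */ COLUMN_VALUE
FROM   TABLE(SYS.ODCIVARCHAR2LIST('a', 'b', 'c')) t;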
Edit - added tests:
CREATE TYPE STRINGTABLE IS TABLE OF VARCHAR2(255);
CREATE GLOBAL TEMPORARY TABLE TMP (VALUE VARCHAR2(255));
INSERT INTO TMP SELECT 'Value' || LEVEL FROM DUAL CONNECT BY LEVEL <= 1000000;
DECLARE
  x   STRINGTABLE;
  cnt NUMBER;
BEGIN
  -- load the whole GTT into the collection
  SELECT VALUE BULK COLLECT INTO x FROM TMP;
  DBMS_OUTPUT.PUT_LINE(TO_CHAR(SYSTIMESTAMP, 'MI:SS.FF3'));
  -- aggregate over the GTT
  SELECT SUM(LENGTH(VALUE)) INTO cnt FROM TMP;
  DBMS_OUTPUT.PUT_LINE(TO_CHAR(SYSTIMESTAMP, 'MI:SS.FF3'));
  -- aggregate over the collection via TABLE()
  SELECT SUM(LENGTH(COLUMN_VALUE)) INTO cnt FROM TABLE(x);
  DBMS_OUTPUT.PUT_LINE(TO_CHAR(SYSTIMESTAMP, 'MI:SS.FF3'));
END;
/
In this case, access to the GTT is about twice as fast as access to the collection, roughly 200 ms vs. 400 ms on my test machine. When I increased the number of rows to 10 000 000, I got ORA-22813: operand value exceeds system limits on the second query.