Oracle 10g small Blob or Clob not being stored inline?

Asked by 一个人的身影 on 2021-02-14 00:42

According to the documents I've read, the default storage for a CLOB or BLOB is inline, which means that if it is less than approximately 4 KB in size it will be held in the table.

5 Answers
  • 2021-02-14 01:01

    Oracle's LOB storage behavior is as follows.

    A LOB is stored inline when:

    (
      The size is less than or equal to 3964 bytes
      AND
      ENABLE STORAGE IN ROW has been defined in the LOB storage clause
    ) OR (
      The value is NULL
    )
    

    A LOB is stored out-of-row when:

    (
      The value is not NULL
    ) AND (
      Its size is greater than 3964 bytes
      OR
      DISABLE STORAGE IN ROW has been defined in the LOB storage clause
    )
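    As a rough illustration, the two rules above can be expressed as a single predicate (a hypothetical sketch; the 3964-byte threshold and the ENABLE/DISABLE STORAGE IN ROW clause come from this answer, not from any Oracle API):

```python
# Sketch of the inline/out-of-row decision described above.
# INLINE_LIMIT is the 3964-byte threshold quoted in the answer.
INLINE_LIMIT = 3964

def stored_inline(lob_size, enable_storage_in_row=True):
    """Return True if a LOB of lob_size bytes (None = NULL) is stored inline."""
    if lob_size is None:          # NULL LOBs are always stored inline
        return True
    return enable_storage_in_row and lob_size <= INLINE_LIMIT

# Small LOB with the default ENABLE STORAGE IN ROW -> inline
assert stored_inline(2000)
# Same size but DISABLE STORAGE IN ROW -> out-of-row
assert not stored_inline(2000, enable_storage_in_row=False)
# Over the threshold -> out-of-row regardless of the storage clause
assert not stored_inline(5000)
```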
    

    Now this is not the only issue which may impact performance.

    If the LOBs end up not being stored inline, Oracle's default behavior is to avoid caching them (only inline LOBs are cached in the buffer cache with the other fields of the row). To tell Oracle to also cache non-inline LOBs, the CACHE option should be used when the LOB is defined.

    The default behavior is ENABLE STORAGE IN ROW, and NOCACHE, which means small LOBs will be inlined, large LOBs will not (and will not be cached).

    Finally, there is also a performance issue at the communication protocol level. Typical Oracle clients perform 2 additional roundtrips per LOB to fetch it: one to retrieve the size of the LOB and allocate memory accordingly, and one to fetch the data itself (provided the LOB is small).

    These extra roundtrips are performed even if an array interface is used to retrieve the results. If you retrieve 1000 rows and your array size is large enough, you will pay for 1 roundtrip to retrieve the rows, and 2000 roundtrips to retrieve the content of the LOBs.
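    The arithmetic above can be sketched quickly (hypothetical helper names; the "2 extra roundtrips per LOB" figure is the one stated in this answer):

```python
# Back-of-the-envelope roundtrip count for an array fetch of rows that
# also contain LOB columns: batched row retrieval, plus 2 extra
# roundtrips per LOB value (size probe + data fetch).
import math

def total_roundtrips(n_rows, array_size, lob_cols=1):
    row_fetches = math.ceil(n_rows / array_size)  # batched row retrieval
    lob_fetches = 2 * n_rows * lob_cols           # per-LOB size probe + data fetch
    return row_fetches + lob_fetches

# 1000 rows, array size 1000, one LOB column:
# 1 roundtrip for the rows + 2000 for the LOBs
assert total_roundtrips(1000, 1000) == 2001
```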

    Please note this does not depend on whether the LOB is stored inline or not. They are completely different problems.

    To optimize at the protocol level, Oracle has provided a new OCI verb to fetch several LOBs in one roundtrip (OCILobArrayRead). I don't know if something similar exists with JDBC.

    Another option is to bind the LOB on the client side as if it were a big RAW/VARCHAR2. This only works if a maximum size of the LOB can be defined (since the maximum size must be provided at bind time). This trick avoids the extra roundtrips: the LOBs are just processed like RAW or VARCHAR2. We use it a lot in our LOB-intensive applications.

    Once the number of roundtrips has been optimized, the packet size (SDU) can be resized in the net configuration to better fit the situation (i.e. a limited number of large roundtrips). This tends to reduce the "SQL*Net more data to client" and "SQL*Net more data from client" wait events.
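    For example (the parameter name is from Oracle Net configuration; the value below is an arbitrary illustration, not a recommendation), the SDU can be raised in sqlnet.ora on both client and server:

```
# sqlnet.ora -- example value only; tune for your workload
DEFAULT_SDU_SIZE=32767
```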

  • 2021-02-14 01:04

    There are two indirections when it comes to CLOBs and BLOBs:

    1. The LOB value might be stored in a different database segment than the rest of the row.

    2. When you query the row, only the non-LOB fields are contained in the result set, and accessing the LOB fields requires one or more additional round trips between the client and the server (per row!).

    I don't quite know how you measure the execution time and I've never used Oracle Monitor, but you might primarily be affected by the second indirection. Depending on the client software you use, it is possible to reduce the round trips. E.g. when you use ODP.NET, the parameter is called InitialLobFetchSize.

    Update:

    One way to tell which of the two indirections is relevant is to run your LOB query with 1000 rows twice. If the time drops significantly from the first to the second run, it's indirection 1: on the second run, caching pays off and access to the separate database segment isn't very relevant anymore. If the time stays about the same, it's the second indirection, namely the round trips between the client and the server, which cannot improve between two runs.

    The time of more than 8 seconds for 1000 rows in a very simple query indicates it's indirection 2, because 8 seconds for 1000 rows can't really be explained by disk access unless your data is very scattered and your disk system is under heavy load.
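    To make that concrete, here is a rough back-of-the-envelope check (the 4 ms round-trip time is an assumed example value, not a measurement from the question): with 2 extra roundtrips per row, network latency alone can account for the whole 8 seconds.

```python
# Rough sanity check: 1000 rows x 2 extra roundtrips/row x ~4 ms RTT
# already adds up to ~8 s, before any server-side work is counted.
rows = 1000
extra_roundtrips_per_row = 2
rtt_seconds = 0.004  # assumed network round-trip time (example value)

network_time = rows * extra_roundtrips_per_row * rtt_seconds
assert abs(network_time - 8.0) < 1e-9  # seconds
```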

  • 2021-02-14 01:04

    This is the key information (how to read LOBs without extra roundtrips), which I don't think is available in Oracle's documentation:

    Another option is to bind the LOB on the client side as if it were a big RAW/VARCHAR2. This only works if a maximum size of the LOB can be defined (since the maximum size must be provided at bind time). This trick avoids the extra roundtrips: the LOBs are just processed like RAW or VARCHAR2. We use it a lot in our LOB-intensive applications.

    I had a problem loading a simple table (a few GB) with one blob column (~14 KB per row, thousands of rows) and investigated it for a long time. I tried a lot of LOB storage tunings (DB_BLOCK_SIZE for a new tablespace, the CHUNK lob storage specification), sqlnet.ora settings, and client prefetching attributes, but this (treating the BLOB as a LONG RAW with OCCI ResultSet->setBufferData on the client side) was the most important thing: it persuades Oracle to send the blob column immediately, instead of sending a LOB locator first and loading each LOB separately based on the locator.

    Now I can get roughly 500 Mb/s throughput (with columns < 3964 B). Our 14 KB blob will be split into multiple columns so that it is stored in-row, giving almost sequential reads from the HDD. With one 14 KB blob (one column) I get ~150 Mbit/s because of non-sequential reads (iostat shows a low number of merged read requests).

    NOTE: don't forget to set also lob prefetch size/length:

    err = OCIAttrSet(session, (ub4) OCI_HTYPE_SESSION, (void *) &default_lobprefetch_size, 0, (ub4) OCI_ATTR_DEFAULT_LOBPREFETCH_SIZE, errhp);

    But I don't know how to achieve the same fetching throughput with the ODBC connector. I tried, without any success.

  • 2021-02-14 01:19

    Indeed, it is stored within the row. You are likely dealing with the simple overhead of using a LOB instead of a varchar. Nothing is free. The DB probably doesn't know ahead of time where to find the row, so it probably still "follows a pointer" and does extra work just in case the LOB is big. If you can get by with a varchar, you should. Even old hacks like 2 varchars to deal with 8000 characters might solve your business case with higher performance.

    LOBs are slow, difficult to query, etc. On the plus side, they can hold up to 4 GB.

    What would be interesting to try is to shove something just over 4000 bytes into that clob, and see what the performance looks like. Maybe it is about the same speed? This would tell you that it's overhead slowing you down.

    Warning: at some point, network traffic to your PC slows you down in these kinds of tests.

    Minimize this by wrapping the query in a count; this isolates the work to the server:

    select count(*) from (select x,y from clobtest where rownum<1001)

    You can achieve similar effects with "set autot trace", but there will be tracing overhead too.

  • 2021-02-14 01:26

    If you're "hoping to achieve similar response times to reading the varchar2 column z", then you'll be disappointed in most cases. If you're using a CLOB I suppose you need to store more than 4,000 bytes, right? Then if you need to read more bytes that's going to take longer.

    BUT if you have a case where yes, you use a CLOB, but you're interested (in some instances) only in the first 4,000 bytes of the column (or less), then you have a chance of getting similar performance. It looks like Oracle can optimize the retrieval if you use something like DBMS_LOB.SUBSTR and ENABLE STORAGE IN ROW CACHE clause with your table. Example:

    CREATE TABLE clobtest (x INT PRIMARY KEY, y CLOB)
    LOB (y) STORE AS (ENABLE STORAGE IN ROW CACHE);
    
    INSERT INTO clobtest VALUES (0, RPAD('a', 4000, 'a'));
    UPDATE clobtest SET y = y || y || y;
    INSERT INTO clobtest SELECT rownum, y FROM all_objects, clobtest WHERE rownum < 1000;
    
    CREATE TABLE clobtest2 (x INT PRIMARY KEY, z VARCHAR2(4000));
    
    INSERT INTO clobtest2 VALUES (0, RPAD('a', 4000, 'a'));
    INSERT INTO clobtest2 SELECT rownum, z FROM all_objects, clobtest2 WHERE rownum < 1000;
    
    COMMIT;
    

    In my tests on 10.2.0.4 and 8K block, these two queries give very similar performance:

    SELECT x, DBMS_LOB.SUBSTR(y, 4000) FROM clobtest;
    SELECT x, z FROM clobtest2;
    

    Sample from SQL*Plus (I ran the queries multiple times to remove physical IO's):

    SQL> SET AUTOTRACE TRACEONLY STATISTICS
    SQL> SET TIMING ON
    SQL>
    SQL> SELECT x, y FROM clobtest;
    
    1000 rows selected.
    
    Elapsed: 00:00:02.96
    
    Statistics
    ------------------------------------------------------
              0  recursive calls
              0  db block gets
           3008  consistent gets
              0  physical reads
              0  redo size
         559241  bytes sent via SQL*Net to client
         180350  bytes received via SQL*Net from client
           2002  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
           1000  rows processed
    
    SQL> SELECT x, DBMS_LOB.SUBSTR(y, 4000) FROM clobtest;
    
    1000 rows selected.
    
    Elapsed: 00:00:00.32
    
    Statistics
    ------------------------------------------------------
              0  recursive calls
              0  db block gets
           2082  consistent gets
              0  physical reads
              0  redo size
          18993  bytes sent via SQL*Net to client
           1076  bytes received via SQL*Net from client
             68  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
           1000  rows processed
    
    SQL> SELECT x, z FROM clobtest2;
    
    1000 rows selected.
    
    Elapsed: 00:00:00.18
    
    Statistics
    ------------------------------------------------------
              0  recursive calls
              0  db block gets
           1005  consistent gets
              0  physical reads
              0  redo size
          18971  bytes sent via SQL*Net to client
           1076  bytes received via SQL*Net from client
             68  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
           1000  rows processed
    

    As you can see, consistent gets are quite a bit higher, but SQL*Net roundtrips and bytes are nearly identical in the last two queries, and that apparently makes a big difference in execution time!

    One warning though: the difference in consistent gets might become a more likely performance issue if you have large result sets, as you won't be able to keep everything in buffer cache and you'll end up with very expensive physical reads...

    Good luck!

    Cheers
