SQLPlus - spooling to multiple files from PL/SQL blocks

Asked by 说谎 on 2020-12-17 04:06

I have a query that returns a lot of data into a CSV file. So much, in fact, that Excel can't open it - there are too many rows. Is there a way to control spool from within a PL/SQL block so that the output is split across multiple files?

6 Answers
  • 2020-12-17 04:49

    Have you looked at setting up an external data connection in Excel (assuming that the CSV files are only being produced for use in Excel)? You could define an Oracle view that limits the rows returned and also add some parameters in the query to allow the user to further limit the result set. (I've never understood what someone does with 64K rows in Excel anyway).
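
    A rough sketch of the view idea (the table, columns, and date cutoff are invented for illustration):

    CREATE OR REPLACE VIEW recent_orders_v AS
    SELECT order_id, customer_id, order_date, amount
      FROM orders
     WHERE order_date >= ADD_MONTHS(SYSDATE, -1);  -- keep the result set Excel-sized

    Excel's data connection can then query the view directly, and further WHERE-clause criteria can be supplied from the connection definition to narrow it down.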

    I feel that this is somewhat of a hack, but you could also use UTL_MAIL and generate attachments to email to your user(s). There's a 32K size limit to the attachments, so you'd have to keep track of the size in the cursor loop and start a new attachment on this basis.
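
    A minimal sketch of that idea, assuming UTL_MAIL is installed and SMTP_OUT_SERVER is configured; the addresses and the ALL_OBJECTS query are placeholders. The buffer is flushed as a fresh attachment whenever the next row would push it past the 32K VARCHAR2 limit:

    DECLARE
      k_max_attach CONSTANT PLS_INTEGER := 32000;  -- stay safely under the 32,767-byte limit
      v_buf  VARCHAR2(32767);
      v_line VARCHAR2(4000);
      v_part PLS_INTEGER := 1;

      PROCEDURE flush_part IS
      BEGIN
        UTL_MAIL.SEND_ATTACH_VARCHAR2(
          sender        => 'reports@example.com',   -- placeholder addresses
          recipients    => 'user@example.com',
          subject       => 'CSV extract, part '||v_part,
          message       => 'Part '||v_part||' of the extract.',
          attachment    => v_buf,
          att_inline    => FALSE,
          att_mime_type => 'text/csv',
          att_filename  => 'extract_'||v_part||'.csv');
        v_part := v_part + 1;
        v_buf  := NULL;
      END;
    BEGIN
      FOR r IN ( SELECT object_id, object_name FROM all_objects ) LOOP
        v_line := r.object_id||','||r.object_name||CHR(10);
        IF NVL(LENGTH(v_buf), 0) + LENGTH(v_line) > k_max_attach THEN
          flush_part;  -- current buffer is full: mail it and start a new one
        END IF;
        v_buf := v_buf || v_line;
      END LOOP;
      IF v_buf IS NOT NULL THEN
        flush_part;    -- mail the final, partial attachment
      END IF;
    END;
    /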

  • 2020-12-17 04:50

    Try this for a pure SQL*Plus solution...

    set pagesize 0
    set trimspool on  
    set headsep off 
    set feedback off
    set echo off 
    set verify off
    set timing off
    set linesize 4000
    
    -- NB: the hard-coded 50s inside the PROMPTed query below must match this value
    DEFINE rows_per_file = 50
    
    
    -- Create a SQL script that will generate the individual result files
    SET DEFINE OFF
    
    SPOOL c:\temp\generate_one.sql
    
    PROMPT COLUMN which_dynamic NEW_VALUE dynamic_filename
    PROMPT
    
    PROMPT SELECT 'c:\temp\run_#'||TO_CHAR( &1, 'fm000' )||'_result.txt' which_dynamic FROM dual
    PROMPT /
    
    PROMPT SPOOL &dynamic_filename
    
    PROMPT SELECT *
    PROMPT   FROM ( SELECT a.*, rownum rnum
    PROMPT            FROM ( SELECT object_id FROM all_objects ORDER BY object_id ) a
    PROMPT           WHERE rownum <= ( &2 * 50 ) )
    PROMPT  WHERE rnum >= ( ( &3 - 1 ) * 50 ) + 1
    PROMPT /
    
    PROMPT SPOOL OFF
    
    SPOOL OFF
    
    SET DEFINE &
    
    
    -- Define variable to hold number of rows
    -- returned by the query
    COLUMN num_rows NEW_VALUE v_num_rows
    
    -- Find out how many rows there are to be
    -- (this demo fakes 120 rows; in real use, COUNT(*) over the same query you spool)
    SELECT COUNT(*) num_rows
      FROM ( SELECT LEVEL num_files FROM dual CONNECT BY LEVEL <= 120 );
    
    
    -- Create a master file with the correct number of sql files
    SPOOL c:\temp\run_all.sql
    
    SELECT '@c:\temp\generate_one.sql '||TO_CHAR( num_files )
                                       ||' '||TO_CHAR( num_files )
                                       ||' '||TO_CHAR( num_files ) file_name
      FROM ( SELECT LEVEL num_files 
               FROM dual 
            CONNECT BY LEVEL <= CEIL( &v_num_rows / &rows_per_file ) )
    /
    
    SPOOL OFF
    
    -- Now run them all
    @c:\temp\run_all.sql
    
  • 2020-12-17 04:53

    utl_file is the package you are looking for. You can write a cursor, loop over the rows (writing each one out), and when mod(num_rows_written, num_per_file) = 0 it's time to start a new file. It works fine within PL/SQL blocks; see the sketch below.

    Here's the reference for utl_file: http://www.adp-gmbh.ch/ora/plsql/utl_file.html

    NOTE: I'm assuming here, that it's ok to write the files out to the server.
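
    A minimal sketch of this, assuming a directory object the database can write to (EXPORT_DIR here is hypothetical) and using ALL_OBJECTS as a stand-in for the real query:

    DECLARE
      k_rows_per_file CONSTANT PLS_INTEGER := 65000;  -- rows before rolling to a new file
      v_file UTL_FILE.FILE_TYPE;
      v_num  PLS_INTEGER := 0;  -- file counter
      v_rows PLS_INTEGER := 0;  -- rows written so far
    BEGIN
      FOR r IN ( SELECT object_id, object_name FROM all_objects ) LOOP
        IF MOD(v_rows, k_rows_per_file) = 0 THEN
          IF v_num > 0 THEN
            UTL_FILE.FCLOSE(v_file);  -- close the previous chunk
          END IF;
          v_num  := v_num + 1;
          v_file := UTL_FILE.FOPEN('EXPORT_DIR', 'extract_'||v_num||'.csv', 'w', 32767);
        END IF;
        UTL_FILE.PUT_LINE(v_file, r.object_id||','||r.object_name);
        v_rows := v_rows + 1;
      END LOOP;
      IF v_num > 0 THEN
        UTL_FILE.FCLOSE(v_file);
      END IF;
    END;
    /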

  • 2020-12-17 05:02

    Got a solution, don't know why I didn't think of this sooner...

    The basic idea is that the master SQL*Plus script generates an intermediate script that splits the output across multiple files: the intermediate script runs the same query several times, each with a different rownum range, spooling each range to its own file.

    set termout off
    set serveroutput on
    set echo off
    set feedback off
    variable v_err_count number;
    spool intermediate_file.sql
    declare
         i number := 0;
         v_fileNum number := 1;
         v_range_start number := 1;
         v_range_end number := 1;
         k_max_rows constant number := 65536;  -- classic Excel worksheet row limit
    begin
        dbms_output.enable(NULL);  -- unlimited buffer (10g+), so the generated script is not truncated
        select count(*) 
        into :v_err_count
        from ...
        /* You don't need to see the details of the query... */
    
        while i < :v_err_count loop  -- strictly "<": with "<=" the loop never ends when the count is an exact multiple of k_max_rows
    
              v_range_start := i+1;
              if v_range_start <= :v_err_count then
                i := i+k_max_rows;
                v_range_end := i;
    
                dbms_output.put_line('set colsep ,  
    set pagesize 0
    set trimspool on 
    set headsep off
    set feedback off
    set echo off
    set termout off
    set linesize 4000
    spool large_data_file_'||v_fileNum||'.csv
    select data_string
    from (select rownum rn, data_object
          from 
          /* Details of query omitted */
         )
    where rn >= '||v_range_start||' and rn <= '||v_range_end||';
    spool off');
              v_fileNum := v_fileNum +1;
             end if;
        end loop;
    end;
    /
    spool off
    prompt     executing intermediate file
    @intermediate_file.sql
    set serveroutput off
    
  • 2020-12-17 05:02

    While your question asks how to break the great volume of data into chunks Excel can handle, I would ask whether any part of the Excel work could be moved into SQL (or PL/SQL) to reduce the volume of data in the first place. Ultimately the data has to be reduced to be meaningful to anyone, and the database is a great engine to do that work on.
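
    For instance, if the spreadsheet ends in a pivot or summary anyway, aggregating in the database gets there with far fewer rows (table and columns invented for illustration):

    SELECT region,
           TRUNC(order_date, 'MM') AS order_month,
           SUM(amount)  AS total_amount,
           COUNT(*)     AS order_count
      FROM orders
     GROUP BY region, TRUNC(order_date, 'MM')
     ORDER BY region, order_month;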

    When you have reduced the data to more presentable volumes or even final results, dump it for Excel to make the final presentation.

    This is not the answer you were looking for but I think it is always good to ask if you are using the right tool when it is getting difficult to get the job done.

  • 2020-12-17 05:05

    Use the Unix split utility on the resulting file, e.g. split -l 65000 results.csv part_ to cut it into pieces of 65,000 lines each.
