
Thursday, February 28, 2013

DBMS_APPLICATION_INFO

DBMS_APPLICATION_INFO is an Oracle-supplied package that is helpful for debugging, for tracking the progress of a long-running procedure, or for publishing any useful information in the v$session view, where programmers and DBAs can pick it up.

Consider a scenario where you are executing DML statements in a loop. Let's assume the loop runs for more than 10 million iterations. If something fails in the middle of the loop (say, a unique constraint violation), it becomes difficult to identify the exact record that caused the failure.

One way is to call DBMS_OUTPUT for each iteration, but that would fill up the output buffer.
Another way is to use an autonomous error-logging procedure that writes to a separate error table. But this is not an efficient approach: it slows performance down and fills up the error table unnecessarily.

In situations like these, the DBMS_APPLICATION_INFO package comes in handy.
The package has a procedure called SET_CLIENT_INFO, which sets the CLIENT_INFO column of the v$session view. The value is immediately visible to other sessions, and no commit is needed.

Pseudo code:

Session1:

for rec in (some select which has 10 million records)
loop
-- ...Some calculations
DBMS_APPLICATION_INFO.SET_CLIENT_INFO('Program is executing - ' ||  rec.col1 || '-' || rec.col2);
--- DMLs
end loop;
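
As a runnable sketch of the same idea (the EMP driving query and the loop counter are just illustrative assumptions):

declare
  l_cnt pls_integer := 0;
begin
  for rec in (select empno, ename from emp)   -- assume this is your large driving query
  loop
    l_cnt := l_cnt + 1;
    -- ...some calculations
    -- publish the key of the row currently being processed
    DBMS_APPLICATION_INFO.SET_CLIENT_INFO('Program is executing - ' || l_cnt || ' - ' || rec.empno || '-' || rec.ename);
    -- ...DMLs for this row
  end loop;
end;
/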

When the above program is running, we can query v$session to monitor the progress.

select client_info from v$session where sid = :x;

It can also be used to expose the bind variable values being used in a dynamic query.
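
For instance, a minimal sketch (the EMP table, the RAISE_SAL procedure and its p_emp_id parameter are illustrative assumptions, not part of the package):

create or replace procedure raise_sal (p_emp_id in number)
is
begin
  -- record the bind value so it is visible in v$session.client_info while the update runs
  DBMS_APPLICATION_INFO.SET_CLIENT_INFO('raise_sal bind :1 = ' || p_emp_id);
  execute immediate 'update emp set sal = sal * 1.1 where empno = :1' using p_emp_id;
end raise_sal;
/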

The other procedures in the package are:
SET_MODULE/READ_MODULE : these let you set (and read back) the MODULE and ACTION columns in v$session.
      The MODULE column should be set to the name of the process.
      The ACTION column should be set to the name of the procedure currently being executed within the package.

SET_SESSION_LONGOPS : use this to monitor the progress of any long-running routine; set it for any routine that will take more than a few seconds to complete, and the progress then shows up in v$session_longops. A minimal sketch of SET_MODULE and SET_SESSION_LONGOPS follows below.
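
A minimal sketch of both, assuming a hypothetical load of 1,000,000 rows (the module/action names and the work inside the loop are made up):

declare
  l_rindex binary_integer := DBMS_APPLICATION_INFO.set_session_longops_nohint;
  l_slno   binary_integer;
  l_total  number := 1000000;   -- assumed total amount of work
begin
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'NIGHTLY_LOAD',       -- name of the process
                                   action_name => 'PROCESS_ORDERS');    -- procedure currently executing
  for i in 1 .. l_total
  loop
    -- ...work for one row...
    if mod(i, 10000) = 0 then
      DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(rindex    => l_rindex,
                                                slno      => l_slno,
                                                op_name   => 'Order processing',
                                                sofar     => i,
                                                totalwork => l_total,
                                                units     => 'rows');
    end if;
  end loop;
  DBMS_APPLICATION_INFO.SET_MODULE(null, null);   -- clear module/action when done
end;
/

The progress can then be watched from another session in v$session_longops (SOFAR versus TOTALWORK).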



Monday, January 28, 2013

Faster Inserts



Let's consider an application with a large number of inserts into a table that has a primary key.
Generally, primary key values are generated from a sequence.
When we have thousands of concurrent (or parallel) inserts using a sequence, all the new values go to the right-hand side of the unique index,
causing massive contention on the right-most leaf block and slowing down the whole process.

Previously the solutions for this were (sketched below):
1. Reverse key indexes : good for small primary key values, but performance has been observed to degrade as the values grow large.
2. Hash partitioning the index : performance is not up to expectation in a RAC clustered environment.
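
For reference, the two traditional options look roughly like this (table and index names are illustrative, and the two statements are alternatives, not meant to be run together):

-- 1. Reverse key index on the primary key column
create unique index big_table_pk on big_table (id) reverse;

-- 2. Globally hash-partitioned index on the primary key column
create unique index big_table_pk on big_table (id)
  global partition by hash (id) partitions 16;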

The best solution for the above problem was suggested by Tom Kyte (https://www.facebook.com/pages/Tom-Kyte-Ask-Tom-fan-page/189391211108914)
at the Real World Performance event in India (Sangam12):
build the key value around the sequence so that there is no index contention.
We construct a new key value as Instance# || SessionId || Sequence,
and this is what gets inserted into the primary key column instead of just the sequence value.


Below is a demo:

create table TEST_INSERTS( COL1 number primary key, DESCR varchar2(200) );
CREATE SEQUENCE SEQINS;

-- First, let's do it the old way, using only the sequence

begin
for i in 1..10000000
loop
INSERT INTO TEST_INSERTS VALUES (SEQINS.nextval,'Using SEQINS only');
end loop;
commit;
end;
/

-- New Method for optimal scaling and response time characteristics
begin
for i in 1..10000000
loop
/*
If your database is RAC clustered, then include the instance number as well:
INSERT INTO TEST_INSERTS VALUES (sys_context('USERENV','INSTANCE')|| sys_context('USERENV','SID')|| SEQINS.nextval,'Using Composite SEQINS');
*/
INSERT INTO TEST_INSERTS VALUES (sys_context('USERENV','SID')|| SEQINS.nextval,'Using Composite SEQINS');
end loop;
commit;
end;
/

More information below:

http://asktom.oracle.com/pls/apex/f?p=100:11:0::NO::P11_QUESTION_ID:6077843300346542181

Thursday, December 27, 2012

Oracle - Getting Bind variable values



It is common to use bind variables in our programs; the biggest advantage is that bind variables allow sharing of cursors in the library cache and hence avoid hard parsing of queries.

When we have performance issues with a query, we can usually retrieve the SQL text somehow, but unfortunately we won't get the actual values used during execution of the query.

In Oracle 11g and above, this is possible using DBMS_XPLAN.DISPLAY_CURSOR(sql_id VARCHAR2, cursor_child_no NUMBER, format VARCHAR2).
We need to pass the SQL_ID of the query to get the actual values of the bind variables.

Below is a Demo:

SELECT * FROM emp where ename = :ragh ;

From the explain plan it is not possible to determine the value passed at runtime.
Below is the output of the explain plan:

Plan hash value: 2872589290

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    11 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    11 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ENAME"=:RAGH)    -- the actual value is not displayed


----------------

From the v$session view, I will get the SQL_ID for this query.
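
For example, a lookup along these lines (:sid is assumed to hold the SID of the session that executed the statement) returns the SQL_ID of its current and previous statements:

select sid, sql_id, prev_sql_id
  from v$session
 where sid = :sid;

Passing that SQL_ID to DISPLAY_CURSOR: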

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('f0g0xnq3ynd4u', null, 'ADVANCED'));

SQL_ID  f0g0xnq3ynd4u, child number 0
-------------------------------------
SELECT * FROM emp where ename = :ragh

Plan hash value: 2872589290

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    11 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------


Peeked Binds (identified by position):
--------------------------------------

   1 - :RAGH (VARCHAR2(30), CSID=31): 'Raghu'

From this output we can see the actual values of the bind variables used at runtime.

Wednesday, November 14, 2012

Incremental Statistics in Oracle 11G

Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables.

Partitioned tables tend to be large, and regularly gathering statistics on them can be very time consuming.

Generally the latest partitions are the ones into which data is inserted, while the older partitions mostly remain untouched. With incremental statistics enabled on a table, statistics are gathered only on the partitions that have had DML against them; the remaining partitions are not scanned, which saves a lot of time.
Below is a demo:

create table incre_stats_tab(id ,business_dt )
partition by range(business_dt)
(
partition p1 values less than (to_date('1-JAN-2009','DD-MON-YYYY'))
,partition p2 values less than (to_date('1-JAN-2010','DD-MON-YYYY'))
,partition p3 values less than (to_date('1-JAN-2011','DD-MON-YYYY'))
,partition p4 values less than (to_date('1-JAN-2012','DD-MON-YYYY'))
) as
select level,to_date('1-JAN-2008','DD-MON-YYYY')+ level from dual connect by level < 1000;

We have not inserted any data for the year 2011:

SELECT * FROM incre_stats_tab PARTITION FOR (to_date('3-JAN-2011','DD-MON-YYYY'));
-- returns zero records

To enable incremental statistics for a partitioned table, the INCREMENTAL preference must be set to TRUE:

 begin
dbms_stats.set_table_prefs('HR','incre_stats_tab','INCREMENTAL','TRUE');
END;

 SELECT DBMS_STATS.GET_PREFS('INCREMENTAL','HR','incre_stats_tab') FROM DUAL;

Let's gather stats now:
begin
dbms_stats.gather_table_stats('HR','incre_stats_tab');
END;

 SELECT TABLE_NAME,LAST_ANALYZED FROM USER_TABLES WHERE TABLE_NAME='INCRE_STATS_TAB';

 select partition_name,last_analyzed from user_tab_partitions where table_name='INCRE_STATS_TAB' order by partition_position;
 P1 14-NOV-12 11:54
P2 14-NOV-12 11:54
P3 14-NOV-12 11:54
P4 14-NOV-12 11:54

The following query shows the column statistics for the INCRE_STATS_TAB table:
select column_name,num_distinct,num_nulls from user_tab_col_statistics where table_name='INCRE_STATS_TAB';

Let's insert data for the year 2011 and gather statistics again:

insert into incre_stats_tab select level,to_date('1-JAN-2011','DD-MON-YYYY')+ level from dual connect by level < 100;

 begin
dbms_stats.gather_table_stats('HR','incre_stats_tab');
END;

Now if we look at the LAST_ANALYZED date for the table and its partitions, we will see that only the global statistics and the statistics on partition P4, whose rows changed due to the insert, have been refreshed.

 select partition_name,last_analyzed from user_tab_partitions where table_name='INCRE_STATS_TAB' order by partition_position;
 P1 14-NOV-12 11:54
P2 14-NOV-12 11:54
P3 14-NOV-12 11:54
P4 15-NOV-12 12:03

As we can see, only the partition that was affected by the DML was analyzed; the others were not even scanned. This is a great performance improvement for large tables.

Incremental statistics maintenance will gather statistics on any partition whose data has changed in a way that impacts the global-level statistics.
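
If incremental maintenance does not seem to kick in, it is also worth checking the related preferences; as the first link below explains, ESTIMATE_PERCENT should be left at AUTO_SAMPLE_SIZE and GRANULARITY at AUTO for incremental statistics to work. A quick sketch of such a check (schema and table names taken from the demo above):

select dbms_stats.get_prefs('INCREMENTAL','HR','INCRE_STATS_TAB')      incremental_pref,
       dbms_stats.get_prefs('ESTIMATE_PERCENT','HR','INCRE_STATS_TAB') estimate_percent_pref,
       dbms_stats.get_prefs('GRANULARITY','HR','INCRE_STATS_TAB')      granularity_pref,
       dbms_stats.get_prefs('PUBLISH','HR','INCRE_STATS_TAB')          publish_pref
  from dual;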

Useful Links:

https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics

http://www.oracle-base.com/articles/11g/statistics-collection-enhancements-11gr1.php




Thursday, October 25, 2012

Get Create Table script (or any DDL) using DBMS_METADATA

We can use the DBMS_METADATA package to get the DDL script for any object.
If we need to strip out the storage clauses, the transform parameters below can be used.

begin
dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE );
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',FALSE);
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'TABLESPACE',TRUE);
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SEGMENT_ATTRIBUTES',FALSE);
DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',false);
end;

select dbms_metadata.get_ddl('TABLE','TABLE_NAME','SCHEMA') FROM DUAL;

The DDL for all dependent indexes can be obtained using the query below:

select dbms_metadata.get_dependent_ddl('INDEX','TABLE_NAME','SCHEMA') from dual;
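
As a concrete illustration (assuming the classic SCOTT.EMP demo table; SET LONG is a SQL*Plus setting needed so the returned CLOB is not truncated on screen):

set long 100000
select dbms_metadata.get_ddl('TABLE','EMP','SCOTT') from dual;
select dbms_metadata.get_dependent_ddl('INDEX','EMP','SCOTT') from dual;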

Truly Generic Code

I got this wonderful piece of code, written by Steven Feuerstein, from his website:

http://www.plsqlchallenge.com/pls/apex/f?p=10000:659:417200574319978:CLEAR_HA:NO::P659_QUIZ_ID,P659_COMP_EVENT_ID,P659_QUESTION_ID:3564,2426,&cs=37432F63779BD40D386050B1A90113B43

Dynamic SQL is a programming methodology for generating and running SQL statements at run time. It is useful when writing general-purpose and flexible programs like ad hoc query systems, when writing programs that must run database definition language (DDL) statements, or when you do not know at compilation time the full text of a SQL statement or the number or data types of its input and output variables.

Oracle PL/SQL offers very few opportunities for "reflection": obtaining at runtime information about data structures used by our programs.

CREATE OR REPLACE PROCEDURE gen_bulk_statements (
base_name_in IN VARCHAR2
, query_in IN VARCHAR2
)
IS
c_type_names DBMS_DESCRIBE.varchar2_table;

TYPE col_info_rt IS RECORD (
col_name all_tab_columns.column_name%TYPE
, col_type all_tab_columns.data_type%TYPE
);

TYPE col_info_aat IS TABLE OF col_info_rt
INDEX BY PLS_INTEGER;

l_columns col_info_aat;

PROCEDURE load_type_translators
IS
c_varchar2 CONSTANT PLS_INTEGER := 1;
c_number CONSTANT PLS_INTEGER := 2;
c_binary_integer CONSTANT PLS_INTEGER := 3;
c_long CONSTANT PLS_INTEGER := 8;
c_rowid CONSTANT PLS_INTEGER := 11;
c_date CONSTANT PLS_INTEGER := 12;
c_raw CONSTANT PLS_INTEGER := 23;
c_longraw CONSTANT PLS_INTEGER := 24;
c_opaque CONSTANT PLS_INTEGER := 58;
c_char CONSTANT PLS_INTEGER := 96;
c_refcursor CONSTANT PLS_INTEGER := 102;
c_urowid CONSTANT PLS_INTEGER := 104;
c_mlslabel CONSTANT PLS_INTEGER := 106;
c_clob CONSTANT PLS_INTEGER := 112;
c_blob CONSTANT PLS_INTEGER := 113;
c_bfile CONSTANT PLS_INTEGER := 114;
c_cfile CONSTANT PLS_INTEGER := 115;
c_object_type CONSTANT PLS_INTEGER := 121;
c_nested_table CONSTANT PLS_INTEGER := 122;
c_varray CONSTANT PLS_INTEGER := 123;
c_timestamp CONSTANT PLS_INTEGER := 180;
c_timestamp_tz CONSTANT PLS_INTEGER := 181;
c_interval_ym CONSTANT PLS_INTEGER := 182;
c_interval_ds CONSTANT PLS_INTEGER := 183;
c_timestamp_ltz CONSTANT PLS_INTEGER := 231;
c_record CONSTANT PLS_INTEGER := 250;
c_indexby_table CONSTANT PLS_INTEGER := 251;
c_boolean CONSTANT PLS_INTEGER := 252;
BEGIN
c_type_names ( c_varchar2 ) := 'VARCHAR2';
c_type_names ( c_number ) := 'NUMBER';
c_type_names ( c_binary_integer ) := 'BINARY_INTEGER';
c_type_names ( c_long ) := 'LONG';
c_type_names ( c_rowid ) := 'ROWID';
c_type_names ( c_date ) := 'DATE';
c_type_names ( c_raw ) := 'RAW';
c_type_names ( c_longraw ) := 'LONG RAW';
c_type_names ( c_char ) := 'CHAR';
c_type_names ( c_mlslabel ) := 'MLSLABEL';
c_type_names ( c_record ) := 'RECORD';
c_type_names ( c_indexby_table ) := 'INDEX-BY TABLE';
c_type_names ( c_boolean ) := 'BOOLEAN';
c_type_names ( c_object_type ) := 'OBJECT TYPE';
c_type_names ( c_nested_table ) := 'NESTED TABLE';
c_type_names ( c_varray ) := 'VARRAY';
c_type_names ( c_clob ) := 'CLOB';
c_type_names ( c_blob ) := 'BLOB';
c_type_names ( c_bfile ) := 'BFILE';
END load_type_translators;

FUNCTION columns_for_query ( query_in IN VARCHAR2 )
RETURN col_info_aat
IS
cur PLS_INTEGER := DBMS_SQL.open_cursor;
l_count PLS_INTEGER;
l_from_dbms_sql DBMS_SQL.desc_tab;
retval col_info_aat;
BEGIN
-- Parse the query and then get the columns.
DBMS_SQL.parse ( cur, query_in, DBMS_SQL.native );
DBMS_SQL.describe_columns ( cur, l_count, l_from_dbms_sql );
DBMS_SQL.close_cursor ( cur );

-- Now move the "raw" data to my list of names and types.
-- In particular, convert the integer type to the string description.
FOR colind IN 1 .. l_from_dbms_sql.COUNT
LOOP
retval ( colind ).col_name := l_from_dbms_sql ( colind ).col_name;
retval ( colind ).col_type :=
CASE
WHEN c_type_names ( l_from_dbms_sql ( colind ).col_type ) IN
( 'VARCHAR2', 'CHAR' )
THEN c_type_names ( l_from_dbms_sql ( colind ).col_type )
|| '(32767)'
ELSE c_type_names ( l_from_dbms_sql ( colind ).col_type )
END;
END LOOP;

RETURN retval;
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line ( DBMS_UTILITY.format_error_stack );
DBMS_SQL.close_cursor ( cur );
RAISE;
END columns_for_query;

PROCEDURE initialize ( columns_out OUT col_info_aat )
IS
BEGIN
load_type_translators;
columns_out := columns_for_query ( query_in );
END initialize;

PROCEDURE gen_declarations ( columns_in IN col_info_aat )
IS
BEGIN
DBMS_OUTPUT.put_line ( 'DECLARE TYPE ' || base_name_in
|| '_rt IS RECORD ('
);

FOR colind IN 1 .. columns_in.COUNT
LOOP
DBMS_OUTPUT.put_line ( CASE
WHEN colind = 1
THEN NULL
ELSE ','
END
|| columns_in ( colind ).col_name
|| ' '
|| columns_in ( colind ).col_type
);
END LOOP;

DBMS_OUTPUT.put_line ( ');' );
DBMS_OUTPUT.put_line ( 'TYPE '
|| base_name_in
|| '_ct IS TABLE OF '
|| base_name_in
|| '_rt INDEX BY PLS_INTEGER;'
);
END gen_declarations;

PROCEDURE gen_bulk_query_and_insert ( columns_in IN col_info_aat )
IS
BEGIN
-- First, with collections of records.
DBMS_OUTPUT.put_line ( 'l_dataset ' || base_name_in || '_ct;' );
DBMS_OUTPUT.put_line ( 'CURSOR '
|| base_name_in
|| '_cur IS '
|| query_in
|| ';'
);
DBMS_OUTPUT.put_line ( 'BEGIN' );
DBMS_OUTPUT.put_line ( 'OPEN ' || base_name_in || '_cur;' );
DBMS_OUTPUT.put_line ( 'FETCH '
|| base_name_in
|| '_cur BULK COLLECT INTO l_dataset;'
);
DBMS_OUTPUT.put_line ( 'END;' );
-- Now break out the individual collections, necessary if you are going to
-- use these in a FORALL statement.
DBMS_OUTPUT.put_line ( 'DECLARE' );

FOR colind IN 1 .. columns_in.COUNT
LOOP
DBMS_OUTPUT.put_line ( 'TYPE '
|| columns_in ( colind ).col_name
|| '_aat IS TABLE OF '
|| columns_in ( colind ).col_type
|| ' INDEX BY PLS_INTEGER;'
);
DBMS_OUTPUT.put_line ( 'l_'
|| columns_in ( colind ).col_name
|| ' '
|| columns_in ( colind ).col_name
|| '_aat;'
);
END LOOP;

DBMS_OUTPUT.put_line ( 'CURSOR '
|| base_name_in
|| '_cur IS '
|| query_in
|| ';'
);
DBMS_OUTPUT.put_line ( 'BEGIN' );
DBMS_OUTPUT.put_line ( 'OPEN ' || base_name_in || '_cur;' );
DBMS_OUTPUT.put_line ( 'FETCH ' || base_name_in
|| '_cur BULK COLLECT INTO '
);

FOR colind IN 1 .. columns_in.COUNT
LOOP
-- Display other attributes, as desired.
DBMS_OUTPUT.put_line ( CASE
WHEN colind = 1
THEN NULL
ELSE ','
END
|| 'l_'
|| columns_in ( colind ).col_name
);
END LOOP;

DBMS_OUTPUT.put_line ( ';' );
-- Now the insert statement.
DBMS_OUTPUT.put_line ( 'FORALL indx IN '
|| 'l_'
|| columns_in ( 1 ).col_name
|| '.FIRST .. '
|| 'l_'
|| columns_in ( 1 ).col_name
|| '.LAST'
);
DBMS_OUTPUT.put_line ( 'INSERT INTO ' || base_name_in || ' VALUES (' );

FOR colind IN 1 .. columns_in.COUNT
LOOP
-- Display other attributes, as desired.
DBMS_OUTPUT.put_line ( CASE
WHEN colind = 1
THEN NULL
ELSE ','
END
|| 'l_'
|| columns_in ( colind ).col_name
);
END LOOP;

DBMS_OUTPUT.put_line ( '); END;' );
END gen_bulk_query_and_insert;
BEGIN
initialize ( l_columns );

IF l_columns.COUNT > 0
THEN
gen_declarations ( l_columns );
gen_bulk_query_and_insert ( l_columns );
END IF;
END gen_bulk_statements;
/
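
A quick way to try it out (assuming the standard EMP table; the generated code is written via DBMS_OUTPUT, so enable serveroutput first):

set serveroutput on
begin
  gen_bulk_statements ('emp', 'select empno, ename, hiredate from emp');
end;
/

This prints a declaration section and the BULK COLLECT / FORALL skeleton for the three columns of the query, which you can then paste into your own program.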

Wednesday, September 12, 2012

Rebuild Local Indexes-Query

We rebuild the Local indexes using the command

ALTER INDEX IDX_NAME rebuild partition partition_name;

But if the index has subpartitions, we need to rebuild them at the subpartition level by providing the subpartition name (ALTER INDEX idx_name rebuild subpartition subpartition_name;).
The block below can be used to dynamically rebuild either way, with or without subpartitions.


declare
  p_table_name varchar2(128) := 'YOUR_TABLE_NAME';   -- set this to your partitioned table name
begin
  for part_idxname in (select part.index_name,
                              nvl(subpart.subpartition_name, part.partition_name) partition_name,
                              nvl2(subpart.subpartition_name, 'SUBPARTITION', 'PARTITION') ptype
                         from user_ind_partitions part, user_ind_subpartitions subpart
                        where part.index_name = subpart.index_name(+)
                          and part.partition_name = subpart.partition_name(+)   -- join on the partition as well, so each subpartition is rebuilt only once
                          and part.index_name in (select index_name
                                                    from user_indexes
                                                   where table_name = p_table_name
                                                     and partitioned = 'YES'))
  loop
    execute immediate 'ALTER INDEX ' || part_idxname.index_name
                   || ' REBUILD '    || part_idxname.ptype
                   || ' '            || part_idxname.partition_name
                   || ' NOLOGGING';
  end loop;
end;
/
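
Afterwards, a quick sanity check can confirm nothing was left unusable (a sketch; note that for composite-partitioned indexes the partition-level status shows N/A and the real status is at the subpartition level):

select index_name, partition_name, status
  from user_ind_partitions
 where status not in ('USABLE','N/A');

select index_name, subpartition_name, status
  from user_ind_subpartitions
 where status <> 'USABLE';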