Thursday, May 16, 2013

Row source generation

Below are different ways of generating rows; these can be used to create test data or, in some cases, as a row source in joins.

Let's consider that we need to generate 10 rows: 101 to 105 and 201 to 205.


1.

select case
          when level <= 5 then 100 + level
                          else 195 + level
       end id
  from dual
connect by level <= 10
 order by id

2.

select level id
  from dual
 where level between 101 and 105
    or level between 201 and 205
connect by level <= 205
 order by id

3.

select 100 * x.x + y.y id
  from (
   select level x
     from dual
   connect by level <= 2
       ) x
 cross join (
   select level y
     from dual
   connect by level <= 5
       ) y
 order by id

4.

select case
          when rownum <= 5 then 100 + rownum
                           else 195 + rownum
       end id
  from dual
connect by rownum <= 10
 order by id

5.

select to_number(column_value) id
  from xmltable('101,102,103,104,105,201,202,203,204,205')
 order by id

6.

select to_number(column_value) id
  from xmltable('101 to 105, 201 to 205')
 order by id

7.

select to_number(column_value) id
  from xmltable('for $i in (100, 200), $j in 1 to 5 return $i + $j')
 order by id

8.

with rowsource(id) as (
   select 1 id
     from dual
    union all
   select id + 1 id
     from rowsource
    where id < 205
)
select id
  from rowsource
 where id between 101 and 105
    or id between 201 and 205
 order by id

9.

with rowsource(id) as (
   select 101 id
     from dual
    union all
   select case id
             when 105 then 201
             else id + 1
          end id
     from rowsource
    where id < 205
)
select id
  from rowsource
 order by id

Courtesy: PL/SQL Challenge by Steven Feuerstein

http://www.plsqlchallenge.com/pls/apex/f?p=10000:659:5558871037305::NO:659:P659_COMP_EVENT_ID,P659_QUIZ_ID,P659_QUESTION_ID:140896,,7589&cs=1CADD4B82BF7B20DE4520099D400CACB9

Wednesday, March 27, 2013

SQL*Plus settings

Below is a list of SQL*Plus commands that can be useful.
One major advantage of SQL*Plus is for testing: since it barely changes from release to release, we can use it as a tool to reproduce any test.

Instead of executing the commands below every time we log in to SQL*Plus, we can add them all
to the login.sql file (at product\11.2.0\dbhome_1\sqlplus\admin) so that they are executed automatically every time a user starts SQL*Plus or uses the SQL*Plus CONNECT command.

The settings below are picked up from Tom Kyte's (http://www.facebook.com/pages/Tom-Kyte-Ask-Tom-fan-page/189391211108914) book "Effective Oracle by Design".

Just copy and paste the below into login.sql / glogin.sql:

REM turn off the terminal output - make it so SQL*Plus does not
REM print anything when we log in
set termout off

REM default your editor here. SQLPlus has many individual settings
REM This is one of the most important ones
define _editor=vi

REM serveroutput controls whether your DBMS_OUTPUT.PUT_LINE calls
REM go into the bit bucket (serveroutput off) or get displayed
REM on screen. The format wrapped element causes
REM sqlplus to preserve leading white space - very useful

set serveroutput on size 1000000 format wrapped

REM Here I set some default column widths for commonly queried columns

column object_name format a30
column segment_name format a30
column file_name format a40
column name format a30
column what format a30 word_wrapped
column plan_plus_exp format a100

REM by default, a spool file is a fixed width file with lots of
REM trailing blanks. Trimspool removes these trailing blanks
REM making the spool file significantly smaller

set trimspool on

REM LONG controls how much of a LONG or CLOB sqlplus displays by default.
REM It defaults to 80 characters which in general is far too small.

set long 5000

REM This sets the default width at which sqlplus wraps the output.

set linesize 131

REM sqlplus prints column headings every N lines of output; this defaults to 14 lines.

set pagesize 9999

REM here is how we set the signature prompt in sqlplus to
REM username@database>. Here we use the NEW_VALUE concept to format a nice
REM prompt string that defaults to IDLE

define gname=idle
column global_name new_value gname 
select lower(user) || '@' ||
substr(global_name,1,decode(dot,0,length(global_name),dot-1)) global_name
from (select global_name,instr(global_name,'.') dot
from global_name);
set sqlprompt '&gname>  '

REM and lastly, we'll put termout back on sqlplus prints
REM to the screen

set termout on

Thursday, February 28, 2013

DBMS_APPLICATION_INFO

DBMS_APPLICATION_INFO is an Oracle-supplied package that can be helpful in debugging, to track the progress of a long-running procedure, or to set any useful information in the v$session view, which can be useful for programmers and DBAs.

Consider a scenario where you are executing some statements (DML) in a loop. Let's assume that the loop runs for more than 10 million iterations. Now if something fails (say, a unique constraint error) in the middle of the loop, it becomes difficult to identify the exact record that caused the failure.

One way is to use DBMS_OUTPUT for each iteration, but that would fill up the output buffer.
Another way is to use an autonomous error-logging procedure that writes to a separate error table. But this is not an efficient way of doing it, since performance will slow down and we are unnecessarily filling up the error table.

In situations like these, the DBMS_APPLICATION_INFO package comes in handy.
This package has a procedure called SET_CLIENT_INFO, which sets the CLIENT_INFO column of the v$session view. The value is immediately visible to other sessions, and we do not even need to commit.

Pseudocode:

Session1:

for rec in (some select which has 10 million records)
loop
   -- ... some calculations
   DBMS_APPLICATION_INFO.SET_CLIENT_INFO('Program is executing - ' || rec.col1 || '-' || rec.col2);
   -- ... DMLs
end loop;

When the above program is running, we can query v$session to monitor the progress.

select client_info from v$session where sid = :x;

It can also be used to expose the bind variable values being used in a dynamic query, as sketched below.
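A minimal sketch (the table, column, and bind value here are hypothetical):

declare
   l_empno number := 7839;   -- hypothetical bind value
   l_ename varchar2(30);
begin
   -- expose the bind value so other sessions can see it in v$session.client_info
   dbms_application_info.set_client_info('emp lookup, :empno = ' || l_empno);
   execute immediate 'select ename from emp where empno = :empno'
      into l_ename using l_empno;
end;
/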

The other procedures inside the package are below (see the sketch after this list):
SET_MODULE/GET_MODULE : These allow you to set the MODULE and ACTION columns in v$session.
      The MODULE column should be set to the name of the process.
      The ACTION column should be set to the name of the procedure you are executing in a package.

SET_SESSION_LONGOPS : Use this to monitor the progress of any long-running routine. Set it for any routine that will take more than a few seconds to complete.
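A minimal sketch of both calls (the module, action, and operation names here are hypothetical; the loop stands in for real work):

declare
   l_rindex binary_integer := dbms_application_info.set_session_longops_nohint;
   l_slno   binary_integer;
begin
   -- name the process and the step currently running
   dbms_application_info.set_module(module_name => 'PAYROLL',
                                    action_name => 'CALC_DEDUCTIONS');
   for i in 1 .. 100
   loop
      -- ... one unit of work here ...
      dbms_application_info.set_session_longops(
         rindex    => l_rindex,
         slno      => l_slno,
         op_name   => 'Payroll batch',   -- appears in v$session_longops.opname
         sofar     => i,
         totalwork => 100,
         units     => 'iterations');
   end loop;
   dbms_application_info.set_module(null, null);   -- clear when done
end;
/

-- monitor from another session:
select sid, opname, sofar, totalwork, time_remaining
  from v$session_longops
 where opname = 'Payroll batch';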



Monday, January 28, 2013

Faster Inserts



Let's consider an application that performs a large number of inserts into a table with a primary key.
Generally, primary key values are generated by sequences.
When we have thousands of concurrent inserts (or parallel inserts) using sequences, all the values go to the right-hand side of the unique index,
causing massive contention for the right-hand block and hence slowing down the whole process.

Previously the solutions for this were:
1. Reverse key indexes : good for small primary key values, but performance is noticed to decrease for large numbers.
2. Hash partitioning : performance is not up to expectation in a RAC clustered environment.

The best solution for the above problem was suggested by Tom Kyte (https://www.facebook.com/pages/Tom-Kyte-Ask-Tom-fan-page/189391211108914)
at Real World Performance India (Sangam12),
which is to construct the key value so that there is no index contention.
We shall create a new key structure as Instance# || SessionId || Sequence.
This value will be inserted into the primary key column instead of just the sequence value.


Below is a demo:

create table TEST_INSERTS( COL1 number primary key, DESCR varchar2(200) );
CREATE SEQUENCE SEQINS;

--First lets do the old way using only sequences

begin
for i in 1..10000000
loop
INSERT INTO TEST_INSERTS VALUES (SEQINS.nextval,'Using SEQINS only');
end loop;
commit;
end;
/

-- New Method for optimal scaling and response time characteristics
begin
for i in 1..10000000
loop
/*
If your database is RAC clustered then use as below:
INSERT INTO TEST_INSERTS VALUES (sys_context('USERENV','INSTANCE') || sys_context('USERENV','SID') || SEQINS.nextval, 'Using Composite SEQINS');
*/
INSERT INTO TEST_INSERTS VALUES (sys_context('USERENV','SID') || SEQINS.nextval, 'Using Composite SEQINS');
end loop;
commit;
end;
/
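One caveat (my note, not part of the original demo): plain concatenation can collide, e.g. SID 1 with sequence 23 and SID 12 with sequence 3 both produce 123. Zero-padding the sequence value to a fixed width avoids this; a sketch:

INSERT INTO TEST_INSERTS
VALUES (to_number(sys_context('USERENV','SID')
        || to_char(SEQINS.nextval,'FM000000000000')),   -- fixed-width sequence part
        'Using padded composite key');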

More information below:

http://asktom.oracle.com/pls/apex/f?p=100:11:0::NO::P11_QUESTION_ID:6077843300346542181

Thursday, December 27, 2012

Oracle - Getting Bind variable values



It is common to use bind variables in our programs; the biggest advantage is that bind variables allow sharing of cursors in the library cache and hence avoid hard parsing of queries.

When we have performance issues with a query, we can usually retrieve the SQL text, but unfortunately we won't get the actual values used during the execution of the query.

In Oracle 11g and above, this is possible using DBMS_XPLAN.DISPLAY_CURSOR(sql_id VARCHAR2, cursor_child_no NUMBER, format VARCHAR2).
We need to pass the sql_id of the query to get the actual value of the bind variable.

Below is a Demo:

SELECT * FROM emp where ename = :ragh ;

From the explain plan it's not possible to determine the value passed at runtime.
Below is the output of explain plan:

Plan hash value: 2872589290

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    11 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    11 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ENAME"=:RAGH)  --> the actual value is not displayed


----------------

From v$session or v$sql, we first get the sql_id for this query.
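One way to look up the sql_id (a sketch, assuming the statement text shown above is distinctive enough to match on):

select sql_id, child_number, sql_text
  from v$sql
 where sql_text like 'SELECT * FROM emp where ename%';

With the sql_id in hand, we pass it to DISPLAY_CURSOR: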

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('f0g0xnq3ynd4u', null, 'ADVANCED'));

SQL_ID  f0g0xnq3ynd4u, child number 0
-------------------------------------
SELECT * FROM emp where ename = :ragh

Plan hash value: 2872589290

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    11 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------


Peeked Binds (identified by position):
--------------------------------------

   1 - :RAGH (VARCHAR2(30), CSID=31): 'Raghu'

From this we can get the actual literal values of the variables, as peeked when the statement was hard parsed (later executions may have used different values).
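If the full ADVANCED output is too verbose, the peeked-binds section alone can be requested with the PEEKED_BINDS format modifier (a documented DBMS_XPLAN format keyword):

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('f0g0xnq3ynd4u', null, 'TYPICAL +PEEKED_BINDS'));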

Wednesday, November 14, 2012

Incremental Statistics in Oracle 11G

Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables.

Partitioned tables are typically large, and regularly gathering statistics on them can be very time-consuming.

Generally, the latest partitions are the ones where data is inserted, while the older partitions mostly remain untouched. With incremental statistics enabled on a table, statistics are gathered only on those partitions that had any DML on them; the rest are not scanned, which saves a lot of time.
Below is a demo:

create table incre_stats_tab(id, business_dt)
partition by range(business_dt)
(
partition p1 values less than (to_date('1-JAN-2009','DD-MON-YYYY'))
,partition p2 values less than (to_date('1-JAN-2010','DD-MON-YYYY'))
,partition p3 values less than (to_date('1-JAN-2011','DD-MON-YYYY'))
,partition p4 values less than (to_date('1-JAN-2012','DD-MON-YYYY'))
) as
select level,to_date('1-JAN-2008','DD-MON-YYYY')+ level from dual connect by level < 1000;

We have not inserted any data for the year 2011:

SELECT * FROM incre_stats_tab PARTITION FOR (to_date('3-JAN-2011','DD-MON-YYYY'));
-- returns zero records

To enable incremental statistics for a partitioned table, the INCREMENTAL preference must be set to TRUE:

begin
   dbms_stats.set_table_prefs('HR','incre_stats_tab','INCREMENTAL','TRUE');
end;
/

SELECT DBMS_STATS.GET_PREFS('INCREMENTAL','HR','incre_stats_tab') FROM DUAL;
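Note that incremental statistics also require the other relevant preferences to be at their defaults; a sketch of setting them explicitly (these are the default values anyway):

begin
   dbms_stats.set_table_prefs('HR','incre_stats_tab','PUBLISH','TRUE');
   dbms_stats.set_table_prefs('HR','incre_stats_tab','ESTIMATE_PERCENT','DBMS_STATS.AUTO_SAMPLE_SIZE');
   dbms_stats.set_table_prefs('HR','incre_stats_tab','GRANULARITY','AUTO');
end;
/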

Let's gather stats now:
begin
   dbms_stats.gather_table_stats('HR','incre_stats_tab');
end;
/

SELECT TABLE_NAME, LAST_ANALYZED FROM USER_TABLES WHERE TABLE_NAME='INCRE_STATS_TAB';

select partition_name, last_analyzed from user_tab_partitions where table_name='INCRE_STATS_TAB' order by partition_position;

P1 14-NOV-12 11:54
P2 14-NOV-12 11:54
P3 14-NOV-12 11:54
P4 14-NOV-12 11:54

Following are the column statistics for the INCRE_STATS_TAB table:
select column_name,num_distinct,num_nulls from user_tab_col_statistics where table_name='INCRE_STATS_TAB';

Let's insert data for the year 2011 and gather statistics again:

insert into incre_stats_tab select level, to_date('1-JAN-2011','DD-MON-YYYY') + level from dual connect by level < 100;

begin
   dbms_stats.gather_table_stats('HR','incre_stats_tab');
end;
/

Now if we look at the last_analyzed date for the table and the partitions, we will see that only the global statistics and the statistics on partition p4, where rows were changed by the inserts, have been refreshed:

select partition_name, last_analyzed from user_tab_partitions where table_name='INCRE_STATS_TAB' order by partition_position;

P1 14-NOV-12 11:54
P2 14-NOV-12 11:54
P3 14-NOV-12 11:54
P4 15-NOV-12 12:03

As we can see, only the partition that was affected by the DML was analyzed; the others were not even scanned. This is a great performance improvement for large tables.

Incremental statistics maintenance gathers statistics on any partition whose data has changed, since that change will impact the global-level statistics.

Useful Links:

https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics

http://www.oracle-base.com/articles/11g/statistics-collection-enhancements-11gr1.php