
Thursday, December 27, 2012

Oracle - Getting Bind variable values



It is common to use bind variables in our programs. The biggest advantage is that bind variables allow sharing of cursors in the library cache and hence avoid hard parsing of queries.

When we have performance issues with a query, we can usually retrieve the SQL text, but unfortunately we won't get the actual values used during the execution of the query.

In Oracle 11g and above, this is possible using DBMS_XPLAN.DISPLAY_CURSOR(sql_id VARCHAR2, cursor_child_no NUMBER, format VARCHAR2).
We need to pass the sql_id of the query to get the actual values of the bind variables.

Below is a Demo:

SELECT * FROM emp where ename = :ragh ;

From the explain plan it's not possible to determine the value passed at runtime.
Below is the output of the explain plan:

Plan hash value: 2872589290

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    11 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    11 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ENAME"=:RAGH) à Actual value is not displayed.


----------------

From the V$SESSION/V$SQL views, we can get the sql_id for this query.
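
One way to look up the sql_id is to search V$SQL by text (a minimal sketch, assuming the statement is still in the shared pool):

SELECT sql_id, child_number, sql_text
FROM v$sql
WHERE sql_text LIKE 'SELECT * FROM emp where ename = :ragh%';

We then pass the sql_id to DBMS_XPLAN.DISPLAY_CURSOR: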

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('f0g0xnq3ynd4u', null, 'ADVANCED'));

SQL_ID  f0g0xnq3ynd4u, child number 0
-------------------------------------
SELECT * FROM emp where ename = :ragh

Plan hash value: 2872589290

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    11 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------


Peeked Binds (identified by position):
--------------------------------------

   1 - :RAGH (VARCHAR2(30), CSID=31): 'Raghu'

From this we can get the actual values of the bind variables used at runtime.

Wednesday, November 14, 2012

Incremental Statistics in Oracle 11G

Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables.

Partitioned tables tend to be big, and if we need to gather statistics on them regularly, it can be very time consuming.

Generally the latest partitions are the ones where data is inserted, and the older partitions mostly remain untouched. With incremental statistics enabled on a table, statistics are gathered only on those partitions that had DML against them; the remaining partitions are not scanned, which saves a lot of time.
Below is a demo:

create table incre_stats_tab (id, business_dt)
partition by range(business_dt)
(
partition p1 values less than (to_date('1-JAN-2009','DD-MON-YYYY'))
,partition p2 values less than (to_date('1-JAN-2010','DD-MON-YYYY'))
,partition p3 values less than (to_date('1-JAN-2011','DD-MON-YYYY'))
,partition p4 values less than (to_date('1-JAN-2012','DD-MON-YYYY'))
) as
select level,to_date('1-JAN-2008','DD-MON-YYYY')+ level from dual connect by level < 1000;

We have not inserted any data for the year 2011:

SELECT * FROM incre_stats_tab PARTITION FOR (to_date('3-JAN-2011','DD-MON-YYYY'));
-- returns zero records

 To initiate incremental statistics for a partitioned table, the INCREMENTAL preference must be set to TRUE

 begin
dbms_stats.set_table_prefs('HR','incre_stats_tab','INCREMENTAL','TRUE');
END;

 SELECT DBMS_STATS.GET_PREFS('INCREMENTAL','HR','incre_stats_tab') FROM DUAL;

Let's gather stats now:
begin
dbms_stats.gather_table_stats('HR','incre_stats_tab');
END;

 SELECT TABLE_NAME,LAST_ANALYZED FROM USER_TABLES WHERE TABLE_NAME='INCRE_STATS_TAB';

 select partition_name,last_analyzed from user_tab_partitions where table_name='INCRE_STATS_TAB' order by partition_position;
PARTITION_NAME   LAST_ANALYZED
P1               14-NOV-12 11:54
P2               14-NOV-12 11:54
P3               14-NOV-12 11:54
P4               14-NOV-12 11:54

The following query shows the column statistics for the INCRE_STATS_TAB table:
select column_name,num_distinct,num_nulls from user_tab_col_statistics where table_name='INCRE_STATS_TAB';

Let's insert data for the year 2011 and gather statistics again:

insert into incre_stats_tab select level,to_date('1-JAN-2011','DD-MON-YYYY')+ level from dual connect by level < 100;

 begin
dbms_stats.gather_table_stats('HR','incre_stats_tab');
END;

Now if we look at the last_analyzed dates for the table and its partitions, we will see that only the global statistics and the statistics on partition p4 (where rows changed due to the insert) were refreshed:

 select partition_name,last_analyzed from user_tab_partitions where table_name='INCRE_STATS_TAB' order by partition_position;
PARTITION_NAME   LAST_ANALYZED
P1               14-NOV-12 11:54
P2               14-NOV-12 11:54
P3               14-NOV-12 11:54
P4               15-NOV-12 12:03

As we can see, only the partition that was affected by the DML was analyzed; the others were not even scanned. This is a great performance improvement for large tables.

Incremental statistics maintenance gathers statistics on any partition whose data has changed, and that change is then rolled up into the global-level statistics.

Useful Links:

https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics

http://www.oracle-base.com/articles/11g/statistics-collection-enhancements-11gr1.php




Thursday, October 25, 2012

Get Create Table script (or any DDL) using DBMS_METADATA

We can use the DBMS_METADATA package to get the DDL scripts for any object.
If we need to remove the storage clauses, the following can be used:

begin
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', false);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'TABLESPACE', true);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SEGMENT_ATTRIBUTES', false);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'PRETTY', false);
end;
/

select dbms_metadata.get_ddl('TABLE','TABLE_NAME','SCHEMA') FROM DUAL;

The scripts for all related indexes can be obtained using the query below:

select dbms_metadata.get_dependent_ddl('INDEX','TABLE_NAME','SCHEMA') from dual;
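
Similarly, to script out every table in the current schema, a short sketch looping over USER_TABLES (note that GET_DDL returns a CLOB, so very large DDL could exceed the DBMS_OUTPUT line limit):

begin
  for t in (select table_name from user_tables) loop
    -- implicit CLOB-to-VARCHAR2 conversion; fine for DDL under 32K
    dbms_output.put_line(dbms_metadata.get_ddl('TABLE', t.table_name));
  end loop;
end;
/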

Truly Generic Code

I came across this wonderful piece of code written by Steven Feuerstein on his website:

http://www.plsqlchallenge.com/pls/apex/f?p=10000:659:417200574319978:CLEAR_HA:NO::P659_QUIZ_ID,P659_COMP_EVENT_ID,P659_QUESTION_ID:3564,2426,&cs=37432F63779BD40D386050B1A90113B43

Dynamic SQL is a programming methodology for generating and running SQL statements at run time. It is useful when writing general-purpose and flexible programs like ad hoc query systems, when writing programs that must run database definition language (DDL) statements, or when you do not know at compilation time the full text of a SQL statement or the number or data types of its input and output variables.

Oracle PL/SQL offers very few opportunities for "reflection": obtaining at runtime information about data structures used by our programs.

CREATE OR REPLACE PROCEDURE gen_bulk_statements (
base_name_in IN VARCHAR2
, query_in IN VARCHAR2
)
IS
c_type_names DBMS_DESCRIBE.varchar2_table;

TYPE col_info_rt IS RECORD (
col_name all_tab_columns.column_name%TYPE
, col_type all_tab_columns.data_type%TYPE
);

TYPE col_info_aat IS TABLE OF col_info_rt
INDEX BY PLS_INTEGER;

l_columns col_info_aat;

PROCEDURE load_type_translators
IS
c_varchar2 CONSTANT PLS_INTEGER := 1;
c_number CONSTANT PLS_INTEGER := 2;
c_binary_integer CONSTANT PLS_INTEGER := 3;
c_long CONSTANT PLS_INTEGER := 8;
c_rowid CONSTANT PLS_INTEGER := 11;
c_date CONSTANT PLS_INTEGER := 12;
c_raw CONSTANT PLS_INTEGER := 23;
c_longraw CONSTANT PLS_INTEGER := 24;
c_opaque CONSTANT PLS_INTEGER := 58;
c_char CONSTANT PLS_INTEGER := 96;
c_refcursor CONSTANT PLS_INTEGER := 102;
c_urowid CONSTANT PLS_INTEGER := 104;
c_mlslabel CONSTANT PLS_INTEGER := 106;
c_clob CONSTANT PLS_INTEGER := 112;
c_blob CONSTANT PLS_INTEGER := 113;
c_bfile CONSTANT PLS_INTEGER := 114;
c_cfile CONSTANT PLS_INTEGER := 115;
c_object_type CONSTANT PLS_INTEGER := 121;
c_nested_table CONSTANT PLS_INTEGER := 122;
c_varray CONSTANT PLS_INTEGER := 123;
c_timestamp CONSTANT PLS_INTEGER := 180;
c_timestamp_tz CONSTANT PLS_INTEGER := 181;
c_interval_ym CONSTANT PLS_INTEGER := 182;
c_interval_ds CONSTANT PLS_INTEGER := 183;
c_timestamp_ltz CONSTANT PLS_INTEGER := 231;
c_record CONSTANT PLS_INTEGER := 250;
c_indexby_table CONSTANT PLS_INTEGER := 251;
c_boolean CONSTANT PLS_INTEGER := 252;
BEGIN
c_type_names ( c_varchar2 ) := 'VARCHAR2';
c_type_names ( c_number ) := 'NUMBER';
c_type_names ( c_binary_integer ) := 'BINARY_INTEGER';
c_type_names ( c_long ) := 'LONG';
c_type_names ( c_rowid ) := 'ROWID';
c_type_names ( c_date ) := 'DATE';
c_type_names ( c_raw ) := 'RAW';
c_type_names ( c_longraw ) := 'LONG RAW';
c_type_names ( c_char ) := 'CHAR';
c_type_names ( c_mlslabel ) := 'MLSLABEL';
c_type_names ( c_record ) := 'RECORD';
c_type_names ( c_indexby_table ) := 'INDEX-BY TABLE';
c_type_names ( c_boolean ) := 'BOOLEAN';
c_type_names ( c_object_type ) := 'OBJECT TYPE';
c_type_names ( c_nested_table ) := 'NESTED TABLE';
c_type_names ( c_varray ) := 'VARRAY';
c_type_names ( c_clob ) := 'CLOB';
c_type_names ( c_blob ) := 'BLOB';
c_type_names ( c_bfile ) := 'BFILE';
END load_type_translators;

FUNCTION columns_for_query ( query_in IN VARCHAR2 )
RETURN col_info_aat
IS
cur PLS_INTEGER := DBMS_SQL.open_cursor;
l_count PLS_INTEGER;
l_from_dbms_sql DBMS_SQL.desc_tab;
retval col_info_aat;
BEGIN
-- Parse the query and then get the columns.
DBMS_SQL.parse ( cur, query_in, DBMS_SQL.native );
DBMS_SQL.describe_columns ( cur, l_count, l_from_dbms_sql );
DBMS_SQL.close_cursor ( cur );

-- Now move the "raw" data to my list of names and types.
-- In particular, convert the integer type to the string description.
FOR colind IN 1 .. l_from_dbms_sql.COUNT
LOOP
retval ( colind ).col_name := l_from_dbms_sql ( colind ).col_name;
retval ( colind ).col_type :=
CASE
WHEN c_type_names ( l_from_dbms_sql ( colind ).col_type ) IN
( 'VARCHAR2', 'CHAR' )
THEN c_type_names ( l_from_dbms_sql ( colind ).col_type )
|| '(32767)'
ELSE c_type_names ( l_from_dbms_sql ( colind ).col_type )
END;
END LOOP;

RETURN retval;
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line ( DBMS_UTILITY.format_error_stack );
DBMS_SQL.close_cursor ( cur );
RAISE;
END columns_for_query;

PROCEDURE initialize ( columns_out OUT col_info_aat )
IS
BEGIN
load_type_translators;
columns_out := columns_for_query ( query_in );
END initialize;

PROCEDURE gen_declarations ( columns_in IN col_info_aat )
IS
BEGIN
DBMS_OUTPUT.put_line ( 'DECLARE TYPE ' || base_name_in
|| '_rt IS RECORD ('
);

FOR colind IN 1 .. columns_in.COUNT
LOOP
DBMS_OUTPUT.put_line ( CASE
WHEN colind = 1
THEN NULL
ELSE ','
END
|| columns_in ( colind ).col_name
|| ' '
|| columns_in ( colind ).col_type
);
END LOOP;

DBMS_OUTPUT.put_line ( ');' );
DBMS_OUTPUT.put_line ( 'TYPE '
|| base_name_in
|| '_ct IS TABLE OF '
|| base_name_in
|| '_rt INDEX BY PLS_INTEGER;'
);
END gen_declarations;

PROCEDURE gen_bulk_query_and_insert ( columns_in IN col_info_aat )
IS
BEGIN
-- First, with collections of records.
DBMS_OUTPUT.put_line ( 'l_dataset ' || base_name_in || '_ct;' );
DBMS_OUTPUT.put_line ( 'CURSOR '
|| base_name_in
|| '_cur IS '
|| query_in
|| ';'
);
DBMS_OUTPUT.put_line ( 'BEGIN' );
DBMS_OUTPUT.put_line ( 'OPEN ' || base_name_in || '_cur;' );
DBMS_OUTPUT.put_line ( 'FETCH '
|| base_name_in
|| '_cur BULK COLLECT INTO l_dataset;'
);
DBMS_OUTPUT.put_line ( 'END;' );
-- Now break out the individual collections, necessary if you are going to
-- use these in a FORALL statement.
DBMS_OUTPUT.put_line ( 'DECLARE' );

FOR colind IN 1 .. columns_in.COUNT
LOOP
DBMS_OUTPUT.put_line ( 'TYPE '
|| columns_in ( colind ).col_name
|| '_aat IS TABLE OF '
|| columns_in ( colind ).col_type
|| ' INDEX BY PLS_INTEGER;'
);
DBMS_OUTPUT.put_line ( 'l_'
|| columns_in ( colind ).col_name
|| ' '
|| columns_in ( colind ).col_name
|| '_aat;'
);
END LOOP;

DBMS_OUTPUT.put_line ( 'CURSOR '
|| base_name_in
|| '_cur IS '
|| query_in
|| ';'
);
DBMS_OUTPUT.put_line ( 'BEGIN' );
DBMS_OUTPUT.put_line ( 'OPEN ' || base_name_in || '_cur;' );
DBMS_OUTPUT.put_line ( 'FETCH ' || base_name_in
|| '_cur BULK COLLECT INTO '
);

FOR colind IN 1 .. columns_in.COUNT
LOOP
-- Display other attributes, as desired.
DBMS_OUTPUT.put_line ( CASE
WHEN colind = 1
THEN NULL
ELSE ','
END
|| 'l_'
|| columns_in ( colind ).col_name
);
END LOOP;

DBMS_OUTPUT.put_line ( ';' );
-- Now the insert statement.
DBMS_OUTPUT.put_line ( 'FORALL indx IN '
|| 'l_'
|| columns_in ( 1 ).col_name
|| '.FIRST .. '
|| 'l_'
|| columns_in ( 1 ).col_name
|| '.LAST'
);
DBMS_OUTPUT.put_line ( 'INSERT INTO ' || base_name_in || ' VALUES (' );

FOR colind IN 1 .. columns_in.COUNT
LOOP
-- Display other attributes, as desired.
DBMS_OUTPUT.put_line ( CASE
WHEN colind = 1
THEN NULL
ELSE ','
END
|| 'l_'
|| columns_in ( colind ).col_name
);
END LOOP;

DBMS_OUTPUT.put_line ( '); END;' );
END gen_bulk_query_and_insert;
BEGIN
initialize ( l_columns );

IF l_columns.COUNT > 0
THEN
gen_declarations ( l_columns );
gen_bulk_query_and_insert ( l_columns );
END IF;
END gen_bulk_statements;
/
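
A hypothetical invocation (with SERVEROUTPUT enabled, and assuming the standard EMP demo table), which prints the generated record/collection declarations and the bulk fetch/insert statements:

set serveroutput on
begin
  gen_bulk_statements('emp', 'select * from emp');
end;
/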

Wednesday, September 12, 2012

Rebuild Local Indexes-Query

We rebuild local indexes using the command:

ALTER INDEX IDX_NAME rebuild partition partition_name;

But if we have subpartitions, we need to rebuild the indexes by providing the subpartition name instead.
The block below can be used to dynamically rebuild either partitions or subpartitions, as appropriate.


declare
  p_table_name varchar2(30) := 'MY_TABLE'; -- hypothetical table name
begin
  for part_idxname in (
    SELECT PART.INDEX_NAME,
           NVL(SUBPART.SUBPARTITION_NAME, PART.PARTITION_NAME) PARTITION_NAME,
           NVL2(SUBPART.SUBPARTITION_NAME, 'SUBPARTITION', 'PARTITION') PTYPE
    FROM user_ind_partitions PART, user_ind_subpartitions SUBPART
    WHERE PART.INDEX_NAME = SUBPART.INDEX_NAME(+)
      AND PART.PARTITION_NAME = SUBPART.PARTITION_NAME(+) -- match each subpartition to its own partition
      AND PART.INDEX_NAME IN (SELECT index_name
                              FROM USER_INDEXES
                              WHERE table_name = p_table_name
                                AND PARTITIONED = 'YES'))
  loop
    execute immediate 'ALTER INDEX ' || part_idxname.INDEX_NAME || ' REBUILD '
                   || part_idxname.PTYPE || ' ' || part_idxname.PARTITION_NAME || ' nologging';
  end loop;
end;
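
Afterwards, a quick check (a sketch) confirms that no index partitions or subpartitions are left unusable:

SELECT index_name, partition_name, status
FROM user_ind_partitions
WHERE status = 'UNUSABLE'
UNION ALL
SELECT index_name, subpartition_name, status
FROM user_ind_subpartitions
WHERE status = 'UNUSABLE';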

Monday, September 10, 2012

DB Compression: Frequently Asked Questions

1. What is DB Compression?
2. What are different compression types?
3. How does Compression work?
4. What happens when we compress on a partition which is already compressed?
5. If we compress a table, Will Index get compressed too?
6. Is it compulsory to compress Indexes too?
7. Is it compulsory to rebuild indexes after Compression?
8. Is it compulsory to gather table stats after compression?
9. Why does the table size sometimes not reduce much even after compression?
10. What is post load compression?
11. What happens if process fails in middle of Compression?
12. Can we decompress the data, if so How?
13. When compression is happening, is there any downtime, Will the table be accessible?
14. Is it ok if there are Global Indexes?
15. How LOB Columns are handled?
16. How to find out whether a table/Partition is compressed or not?
17. What are performance advantages/disadvantages due to compression?
18. To use Compression option do we need to purchase any extra license?
19. Would it be possible to add or remove the columns from a compressed table?


1. What is DB Compression?
Database compression is a disk-space-saving feature of Oracle that reduces the size of relational tables. In addition to saving storage space, compression can result in increased I/O performance and reduced memory use in the buffer cache.
2. What are different compression types?
The compression clause can be specified at the tablespace, table or partition level with the following options:

NOCOMPRESS - The table or partition is not compressed. This is the default action when no compression clause is specified.
COMPRESS - This option is considered suitable for data warehouse systems. Compression is enabled on the table or partition during direct-path inserts only.
COMPRESS FOR DIRECT_LOAD OPERATIONS - This option has the same effect as the simple COMPRESS keyword.
COMPRESS FOR ALL OPERATIONS - This option is considered suitable for OLTP systems. As the name implies, this option enables compression for all operations, including regular DML statements. This option requires the COMPATIBLE initialization parameter to be set to 11.1.0 or higher. In 11gR2 this option has been renamed to COMPRESS FOR OLTP and the original name has been deprecated. (See the example below.)
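
For example, enabling OLTP-style compression at table creation (a sketch; the table name is hypothetical):

CREATE TABLE orders_compressed (
  order_id   NUMBER,
  order_date DATE
) COMPRESS FOR ALL OPERATIONS;
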
3. How does Compression work?
Compression Algorithm eliminates duplicate values within a database block, across rows and multiple columns, by creating symbol table metadata.
Each duplicate value is replaced by a short reference to the symbol table.
Compressed structure is self-contained in the block, no additional IO when accessing compressed data.

4. What happens when we compress on a partition which is already compressed?
The records which are already compressed are not changed.
Newly inserted records that are not yet compressed will get compressed.

5. If we compress a table, Will Index get compressed too?
No, Indexes will not get compressed; it has to be compressed separately.

6. Is it compulsory to compress Indexes too?
It is not compulsory to Compress Indexes when we compress a table.
Generally indexes are created on columns which have unique values, hence compressing the indexes will save little or no disk space. However, we do have the option of compressing indexes separately.

7. Is it compulsory to rebuild indexes after Compression?
YES, it is required to rebuild indexes after compression.
All the indexes will be in an unusable state after the table is compressed.
The "MOVE" command might relocate table rows; hence it is compulsory to rebuild indexes, as in the example below.
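
A minimal sketch (the table, partition, and index names are hypothetical):

ALTER TABLE t1 MOVE PARTITION p1 COMPRESS;   -- compress existing data; marks local index partitions unusable
ALTER INDEX t1_idx REBUILD PARTITION p1;     -- rebuild the affected index partition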

8. Is it compulsory to Gather table stats after compression?
Table statistics may need to be gathered again after post-load style compression. When many partitions are compressed at one time, or when non-partitioned tables are compressed for the first time, table statistics may change significantly because the number of rows stored per database block changes, which can cause the Oracle optimizer to bypass indexes.
9. Why does the table size sometimes not reduce much even after compression?
Compression works by eliminating duplicate values in a database block. Compressed data is stored in the database block itself.
Duplicate values in all the rows and columns in a block are stored once at the beginning of the block, in what is called a symbol table for that block. All occurrences of such values are replaced with a short reference to the symbol table.
So if the table doesn't contain any duplicate values, compression doesn't reduce space.
Even if a table contains duplicate values, it is important that they fall within the same block, or else there will not be much space reduction.
10. What is post-load compression?
Post-load compression is a technique where data is not compressed on the fly while being inserted.
Instead, we manually execute commands to compress at regular intervals. The command below compresses the existing data in a partition.
ALTER TABLE t1 MOVE PARTITION p1 COMPRESS;

11. What happens if process fails in middle of Compression?
There is a chance of data blocks getting corrupted.

12. Can we decompress the data, if so How?
Yes, we can decompress the compressed data using the command below.
ALTER TABLE tablename MOVE NOCOMPRESS NOLOGGING PARALLEL 4;
Again, it is compulsory to rebuild the indexes after decompressing the data.

13. When compression is happening, is there any downtime, Will the table be accessible?
While compressing a table or partition, concurrent writes must be avoided. Concurrent reads on indexed tables may fail or bypass the index if we are querying the particular partition, resulting in slower performance.
Indexes become UNUSABLE and must be rebuilt after post-load compression.
Compression utilities must be scheduled carefully to run during off-peak times.

14. Is it ok if there are Global Indexes?
A partitioned table which has GLOBAL indexes can be compressed.
15. How are LOB columns handled?
Compression of LOB columns is possible only in Oracle 11g and above.
LOB columns must be declared as SecureFiles LOBs to use compression.

16. How to find out whether a table/Partition is compressed or not?
In the *_TABLES data dictionary views, compressed tables have ENABLED in the COMPRESSION column. For partitioned tables, this column is null, and the COMPRESSION column of the *_TAB_PARTITIONS views indicates the partitions that are compressed. In addition, the COMPRESS_FOR column indicates the compression method in use for the table or partition.
SELECT table_name, partition_name, compression, compress_for FROM user_tab_partitions WHERE table_name='TABLE_NAME';
17. What are performance advantages/disadvantages due to compression?
Advantages:
DISK STORAGE COST SAVINGS:
Disk storage cost savings are the most tangible benefit of compression for large databases (savings propagate to DR/Backup systems as well).
Expect to reduce storage space consumption by 50%-80%.
DISK I/O REDUCTION:
Reduced I/O, can improve query performance for I/O bound systems.
MORE EFFICIENT MEMORY UTILIZATION:
Compression can result in more efficient memory utilization as more data fits in the buffer cache.
NETWORK TRAFFIC REDUCTION:
Data Guard Redo Transport Services used to transfer redo can be configured to transmit in a compressed format to reduce network bandwidth consumption.
Systems with NAS based disk storage will benefit from reduced network traffic.
SQL QUERY PERFORMANCE is SAME or BETTER:
SQL Query performance does not degrade when tables are compressed.
Faster full scan/range scan operations.
Limitations:
While compressing a table or partition, concurrent write must be avoided. Concurrent read on indexed tables may fail or bypass index resulting in slower performance.
Indexes become UNUSABLE and must be rebuilt after post-load compression.
May not be suitable for tables which are not partitioned and are loaded via regular INSERT. Incrementally recompressing the table will require the entire table segment to be moved.

18. To use Compression option do we need to purchase any extra license?
Oracle Advanced Compression --> an extra-cost option on top of Enterprise Edition.
Direct-Load Table Compression --> no extra cost, but requires Enterprise Edition.
19. Would it be possible to add or remove the columns from a compressed table?
It is possible to ADD a column to an already compressed table.
But it is not possible to DROP a column directly.
As an alternative we can mark the column as unused:
alter table t set unused (column_name);

Monday, August 6, 2012

Query from a Specific partition

To query from a specific partition, we used to provide the partition name,
like below:

select * from Table_1 partition (Q1_2012);

But it is hard to remember partition names, and we also cannot easily use them inside procedures/functions, since it is difficult to construct partition names at run time.

--> To facilitate access to a specific partition, Oracle Database 11g offers a new syntax for partition-aware SQL:

select * from Table_1 partition for (to_date('02-Feb-2012','dd-mon-yyyy'));

This method allows us to query from a specific partition without explicitly stating the partition name.


--> This can be useful while loading data too. One can dynamically pass the date and Oracle finds the partition it belongs to:

declare
  vstart_id varchar2(6) := '201202'; -- period in YYYYMM format (hypothetical value)
begin
  execute immediate
    'insert /*+ APPEND */ into SALES partition for (to_date(''' || vstart_id || ''', ''YYYYMM''))
     select PROD_ID, CUST_ID, TIME_ID, CHANNEL_ID, PROMO_ID, QUANTITY_SOLD, AMOUNT_SOLD
     from SRC_SALES
     where time_id between to_date(:v, ''YYYYMM'') and last_day(to_date(:v, ''YYYYMM''))'
    using vstart_id, vstart_id;
end;


--> While Dropping Partitions


alter table SALES drop partition for (to_date('02-feb-2012','DD-MON-YYYY'));


Thursday, July 19, 2012

Analytic Functions

If you have to learn analytic functions in Oracle, then the link below is the best place to start.
I learnt them from it; the simple explanations and thorough treatment of the concepts are what make this article come up first when googled.


http://orafaq.com/node/55
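
As a quick taste of what analytic functions do, the query below (a sketch against the standard EMP demo table) ranks employees by salary within each department without collapsing the rows:

select ename, deptno, sal,
       rank() over (partition by deptno order by sal desc) as sal_rank
from emp;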

Thursday, June 7, 2012

Update JOIN

It's a common requirement that we have to update a table using values from another table,
something like:

update t set value =
(
select value from s where t.key = s.key_to_t and s.fld_name='A'
)

Using the above technique, we must remember that all the rows in T which did not have a match in S will be updated to NULL, which is not desired.
Hence we need to use the technique below:

update t set value =
(
select value from s where t.key = s.key_to_t and s.fld_name='A'
)
where exists (select 1 from s s1 where t.key = s1.key_to_t and s1.fld_name='A');

By this we make sure that only the rows which match the criteria are updated and the rest are not touched.

But if the table is huge, the WHERE EXISTS clause may slow down the query.
We can also use another method:


create table t ( key int, value int );

create table s ( key_to_t int, value int, fld_name varchar2(1), unique(key_to_t,fld_name) );

insert into t values ( 1, null );
insert into t values ( 2, 111 );

insert into s values ( 1, 100, 'A' );


insert into s values ( 1, 100, 'B' );

update (select t.value,s.value new_value
from t, s
where t.key = s.key_to_t
and s.fld_name = 'A')
set value = new_value;

For the above technique to work, S must have a primary key or unique constraint on the join columns (here key_to_t and fld_name), so that Oracle can guarantee each row of T joins to at most one row of S.

or

merge into t
using (select * from s where fld_name = 'A') s
on (t.key = s.key_to_t)
when matched then update set value = s.value;



Sunday, May 20, 2012

Search an element in a collection ("member of")

If we have to search for an element inside a collection (nested table), we often tend to use a FOR loop and compare each element in the array.
Instead we can make use of the built-in nested table operator "member of".
This checks whether the element exists inside the collection and returns TRUE or FALSE.

Below is a small example:

declare
  type typjob is table of varchar2(100); -- nested table type
  v_typjob   typjob;
  v_srch_str varchar2(100) := 'CLERK';
begin
  select job bulk collect into v_typjob from emp;
  if (v_srch_str member of v_typjob) then
    dbms_output.put_line(v_srch_str || ' -- exists inside the collection');
  else
    dbms_output.put_line(v_srch_str || ' -- does not exist inside the collection');
  end if;
end;

This will be pretty useful if the collection size is large.

Monday, April 16, 2012

Grants on an Object

If you want to know which grants an object has, you can make use of the procedure below. This mostly helps to get grants from a PRODUCTION environment, where developers have restricted access and we do not have direct access to the main schema.
But before that we need to get access to the DBMS_METADATA package from the DBAs.

SELECT DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT', 'EMP','SCOTT') from DUAL;

---------
GRANT SELECT ON "SCOTT"."EMP" TO "SCOTT"

Sunday, March 18, 2012

Rebuild local Indexes in Parallel - DBMS_PCLXUTIL

If we are creating local indexes on large tables, or rebuilding local indexes, we can consider using the DBMS_PCLXUTIL package.
It provides intra-partition parallelism for creating partition-wise local indexes.
That means you can parallelize the index build within each partition, in addition to across partitions.
DBMS_PCLXUTIL simply automates the building of the index using the job queues.


CREATE TABLE PCLXUTIL_TAB(ID NUMBER, DT_COB DATE) PARTITION BY RANGE (DT_COB)
(
PARTITION DT_2008 VALUES LESS THAN (TO_DATE('01/01/2009','MM/DD/YYYY')),
PARTITION DT_2009 VALUES LESS THAN (TO_DATE('01/01/2010','MM/DD/YYYY')),
PARTITION DT_2010 VALUES LESS THAN (TO_DATE('01/01/2011','MM/DD/YYYY')),
PARTITION DT_2011 VALUES LESS THAN (TO_DATE('01/01/2012','MM/DD/YYYY')),
PARTITION DT_2012 VALUES LESS THAN (TO_DATE('01/01/2013','MM/DD/YYYY')),
PARTITION DT_2013 VALUES LESS THAN (TO_DATE('01/01/2014','MM/DD/YYYY'))
);

INSERT INTO PCLXUTIL_TAB SELECT LEVEL , SYSDATE-LEVEL FROM DUAL CONNECT BY LEVEL <= 1000;
--SELECT * FROM PCLXUTIL_TAB;

CREATE INDEX PCLXUTIL_IDX ON PCLXUTIL_TAB(DT_COB) LOCAL UNUSABLE;
/* The above causes the dictionary entries to be created without "building" the index itself, the time consuming part of creating an index */

BEGIN
DBMS_PCLXUTIL.BUILD_PART_INDEX(4,4,'PCLXUTIL_TAB','PCLXUTIL_IDX',TRUE);
END;
/
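
Once the submitted jobs finish, we can verify that every index partition was built; a quick check (for reference, the BUILD_PART_INDEX arguments above are jobs_per_batch, procs_per_job, table name, index name, and force_opt):

SELECT partition_name, status
FROM user_ind_partitions
WHERE index_name = 'PCLXUTIL_IDX';
-- all partitions should now show STATUS = 'USABLE'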

Sunday, February 19, 2012

DBMS_PARALLEL_EXECUTE

One of the new features of Oracle 11g is the DBMS_PARALLEL_EXECUTE package.
It allows us to run a large DML operation in parallel chunks.
If we have to update (or insert) millions of rows, the update statement isn't hard to write, but we need to think about rollback segments and, more importantly, the time it takes to complete.
DBMS_PARALLEL_EXECUTE breaks up a large table based on some criteria and performs the DML operation in parallel on the chunks of data, which speeds up the operation.
One can break the table up based on ROWIDs, a NUMBER column (such as the primary key), arbitrary values, a SQL statement, etc.; both ROWID and number-column chunking are shown in the example below. An implicit commit happens as each chunk completes.

To use DBMS_PARALLEL_EXECUTE to run tasks in parallel, your schema will need the CREATE JOB system privilege.


http://www.oracle.com/technetwork/issue-archive/2010/10-may/o30plsql-086044.html


http://www.oracle-base.com/articles/11g/dbms_parallel_execute_11gR2.php

Example:

CREATE TABLE test_tab (
id NUMBER,
description VARCHAR2(50),
num_col NUMBER,
CONSTRAINT test_tab_pk PRIMARY KEY (id)
);

insert /*+append*/ into test_tab select level,'Description for '|| level,case when mod(level,5)=0 then 10 when mod(level,3) = 0 then 20 else 30 end
from dual connect by level <=500000;

begin
DBMS_PARALLEL_EXECUTE.CREATE_TASK(TASK_NAME => 'TEST_TASK');
END;

SELECT *
FROM user_parallel_execute_tasks;

--CREATE CHUNKS BY ROWID
BEGIN
dbms_parallel_execute.create_chunks_by_rowid(task_name => 'TEST_TASK',table_owner => 'SCOTT',table_name => 'TEST_TAB',by_row => true,chunk_size => 10000);
end;

SELECT chunk_id, status, start_rowid, end_rowid
FROM user_parallel_execute_chunks
WHERE task_name = 'TEST_TASK'
ORDER BY chunk_id;
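
Alternatively, we could have chunked on the numeric primary key instead of ROWID (a sketch; the run-task statement would then need a predicate like "id BETWEEN :start_id AND :end_id"):

BEGIN
  dbms_parallel_execute.create_chunks_by_number_col(
    task_name    => 'TEST_TASK',
    table_owner  => 'SCOTT',
    table_name   => 'TEST_TAB',
    table_column => 'ID',      -- the numeric primary key column
    chunk_size   => 10000);
END;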

--RUN TASK

DECLARE
l_sql_stmt VARCHAR2(32767);
BEGIN
l_sql_stmt := 'UPDATE /*+ ROWID(t) */ test_tab t
SET t.num_col = t.num_col + 10
WHERE rowid BETWEEN :start_id AND :end_id';

DBMS_PARALLEL_EXECUTE.run_task(task_name => 'TEST_TASK',
sql_stmt => l_sql_stmt,
language_flag => DBMS_SQL.NATIVE,
PARALLEL_LEVEL => 10);
END;
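
Finally, once RUN_TASK returns we can check the overall task status, retry any failed chunks, and clean up the task metadata; a minimal sketch:

DECLARE
  l_status NUMBER;
BEGIN
  l_status := DBMS_PARALLEL_EXECUTE.task_status('TEST_TASK');
  IF l_status = DBMS_PARALLEL_EXECUTE.finished THEN
    DBMS_PARALLEL_EXECUTE.drop_task('TEST_TASK');   -- removes the task and chunk metadata
  ELSE
    DBMS_PARALLEL_EXECUTE.resume_task('TEST_TASK'); -- reprocesses any failed chunks
  END IF;
END;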