DavidYahalom.com is an IT knowledge base by David Yahalom, dedicated to the world of databases and RDBMS systems. Here you'll find articles, views, news, tips and in-depth analysis about Oracle, DB2 LUW, SQL Server and MySQL. I hope you'll enjoy your stay.

23rd
AUG

Perform a DataPump schema copy directly over a Database Link

Posted by David Yahalom under Oracle

For today - a cool little tip about performing a DataPump export/import directly over a DBLink.

This can help us “move” or “copy” schemas between databases without the need for creating the DataPump export file in a directory and copying it to the remote server. It also allows us to perform the export/import with a single command!

Yes, there is no need for intermediate dump files and no need to “copy” dump files between servers!

Data is exported directly across an existing database link and imported directly into the target database.

On the target database:

1. Let’s create a public database link to the SOURCE server connecting to the schema we want to import (scott in our case):

SQL> create public database link old_scott connect to scott identified by tiger using 'source_server';
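
Before moving on, it's worth making sure the link actually resolves and logs in; a simple query over the link (using the link name created above) should return the single row from DUAL on the source database:

SQL> select * from dual@old_scott;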

2. Create a new user (the destination schema to which we are going to import the data from the source server):

SQL> create user new_scott identified by tiger;

SQL> grant connect, resource to new_scott;

3. Grant the new_scott user the ability to read and write in the DataPump directory (dmpdir in our case) on the destination server:

With a network import no intermediate dump file is created on disk; this directory on the target server is simply where DataPump writes the log file for the operation.
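
If the dmpdir directory object does not already exist on the target database, create it first as a privileged user (the OS path below is just an example - point it at any directory the Oracle software owner can write to):

SQL> create directory dmpdir as '/u01/app/oracle/dpdump';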

SQL> grant read, write on directory dmpdir to new_scott;

4. Run the IMPORT command with impdp directly on the TARGET server specifying the NETWORK_LINK parameter “pointing” to the source database.

All work is performed on the target system. The only reference to the source system is via the database link.

impdp new_scott/tiger DIRECTORY=dmpdir NETWORK_LINK=old_scott remap_schema=scott:new_scott
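
As a side note, the same database link works in the other direction too: expdp also accepts a NETWORK_LINK parameter, which pulls the remote schema across the link and writes an ordinary dump file on the local server instead of importing it straight away. A sketch, reusing the link, directory and user names from this example (the dump and log file names are illustrative):

expdp new_scott/tiger DIRECTORY=dmpdir NETWORK_LINK=old_scott SCHEMAS=scott DUMPFILE=scott.dmp LOGFILE=scott_exp.log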

A quick and very easy way to copy schemas across two different database servers!

19th
AUG

Quick tutorial on Fine Grained Auditing

Posted by David Yahalom under Security, Oracle

Complex auditing in Oracle is much simpler to implement than most DBAs imagine. While standard Oracle auditing (AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE…) is usually a well understood and utilised feature, many DBAs forget that all the way back in the 9i days, Oracle introduced a much more granular method of auditing table access via the Fine Grained Auditing feature.

Fine Grained Auditing (FGA) can be understood as policy-based auditing. As opposed to the standard auditing functionality, FGA lets you specify the conditions under which an audit record is generated, which helps keep the "audit trail" smaller and thus easier to manage when the time comes to track activity on a specific database table:

  • FGA policies are programmatically bound to the object (table, view) by using the DBMS_FGA package.
  • It allows you to create any desired condition, for example: Audit an event only when the following conditions are true:
  1. A table is accessed between 6 p.m. and 6 a.m. and on Saturday and Sunday.
  2. An IP address from outside the corporate network is used.
  3. A specific column has been selected or updated.
  4. A specific value for this column has been used.

It is actually extremely simple to configure. Quick example:

1. Enable complete auditing for INSERT, UPDATE, DELETE on the HR.EMPLOYEES table.


BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema     => 'HR',
    object_name       => 'EMPLOYEES',
    policy_name       => 'AUDIT_EMP_TABLE',
    enable            => TRUE,
    statement_types   => 'INSERT, UPDATE, DELETE',
    audit_column_opts => DBMS_FGA.ALL_COLUMNS,
    audit_condition   => NULL,
    audit_trail       => DBMS_FGA.DB_EXTENDED);
END;
/

Note that with DBMS_FGA.ADD_POLICY we have configured a new FGA policy with the following parameters:

  • object_schema = HR
  • object_name = EMPLOYEES (so we audit the HR.EMPLOYEES table).
  • policy_name = name of the policy we want to create, for example “AUDIT_EMP_TABLE”
  • enable = TRUE (enable the newly created policy)
  • statement_types = the types of commands to audit
  • audit_column_opts = whether auditing fires when any of the audited columns are referenced or only when all of them are (DBMS_FGA.ANY_COLUMNS vs. DBMS_FGA.ALL_COLUMNS)
  • audit_trail = where the audit records will be written. For example, here we configure the audit trail to be kept inside the database (the sys.fga_log$ table), which is what most DBAs configure.
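
For completeness, an FGA policy can also be restricted with a non-NULL audit_condition. The policy below is not part of the walkthrough that follows - the policy name, column and condition are made up for illustration - but it shows the idea: only SELECTs and UPDATEs that touch SALARY for rows in department 10 would be audited.

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'AUDIT_EMP_SALARY',
    audit_condition => 'DEPARTMENT_ID = 10',
    audit_column    => 'SALARY',
    statement_types => 'SELECT, UPDATE',
    enable          => TRUE,
    audit_trail     => DBMS_FGA.DB_EXTENDED);
END;
/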

2. After we have created the policy, we can verify it is indeed configured:

SQL> SELECT POLICY_NAME FROM DBA_AUDIT_POLICIES;

POLICY_NAME
------------------------------
AUDIT_EMP_TABLE

3. To test our new policy, let's perform a quick update on the employees table, which we will later check to see whether it was audited:

SQL> update employees set salary=4500 where employee_id =190;

1 row updated.

SQL> commit;

Commit complete.

SQL>

4. To view the "audit trail", let's select from the sys.fga_log$ table (the internal FGA audit trail, also exposed through the DBA_FGA_AUDIT_TRAIL view, for which we can create a synonym for easier access). It contains many columns; here we select the object, the actual SQL command and the timestamp:

SQL> SELECT substr(obj$name,1,25) as OBJECT_NAME,
            substr(LSQLTEXT,1,50) || '...' as SQLTEXT,
            ntimestamp# as timestamp
     FROM SYS.FGA_LOG$;

EMPLOYEES
update employees set salary=4500 where employee_id...
18-DEC-11 03.36.35.837000 PM
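
When the auditing exercise is over, the policy can be disabled or removed entirely with the same package (a quick sketch using the policy created above):

BEGIN
  DBMS_FGA.DISABLE_POLICY('HR', 'EMPLOYEES', 'AUDIT_EMP_TABLE');
  -- or drop it completely:
  DBMS_FGA.DROP_POLICY('HR', 'EMPLOYEES', 'AUDIT_EMP_TABLE');
END;
/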

Easy and very straightforward.

13th
JAN

Understanding database buffer cache usage

Posted by admin under Tuning, Oracle

As you know, the buffer cache is an "expensive" database resource. While the amount of RAM available in servers is getting larger with every generation, so are the requirements for real-time transactions, caching more data, and so on.

So, in terms of caching data, RAM could still be a bottleneck for your database.  We all want to avoid physical I/O. We need to make sure appropriate database segments (indexes, tables) are cached in memory.

We can set the Oracle automatic SGA tuning parameters  (or do manual memory sizing), but how do we know which objects are using the buffer cache? Which objects are getting cached? Simple!

To help us better understand which objects are cached in the database buffer cache, we can use the following query:

SELECT owner, object_name, object_type type, ROUND((COUNT(*)*8)/1024) mb_size
FROM   v$bh
JOIN   all_objects ON object_id = objd
GROUP  BY owner, object_name, object_type
ORDER  BY mb_size DESC;

The MB_SIZE column is the number of blocks resident in the buffer cache multiplied by the block size in KB (8K here - change it according to your environment) and divided by 1024 to get the output in megabytes.
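
If you would rather not hardcode the block size, a small variation (a sketch, assuming your buffer cache uses the default db_block_size) reads it from v$parameter instead:

SELECT owner, object_name, object_type type,
       ROUND(COUNT(*) * p.block_size / 1024 / 1024) mb_size
FROM   v$bh
JOIN   all_objects ON object_id = objd
CROSS  JOIN (SELECT TO_NUMBER(value) block_size
             FROM   v$parameter
             WHERE  name = 'db_block_size') p
GROUP  BY owner, object_name, object_type, p.block_size
ORDER  BY mb_size DESC;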

Update! Here is an even better query that displays results from the keep and recycle caches as well as the default cache. Remember to change the *8 multiplier at the top of the query to the block size of your database.

SELECT object_name AS partition_name, ROUND(SUM(blocks)*8/1024) AS mbytes
FROM
(
  SELECT DECODE(pd.bp_id, 1,'KEEP', 2,'RECYCLE', 3,'DEFAULT',
                4,'2K SUBCACHE', 5,'4K SUBCACHE', 6,'8K SUBCACHE',
                7,'16K SUBCACHE', 8,'32K SUBCACHE', 'UNKNOWN') subcache,
         bh.object_name, bh.subname, bh.blocks
  FROM   x$kcbwds ds,
         x$kcbwbpd pd,
         (SELECT /*+ use_hash(x) */ set_ds,
                 o.name object_name, o.subname, COUNT(*) blocks
          FROM   obj$ o, x$bh x
          WHERE  o.dataobj# = x.obj
          AND    x.state != 0
          AND    o.owner# != 0
          GROUP  BY set_ds, o.name, o.subname) bh
  WHERE  ds.set_id >= pd.bp_lo_sid
  AND    ds.set_id <= pd.bp_hi_sid
  AND    pd.bp_size != 0
  AND    ds.addr = bh.set_ds
)
GROUP BY object_name
ORDER BY mbytes DESC;

4th
OCT

Expdp fails with ORA-01427

Posted by David Yahalom under Bug, Oracle

Well, here's an interesting tidbit I noticed today when using expdp. While doing a very simple export of a single schema on an 11.2.0.1 database I got the following error message:

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS []
ORA-01427: single-row subquery returns more than one row

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 8159

----- PL/SQL Call Stack -----
object      line   object
handle      number name
0x7cf93e860 19028  package body SYS.KUPW$WORKER
0x7cf93e860 8191   package body SYS.KUPW$WORKER
0x7cf93e860 12728  package body SYS.KUPW$WORKER
0x7cf93e860 2425   package body SYS.KUPW$WORKER
0x7cf93e860 8846   package body SYS.KUPW$WORKER
0x7cf93e860 1651   package body SYS.KUPW$WORKER
0x7ef8a4448 2      anonymous block

Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS []
ORA-01427: single-row subquery returns more than one row

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 8159

This was generated by a very regular export command:

 expdp system/<PASSWORD> DIRECTORY=orabackup_nfs DUMPFILE=xxxx.DMP LOGFILE=xxxx.LOG SCHEMAS=xxxx

As it turns out, there’s a published Oracle bug on this which affects Oracle 11.2.0.1 - 11.2.0.2:

DataPump Export (expdp) Returns ORA-1427 When Partitioned IOT With Same Name Exists in More Than One Schema [ID 1064840.1]

The cause is a partitioned IOT with the same name existing in two different schemas. There's an easy way to find these duplicate partitioned IOTs by running the following query as SYSDBA (change the schema name to the schema you are trying to export):

SELECT a.owner || '.' || a.table_name table1,
       b.owner || '.' || b.table_name table2
FROM   dba_tables a,
       dba_tables b
WHERE  a.owner      = 'TEST1'
AND    b.owner     != 'TEST1'
AND    a.iot_type   = 'IOT'
AND    b.iot_type   = a.iot_type
AND    b.table_name = a.table_name
AND    EXISTS (SELECT 1
               FROM   dba_part_tables
               WHERE  owner      = a.owner
               AND    table_name = a.table_name);

The solution provided is to either drop the IOT from one of the schemas (if possible, of course), install the fix for bug 9214753, or replace the KU$_IOTPART_DATA_VIEW view with the new version provided in the Metalink note.

28th
SEP

How to trace an Oracle session from another session

Posted by David Yahalom under Oracle

DBMS_MONITOR is a package which allows tracing of session activity.
Using DBMS_MONITOR you can start a trace of any session in the database – from whatever session you are connected to. It doesn’t even have to be the session you want to trace.

So, for example, if you have session A running and you are connected from session B, you can trace session A to diagnose its activity from session B using the DBMS_MONITOR package.

DBMS_MONITOR instructs the database server process to generate the required trace files. These trace files contain information about queries and query stats, including wait events and even bind variable information.

The syntax is:

DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id, serial_num, waits, binds);

For example:

exec DBMS_MONITOR.SESSION_TRACE_ENABLE (40,105,true,true);

This will start a trace for a specific session identified with sid value of 40 and serial# value of 105 and will generate trace files which include wait events and bind variable information.

Stopping the trace once the information is collected is simple and can be done by running:

exec DBMS_MONITOR.SESSION_TRACE_DISABLE(40,105);
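
On 11g and later, the trace file belonging to the traced session can be located straight from the dictionary, since v$process exposes a TRACEFILE column (sid 40 is the session from the example above):

SELECT p.tracefile
FROM   v$session s, v$process p
WHERE  s.paddr = p.addr
AND    s.sid   = 40;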

Following the creation of the raw trace files, we can use the tkprof utility to format the trace file output. Tracing can generate very large trace files if left enabled or if the traced session is doing a lot of activity, which can fill the user dump directory pretty fast.

 

28th

Increasing DML speed and throughput

Posted by David Yahalom under Oracle

1. Parallelism of the insert operation - invoke parallel DML (using the “PARALLEL” hint) to perform concurrent inserts on the same table.

Be advised that in order to avoid contention on the segment header, multiple freelists are required, or use ASSM, which manages free space with bitmaps instead of freelists.

In addition to INSERT, the PARALLEL hint also applies to - SELECT, MERGE, UPDATE, and DELETE statements.
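
A minimal sketch of point 1 (the table names and the degree of 4 are illustrative); note that parallel DML must be enabled at the session level before the hint is honoured:

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ PARALLEL(sales_hist, 4) */ INTO sales_hist
SELECT * FROM sales_staging;

COMMIT;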

2. Use the “APPEND” hint - causes the optimizer to use direct-path insert.

  • Conventional INSERT is the default in serial mode. In serial mode, direct path can be used only if you include the APPEND hint.
  • Direct-path INSERT is the default in parallel mode. In parallel mode, conventional insert can be used only if you specify the NOAPPEND hint.


In direct-path INSERT, data is appended to the end of the table, rather than using existing space currently allocated to the table. As a result, direct-path INSERT can be considerably faster than conventional INSERT. Basically this tells Oracle to “grab” free blocks for the inserted data ABOVE the table HWM.


A major benefit of direct-path INSERT is that, combined with NOLOGGING, you can load data with minimal redo and undo generation, which improves insert performance significantly.
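
A sketch of a direct-path, minimally logged load (table names are again illustrative; NOLOGGING only skips redo for the data blocks, and is ignored if the database is in FORCE LOGGING mode):

ALTER TABLE sales_hist NOLOGGING;

INSERT /*+ APPEND */ INTO sales_hist
SELECT * FROM sales_staging;

-- rows loaded via direct path cannot be read by this session until the transaction commits
COMMIT;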


3. Use a larger block size - by using larger block sizes (such as 16K or 32K) you can reduce the amount of I/O, as more rows fit into a single block before a "full block" condition is reached and the block has to be unlinked from the freelist (an operation that, under many concurrent inserts, can cause contention and slow the database).
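
For point 3, a buffer cache for the non-default block size must be configured before such a tablespace can be created - a sketch, with made-up sizes and file name:

ALTER SYSTEM SET db_16k_cache_size = 256M;

CREATE TABLESPACE data_16k
  DATAFILE '/u01/oradata/orcl/data_16k01.dbf' SIZE 1G
  BLOCKSIZE 16K;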


4. Disable indexes - it is considerably faster to rebuild indexes once after a bulk data load than to have the indexes maintained after each insert statement.
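
A sketch for point 4 (the index name is illustrative; this applies to non-unique indexes - indexes backing unique or primary key constraints need more care):

ALTER INDEX sales_hist_ix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... run the bulk load here ...

ALTER INDEX sales_hist_ix REBUILD NOLOGGING;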


5. Use reverse key indexes - these can help reduce insert contention (hot blocks) on sequence-generated primary keys, as the index entries will be spread across multiple leaf blocks.
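
And for point 5, the index is simply created (or rebuilt) with the REVERSE keyword - names are illustrative:

CREATE INDEX orders_id_ix ON orders (order_id) REVERSE;
-- or, for an existing index:
ALTER INDEX orders_id_ix REBUILD REVERSE;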

28th

Temporary tablespace usage in Oracle

Posted by David Yahalom under Oracle

Temp is used for sorting, which is performed automatically in the background for various operations such as creating an index or using ORDER BY or GROUP BY. In addition, hash joins use temp space for the hash tables created by the process.

The session will begin by sorting data in memory, and if the sort exceeds a certain threshold (note - not necessarily the total amount of memory available), Oracle will break the operation into smaller "chunks" and write partial results to the temporary tablespace. This is what results in temp space usage.

The behavior described above is controlled by the workarea_size_policy parameter. When set to AUTO, the pga_aggregate_target parameter will instruct Oracle how much memory can be used by all sessions for sort activities. Oracle will automatically determine how much of this memory each individual session can use.

 When the workarea_size_policy parameter is set to manual on the other hand, parameters such as sort_area_size, hash_area_size, and bitmap_merge_area_size define the amount of memory a session can use for various sort related operations.
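
As a quick illustration of the two modes (the target and area sizes below are made-up values):

-- automatic PGA management: one target shared by all sessions' work areas
ALTER SYSTEM SET workarea_size_policy = AUTO;
ALTER SYSTEM SET pga_aggregate_target = 2G;

-- manual management: each work area is sized per session
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600;  -- 100 MB, in bytes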

 It’s also important to note that a single SQL statement can perform multiple sorts.

 When temp blocks which belong to a sort are no longer required, these blocks are marked as such and will be re-allocated when a new sort is started.

When there's not enough free space in the temporary tablespace to meet the sort requirements, the sort operation will fail. This can occur when there are no unused blocks for the sort segments (too many parallel sorts running) or no space available in the temporary tablespace as a whole.

When this happens Oracle will raise the "ORA-1652: unable to extend temp segment" error. Please note that not much information is provided beyond the ORA-1652 error itself, so further diagnosis of such errors requires some additional work.

 

Diagnosing temp space usage can be done by utilizing Oracle's internal event mechanism to generate trace files whenever an ORA-1652 error occurs. These traces will contain all the information required for diagnosis, including the SQL statement text. There is some overhead to the system, but it should be minimal under most conditions as the trace file is only written when the error occurs.

When an ORA-1652 error occurs, a trace file will be created in the udump directory.

 Tracing ORA-1652 errors can either be done at the session or instance level:

ALTER SESSION SET EVENTS '1652 trace name errorstack';

OR

ALTER SYSTEM SET EVENTS '1652 trace name errorstack';
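
Once the diagnosis is done, the event can be switched off in the same way:

ALTER SYSTEM SET EVENTS '1652 trace name errorstack off';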

 Also, monitoring the temporary tablespace in realtime can be done using the following query:

SELECT   A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM     v$sort_segment A,
(
SELECT   B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM     v$tablespace B, v$tempfile C
WHERE    B.ts#= C.ts#
GROUP BY B.name, C.block_size
) D
WHERE    A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;

 

Sort segment usage divided by session:

SELECT   S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module,
S.program, SUM (T.blocks) * TBS.block_size / 1024 / 1024 mb_used,
T.tablespace,
COUNT(*) sort_ops
FROM     v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
WHERE    T.session_addr = S.saddr
AND      S.paddr = P.addr
AND      T.tablespace = TBS.tablespace_name
GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
S.program, TBS.block_size, T.tablespace
ORDER BY sid_serial;

Sort segment usage divided by statement:

SELECT   S.sid || ',' || S.serial# sid_serial, S.username,
T.blocks * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
T.sqladdr address, Q.hash_value, Q.sql_text
FROM     v$sort_usage T, v$session S, v$sqlarea Q, dba_tablespaces TBS
WHERE    T.session_addr = S.saddr
AND      T.sqladdr = Q.address (+)
AND      T.tablespace = TBS.tablespace_name
ORDER BY S.sid;

30th
SEP

What will happen to ASM when the disk path changes in Linux?

Posted by David Yahalom under Storage, ASM, RAC, Oracle

Have you ever wondered what happens if disk device names and/or major-minor numbers change at the operating system level? Will this cause any problems for ASM as it tries to access the drives after the drive paths have changed?

Well, no! This will cause absolutely no problem whatsoever for ASM. In fact, it's one of ASM's best features.

You see, ASM only cares about the LOGICAL disk names and not the PHYSICAL drive paths or numbers. Changing the drive paths or major-minor numbers in the O/S is no problem for ASM. This is because ASM scans the disks based on what is defined in asm_diskstring - so during boot, all drives that have a valid ASM header will be automatically added to ASM.

These devices are opened with standard system calls against whatever path discovery finds, so fixed paths and/or major numbers are not relied upon.

The disk paths, names and major-minor numbers are not persistently recorded in any of the ASM metadata. Each ASM disk has a disk header which contains the disk name and diskgroup stamped in it.

If ASM discovery finds the required number of disks for a given diskgroup then it will be able to mount the diskgroup. The ASM metadata header will contain the fixed and persistent logical disk name (for example - ORCL_DATA_DISK1). On boot, ASM will scan all disks for one with the above name in its header.

Once it finds that disk it will make the connection between the current (again, not persistent) disk path and the logical disk name.
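
You can see this mapping for yourself by querying v$asm_disk from the ASM instance - the PATH column holds the current (non-persistent) OS device path, while NAME holds the logical disk name stamped in the header:

SELECT name, path, header_status, mount_status
FROM   v$asm_disk
ORDER  BY name;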

23rd
FEB

The relation between an Oracle instance and memory in Windows

Posted by David Yahalom under Windows, Hardware, Oracle

Oracle instance memory allocation works differently in Windows than it does in UNIX/Linux-like operating systems.

In Windows environments, when the Oracle instance starts up, all global memory pages (the buffer cache, redo log buffer, library cache and so on) are committed. However, and this is an important difference compared to POSIX operating systems, only a small number of these memory pages are actually resident in memory upon instance startup, so most of the SGA is not part of Oracle's active working set.

This forces Oracle to compete on equal grounds with other processes. The O/S might swap certain pages from Oracle's working set out to disk during periods of increased activity or when other processes are paging more and competing for RAM. Windows pages according to activity using an MRU-like mechanism, so when paging is going on system-wide, Windows will try to reserve RAM pages for the most active process, the one that is paging the most.

If Windows decides to swap the database instance to disk, this will cause a severe performance degradation for the Oracle Database and might even grind the instance to a complete halt.

There are two registry parameters that allow us to manipulate the working set of the Oracle process. These are:

  • ORA_WORKINGSETMIN or ORA_%SID%_WORKINGSETMIN:
    Minimum working set for the ORACLE.EXE process (units = MB)
  • ORA_WORKINGSETMAX or ORA_%SID%_WORKINGSETMAX:
    Maximum working set for the ORACLE.EXE process (units = MB)

You can set these parameters under:

  • For single ORACLE_HOME installations: HKEY_LOCAL_MACHINE -> SOFTWARE -> ORACLE
  • For multiple ORACLE_HOME installations: HKEY_LOCAL_MACHINE -> SOFTWARE -> ORACLE -> HOMEx (for multiple homes)

ORA_WORKINGSETMIN can be used to prevent the pages allocated to the Oracle process from dropping below the defined threshold (in MB) until the instance is shut down.

The biggest benefit of setting these parameters will be in an environment where Oracle coexists with other applications, although it can be beneficial in other scenarios as well, such as when production and test instances are running on the same physical machine.
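
For example, assuming a single ORACLE_HOME and an instance SID of ORCL (the SID and the MB values are made up, and the assumption here is that the values are created as plain string entries - verify against the Oracle documentation for your release), the settings could be added from the command line like this:

reg add "HKLM\SOFTWARE\ORACLE" /v ORA_ORCL_WORKINGSETMIN /d 512
reg add "HKLM\SOFTWARE\ORACLE" /v ORA_ORCL_WORKINGSETMAX /d 2048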

Another very useful parameter on the Win32/64 platform is PRE_PAGE_SGA. This parameter forces Oracle to touch all SGA pages upon instance startup, bringing all of them into memory as part of the working set of the Oracle executable.

This will allow the instance to reach maximum performance more quickly, rather than through an incremental build-up as pages are loaded on an as-needed basis.
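
For reference, PRE_PAGE_SGA is a static parameter, so it is set in the spfile and only takes effect on the next instance restart (a sketch):

ALTER SYSTEM SET pre_page_sga = TRUE SCOPE = SPFILE;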

Combining ORA_WORKINGSETMIN with PRE_PAGE_SGA will force the Oracle instance to start above the minimum threshold and never drop below it. Using ORA_WORKINGSETMIN in isolation only guarantees that once the Oracle working set rises above the threshold, it will not drop below it.

22nd
FEB

Blog has been renamed!

Posted by David Yahalom under General IT

As you probably noticed I’ve changed my blog’s name and URL to www.davidyahalom.com instead of www.authoritybase.com. After doing some thinking I’ve decided that it is more appropriate for my blog at its current stage to reflect my name instead of using something more original. You can still access it by typing www.authoritybase.com, but all new articles and views will be published under the www.davidyahalom.com domain.

I’ve also created a new logo for my blog, one that better reflects the database related content on this site.
