Oracle, AWS and Cloud

It’s been a long gap in my blogging activity. For the past two years I’ve been shifting my focus away from core Oracle technologies and working more on cloud technologies. It’s been an interesting journey for me over the past couple of years. I’ll start blogging on the new technologies I’ve been working on and start posting more work logs here.


EBS-Optimization inconsistencies

Note: The issue below was fixed by AWS sometime between 6/15 and 6/23. Ref:

Today I observed that, since introducing new EC2 instance types and splitting them into current-generation and previous-generation instances, AWS has made some changes to its EC2 instance pages. One of the notable changes is in the EBS Optimization characteristics of the previous-generation instances.

Below is a screenshot of this information as of 6/15/2014 5:40 PM EST.

As you can see, there are two major changes. The m2.2xlarge and m2.4xlarge instance types no longer show an EBS-Optimized option. However, this information contradicts other pages within the AWS website.

Referring to the EBS optimization page in the AWS EC2 documentation, it is clearly mentioned that these two instance types have an EBS optimization capability of 500 Mbps and 1,000 Mbps respectively.

Another notable change is with the cr1.8xlarge instance type. As per the table, the network performance of this instance type is “High”. However, it used to be “10 Gigabit”. The same was confirmed from the original blog post announcing its launch back in Jan 2013.

I did another quick check in the AWS console to make sure that the backend AWS engine was not modified to decrease this performance, and I could see that it is just the EC2 instance page that was modified incorrectly.

Hope AWS folks will correct this pretty soon.

Autoconfig issues with DUAL table


Issue: Users are unable to log in to Applications


Environment: Oracle EBS on Linux 5


The below errors are observed in OACoreGroup.0.stderr:

[Jul 30, 2010 8:07:22 PM IST]:1280500642725:Thread[Thread-4698,10,main][fnd.framework.OAException]:Application: FND, Message Name: FND_GENERIC_MESSAGE. Tokens: MESSAGE = java.sql.SQLException: ORA-01403: no data found
ORA-06512: at line 1
;  (Could not lookup message because there is no database connection)


Troubleshooting Process:

Autoconfig is failing with the below error:

*******FATAL ERROR*******
PROGRAM : (/v03/oracle/DEV02/dev02comn/admin/install/DEV02_shine/
TIME    : Fri Jul 30 20:26:51 2010
FUNCTION: TXK::SQLPLUS::setConnectInfo [ Level 2 ]
Argument  must be a scalar

TXK::Error::abort('TXK::Error', 'HASH(0x8456578)') called at /v03/oracle/DEV02/dev02appl/au/11.5.0/perl/TXK/ line 291
TXK::Util::getScalarArg('TXK::Util', 'user', undef) called at /v03/oracle/DEV02/dev02appl/au/11.5.0/perl/TXK/ line 259
TXK::SQLPLUS::setConnectInfo('TXK::SQLPLUS=HASH(0x80c7248)', 'HASH(0x8456584)') called at /v03/oracle/DEV02/dev02comn/admin/install/DEV02_shine/ line 180
eval {...} called at /v03/oracle/DEV02/dev02comn/admin/install/DEV02_shine/ line 64

As there are not many known hits in Oracle Support for this error, I tried troubleshooting the issue using OS utilities.

Before we start troubleshooting an issue, it is important to be able to reproduce it consistently. So I tried to reproduce the error by executing only the Perl file having the issue:

/v02/oracle/DEV02/dev02ora/iAS/Apache/perl/bin/perl -I /v02/oracle/DEV02/dev02ora/iAS/Apache/perl/lib/5.00503:/v02/oracle/DEV02/dev02ora/iAS/Apache/perl/lib/site_perl/5.005:/v03/oracle/DEV02/dev02appl/au/11.5.0/perl /v03/oracle/DEV02/dev02comn/admin/install/DEV02_shine/ apps apps

Fortunately the issue reproduced and we got the same exception again.

Taking a closer look at this issue, at first it looks like a techstack scripts issue. But this is a fatal error, which often means that the developer of the script did not anticipate that this situation could occur; these are usually uncommon or non-standard errors.

Also, this error occurred at the below stage:

FUNCTION: TXK::SQLPLUS::_doExecute [ Level 3 ]
Cannot find SESSION_ID: in connect buffer

This means the issue is originating from the database tier and the session itself is not being established the way this script expects.

The immediate check to perform would be to ensure that we can establish a DB connection using the sqlplus executable for standard users like apps, applsys, applsyspub, etc. In our case all these tests were successful.
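For reference, the kind of connectivity check meant here is simply the following (each command prompts for the password; DEV02 is the TNS alias used in this environment):

sqlplus apps@DEV02
sqlplus applsys@DEV02
sqlplus applsyspub@DEV02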

The next step is to find out which statement in this sqlplus session is failing. We can always generate a 10046 trace at the DB level and extract the SQL (a sketch of enabling such a trace follows the list below), but it might not be the first choice for two reasons:

1. A 10046 trace at the DB level would be too lengthy, and it is tough to extract the exact SQL statement quickly.

2. A 10046 trace will have no information if the issue is within the SQL*Plus client and the statement causing the issue never reached the database (SQL*Plus semantic issues, logical issues, etc.).
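For completeness, session-level 10046 tracing of this kind is typically switched on and off like this (level 12 includes bind and wait information):

SQL> alter session set events '10046 trace name context forever, level 12';
-- run the statement(s) to be traced, then
SQL> alter session set events '10046 trace name context off';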

The next step was to find out what exactly happened before this exception was raised. OS tools like strace and truss are the best utilities in these cases. When seeded/packaged programs do not provide debug information, these tools will provide complete details. However, the output from these commands is often very detailed, and it takes significant effort to identify the correct source of the error. An understanding of OS system calls and Linux fundamentals is handy when using these commands.

I used strace to perform the debugging in this case. The output of the command below is trimmed for readability:

strace /v02/oracle/DEV02/dev02ora/iAS/Apache/perl/bin/perl -I /v02/oracle/DEV02/dev02ora/iAS/Apache/perl/lib/5.00503:/v02/oracle/DEV02/dev02ora/iAS/Apache/perl/lib/site_perl/5.005:/v03/oracle/DEV02/dev02appl/au/11.5.0/perl /v03/oracle/DEV02/dev02comn/admin/install/DEV02_shine/ apps apps

execve("/v02/oracle/DEV02/dev02ora/iAS/Apache/perl/bin/perl", ["/v02/oracle/DEV02/dev02ora/iAS/Apache/perl/bin/perl", "-I", "/v02/oracle/DEV02/dev02ora/iAS/Apache/perl/lib/5.00503:/v02/oracle/DEV02/dev02ora/iAS/Apache/perl/lib/site_perl/5.005:/v03/oracle/DEV02/dev02appl/au/11.5.0/perl", "/v03/oracle/DEV02/dev02comn/admin/install/DEV02_shine/", "apps", "apps"], [/* 363 vars */]) = 0
uname({sys="Linux", node="", ...}) = 0
brk(0)                                  = 0x80c7000
open("/etc/", O_RDONLY)    = 3
stat64("/v02/oracle/DEV02/dev02ora/8.0.6/bin/sqlplus", {st_mode=S_IFREG|0751, st_size=3789833, ...}) = 0
pipe([3, 4])                            = 0
fcntl64(3, F_GETFL)                     = 0 (flags O_RDONLY)
fstat64(3, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
write(8, "whenever sqlerror exit failure\n", 31) = 31
write(8, "connect apps/apps@DEV02\n", 24) = 24
write(8, "set head off\n", 13)          = 13
write(8, "select \'SESSION_ID:\'||sys_guid()"..., 47) = 47
write(8, "exit success\n", 13)          = 13
fork()                                  = 28017
fstat64(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 4), ...}) = 0
old_mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb75e7000
_llseek(1, 0, 0xbfff7324, SEEK_CUR)     = -1 ESPIPE (Illegal seek)
read(9, "Connected.\n\nno rows selected\n\n", 4096) = 30
read(9, "", 4096)                       = 0
close(9)                                = 0
write(1, "*******FATAL ERROR*******\nPROGRA"..., 1024*******FATAL ERROR*******
PROGRAM : (/v03/oracle/DEV02/dev02comn/admin/install/DEV02_shine/
TIME    : Fri Jul 30 20:27:22 2010
FUNCTION: TXK::SQLPLUS::_doExecute [ Level 3 ]
Cannot find SESSION_ID: in connect buffer

The above output more or less summarizes that one SQL statement executed inside the DB resulted in “no rows” and the process got aborted. The SQL statement was “select \'SESSION_ID:\'||sys_guid()”. This is not the complete statement, but we can get it from v$sqlarea pretty easily. It also confirms that the DB connection was successful; otherwise the error would have been different.

The below query in the database produced the exact statement for us:

SQL> select SQL_TEXT from v$sql where SQL_TEXT like '%SESSION_ID%';

select 'SESSION_ID:'||sys_guid() h1 from dual

When we ran this query as apps, it did not give any results:

SQL> select 'SESSION_ID:'||sys_guid() h1 from dual;

no rows selected

The same query, when executed as applsys, was successful:

SQL>  select sys_guid() from dual;


This can only mean one of two things:

1. the sys_guid() function depends on the user context for its output, or

2. we are not using the same set of objects in both queries.

The below queries revealed the real issue:

SQL> select * from dual;

no rows selected

SQL> select owner,object_name ,object_type,created from dba_objects where object_name like 'DUAL';

OWNER                          OBJECT_NAME                              OBJECT_TYPE        CREATED
------------------------------ ---------------------------------------- ------------------ ---------------
SYS                            DUAL                                     TABLE              27-MAY-04
PUBLIC                         DUAL                                     SYNONYM            27-MAY-04
APPS                           DUAL                                     TABLE              21-JUN-12

Ideally there should be only one DUAL table in the database. Someone here created a DUAL table in the APPS schema without any rows, and that resulted in all these issues.

All seeded programs use the DUAL table heavily to extract the output of functions easily. Unknowingly, this makes DUAL one of the most significant components of those programs, whereas not much attention is paid to the integrity of this table. This is just one example of what can happen:

SQL> desc dual

Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
ENAME                                              VARCHAR2(5)


Now the solution was pretty straightforward. We removed the bad DUAL table and reran the Autoconfig process. Everything went fine without any issues.
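For illustration, the cleanup in this case boils down to something like the following (double-check the owner before dropping anything; the PUBLIC synonym continues to point to SYS.DUAL):

SQL> drop table apps.dual;
SQL> select owner, object_name, object_type from dba_objects where object_name = 'DUAL';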

Reducing Patch Downtime in Oracle EBS

Note: This content is largely based on the article “Top 7 Ways of Reducing Patching Downtimes for Apps” by Steven Chan and a few other supporting articles.

1. Overview

Patching an Oracle Applications environment is a major maintenance activity which generally accounts for a significant percentage of the planned downtime of production environments. The patching could range from something as simple as applying a one-off patch to fix a specific bug to something as large as applying a maintenance pack. In this document we will discuss various options that can be used to reduce the patching downtime of Oracle Applications environments.

2. Techniques to Reduce Patching Downtime

Oracle Applications provides various features that can be utilized to decrease the amount of downtime required to perform patching. Some of these options require an investment of extra resources in terms of additional servers and storage, while others require efficient usage of existing adpatch options and other AD utilities with proper analysis and planning.

While adopting these methods, note that not all of them will decrease the patching time. Patch downtime largely depends on the type of patch being applied, the kind of activity the patch performs, and the patch levels of the existing environment. Also note that patching depends equally on database tier resources and on the application tier. A careful analysis of patching bottlenecks is vital in deciding the right mix of techniques to ensure the maximum reduction in patching downtime.

It is also important to note that one can only decrease the patch downtime, not eliminate it. Very few patches can be applied as hot patches without any downtime. Often, patch README files do not mention whether a patch can be applied as a hot patch. One should do a complete analysis of the patch activity to identify whether it can be applied as a hot patch.

Below are the techniques recommended by Oracle to decrease the patching downtime…

2.1 Using Staged Application File system

The Staged Application File System approach allows us to perform as many changes as possible in an offline Apps environment, and defers taking down the production environment until only the final database patching tasks remain. Using this approach, one can apply new patches to an exact clone of the production E-Business Suite environment while the production system is still running. The staged Applications environment is then used to push the database updates and APPL_TOP changes into the production environment.


Pre-requisites:
  1. Topology – A staged Applications system should duplicate the topology of your production system. Each physical APPL_TOP of your production system must exist in your staged system.
  2. Patch Level – 11.5.10 or AD.I.1 is required. For environments prior to 11.5.10, apply patch 3285255.

Steps Involved

Below are the high-level steps involved in this process.

  1. Prepare Source Environment
    1. Maintain Snapshot information
    2. Ensure minimum patch levels as per pre-requisites
  2. Prepare Staged environment (Should be exact replica of PROD topology)
  3. Apply necessary patches to Staged environment
  4. Update Production system
    1. Update production database
    2. Update production APPL_TOP
  5. Post patching steps
    1. Export patch history on Staged environment
    2. Import patch history on Production environment.

Only step 4 requires the production environment to be down, and steps 4-a and 4-b can be executed in parallel. This ensures that the overall time required to perform patching is minimal.

Advantages:
  1. Can be used for patches of any size and complexity.
  2. Significant reduction in patch downtime for huge patches.

Disadvantages:
  1. Requires double the hardware resources (servers and storage).
  2. Complexity involved in the initial setup phase.

Evaluating the benefit

The below example will help us understand how a Staged Applications environment helps reduce downtime.

Example 1:

Below is an example of a one-off patch application. This patch is largely DB-bound and does little activity on the application file system.

From this report, we can observe that 20% of the total patch time is spent in the Administration phase, which involves tasks like forms and reports generation.

These reports can be generated on the Staged Application file system (R12 only) beforehand to assess the exact amount of time that can be gained by using the Staged Application File System approach.

Phase Name / Start Time / End Time / Elapsed / Jobs

(Timing report from 5/12/2011: most phases, including "tbm: Create", started and finished at 9:45 with negligible elapsed time; the longer-running phases spanned 9:45 to 9:55, 9:56 to 9:57, and 9:57 to 10:00, and the overall run lasted from 9:45 to 10:00.)

2.2 Use a shared application-tier file system

Creating a multi-node system with a shared application tier file system saves patching time in multi-node Application environments, as we apply the patch only once, on the primary node. This ensures minimal downtime as all adpatch actions are performed only once.

This diagram is a sample representation of a Shared Application Tier File System in an R12 environment.


References:
  • Sharing the Application Tier File System in Oracle Applications Release 11i [ID 233428.1]
  • Sharing The Application Tier File System in Oracle E-Business Suite Release 12 [ID 384248.1]
  • How To Apply Patches On Shared Application Tier File System Environment [ID 745580.1]

2.3 Use Distributed AD

Distributed AD is a parallel processing feature that can reduce downtime by efficiently utilizing all the available resources on a shared application file system. AD Administration and AutoPatch run on the primary node and direct workers running on that node and on other nodes in the system. The AD Controller utility controls and monitors the actions of the workers that you specify.

This parallel processing makes use of managers, which direct the actions of worker processes. The manager assigns each worker a processing job and monitors its progress. When a worker completes a job, the manager assigns it another until all jobs are complete. AD Administration and AutoPatch can be directed to distribute processing tasks across multiple remote machines in a multi-node system. This type of parallel processing operation is called Distributed AD. It further reduces the time to complete a maintenance task by utilizing the processing capabilities of all of the nodes in the system.

Because the AD workers create and update file system objects as well as database objects, Distributed AD can only be used on systems that are using a Shared Application Tier File System to ensure that files are maintained in a single, centralized location.
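For reference, on a two-node shared APPL_TOP a Distributed AD patch run might look roughly like the following (worker counts are illustrative and should be sized per node):

# On the primary node, start AutoPatch with the total and local worker counts:
adpatch workers=16 localworkers=8

# On each additional node, start AD Controller in distributed mode and
# assign it the remaining workers (9-16 in this example) when prompted:
adctrl distributed=y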


Pre-requisites:
  • AD.H or higher
  • Shared Application Tier File System

Advantages:
  • Increased utilization of OS resources on all nodes of a shared Application file system
  • Works efficiently in environments where the middle-tier resources are a bottleneck for activities like forms generation.

Disadvantages:
  • Complex to implement; requires manual intervention in adpatch operations.
  • Works only for application tier operations. If the DB tier has a resource shortage, the performance improvement will not be significant.
  • Assigning more workers is not always a solution if the number of workers is not the bottleneck. Proper care must be taken while deciding the number of workers.


2.4 Merge multiple patches using AD Merge Patch

Merging patches saves time because the AutoPatch overhead of starting a new session is eliminated for the patches that are consolidated. Duplicate linking, generation, or database actions are run only once. If two patches update the same file, AD Merge Patch will save time by applying only the latest one. Patches can, and should, be merged with their listed prerequisite patches.

Some guidelines to consider when merging patches (a sample AD Merge Patch invocation follows the list):

  • Oracle DBA (AD) patches may change the AutoPatch utility itself. Always merge AD patches separately from non-AD patches.
  • Merge patches with their listed prerequisite patches to make patch application easier.
  • Individual patches can be merged with aggregated patches such as minipacks, maintenance releases, and release update packs.
  • Merged patches can be merged with other patches. This may be useful when maintaining a “current” patch which can be applied to other systems in order to bring them up to the same level.
  • Merging a patch performs any necessary system-specific character set translation.
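For reference, merging a set of downloaded patches with AD Merge Patch typically looks something like this (directory and merge names are placeholders):

# All source patches unzipped under /patches/src; merged patch written to /patches/merged
admrgpch -s /patches/src -d /patches/merged -merge_name q1_2011_merged

The merged driver is then applied with adpatch as usual.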

2.5 Run AD Patch in Non-Interactive Mode

Automating the patching process by applying patches in non-interactive mode helps reduce the delays introduced by manual intervention. This process allows you to store the responses to the patching prompts in a defaults file and then specify the name of this file when you run AutoPatch. This is particularly helpful when applying a huge number of patches (a sketch of such an invocation follows the reference below).


Reference:
  • AD Command Line Options for Release R12 [ID 1078973.1]
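Assuming a defaults file has already been created as described in the note above, a non-interactive run might look roughly like this (paths, patch number, and worker count are placeholders):

adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/adalldefaults.txt \
        patchtop=/patches/1234567 driver=u1234567.drv \
        logfile=u1234567.log workers=8 interactive=no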

2.6 Effective use of AD Patch options

AD Patch provides various options that can be used effectively to avoid repetitive tasks performed after every patch. These options have a greater benefit particularly when we are applying multiple patches separately. Alternatively, we can also use the merge feature of AD Merge Patch to avoid repeated execution of adpatch. Below are some of the options that can be used to optimize the adpatch runtime.

options=nocompiledb – skips compilation of invalid database objects after every patch.

options=nomaintainmrc – skips maintaining the MRC schema after every patch.

options=nocompilejsp – suppresses compilation of JSP files.

options=nogenerateportion – suppresses generate driver actions like genform, genpll, etc.
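For instance, when applying several patches back to back, each run can defer this work and a single adadmin cycle at the end can compile invalid objects, maintain MRC, and regenerate forms and JSP files once (patch-specific arguments omitted for brevity):

adpatch options=nocompiledb,nomaintainmrc,nocompilejsp,nogenerateportion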

3. Special Consideration for NLS patches

If the Oracle Applications environment has multiple NLS languages enabled, the NLS translation versions of all patches need to be applied as well. This will significantly increase the overall downtime of the environment and thus needs to be handled appropriately. Below are the two most prominent techniques used with NLS patches to reduce the downtime.

3.1 Merge NLS (Translation) Patches and Apply Them During Uptime

If there are multiple patches for multiple languages, merge all US patches into a single patch. Then, merge the NLS translation patches for each active language into a single patch for each language. Apply the US patches first during downtime. Then, the merged NLS translation patches can be applied during uptime.

3.2 Translation Synchronization

The Translation Synchronization patch feature provides a quick way to synchronize existing translations with the American English file versions on a given Applications instance. By applying just one patch for each language, one can bring the translations up to the current Applications patch level. You can also choose to get the latest translations to bring your translations up to date.

The basic concept involves preparing a single manifest file for your Applications instance.  This manifest reflects the language software content for all the active languages on your current Applications system.  The manifest will be used to generate customized patches for your system.

This feature reduces the installation time and complexity for synchronizing language installations. It also provides a simple way for you to update your existing translations, after major patching activities.


Pre-requisites:
  • 11i environments – 11i.AD.I.2 with patch 5837664, or 11i.AD.I.6
  • R12 environments – R12.AD.A.1


Reference:
  • Requesting Translation Synchronization Patches [ID 252422.1]

4. Considerations for Database Tier

Often the database tier operations of adpatch consume the majority of the downtime. To reduce this downtime, one should optimize the database initialization parameters for the patching-specific workload. This ensures that database performance is tuned appropriately for different kinds of workloads, such as patching versus normal operations. For example: increasing the SGA/PGA sizing related parameters.
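As a sketch, a temporary patching profile might bump the memory parameters along these lines (the values are purely illustrative, must be sized for your server, and should be reverted after patching):

SQL> alter system set sga_target=8G scope=spfile;
SQL> alter system set pga_aggregate_target=4G scope=spfile;
SQL> shutdown immediate
SQL> startup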

5. References

Automate SM/OPTSTAT component purging in SYSAUX tablespace

This post is a follow-up to my earlier post on handling SM/OPTSTAT component growth in the SYSAUX tablespace.

Edit: Oracle has released patch 8553944  to fix this issue. Please check My Oracle Support for patch availability for your environment.

In our environment we used a simple PL/SQL procedure to purge this SM/OPTSTAT component data gradually. As the purge process is resource intensive, it was important for us to perform this purge activity in smaller chunks. Below is the PL/SQL procedure that was developed by one of my colleagues (Naveen.KNS). He gladly agreed to share this code with the user community. Thanks to Naveen!
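A minimal sketch of the same approach, assuming the standard DBMS_STATS API (the procedure name and the exact chunking logic shown here are illustrative):

create or replace procedure xx_purge_optstat_chunk as
  l_oldest    date;
  l_retention number;
begin
  -- oldest optimizer statistics history currently kept in SYSAUX
  l_oldest    := trunc(cast(dbms_stats.get_stats_history_availability as date));
  l_retention := dbms_stats.get_stats_history_retention;
  -- purge roughly two days per run, but never newer than the retention window
  if l_oldest + 2 < trunc(sysdate) - l_retention then
    dbms_stats.purge_stats(l_oldest + 2);
  end if;
end;
/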


This code will purge two days of old statistics in one run. One can schedule this procedure using any scheduling mechanism during off-peak hours.

Do note that this process will ONLY purge the information but will not release the free space. You might have to perform re-org of necessary tables to reclaim the space.

Till next time…

Release Content Documents for Oracle Applications 12.2

Oracle has released these documents. The My Oracle Support document “Oracle E-Business Suite Releases 12.1 and 12.2 Release Content Documents [ID 1302189.1]” has links to the necessary documents. There is quite a bit of interesting information in those documents that any Apps DBA would be excited about.

As updated on some popular blogs here and here, R12.2 will be available pretty soon. There are quite a few interesting features that one can try in the 12.2 version. One should consider upgrading to 11gR2 at the earliest to get a glimpse of those interesting features.

More details soon.

2010 in review

The stats helper monkeys mulled over how this blog did in 2010, and here’s a high-level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads Minty-Fresh™.

Crunchy numbers


A helper monkey made this abstract painting, inspired by your stats.

A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 1,300 times in 2010. That’s about 3 full 747s.

In 2010, there were 7 new posts, not bad for the first year!

The busiest day of the year was November 16th with 26 views. The most popular post that day was Handling SM/OPTSTAT component growth in SYSAUX tablespace.

Where did they come from?

The top referring sites in 2010 were and

Some visitors came searching, mostly for ora-01400: cannot insert null into (“applsys”.”wf_local_roles_stage”.”notification_preference”), sm/optstat, oracle sysaux full, oracle database growth using dba_hist, and dba_hist_seg_stat.

Attractions in 2010

These are the posts and pages that got the most views in 2010.


Handling SM/OPTSTAT component growth in SYSAUX tablespace July 2010
1 comment


Free space in Auto extensible tablespaces February 2010


Measuring Database Space Growth February 2010


Resolving block corruption issues in R12 instance after clone June 2010
One execution per script April 2010

Handling SM/OPTSTAT component growth in SYSAUX tablespace

From Oracle 10g onwards, there is a serious bug (as per Oracle, a feature) due to which the SYSAUX tablespace will grow continuously. There is only one way to limit this growth, and it is not clearly documented anywhere. Below are the notes I prepared in this regard.

Below information is applicable to all versions from to 11.2.

Edit: Oracle has released patch 8553944  to fix this issue. Please check My Oracle Support for patch availability for your environment.

What’s the issue?

1. From 10gR1 onwards, Oracle maintains a backup of the optimizer statistics automatically, even if you don’t ask it to do so.

2. This backup information is stored in the SYSAUX tablespace, and the tables involved are as below:

WRI$_OPTSTAT_OPR                         TABLE

3. The default retention of this information can be obtained with the below command:

select dbms_stats.get_stats_history_retention from dual;

4. The default value for this setting is 30 days and can be decreased using the below command:

exec dbms_stats.alter_stats_history_retention(10);


5. Purging of this information is not scheduled using any job mechanism; it is not present in DBA_JOBS or DBA_SCHEDULER_JOBS. It is handled internally by the MMON background process.

6. As per the description in MOS Note “SYSAUX Grows Because Optimizer Stats History is Not Purged [ID 1055547.1]”, MMON performs this activity once every 24 hours, and if it cannot complete it within 5 minutes, it abandons the task.

7. No messages are written to the alert log or anywhere else to indicate that this purge did not happen. With every passing day the probability of failure increases, and it will reach a stage where the automatic purge by MMON never completes successfully.

8. The alternative provided by Oracle is to run the DBMS_STATS.PURGE_STATS procedure to purge this stats information manually. This is safe; there are no timeouts involved here, and only a manual termination or an abnormal failure will stop the purge.

What else am I missing here?

9. Even though we set the retention, it may happen that, due to purge failures, we accumulate a huge amount of statistics history in SYSAUX, making it grow continuously.

10. As the purge failure is not recorded anywhere, we might miss the fact that there is a process consuming space regularly.

11. The PURGE_STATS process performs a delete from these tables. Hence the free space is not released back to the tablespace, resulting in free space fragmentation inside these tables.

How do I know that I have this issue?

12. Find out your present retention value using the below statement:

select dbms_stats.get_stats_history_retention from dual;

13. Find out the oldest statistics history available using the below statement:

select dbms_stats.get_stats_history_availability from dual;

14. Compare the above two and you’ll know what to do.

What should I do now?

15. Check the retention setting in the system at the earliest and change it to a proper value. 30 days is the default; however, lower values would not do much harm.

16. Turn off autoextend on SYSAUX at the earliest to ensure that the tablespace doesn’t grow out of bounds and finally become completely unmanageable.

17. Start scheduling PURGE_STATS at the earliest and keep the SYSAUX tablespace size under control.

18. Perform a re-org of the related tables to reclaim the space (a sketch follows below).
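As a rough sketch, the re-org for one of these history tables (run as SYS in a maintenance window; repeat for the other WRI$_OPTSTAT tables) looks like this. Moving the table marks its indexes UNUSABLE, so they must be rebuilt afterwards:

SQL> alter table sys.wri$_optstat_histhead_history move tablespace sysaux;
SQL> select 'alter index sys.'||index_name||' rebuild;' from dba_indexes where table_name = 'WRI$_OPTSTAT_HISTHEAD_HISTORY';

Then run the generated ALTER INDEX ... REBUILD statements.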

Why should I care?

19. With every passing day the table grows bigger and bigger, finally becoming a top space consumer. The effort needed for cleanup increases exponentially: do it today in 1 hour, or do it tomorrow in 5 hours.

20. The PURGE_STATS procedure does a plain DELETE with a specific WHERE clause. This leaves a lot of mess, as the process either completes fully or does nothing at all; there are no commits at regular intervals in this code.

21. For an average database of 150 GB, in 6 months’ time you may end up with 100M rows in this table. Cleaning up that much data is always a pain.

Any tips?

22. Identify the oldest information you have in the stats history using the below command.

select dbms_stats.get_stats_history_availability from dual;

23. If the information is too old, start purging it in smaller chunks. Remember that if you cancel the process midway, it will do a complete rollback before exiting.

24. Use the below command to find out on which dates there was heavy stats generation.

select trunc(SAVTIME),count(1) from WRI$_OPTSTAT_HISTHEAD_HISTORY group by  trunc(SAVTIME) order by 1;

25. Start with the oldest date and invoke the below command:

exec dbms_stats.purge_stats(to_date('01-JAN-2010','DD-MON-YYYY'));

NOTE: This process is resource intensive. Do not perform it during business hours.

26. Once the amount of data comes down to a reasonable level, schedule the purge in off-peak hours as a regular maintenance activity.

27. Keep an eye on the amount of fragmentation and consider a re-org if needed.

Edit: 1/6/2016. Please review the below MOS note if the issue is still not resolved after applying the patches mentioned above: SYSAUX Statistics History Tables are Still Growing Even After Application of Patch 14373728 (Doc ID 1528085.1).

Wild behavior of wildcards

While playing with files on one of the Linux servers, I observed strange behavior with the wildcard “*”.

[oraxxx@vision test]$ cat a1.txt
[oraxxx@vision test]$ cat a2.txt

[oraxxx@vision test]$ ls
a1.txt  a2.txt
[oraxxx@vision test]$ cp a*.txt
[oraxxx@vision test]$ echo $?

This command is expected to fail, as we did not provide any destination.
But it succeeded, because the shell expanded a*.txt to “a1.txt a2.txt”.
So “cp a*.txt” finally translated to “cp a1.txt a2.txt”, overwriting a2.txt with a1.txt.

[oraxxx@vision test]$ cat a1.txt
[oraxxx@vision test]$ cat a2.txt

A little more play revealed more interesting behavior.

[oraxxx@vision test]$ mkdir a3
[oraxxx@vision test]$ ls -ltr
total 24
-rw-r--r-- 1 oraxxx dba    2 Jun 22 04:31 a1.txt
-rw-r--r-- 1 oraxxx dba    2 Jun 22 04:31 a2.txt
drwxr-xr-x 2 oraxxx dba 4096 Jun 22 10:26 a3
[oraxxx@vision test]$ cp a*
[oraxxx@vision test]$ echo $?
[oraxxx@vision test]$ ls -l `pwd`/a3
total 16
-rw-r--r-- 1 oraxxx dba 2 Jun 22 10:26 a1.txt
-rw-r--r-- 1 oraxxx dba 2 Jun 22 10:26 a2.txt

This behavior shows up whenever the wildcard expands to exactly two files, or whenever the last item in the expansion is a directory.
We just need to be careful while using commands similar to these. You never know when you’ll overwrite an important file.
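A simple safeguard is to preview the expansion with echo before running the real command, or to use cp -i so that cp prompts before overwriting:

[oraxxx@vision test]$ echo cp a*.txt
cp a1.txt a2.txt
[oraxxx@vision test]$ cp -i a*.txt
cp: overwrite `a2.txt'?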