Research of how many records are NULL in a field of one or more tables, and how many of them are NULL broken down by the value of another field

Find, for every column of a table, how many records are NULL, and break those NULL records down by the values of another column of the same table.

Example result:

TAB_NAME   COL_NAME           NUM_ROWS   NUM_NULLS_TOT   NUM_NULLS_X_COLVAL   COLVAL_LE_ID
-------------------------------------------------------------------------------------------
S_REPORT   S_TAX_BRACKET_CD   7111600    7111599         2470                 631
S_REPORT   S_TAX_BRACKET_CD   7111600    7111599         57859                FC222200
S_REPORT   S_TAX_BRACKET_CD   7111600    7111599         3038882              Z2030
S_REPORT   S_TAX_BRACKET_CD   7111600    7111599         4012388              C8101

 

The S_REPORT table has 7111600 rows and a column named S_TAX_BRACKET_CD that is NULL in 7111599 of them. The S_REPORT table also has a column named LE_ID whose values are 631, FC222200, Z2030 and C8101. The NULL rows are broken down by LE_ID value as follows:

LE_ID=631 has 2470 NULL rows

LE_ID=FC222200 has 57859 NULL rows

LE_ID=Z2030 has 3038882 NULL rows

LE_ID=C8101 has 4012388 NULL rows
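
For a single table and column, the same breakdown can be obtained manually with a query of this kind (a minimal sketch that reuses the example names S_REPORT, S_TAX_BRACKET_CD and LE_ID; the procedure below builds and runs this type of statement dynamically for every column of the selected tables):

-- Count the NULL values of S_TAX_BRACKET_CD in S_REPORT, grouped by LE_ID
-- (table and column names taken from the example above; adapt them to your schema)
SELECT LE_ID, COUNT(*) AS NUM_NULLS
FROM S_REPORT
WHERE S_TAX_BRACKET_CD IS NULL
GROUP BY LE_ID
ORDER BY NUM_NULLS;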

Below is the SQL script that creates two tables (T_NUM_NULLS and T_NUM_NULLS_X_COLVAL) and one view (V_NUM_NULLS_X_COLVAL) in which the result can be found after the script has run. Important: before running the script in your DB, modify the "nametab" and "namecol" default variable values in the script according to your research (in the example below they default to 'S_REPORT%' and 'LE_ID').

 

/* Procedure that finds, for every column of the tables matching the nametab variable, how many records are NULL,   */
/* grouped by the values of the column specified in the namecol variable.                                           */
/* The results of the procedure are in the V_NUM_NULLS_X_COLVAL view and in the T_NUM_NULLS and T_NUM_NULLS_X_COLVAL tables. */
/* Modify the default nametab and namecol values as you prefer.                                                     */
/*                                                                                                                  */
/* NOTICES:                                                                                                         */
/* 1) The column named in the namecol variable must be present in all tables on which the NULL research is executed. */
/* 2) The procedure hard-codes the data type sizes in the EXECUTE IMMEDIATE strings - this could cause runtime errors. */
/* 3) Please respect the standard Oracle naming rules for database objects in the contents of the nametab and namecol variables, */
/*    otherwise the procedure will return an error.                                                                 */

DECLARE

cnt INTEGER; -- 0 or 1 depending on the records found - 0 means that the T_NUM_NULLS table does not exist
nametab VARCHAR2(30) DEFAULT 'S_REPORT%'; -- Table name(s) pattern variable
namecol VARCHAR2(30) DEFAULT 'LE_ID'; -- Name of the column that contains the categories used to distinguish the NULLs
numtabtot INTEGER;
numtabok INTEGER;

BEGIN

 

-- Variable value checks
nametab := REPLACE(nametab, ' ', '');
namecol := REPLACE(namecol, ' ', '');
namecol := REPLACE(namecol, '%', '');

 

-- Check if the T_NUM_NULLS table exists. If it exists, drop and re-create it, otherwise create it.
SELECT count(*) INTO cnt FROM USER_TABLES WHERE table_name = 'T_NUM_NULLS';

IF cnt = 0
THEN
EXECUTE IMMEDIATE 'create table T_NUM_NULLS (TAB_NAME VARCHAR2(30), COL_NAME VARCHAR2(30), NUM_ROWS NUMBER, NUM_NULLS NUMBER)';
ELSE
EXECUTE IMMEDIATE 'drop table T_NUM_NULLS';
EXECUTE IMMEDIATE 'create table T_NUM_NULLS (TAB_NAME VARCHAR2(30), COL_NAME VARCHAR2(30), NUM_ROWS NUMBER, NUM_NULLS NUMBER)';
END IF;

-- Check if the T_NUM_NULLS_X_COLVAL table exists. If it exists, drop and re-create it, otherwise create it.
SELECT count(*) INTO cnt FROM USER_TABLES WHERE table_name = 'T_NUM_NULLS_X_COLVAL';

IF cnt = 0
THEN
EXECUTE IMMEDIATE 'create table T_NUM_NULLS_X_COLVAL (TAB_NAME VARCHAR2(30), COL_NAME VARCHAR2(30), NUM_NULLS NUMBER, COLVAL_' || namecol || ' VARCHAR2(8))';
ELSE
EXECUTE IMMEDIATE 'drop table T_NUM_NULLS_X_COLVAL';
EXECUTE IMMEDIATE 'create table T_NUM_NULLS_X_COLVAL (TAB_NAME VARCHAR2(30), COL_NAME VARCHAR2(30), NUM_NULLS NUMBER, COLVAL_' || namecol || ' VARCHAR2(8))';
END IF;

-- Fill the numtabok and numtabtot variables to check that the column named in the namecol variable exists
-- in all tables involved in the NULL research, in order to avoid errors in the queries below.
SELECT COUNT(*) INTO numtabok
FROM
(
SELECT DISTINCT table_name
FROM ALL_TAB_COLUMNS
WHERE 1=1
AND table_name LIKE nametab
);

SELECT COUNT(*) INTO numtabtot
FROM
(
SELECT DISTINCT t2.table_name, t2.COLUMN_NAME
FROM
(
SELECT DISTINCT table_name, COLUMN_NAME
FROM ALL_TAB_COLUMNS
WHERE 1=1
AND table_name LIKE nametab
) t1
INNER JOIN
(
SELECT DISTINCT table_name, COLUMN_NAME
FROM ALL_TAB_COLUMNS
WHERE 1=1
AND table_name LIKE nametab
AND COLUMN_NAME = namecol
) t2
ON (t1.table_name = t2.table_name)
);

 

-- Create the view that contains the procedure result.
EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW V_NUM_NULLS_X_COLVAL AS SELECT t1.TAB_NAME, t1.COL_NAME, t1.NUM_ROWS, t1.NUM_NULLS NUM_NULLS_TOT, t2.NUM_NULLS NUM_NULLS_X_COLVAL, t2.COLVAL_' || namecol || ' FROM T_NUM_NULLS t1 LEFT OUTER JOIN T_NUM_NULLS_X_COLVAL t2 ON t1.TAB_NAME = t2.TAB_NAME AND t1.COL_NAME = t2.COL_NAME';

IF numtabtot = numtabok
THEN
FOR someone IN
(
select t1.TABLE_NAME, t2.COLUMN_NAME, t1.NUM_ROWS, t2.NUM_NULLS
from
(
select TABLE_NAME, NUM_ROWS
from USER_TABLES
where 1=1
AND table_name IN
(
SELECT DISTINCT table_name
FROM ALL_TAB_COLUMNS
WHERE 1=1
AND table_name LIKE nametab
)
) t1
INNER JOIN
(
select TABLE_NAME, COLUMN_NAME, NUM_NULLS
from ALL_TAB_COL_STATISTICS
where 1=1
AND table_name IN
(
SELECT DISTINCT table_name
FROM ALL_TAB_COLUMNS
WHERE 1=1
AND table_name LIKE nametab
)
) t2
on t1.TABLE_NAME = t2.TABLE_NAME
)
LOOP
EXECUTE IMMEDIATE 'insert into T_NUM_NULLS (TAB_NAME, COL_NAME, NUM_ROWS, NUM_NULLS) values (''' || someone.TABLE_NAME || ''', ''' || someone.COLUMN_NAME || ''', ''' || someone.NUM_ROWS || ''', ''' || someone.NUM_NULLS || ''')';
EXECUTE IMMEDIATE 'insert into T_NUM_NULLS_X_COLVAL (TAB_NAME, COL_NAME, NUM_NULLS, COLVAL_' || namecol || ') ' || 'select ''' || someone.TABLE_NAME || ''' TABLE_NAME, ''' || someone.COLUMN_NAME || ''' COLUMN_NAME, count(*) NUM_NULLS, ' || namecol || ' from ' || someone.TABLE_NAME || ' where ' || someone.COLUMN_NAME || ' is null group by ' || namecol;
END LOOP;

ELSE

FOR someone IN
(
select t1.TABLE_NAME, t2.COLUMN_NAME, t1.NUM_ROWS, t2.NUM_NULLS
from
(
select TABLE_NAME, NUM_ROWS
from USER_TABLES
where 1=1
AND table_name IN
(
SELECT DISTINCT table_name
FROM ALL_TAB_COLUMNS
WHERE 1=1
AND table_name LIKE nametab
)
) t1
INNER JOIN
(
select TABLE_NAME, COLUMN_NAME, NUM_NULLS
from ALL_TAB_COL_STATISTICS
where 1=1
AND table_name IN
(
SELECT DISTINCT table_name
FROM ALL_TAB_COLUMNS
WHERE 1=1
AND table_name LIKE nametab
)
) t2
on t1.TABLE_NAME = t2.TABLE_NAME
)
LOOP
EXECUTE IMMEDIATE 'insert into T_NUM_NULLS (TAB_NAME, COL_NAME, NUM_ROWS, NUM_NULLS) values (''' || someone.TABLE_NAME || ''', ''' || someone.COLUMN_NAME || ''', ''' || someone.NUM_ROWS || ''', ''' || someone.NUM_NULLS || ''')';
/* Compared to the previous FOR LOOP, the "insert into T_NUM_NULLS_X_COLVAL" is not executed because at least one of the researched tables does not have the column specified in namecol. */
END LOOP;

END IF;

END;

/
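
Once the block has completed, the result can be read from the view created by the script, for example with the query below (a minimal sketch assuming the default namecol value 'LE_ID', so the view exposes a COLVAL_LE_ID column; the column name changes with the namecol value):

-- Read the research result from V_NUM_NULLS_X_COLVAL
SELECT TAB_NAME, COL_NAME, NUM_ROWS, NUM_NULLS_TOT, NUM_NULLS_X_COLVAL, COLVAL_LE_ID
FROM V_NUM_NULLS_X_COLVAL
ORDER BY TAB_NAME, COL_NAME;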

 

 

Goal Search Algorithm


Description:
Goal search algorithm, i.e. a search algorithm for the limit of a monotonically increasing sequence bounded above.
1) Purpose:
Search for the limit of a monotonically increasing sequence bounded above.
2) Example:
Suppose you want to build a tower 10 meters high, with a tolerance of 10 millimeters, from a given number of bricks, each of a different height, stacked one on top of the other. To build the tower we are obliged to use the bricks in Last In First Out order.
Suppose we have the following stack of bricks, stored in a table named XDataTable (heights in millimeters):
908
906
832
454
464
522
45
229
5057
433
930
214
1
1
1
2
1
219
1
83
1
3
82
424
2
58
30
24
461
675
Running goal_search_procedure.sql returns the following sequence of bricks, named XDataTableTarget, with the unused bricks marked; it allows us to reach the goal of building the 10 meter tower within the tolerance.
908
906
832
454
464
522
45
229
5057
433
930   (not used)
214   (not used)
1
1
1
2
1
219   (not used)
1
83
1
3
82    (not used)
424   (not used)
2
58    (not used)
30
24
461   (not used)
675   (not used)
3) Example environment preparation
For convenience, here are the scripts that prepare the environment for the preceding example.
---------------------------------
-- TEST ENVIRONMENT PREPARATION --
---------------------------------
CREATE TABLE XDataTable
(
SortId NUMBER,
DataValue NUMBER
)
TABLESPACE TFTCUS04
LOGGING
NOCOMPRESS
NOCACHE
MONITORING;
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (1, 908);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (2, 906);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (3, 832);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (4, 454);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (5, 464);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (6, 522);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (7, 45);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (8, 229);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (9, 5057);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (10, 433);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (11, 930);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (12, 214);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (13, 1);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (14, 1);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (15, 1);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (16, 2);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (17, 1);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (18, 219);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (19, 1);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (20, 83);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (21, 1);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (22, 3);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (23, 82);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (24, 424);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (25, 2);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (26, 58);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (27, 30);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (28, 24);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (29, 461);
INSERT INTO XDATATABLE (SortId, DataValue) VALUES (30, 675);
CREATE TABLE XDataTableSorted
(
Row_Id NUMBER,
DataValue NUMBER,
FLAG_JUMP NUMBER,
SumDataValue NUMBER
)
TABLESPACE TFTCUS04
LOGGING
NOCOMPRESS
NOCACHE
MONITORING;
CREATE TABLE XDataTableTemp
(
Row_Id NUMBER,
DataValue NUMBER,
FLAG_JUMP NUMBER,
SumDataValue NUMBER
)
TABLESPACE TFTCUS04
LOGGING
NOCOMPRESS
NOCACHE
MONITORING;
CREATE TABLE XDataTableTarget
(
Row_Id NUMBER,
DataValue NUMBER,
SumDataValue NUMBER,
WeightDataValue NUMBER
)
TABLESPACE TFTCUS04
LOGGING
NOCOMPRESS
NOCACHE
MONITORING;
COMMIT;
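Before running the search, it can be useful to verify that the stack is tall enough to reach the target at all. A quick sanity check (with the sample rows above this should return 13063, well above the 10000 mm target):

-- Total height of the stack: a solution exists only if this is at least TargetValue (10000 mm)
SELECT SUM(DataValue) AS TotalHeight
FROM XDataTable;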
4) goal_search_procedure.sql script:
The following script searches for and returns the goal sequence.
/*
File: goal_search_procedure.sql
Date of creation: 2014-11-07
Description: Goal search algorithm, i.e. a search algorithm for the limit of a monotonically increasing sequence bounded above.
Author:
*/
DECLARE
SumDataValueProgr NUMBER DEFAULT 0;
Tollerance NUMBER;
TargetValue NUMBER;
i NUMBER;
MaxRowId NUMBER;
NoMoreData NUMBER DEFAULT 0;
BEGIN
-- Variable initialization
Tollerance := 10;
TargetValue := 10000;
dbms_output.put_line('Tollerance: ' || Tollerance);
dbms_output.put_line('TargetValue: ' || TargetValue);
-- Preparation of the XDataTableSorted table (working table) with the data of the XDataTable table (source table)
DELETE FROM XDataTableSorted;
dbms_output.put_line('DELETE FROM XDataTableSorted');
INSERT INTO XDataTableSorted (Row_Id, DataValue, FLAG_JUMP, SumDataValue)
SELECT SortId Row_Id,
DataValue,
0 AS FLAG_JUMP,
SUM (DataValue) OVER (ORDER BY SortId ASC ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS SumDataValue
FROM XDataTable
WHERE 1=1
ORDER BY SortId
;
dbms_output.put_line('INSERT INTO XDataTableSorted');
SELECT MAX(Row_Id) INTO MaxRowId
FROM XDataTableSorted;
dbms_output.put_line('MaxRowId: ' || MaxRowId);
-- Preparation of the XDataTableTarget table (results table)
DELETE FROM XDataTableTarget;
dbms_output.put_line('DELETE FROM XDataTableTarget');
commit;
-- Loop searching for a SumDataValue value as close as possible to the TargetValue value
-- CoverLoop: LOOP
<<CoverLoop>> LOOP
SELECT MIN(Row_Id) INTO i
FROM (
SELECT FLAG_JUMP, SUM(DataValue) OVER (ORDER BY Row_Id) SumDataValue, Row_Id
FROM XDataTableSorted
WHERE 1=1
AND FLAG_JUMP <> 1
) CalcSumDataValueProgr
WHERE SumDataValue > TargetValue
AND FLAG_JUMP <> 1
;
dbms_output.put_line('i: ' || i);
IF (i IS NULL) THEN
SELECT MAX(Row_Id)+1 INTO i
FROM (
SELECT FLAG_JUMP, SUM(DataValue) OVER (ORDER BY Row_Id) SumDataValue, Row_Id
FROM XDataTableSorted
WHERE 1=1
AND FLAG_JUMP <> 1
) CalcSumDataValueProgr
WHERE 1=1
AND FLAG_JUMP <> 1
;
dbms_output.put_line('exNull i: ' || i);
SELECT distinct(FLAG_JUMP) INTO NoMoreData
FROM XDataTableSorted
WHERE Row_Id > i or Row_Id = MaxRowId;
END IF;
-- SET i = i - 1
i := i - 1;
dbms_output.put_line('i - 1: ' || i);
SELECT SumDataValue INTO SumDataValueProgr
FROM (
SELECT FLAG_JUMP, SUM(DataValue) OVER (ORDER BY Row_Id) SumDataValue, Row_Id
FROM XDataTableSorted
WHERE 1=1
AND FLAG_JUMP <> 1
) CalcSumDataValueProgr
WHERE Row_Id = i
;
dbms_output.put_line('SumDataValueProgr: ' || SumDataValueProgr);
-- Check whether SumDataValueProgr is within the tolerance of TargetValue, or the last Row_Id has been reached, or there is no more data
IF (SumDataValueProgr > TargetValue - Tollerance) OR (i = MaxRowId) OR (NoMoreData = 1)
THEN
-- Set FLAG_JUMP = 1 for all discarded records that satisfy the condition
UPDATE XDataTableSorted
SET FLAG_JUMP = 1, SumDataValue = 0
WHERE DataValue > ABS((TargetValue - Tollerance) - SumDataValueProgr)
AND Row_Id > i
;
dbms_output.put_line('UPDATE EXIT XDataTableSorted - FLAG_JUMP = 1');
-- Preparation of the XDataTableTemp table (temporary support table)
DELETE FROM XDataTableTemp;
dbms_output.put_line('DELETE FROM XDataTableTemp');
-- Populating of the XDataTableTemp table (temporary support table)
INSERT INTO xdatatabletemp (Row_Id, DataValue, FLAG_JUMP, SumDataValue)
SELECT Row_Id,
DataValue,
FLAG_JUMP,
SUM (DataValue) OVER (ORDER BY Row_Id ASC ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS SumDataValue
FROM XDataTableSorted
WHERE 1=1
and FLAG_JUMP = 0
;
dbms_output.put_line('INSERT INTO XDataTableTemp');
-- Update sumdatavalue for all OK records (FLAG_JUMP = 0) that satisfy the condition
UPDATE xdatatablesorted
SET sumdatavalue = (SELECT xdatatabletemp.sumdatavalue
FROM xdatatabletemp
WHERE xdatatabletemp.row_id = xdatatablesorted.row_id)
WHERE EXISTS (SELECT xdatatabletemp.sumdatavalue
FROM xdatatabletemp
WHERE xdatatabletemp.row_id = xdatatablesorted.row_id);
dbms_output.put_line('UPDATE XDataTableSorted - sumdatavalue x FLAG_JUMP = 0');
-- Insert the records identified as OK by CoverLoop into the XDataTableTarget table
INSERT INTO XDataTableTarget (Row_Id, DataValue, SumDataValue, WeightDataValue)
SELECT Row_Id
,DataValue
,SumDataValue
,1 AS WeightDataValue
FROM (
SELECT Row_Id
,DataValue
,SUM(DataValue) OVER (ORDER BY Row_Id) SumDataValue
FROM XDataTableSorted
WHERE 1=1
AND FLAG_JUMP = 0
) SDTable
;
dbms_output.put_line('INSERT INTO XDataTableTarget - FLAG_JUMP = 0 - ' || SumDataValueProgr);
-- Insert the records identified as discarded by CoverLoop into the XDataTableTarget table
INSERT INTO XDataTableTarget (Row_Id, DataValue, SumDataValue, WeightDataValue)
SELECT Row_Id
,DataValue
,0 AS SumDataValue
,0 AS WeightDataValue
FROM (
SELECT Row_Id
,DataValue
FROM XDataTableSorted
WHERE 1=1
AND FLAG_JUMP = 1
) SDTable
;
dbms_output.put_line('INSERT INTO XDataTableTarget - FLAG_JUMP = 1 - ' || SumDataValueProgr);
commit;
-- EXIT CoverLoop
EXIT CoverLoop;
ELSE
-- Set FLAG_JUMP = 1 for all discarded records that satisfy the condition
UPDATE XDataTableSorted
SET FLAG_JUMP = 1, SumDataValue = 0
WHERE DataValue > ABS((TargetValue - Tollerance) - SumDataValueProgr)
AND Row_Id > i
;
dbms_output.put_line('UPDATE XDataTableSorted - FLAG_JUMP = 1');
-- Preparation of the XDataTableTemp table (temporary support table)
DELETE FROM XDataTableTemp;
dbms_output.put_line('DELETE FROM XDataTableTemp');
-- Populating of the XDataTableTemp table (temporary support table)
INSERT INTO xdatatabletemp (Row_Id, DataValue, FLAG_JUMP, SumDataValue)
SELECT Row_Id,
DataValue,
FLAG_JUMP,
SUM (DataValue) OVER (ORDER BY Row_Id ASC ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS SumDataValue
FROM XDataTableSorted
WHERE 1=1
and FLAG_JUMP = 0
;
dbms_output.put_line('INSERT INTO XDataTableTemp');
-- Update sumdatavalue for all OK records (FLAG_JUMP = 0) that satisfy the condition
UPDATE xdatatablesorted
SET sumdatavalue = (SELECT xdatatabletemp.sumdatavalue
FROM xdatatabletemp
WHERE xdatatabletemp.row_id = xdatatablesorted.row_id)
WHERE EXISTS (SELECT xdatatabletemp.sumdatavalue
FROM xdatatabletemp
WHERE xdatatabletemp.row_id = xdatatablesorted.row_id);
dbms_output.put_line('UPDATE XDataTableSorted - sumdatavalue x FLAG_JUMP = 0');
commit;
END IF;

END LOOP CoverLoop;
END;
/
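After the block has run, the resulting sequence can be read back from XDataTableTarget; rows with WeightDataValue = 1 are the bricks actually used, rows with WeightDataValue = 0 are the discarded ones (a minimal sketch based on the table created above):

-- Resulting sequence: WeightDataValue = 1 marks used bricks, 0 marks discarded ones
SELECT Row_Id, DataValue, SumDataValue, WeightDataValue
FROM XDataTableTarget
ORDER BY Row_Id;

-- Height reached by the used bricks (10000 with the sample data, i.e. exactly the target)
SELECT SUM(DataValue) AS ReachedHeight
FROM XDataTableTarget
WHERE WeightDataValue = 1;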
5) goal_search_procedure.sql output:
-- ******************************************************************************** --
/*
DBMS_OUTPUT
Tollerance: 10
TargetValue: 10000
DELETE FROM XDataTableSorted
INSERT INTO XDataTableSorted
MaxRowId: 30
DELETE FROM XDataTableTarget
i: 11
i - 1: 10
SumDataValueProgr: 9850
UPDATE XDataTableSorted - FLAG_JUMP = 1
DELETE FROM XDataTableTemp
INSERT INTO XDataTableTemp
UPDATE XDataTableSorted - sumdatavalue x FLAG_JUMP = 0
i: 23
i - 1: 22
SumDataValueProgr: 9944
UPDATE XDataTableSorted - FLAG_JUMP = 1
DELETE FROM XDataTableTemp
INSERT INTO XDataTableTemp
UPDATE XDataTableSorted - sumdatavalue x FLAG_JUMP = 0
i:
exNull i: 29
i - 1: 28
SumDataValueProgr: 10000
UPDATE EXIT XDataTableSorted - FLAG_JUMP = 1
DELETE FROM XDataTableTemp
INSERT INTO XDataTableTemp
UPDATE XDataTableSorted - sumdatavalue x FLAG_JUMP = 0
INSERT INTO XDataTableTarget - FLAG_JUMP = 0 - 10000
INSERT INTO XDataTableTarget - FLAG_JUMP = 1 - 10000
*/
-- ************

File Directory Change Monitoring

1) Purpose:

We want to monitor a Unix directory and be notified by e-mail when its contents change.

 

2) Solution:

First modify the configuration parameters inside the following shell script appropriately, then run it in background mode (sh filedirChangeMonitoring.sh &).

 

#!/bin/bash

#

#  filedirChangeMonitoring.sh

#  DATE : 2014-07-30

#  DESC.:

#  Script for the monitoring of files in a directory

#  The script checks if changes are made to the files of a directory and it alerts with an e-mail

#  Before running the script modify the following parameters: TO_ADDRESS, FROM_EMAIL_ADDRESS, xDirMon and xTime

#  PARENT SCRIPT: none

#

#  NOTE:

#  The script should be run in background mode as follow: sh filedirChangeMonitoring.sh &

#  The script uses these following support files that should not be deleted or modified in order to not distort the monitoring:

#  email_text.txt, testdira.txt and testdirb.txt

#  The script requires the sendmail service active.

#  To stop the script: ps -ef | grep filedirChangeMonitoring and kill -9 PID

 

# Configuration script parameters

#

# Recipients and sender of the email

TO_ADDRESS="pippo@mail.com"

FROM_EMAIL_ADDRESS="pluto@mail.com"

CC_ADDRESS=""

BCC_ADDRESS=""

# Hostname where the script is executed

xHostName=$(hostname --fqdn)

# Email subject

xSubject="Dir Monitoring Script Notification $(date '+%Y/%m/%d %H:%M:%S') of $xHostName"

# Directory where filedirChangeMonitoring.sh resides

DIR_SCRIPT="$(pwd)"

# Monitored directory

xDirMon="/home/mydir"

# File containing the text of the email to be sent

xMAIL_to_SEND="$DIR_SCRIPT/email_text.txt"

# Control time interval in sec.

xTime=300

# Date command example that returns date and time

# date '+DATE: %Y/%m/%d%nTIME:%H:%M:%S'

 

exitcode=0

 

 

# Checks if the monitored directory exists

#

if ! ls "$xDirMon" &> /dev/null; then

echo "Monitored directory does not exist"

exit 3

fi

 

 

# Checks if the contents of the monitored directory has been changed

#

touch $DIR_SCRIPT/testdirb.txt

while true

do

ls -la $xDirMon > $DIR_SCRIPT/testdira.txt

 

if [ $(diff $DIR_SCRIPT/testdira.txt $DIR_SCRIPT/testdirb.txt| wc -l) -gt 0 ]; then

echo "To: $TO_ADDRESS" > $xMAIL_to_SEND

echo "From: $FROM_EMAIL_ADDRESS" >> $xMAIL_to_SEND

# echo "Cc: $CC_ADDRESS" >> $xMAIL_to_SEND

# echo "Bcc: $BCC_ADDRESS" >> $xMAIL_to_SEND

echo "Subject: $xSubject" >> $xMAIL_to_SEND

diff $DIR_SCRIPT/testdira.txt $DIR_SCRIPT/testdirb.txt >> $xMAIL_to_SEND

# Mail send command

sendmail -v $TO_ADDRESS < $xMAIL_to_SEND

fi

 

cp $DIR_SCRIPT/testdira.txt $DIR_SCRIPT/testdirb.txt

sleep $xTime

done

 

 

if [ $exitcode -lt 3 ]; then

exit 0

else

exit 3

fi

 

3) Example:

Suppose you have rwx rights on the /home and /home/mydir directories.

 

3.1) Create the filedirChangeMonitoring.sh script in /home

 

3.2) Modify the configuration parameters as follows:

TO_ADDRESS="pippo@mail.com"

FROM_EMAIL_ADDRESS="pluto@mail.com"

xDirMon="/home/mydir"

xTime=300

 

3.3) Run sh filedirChangeMonitoring.sh &

 

3.4) In /home/mydir, run the touch pluto.txt command

 

3.5) Below is the e-mail text received after the pluto.txt file was written into the monitored directory /home/mydir:

 

2c2

< drwxr-xr-x 3 user1 user1 28672 Jul 31 14:08 .

> drwxr-xr-x 3 user1 user1 28672 Jul 31 13:32 .

4d3

< -rw-rw-r-- 1 user1 user1 0 Jul 31 14:08 pluto.txt

 

ORA-600 Lookup Error Categories (part 1c)

Internal Errors Categorised by mnemonic

kpor  progint/opi support for streaming protocol used by replication
kposc  progint/kpo support for scrollable cursors
kpotc  progint/opi oracle side support functions for setting up trusted external procedure callbacks
kpotx kpov  progint/kpo support for managing local and distributed transaction coordination.
kpp2 kpp3  sqllang/parse kpp2 – parse routines for dimensions;
kpp3 – parse support for create/alter/drop summary  statements
kprb kprc  progint/rpi support for executing sql efficiently on the Oracle server side as well as for copying data types during rpi operations
kptsc  progint/twotask callback functions provided to all streaming operation as part of replication functionality
kpu kpuc kpucp  progint/kpu Oracle kernel side programmatic user interface,  cursor management functions and client side connection pooling support
kqan kqap kqas  argusdbg/argusdbg server-side notifiers and callbacks for debug operations.
kql kqld kqlp  dict/libcache SQL Library Cache manager – manages the sharing of sql statements in the shared pool
kqr  dict/rowcache row cache management. The row cache consists of a set of facilities to provide fast access to table definitions and locking capabilities.
krbi krbx krby krcr krd krpi  rcv Backup and recovery related operations :
krbi – dbms_backup_restore package underlying support.; krbx -  proxy copy controller; krby – image copy; krcr – Recovery Controlfile Redo; krd – Recover Datafiles (Media & Standby Recovery);  krpi – support for the package : dbms_pitr
krvg krvt  rcv/vwr krvg – support for generation of redo associated with DDL; krvt – support for redo log miner viewer (also known as log miner)
ksa ksdp ksdx kse ksfd ksfh ksfq ksfv ksi ksim ksk ksl ksm ksmd ksmg ksn ksp kspt ksq ksr kss ksst ksu ksut  vos support for various kernel associated capabilities
ksx sqlexec/execsvc support for query execution associated with temporary tables
ksxa ksxp ksxr  vos support for various kernel associated capabilities in relation to OPS or RAC operation
kta  space/spcmgmt support for DML locks and temporary tables associated with table access
ktb ktbt ktc  txn/lcltx transaction control operations at the block level : locking block, allocating space within the block , freeing up space, etc.
ktec ktef ktehw ktein ktel kteop kteu  space/spcmgmt support for extent management operations :
ktec – extent concurrency operations; ktef – extent format; ktehw – extent high water mark operations; ktein – extent  information operations; ktel – extent support for sql loader; kteop – extent operations : add extent to segment, delete extent, resize extent, etc. kteu – redo support for operations changing segment header / extent map
ktf  txn/lcltx flashback support
ktfb ktfd ktft ktm  space/spcmgmt ktfb – support for bitmapped space manipulation of files/tablespaces;  ktfd – dictionary-based extent management; ktft – support for temporary file manipulation; ktm – SMON operation
ktp ktpr ktr ktri  txn/lcltx ktp – support for parallel transaction operation; ktpr – support for parallel transaction recovery; ktr – kernel transaction read consistency;
ktri – support for dbms_resumable package
ktsa ktsap ktsau ktsb ktscbr ktsf ktsfx ktsi ktsm ktsp ktss ktst ktsx ktt kttm  space/spcmgmt support for checking and verifying space usage
ktu ktuc ktur ktusm  txn/lcltx internal management of undo and rollback segments
kwqa kwqi kwqic kwqid kwqie kwqit kwqj kwqm kwqn kwqo kwqp kwqs kwqu kwqx  oltp/qs support for advanced queuing :
kwqa – advanced queue administration; kwqi – support for AQ PL/SQL trusted callouts; kwqic – common AQ support functions; kwqid – AQ dequeue support; kwqie – AQ enqueue support; kwqit – time management operation; kwqj – job queue scheduler for propagation; kwqm – Multiconsumer queue IOT support; kwqn – queue notifier; kwqo – AQ support for checking instType checking options; kwqp – queueing propagation; kwqs – statistics handling; kwqu – handles lob data; kwqx – support for handling transformations
kwrc kwre  oltp/re rules engine evaluation
kxcc kxcd kxcs  sqllang/integ constraint processing
kxdr sqlexec/dmldrv DML driver entrypoint
kxfp kxfpb kxfq kxfr kxfx  sqlexec/pq parallel query support
kxhf kxib  sqlexec/execsvc kxhf – support for hash join file and memory management; kxib – index buffering operations
kxs  dict/instmgmt support for executing shared cursors
kxti kxto kxtr  dbproc/trigger support for trigger operation
kxtt  ram/partnmap support for temporary table operations
kxwph  ram/data support for managing attributes of the segment of a table / cluster / table-partition
kza  security/audit support for auditing operations
kzar  security/dac support for application auditing
kzck  security/crypto encryption support
kzd  security/dac support for dictionary access by security related functions
kzec  security/dbencryption support inserting and retrieving encrypted objects into and out of the database
kzfa kzft  security/audit support for fine grained auditing
kzia  security/logon identification and authentication operations
kzp kzra kzrt kzs kzu kzup  security/dac security related operations associated with privileges
msqima msqimb  sqlexec/sqlgen support for generating sql statements
ncodef npi npil npixfr  progint/npi support for managing remote network connection from  within the server itself
oba  sqllang/outbufal operator buffer allocate for various types of operators : concatenate, decode, NVL, etc.  the list is extensive.
ocik  progint/oci OCI oracle server functions
opiaba opidrv opidsa opidsc opidsi opiexe opifch opiino opilng opipar opipls opirip opitsk opix  progint/opi OPI Oracle server functions – these are at the top of the server stack and are called indirectly by the client in order to serve the client request.
orlr  objmgmt/objmgr support for C language interfaces to user-defined types (UDTs)
orp  objmgmt/pickler oracle’s external pickler / opaque type interfaces
pesblt pfri pfrsqc  plsql/cox pesblt – pl/sql built in interpreter; pfri – pl/sql runtime; pfrsqc – pl/sql callbacks for array sql and dml with returning
piht  plsql/gen/utl support for pl/sql implementation of utl_http package
pirg  plsql/cli/utl_raw support for pl/sql implementation of utl_raw package
pism  plsql/cli/utl_smtp support for pl/sql implementation of utl_smtp package
pitcb  plsql/cli/utl_tcp support for pl/sql implementation of utl_tcp package
piur  plsql/gen/utl_url support for pl/sql implementation of utl_url package
plio  plsql/pkg pl/sql object instantiation
plslm  plsql/cox support for NCOMP processing
plsm pmuc pmuo pmux  objmgmt/pol support for pl/sql handling of collections
prifold priold  plsql/cox support to allow rpc forwarding to an older release
prm  sqllang/param parameter handling associated with sql layer
prsa prsc prssz  sqllang/parse prsa – parser for alter cluster command; prsc – parser for create database command; prssz – support for parse context to be saved
psdbnd psdevn  progint/dbpsd psdbnd – support for managing bind variables; psdevn – support for pl/sql debugger
psdicd  progint/plsicds small number of ICD to allow pl/sql to call into ‘C’ source
psdmsc psdpgi  progint/dbpsd psdmsc – pl/sql system dependent miscellaneous functions ; psdpgi – support for opening and closing cursors in pl/sql
psf  plsql/pls pl/sql service related functions for instantiating called pl/sql unit in library cache
qbadrv qbaopn  sqllang/qrybufal provides allocation of buffer and control structures in query execution
qcdl qcdo  dict/dictlkup qcdl – query compile semantic analysis; qcdo – query compile dictionary support for objects
qci  dict/shrdcurs support for SQL language parser and semantic analyser
qcop qcpi qcpi3 qcpi4 qcpi5  sqllang/parse support for query compilation parse phase
qcs qcs2 qcs3 qcsji qcso  dict/dictlkup support for semantic analysis by SQL compiler
qct qcto  sqllang/typeconv qct – query compile type check operations; qcto -  query compile type check operators
qcu  sqllang/parse various utilities provided for sql compilation
qecdrv  sqllang/qryedchk driver performing high level checks on sql language query capabilities
qerae qerba qerbc qerbi qerbm qerbo qerbt qerbu qerbx qercb qercbi qerco qerdl qerep qerff qerfi qerfl qerfu qerfx qergi qergr qergs qerhc qerhj qeril qerim qerix qerjm qerjo qerle qerli qerlt qerns qeroc qeroi qerpa qerpf qerpx qerrm qerse qerso qersq qerst qertb qertq qerua qerup qerus qervw qerwn qerxt  sqlexec/rowsrc row source operators :
qerae – row source (And-Equal) implementation; qerba – Bitmap Index AND row source; qerbc – bitmap index compaction row source; qerbi – bitmap index creation row source; qerbm – QERB Minus row source; qerbo  – Bitmap Index OR row source; qerbt – bitmap convert row source; qerbu – Bitmap Index Unlimited-OR row source; qerbx – bitmap index access row source; qercb – row source: connect by; qercbi – support for connect by; qerco – count row source; qerdl – row source delete; qerep – explosion row source; qerff – row source fifo buffer; qerfi  – first row row source; qerfl  – filter row source definition; qerfu – row source: for update; qerfx – fixed table row source; qergi – granule iterator row source; qergr – group by rollup row source; qergs – group by sort row source; qerhc – row sources hash clusters; qerhj – row source Hash Join;  qeril  – In-list row source; qerim – Index Maintenance row source; qerix – Index row source; qerjo – row source: join; qerle – linear execution row source implementation; qerli – parallel create index; qerlt – row source populate Table;  qerns  – group by No Sort row source; qeroc – object collection iterator row source; qeroi – extensible indexing query component; qerpa – partition row sources; qerpf – query execution row source: prefetch; qerpx – row source: parallelizer; qerrm – remote row source; qerse – row source: set implementation; qerso – sort row source; qersq – row source for sequence number; qerst  – query execution row sources: statistics; qertb – table row source; qertq  – table queue row source; qerua – row source : union-All;
qerup – update row source; qerus – upsert row source ; qervw – view row source; qerwn – WINDOW row source; qerxt – external table fetch row source
qes3t qesa qesji qesl qesmm qesmmc  sqlexec/execsvc run time support for sql execution
qkacon qkadrv qkajoi qkatab qke qkk qkn qkna qkne  sqlexec/rwsalloc SQL query dynamic structure allocation routines
qks3t  sqlexec/execsvc query execution service associated with temp table transformation
qksmm qksmms qksop  sqllang/compsvc qksmm -  memory management services for the SQL compiler; qksmms – memory management simulation services for the SQL compiler; qksop – query compilation service for operand processing
qkswc  sqlexec/execsvc support for temp table transformation associated for with clause.
qmf  xmlsupp/util support for ftp server; implements processing of ftp commands
qmr qmrb qmrs  xmlsupp/resolver support hierarchical resolver
qms  xmlsupp/data support for storage and retrieval of XOBs
qmurs  xmlsupp/uri support for handling URIs
qmx qmxsax  xmlsupp/data qmx – xml support; qmxsax – support for handling sax processing
qmxtc  xmlsupp/sqlsupp support for ddl  and other operators related to the sql XML support
qmxtgx  xmlsupp support for transformation : ADT -> XML
qmxtsk  xmlsupp/sqlsupp XMLType support functions
qsme  summgmt/dict summary management expression processing
qsmka qsmkz  dict/dictlkup qsmka – support to analyze request in order to determine whether a summary could be created that would be useful; qsmkz – support for create/alter summary semantic analysis
qsmp qsmq qsmqcsm qsmqutl  summgmt/dict qsmp – summary management partition processing; qsmq – summary management dictionary access; qsmqcsm – support for create / drop / alter summary and related dimension operations; qsmqutl – support for summaries
qsms  summgmt/advsvr summary management advisor
qxdid  objsupp/objddl support for domain index ddl operations
qxidm  objsupp/objsql support for extensible index dml operations
qxidp  objsupp/objddl support for domain index ddl partition operations
qxim  objsupp/objsql extensible indexing support for objects
qxitex qxopc qxope  objsupp/objddl qxitex – support for create / drop indextype; qxopc – execution time support for operator callbacks; qxope – execution time support for operator DDL
qxopq qxuag qxxm  objsupp/objsql qxopq – support for queries with user-defined operators; qxuag – support for user defined aggregate processing; qxxm – queries involving external tables
rfmon rfra rfrdb rfrla rfrm rfrxpt  drs implements 9i data guard broker monitor
rnm  dict/sqlddl manages rename statement operation
rpi  progint/rpi recursive procedure interface which handles the environment setup where multiple recursive statements are executed from one top level statement
rwoima  sqlexec/rwoprnds row operand operations
rwsima  sqlexec/rowsrc row source implementation/retrieval according to the defining query
sdbima  sqlexec/sort manages and performs sort operation
selexe  sqlexec/dmldrv handles the operation of select statement execution
skgm  osds platform specific memory management routines interfacing with O.S. allocation functions
smbima sor  sqlexec/sort manages and performs sort operation
sqn  dict/sqlddl support for parsing references to sequences
srdima srsima stsima  sqlexec/sort manages and performs sort operation
tbsdrv  space/spcmgmt operations for executing create / alter / drop tablespace and related supporting functions
ttcclr ttcdrv ttcdty ttcrxh ttcx2y  progint/twotask two task common layer which provides high level interaction and negotiation functions for the Oracle client when communicating with the server. It also provides the important function of converting client side data / data types into the equivalent on the server and vice versa
uixexe ujiexe updexe upsexe  sqlexec/dmldrv support for : index maintenance operations, the execution of the update statement and associated actions connected with update as well as the upsert command which combines the operations of update and insert
vop  optim/vwsubq view optimisation related functionality
xct  txn/lcltx support for the management of transactions and savepoint operations
xpl  sqlexec/expplan support for the explain plan command
xty  sqllang/typeconv type checking functions
zlke  security/ols/intext label security error handling component

 

ORA-600 Lookup Error Categories (part 1b)

ORA-600 Lookup Error Categories

 

In the Oracle Server source there are two types of ORA-600 error:

either the first parameter is a number which reflects the source component or layer the error is connected with;

or

the first parameter is a mnemonic which indicates the source module where the error originated. This type of internal error is now used in preference to an internal error number.

 

Both types of error may be possible in the Oracle server.

Internal Errors Categorised by number range

The following table provides an indication of the internal error codes used in the Oracle server. Thus, if ORA-600 [X] is encountered, it is possible to glean some high-level background information: the error is generated in the Y layer, which indicates that there may be a problem with Z. For example, an ORA-600 [4400] falls in the 4000 range of the Transaction Layer (txn/lcltx, Transaction Control).

Ora-600 Base Functionality Description
1 Service Layer The service layer contains a variety of service related components associated with in-memory activities in the SGA, such as the management of Enqueues, System Parameters and System State objects (these objects track the use of structures in the SGA by Oracle server processes). In the main, this layer provides support for process communication, locking, and the management of structures that allow multiple user processes to connect and interact within the SGA. Note: vos (Virtual Operating System) provides the features supporting the functionality above; as the name suggests, it provides base functionality in much the same way as an Operating System does.

Ora-600 Base Functionality Description
1 vos Component notifier
100 vos Debug
300 vos Error
500 vos Lock
700 vos Memory
900 vos System Parameters
1100 vos System State object
1110 vos Generic Linked List management
1140 vos Enqueue
1180 vos Instance Locks
1200 vos User State object
1400 vos Async Msgs
1700 vos license Key
1800 vos Instance Registration
1850 vos I/O Services components
2000 Cache Layer Where errors are generated in this area, it is advisable to check whether the error is repeatable and whether it is associated with recovery or undo type operations; where this is the case and the error is repeatable, it may suggest some kind of hardware or physical issue with a data file, control file or log file. The Cache layer is responsible for making the changes to the underlying files as well as managing the related memory structures in the SGA. Note: rcv indicates recovery. It is important to remember that the Oracle cache layer effectively goes through the same code paths as the recovery mechanism.

Ora-600 Base Functionality Description
2000 server/rcv Cache Op
2100 server/rcv Control File mgmt
2200 server/rcv Misc (SCN etc.)
2400 server/rcv Buffer Instance Hash Table
2600 server/rcv Redo file component
2800 server/rcv Db file
3000 server/rcv Redo Application
3200 server/cache Buffer manager
3400 server/rcv Archival & media recovery component
3600 server/rcv recovery component
3700 server/rcv Thread component
3800 server/rcv Compatibility segment

It is important  to consider when the error occurred and the context in which the error was generated. If the error does not reproduce, it may be an in memory issue.

4000 Transaction Layer Primarily the transaction layer is involved with maintaining structures associated with the management of transactions. As with the cache layer, problems encountered in this layer may indicate some kind of issue at the physical level, so it is important to try to repeat the same steps to see if the problem recurs.

Ora-600 Base Functionality Description
4000 server/txn Transaction Undo
4100 server/txn Transaction Undo
4210 server/txn Transaction Parallel
4250 server/txn Transaction List
4300 space/spcmgmt Transaction Segment
4400 txn/lcltx Transaction Control
4450 txn/lcltx distributed transaction control
4500 txn/lcltx Transaction Block
4600 space/spcmgmt Transaction Table
4800 dict/rowcache Query Row Cache
4900 space/spcmgmt Transaction Monitor
5000 space/spcmgmt Transaction Extent

It is important to try and determine what the object involved in any reproducible problem is. Then use the analyze command. For more information, please refer to the analyze command as detailed in the context of  ; in addition, it may be worth using the dbverify as discussed in .

6000 Data Layer The data layer is responsible for maintaining and managing the data in the database tables and indexes. Issues in this area may indicate some kind of physical issue at the object level; therefore, it is important to try to isolate the object and then perform an analyze on the object to validate its structure.

Ora-600 Base Functionality Description
6000 ram/data, ram/analyze, ram/index data, analyze command and index related activity
7000 ram/object lob related errors
8000 ram/data general data access
8110 ram/index index related
8150 ram/object general data access

Again, it is important to try and determine what the object involved in any reproducible problem is. Then use the analyze command. For more information, please refer to the analyze command as detailed in the context of  ; in addition, it may be worth using the dbverify as discussed in .

12000 User/Oracle Interface & SQL Layer Components This layer governs the user interface with the Oracle server. Problems generated by this layer usually indicate : some kind of presentation or format error in the data received by the server, i.e. the client may have sent incomplete information; or there is some kind of issue which indicates that the data is received out of sequence

Ora-600 Base Functionality Description
12200 progint/kpo, progint/opi lob related errors at interface level on server side, xa, etc.
12300 progint/if OCI interface to coordinating global transactions
12400 sqlexec/rowsrc table row source access
12600 space/spcmgmt operations associated with tablespace : alter / create / drop operations ; operations associated with create table / cluster
12700 sqlexec/rowsrc bad rowid
13000 dict/if dictionary access routines associated with kernel compilation
13080 ram/index kernel Index creation
13080 sqllang/integ constraint mechanism
13100 progint/opi archival and Media Recovery component
13200 dict/sqlddl alter table mechanism
13250 security/audit audit statement processing
13300 objsupp/objdata support for handling of object generation and object access
14000 dict/sqlddl sequence generation
15000 progint/kpo logon to Oracle
16000 tools/sqlldr sql loader related

You should try and repeat the issue and with the use of sql trace , try and isolate where exactly the issue may be occurring within the application.

14000 System Dependent Component internal error values This layer manages interaction with the OS. Effectively it acts as the glue which allows the Oracle server to interact with the OS. The types of operation which this layer manages are indicated as follows.

Ora-600 Base Functionality Description
14000 osds File access
14100 osds Concurrency management;
14200 osds Process management;
14300 osds Exception-handler or signal handler management
14500 osds Memory allocation
15000 security/dac, security/logon, security/ldap local user access validation; challenge / response activity for remote access validation; auditing operation; any activities associated with granting and revoking of privileges; validation of password with external password file
15100 dict/sqlddl this component manages operations associated with creating, compiling (altering), renaming, invalidating, and dropping  procedures, functions, and packages.
15160 optim/cbo cost based optimizer layer is used to determine optimal path to the data based on statistical information available on the relevant tables and indexes.
15190 optim/cbo cost based optimizer layer. Used in the generation of a new index to determine how the index should be created. Should it be constructed from the table data or from another index.
15200 dict/shrdcurs used to in creating sharable context area associated with shared cursors
15230 dict/sqlddl manages the compilation of triggers
15260 dict/dictlkup, dict/libcache dictionary lookup and library cache access
15400 server/drv manages alter system and alter session operations
15410 progint/if manages compilation of pl/sql packages and procedures
15500 dict/dictlkup performs dictionary lookup to ensure semantics are correct
15550 sqlexec/execsvc, sqlexec/rowsrc hash join execution management; parallel row source management
15600 sqlexec/pq component provides support for Parallel Query operation
15620 repl/snapshots manages the creation of snapshot or materialized views as well as related snapshot / MV operations
15640 repl/defrdrpc layer containing various functions for examining the deferred transaction queue and retrieving information
15660 jobqs/jobq manages the operation of the Job queue background processes
15670 sqlexec/pq component provides support for Parallel Query operation
15700 sqlexec/pq component provides support for Parallel Query operation; specifically mechanism for starting up and shutting down query slaves
15800 sqlexec/pq component provides support for Parallel Query operation
15810 sqlexec/pq component provides support for Parallel Query operation; specifically functions for creating mechanisms through which Query co-ordinator can communicate with PQ slaves;
15820 sqlexec/pq component provides support for Parallel Query operation
15850 sqlexec/execsvc component provides support for the execution of SQL statements
15860 sqlexec/pq component provides support for Parallel Query operation
16000 loader sql Loader direct load operation;
16150 loader this layer is used for ‘C’ level call outs to direct loader operation;
16200 dict/libcache this is part of library Cache operation. Amongst other things it manages the dependency of SQL objects and tracks who is permitted to access these objects;
16230 dict/libcache this component is responsible for managing access to remote objects as part of library Cache operation;
16300 mts/mts this component relates to MTS (Multi Threaded Server) operation
16400 dict/sqlddl this layer contains functionality which allows tables to be loaded / truncated and their definitions to be modified. This is part of dictionary operation;
16450 dict/libcache this layer provides support for multi-instance access to the library cache; this functionality is therefore applicable to OPS environments;
16500 dict/rowcache this layer provides support to load / cache Oracle’s dictionary in memory in the library cache;
16550 sqlexec/fixedtab this component maps data structures maintained in the Oracle code to fixed tables such that they can be queried using the SQL layer;
16600 dict/libcache this layer performs management of data structures within the library cache;
16651 dict/libcache this layer performs management of dictionary related information within library Cache;
16701 dict/libcache this layer provides library Cache support to support database creation and forms part of the bootstrap process;
17000 dict/libcache this is the main library Cache manager. This Layer maintains the in memory representation of cached sql statements together will all the necessary support that this demands;
17090 generic/vos this layer implements error management operations: signalling errors, catching errors, recovering from errors, setting error frames, etc.;
17100 generic/vos Heap manager. The Heap manager manages the storage of internal data in an orderly and consistent manner. There can be many heaps serving various purposes, and heaps within heaps; common examples are the SGA heap, UGA heap and PGA heap. Within a heap there are consistency markers which aim to ensure that the heap is always in a consistent state. Heaps are used extensively and are in-memory structures, not on disk.
17200 dict/libcache this component deals with loading remote library objects into the local library cache with information from the remote database.
17250 dict/libcache more library cache errors ; functionality for handling pipe operation associated with dbms_pipe
17270 dict/instmgmt this component manages instantiations of procedures, functions, packages, and cursors in a session. This provides a means to keep track of what has been loaded in the event of process death;
17300 generic/vos manages certain types of memory allocation structure.  This functionality is an extension of the Heap manager.
17500 generic/vos relates to various I/O operations. These relate to async i/o operation,  direct i/o operation and the management of writing buffers from the buffer cache by potentially a number of database writer processes;
17625 dict/libcache additional library Cache supporting functions
17990 plsql plsql ‘standard’ package related issues
18000 txn/lcltx transaction and savepoint management operations
19000 optim/cbo cost based optimizer related operations
20000 ram/index bitmap index and index related errors.
20400 ram/partnmap operations on partition related objects
20500 server/rcv server recovery related operation
21000 repl/defrdrpc, repl/snapshot, repl/trigger replication related features
23000 oltp/qs AQ related errors.
24000 dict/libcache operations associated with managing stored outlines
25000 server/rcv tablespace management operations

 
