If there are indexes on the table, a COUNT() query will do an index full scan (small index) or an index fast full scan (large index).
But which index will the optimizer use? The primary key index, or the index on the column inside the COUNT()?
The answer is that it will use the smallest index on the table. Take a look at the execution plan, then verify that the index segment it picks is the smallest one.
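A quick way to verify, assuming a hypothetical table MY_TABLE:
-----------------------
-- Show which index the optimizer picks for the count
EXPLAIN PLAN FOR SELECT COUNT(*) FROM my_table;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Compare the sizes of the index segments on the table
SELECT s.segment_name, s.bytes
FROM dba_segments s, dba_indexes i
WHERE s.owner = i.owner
AND s.segment_name = i.index_name
AND i.table_name = 'MY_TABLE'
ORDER BY s.bytes;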
Friday, September 2, 2011
Tuesday, August 30, 2011
Using NEW_VALUE to Record Start/End Time in Oracle SQL Script
In a SQL script, I need to record the start and end time of a block that performs some DML, so that I can later query the records that were DML'ed during that period.
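The trick is SQL*Plus's COLUMN ... NEW_VALUE clause: it copies the value of the named column from the most recently executed query into a substitution variable, which can then be referenced as &START_DATETIME and &END_DATETIME.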
COLUMN CHAR_SYSDATE NEW_VALUE START_DATETIME
SELECT TO_CHAR(SYSDATE,'YYYYMMDDHH24MISS') CHAR_SYSDATE FROM DUAL;
......
...The block
......
COLUMN CHAR_SYSDATE NEW_VALUE END_DATETIME
SELECT TO_CHAR(SYSDATE,'YYYYMMDDHH24MISS') CHAR_SYSDATE FROM DUAL;
SELECT ... FROM ...
WHERE audit_creation_date BETWEEN TO_DATE('&START_DATETIME','YYYYMMDDHH24MISS') AND TO_DATE('&END_DATETIME','YYYYMMDDHH24MISS');
Thursday, August 25, 2011
Indexes on Foreign Keys
Indexes on foreign keys are not always required, but I recommend creating them when: 1) there are deletions on the parent table, and 2) the child table is large.
It has happened to me a few times that when deleting from the parent table, the delete would just about hang, even though there were no associated records in the child table. The execution plan wouldn't show anything about the foreign key validation against the child table. After adding the indexes, the deletions finished instantly.
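A minimal sketch of the scenario, with hypothetical table names:
-----------------------
-- Without an index on child.parent_id, each delete from parent must
-- scan the child table to validate the foreign key.
CREATE TABLE parent (id NUMBER PRIMARY KEY);
CREATE TABLE child (
    id NUMBER PRIMARY KEY,
    parent_id NUMBER REFERENCES parent (id)
);
CREATE INDEX child_parent_id_idx ON child (parent_id);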
Tuesday, July 12, 2011
Friday, July 8, 2011
Datapump doesn't support Unix PIPE
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_overview.htm#sthref48
The following complained that the dump file already exists:
-----------------------
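# Attempted workaround: export through a named pipe and compress on the
# fly, as was commonly done with the legacy exp utility. expdp refuses
# the pipe because Data Pump treats ${dump_file} as an existing file.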
mknod ${dump_file} p
compress < ${dump_file} > ${dump_file}.Z &
expdp \'/ as sysdba\' dumpfile=${dump_file} directory=db_exp schemas=$1 logfile=${dump_file}.log
Friday, June 24, 2011
No Mysteries of Oracle Locally Managed Tablespace (LMT) and Automatic Segment Space Management (ASSM)
Initially I called this post "Mysteries of ...", but I decided to change it to "No mysteries of ...". Really, there are no mysteries once you understand how it works.
The syntax to create a tablespace:
CREATE TABLESPACE xxx DATAFILE ...
EXTENT MANAGEMENT LOCAL [AUTOALLOCATE|UNIFORM SIZE nn]
[SEGMENT SPACE MANAGEMENT AUTO|MANUAL]
/
With LMT:
1. The NEXT storage parameter becomes obsolete.
2. The default extent allocation is AUTOALLOCATE, meaning Oracle will automatically determine the size of the extents. In my database here, for an object without INITIAL specified, both the INITIAL and the default extent size are 640K. For objects with INITIAL from 100M~700M, the average extent size is 8M; for an object with INITIAL set to 8G, the average extent size is 62M – by looking at BYTES/EXTENTS in DBA_SEGMENTS (see the query below).
3. When UNIFORM SIZE nn is specified, every extent is of size nn.
4. Some experts, including Tom Kyte, recommend AUTOALLOCATE over UNIFORM SIZE: just use system-allocated extent sizes; segments will start small and grow their extent size as needed.
With ASSM:
1. It automates freelist management by replacing the traditional one-way linked-list freelists with bitmap freelists.
2. Storage parameters including PCTUSED, FREELISTS, and FREELIST GROUPS no longer apply. The columns PCT_USED, FREELISTS and FREELIST_GROUPS in DBA_TABLES, and FREELISTS and FREELIST_GROUPS in DBA_SEGMENTS, are NULL.
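The BYTES/EXTENTS ratio mentioned above can be queried directly; the tablespace name is a placeholder:
-----------------------
SELECT segment_name, extents, ROUND(bytes/extents/1024) avg_extent_kb
FROM dba_segments
WHERE tablespace_name = 'XXX'
ORDER BY bytes DESC;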
Wednesday, June 8, 2011
Testing DataStage ODBC driver connectivity
Excerpted from the IBM DataStage documentation:
You can test whether your ODBC drivers can successfully connect to your data sources.
Symptoms
If a job fails to connect to a data source using an ODBC connection, test the connection outside the job to see if the ODBC connection is the source of the problem.
Environment
The procedure applies to ODBC connections in a UNIX environment.
Diagnosing the problem
To test the connectivity of your ODBC connections:
1. Change directory to $DSHOME and set up the IBM® InfoSphere™ DataStage® environment by running dsenv:
. ./dsenv
2. Start the engine shell:
cd $DSHOME
bin/dssh
3. In the engine shell, log to the project:
LOGTO project_name
4. Get a list of available DSNs by typing:
DS_CONNECT
5. Test the required connection by typing:
DS_CONNECT DSN
where DSN specifies the connection that you want to test.
6. Enter the user name and password to connect to the required data source.
7. After you have connected to the data source, enter .Q to close the connection.
Monday, May 16, 2011
A Bash Script Template
#!/bin/sh
#---------------------------------------
# Script Description/Usage:
# To perform start/stop/restart of ......
#---------------------------------------
# Modification History:
# Jan 01, 2011 : A.Nie : initial creation
#---------------------------------------
#
RETVAL=0
# Check input arguments
if [ $# -ne 1 ] ; then
    echo "Usage: `basename $0` {start|stop|restart}"
    RETVAL=1
    exit $RETVAL
fi
# Check running user
if [ `id -un` != "root" ] ; then
    echo "Must run the script as root, exiting..."
    RETVAL=2
    exit $RETVAL
fi
# Define variables
VAR1=VARVALUE1
......
# Start function
start() {
    ......
}
# Stop function
stop() {
    ......
}
# Restart function
restart() {
    stop
    start
}
# Main
# While LOOP
while [ condition ] ; do
    ......
done
# For LOOP
for VAR in ...... ; do
    ......
done
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        RETVAL=3
        ;;
esac
exit $RETVAL
Wednesday, April 27, 2011
New Hidden Parameters in Oracle 11.2 Database
Read from:
http://blogs.oracle.com/UPGRADE/2011/03/new_hidden_parameters_in_oracl.html
_DATAFILE_WRITE_ERRORS_CRASH_INSTANCE=TRUE|FALSE
_MEMORY_IMM_MODE_WITHOUT_AUTOSGA=TRUE|FALSE
Monday, April 25, 2011
Using Unix Scripts to Start and Stop Weblogic Managed Servers with Node Manager
In my startup script, I start the Node Manager, then start the Managed Server using WLST
---------------------------------
cd ${MW_HOME}/wlserver_10.3/server/bin
./startNodeManager.sh &
sleep 60
......
cd ${MW_HOME}/wlserver_10.3/common/bin
./wlst.sh ./startManagedWeblogic.py
......
In my stop script, I stop the Managed Server using WLST, then stop the Node Manager
---------------------------------
cd ${MW_HOME}/wlserver_10.3/common/bin
./wlst.sh ./stopManagedWeblogic.py
sleep 10
......
kill -9 `ps -ef|grep nodemanager|grep -v grep|awk '{print $2}'`
Here is the content of startManagedWeblogic.py
---------------------------------
nmConnect('<username>','<password>','<domainname>','5556','<projectname>','<projectdirectory>','ssl')
nmStart('<managedServer>')
Here is the content of stopManagedWeblogic.py
---------------------------------
nmConnect('<username>','<password>','<domainname>','5556','<projectname>','<projectdirectory>','ssl')
nmKill('<managedServer>')
Customize Weblogic Managed Server Start Properties in the Admin Console
After the creation of the managed servers, we often need to customize their 'Server Start' properties in the Admin Console in order to start them properly from the console. Below is a screenshot of my configuration:
Except for the 'Class Path' and 'Arguments' properties, the properties are straightforward and easy to fill in.
For those two properties, my approach was to first start the managed server with the startManagedWebLogic.sh script, then do a 'ps -ef|grep java' to find the managed server process. You will find the class path after -Djava.class.path=, which you can copy and paste into the 'Class Path' property in the Admin Console; you will also find the Java arguments on the process command line, which you can copy and paste into the 'Arguments' property.
You often need to adjust the arguments to increase the Java heap size, as with the defaults you are likely to get out-of-memory errors. You can also add your own arguments there. For example, we added '-Ddenv=dev1' so that the applications can identify which node they are running on in a cluster environment.
Save the configuration. The managed servers will then be started with the customized properties, whether via the startManagedWebLogic.sh script, the Admin Console, or the Node Manager.
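For illustration, a typical 'Arguments' value might look like this (the flags and values are hypothetical):
---------------------------------
-Xms512m -Xmx1024m -XX:MaxPermSize=256m -Ddenv=dev1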
Thursday, April 7, 2011
Backup DataStage Project Assets
Using istool, I developed a script that accepts the project name as a parameter and backs up the entire project assets. The istool command is called in the script in the following way:
./istool export -domain $was_domname -u $username -p $passwd -archive $archname -datastage "-incdep -base=$DOMAINNAME/$PROJNAME */*.*"
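A minimal sketch of such a wrapper around that command, with hypothetical values for the domain, credentials, and paths:
-----------------------
#!/bin/sh
# Usage: backup_dsproject.sh <project_name>  -- hypothetical wrapper
PROJNAME=$1
DOMAINNAME=ishost:9080                  # hypothetical Information Server domain
was_domname=$DOMAINNAME
username=isadmin                        # hypothetical credentials
passwd=secret
archname=/backup/${PROJNAME}_`date +%Y%m%d`.isx
cd /opt/IBM/InformationServer/Clients/istools/cli   # hypothetical istool location
./istool export -domain $was_domname -u $username -p $passwd -archive $archname -datastage "-incdep -base=$DOMAINNAME/$PROJNAME */*.*"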
Things I noticed:
- I used double quotation marks to quote the -datastage parameters instead of single quotation marks (as shown in the manual), because I have to pass the domain name and project name as shell script variables, which are not expanded inside single quotation marks. The double quotation marks work.
- To back up the entire project's assets, specify */*.*. It only takes one * and backs up all sub-folders recursively; */*/*.* or more *'s won't work.
- Note that an istool export can only be imported as a whole; you cannot select individual objects to import. To be able to select individual objects on import, export with dsexport or dscmdexport instead, then import with dsimport or dscmdimport.
- istool exists on both the client and the server, so you can schedule the export as a cron job if your server is on Linux/Unix. But dsexport and dscmdexport only exist on the client.
- dsexport is an interactive tool, while dscmdexport is a non-interactive tool that you can use to schedule jobs with the Windows task manager.
Wednesday, April 6, 2011
DataStage 8.1 Logging Issues and How to Purge the Logs
In DataStage 8.1, logging goes to the metadata repository database by default, which has caused performance issues, including logs being delayed in the Director or even missing. In 8.5, IBM reverted the change and logs are written to the local project log files again. See the link:
https://www-304.ibm.com/support/docview.wss?dc=D600&rs=14&uid=swg21370048&context=SSZJPZ&cs=UTF-8&lang=en&loc=en_US
In addition to the performance issues, I had another issue: my metadata database filled up quickly because the logs in xmeta would never be purged by the system. Before I realized it, the xmeta tablespace was already 30GB, and the majority of the space was consumed by a table called LOGGING_LOGGINGEVENT1466CB5F, which had over 6 million records. I needed to purge the logs and release the space.
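A quick way to see which segments are consuming the space (the tablespace name is an assumption; substitute your own):
-----------------------
SELECT segment_name, segment_type, ROUND(bytes/1024/1024) size_mb
FROM dba_segments
WHERE tablespace_name = 'XMETA'
ORDER BY bytes DESC;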
In my case, I looked into two ways to purge the logs:
- Using the Information Server web console. I created a log view 'ALL' with all the categories - but really, later when querying the table (grouping by the column CATEGORYNAME_XMETA) I found the majority of the records were in the IIS-DSTAGE-RUN category. Then I tried to purge the log view, but my web console session timed out every once in a while with only some of the records purged. I could have repeated this many times until all the records were purged, but I decided to go with the second way, using LoggingAdmin.sh, as mentioned in the IBM link above.
- Using LoggingAdmin.sh. My initial try failed with Java class-not-found errors. After working with IBM support, it turned out WebSphere was in what is known as the minimal initialization state, meaning only a few of the services were active, which didn't include the scheduling service. I was asked to put the WAS back into the normal state with these commands:
- Invoke: $ISHOME/ASBServer/bin/PropertyAdmin.sh -unset -key minimalISFInitialization
- Start: $ISHOME/ASBServer/bin/MetadataServer.sh start
After that, I was able to schedule a purge job with LoggingAdmin.sh and remove the log entries from the table. However, I was not done yet: I still needed to free the LOB space; see another note of mine:
http://ccdba-net.blogspot.com/2010/09/undo-data-for-lob-segment-is-kept.html
The SQL commands I used:
alter table xmeta.LOGGING_LOGGINGEVENT1466CB5F modify lob(CIT1_XMETA) (shrink space cascade);
alter table xmeta.LOGGING_LOGGINGEVENT1466CB5F modify lob(CIT2_XMETA) (shrink space cascade);
alter table xmeta.LOGGING_LOGGINGEVENT1466CB5F modify lob(CIT3_XMETA) (shrink space cascade);
alter table xmeta.LOGGING_LOGGINGEVENT1466CB5F modify lob(CIT4_XMETA) (shrink space cascade);
alter table xmeta.LOGGING_LOGGINGEVENT1466CB5F modify lob(CIT5_XMETA) (shrink space cascade);
alter table xmeta.LOGGING_LOGGINGEVENT1466CB5F modify lob(CIT6_XMETA) (shrink space cascade);
alter table xmeta.LOGGING_LOGGINGEVENT1466CB5F modify lob(CONTEXT_XMETA) (shrink space cascade);
alter table xmeta.LOGGING_LOGGINGEVENT1466CB5F modify lob(MESSAGE_XMETA) (shrink space cascade);
alter table xmeta.LOGGING_LOGGINGEVENT1466CB5F modify lob(THROWABLEINFO_XMETA) (shrink space cascade);
ALTER TABLE xmeta.LOGGING_LOGGINGEVENT1466CB5F ENABLE ROW MOVEMENT;
ALTER TABLE xmeta.LOGGING_LOGGINGEVENT1466CB5F SHRINK SPACE CASCADE;
ALTER TABLE xmeta.LOGGING_LOGGINGEVENT1466CB5F DISABLE ROW MOVEMENT;
But still, I was not done: the database data files wouldn't shrink because of the high water mark. My last resort was to shut down the server, export the xmeta schema, drop and recreate the tablespace, import the xmeta schema back, then bring up the server.
Monday, March 28, 2011
Oracle Temporary Table
Syntax: CREATE GLOBAL TEMPORARY TABLE ... ON COMMIT DELETE|PRESERVE ROWS;
DELETE ROWS: specify it for a transaction-specific temporary table. This is the default. Oracle database will truncate the table (delete all its rows) after each commit.
PRESERVE ROWS: specify it for a session-specific temporary table. Oracle database will truncate the table (delete all its rows) when the session is terminated.
When a temporary table is first created, its metadata is stored in the data dictionary, but no space is allocated. Space is allocated for the table segment at the time of the first DML operation on the table, in the user's default temporary tablespace. Multiple sessions/users can use the same temporary table definition, but each session is allocated its own temporary table segment.
Privilege: for a user to access a temporary table in another schema, the same privileges are needed as for a normal table: SELECT, INSERT, UPDATE or DELETE. However, INSERT privilege alone is enough to use it: the user can insert rows, and the DELETE|PRESERVE ROWS cleanup applies to that session's rows regardless of the other privileges.
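A minimal sketch of the transaction-specific (default) behavior, with a hypothetical table name:
-----------------------
CREATE GLOBAL TEMPORARY TABLE gtt_demo (id NUMBER, val VARCHAR2(30))
ON COMMIT DELETE ROWS;
INSERT INTO gtt_demo VALUES (1, 'x');
SELECT COUNT(*) FROM gtt_demo;  -- returns 1 inside the transaction
COMMIT;
SELECT COUNT(*) FROM gtt_demo;  -- returns 0: rows truncated at commit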
Monday, March 14, 2011
Daylight saving time - issue with cron jobs
I had two Unix cron jobs to stop and then start the Oracle instances and listeners weekly on Sundays. They had run fine until this Monday morning, when my developers reported that they couldn't connect to Oracle.
After investigation I found the culprit to be daylight saving time. The stop job was scheduled at 2:15am and the start job at 3:00am. From the logs I found the stop job was actually kicked off at 3:00am because of the daylight saving change, as was the start job; both jobs ran at the same time, which is why the listeners were not started.
So my suggestion is not to schedule cron jobs between 2am and 3am on Sundays.
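For illustration, crontab entries like these (the paths are hypothetical) collide on the spring-forward Sunday: 2:15am doesn't exist that morning, so cron kicks the stop job off at 3:00am together with the start job:
-----------------------
15 2 * * 0 /home/oracle/scripts/stop_oracle.sh
0 3 * * 0 /home/oracle/scripts/start_oracle.sh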