Tuning for Oracle9i on Tru64

 


Gathering Database Statistics on Tru64

Oracle9i release 1 (9.0.1) runs only on Tru64 UNIX V5.0A or higher. This is because Compaq changed the size of the long double data type from 64 bits on Tru64 UNIX V4.0x to 128 bits on Tru64 UNIX V5.x. This change causes certain Oracle operations to perform with increased precision. One of these operations stores statistics in the data dictionary after a table or index is analyzed.

The query optimizer within the Oracle server uses the statistics stored in the data dictionary to determine how best to execute a query. If the stored statistics do not match the statistics calculated by the query optimizer while it searches for the best plan, the query optimizer might use the wrong plan to execute the query. This can cause the query to perform poorly or fail.

For this reason, gather statistics for all objects in each schema again after the upgrade. The DBMS_UTILITY.ANALYZE_SCHEMA procedure analyzes an entire schema using either estimated or computed statistics. Estimating statistics is reasonably quick, depending on the number or percentage of rows sampled, but the resulting statistics are only as accurate as the amount of data sampled. Computing statistics takes longer because every block in each table or index is read, but it produces extremely accurate statistics.

The DBMS_STATS.GATHER_SCHEMA_STATS procedure performs the same functions as ANALYZE_SCHEMA but offers more flexibility. One of its features is the ability to save the current table or index statistics in a statistics table in case the new statistics cause problems.

Oracle Corporation recommends that you use the same analysis and sampling method when gathering the new statistics as you used in the previous Oracle version.
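For example, the following SQL*Plus sketch gathers estimated statistics for a schema while saving the existing statistics first; the schema name SCOTT, the statistics table name SAVED_STATS, and the 20 percent sample are assumptions for illustration only:

SQL> REM SCOTT, SAVED_STATS, and the 20 percent sample are illustrative values
SQL> BEGIN
  2    DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'SAVED_STATS');
  3    DBMS_STATS.GATHER_SCHEMA_STATS(
  4      ownname          => 'SCOTT',
  5      estimate_percent => 20,
  6      stattab          => 'SAVED_STATS');
  7  END;
  8  /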

 

Oracle9i Real Application Clusters on Tru64

This section describes Oracle9i Real Application Clusters on Tru64.

 

Reliable Datagram

Reliable Datagram (RDG) is an IPC infrastructure for the Tru64 TruCluster platform. It is the default IPC method for Oracle9i on Tru64 and is optimized for Oracle9i Real Application Clusters environments.

 

Requirements

RDG requires that the node be a member of the cluster and connected through the Memory Channel interconnect.
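For example, you can confirm cluster membership with the TruCluster clu_get_info command; this is only a quick check, and the output depends on your cluster configuration:

$ clu_get_info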

 

RDG Subsystem Operating System Parameter Settings  

  • max_objs: At least 5 times the number of Oracle processes per node, up to the larger of 10240 or the number of Oracle processes multiplied by 70.

  • msg_size: Equal to or greater than the maximum value of the DB_BLOCK_SIZE parameter for the database. Oracle Corporation recommends a value of 32768 because Oracle9i supports different block sizes for each tablespace.

  • max_async_req: At least 100. A value of 256 might provide better performance.

  • max_sessions: At least the number of Oracle processes plus 2.

  • rdg_max_auto_msg_wires: Must be set to 0.
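As an illustration only, the following /etc/sysconfigtab stanza shows how these settings might look for a node running roughly 500 Oracle processes; the values are derived from the rules above and are assumptions, not recommendations:

rdg:
    max_objs = 10240
    msg_size = 32768
    max_async_req = 256
    max_sessions = 502
    rdg_max_auto_msg_wires = 0

You can verify the active values with the sysconfig -q rdg command.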

 

Enabling UDP IPC

In Oracle9i, RDG is the default IPC method on Tru64. When the Oracle9i Real Application Clusters option is enabled, the Global Cache Service (GCS), Global Enqueue Service (GES), Interprocessor Parallel Query (IPQ), and Cache Fusion use RDG. The User Datagram Protocol (UDP) IPC implementation is still available, but you must enable it explicitly.

You must enable the Oracle9i Real Application Clusters option before enabling UDP IPC. To enable the Oracle9i Real Application Clusters option, use the Oracle Universal Installer or enter the following commands:

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_on
$ make -f ins_rdbms.mk ioracle

For the Oracle IPC routines to use the UDP protocol, relink the oracle executable. Before performing the following steps, shut down all instances in the cluster.

To enable the UDP IPC, enter the following commands:

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk ipc_udp
$ make -f ins_rdbms.mk ioracle

To disable UDP IPC and revert to the default implementation for Oracle9i Real Application Clusters, enter the following commands:

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_on
$ make -f ins_rdbms.mk ioracle

 

TRU64_IPC_NET Initialization Parameter

The TRU64_IPC_NET initialization parameter is useful only if the Oracle9i Real Application Clusters and UDP IPC options are enabled. It enables you to specify an interconnect for all IPC traffic, including Oracle GCS, GES, and IPQ traffic. Use the TRU64_IPC_NET parameter when the Memory Channel interconnect is overloaded. Overall cluster stability and performance might improve when you force Oracle GCS, GES, and IPQ traffic over a different interconnect by setting the TRU64_IPC_NET parameter. For example, to use the first Fiber Distributed Data Interface (FDDI) network controller for all GCS, GES, and IPQ IPC traffic, enter the following:

TRU64_IPC_NET="fta0"

You can set this parameter to any interconnect that can be configured on a Tru64 UNIX system. If the specified interconnect device cannot be configured or does not exist, Oracle9i uses the default network device for the system.
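For example, before setting this parameter you might list the configured network interfaces; interface names such as fta0 or ee0 depend on your hardware:

$ /usr/sbin/netstat -i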

To determine the default network device for a system, enter the following command:

$ grep NETDEV= /etc/rc.config | grep -v #

If the default network device for this system is ee0, the command displays the following line:

NETDEV_0="ee0"

 

Tuning Asynchronous I/O

Oracle9i for Tru64 systems can perform either synchronous or asynchronous I/O. To improve performance, Oracle Corporation recommends that you use asynchronous I/O. Set the DISK_ASYNCH_IO parameter to TRUE to enable asynchronous I/O.
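For example, the initialization parameter file entry is a single line (the rest of the parameter file is omitted here):

DISK_ASYNCH_IO = TRUE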

Oracle9i can use asynchronous I/O on any datafiles that are stored on AdvFS file systems, cluster file systems (CFS), or raw devices. You must tune some operating system parameters for optimal asynchronous I/O performance.

 

Operating System Parameters

Use the following formulas to determine the asynchronous I/O parameter requirements for a single instance. Adjust the recommended settings to accommodate any other applications that use asynchronous I/O, including multiple Oracle9i instances on a single node. The actual setting of each parameter is the sum of the requirements of all Oracle9i instances plus the requirements of any other applications. The required real-time (rt) subsystem parameter settings on Tru64 systems are as follows:

  • V5.1 and later, aio_task_max_num: Set greater than the maximum number of DBWR I/O operations or the value of the DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter, whichever is higher. The maximum number of DBWR I/O operations defaults to 8192 unless the _DB_WRITER_MAX_WRITES initialization parameter is specified.

  • V5.0A, aio_task_max_num and aio_max_num: Set both parameters using the same rule as for V5.1 and later.
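A sketch of the corresponding /etc/sysconfigtab stanza for a V5.1 system follows; the value 8193 assumes the default maximum of 8192 DBWR I/O operations and a smaller DB_FILE_MULTIBLOCK_READ_COUNT, and is illustrative only:

rt:
    aio_task_max_num = 8193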

If you do not set these operating system parameters, the performance of Oracle9i is reduced and spurious I/O errors might occur. These errors are stored in the alert log and trace files.

 

Direct I/O Support and Concurrent Direct I/O Support Enabled in Oracle9i for Tru64

This section includes the following topics:

  • Single Instance Requirements

  • Clustered Systems

  • Tru64 UNIX V5.1 Clustered Systems

  • Multiple Instance (Oracle9i Real Application Clusters) Requirements

 

Single Instance Requirements

Oracle9i has the following requirements for single instance installations:

  • Tru64 UNIX 5.0A or later with the appropriate patchkits.

  • Oracle datafiles stored on a Tru64 UNIX AdvFS file system.

  • The disks that use the AdvFS file system must be physically connected to the computer running the Oracle9i instance. This includes disks attached by Fibre Channel. It specifically excludes cases where I/O must be served by another node because of a lack of physical connectivity.

On Tru64 UNIX V5.0A systems and higher in a non-clustered environment, the AdvFS file system with direct I/O provides almost all of the performance of raw devices because the file system cache is not used. In addition, the file system makes it easier to manage the database files.

 

Clustered Systems

On V5.1 systems and higher, Tru64 supports CFS. CFS provides a single namespace file system for all nodes in a cluster; all file systems mounted in a cluster are automatically visible to all nodes. Because it is layered on top of the AdvFS file system, CFS inherits many of the characteristics of AdvFS on non-clustered systems.


Oracle9i Real Application Clusters is not supported on Tru64 UNIX 5.0A. 

 

Tru64 UNIX V5.1 Clustered Systems

Oracle Corporation supports CFS only on Tru64 UNIX V5.1 or later because this file system now supports a concurrent direct I/O model. Any node that has physical connectivity to a drive can issue data I/O to its file systems without consulting the owner node.

All metadata changes to a file, for example extending or closing the file or changing its access or modification date, are still served by the owner node and can still cause cluster interconnect saturation. Therefore, commands such as CREATE TABLESPACE, ALTER TABLESPACE ... ADD DATAFILE, and ALTER DATABASE DATAFILE ... RESIZE might perform poorly on a CFS file system compared to raw devices.

 

Multiple Instance (Oracle9i Real Application Clusters) Requirements

Oracle9i Real Application Clusters requires that you store Oracle datafiles on the Tru64 AdvFS file system. The disks that use the AdvFS file system must be physically connected to all computers running the Oracle instances. This includes disks attached by Fibre Channel. It excludes cases where I/O must be served by another node because of a lack of physical connectivity.

If the database is running in ARCHIVELOG mode and the archive logs are written to disk, the destination AdvFS domain should be served by the node of the instance that is archiving the redo log. For example, if you have a three-node cluster with one instance on each node (nodea, nodeb, and nodec), create three archive destination AdvFS domains (for example, arcnodea, arcnodeb, and arcnodec). The domains should be served by nodea, nodeb, and nodec respectively, and the LOG_ARCHIVE_DEST initialization parameter for each instance should specify its respective location.
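A sketch of the corresponding initialization parameter entries follows; the mount points are assumptions based on the domain names in the example:

# Instance on nodea
LOG_ARCHIVE_DEST = '/arcnodea/arch'

# Instance on nodeb
LOG_ARCHIVE_DEST = '/arcnodeb/arch'

# Instance on nodec
LOG_ARCHIVE_DEST = '/arcnodec/arch'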

 

Enabling Access to the Real Time Clock

Many Oracle processes are timed, especially if the TIMED_STATISTICS initialization parameter is set to TRUE. These timing functions call the Tru64 kernel and can affect Oracle9i performance. There is a feature in Tru64 that gives a process direct access to the real time clock. Using this feature improves performance more on a heavily loaded system than on a lightly loaded system.

To enable this feature:

  1. Log in as root.

  2. Enter the following commands:

    # mknod /dev/timedev c 15 0 
    # chmod +r /dev/timedev
    
    

    If your system is a cluster running Tru64 UNIX V5.1 or higher, enter these commands once for the entire cluster. If your system is a cluster running an earlier version of Tru64, enter the commands on each node.


    The special file /dev/timedev remains on the system after rebooting. 

  3. Restart the Oracle9i instance.

    The existence of the /dev/timedev file is checked only on instance startup.

Oracle Corporation recommends that you enable this feature on all instances in a cluster, and therefore on all nodes.

 

Setting Up Raw Devices

Do not attempt to set up raw devices without the help of an experienced system administrator and specific knowledge about the system you are using. 

To set up raw devices/volumes on Tru64 systems:

  1. If you are using Oracle9i Real Application Clusters, make sure that the partitions you are adding are on a shared disk. However, if your platform supports a cluster file system certified by Oracle Corporation, you can store the files that Oracle9i Real Application Clusters requires directly on the cluster file system.

  2. Determine the names of the free disk partitions.

    A free partition is one that is not used for a Tru64 file system and complies with these restrictions:

    • It is not listed when you execute the /usr/sbin/mount command.

    • It is not in use as a swap device.

    • It does not overlap a swap partition.

    • It is not in use by other Tru64 applications (for example, other instances of the Oracle9i server).

    • It does not overlap the Tru64 file system.

    • It does not use a space already used by the file system.

    To determine whether a partition is free, obtain a complete map of the starting locations and sizes of the partitions on the device and check for free space. Some partitions may contain file systems that are currently not mounted and are not listed in the /usr/sbin/mount output.


    Make sure that the partition does not start at cylinder 0. 

  3. Set up the raw device for use by the Oracle9i Server.

    Begin by verifying that the disk is partitioned. If it is not, use the disklabel command to partition it.
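    For example, you can display the current partition map with disklabel -r, or edit the label to create partitions with disklabel -e; dsk10 is a placeholder device name:

    # disklabel -r dsk10
    # disklabel -e dsk10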

  4. Enter the ls command to view the owner and permissions of the device file. For example:

    $ ls -la
    
    
  5. Make sure that the partition is owned by the Oracle software owner. If necessary, use the chown command to change the ownership on the block and character files for the device. For example:

    # chown oracle /dev/rdisk/dsk10c
    
    
  6. Make sure that the partition has the correct permissions. If necessary, use the chmod command to make the partition accessible to only the Oracle software owner. For example:

    # chmod 600 /dev/rdisk/dsk10c
    
    
  7. Create a symbolic link to the raw devices you require. For example:

    $ ln -s /dev/rdisk/dsk10c /oracle_data/datafile.dbf
    
    

    To verify that you have created the symbolic link, use the character special device (not the block special device) and enter the following command:

    $ ls -Ll datafile
    
    

    The following message should appear:

    crwxrwxrwx oracle dba datafile
    
    

    This symbolic link must be set up on each node of the cluster. Check that no two symbolic links specify the same raw device. 

  8. Create a new database on the partition, or add the partition to an existing database.

    From SQL*Plus, enter the following SQL command:


    The size of an Oracle datafile created in a raw partition must be at least 64 KB plus one Oracle block size smaller than the size of the raw partition. 

    SQL> CREATE DATABASE sid
      2  LOGFILE '/oracle_data/log1.dbf' SIZE 100K,
      3          '/oracle_data/log2.dbf' SIZE 100K
      4  DATAFILE '/oracle_data/datafile.dbf' SIZE 10000K REUSE;
    
    

    To add the partition to a tablespace in an existing Oracle database instead, enter:

    SQL> ALTER TABLESPACE tablespace_name 
      2  ADD DATAFILE '/dev/rdisk/dsk10c' SIZE 10000K REUSE;
    
    

You can use the same procedure to set up a raw device for the redo log files.
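For example, a sketch of adding a redo log file stored on a raw device to an existing database; the device name and size are illustrative only:

SQL> ALTER DATABASE ADD LOGFILE '/dev/rdisk/dsk11c' SIZE 10M;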

 

Spike Optimization Tool

The Spike optimization tool (Spike) is a performance optimization tool that increases the performance of a Tru64 binary. In a testing environment, Spike, with feedback, increased the performance of the Oracle9i server by up to 23 percent on an OLTP workload.

For information on Spike, see the Tru64 documentation or enter one of the following commands:

  • man spike

  • spike

Oracle9i requires Spike version V5.1 (1.2.2.31.2.4 ADK) Feb 22 2001 or later.


If you have a version of Spike earlier than V5.1 (1.2.2.31.2.4 ADK) Feb 22 2001, contact Compaq for a patchkit. 

Enter the following command to check the version of Spike:

$ spike -V

You can download the latest version of Spike from the following URL:

http://www.tru64unix.compaq.com/spike/


Oracle Corporation does not support versions of the Oracle executable optimized using the spike command. If you encounter a problem in an Oracle9i binary that has been optimized using Spike, reproduce the problem with the original un-optimized binary. If the problem persists, see the "Preface" for information on Oracle services and support. 

 

Using Spike

This section describes the system resource required by Spike, how and why to use Spike optimization flags, and the various ways to run Spike.

 

Setting System Resources

 

System Resource Requirements for Spike  

  • Physical memory: 1024 MB

  • max-per-proc-address-space parameter in the sysconfigtab file: 1024 MB

  • max-per-proc-data-size parameter in the sysconfigtab file: 1024 MB

  • vm-maxvas parameter in the sysconfigtab file: 1024 MB

To set the value of these parameters in the /etc/sysconfigtab file, edit the following lines:

proc:
    max-per-proc-address-space = 0x40000000
    max-per-proc-data-size = 0x40000000

vm:
    vm-maxvas = 0x40000000

Set the limits in your shell environment to the highest values. For the C shell, enter:

% limit datasize unlimited
% limit memoryuse unlimited
% limit vmemoryuse unlimited

Spike can run out of virtual memory if the stacksize limit is set too high. To avoid this problem, enter the following C shell command:

% limit stacksize 8192

 

Checking Optimization Flags

Spike provides a large number of optimization flags. However, you cannot use all spike command optimizations with Oracle9i. The following Spike optimization flags are certified to run with Oracle9i:

-arch -map -noaggressiveAlign -symbols_live
-controlOpt -nosplit -o -tune
-fb -nochain -optThresh -v
-feedback -noporder -splitThresh -V

When you run Spike, it places a copy of the optimization flags in the image header comment section of the binary that you are optimizing. Oracle9i checks Spike optimizations used on itself at the beginning of instance startup. If Oracle9i detects an optimization not known to work for the Oracle9i binary, or if the binary had been previously optimized with OM (the predecessor to Spike from Compaq), the instance startup fails with an ORA-4940 error message. If the instance startup fails, check the alert log file for more information.


Oracle9i release 1 (9.0.1) requires that you use the Spike -symbols_live optimization flag. 

 

Running Spike

Use one of the following methods to optimize an executable using Spike:

  • Static spiking

  • Running Spike with feedback

Static spiking requires only a few setup steps and yields approximately half of the performance benefit that is possible when running Spike with feedback.

Running Spike with feedback includes all of the optimizations of static spiking plus additional workload-related optimizations. It provides the best possible performance benefit; however, it requires considerably more effort than static spiking.

For both running Spike with feedback and static spiking, Oracle Corporation recommends running the spiked Oracle binary in a test environment before moving it to a production environment.

 

Static Spiking

Static spiking performs optimizations that are not specific to your workload, such as manipulating the gp register and taking advantage of the CPU architecture. In a test environment, roughly half of the performance gain possible from Spike came from static spiking. Furthermore, static spiking is relatively straightforward and simple. The combination of simplicity and performance gain makes static spiking worth the effort.

Perform the following steps to use static spiking:

  1. Shut down the database.

  2. Spike the oracle image by entering the following command:

    $ spike oracle -o oracle.spike -symbols_live
    
    
  3. Save the original image and create a symbolic link to the spiked image by entering the following commands:

    $ mv oracle oracle.orig
    $ ln -s oracle.spike oracle
    
    
  4. Start up the database.


    Before contacting Oracle for support, use the original image to reproduce any problems. 

 

Running Spike with Feedback

Running Spike with feedback performs all of the same optimizations as static spiking plus workload-related optimizations such as hot and cold basic block movement. In a test environment, approximately half of the performance gain from Spike was due to the optimizations that depend on feedback information. Running Spike with feedback requires multiple steps and considerably more effort than static spiking. However, performance-sensitive customers may find the extra effort worthwhile.

Perform the following steps to run Spike with feedback:

  1. Instrument the Oracle binary by entering the following command:

    $ pixie -output oracle.pixie -dirname dir -pids oracle_image
    
    

    In the preceding example, oracle_image is your original image.

    
    
    


    The -dirname option saves the oracle.Counts.pid files in the dir directory. Because these files are large and may be numerous, depending on the workload, make sure that the directory has enough disk space. 

    This step also creates an oracle.Addrs file that is required later.

    The output of the pixie command might contain errors. You can safely ignore these errors.

  2. Shut down the database.

  3. Save the original image and create a symbolic link to the pixie image by entering the following commands:

    $ mv oracle oracle.orig
    $ ln -s oracle.pixie oracle
    
    
  4. Start up the database and run your workload.

    You cannot run as many users as you could with the standard executable because the pixie executable is larger and slower. As you use the Oracle9i server, several oracle.Counts.pid files are created, where pid is the process ID of the corresponding Oracle process. Keep track of the process ID of each Oracle process at which the optimization is aimed. These are typically the shadow Oracle processes of the clients.
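    As a quick way to match the generated files to processes, you might, for example, list the counts files and the running Oracle processes (dir is the directory passed to the -dirname option earlier):

    $ ls -l dir/oracle.Counts.*
    $ ps -ef | grep oracle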

  5. Shut down the database.

  6. Create a symbolic link to replace the original executable by entering the following command:

    $ ln -s oracle.orig oracle
    
    
  7. If you can identify one oracle.Counts.pid file as representative of your workload, perform step a. If you need to merge several counts files together to better represent your workload, perform step b.

    a. Make sure that the oracle.Addrs file created by the pixie command, the oracle.Counts.pid files, and the original Oracle executable are available.

      Use the process id (pid) to pick a representative oracle.Counts.pid file and then copy it by entering the following command:

      $ cp oracle.Counts.pid oracle.Counts
      
      
    b. Use the prof utility to merge several oracle.Counts.pid files.

      If you are using the parallel query option, merge the oracle.Counts.pid files generated by the query slaves and the query coordinator, which is the shadow oracle process of the query-initiating client.

      If you are not using the parallel query option, merge the oracle.Counts.pid files from the Oracle foreground processes that use the most memory.

      To merge the oracle.Counts.pid files, enter the following command:

      $ prof -pixie -merge oracle.Counts $ORACLE_HOME/bin/oracle \
      oracle.Addrs oracle.Counts.pid1 oracle.Counts.pid2
      
      
  8. Make sure that the oracle.Addrs and oracle.Counts files are available in the current directory, then run Spike using the feedback information by entering the following command:

    $ spike oracle -fb oracle -o oracle.spike_fb -symbols_live
    
    

    The output of the spike command might contain errors. You can safely ignore these errors.

  9. Create a symbolic link to the new oracle image by entering the following command:

    $ ln -s oracle.spike_fb oracle
    
    
  10. Start up the database.

 

Enabling Oracle9i Directed Placement Optimizations

Compaq GS80, GS160, and GS320 systems consist of smaller building blocks called Resource Affinity Domains (RADs). A RAD is a collection of tightly coupled CPUs, memory modules, and an I/O controller coupled through a fast interconnect. A second-level interconnect connects each of the RADs together to form a larger configuration.

Unlike previous generation servers, which have only one common shared interconnect between CPUs, memory, and the I/O controller, GS80, GS160, and GS320 servers offer superior performance and memory access times when a particular CPU accesses memory within its own RAD or uses its local I/O controller. Because of the switched interconnect, I/O activity and memory accesses within one RAD do not interfere with those within another RAD. However, because memory accesses between a CPU and a memory module located across RAD boundaries must traverse two levels of interconnect hierarchy, these memory references take longer than memory references within a RAD.

Directed memory and process placement support (available on Tru64 UNIX V5.1 and higher) allows sophisticated applications to communicate their specific needs for process and memory layout to the operating system. This communication results in greater performance through increased localization of memory references within a RAD.

Oracle9i includes enhanced support for the special capabilities of high-performance servers such as the GS80, GS160, and GS320. Directed placement optimizations specifically take advantage of the hierarchical interconnects available in GS80, GS160, and GS320 class servers. All previous generation servers have a single shared interconnect, so they neither benefit from directed placement optimizations nor lose performance because of them. Therefore, these optimizations are disabled by default in Oracle9i.

 

Requirements to Run the Directed Placement Optimizations

The system must meet the following requirements for Oracle9i directed placement optimizations to work:

  • The system must be a Compaq GS80, GS160, or GS320 AlphaServer or similar locality sensitive Compaq system. The Oracle9i optimizations only affect systems that are locality sensitive.

  • The operating system must be Compaq Tru64 UNIX V5.1 or higher. Previous operating system versions do not include the required operating system support for Oracle9i to perform directed process and memory placement.

 

Enabling Oracle Directed Placement Optimizations

To enable Oracle directed placement optimizations, follow these steps:

  1. Shut down the Oracle instance.

  2. Relink the Oracle server by entering the following commands:

    $ cd $ORACLE_HOME/rdbms/lib
    $ make -f ins_rdbms.mk numa_on
    $ make -f ins_rdbms.mk ioracle
    
    

If you are not using a compatible version of Tru64 UNIX, the following message is displayed:

Operating System Version Does not Support NUMA.
Disabling NUMA!

If you enable Oracle directed placement optimizations, and later change Tru64 to an incompatible version, disable Oracle directed placement optimizations as described in the following section.

 

Disabling Oracle Directed Placement Optimizations

To disable Oracle directed placement optimizations, follow these steps:

  1. Shut down the Oracle instance.

  2. Relink the Oracle server using the numa_off option:

    $ cd $ORACLE_HOME/rdbms/lib
    $ make -f ins_rdbms.mk numa_off
    $ make -f ins_rdbms.mk ioracle
    
    

 

Using Oracle Directed Placement Optimizations

The Oracle directed placement optimizations assume an equi-partitioned configuration. This means that all RADs are configured with the same number of CPUs and the same amount of memory. The Oracle server is assumed to run across all RADs on the system.

 

Oracle Initialization Parameters

To make the most efficient use of the local environment, Oracle9i adjusts some initialization parameters automatically depending on the server configuration as reported by the operating system. This practice eliminates common errors in correctly computing subtle dependencies in these parameters.

 

Tru64 UNIX System Parameters

Set the following system parameters to realize the full benefits of a NUMA system:

  • ipc subsystem, ssm_threshold: 0

  • ipc subsystem, shm_allocate_striped: 1 (the default)

  • vm subsystem, rad_gh_regions[0], rad_gh_regions[1], and so on: the size of the System Global Area (SGA) in MB divided by the number of RADs on the system

There are 63 rad_gh_regions parameters in the vm subsystem in Tru64 V5.1. Set only the parameters for the total number of RADs on the system. For example, if there are 4 RADs on the system (a GS160) and the SGA size is 10 GB, then set rad_gh_regions[0], rad_gh_regions[1], rad_gh_regions[2], and rad_gh_regions[3] to 2500. Note that you might have to raise this value slightly to 2501 or 2502 to successfully start the instance.
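A sketch of the matching /etc/sysconfigtab entries for that four-RAD, 10 GB SGA example follows; the values are illustrative and must reflect your own RAD count and SGA size:

ipc:
    ssm_threshold = 0
    shm_allocate_striped = 1

vm:
    rad_gh_regions[0] = 2500
    rad_gh_regions[1] = 2500
    rad_gh_regions[2] = 2500
    rad_gh_regions[3] = 2500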

If CPUs and memory are taken off-line, Oracle9i continues to function, but loses performance. If you anticipate frequent off-lining of RADs or equi-partitioning is not feasible, Oracle Corporation recommends running Oracle9i Real Application Clusters, using one instance per RAD. Using Oracle9i Real Application Clusters, you can configure individual instances with different sets of initialization parameters to match the actual RAD configuration. You can also start up or shut down specific instances without affecting overall application availability.

 

Process Affinity to RADs

You can improve performance by directing the operating system to run the processes on specific RADs. If connections to the database are made through the Oracle Listener process, and there is a corresponding network interconnect adapter on the RAD, you can run a listener on each RAD. To run the listener on a particular RAD, enter the following command:

$ runon -r rad_number lsnrctl start [listener_name]

All Oracle shadow processes are automatically created on the same RAD as the Oracle listener.