Using AIX Network Installation Manager and CSM

You use AIX Network Installation Manager (NIM), CSM, and hardware commands to install AIX on the cluster nodes. NIM enables a cluster administrator to centrally manage the installation and configuration of AIX and optional software on machines within a network environment. In a CSM cluster, you may install and configure NIM on an AIX management server or on one or more AIX install servers.


Installing AIX and CSM on cluster nodes

Follow these steps to install AIX onto CSM cluster nodes (a brief command-level sketch of several of these steps follows the list):

1. Verify the node definitions.

2. Create CSM node groups (optional).

3. Validate hardware control (required only if you use hardware control).

4. Get network adapter information.

5. Set up Network Installation Manager (NIM).

6. Create additional NIM network definitions and routes (optional).

7. Create NIM machine definitions.

8. Create NIM machine groups (optional).

9. Prepare customization scripts (optional).

10. Prepare for secondary adapter configuration (optional).

11. Set up cluster configuration (optional).

12. Verify authentication methods for NIM (optional).

13. Prepare NIM to add the nodes.

14. Add OpenSSH and OpenSSL software (optional).

15. Add Kerberos client software (optional).

16. Add the nodes to the cluster.

17. Initiate a network installation of the nodes.

18. Monitor and verify the installation.

19. Enable Kerberos Version 5 remote commands (optional).

20. Perform the CSM post-installation tasks, which include the following steps:

a. Getting started with the newly installed cluster.
b. Enabling remote commands to use Kerberos Version 5 authentication (optional).
c. Understanding the installation and configuration log files - for details about this step, refer to the following address:

http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.help.csm.doc/csm_books/csm_admin/am7ad13014.html
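
As a quick orientation, the following is a minimal command-level sketch of several of these steps. It assumes a node named trinity; the node name and the selection of steps are illustrative only, and each step is documented in detail in the CSM installation guide:

lsnode -l trinity        # step 1: verify the node definition
rpower -n trinity query  # step 3: validate hardware control
csmsetupnim -n trinity   # step 13: prepare NIM to add the node
netboot -n trinity       # step 17: initiate the network installation
monitorinstall           # step 18: monitor the installation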


Maintaining NIM lpp_sources for each AIX release

One requirement of the Network Installation Manager (NIM) is that the AIX level of an lpp_source and of the Shared Product Object Tree (SPOT) must match the AIX level of the system backup to be restored, or of the node to be booted into maintenance mode. For example, trying to install a mksysb taken from a system running AIX 5.3 ML05 by using an lpp_source and SPOT at the AIX 5.3 ML04 level will most likely fail.

In an environment with client nodes running different levels of an AIX release, an lpp_source and a SPOT for each of these levels must be available in order to restore client node backups or to perform maintenance boots of the client nodes.
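
As a quick check, assuming a SPOT named spot535 and a client node named trinity (both names are taken from the examples later in this chapter), you can compare the AIX level of the SPOT with the level of the client node:

lsnim -l spot535 | grep oslevel_r   # AIX level of the SPOT
dsh -n trinity oslevel -r           # AIX level of the client node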

For our example with the different AIX 5.3 levels, there is one lpp_source containing the AIX 5.3 ML04 installation images at /export/lpp_source/lpp_source53ML04.

To create a new lpp_source for AIX 5.3 ML05 in the directory /export/lpp_source/lpp_source53ML05, we can simply copy the AIX 5.3 ML04 lpp_source to the AIX 5.3 ML05 location and apply the ML05 updates to it. This approach requires more disk space, but it ensures the integrity of the installation source.
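
A minimal sketch of this copy, assuming the update images are available on CD (the device name and the directory layout are assumptions; an AIX 5.3 lpp_source normally keeps its installp images under <location>/installp/ppc):

# Copy the existing ML04 lpp_source to the new ML05 location
cp -Rp /export/lpp_source/lpp_source53ML04 /export/lpp_source/lpp_source53ML05

# Add the ML05 update filesets from the update media
bffcreate -d /dev/cd0 -t /export/lpp_source/lpp_source53ML05/installp/ppc all

# Rebuild the table of contents for the updated directory
inutoc /export/lpp_source/lpp_source53ML05/installp/ppc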

The next steps are to create the new NIM lpp_source and SPOT resources for AIX 5.3 ML05 and to update the target partitions.
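
A sketch of these steps is shown here; the resource names lpp_source53ML05 and spot535 match the names used in the examples later in this chapter, while the SPOT location /export/spot and the node name trinity are assumptions:

# Define the new lpp_source resource from the prepared directory
nim -o define -t lpp_source -a server=master \
    -a location=/export/lpp_source/lpp_source53ML05 lpp_source53ML05

# Define the new SPOT resource from that lpp_source
nim -o define -t spot -a server=master -a source=lpp_source53ML05 \
    -a location=/export/spot spot535

# Update a target partition to the new level
nim -o cust -a lpp_source=lpp_source53ML05 -a fixes=update_all trinity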


Installing the Virtual I/O Server using NIM

You can use the following procedure to install the Virtual I/O Server (VIOS) into environments managed by the HMC or the Integrated Virtualization Manager by using Network Installation Manager (NIM).

We need the following file before beginning this procedure. It is located on the Virtual I/O Server installation media:

nimol/ioserver_res/mksysb (the mksysb image)

In addition, the following system requirements must be met:

A NIM server running AIX 5.3 with 5300-03 or later, and a file system with at least 700 MB of free space

A logical partition of type Virtual I/O Server containing an Ethernet adapter connected to an active network for installing the Virtual I/O Server

A storage controller containing at least 16 GB of disk space allocated to the Virtual I/O Server partition
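
As a quick sanity check of these requirements on the NIM master (the file system name /export is only an example), you can run:

oslevel -r      # should report 5300-03 or later
df -m /export   # confirm at least 700 MB of free space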

After the prerequisites have been met, you can also install the Virtual I/O Server or the Integrated Virtualization Manager through the SMIT interface. Run smitty installios to access the SMIT interface to the installios command.

The installios setup process creates the following NIM resources to start the installation:

bosinst_data, installp_bundle, lpp_source, mksysb, resolv_conf, SPOT, and the client definition.

We need to know the following information as defined within the HMC environment:

HMC name, managed system name, partition name, and partition profile name.

The full installios command is shown here:

/usr/sbin/installios -d'cd0' -h'riogrande.itsc.austin.ibm.com' \
  -s'p5+-9133-55A-SN10D1FAG' -p'wasp5l_vio' -r'wasp5l_vio_limited' \
  -i'9.3.5.170' -S'255.255.255.0' -g'9.3.5.41' -P'100' -D'full' \
  -l'en_US' '-N'

If you are installing the Virtual I/O Server logical partition, and if Secure Shell (SSH) and credentials have been configured on the NIM master, then the partition is network-booted from the Hardware Management Console (HMC) to begin the installation.
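
If the SSH credentials have not been set up yet, one commonly used approach is sketched here; the HMC user hscroot is an assumption, and the exact mkauthkeys syntax should be verified for your HMC level:

# On the NIM master, generate an SSH key pair (accept the defaults)
ssh-keygen -t rsa

# Add the public key to the HMC user's authorized keys
KEY=$(cat ~/.ssh/id_rsa.pub)
ssh hscroot@riogrande.itsc.austin.ibm.com "mkauthkeys --add '$KEY'"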


Installing the WebSphere partitions using NIM

Performing a NIM mksysb installation is faster than performing a NIM Runtime (rte) installation, and a mksysb image can also include additional installed software. A mksysb image is the system backup image created by the AIX mksysb command. You can use this image to install other machines or to restore the machine that was the source of the mksysb.

1. After you complete all of the steps listed in "Installing AIX and CSM on cluster nodes", you can install AIX on the nodes.

2. We suggest that you first install one node with just the basic operating system in order to create a master image (also known as a mksysb image).

Issue the NIM bos_inst operation with the source attribute set to rte for the one node to be installed. The commands for node trinity are shown here:

csmsetupnim -n trinity

nim -o bos_inst -a source=rte -a lpp_source=lpp_source53ML05 \
    -a spot=spot535 trinity

3. After the installation, configure and prepare the operating system to run WebSphere. For example, you can preinstall the AIX Toolbox filesets, add the default users needed for your environment, and predefine NFS remote mounts. (Refer to Chapter 4, "AIX configuration", for details about what we applied to the AIX operating system in our case.)
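
As an illustration only (the package file name, user name, and NFS mount details below are invented for this example and are not part of the procedure), such preparation might include commands like these:

# Install an AIX Toolbox RPM package
rpm -ivh bash-3.0-1.aix5.1.ppc.rpm

# Create a default administrative user for WebSphere
mkuser home=/home/wasadm wasadm

# Predefine an NFS mount that is activated at system restart
mknfsmnt -f /mnt/software -d /export/software -h nimsrv -t rw -A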

4. Use the installed node to create a NIM mksysb resource and a mksysb image. You can perform this task by using a single NIM command, as shown here:
nim -o define -t mksysb -a server=master \
    -a location=/export/mksysb/AIX53ML05_WAS61_Base \
    -a mk_image=yes -a source=trinity WAS61AIX535_mksysb

Verify that the new NIM resource was created successfully.
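
For example, a quick check that the resource exists and is ready for use (WAS61AIX535_mksysb is the resource defined in the previous step):

lsnim -l WAS61AIX535_mksysb

The Rstate attribute in the output should indicate that the resource is ready for use.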

5. Use this mksysb image and NIM resource to install AIX on all new partitions.

Before installing the new nodes, issue the csmsetupnim command to set up the CSM customization scripts, which automatically exchange the host-based authentication (HBA) public keys and update the trusted host list after AIX is installed on a node.

Issue the NIM bos_inst operation with the source attribute set to mksysb for one node or a group of nodes to be installed. The commands for node us are shown here:

csmsetupnim -n us

nim -o bos_inst -a source=mksysb -a mksysb=WAS61AIX535_mksysb \
    -a spot=spot535 us

Issue the command lsnim -c resources us to verify that all resources have been allocated and that the node's NIM status is as required. The command output is shown here:

osprereboot          script
WAS61AIX535_mksysb   mksysb
spot535              spot
boot                 boot

You can check the NIM status for the object machine by using the command lsnim. In our case, because we were only interested in the value of the NIM attribute Cstate, we used the command lsnim -l us | grep Cstate to check the output, as shown here:

Cstate         = BOS installation has been enabled

6. Initiate the node installation.

To initiate the node installation, use the netboot command. The command we used in our case is shown here:

netboot -n us

The progress is written to the CSM logfile /var/log/csm/netboot. In addition, you can use the rconsole command to connect to the partition from the CSM management server.

The rconsole command provides remote console support for nodes and devices in a cluster. The command uses the CSM database to determine the nodes and devices and their console access information. It provides functionality similar to the HMC virtual terminal without requiring an open HMC GUI session.
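
For example, to open the console for node us in the current terminal window rather than in a new window, you can run:

rconsole -t -n us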