
Saturday, December 15, 2007

History

AIX Version 1, introduced in 1986 for the IBM 6150 RT workstation, was based on UNIX System V Releases 1 and 2. In developing AIX, IBM and INTERACTIVE Systems Corporation (whom IBM contracted) also incorporated source code from 4.2 and 4.3BSD UNIX.

Among other variants, IBM later produced AIX Version 3 (also known as AIX/6000), based on System V Release 3, for their IBM POWER-based RS/6000 platform. Since 1990, AIX has served as the primary operating system for the RS/6000 series (now called System p by IBM).

AIX Version 4, introduced in 1994, added support for symmetric multiprocessing alongside the first RS/6000 SMP servers. AIX Version 4 continued to evolve through the 1990s, culminating in the release of AIX 4.3.3 in 1999.

In the late 1990s, under Project Monterey, IBM and the Santa Cruz Operation planned to integrate AIX and UnixWare into a single 32-bit/64-bit multiplatform UNIX with particular emphasis on supporting the Intel IA-64 architecture. This never came to fruition, though a beta test version of AIX 5L for IA-64 systems was released.

AIX 6 was announced in May 2007 and ran an open beta from June 2007 until the general availability (GA) of AIX 6.1 on November 9th, 2007. Major new features in AIX 6.1 are full RBAC, Workload Partitions (WPAR) enabling application mobility, and Live Partition Mobility on the new POWER6 hardware.

AIX operating system

AIX® helps you to meet the demanding requirements for performance, energy efficiency, flexibility and availability in your IT environment.
  • Market momentum: AIX has dramatically expanded its share of the UNIX® market over the past several years – go with the leader!
  • Unmatched performance: Industry-leading benchmarks underscore the highly scalable performance of AIX on IBM systems running Power Architecture™ technology-based processors.
  • Virtualization: AIX provides software-based virtualization capabilities and exploits hardware-based ones. These capabilities can help you consolidate workloads to increase server utilization and lower energy and other costs.
  • Security: AIX includes a number of features designed to provide a secure IT environment – and AIX can integrate into your existing security infrastructure.
  • Broad application support: Independent software vendors recognize AIX as a premier UNIX operating system. With over 8000 applications from over 3000 ISVs, the applications you need are probably already available on AIX.
  • Binary compatibility: AIX has a long history of maintaining binary compatibility from one release to the next – so you can upgrade to the latest release with confidence.
  • Mainframe-inspired reliability: AIX delivers the reliability, availability and security of a mainframe environment, including industry-leading availability techniques.
  • Simple migration: IBM's customized service offerings make migration from competitive UNIX platforms to AIX quick and easy – consolidate all of your UNIX workload onto AIX.

AIX UNIX operating system

AIX® is an open standards-based UNIX® operating system that allows you to run the applications you want, on the hardware you want—IBM UNIX servers. AIX, in combination with IBM's virtualization offerings, provides new levels of flexibility and performance that allow you to consolidate workloads on fewer servers, which can increase efficiency and conserve energy. AIX delivers high levels of security, integration, flexibility and reliability—essential for meeting the demands of today's information technology environments. AIX operates on IBM systems based on Power Architecture™ technology.


Featured topic

AIX 6

AIX Version 6.1
Now available, the next step in the evolution of the UNIX operating system introduces new capabilities for virtualization, security, availability and manageability.

Latest AIX offerings

AIX Version 6.1
AIX 6.1 extends the capabilities of the AIX operating system by including features such as Workload Partitions, Live Application Mobility, Role Based Access Control, Encrypting Filesystems, Concurrent AIX Kernel Updates and many other features. This release underscores the strategic importance of AIX as it delivers groundbreaking new features while maintaining full binary compatibility with previous AIX releases. More than ever, this release of AIX allows you to more efficiently use your IBM servers to support the needs of your business. AIX V6.1 represents the latest advance in a long record of IBM operating system innovation.





Independent software vendor support for AIX

Independent software vendors recognize AIX as a premier UNIX operating system. Over 3000 ISVs now support thousands of applications on the AIX operating system. It is likely that the applications that your business needs are already available on AIX. For more unique applications, IBM offers a rapid process for porting applications to the latest AIX release.


AIX in action

Read the latest news and client success stories.
IBM AIX Achieves UNIX 03 Product Standard Certification
IBM celebrates 20 years of AIX
IBM launches collaboration center for UNIX

AIX/HMC Tip Sheet

AIX / HMC Tip Sheet
HMC Commands
lshmc -n (lists dynamic IP addresses served by HMC)
lssyscfg -r sys -F name,ipaddr (lists managed system attributes)
lssysconn -r sys (lists attributes of managed systems)
lssysconn -r all (lists all known managed systems with attributes)
rmsysconn -o remove --ip {ip address} (removes a managed system from the HMC)
mkvterm -m {msys} -p {lpar} (opens a command line vterm from an ssh session)
rmvterm -m {msys} -p {lpar} (closes an open vterm for a partition)
Activate a partition
chsysstate -m managedsysname -r lpar -o on -n partitionname -f profilename -b normal
chsysstate -m managedsysname -r lpar -o on -n partitionname -f profilename -b sms
Shutdown a partition
chsysstate -m managedsysname -r lpar -o {shutdown | osshutdown} -n partitionname [--immed] [--restart]
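For example, to list partition states and then activate a partition and watch it boot from a console, a hypothetical sequence (the managed system name p550-ITSO, partition name lpar01 and profile name normal are placeholders, not from the tip sheet) might be:
lssyscfg -r lpar -m p550-ITSO -F name,state (list partitions and their current states)
chsysstate -m p550-ITSO -r lpar -o on -n lpar01 -f normal -b normal (activate lpar01 with its normal profile)
mkvterm -m p550-ITSO -p lpar01 (open a console to watch the boot; close it later with rmvterm)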
VIO Server Commands
lsdev -virtual (list all virtual devices on VIO server partitions)
lsmap -all (lists mapping between physical and logical devices)
oem_setup_env (change to OEM [AIX] environment on VIO server)
Create Shared Ethernet Adapter (SEA) on VIO Server
mkvdev -sea {physical adapt} -vadapter {virtual eth adapt} -default {dflt virtual adapt} -defaultid {dflt vlan ID}
SEA Failover
ent0 – GigE adapter
ent1 – Virt Eth VLAN1 (Defined with a priority in the partition profile)
ent2 – Virt Eth VLAN 99 (Control)
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2
(Creates ent3 as the Shared Ethernet Adapter)
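To confirm the SEA came up with the expected failover settings, a quick check on the VIO server (my addition; ent3 is the adapter created in the example above) could be:
lsdev -dev ent3 -attr (confirm ha_mode=auto and ctl_chan=ent2 on the SEA)
entstat -all ent3 | grep -i priority (show the SEA failover priority/state, assuming grep is available in the restricted shell)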
Create Virtual Storage Device Mapping
mkvdev -vdev {LV or hdisk} -vadapter {vhost adapt} -dev {virt dev name}
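As an illustration, a hypothetical mapping of a 20 GB logical volume to vhost0 (all names here are placeholders of my choosing, not from the tip sheet) might look like:
mklv -lv client1_rootvg_lv rootvg 20G (create the backing logical volume)
mkvdev -vdev client1_rootvg_lv -vadapter vhost0 -dev vclient1_rootvg (map it to the client's vhost adapter)
lsmap -vadapter vhost0 (verify the new virtual target device)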
Sharing a Single SAN LUN from Two VIO Servers to a Single VIO Client LPAR
hdisk3 = SAN LUN (on vioa server)
hdisk4 = SAN LUN (on viob, same LUN as vioa)
chdev -dev hdisk3 -attr reserve_policy=no_reserve (from vioa to prevent a reserve on the disk)
chdev -dev hdisk4 -attr reserve_policy=no_reserve (from viob to prevent a reserve on the disk)
mkvdev -vdev hdisk3 -vadapter vhost0 -dev hdisk3_v (from vioa)
mkvdev -vdev hdisk4 -vadapter vhost0 -dev hdisk4_v (from viob)
VIO Client would see a single LUN with two paths.
lspath -l hdiskx (where hdiskx is the newly discovered disk)
This will show two paths, one down vscsi0 and the other down vscsi1.
AIX Performance Tidbits and Starter Set of Tunables
Current starter set of recommended AIX 5.3 performance parameters. Please test these before implementing them in production, as your mileage may vary.
Network
no -p -o rfc1323=1
no -p -o sb_max=1310720
no -p -o tcp_sendspace=262144
no -p -o tcp_recvspace=262144
no -p -o udp_sendspace=65536
no -p -o udp_recvspace=655360
nfso -p -o rfc_1323=1
NB Network settings also need to be applied to the adapters
nfso -p -o nfs_socketsize=600000
nfso -p -o nfs_tcp_socketsize=600000
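Regarding the note above that the network settings also need to be applied to the adapters: interface-specific network options can be set per interface with chdev, provided the no option use_isno is enabled (the default). A minimal sketch, assuming en0 is the interface in use:
chdev -l en0 -a tcp_sendspace=262144 -a tcp_recvspace=262144 -a rfc1323=1
lsattr -El en0 | egrep 'tcp_|rfc1323' (verify the interface-level values)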
Memory Settings
vmo -p -o minperm%=5
vmo -p -o maxperm%=80
vmo -p -o maxclient%=80
Let strict_maxperm and strict_maxclient default
vmo -p -o minfree=960
vmo -p -o maxfree=1088
vmo -p -o lru_file_repage=0
vmo -p -o lru_poll_interval=10
IO Settings
Let minpgahead and j2_minPageReadAhead default
ioo -p -o j2_maxPageReadAhead=128
ioo -p -o maxpgahead=16
ioo -p -o j2_maxRandomWrite=32
ioo -p -o maxrandwrt=32
ioo -p -o j2_nBufferPerPagerDevice=1024
ioo -p -o pv_min_pbuf=1024
ioo -p -o numfsbufs=2048
If doing lots of raw I/O you may want to change lvm_bufcnt
Default is 9
ioo -p -o lvm_bufcnt=12
Others left to default that you may want to tweak include:
ioo -p -o numclust=1
ioo -p -o j2_nRandomCluster=0
ioo -p -o j2_nPagesPerWriteBehindCluster=32
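Before and after changing any of these, the current, default, and reboot values can be checked with the tuning commands' -L flag; a quick sketch using two of the tunables above:
vmo -L lru_file_repage
ioo -L j2_maxPageReadAhead
no -a | grep tcp (list the current values of all network options containing "tcp")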
Useful Commands
vmstat -v or -l or -s
vmo -o
ioo -o
schedo -o
lvmstat
lvmo
iostat (many new flags)
svmon
filemon
fileplace
Useful Links
1. Lparmon – www.alphaworks.ibm.com/tech/lparmon
2. Nmon – www.ibm.com/collaboration/wiki/display/WikiPtype/nmon
3. Nmon Analyser – www-941.ibm.com/collaboration/wiki/display/WikiPtype/nmonanalyser
4. vmo, ioo, vmstat, lvmo and other AIX commands http://publib.boulder.ibm.com/infocenter/pseries/v5r3/topic/com.ibm.aix.doc/doc

Configuring MPIO

Configuring MPIO for the virtual AIX client
This document describes the procedure to set up Multi-Path I/O on the AIX clients of
the virtual I/O server.
Procedure:
This procedure assumes that the disks are already allocated to both the VIO servers
involved in this configuration.
• Creating Virtual Server and Client SCSI Adapters
First of all, via the HMC create SCSI server adapters on the two VIO servers and
then two virtual client SCSI adapters on the newly created client partition, each
mapping to one of the VIO servers' server SCSI adapters.
An example:
Here is an example of configuring and exporting an ESS LUN from both the
VIO servers to a client partition:
•Selecting the disk to export
You can check for the ESS LUN that you are going to use for MPIO by
running the following command on the VIO servers.
On the first VIO server:
$ lsdev -type disk
name status description
..
hdisk3 Available MPIO Other FC SCSI Disk Drive
hdisk4 Available MPIO Other FC SCSI Disk Drive
hdisk5 Available MPIO Other FC SCSI Disk Drive
..
$lspv
..
hdisk3 00c3e35c99c0a332 None
hdisk4 00c3e35c99c0a51c None
hdisk5 00c3e35ca560f919 None
..
In this case hdisk5 is the ESS disk that we are going to use for MPIO.
Then run the following command to list the attributes of the disk that you choose for MPIO:
$lsdev -dev hdisk5 -attr
..
algorithm fail_over Algorithm True
..
lun_id 0x5463000000000000 Logical Unit Number ID False
..
..
pvid 00c3e35ca560f9190000000000000000 Physical volume identifier
False
..
reserve_policy single_path Reserve Policy True
Note down the lun_id, pvid and the reserve_policy of hdisk5.
•Command to change reservation policy on the disk
You see that the reserve policy is set to single_path.
Change this to no_reserve by running the following command:
$ chdev -dev hdisk5 -attr reserve_policy=no_reserve
hdisk5 changed
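To double-check that the change took effect, the attribute can be queried on its own (my addition, using the same lsdev syntax as above); it should now report no_reserve:
$ lsdev -dev hdisk5 -attr reserve_policy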
On the second VIO server:
On the second VIO server too, find the hdisk# that has the same pvid. It could
be a different number than the one on the first VIO server, but the pvid should be the
same.
$ lspv
..
hdisk7 00c3e35ca560f919 None
..
The pvid of the hdisk7 is the same as the hdisk5 on the first VIO server.
$ lsdev -type disk
name status description
..
hdisk7 Available MPIO Other FC SCSI Disk Drive
..
$lsdev -dev hdisk7 -attr
..
algorithm fail_over Algorithm True
..
lun_id 0x5463000000000000 Logical Unit Number ID False
..
pvid 00c3e35ca560f9190000000000000000 Physical volume identifier
False
..
reserve_policy single_path Reserve Policy True
You will note that the lun_id and pvid of hdisk7 on this server are the same as
those of hdisk5 on the first VIO server.
$ chdev -dev hdisk7 -attr reserve_policy=no_reserve
hdisk7 changed
•Creating the Virtual Target Device

Now on both the VIO servers run the mkvdev command using the appropriate
hdisk#s respectively.
$ mkvdev -vdev hdisk# -vadapter vhost# -dev vhdisk#
The above command will fail when run on the second VIO server if the
reserve_policy has not been set to no_reserve on the hdisk.
After the command runs successfully on both servers, the
same LUN has been exported to the client by the mkvdev command on both servers.
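Continuing the ESS example, the two mkvdev invocations could look like the following (the vhost number and the virtual target device name vtd_ess_lun1 are placeholders of my choosing):
$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev vtd_ess_lun1 (on the first VIO server)
$ mkvdev -vdev hdisk7 -vadapter vhost0 -dev vtd_ess_lun1 (on the second VIO server)
$ lsmap -vadapter vhost0 (verify the mapping on each server)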
•Check for correct mapping between the server and the client
Double check the client via the HMC that the correct slot numbers match the
respective slot numbers on the servers.
In the example, slot number 4 for the client virtual SCSI adapter maps to
slot number 5 of the VIO server VIO1_nimtb158.



And the slot number 5 for the client virtual SCSI adapter maps to the slot
number 5 of the VIO server VIO1_nimtb159.

• On the client partition
Now you are ready to install the client. You can install the client using any of
the following methods described in the Redbook on virtualization at
http://www.redbooks.ibm.com/redpieces/abstracts/sg247940.html:
1. NIM installation
2. Alternate disk installation
3. Using the CD media
Once you install the client, run the following commands to check for MPIO:
# lsdev -Cc disk
hdisk0 Available Virtual SCSI Disk Drive
# lspv
hdisk0 00c3e35ca560f919 rootvg active
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
•Dual Path
When one of the VIO servers goes down, the path coming from that server
shows as failed with the lspath command.
# lspath
Failed hdisk0 vscsi0
Enabled hdisk0 vscsi1
• Path Failure Detection
The path shows up in the "failed" mode even after the VIO server is up
again. You need to either change the path status to the "enabled" state with the
"chpath" command, or set the attributes "hcheck_interval" and "hcheck_mode" to
"60" and "nonactive" respectively for a path failure to be detected automatically.
•Setting the related attributes
Here is the command to be run for setting the above attributes on the client
partition:
$ chdev -l hdisk# -a hcheck_interval=60 -a hcheck_mode=nonactive -P
The VIO AIX client needs to be rebooted for hcheck_interval attribute to take
effect.
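Once the client has been rebooted, the health-check settings and the path states can be verified (my addition; hdisk0 as in the example above):
# lsattr -El hdisk0 -a hcheck_interval -a hcheck_mode
# lspath -l hdisk0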
• EMC for Storage
If an EMC device is used as the storage device attached to the VIO servers,
then make sure of the following:
1. PowerPath version 4.4 is installed on the VIO servers.
2. Create hdiskpower devices which are shared between both the VIO
servers.
•Additional Information
Another thing to take note of is that you cannot have the same name for
Virtual SCSI Server Adapter and Virtual Target Device. The mkvdev command
will error out if the same name for both is used.
$ mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev hdiskpower0
Method error (/usr/lib/methods/define -g -d):
0514-013 Logical name is required.
The reserve attribute is named differently for an EMC device than the attribute
for ESS or FasTt storage device. It is “reserve_lock”.
Run the following command as padmin for checking the value of the
attribute.
$ lsdev -dev hdiskpower# -attr reserve_lock
Run the following command as padmin for changing the value of the attribute.
$ chdev -dev hdiskpower# -attr reserve_lock=no
• Commands to change the Fibre Channel Adapter attributes
Also change the following attributes of the fscsi# devices: set fc_err_recov to "fast_fail"
and dyntrk to "yes":
$ chdev -dev fscsi# -attr fc_err_recov=fast_fail dyntrk=yes -perm
The reason for changing the fc_err_recov to “fast_fail” is that if the Fibre
Channel adapter driver detects a link event such as a lost link between a storage
device and a switch, then any new I/O or future retries of the failed I/Os will be
failed immediately by the adapter until the adapter driver detects that the device
has rejoined the fabric. The default setting for this attribute is 'delayed_fail’.
Setting the dyntrk attribute to “yes” makes AIX tolerate cabling changes in the
SAN.
The VIOS needs to be rebooted for fscsi# attributes to take effect.
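After the reboot, the adapter settings can be confirmed with the same lsdev syntax used earlier (a sketch; fscsi0 stands in for the actual fscsi# device):
$ lsdev -dev fscsi0 -attr fc_err_recov
$ lsdev -dev fscsi0 -attr dyntrk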

Other useful AIX commands

Introduction

As you know, AIX® has a vast array of commands that enable you to do a multitude of tasks. Depending on what you need to accomplish, you use only a certain subset of these commands. These subsets differ from user to user and from need to need. However, there are a few core commands that you commonly use. You need these commands either to answer your own questions or to provide answers to the queries of the support professionals.

In this article, I'll discuss some of these core commands. The intent is to provide a list that you can use as a ready reference. While the behavior of these commands should be identical in all releases of AIX, they have only been tested under AIX 5.3.

Note:
The bootinfo command discussed in the following paragraphs is NOT a user-level command and is NOT supported in AIX 4.2 or later.

Commands

Kernel

How would I know if I am running a 32-bit kernel or 64-bit kernel?

To display if the kernel is 32-bit enabled or 64-bit enabled, type:
bootinfo -K

How do I know if I am running a uniprocessor kernel or a multiprocessor kernel?

/unix is a symbolic link to the booted kernel. To find out what kernel mode is running, enter ls -l /unix and see what file /unix it links to. The following are the three possible outputs from the ls -l /unix command and their corresponding kernels:

/unix -> /usr/lib/boot/unix_up   # 32 bit uniprocessor kernel
/unix -> /usr/lib/boot/unix_mp   # 32 bit multiprocessor kernel
/unix -> /usr/lib/boot/unix_64   # 64 bit multiprocessor kernel

Note:
AIX 5L Version 5.3 does not support a uniprocessor kernel.

How can I change from one kernel mode to another?

During the installation process, one of the kernels, appropriate for the AIX version and the hardware in operation, is enabled by default. Let us use the method from the previous question and assume the 32-bit kernel is enabled. Let us also assume that you want to boot it up in the 64-bit kernel mode. This can be done by executing the following commands in sequence:

#ln -sf /usr/lib/boot/unix_64 /unix
#ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
#bosboot -ad /dev/hdiskxx
#shutdown -r

The /dev/hdiskxx directory is where the boot logical volume /dev/hd5 is located. To find out what xx is in hdiskxx, run the following command:

#lslv -m hd5

Note:
In AIX 5.2, the 32-bit kernel is installed by default. In AIX 5.3, the 64-bit kernel is installed on 64-bit hardware and the 32-bit kernel is installed on 32-bit hardware by default.

Hardware

How would I know if my machine is capable of running AIX 5L Version 5.3?

AIX 5L Version 5.3 runs on all currently supported CHRP (Common Hardware Reference Platform)-based POWER hardware.

How would I know if my machine is CHRP-based?

Run the prtconf command. If it's a CHRP machine, the string chrp appears on the Model Architecture line.

How would I know if my System p machine (hardware) is 32-bit or 64-bit?

To display if the hardware is 32-bit or 64-bit, type:

#bootinfo -y

How much real memory does my machine have?

To display real memory in kilobytes (KB), type one of the following:

#bootinfo -r

#lsattr -El sys0 -a realmem

Can my machine run the 64-bit kernel?

64-bit hardware is required to run the 64-bit kernel.

What are the values of attributes for devices in my system?

To list the current values of the attributes for the tape device, rmt0, type:

#lsattr -l rmt0 -E

To list the default values of the attributes for the tape device, rmt0, type:

#lsattr -l rmt0 -D

To list the possible values of the login attribute for the TTY device, tty0, type:

#lsattr -l tty0 -a login -R

To display system level attributes, type:

#lsattr -E -l sys0

How many processors does my system have?

To display the number of processors on your system, type:

#lscfg | grep proc
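As a cross-check (my addition, not from the original article), the processor devices and the available logical processor IDs can also be listed with:
#lsdev -Cc processor
#bindprocessor -q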

How many hard disks does my system have and which ones are in use?

To display the number of hard disks on your system, type:

#lspv

How do I list information about a specific physical volume?

To find details about hdisk1, for example, run the following command:

#lspv hdisk1

How do I get a detailed configuration of my system?

Type the following:

#lscfg

The following options provide specific information:
-p

Displays platform-specific device information. The flag is applicable to AIX 4.2.1 or later.
-v
Displays the VPD (Vital Product Database) found in the customized VPD object class.

For example, to display details about the tape drive, rmt0, type:

lscfg -vl rmt0

You can obtain very similar information by running the prtconf command.

How do I find out the chip type, system name, node name, model number, and so forth?

The uname command provides details about your system.

#uname -p
Displays the chip type of the system. For example, PowerPC.

#uname -r
Displays the release number of the operating system.

#uname -s
Displays the system name. For example, AIX.

#uname -n
Displays the name of the node.

#uname -a
Displays the system name, nodename, version, machine ID.

#uname -M
Displays the system model name. For example, IBM, 9114-275.

#uname -v
Displays the operating system version.

#uname -m
Displays the machine ID number of the hardware running the system.

#uname -u
Displays the system ID number.

HDLM installation on VIO Server

Procedure: Install/Update HDLM drivers

# login to vio server as "padmin".
# Switch to "oem" prompt.
oem_setup_env
umount /mnt
mount bosapnim01:/export/lpp_source/hitachi /mnt

# Install and update all filesets from the directories below.
# "smitty install_all"
cd /mnt/hdlm_5601/aix_odm/V5.0.0.1
cd /mnt/hdlm_5601/aix_odm/V5.0.0.4u
cd /mnt/hdlm_5601/aix_odm/V5.0.1.4U
cd /mnt/hdlm_5601/aix_odm/V5.0.52.1U

# Copy license file.
cd /mnt/hdlm_5601/license/enterprise
cp *.plk /var/tmp/hdlm_license

# install and update all filesets from the above directory
# "smitty install_all"
# Fileset DLManager 5.60.1.100 Hitachi Dynamic Link Manager
cd /mnt/hdlm_5601

# Leave the current Directory and unmount Driver Source Directory.
cd /
umount /mnt


Procedure: Install/Update VIO fixpack

# Login to VIO server as "padmin"
# Obtain current IOS level
ioslevel

# Update VIO to latest IOS level
mount bosapnim01:/export/lpp_source/aix/vio_1200 /mnt
updateios -dev /mnt
** Enter "y" to continue install

# Return to "root" shell prompt and HALT system.
oem_setup_env
shutdown -Fh

# Activate LPAR from HMC WebSM


Procedure: Configure VIO Server to utilize Boot Disks
# Login as "padmin"
# Switch to "oem" prompt
oem_setup_env

# Run in korn shell 93
ksh93

# Remove any vhost adapter configuration settings
for (( i=0; i<=48; ++i ))
do
/usr/ios/cli/ioscli rmdev -pdev vhost${i}
done

# Remove all HDLM disks
for i in $( lsdev -Cc disk -F name | grep dlmfdrv )
do
rmdev -Rdl ${i}
done

# Remove all hdisks except for hdisk0 and hdisk1 - assumed to be rootvg
for i in $( lsdev -Cc disk -F name | grep hdisk | egrep -v 'hdisk0$|hdisk1$' )
do
rmdev -Rdl ${i}
done

# If an HDLM unconfig file exists, rename it
[[ -f /usr/DynamicLinkManager/drv/dlmfdrv.unconf ]] &&
mv /usr/DynamicLinkManager/drv/dlmfdrv.unconf \
/usr/DynamicLinkManager/drv/$( date +"%Y%m%d").dlmfdrv.unconf

# Verify "dlmfdrv.unconf" was renamed.
ls /usr/DynamicLinkManager/drv

# Set fast fail Parameter for SCSI Adapters and Reconfigure FC Adapters
chdev -l fscsi0 -a fc_err_recov=fast_fail
chdev -l fscsi1 -a fc_err_recov=fast_fail
chdev -l fscsi2 -a fc_err_recov=fast_fail
cfgmgr -vl fcs0
cfgmgr -vl fcs1
cfgmgr -vl fcs2

# Change HDLM settings
cd /usr/DynamicLinkManager/bin
print y | ./dlmodmset -e on
print y | ./dlmodmset -b 68608

# Reconfigure HDLM disks
./dlmcfgmgr

# Turn off reserve settings on HDLM Driver
./dlnkmgr set -rsv on 0 -s

# Remove HDLM disks
for i in $( lsdev -Cc disk -F name | grep dlmfdrv )
do
rmdev -Rdl ${i}
done

# Change reserve policy on hdisks to "no_reserve"
for i in $( lsdev -Cc disk -F name |
grep hdisk |
egrep -v 'hdisk0$|hdisk1$' )
do
chdev -l ${i} -a reserve_policy=no_reserve
done

# Reconfigure HDLM disks
./dlmcfgmgr

# Verify all HDLM disks have an assigned PVID
for i in $( lsdev -Cc disk -F name | grep dlmfdrv )
do
chdev -l ${i} -a pv=yes
done
lspv

# Remove any vhost adapter configuration settings
/usr/ios/cli/ioscli lsmap -all

# Verify all vhost adapters exist without devices.
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U9119.590.51A432C-V3-C10 0x00000000

VTD NO VIRTUAL TARGET DEVICE FOUND

# Reboot VIO Server.
shutdown -Fr

Different RAID levels

The different raid levels available today

Raid 0 - Striping. Data is striped across all the disks present in the array, which improves
read and write performance; for example, reading a large file from a single disk takes
longer than reading the same file from a Raid 0 array. There is no data redundancy in
this case.

Raid 1 - Mirroring. With Raid 0 there is no redundancy, i.e. if one disk fails then the data is
lost. Raid 1 overcomes that problem by mirroring the data, so if one disk fails the data
is still accessible through the other disk.

Raid 2 - Bit-level striping with dedicated error-correcting code. Data is split at the bit level and
spread across the data disks and redundant disks. It uses an ECC (error correction
code) that accompanies each block of data; the codes are checked when the data is
read from the disks to maintain data integrity.

Raid 3 - Data is striped across multiple disks at the byte level, with the parity maintained on a
separate, dedicated disk. The array can survive the failure of any single disk, but the
dedicated parity disk is a bottleneck for writes.

Raid 4 - Similar to Raid 3; the only difference is that the data is striped across multiple disks at
the block level.

Raid 5 - Block-level striping with distributed parity. The data and parity are striped across all
disks, increasing the data redundancy. A minimum of three disks is required, and if
any one disk fails the data is still secure.

Raid 6 - Block-level striping with dual distributed parity. It stripes blocks of data and parity
across all disks in the array, except that it maintains two sets of parity information for
each parcel of data, further increasing the data redundancy. So even if two disks fail
the data is still intact.

Raid 7 - Asynchronous, cached striping with dedicated parity. This level is not an open industry
standard. It is based on the concepts of Raid 3 and 4, with a great deal of cache
included across multiple levels and a specialised real-time processor to manage the
array asynchronously.
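As a quick capacity illustration (my own example, not part of the original list): with four 300 GB disks, Raid 0 gives 1200 GB of usable space, Raid 1 (two mirrored pairs) gives 600 GB, Raid 5 gives (4-1) x 300 = 900 GB, and Raid 6 gives (4-2) x 300 = 600 GB.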

Implementation of PLM

Implementing PLM

PLM Software Installation

• Install the following filesets:
plm.license
plm.server.rte
plm.sysmgt.websm

• Make sure SSL and OpenSSH are also installed.

• For the setup of PLM, create .rhosts files on the server and all clients. After PLM has been set up, you can delete the .rhosts files.

Create SSH Keys


• On the server, enter:
# ssh-keygen -t rsa

• Copy the HMC's secure keys to the server:
# scp hscroot@hmchostname:.ssh/authorized_keys2 \
~/.ssh/tmp_authorized_keys2

• Append the server's keys to the temporary key file and copy it back to the HMC:
# cat ~/.ssh/id_rsa.pub >> ~/.ssh/tmp_authorized_keys2
# scp ~/.ssh/tmp_authorized_keys2 \
hscroot@hmchostname:.ssh/authorized_keys2


Test SSH and Enable WebSM

• Test SSH to the HMC. You should not be asked for a password.
# ssh hscroot@hmchostname lssyscfg -r sys

• On the PLM server, make sure you can run WebSM. Run:
# /usr/websm/bin/wsmserver -enable

Configure PLM Software

• On the PLM server, open WebSM and select Partition Load Manager.

• Click on Create a Policy File. In the window that opens, on the General tab, enter a policy file name on the first line.

• Click on the Globals tab. Enter the fully qualified hostname of your HMC. Enter hscroot (or a user with the Systems Administration role) as the HMC user name. Enter the CEC name, which is the managed system name (not the fully qualified hostname).

• Click on the Groups tab. Click the Add button. Type in a group name. Enter the maximum CPU and memory values that you are allowed to use for PLM operations.

• Check both CPU and Memory management if you're going to manage both.

• Click on Tunables. These are the defaults for the entire group. If you don't understand a value, highlight it and select Help for a detailed description.

• Click on the Partitions tab. Click the Add button and add all of the running partitions in the group to the partitions list.
On the Partition Definition tab, use the partitions' fully qualified hostnames and add them to the group you just created.

• Click OK to create the policy file.

• On the PLM server, view the policy file you created. It will be in /etc/plm/policies.

• Perform the PLM setup step using WebSM. You must be root. Once this finishes, you'll see "Finished: Success" in the WebSM working window.
• On the server and a client partition, look at the /var/ct/cfg/ctrmc.acls file to see if these lines are at the bottom of the file:
IBM.LPAR
root@hmchostname * rw

If you need to edit this file, run this command afterward:
# refresh -s ctrmc

• Test RMC authentication by running this command from the PLM server, where remote_host is a PLM client:
# CT_CONTACT=remote_host lsrsrc IBM.LPAR
If successful, a lot of LPAR information will be printed out instead of "Could not authenticate user".

• Start the PLM server. Look for "Finished: Success" in the WebSM working window.
Enter a configuration name. Enter your policy file name. Enter a new logfile name.
(If you have trouble with the logfile, you may need to touch the file before you can access it.)

• If the LPAR details window shows only zeroed-out information, then there's probably an RMC authentication problem.

• If there's a problem, on the server partition, run:
# /usr/sbin/rsct/bin/ctsvhbal
The output should list one or more identities. Check that the server's fully qualified hostname is in the output.

• On each partition, run /usr/sbin/rsct/bin/ctsthl -l. At least one of the identities shown in the remote partition's ctsvhbal output should show up in the other partitions' ctsthl -l output. This is the RMC list of trusted hosts.

• If there are any entries in the RMC trusted host lists which are not fully qualified hostnames, remove them with the following command:
# /usr/sbin/rsct/bin/ctsthl -d -n identity
where identity is the trusted host list identity.

• If one partition is missing a hostname, add it as follows:
# /usr/sbin/rsct/bin/ctsthl -a -n identity -m METHOD -p ID_VALUE
identity is the fully qualified hostname of the other partition
rsa512 is the method
id_value is obtained by running ctsthl -l on the other partition to determine its own identifier

Recovering a Failed VIO Disk

Here is a recovery procedure for replacing a failed client disk on a Virtual IO
server. It assumes the client partitions have mirrored (virtual) disks. The
recovery involves both the VIO server and its client partitions. However,
it is non disruptive for the client partitions (no downtime), and may be
non disruptive on the VIO server (depending on disk configuration). This
procedure does not apply to Raid5 or SAN disk failures.

The test system had two VIO servers and an AIX client. The AIX client had two
virtual disks (one disk from each VIO server). The two virtual disks
were mirrored in the client using AIX's mirrorvg. (The procedure would be
the same on a single VIO server with two disks.)

The software levels were:


p520: Firmware SF230_145 VIO Version 1.2.0 Client: AIX 5.3 ML3


We had simulated the disk failure by removing the client LV on one VIO server. The
padmin commands to simulate the failure were:


rmdev -dev vtscsi01   # The virtual scsi device for the LV (lsmap -all)
rmlv -f aix_client_lv # Remove the client LV


This caused "hdisk1" on the AIX client to go "missing" (check with "lsvg -p rootvg"; note that
"lspv" will not show the disk failure, only the disk status as of the last boot).

The recovery steps included:

VIO Server


Fix the disk failure, and restore the VIOS operating system (if necessary).
mklv -lv aix_client_lv rootvg 10G              # recreate the client LV
mkvdev -vdev aix_client_lv -vadapter vhost1    # connect the client LV to the appropriate vhost


AIX Client


cfgmgr                              # discover the new virtual hdisk2
replacepv hdisk1 hdisk2             # rebuild the mirror copy on hdisk2
bosboot -ad /dev/hdisk2             # add boot image to hdisk2
bootlist -m normal hdisk0 hdisk2    # add the new disk to the bootlist
rmdev -dl hdisk1                    # remove failed hdisk1


The "replacepv" command assigns hdisk2 to the volume group, rebuilds the mirror, and
then removes hdisk1 from the volume group.
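To confirm the rebuilt mirror is healthy on the client, a quick check (my addition, not part of the original procedure) could be:

lsvg -p rootvg              # both disks should show a state of "active"
lsvg rootvg | grep -i stale # STALE PPs should be 0 once the resync completes
bootlist -m normal -o       # verify both disks are in the boot list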

As always, be sure to test this procedure before using in production.

RS/6000 Diagnostic LEDs

--------------------------------------------------------------------------------
TITLE : Diagnostic LED numbers and codes.
OS LEVEL : AIX
DATE : 07/04/99
VERSION : 1.0
----------------------------------------------------------------------------

Built-In Self-Test (BIST) Indicators
------------------------------------

100 BIST completed successfully; control was passed to IPL ROS.
101 BIST started following reset.
102 BIST started, following the system unit's power-on reset.
103 BIST could not determine the system model number.
104 Equipment conflict; BIST could not find the CBA.
105 BIST could not read from the OCS EPROM.
106 BIST failed: CBA not found
111 OCS stopped; BIST detected a module error.
112 A checkstop occurred during BIST; checkstop results could not be logged out.
113 Three checkstops have occurred.
120 BIST starting a CRC check on the 8752 EPROM.
121 BIST detected a bad CRC in the first 32K bytes of the OCS EPROM.
122 BIST started a CRC check on the first 32K bytes of the OCS EPROM.
123 BIST detected a bad CRC on the OCS area of NVRAM.
124 BIST started a CRC check on the OCS area of NVRAM.
125 BIST detected a bad CRC on the time-of-day area of NVRAM.
126 BIST started a CRC check on the time-of-day area of NVRAM.
127 BIST detected a bad CRC on the 8752 EPROM.
130 BIST presence test started.
140 Running BIST. (Box Manufacturing Mode Only)
142 Box manufacturing mode operation.
143 Invalid memory configuration.
144 Manufacturing test failure.
151 BIST started AIPGM test code.
152 BIST started DCLST test code.
153 BIST started ACLST test code.
154 BIST started AST test code.
160 Bad EPOW Signal/Power status signal.
161 BIST being conducted on BUMP I/O.
162 BIST being conducted on JTAG.
163 BIST being conducted on Direct I/O.
164 BIST being conducted on CPU.
165 BIST being conducted on DCB and Memory.
166 BIST being conducted on Interrupts.
170 BIST being conducted on Multi-Processors.
180 Logout in progress.
182 BIST COP bus not responding.
185 A checkstop condition occurred during the BIST.
186 System logic-generated checkstop (Model 250 only).
187 Graphics-generated checkstop (Model 250).
195 Checkstop logout complete
199 Generic SCSI backplane
888 BIST did not start.

Power-On Self-Test (POST) Indicators
------------------------------------

200 IPL attempted with keylock in the Secure position.
201 IPL ROM test failed or checkstop occurred (irrecoverable).
202 Unexpected machine check interrupt.
203 Unexpected data storage interrupt.
204 Unexpected instruction storage interrupt.
205 Unexpected external interrupt.
206 Unexpected alignment interrupt.
207 Unexpected program interrupt.
208 Unexpected floating point unavailable interrupt.
209 Unexpected SVC interrupt.
20c L2 cache POST error. (The display shows a solid 20c for 5 seconds.)
210 Unexpected SVC interrupt.
211 IPL ROM CRC comparison error (irrecoverable).
212 RAM POST memory configuration error or no memory found (irrecoverable).
213 RAM POST failure (irrecoverable).
214 Power status register failed (irrecoverable).
215 A low voltage condition is present (irrecoverable).
216 IPL ROM code being uncompressed into memory.
217 End of boot list encountered.
218 RAM POST is looking for good memory.
219 RAM POST bit map is being generated.
21c L2 cache is not detected. (The display shows a solid 21c for 2 seconds.)
220 IPL control block is being initialized.
221 NVRAM CRC comparison error during AIX IPL(key mode switch in Normal mode).
Reset NVRAM by reaccomplishing IPL in Service mode. For systems with an
internal, direct-bus-attached (DBA) disk, IPL ROM attempted to perform an IPL from
that disk before halting with this operator panel display value.
222 Attempting a Normal mode IPL from Standard I/O planar-attached devices specified
in NVRAM IPL Devices List.
223 Attempting a Normal mode IPL from SCSI-attached devices specified in NVRAM IPL
Devices List.
224 Attempting a Normal mode IPL from 9333 subsystem device specified in NVRAM IPL
Devices List.
225 Attempting a Normal mode IPL from 7012 DBA disk-attached devices specified in
NVRAM IPL Devices List.
226 Attempting a Normal mode IPL from Ethernet specified in NVRAM IPL Devices List.
227 Attempting a Normal mode IPL from Token-Ring specified in NVRAM IPL Devices List.
228 Attempting a Normal mode IPL from NVRAM expansion code.
229 Attempting a Normal mode IPL from NVRAM IPL Devices List; cannot IPL from any
of the listed devices, or there are no valid entries in the Devices List.

AIX commands

List of Unix commands and configuration files
#:
Used to make comments in a shell script or tells which shell to use as an interpreter for the script.
.:
Reads commands from a script and execute them in your current environment.
/etc/defaultrouter:
Defines the systems default routers. Values must be separated with whitespace, # can be used for comments.
/etc/gateways:
Contains all the routes and default gateways for the system.
/etc/hostname.interface:
Contains the hostname of the system and should match the hostname defined in the /etc/hosts file. The file is named with the interface name, such as hostname.hme0 or hostname.le0
/etc/hosts:
Configures names and aliases of IP-addresses. Fields should be separated with Tab or white space.
/etc/inetd.conf:
Is the Internet services database ASCII file, used by the inetd daemon, that contains a list of available network services. It is consulted by inetd when it gets an Internet request via a socket.
/etc/inittab:
Is a script used by init. Controls process dispatching.
/etc/lilo.conf:
Is the configuration file used by the Linux Loader while booting.
/etc/modules.conf:
Loads modules specific options at startup.
/etc/mygate:
Defines the systems default router or gateway.
/etc/myname:
Specifies the real host name for the system.
/etc/netsvc.conf:
Specifies how different name resolution services will look up names.
/etc/nodename:
Specifies the real hostname for the system.
/etc/nologin:
Is a text file that, if it exists in /etc/, will prevent non-root users from logging in (for example, during a system shutdown). A user who attempts to log in is shown the contents of the file and then disconnected.

AIX useful links

Thank you for your interest in AIX.

The following resources will help you in learning more about AIX:


Learn more about AIX
AIX & UNIX developerWorks zone
http://www-128.ibm.com/developerworks/aix

PowerAIX.org with the Indian User Group
http://www.poweraix.org/groups?groupid=5


Other useful Websites
Support for IBM System p servers
http://www-03.ibm.com/servers/eserver/support/unixservers

Technical Database for AIX
http://www14.software.ibm.com/webapp/set2/srchBroker/views/srchBroker.jsp

Redbooks domain for IBM System p
http://www.redbooks.ibm.com/portals/UNIX

Virtual Loaner Program
http://www.ibm.com/servers/enable/site/vlp


Whitepapers
Performance tuning UNIX systems
http://www-128.ibm.com/developerworks/eserver/library/au-unix-niceprocesses.html

Basic UNIX filesystem operations
http://www-128.ibm.com/developerworks/aix/library/au-unix-readdir.html

AIX 5L service strategy and best practices
http://www-03.ibm.com/servers/eserver/support/unixservers/aix_service_strategy.html

Hardware Management Console (HMC)
http://www-03.ibm.com/servers/eserver/support/unixservers/hmc_best_practices.html

High Availability on IBM System p servers. Considerations and sample architectures
http://www-03.ibm.com/servers/eserver/support/unixservers/arch_ha_systemp5.html

IBM System p servers in Production environments. Sample architectures and considerations
http://www-03.ibm.com/servers/eserver/support/unixservers/arch_prod_systemp5.html

IBM System p Firmware and Microcode. Service Strategies and Best Practices
http://www-03.ibm.com/servers/eserver/support/unixservers/arch_firmware_Systemp5.html

Redpapers
System p5 Quad-Core Module: Technical Overview & Introduction
Length: 10 pages
http://www.redbooks.ibm.com/redpapers/abstracts/redp4150.html

Redbooks
IBM System p5 Approaches to 24x7 Availability Including AIX 5L
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg247196.html?Open

IBM AIX 5L Reference for HP-UX System Administrators
http://www.redbooks.ibm.com/abstracts/sg246767.html?Open