Saturday, December 15, 2007

History

AIX Version 1, introduced in 1986 for the IBM 6150 RT workstation, was based on UNIX System V Releases 1 and 2. In developing AIX, IBM and INTERACTIVE Systems Corporation (whom IBM contracted) also incorporated source code from 4.2 and 4.3BSD UNIX.

Among other variants, IBM later produced AIX Version 3 (also known as AIX/6000), based on System V Release 3, for their IBM POWER-based RS/6000 platform. Since 1990, AIX has served as the primary operating system for the RS/6000 series (now called System p by IBM).

AIX Version 4, introduced in 1994, added support for symmetric multiprocessing alongside the first RS/6000 SMP servers. AIX Version 4 continued to evolve through the 1990s, culminating in AIX 4.3.3 in 1999.

In the late 1990s, under Project Monterey, IBM and the Santa Cruz Operation planned to integrate AIX and UnixWare into a single 32-bit/64-bit multiplatform UNIX with particular emphasis on supporting the Intel IA-64 architecture. This never came to fruition, though a beta test version of AIX 5L for IA-64 systems was released.

AIX 6 was announced in May 2007 and ran an open beta from June 2007 until the general availability (GA) of AIX 6.1 on November 9, 2007. Major new features in AIX 6.1 include full Role Based Access Control (RBAC), Workload Partitions (WPARs) enabling application mobility, and Live Partition Mobility on the new POWER6 hardware.

AIX operating system

AIX® helps you to meet the demanding requirements for performance, energy efficiency, flexibility and availability in your IT environment.
  • Market momentum: AIX has dramatically expanded its share of the UNIX® market over the past several years – go with the leader!*
  • Unmatched performance: Industry-leading benchmarks underscore the highly scalable performance of AIX on IBM systems running Power Architecture™ technology-based processors.
  • Virtualization: AIX provides software-based virtualization and exploits hardware-based virtualization capabilities. These capabilities can help you consolidate workloads to increase server utilization and lower energy and other costs.
  • Security: AIX includes a number of features designed to provide for a secure IT environment – and AIX can integrate into your existing security infrastructure.
  • Broad application support: Independent software vendors recognize AIX as a premier UNIX operating system. With over 8000 applications from over 3000 ISVs, the applications you need are probably already available on AIX.
  • Binary compatibility: AIX has a long history of maintaining binary compatibility from one release to the next – so that you can upgrade to the latest release with confidence.**
  • Legacy systems inspiration: AIX delivers the reliability, availability and security of a mainframe environment, including industry-leading availability techniques.
  • Simple migration: IBM's customized service offerings make migration from competitive UNIX platforms to AIX quick and easy; consolidate all of your UNIX workload onto AIX.

AIX UNIX operating system

AIX® is an open standards-based UNIX® operating system that allows you to run the applications you want, on the hardware you want—IBM UNIX servers. AIX, in combination with IBM's virtualization offerings, provides new levels of flexibility and performance, allowing you to consolidate workloads on fewer servers, which can increase efficiency and conserve energy. AIX delivers high levels of security, integration, flexibility and reliability—essential for meeting the demands of today's information technology environments. AIX runs on IBM systems based on Power Architecture™ technology.


Featured topic

AIX 6

AIX Version 6.1
Now available, the next step in the evolution of the UNIX operating system introduces new capabilities for virtualization, security, availability and manageability.

Latest AIX offerings

AIX Version 6.1
AIX 6.1 extends the capabilities of the AIX operating system by including features such as Workload Partitions, Live Application Mobility, Role Based Access Control, Encrypting Filesystems, Concurrent AIX Kernel Updates and many other features. This release underscores the strategic importance of AIX as it delivers groundbreaking new features while maintaining full binary compatibility with previous AIX releases. More than ever, this release of AIX allows you to more efficiently use your IBM servers to support the needs of your business. AIX V6.1 represents the latest advance in a long record of IBM operating system innovation.





Independent software vendor support for AIX

Independent software vendors recognize AIX as a premier UNIX operating system. Over 3000 ISVs now support thousands of applications on the AIX operating system. It is likely that the applications that your business needs are already available on AIX. For more unique applications, IBM offers a rapid process for porting applications to the latest AIX release.


AIX in action

Read the latest news and client success stories.
IBM AIX Achieves UNIX 03 Product Standard Certification
IBM celebrates 20 years of AIX
IBM launches collaboration center for UNIX

AIX/HMC Tip Sheet

HMC Commands
lshmc -n (lists dynamic IP addresses served by HMC)
lssyscfg -r sys -F name,ipaddr (lists managed system attributes)
lssysconn -r sys (lists attributes of managed systems)
lssysconn -r all (lists all known managed systems with attributes)
rmsysconn -o remove -ip (removes a managed system from the HMC)
mkvterm -m {msys} -p {lpar} (opens a command line vterm from an ssh session)
rmvterm -m {msys} -p {lpar} (closes an open vterm for a partition)
Activate a partition
chsysstate -m managedsysname -r lpar -o on -n partitionname -f profilename -b normal
chsysstate -m managedsysname -r lpar -o on -n partitionname -f profilename -b sms
Shutdown a partition
chsysstate -m managedsysname -r lpar -o {shutdown/osshutdown} -n partitionname [-immed][-restart]
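For example, with hypothetical managed system, partition and profile names:
chsysstate -m p570-prod -r lpar -o on -n lpar01 -f normal -b sms (activates lpar01 into SMS with the "normal" profile)
chsysstate -m p570-prod -r lpar -o osshutdown -n lpar01 (clean OS shutdown of lpar01)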
VIO Server Commands
lsdev -virtual (list all virtual devices on VIO server partitions)
lsmap -all (lists mapping between physical and logical devices)
oem_setup_env (change to OEM [AIX] environment on VIO server)
Create Shared Ethernet Adapter (SEA) on VIO Server
mkvdev -sea {physical adapt} -vadapter {virtual eth adapt} -default {dflt virtual adapt} -defaultid {dflt vlan ID}
SEA Failover
ent0 – GigE adapter
ent1 – Virt Eth VLAN1 (Defined with a priority in the partition profile)
ent2 – Virt Eth VLAN 99 (Control)
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2
(Creates ent3 as the Shared Ethernet Adapter)
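To check which side of the SEA failover pair is currently active, one option (a sketch; exact field names vary by VIOS level) is to inspect the SEA statistics:
entstat -all ent3 | egrep -i "state|priority"
(the active VIOS typically reports the SEA state as PRIMARY and the standby as BACKUP)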
Create Virtual Storage Device Mapping
mkvdev -vdev {LV or hdisk} -vadapter {vhost adapt} -dev {virt dev name}
Sharing a Single SAN LUN from Two VIO Servers to a Single VIO Client LPAR
hdisk3 = SAN LUN (on vioa server)
hdisk4 = SAN LUN (on viob, same LUN as vioa)
chdev -dev hdisk3 -attr reserve_policy=no_reserve (from vioa to prevent a reserve on the disk)
chdev -dev hdisk4 -attr reserve_policy=no_reserve (from viob to prevent a reserve on the disk)
mkvdev -vdev hdisk3 -vadapter vhost0 -dev hdisk3_v (from vioa)
mkvdev -vdev hdisk4 -vadapter vhost0 -dev hdisk4_v (from viob)
VIO Client would see a single LUN with two paths.
lspath -l hdiskx (where hdiskx is the newly discovered disk)
This will show two paths, one down vscsi0 and the other down vscsi1.
AIX Performance Tidbits and Starter Set of Tunables
Here is the current starter set of recommended AIX 5.3 performance parameters. Please test these before implementing them in production, as your mileage may vary.
Network
no -p -o rfc1323=1
no -p -o sb_max=1310720
no -p -o tcp_sendspace=262144
no -p -o tcp_recvspace=262144
no -p -o udp_sendspace=65536
no -p -o udp_recvspace=655360
nfso -p -o rfc_1323=1
NB: These network settings also need to be applied to the individual adapters (see the example below)
nfso -p -o nfs_socketsize=600000
nfso -p -o nfs_tcp_socketsize=600000
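As noted above, the TCP settings also need to be applied at the adapter (interface) level. A minimal sketch, assuming the interface is en0 and that interface-specific network options (use_isno) are enabled, which is the default:
chdev -l en0 -a tcp_sendspace=262144 -a tcp_recvspace=262144 -a rfc1323=1
(depending on the state of the interface you may need to add -P and bring the interface down and up for the change to take effect)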
Memory Settings
vmo -p -o minperm%=5
vmo -p -o maxperm%=80
vmo -p -o maxclient%=80
Let strict_maxperm and strict_maxclient default
vmo -p -o minfree=960
vmo -p -o maxfree=1088
vmo -p -o lru_file_repage=0
vmo -p -o lru_poll_interval=10
IO Settings
Let minpgahead and J2_minPageReadAhead default
ioo -p -o j2_maxPageReadAhead=128
ioo -p -o maxpgahead=16
ioo -p -o j2_maxRandomWrite=32
ioo -p -o maxrandwrt=32
ioo -p -o j2_nBufferPerPagerDevice=1024
ioo -p -o pv_min_pbuf=1024
ioo -p -o numfsbufs=2048
If doing lots of raw I/O you may want to change lvm_bufcnt
Default is 9
ioo -p -o lvm_bufcnt=12
Others left to default that you may want to tweak include:
ioo -p -o numclust=1
ioo -p -o j2_nRandomCluster=0
ioo -p -o j2_nPagesPerWriteBehindCluster=32
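Before changing any of these, it helps to capture the current values so they can be rolled back later; a minimal sketch:
ioo -a > /tmp/ioo.before (record all current ioo values)
vmo -a > /tmp/vmo.before (same idea for vmo)
ioo -o lvm_bufcnt (display a single tunable, e.g. lvm_bufcnt)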
Useful Commands
vmstat -v or -l or -s
vmo -o
ioo -o
schedo -o
lvmstat
lvmo
iostat (many new flags)
svmon
filemon
fileplace
Useful Links
1. Lparmon – www.alphaworks.ibm.com/tech/lparmon
2. Nmon – www.ibm.com/collaboration/wiki/display/WikiPtype/nmon
3. Nmon Analyser – www-941.ibm.com/collaboration/wiki/display/WikiPtype/nmonanalyser
4. vmo, ioo, vmstat, lvmo and other AIX commands http://publib.boulder.ibm.com/infocenter/pseries/v5r3/topic/com.ibm.aix.doc/doc

Configuring MPIO

Configuring MPIO for the virtual AIX client
This document describes the procedure to set up Multi-Path I/O (MPIO) on the AIX clients of the virtual I/O server.
Procedure:
This procedure assumes that the disks are already allocated to both the VIO servers involved in this configuration.
•Creating Virtual Server and Client SCSI Adapters
First, use the HMC to create virtual SCSI server adapters on the two VIO servers, and then create two virtual client SCSI adapters on the newly created client partition, each mapping to one of the VIO servers' server SCSI adapters.
Here is an example of configuring and exporting an ESS LUN from both VIO servers to a client partition:
•Selecting the disk to export
You can check for the ESS LUN that you are going to use for MPIO by running the following command on the VIO servers.
On the first VIO server:
$ lsdev -type disk
name status description
..
hdisk3 Available MPIO Other FC SCSI Disk Drive
hdisk4 Available MPIO Other FC SCSI Disk Drive
hdisk5 Available MPIO Other FC SCSI Disk Drive
..
$ lspv
..
hdisk3 00c3e35c99c0a332 None
hdisk4 00c3e35c99c0a51c None
hdisk5 00c3e35ca560f919 None
..
In this case hdisk5 is the ESS disk that we are going to use for MPIO.
Then run the following command to list the attributes of the disk that you choose for MPIO:
$ lsdev -dev hdisk5 -attr
..
algorithm fail_over Algorithm True
..
lun_id 0x5463000000000000 Logical Unit Number ID False
..
..
pvid 00c3e35ca560f9190000000000000000 Physical volume identifier False
..
reserve_policy single_path Reserve Policy True
Note down the lun_id, pvid and the reserve_policy of hdisk5.
•Command to change reservation policy on the disk
You see that the reserve policy is set to single_path.
Change this to no_reserve by running the following command:
$ chdev -dev hdisk5 -attr reserve_policy=no_reserve
hdisk5 changed
On the second VIO server:
On the second VIO server too, find the hdisk# that has the same pvid; it could be a different hdisk number than on the first VIO server, but the pvid should be the same.
$ lspv
..
hdisk7 00c3e35ca560f919 None
..
The pvid of hdisk7 is the same as that of hdisk5 on the first VIO server.
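A quick way to locate the disk with the matching pvid on the second server, assuming you noted the pvid on the first server and that your shell permits pipelines:
$ lspv | grep 00c3e35ca560f919
hdisk7 00c3e35ca560f919 None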
$ lsdev -type disk
name status description
..
hdisk7 Available MPIO Other FC SCSI Disk Drive
..
$ lsdev -dev hdisk7 -attr
..
algorithm fail_over Algorithm True
..
lun_id 0x5463000000000000 Logical Unit Number ID False
..
pvid 00c3e35ca560f9190000000000000000 Physical volume identifier False
..
reserve_policy single_path Reserve Policy True
You will note that the lun_id and pvid of hdisk7 on this server are the same as those of hdisk5 on the first VIO server.
$ chdev -dev hdisk7 -attr reserve_policy=no_reserve
hdisk7 changed
•Creating the Virtual Target Device

Now on both the VIO servers run the mkvdev command using the appropriate hdisk#s respectively.
$ mkvdev -vdev hdisk# -vadapter vhost# -dev vhdisk#
The above command would fail on the second VIO server if the reserve_policy had not been set to no_reserve on the hdisk.
After the command runs successfully on both servers, we have the same LUN exported to the client with the mkvdev command on both servers.
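To confirm that the virtual target device now exists on each VIO server, you can list the mappings for the vhost adapter used above (a sketch; the vhost number and device names will differ in your environment):
$ lsmap -vadapter vhost0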
•Check for correct mapping between the server and the client
Double-check via the HMC that the slot numbers of the client virtual SCSI adapters match the respective slot numbers of the server adapters on the VIO servers.
In the example, slot number 4 of the client virtual SCSI adapter maps to slot number 5 of the VIO server VIO1_nimtb158, and slot number 5 of the client virtual SCSI adapter maps to slot number 5 of the VIO server VIO1_nimtb159.

•On the client partition
Now you are ready to install the client. You can install the client using any of the following methods described in the virtualization Redbook at http://www.redbooks.ibm.com/redpieces/abstracts/sg247940.html:
1. NIM installation
2. Alternate disk installation
3. CD media installation
Once you install the client, run the following commands to check for MPIO:
# lsdev -Cc disk
hdisk0 Available Virtual SCSI Disk Drive
# lspv
hdisk0 00c3e35ca560f919 rootvg active
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
•Dual Path
When one of the VIO servers goes down, the path coming from that server shows as failed with the lspath command.
# lspath
Failed hdisk0 vscsi0
Enabled hdisk0 vscsi1
•Path Failure Detection
The path remains in the "failed" state even after the VIO server is up again. We need to either change the status to "enabled" with the chpath command, or set the attributes hcheck_interval and hcheck_mode to "60" and "nonactive" respectively, for a path failure to be detected and recovered automatically.
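For example, to bring a failed path back manually on the client once the VIO server has recovered (a sketch; substitute your own disk and vscsi adapter names):
# chpath -l hdisk0 -p vscsi0 -s enable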
•Setting the related attributes
Here is the command to be run for setting the above attributes on the client partition:
$ chdev -l hdisk# -a hcheck_interval=60 -a hcheck_mode=nonactive -P
The VIO AIX client needs to be rebooted for the hcheck_interval attribute to take effect.
•EMC for Storage
If you are using an EMC device as the storage device attached to the VIO server, make sure of the following:
1. PowerPath version 4.4 is installed on the VIO servers.
2. hdiskpower devices are created and shared between both the VIO servers.
•Additional Information
Another thing to note is that you cannot use the same name for the Virtual SCSI Server Adapter and the Virtual Target Device; the mkvdev command will error out if the same name is used for both.
$ mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev hdiskpower0
Method error (/usr/lib/methods/define -g -d):
0514-013 Logical name is required.
The reserve attribute is named differently for an EMC device than for an ESS or FAStT storage device; it is "reserve_lock".
Run the following command as padmin to check the value of the attribute:
$ lsdev -dev hdiskpower# -attr reserve_lock
Run the following command as padmin to change the value of the attribute:
$ chdev -dev hdiskpower# -attr reserve_lock=no
•Commands to change the Fibre Channel Adapter attributes
Also change the following attributes of the fscsi# devices: fc_err_recov to "fast_fail" and dyntrk to "yes":
$ chdev -dev fscsi# -attr fc_err_recov=fast_fail dyntrk=yes -perm
The reason for changing fc_err_recov to "fast_fail" is that if the Fibre Channel adapter driver detects a link event, such as a lost link between a storage device and a switch, any new I/O or future retries of the failed I/Os will be failed immediately by the adapter until the adapter driver detects that the device has rejoined the fabric. The default setting for this attribute is "delayed_fail".
Setting the dyntrk attribute to "yes" makes AIX tolerate cabling changes in the SAN.
The VIOS needs to be rebooted for fscsi# attributes to take effect.
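To verify the settings after the reboot, the same lsdev form used earlier for reserve_lock can be applied to the adapter (a sketch, assuming fscsi0):
$ lsdev -dev fscsi0 -attr fc_err_recov
$ lsdev -dev fscsi0 -attr dyntrk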

Other useful AIX commands

Introduction

As you know, AIX® has a vast array of commands that enable you to do a multitude of tasks. Depending on what you need to accomplish, you use only a certain subset of these commands. These subsets differ from user to user and from need to need. However, there are a few core commands that you commonly use. You need these commands either to answer your own questions or to provide answers to the queries of the support professionals.

In this article, I'll discuss some of these core commands. The intent is to provide a list that you can use as a ready reference. While the behavior of these commands should be identical in all releases of AIX, they have only been tested under AIX 5.3.

Note:
The bootinfo command discussed in the following paragraphs is NOT a user-level command and is NOT supported in AIX 4.2 or later.

Commands

Kernel

How would I know if I am running a 32-bit kernel or 64-bit kernel?

To display if the kernel is 32-bit enabled or 64-bit enabled, type:
bootinfo -K
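The command prints either 32 or 64. If you would rather avoid bootinfo (see the note above), getconf should report the same information:
getconf KERNEL_BITMODE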

How do I know if I am running a uniprocessor kernel or a multiprocessor kernel?

/unix is a symbolic link to the booted kernel. To find out what kernel mode is running, enter ls -l /unix and see what file /unix links to. The following are the three possible outputs from the ls -l /unix command and their corresponding kernels:

/unix -> /usr/lib/boot/unix_up   # 32-bit uniprocessor kernel
/unix -> /usr/lib/boot/unix_mp   # 32-bit multiprocessor kernel
/unix -> /usr/lib/boot/unix_64   # 64-bit multiprocessor kernel

Note:
AIX 5L Version 5.3 does not support a uniprocessor kernel.

How can I change from one kernel mode to another?

During the installation process, one of the kernels, appropriate for the AIX version and the hardware in operation, is enabled by default. Let us use the method from the previous question and assume the 32-bit kernel is enabled. Let us also assume that you want to boot it up in the 64-bit kernel mode. This can be done by executing the following commands in sequence:

#ln -sf /usr/lib/boot/unix_64 /unix
#ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
#bosboot -ad /dev/hdiskxx
#shutdown -r

The /dev/hdiskxx device is the disk where the boot logical volume /dev/hd5 is located. To find out what xx is in hdiskxx, run the following command:

#lslv -m hd5
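The PV columns of the output name the disk(s) holding hd5. A one-liner sketch to pull out just the disk name of the first copy (assuming the usual two header lines in the output; mirrored copies appear in the later PV columns):
#lslv -m hd5 | awk 'NR>2 {print $3}' | sort -u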

Note:
In AIX 5.2, the 32-bit kernel is installed by default. In AIX 5.3, the 64-bit kernel is installed on 64-bit hardware and the 32-bit kernel is installed on 32-bit hardware by default.

Hardware

How would I know if my machine is capable of running AIX 5L Version 5.3?

AIX 5L Version 5.3 runs on all currently supported CHRP (Common Hardware Reference Platform)-based POWER hardware.

How would I know if my machine is CHRP-based?

Run the prtconf command. If it's a CHRP machine, the string chrp appears on the Model Architecture line.

How would I know if my System p machine (hardware) is 32-bit or 64-bit?

To display if the hardware is 32-bit or 64-bit, type:

#bootinfo -y

How much real memory does my machine have?

To display real memory in kilobytes (KB), type one of the following:

#bootinfo -r

#lsattr -El sys0 -a realmem
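prtconf also reports the total memory, in megabytes, which can be easier to read (a sketch):
#prtconf | grep -i "memory size"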

Can my machine run the 64-bit kernel?

64-bit hardware is required to run the 64-bit kernel.

What are the values of attributes for devices in my system?

To list the current values of the attributes for the tape device, rmt0, type:

#lsattr -l rmt0 -E

To list the default values of the attributes for the tape device, rmt0, type:

#lsattr -l rmt0 -D

To list the possible values of the login attribute for the TTY device, tty0, type:

#lsattr -l tty0 -a login -R

To display system level attributes, type:

#lsattr -E -l sys0

How many processors does my system have?

To display the number of processors on your system, type:

#lscfg | grep proc
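lsdev gives an equivalent view by listing the processor devices themselves (a sketch):
#lsdev -Cc processor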

How many hard disks does my system have and which ones are in use?

To display the number of hard disks on your system, type:

#lspv

How do I list information about a specific physical volume?

To find details about hdisk1, for example, run the following command:

#lspv hdisk1

How do I get a detailed configuration of my system?

Type the following:

#lscfg

The following options provide specific information:
-p

Displays platform-specific device information. The flag is applicable to AIX 4.2.1 or later.
-v
Displays the VPD (Vital Product Database) found in the customized VPD object class.

For example, to display details about the tape drive, rmt0, type:

#lscfg -vl rmt0

You can obtain very similar information by running the prtconf command.

How do I find out the chip type, system name, node name, model number, and so forth?

The uname command provides details about your system.

#uname -p
Displays the chip type of the system. For example, PowerPC.

#uname -r
Displays the release number of the operating system.

#uname -s
Displays the system name. For example, AIX.

#uname -n
Displays the name of the node.

#uname -a
Displays the system name, nodename, version, machine ID.

#uname -M
Displays the system model name. For example, IBM, 9114-275.

#uname -v
Displays the operating system version.

#uname -m
Displays the machine ID number of the hardware running the system.

#uname -u
Displays the system ID number.
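A small ksh sketch that combines several of these flags into a one-line system summary (purely illustrative):

#printf "Node: %s  Model: %s  Chip: %s  OS: %s %s.%s\n" \
  "$(uname -n)" "$(uname -M)" "$(uname -p)" "$(uname -s)" "$(uname -v)" "$(uname -r)"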

HDLM installation on VIO Server

Procedure: Install/Update HDLM drivers

# Login to VIO server as "padmin".
# Switch to "oem" prompt.
oem_setup_env
umount /mnt
mount bosapnim01:/export/lpp_source/hitachi /mnt

# Install and update all filesets from the directories below.
# "smitty install_all"
cd /mnt/hdlm_5601/aix_odm/V5.0.0.1
cd /mnt/hdlm_5601/aix_odm/V5.0.0.4u
cd /mnt/hdlm_5601/aix_odm/V5.0.1.4U
cd /mnt/hdlm_5601/aix_odm/V5.0.52.1U

# Copy license file.
cd /mnt/hdlm_5601/license/enterprise
cp *.plk /var/tmp/hdlm_license

# install and update all filesets from the above directory
# "smitty install_all"
# Fileset DLManager 5.60.1.100 Hitachi Dynamic Link Manager
cd /mnt/hdlm_5601

# Leave the current Directory and unmount Driver Source Directory.
cd /
umount /mnt


Procedure: Install/Update VIO fixpack

# Login to VIO server as "padmin"
# Obtain current IOS level
ioslevel

# Update VIO to latest IOS level
mount bosapnim01:/export/lpp_source/aix/vio_1200 /mnt
updateios -dev /mnt
** Enter "y" to continue install

# Return to "root" shell prompt and HALT system.
oem_setup_env
shutdown -Fh

# Activate LPAR from HMC WebSM


Procedure: Configure VIO Server to utilize Boot Disks
# Login as "padmin"
# Switch to "oem" prompt
oem_setup_env

# Run in korn shell 93
ksh93

# Remove any vhost adapter configuration settings
for (( i=0; i<=48; ++i ))
do
/usr/ios/cli/ioscli rmdev -pdev vhost${i}
done

# Remove all HDLM disks
for i in $( lsdev -Cc disk -F name | grep dlmfdrv )
do
rmdev -Rdl ${i}
done

# Remove all hdisks except for hdisk0 and hdisk1 - assumed to be rootvg
for i in $( lsdev -Cc disk -F name | grep hdisk | egrep -v 'hdisk0$|hdisk1$' )
do
rmdev -Rdl ${i}
done

# If an HDLM unconfig file exists, rename it
[[ -f /usr/DynamicLinkManager/drv/dlmfdrv.unconf ]] &&
mv /usr/DynamicLinkManager/drv/dlmfdrv.unconf \
/usr/DynamicLinkManager/drv/$( date +"%Y%m%d").dlmfdrv.unconf

# Verify "dlmfdrv.unconf" was renamed.
ls /usr/DynamicLinkManager/drv

# Set fast fail Parameter for SCSI Adapters and Reconfigure FC Adapters
chdev -l fscsi0 -a fc_err_recov=fast_fail
chdev -l fscsi1 -a fc_err_recov=fast_fail
chdev -l fscsi2 -a fc_err_recov=fast_fail
cfgmgr -vl fcs0
cfgmgr -vl fcs1
cfgmgr -vl fcs2

# Change HDLM settings
cd /usr/DynamicLinkManager/bin
print y | ./dlmodmset -e on
print y | ./dlmodmset -b 68608

# Reconfigure HDLM disks
./dlmcfgmgr

# Turn off reserve settings on HDLM Driver
./dlnkmgr set -rsv on 0 -s

# Remove HDLM disks
for i in $( lsdev -Cc disk -F name | grep dlmfdrv )
do
rmdev -Rdl ${i}
done

# Change reserve policy on hdisks to "no_reserve"
for i in $( lsdev -Cc disk -F name |
grep hdisk |
egrep -v 'hdisk0$|hdisk1$' )
do
chdev -l ${i} -a reserve_policy=no_reserve
done

# Reconfigure HDLM disks
./dlmcfgmgr

# Verify all HDLM disks have an assigned PVID
for i in $( lsdev -Cc disk -F name | grep dlmfdrv )
do
chdev -l ${i} -a pv=yes
done
lspv

# List virtual SCSI device mappings on all vhost adapters
/usr/ios/cli/ioscli lsmap -all

# Verify all vhost adapters exist without devices.
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U9119.590.51A432C-V3-C10 0x00000000

VTD NO VIRTUAL TARGET DEVICE FOUND

# Reboot VIO Server.
shutdown -Fr