Monday 22 May 2017

VMM concepts

Virtual-memory segments are partitioned in units called pages; each page is either located in real physical memory (RAM) or stored on disk until it is needed. AIX uses virtual memory to address more memory than is physically available in the system. The management of memory pages in RAM or on disk is handled by the VMM.
A page is a fixed-size block of data (usually 4096 bytes). A page might be resident in memory (that is, mapped into a location in physical memory), or a page might be resident on disk (that is, paged out of physical memory into paging space or a file system).
The VMM maintains a free list of available page frames. The VMM also uses a page-replacement algorithm to determine which virtual-memory pages currently in RAM will have their page frames reassigned to the free list.
AIX tries to use all of RAM all of the time, except for a small amount which it maintains on the free list. To maintain this small amount of unallocated pages the VMM uses page outs and page steals to free up space and reassign those page frames to the free list.
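You can watch the size of the free list in real time by running vmstat at an interval (a quick sketch; the interval and count are arbitrary):
# vmstat 2 5               <-- report every 2 seconds, 5 times; the "fre" column shows free page frames (4 KB each)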
overhead             -- The load that AIX incurs while sharing resources between user processes and performing its internal accounting.
page                 -- A fixed-size (4KB) block of memory.
page fault           -- Occurs when a process tries to access an address in virtual memory that does not have a location in physical memory.
                        In response, the system tries to load the appropriate data from the hard disk.
page stealing daemon -- The daemon responsible for releasing pages of memory for use by other processes
                        (It makes room for incoming pages by swapping out memory pages that are not part of the working set of a process.)
paging in            -- Reading pages from swap.
paging out           -- Writing pages to swap in order to release physical memory for other use.
The kernel continuously checks whether the number of pages on the free list is below a threshold. If so, the page stealing daemon becomes active and begins copying pages to the swap area, starting with the least recently used pages. Each page placed on the free list then becomes available for use by other processes. Pages written out to swap must be read back into physical memory when the process needs them again.
The AIX VMM integrates cached file data with the management of other types of virtual memory (for example, process data, process stack, and so forth). It caches the file data as pages, just like virtual memory for processes.
(In most modern computer systems, each thread has a reserved region of memory referred to as its stack.)
------------------

Working Storage

Working storage pages are pages that contain volatile data (in other words, data that is not preserved across a reboot).
Examples of virtual memory regions that consist of working storage pages are:
    - Process data
    - Stack
    - Shared memory
    - Kernel data
When modified working storage pages need to be paged out (moved from memory to the disk), they are written to paging space. Working storage pages are never written to a file system.
When a process exits, the system releases all of its private working storage pages. Thus, the system releases the working storage pages for the data of a process and stack when the process exits.
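Because modified working storage pages are paged out to paging space, the lsps command is a quick way to see how much paging space is defined and in use (a sketch; output columns vary by AIX level):
# lsps -a                  <-- size and %Used of each paging space
# lsps -s                  <-- summary: total paging space and percent used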

Permanent Storage

Permanent storage pages are pages that contain permanent data (that is, data that is preserved across a reboot). This permanent data is just file data. So, permanent storage pages are basically just pieces of files cached in memory.
When a modified permanent storage page needs to be paged out (moved from memory to disk), it is written to a file system.
You can divide permanent storage pages into two sub-types:
    - Non-client pages (aka persistent pages): these are pages containing cached Journaled File System (JFS) file data
    - Client pages: These are pages containing cached data for all other file systems (for example, JFS2 and Network File System (NFS))
------------------
In order to help optimize which pages are selected for replacement by the page replacement daemons, AIX classifies pages into one of two types:
    - Computational pages: pages used for the text, data, stack, and shared memory of a process
    - Non-computational pages: pages containing file data for files that are being read and written.
All working storage pages are computational. A working storage page is never marked as non-computational.
Depending on how you use the permanent storage pages, the pages can be computational or non-computational. If a file contains executable text for a process, the system treats the file as computational and marks all of the permanent storage pages in the file as computational. If the file does not contain executable text, the system treats the file as non-computational and marks all of the pages in the file as non-computational.
Once a file has been marked as computational, it remains marked as a computational file until the file is deleted (or the system is rebooted). Thus, a file remains marked as computational even after it is moved or renamed.
------------------

Page replacement

The AIX page replacement daemons scan memory a page at a time to find pages to evict in order to free up memory. They must choose pages carefully to minimize the performance impact of paging on the system, and they target pages of different classes based on tunable parameter settings and system conditions.
There are a number of tunable parameters that you can use to control how AIX selects pages to replace.
------------------

minperm and maxperm

The two most basic page replacement tunable parameters are minperm and maxperm. These tunable parameters are used to indicate how much memory the AIX kernel should use to cache non-computational pages. The maxperm tunable parameter indicates the maximum amount of memory that should be used to cache non-computational pages. The minperm limit indicates the target minimum amount of memory that should be used for non-computational pages.
By default, maxperm is an "un-strict" limit, so it allows more non-computational files to be cached in memory when there is available free memory. The maxperm limit can be made a "strict" limit by setting the strict_maxperm tunable parameter to 1.
(The disadvantage of this is that the number of non-computational pages cannot grow beyond maxperm and consume more memory even when there is free memory on the system.)
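To check the current values of these tunables before changing anything, you can query vmo directly, for example:
# vmo -o minperm% -o maxperm% -o strict_maxperm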

numperm (lru_file_repage)

The number of non-computational pages is referred to as numperm. The vmstat -v command displays the numperm value for a system as a percentage of a system's real memory.
When the number of non-computational pages (numperm) is greater than or equal to maxperm, the AIX page replacement daemons strictly target non-computational pages (for example, cached files that are not executables).
When the number of non-computational pages (numperm) is less than or equal to minperm, the AIX page replacement daemons target both computational and non-computational pages. In this case, AIX scans both classes of pages and evicts the least recently used pages.
When the number of non-computational pages (numperm) is between minperm and maxperm, the lru_file_repage (least recently used) tunable parameter controls what kind of pages the AIX page replacement daemons should steal.
In most customer environments, it is optimal to have the kernel always target non-computational pages, because paging computational pages (for example, a process's stack, data, and so forth) usually has a much higher performance cost on a process than paging non-computational pages (that is, file data cache). Thus, the lru_file_repage tunable parameter can be set to 0; in this case, the AIX kernel always targets non-computational pages when numperm is between minperm and maxperm.
------

maxclient

The maxclient tunable parameter specifies a limit on the maximum amount of memory that should be used to cache non-computational client pages. Because all non-computational client pages are a subset of the total number of non-computational permanent storage pages, the maxclient limit must always be less than or equal to the maxperm limit.

numclient

The number of non-computational client pages is referred to as numclient. The vmstat -v command displays the numclient value for a system as a percentage of a system’s real memory.
By default, the maxclient limit is a strict limit. This means that the AIX kernel does not allow the non-computational client file cache to exceed the maxclient limit (that is, the AIX kernel does not allow numclient to exceed maxclient). When numclient reaches the maxclient limit, the AIX page replacement daemons strictly target client pages.
------

minfree, maxfree

Two other important parameters are minfree and maxfree. If the number of pages on your free list (vmstat -v: free pages) falls below the minfree parameter, the VMM starts to steal pages just to replenish the free list, which adds paging overhead. It continues to do this until the free list has at least the number of pages specified by the maxfree parameter.
# vmstat -v                <-- for non-computational file-cache
       4980736 memory pages
        739175 lruable pages
        432957 free pages
             1 memory pools
         84650 pinned pages
          80.0 maxpin percentage
          20.0 minperm percentage      <<- system’s minperm% setting
          80.0 maxperm percentage      <<- system’s maxperm% setting
           2.2 numperm percentage      <<- % of memory containing non-comp. pages
         16529 file pages              <<- # of non-comp. pages
           0.0 compressed percentage
             0 compressed pages
           2.2 numclient percentage    <<- % of memory containing non-comp. client pages
          80.0 maxclient percentage    <<- system’s maxclient% setting
         16503 client pages            <<- # of client pages
So, in the above example, there are 16529 non-computational file pages mapped into memory. These non-computational pages consume 2.2 percent of memory. Of these 16529 non-computational file pages, 16503 of them are client pages.
The vmstat output does not provide information about computational file pages. Information about computational file pages can be gathered from the svmon command.
# svmon -G                <--in memory pages of each type (work, pers., client)
               size      inuse       free        pin    virtual
memory       786432     209710     576722     133537     188426
pg space     131072       1121
               work       pers       clnt
pin          133537          0          0
in use       188426          0      21284
    - work: working storage
    - pers: persistent storage (persistent storage pages are non-client pages - that is, JFS pages.)
    - clnt: client storage
For each page type, svmon displays two rows:
    - in use: number of 4K pages mapped into memory
    - pin: number of 4K pages mapped into memory and pinned (pin is a subset of inuse)
So, in the above example, there are 188426 working storage pages mapped into memory. Of those 188426 working storage pages, 133537 of them are pinned (that is, can’t be paged out).
There are no persistent storage pages (because there are no JFS filesystems in use on the system). There are 21284 client storage pages, and none of them are pinned.
The svmon command does not display the number of permanent storage pages, but it can be calculated from the svmon output. As mentioned earlier, the number of permanent storage pages is the sum of the number of persistent storage pages and the number of client storage pages. So, in the above example, there are a total of 21284 permanent storage pages on the system:
0 persistent storage pages + 21284 client storage pages = 21284 permanent storage pages
The type of information reported by svmon is slightly different from that reported by vmstat. svmon reports information about the number of in-memory pages of different types: working, persistent (that is, non-client), and client. svmon does not report information about computational versus non-computational pages; it just reports the total number of in-memory pages of each page type.
In contrast, vmstat reports information about non-computational versus computational pages.
To illustrate this difference, consider the above example of svmon output. Some of the 21284 client pages will be computational, and the rest of the 21284 client pages will be non-computational. To determine the breakdown of these client pages between computational and non-computational, use the vmstat command to determine how many of the 21284 client pages are non-computational.
-----------
suggested:
lru_file_repage = 0
maxperm = 90%
maxclient = 90%
minperm = 3%
strict_maxclient = 1 (default)
strict_maxperm = 0 (default)
# vmo -p -o lru_file_repage=0 -o maxclient%=90 -o maxperm%=90 -o minperm%=3
# vmo -p -o strict_maxclient=1 -o strict_maxperm=0
The above tunable parameters settings are the default settings for AIX Version 6.1.
-----------------------------
minfree: Minimum acceptable number of real-memory page frames in the free list. When the size of the free list falls below this number, the VMM begins stealing pages. It continues stealing pages until the size of the free list reaches maxfree.
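The current values can be displayed, and raised persistently if needed, with vmo (the numbers below are only illustrative; minfree and maxfree apply per memory pool):
# vmo -o minfree -o maxfree
# vmo -p -o minfree=960 -o maxfree=1088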
-----------------------------
An example:
topas:
 MEMORY
 Real,MB   26623
 % Comp     57          <--memory used by processes (OS+applications); matches the nmon Process+System total below (46+11)
 % Noncomp  22          <--fs cache
 % Client   22          <--fs cache (for jfs2)
nmon:
 FileSystemCache
 (numperm) 22.5%        <--this is for fs cache
 Process   46.0%        <--this is for appl. processes
 System    11.3%        <--this is for the OS
 Free      20.2%        <--free
           -----
 Total    100.0%
-----------------------------
Excerpts from a tuning doc:
Set vmo:lru_file_repage=0; default=1  # Mandatory critical change
This change directs lrud to steal only JFS/JFS2 file-buffer pages unless/until numperm/numclient is less-than/equal-to vmo:minperm%, at which point lrud begins stealing both JFS/JFS2 file-buffer pages and computational memory pages.
Essentially stealing computational memory invokes pagingspace-pageouts.
I have found this change already made by most AIX 5.3 customers.
Set vmo:page_steal_method=1; default=0  # helpful, not critical
This change switches the lrud page-stealing algorithm from a physical memory address page-scanning method (=0) to a List-based page-scanning method (=1).
Set ioo:sync_release_ilock=1; default=0  # helpful, not critical
Default value =0 means that the i-node lock is held while all dirty pages of a file are flushed; thus, I/O to a file is blocked while the syncd daemon is running. Setting =1 will cause a sync() to flush all I/O to a file without holding the i-node lock, and then use the i-node lock to do the commit.
Execute vmstat -v and compare the following values/settings:
minperm    should be 10, 5 or 3; default=20  
maxperm    should be 80 or higher; default=80 or 90
maxclient    should be 80 or higher; default=80 or 90
numperm    real-time percent of non-computational memory (includes client below)
numclient    real-time percent of JFS2/NFS/vxfs filesystem buffer-cache
paging space page outs are triggered when numperm or numclient is less-than-or-equal-to minperm.  
Typically numperm and numclient is greater than minperm, and as such, no paging space page outs can be triggered.

Versions: POWER/PowerPC releases

AIX 7.1, September 10, 2010[3]
AIX 6.1, November 9, 2007[2]
AIX 5L 5.3, August 13, 2004[3]
AIX 5L 5.2, October 18, 2002[4], end of support April 30, 2009[5]
AIX 5L 5.1, May 4, 2001 (support discontinued April 1, 2006)[7]
AIX 4.3.3, September 17, 1999
AIX 4.3.2, October 23, 1998
AIX 4.3.1, April 24, 1998
AIX 4.3, October 31, 1997
AIX 4.2.1, April 25, 1997
AIX 4.2, May 17, 1996
AIX 4.1.5, November 8, 1996
AIX 4.1.4, October 20, 1995
AIX 4.1.3, July 7, 1995
AIX 4.1.1, October 28, 1994
AIX 4.1, August 12, 1994
AIX 4.0, 1994
AIX 3.2, 1992
AIX 3.1, February 1990
IBM PS/2 releases 
IBM 6150 RT releases


AIX 7.1, September 10, 2010[3]

  • AIX 5.2 Workload Partitions for AIX 7
  • Support for export of fibre channel adapters to WPARs
  • VIOS disk support in a WPAR
  • Cluster Aware AIX

AIX 6.1, November 9, 2007[2] 

  • Workload Partitions (WPARs) operating system-level virtualization
  • Live Application Mobility
  • Live Partition Mobility
  • Security

  1. Role Based Access Control RBAC
  2. AIX Security Expert - A system and network security hardening tool
  3. Encrypting JFS2 filesystem
  4. Trusted AIX
  5. Trusted Execution


  • Integrated Electronic Service Agent(tm) for auto error reporting
  • Concurrent Kernel Maintenance
  • Kernel exploitation of POWER6 storage keys
  • ProbeVue dynamic tracing
  • Systems Director Console for AIX
  • Integrated filesystem snapshot


AIX 5L 5.3, August 13, 2004[3] 


  • NFS Version 4
  • Advanced Accounting
  • Virtual SCSI
  • Virtual Ethernet
  • Exploitation of Simultaneous multithreading (SMT)
  • Micro-Partitioning enablement
  • POWER5 exploitation
  • JFS2 quotas
  • Ability to shrink a JFS2 filesystem
  • Kernel scheduler enhanced to dynamically increase and decrease the use of virtual processors.

AIX 5L 5.2, October 18, 2002[4], end of support April 30, 2009[5] 

  • Ability to run on the IBM BladeCenter JS20 with the PowerPC 970.
  • Minimum level required for POWER5 hardware
  • MPIO for Fibre Channel disks
  • iSCSI Initiator software
  • Participation in Dynamic LPAR
  • Concurrent I/O (CIO) feature introduced for JFS2, released in Maintenance Level 01 in May 2003[6]

AIX 5L 5.1, May 4, 2001 (Support discontinued April 1, 2006)[7] 

  • Ability to run on an IA-64 architecture processor, although this never went beyond beta[8]
  • Minimum level required for POWER4 hardware and the last release that worked on the Micro Channel architecture
  • 64-bit kernel, installed but not activated by default
  • JFS2
  • Ability to run in a Logical Partition on POWER4
  • The L stands for Linux affinity
  • Trusted Computing Base (TCB)
  • Support for mirroring with striping


AIX 4.3.3, September 17, 1999

  • Online backup function
  • Workload Manager (WLM)
  • Introduction of topas utility

AIX 4.3.2, October 23, 1998 AIX 4.3.1, April 24, 1998 AIX 4.3, October 31, 1997

  • Ability to run on 64-bit architecture CPUs
  • IPv6
  • Web-based System Manager

AIX 4.2.1, April 25, 1997

  • NFS Version 3

AIX 4.2, May 17, 1996 AIX 4.1.5, November 8, 1996 AIX 4.1.4, October 20, 1995 AIX 4.1.3, July 7, 1995

  • CDE 1.0 became the default GUI environment, replacing Motif X Window Manager.

AIX 4.1.1, October 28, 1994 AIX 4.1, August 12, 1994 AIX 4.0, 1994

  • Runs on RS/6000 systems with PowerPC processors and PCI buses.

AIX 3.2, 1992 AIX 3.1, February 1990

  • Journaled File System (JFS) filesystem type

AIX 3.0, 1989

  • LVM (Logical Volume Manager) was introduced; it was later incorporated into OSF/1 and, in 1995, into HP-UX[9]. The Linux LVM implementation is similar to the HP-UX LVM implementation.[10]
  • SMIT was introduced.

IBM PS/2 releases

AIX PS/2 v1.1, 1989 

  • last version was 1.3, 1992.

IBM 6150 RT releases

AIX v1.0, 1986 AIX v2.0 

  • last version was 2.2.1.

Using uuencode to embed binaries in scripts and copy/paste transfer files between servers

The uuencode utility converts binary to text, and uudecode converts the text back to binary. Wikipedia has a great article on the details of how this process works (http://en.wikipedia.org/wiki/Uuencoding)

These utilities create some interesting possibilities. In this posting I'll cover embedding binary files in scripts and how to copy/paste transfer files between servers.

Embedding binary files in a script

If you have a script that also needs to include a binary file, you can uuencode the binary and include it within the script itself. So if you are writing a script to install or set up a binary, you could embed the binary directly in the script.

Here is an example of what this would look like:
#!/usr/bin/ksh
echo this script will extract a binary file
echo the script could also do any other normal scripting stuff here

uudecode -o /tmp/testbinaryfile << 'ENDOFFILE'
<insert uuencode output of the binary here>
ENDOFFILE

echo the /tmp/testbinaryfile has been extracted
To get the uuencode output of the binary, run a command like this: uuencode /usr/bin/ls /dev/stdout. Take the output and replace the "<insert uuencode output of the binary here>" text in the example with the uuencode output.
When the script runs, the file will be extracted.
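As a sketch, the whole self-extracting script can also be generated in one step from ksh (the file names here are arbitrary):
{
  print '#!/usr/bin/ksh'
  print 'echo this script will extract a binary file'
  print "uudecode -o /tmp/testbinaryfile << 'ENDOFFILE'"
  uuencode /usr/bin/ls testbinaryfile          # the encoded payload goes into the heredoc
  print 'ENDOFFILE'
  print 'echo the /tmp/testbinaryfile has been extracted'
} > /tmp/extractor.sh
chmod +x /tmp/extractor.sh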

Copy/Paste transfer files across servers

If you have a relatively small binary file that you need to transfer between servers, you can easily transfer it by copying and pasting it using uuencode/uudecode. This can be a time saver in some circumstances. It also might be helpful if you have a server that isn't connected to the network but for which you can get a console through something like the HMC.

In this example, we will copy and paste the /usr/bin/ls binary between servers. 

On the source server, type:
uuencode /usr/bin/ls /dev/stdout
Then copy all of the output in to the clipboard.

On the destination server, type:
uudecode -o /tmp/ls
Then press Enter, and paste in the uuencode output from the source server. The copy/pasted "ls" binary will be saved to /tmp/ls. You can verify the source and destination "ls" files are identical by comparing the checksums of the files with the "csum" command.

Using AIX Tools to Debug Network Problems

Question

Using AIX Tools to Debug Network Problems

Answer

This document discusses some standard AIX commands that can check for network connectivity or performance problems.

From time to time users may be unable to access servers via their client applications or they may experience performance problems. When application and system checks do not indicate the problem, the system administrator may need to check the network or the system's network settings to find the problem. Using standard AIX tools, you can quickly determine if a server is experiencing a network problem due to configuration or network issues. These tools include the netstat and tcpdump commands, which can help you isolate problems, from loss of connectivity to more complex network performance problems.

  • Basic tools and the OSI-RM
  • Using the netstat command
  • Using the tcpdump command

Basic tools and the OSI-RM

The AIX commands you can use for a quick checkup include the lsdev, errpt, netstat, and tcpdump commands. With these tools, you can assess the lower layers of your system's network configuration within the model known as the Open Systems Interconnection (OSI) Reference Model (RM) (see Table 1). Using the OSI-RM allows you to check common points of failure, without spending too much time looking at elusive errors that might be caused by loss of network access within an application.

Open Systems Interconnection Reference Model

 Model Layer           Function                         Assessment Tools
             
7. Application Layer  Consists of application          . 
                      programs that use the network.
6. Presentation Layer Standardizes data presentation 
                      to the applications.
5. Session Layer      Manages sessions between 
                      applications.
4. Transport Layer    Organizes datagrams into         netstat -s 
                      segments and reliably delivers   iptrace 
                      them to upper layers.            tcpdump
3. Network Layer      Manages connections across the   netstat -in, -rn, -s, -D
                      network for the upper layers.    topas
                                                       iptrace
                                                       tcpdump
2. Data Link Layer    Provides reliable data delivery  netstat -v, -D
                      across the physical link.        iptrace
                                                       tcpdump
1.  Physical Layer    Defines the physical             netstat -v, -D 
                      characteristics of the           lsdev -C
                      network media.                   errpt
                                                       iptrace
                                                       tcpdump

Using the netstat command

One of the netstat tools, the netstat -v command, can help you decide if corrective action needs to be taken on the server or elsewhere in the network. Output from this command is the same as the entstat, tokstat, fddistat, and atmstat commands combined. The netstat -v command assesses the physical and data link layers of the OSI-RM. Thus, it is one of the first commands you should use, after determining that there is no hardware availability problem. (The errpt and lsdev -C commands can help determine availability.) The netstat -v output can indicate whether you need to adjust configuration of a network adapter (to reestablish or improve communications) or tune an adapter for better data throughput.

Sample scenario

A simple scenario illustrates how the netstat -v command helps determine why a system is not communicating on its network.

The scenario assumes a system with the following characteristics:
  • An IBM 4-Port 10/100 Mbps Ethernet PCI Adapter (ent0 - ent3)
  • An onboard IBM 10/100 Mbps Ethernet PCI Adapter (ent4)
  • A single cable connected to one of the ports on the four-port adapters
  • A single IP address configured, on en0, which also maps to one of the logical devices (ent0) on the 4-Port card
The problem: Since TCP/IP was configured on en0, the system has been unable to ping any system on the network.
Example 1
  1. The lsdev -C and errpt commands were used to verify the availability of the adapter and interface.

  2. The netstat -in command (interface configuration) and the netstat -rn (route configuration) command were used to check the IP configuration.

  3. After the first two preliminary steps, the next step is to use the netstat -v command to review specific statistics for adapter operations. Without a filter, the netstat -v command produces at least 10 screens of data, so this example uses the netstat -v ent0 command to limit the output as follows:

    netstat -v ent0 | grep -p "Specific Statistics"

    The RJ45 Port Link Status line in the sample output indicates whether or not the adapter has a link to the network. In this example, the RJ45 Port Link Status is down:
    IBM 4-Port 10/100 Base-TX Ethernet PCI Adapter Specific Statistics:
    ------------------------------------------------
    Chip Version: 26
    RJ45 Port Link Status : down
    Media Speed Selected: Auto negotiation
    Media Speed Running: 100 Mbps Full Duplex
    Receive Pool Buffer Size: 384
    Free Receive Pool Buffers: 128
    No Receive Pool Buffer Errors: 0
    Inter Packet Gap: 96
    Adapter Restarts due to IOCTL commands: 1
  4. Running netstat -v a second time without a filter allows you to check the port link status for every adapter. For example, enter:

    netstat -v | more

    and then use /Specific as the search string for the more command. In this example, such a search shows that ent3, not ent0, shows a port link status of up. This information indicates that the cable is in the wrong port on the 4-Port Adapter, and that moving the cable to the correct (that is, configured) port fixes the problem.
Example 2
Interpreting the portion of the netstat -v output that indicates adapter resource configuration can help isolate a system configuration problem. When setting up servers that provide for network backup (such as TSM or SysBack), administrators commonly do some preliminary testing and achieve good results. Then, as more remote servers are added to the backup schedule, performance can decrease. Where network throughput was once good but has since decreased, netstat -v can uncover potential problems with adapter resources.

Many modern adapters have tunable buffers that allow you to adjust the resources a device can obtain. When a backup server requires extensive resources to handle data reception, looking at the output of netstat -v for Receive Statistics and for Adapter Specific Statistics can help isolate potential network performance bottlenecks. It is not uncommon to see a climbing "No Receive Pool Buffer Errors" count in the Adapter Specific section for a 10/100 Mbps adapter. In Example 2 the netstat -v command is run twice, 30 seconds apart, while the server is handling several backup jobs. The output shows that the default setting of 384 for the receive pool buffer needs to be adjusted higher. As long as no other errors suggesting additional problems show up in the output, you can safely assume that performance will improve when the receive pool buffer on ent4 is enlarged.
  1. Run the following command to see specific statistics for ent4:

    netstat -v ent4 | grep -p "Specific Statistics"

    Command output is similar to the following:
    IBM 4-Port 10/100 Base-TX Ethernet PCI Adapter Specific Statistics:
    ------------------------------------------------
    Chip Version: 26
    RJ45 Port Link Status : up
    Media Speed Selected: Auto negotiation
    Media Speed Running: 100 Mbps Full Duplex
    Receive Pool Buffer Size: 384
    Free Receive Pool Buffers: 128
    No Receive Pool Buffer Errors: 999875
    Inter Packet Gap: 96
    Adapter Restarts due to IOCTL commands: 1
    
  2. Run the following command to check the No Receive Pool Buffer Errors count after 30 seconds:

    sleep 30 ; netstat -v ent4 | grep "Receive Pool Buffer Errors"

    Output is similar to the following:
    No Receive Pool Buffer Errors: 1005761
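Once the counters confirm the receive pool is too small, the buffer can usually be enlarged through the adapter's device attributes. The attribute name varies by adapter type, so treat the name below as a placeholder and take the real one from lsattr:

    lsattr -El ent4                          <-- list the adapter's attributes and find the receive pool size
    chdev -l ent4 -a rxbuf_pool_sz=1024 -P   <-- hypothetical attribute name; -P defers the change until the device is reconfigured or the system is rebooted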

Using the tcpdump command

The netstat tools (netstat -in, netstat -rn, and netstat -v) cannot always determine the nature of a connection problem.
Example 3
Suppose your server has four separate network adapters configured and attached to separate network segments. Two are working fine (VLAN A and B) while no connections can be established to your server on the other two segments (VLAN C and D). The output of netstat -v shows that data is coming in on all four adapters and no errors are being logged, indicating that the configuration at the physical and data link layers is working. In such a case, you need to examine the inbound data itself. You can use the tcpdump tool to examine the data online to help you determine the connection problem.

The tcpdump command provides much data, but for quick analysis only some basic pieces of its output (IP addresses) are needed:
You also want to consider the logical configuration you have set up for your interfaces (netstat -in). In this example, en2 was configured with address 9.3.6.225 and is in VLAN C (IP network 9.3.6.224, netmask 255.255.255.240); en3 was configured with address 9.3.6.243 and is in VLAN D (IP network 9.3.6.240, netmask 255.255.255.240).

  1. Run the following command to check traffic on en2:

    tcpdump -i en2 -I -n

    Output similar to the following is displayed:
    -TIME STAMP-    -SOURCE IP-    -DESTINATION IP-   -FLAGS-   -ADDITIONAL INFO- 
    09:04:27.313527323 9.3.6.244.23 > 9.3.6.241.38160: P 7:9(2) ack 8 win 
    65535
    09:04:27.402377282 9.3.6.245.45017 > 9.53.168.52.23: . ack 24 win 
    17520 (DF) [tos 0x10]
    09:04:27.418818536 9.3.6.241.38160 > 9.3.6.244.23: . ack 9 win 65535 
    [tos 0x10
    09:04:27.419054751 9.3.6.244.23 > 9.3.6.241.38160: P 9:49(40) ack 8 
    win 65535
    09:04:27.524512144 9.3.6.245.45017 > 9.53.168.52.23: P 4:5(1) ack 24 
    win 17520 (DF) [tos 0x10]
    09:04:27.526159054 9.53.168.52.23 > 9.3.6.245.45017: P 24:25(1) ack 5 
    win 2482 (DF)
    09:04:27.602600775 9.3.6.245.45017 > 9.53.168.52.23: . ack 25 win 
    17520 (DF) [tos 0x10]
    09:04:27.628488745 9.3.6.241.38160 > 9.3.6.244.23: . ack 49 win 65535 
    [tos 0x1
  2. Press Ctrl-C to stop the output display:

    ^C
    38 packets received by filter
    0 packets dropped by kernel
Useful data can be gained from the tcpdump output simply by recognizing the source IP addresses in the traffic. Thus, the sample output shows that ent2 is physically attached to the wrong network segment. The source IP addresses should be in the 9.3.6.22x range, not the 9.3.6.24x range. It is possible that swapping the cables for ent2 and ent3 may solve the problem. If not, you may need to ask your network administrator to reconfigure switch ports to pass the correct traffic. With the information you gain from using the netstat -v and tcpdump tools, you can better decide which action is most appropriate.

AIX provides many tools for querying TCP/IP status on AIX servers. However, the netstat and tcpdump commands do provide some methods for quick problem determination. For example, these tools can help determine if you own the problem or if it needs to be addressed by a network administrator.

For additional information, refer to the AIX online documentation.

Up and down the directory tree

Even if you've been using the cd command for years, you might be surprised at some of its features. After all, when did you last run man cd? (For a link to the documentation for the cd command, see Resources.) The cd command has a few little tricks that can save you lots of unnecessary typing. Considering how often you have to change directories, that's good news for you.

This article looks at the UNIX® directory structure and shows how you can see your current directory path and add your directory to your shell prompt. After looking at some simple cd examples, the article explains what is meant by the absolute path and the relative path and when to use each one. Then, I share some of my favourite time-saving cd shortcuts, such as how to get to your home directory and how to identify the home directory of any user. I show how to toggle back and forth between two directories and provide a little-known gem about cd that I like to call the cd shuffle. It's a simple way of moving between two similar directory paths.

Before looking at changing directories, however, it's important to understand a bit about the UNIX directory structure.
Uprooting the directory tree
What you may know as folders in Windows® operating systems are called directories in UNIX. A directory is a container that holds groups of files and other directories.

All directories in UNIX branch downward from the root directory, denoted by the forward slash (/). For example:

  • The directory /usr is a subdirectory of the root directory (/).
  • The /usr/spool directory is a subdirectory of /usr.
  • The /usr/spool/mail directory is a subdirectory of /usr/spool.
. . . and so on.
What directory am I in?
From a shell prompt, you can display the path name of the directory you're in by running the command pwd, shown in Listing 1. Remember this command as the present working directory.

Listing 1. Display the present working directory (pwd)
# pwd
# /home/surya
The abc of cd
To change to another directory, use cd followed by a space, and then the directory you want to go to. Remember that UNIX commands are case-sensitive, so make it all lowercase unless the directory name actually has uppercase letters in it. In the examples in Listing 2, each directory I'm changing to starts with a slash (/), because I'm using the absolute path, tracing the trail of directories all the way from the root directory.

Listing 2. The cd command using an absolute path
# cd /var
# cd /usr/spool/mail
# cd /home/surya
You can get to any directory at all—if you have permission (you need execute permission)—using its absolute path name. The commands in Listing 2 are provided as examples. You wouldn't ordinarily run two cd commands in a row, because the point of changing your working directory is to do some work in it, not to move on to somewhere else straight away.

If the directory you entered isn't a valid directory or you don't have permission to go there, the cd command reports an error. If your cd command fails, then you stay in the directory you started from.
Where am I?
Okay. Your cd command didn't report any failures, so you assume that it worked. But it would be nice to know for sure which directory you're now in.

Of course, you could run the pwd command every time you need to check the current working directory, but there's a better way. Whenever you run the cd command successfully, the new working directory is stored in the environment variable $PWD (Present Working Directory). Note that this variable is in uppercase letters, unlike the pwd command. So, you could display the value of $PWD using echo, as you can see in Listing 3.
Listing 3. Display $PWD
# echo $PWD
# /home/surya
Doesn't seem much easier than running that pwd command you saw earlier, does it? Still, it's helpful to know that your directory is stored in a variable. Here's why: You can display the value of $PWD as part of the shell prompt.
Prompt my working directory
In the examples used so far, the shell prompt is set to the hash, or pound, symbol (#). But if you include $PWD in your shell prompt, you always know what directory you're in. Then, when you run cd, the variable PWD will be updated and displayed as part of the shell prompt.

You can set the shell prompt variable PS1 in your .profile in your home directory by adding the lines shown in Listing 4.

Listing 4. Include $PWD in the shell prompt
PS1='${PWD} > '
export PS1
When you next log in, your .profile should execute as part of the login process, and this should display the working directory as part of the shell prompt. If you know how to get to your home directory, you could execute the .profile right now (see Listing 5).

Listing 5. Execute your new .profile with the new shell prompt
# . ./.profile
/home/surya >
You can tailor the prompt to include the host name, your login name, or some other display characters. For details about enhancing the shell prompt, see the link to the article "Tip: Prompt magic" in Resources.
Relatively better
It's a bit cumbersome using absolute paths when you only want to jump from one branch of the tree to another nearby. That's why cd allows you to use relative paths. By default, the relative path refers to the directory relative to the current directory you are in. Using the relative path often means fewer keystrokes, although that depends on the directory you are going from and the one you're headed to.

To get from /home/surya to /home/surya/bin, for example, you don't have to enter the absolute path of the target directory (/home/surya/bin); it's enough to enter the new path relative to the one you're already in, as you can see in Listing 6.

Listing 6. The cd command using a relative path
/home/surya > cd bin
/home/surya/bin >
Notice how the shell prompt shows the new value for $PWD.
Visit your parents
If you run the ls command using the -a flag, you can see an entry for . (dot) and another for .. (dot-dot). The single dot represents the current directory. The two dots are for the parent directory—the one immediately above the directory you're in.

Using the parent directory is handy when you want to go up a level via cd. Listing 7 shows how.

Listing 7. Using the cd command to go to the parent directory (dot-dot) 
/var/spool/mqueue > cd ..
/var/spool > 
You can then head down to a new sub-directory (see Listing 8).

Listing 8. Using cd to go to a sub-directory
/var/spool > cd mail
/var/spool/mail >
Or, you could do all of that in a single command, as you see in Listing 9.

Listing 9. Branch to branch in one command
/var/spool/mqueue > cd ../mail
/var/spool/mail >
You can even jump up a couple of levels, and then down a couple, as shown in Listing 10.

Listing 10. Jump through branches
/usr/IBM/WebSphere/AppServer/profiles > cd ../../PortalServer/log
/usr/IBM/WebSphere/PortalServer/log > 
Once you get used to using the relative path, it becomes second nature.
The shortcut home
Every UNIX user has a home directory that is defined when the user is created. You could look up your home directory in /etc/passwd or use smit, but there's a better way of getting home.

Using cd straight to home
If you want to get to your own home directory, use the cd command without any parameters, as shown in Listing 11.

Listing 11. Fast-track home with cd
/usr/IBM/WebSphere/PortalServer/log > cd
/home/surya >
Your home directory is stored in the variable $HOME. That means that the cd command without parameters is equivalent to typing cd $HOME (see Listing 12).

Listing 12. cd $HOME
/var/spool/mail > cd $HOME
/home/surya >
That $HOME variable is useful for knowing your home directory even if you're not headed there just yet. In fact, the $HOME variable can be so helpful that it's got an alias: the tilde (~).

Call home, tilde

You may want to view or work on files in your home directory. If you're in some other directory, there's no need to go home first or to type the full directory path. Just use the tilde character. In Listing 13, I make a copy of my .profile in my home directory, all from the comfort of somewhere else.

Listing 13. Tilde shortcut for $HOME
/usr/IBM/WebSphere > cp ~/.profile ~/.profile.save
Remote access to your neighbour's home
You can also use tilde to list or work with files in another user's home directory (if your permissions allow it). To do this, just use tilde followed by the user's login name, as Listing 14 shows.

Listing 14. Tilde is everyone's HOME
/home/surya > cp ~john/.profile ~john/.profile.save
This is safer than guessing the user's home directory and easier than looking it up in /etc/passwd.
Dashing back
Quite often, you need to change directory only to run a command or two, and then return to the directory you were in previously ($OLDPWD). To do that, use the cd dashback. That's cd followed by a dash (cd -). In Listing 15, notice how the $PS1 shell prompt displays the new directory each time I run cd.

Listing 15. Return to previous directory
/home/surya > cd /usr/sys/inst.images
/usr/sys/inst.images > cd -
/home/surya >
The toggle switch
A consequence of this cd dashback is that if you enter it twice, you can toggle back and forth between two directories. This functionality could be useful if you wanted to change a program or configuration file in one directory and see the results in a log file in a different directory. Listing 16 shows the toggle between two directories. As with the other examples, I'm skipping the commands you might run right after you've actually changed directory.

Listing 16. Toggle between $PWD and $OLDPWD
/data/log > cd /apps/config
/apps/config > cd -
/data/log > cd -
/apps/config >
The cd shuffle
The feature that I find especially helpful is the cd shuffle. It's a simple way of switching from an old directory to a new one when the two directory paths have only one difference, such as a single word.

The syntax may look odd to UNIX old hands if they have never used it, but it works. See Listing 17.

Listing 17. cd shuffle syntax
cd directorya directoryb
The first parameter is the string you want to replace in the current directory path. The second parameter is the replacement string. For example, to move from v7 to v8, you just type cd v7 v8, as you can see in Listing 18.

Listing 18. Using cd shuffle
/programs/v7/reports/monthly > cd v7 v8
/programs/v8/reports/monthly >
That single command has saved 19 keystrokes! That's much simpler than going up three parent directories, and then heading back down the directory tree or using the absolute path.

This two-parameter cd command has lots of uses: swapping between similar directory paths where the only difference is a database instance name, a branch name, or maybe a date. The cd shuffle can save you thousands of keystrokes in a very short time.
Jump through history
If you have a directory for each year and each month of history, cd shuffle allows you to jump around from one year to another. See how it works in Listing 19.

Listing 19. New year
/hist/2010/april/reports > cd 2010 2011
/hist/2011/april/reports >
If you want to change to a different month within the same year, use cd shuffle with the from month and the to month as its parameters, as shown in Listing 20.

Listing 20. Swap month directory
/hist/2011/april/reports > cd april may
/hist/2011/may/reports > 
If two directory paths have only one string different, cd shuffle is ideal.
Is cd okay?
When I use cd in scripts, I always verify that the change directory has worked before continuing with the next command. I once saw an operating system wiped out by a two-line cleanup script that had been working every day for two years. An NFS-mounted directory became unavailable when a remote host was turned off. The cd command failed, and the cleanup script continued anyway until there was nothing left on the system to clean up.

A simple way of verifying that cd worked before proceeding with something else is to use the shell short-circuit && straight after a cd command. If the cd command fails, the next command won't continue. See Listing 21.

Listing 21. cd and short-circuit
cd /some/dir && rm *.log
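An equivalent defensive form, handy in longer scripts, is to abort explicitly when cd fails; Listing 22 is a minimal sketch.

Listing 22. Abort the script when cd fails
cd /some/dir || exit 1   # stop here rather than clean up the wrong directory
rm *.log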
Conclusion
In this article, you learned about the cd command. You saw how to change directories using the absolute path and the relative path, and you learned how to display the working directory you are in and toggle back and forth between two directories. You saw different ways of referring to home directories, and you learned about the little-known cd shuffle, which allows you to substitute a string in your current path and switch to a new path.

The cd command is so important on the command line, and it's used so often, that it's worth knowing some of its many shortcuts.

Unix/Linux find command examples

Real world FIND usage


find / -type f -name "*.jpg" -exec cp {} . \;
find /dir -type f -size 0 -print
find . -type f -size +10000 -exec ls -al {} \;
find /var/nmon -mtime +30 | xargs -i rm {}
find /var/nmon -mtime +1 -exec gzip -9 {} \;
find . -atime +1 -type f -exec mv {} TMP \; # mv files older than 1 day to dir TMP
find . -name "-F" -exec rm {} \;   # a script error created a file called -F
find . -exec grep -i "vds admin" {} \;
find . \! -name "*.Z" -exec compress -f {} \;
find . -type f \! -name "*.Z" \! -name ".comment" -print | tee -a /tmp/list
find . -name "*.ini"
find . -exec chmod 775 {} \;
find . -user xuser1 -exec chown -R user2 {} \;
find . -user psoft  -exec rm -rf  {} \;
find . -name ebtcom*
find . -name mkbook
find . -exec grep PW0 {} \;
find . -exec grep -i "pw0" {} \;
find . -atime +6
find . -atime +6 -exec ls -l {} \; | more
find . -atime +6 -exec ls \;
find . -atime +30 -exec ls \;
find . -atime +30 -exec ls \; | wc -l
find . -name auth*
find . -exec grep -i plotme10 {} \;
find . -ls -exec grep 'PLOT_FORMAT 22' {} \;
find . -print -exec grep 'PLOT_FORMAT 22' {} \;
find . -print -exec grep 'PLOT_FORMAT' {} \;
find ./machbook -exec chown 184 {} \;
find . \! -name '*.Z' -exec compress {} \;
find . \! -name "*.Z" -exec compress -f {} \;
find /raid/03c/ecn -xdev -type f -print
find / -name .ssh* -print | tee -a ssh-stuff
find . -name "*font*"
find . -name hpmcad*
find . -name *fnt*
find . -name hp_mcad* -print
find . -exec grep Pld {} \;
find . -exec grep PENWIDTH {} \; | more
find . -name config.pro
find /raid -type d ".local_sd_customize" -print
find /raid -type d -name ".local_sd_customize" -print
find /raid -type d -name ".local_sd_customize" -ok cp /raid/04d/MCAD-apps/I_Custom/SD_custom/site_sd_customize/user_filer_project_dirs {} \;
find /raid -type d -name ".local_sd_customize" -exec cp /raid/04d/MCAD-apps/I_Custom/SD_custom/site_sd_customize/user_filer_project_dirs {} \;
find . -name xeroxrelease
find . -exec grep xeroxrelease {} \;
find . -name xeroxrelease* -print 2>/dev/null
find . -name "*release*" 2>/dev/null
find / -name "*xerox*" 2>/dev/null
find . -exec grep -i xeroxrelease {} \;
find . -print -exec grep -i xeroxrelease {} \;
find . -print -exec grep -i xeroxrelease {} \; > xeroxrel.lis
find . -exec grep -i xeroxrel {} \;
find . -print -exec grep -i xeroxrel {} \;
find . -print -exec grep -i xeroxrel {} \; | more
find /raid/03c/inwork -xdev -type f -print >> /raid/04d/user_scripts/prt_list.tmp
find . -exec grep '31.53' {} \;
find . -ls -exec grep "31/.53" {} \; > this.lis
find . -print -exec grep "31/.53" {} \; > this.lis
find . -print -exec grep 31.53 {} \; > this.lis
find . -exec grep -i pen {} \;
find . -print -exec grep -i pen {} \; | more
find . \! -name '*.Z' -exec compress -f {} \;
find . -name 'cache*' -depth -exec rm {} \;
find . -name 'cache*' -depth -print | tee -a /tmp/cachefiles
find . -name 'cache[0-9][0-9]*' -depth -print | tee -a /tmp/cachefiles
find . \( -name 'hp_catfile' -o -name 'hp_catlock' \) -depth -print | tee -a /tmp/hp.cats
find . -name 'hp_cat*' -depth -print | tee -a /tmp/hp.cats
find . -name 'hp_cat[fl]*' -depth -print | tee -a /tmp/hp.cats
find /raid -name 'hp_cat[fl]*' -depth -print
find . -name '*' -exec compress -f {} \;
find . -xdev -name "wshp1*" -print
find . -xdev -name "wagoneer*" -print
find . -name "xcmd" -depth -print
find /usr/contrib/src -name "xcmd" -depth -print
find /raid -type d -name ".local_sd_customize" -exec ls {} \;
find /raid -type d -name ".local_sd_customize" \
   -exec cp /raid/04d/MCAD-apps/I_Custom/SD_custom/site_sd_customize/user_filer_project_dirs {} \;
find . -name "rc.conf" -print
find . -name "rc.conf " -exec chmod o+r '{}' \; -print
find . -not \( -name "*.v" -o -name "*,v" \) -print
=================================================================

Basic find command examples

This first Linux find example searches through the root filesystem ("/") for the file named "Chapter1". If it finds the file, it prints the location to the screen.
find / -name Chapter1 -type f -print

On Linux systems and modern Unix systems you no longer need the -print option at the end of the find command, so you can issue it like this:
find / -name Chapter1 -type f

The "-f" option here tells the find command to return only files. If you don't use it, the find command will returns files, directories, and other things like named pipes and device files that match the name pattern you specify. If you don't care about that, just leave the "-type f" option off your command.

This next find command searches through only the /usr and /home directories for any file named "Chapter1.txt":
find /usr /home -name Chapter1.txt -type f

To search in the current directory -- and all subdirectories -- just use the . character to reference the current directory in your find commands, like this:
find . -name Chapter1 -type f

This next example searches through the /usr directory for all files that begin with the letters Chapter, followed by anything else. The filename can end with any other combination of characters. It will match filenames such as Chapter, Chapter1, Chapter1.bad, Chapter-in-life, etc.:
find /usr -name "Chapter*" -type f

This next command searches through the /usr/local directory for files that end with the extension .html. These file locations are then printed to the screen.
find /usr/local -name "*.html" -type f

Find directories with the Unix find command

Every option you just saw for finding files can also be used on directories. Just replace the "-type f" option with "-type d". For instance, to find all directories named build under the current directory, use this command:
find . -type d -name build

Find files that don't match a pattern

To find all files that don't match a filename pattern, use the "-not" argument of the find command, like this:
find . -type f -not -name "*.html"

That generates a list of all files beneath the current directory whose filename DOES NOT end in ".html", so it matches files like *.txt, *.jpg, and so on.

Finding files that contain text (find + grep)

You can combine the Linux find and grep commands to powerfully search for text strings in many files.

This next command shows how to find all files beneath the current directory that end with the extension .java, and contain the characters StringBuffer. The -l argument to the grep command tells it to just print the name of the file where a match is found, instead of printing all the matches themselves:
find . -type f -name "*.java" -exec grep -l StringBuffer {} \;

(Those last few characters are required any time you want to exec a command on the files that are found. I find it helpful to think of them as a placeholder for each file that is found.)

This next example is similar, but here I use the -i argument to the grep command, telling it to ignore the case of the characters string, so it will find files that contain string, String, STRING, etc.:
find . -type f -name "*.java" -exec grep -il string {} \;

Acting on files you find (find + exec)

This command searches through the /usr/local directory for files that end with the extension .html. When these files are found, their permission is changed to mode 644 (rw-r--r--).
find /usr/local -name "*.html" -type f -exec chmod 644 {} \;

This find command searches through the htdocs and cgi-bin directories for files that end with the extension .cgi. When these files are found, their permission is changed to mode 755 (rwxr-xr-x). This example shows that the find command can easily search through multiple sub-directories (htdocs, cgi-bin) at one time.
find htdocs cgi-bin -name "*.cgi" -type f -exec chmod 755 {} \;

Running the ls command on files you find

From time to time I run the find command with the ls command so I can get detailed information about files the find command locates. To get started, this find command will find all the "*.pl" files (Perl files) beneath the current directory:
find . -name "*.pl"

In my current directory, the output of this command looks like this:
./news/newsbot/old/3filter.pl
./news/newsbot/tokenParser.pl
./news/robonews/makeListOfNewsURLs.pl

That's nice, but what if I want to see the last modification time of these files, or their filesize? No problem, I just add the "ls -ld" command to my find command, like this:
find . -name "*.pl" -exec ls -ld {} \;

This results in this very different output:
-rwxrwxr-x 1 root root 2907 Jun 15  2002 ./news/newsbot/old/3filter.pl
-rwxrwxr-x 1 root root 336 Jun 17  2002 ./news/newsbot/tokenParser.pl
-rwxr-xr-x 1 root root 2371 Jun 17  2002 ./news/robonews/makeListOfNewsURLs.pl

The "-l" flag of the ls command tells ls to give me a "long listing" of each file, while the -d flag is extremely useful in this case; it tells ls to give me the same output for a directory. Normally if you use the ls command on a directory, ls will list the contents of the directory, but if you use the -d option, you'll get one line of information, as shown above.

Find and delete

Be very careful with these next two commands. If you type them in wrong, or make the wrong assumptions about what you're searching for, you can delete a lot of files very fast. Make sure you have backups and all that, you have been warned.

Here's how to find all files beneath the current directory that begin with the letters 'Foo' and delete them.
find . -type f -name "Foo*" -exec rm {} \;

This one is even more dangerous. It finds all directories named CVS, and deletes them and their contents. Just like the previous command, be very careful with this command, it is dangerous(!), and not recommended for newbies, or if you don't have a backup.
find . -type d -name CVS -exec rm -r {} \;

Find files with different file extensions

The syntax to find multiple filename extensions with one command looks like this:
find . -type f \( -name "*.c" -o -name "*.sh" \)

Just keep adding more "-o" (or) options for each filename extension.

Case-insensitive file searching

To perform a case-insensitive search with the Unix/Linux find command, use the -iname option instead of -name. So, to search for all files and directories named foo, FOO, or any other combination of uppercase and lowercase characters beneath the current directory, use this command:
find . -iname foo

If you're just interested in directories, search like this:
find . -iname foo -type d

And if you're just looking for files, search like this:
find . -iname foo -type f

Find files by modification time

To find all files and directories that have been modified in the last seven days, use this find command:
find . -mtime -7

To limit the output to just files, add the "-type f" option as shown earlier:
find . -mtime -7 -type f

and to show just directories:
find . -mtime -7 -type d

Unix Create a Symbolic Link

Q. How do I create links under UNIX / Linux operating systems?

A. You need to use the ln command, which is a standard Unix / Linux / BSD command, used to create links to files. There are two types of links under UNIX, hard and soft:

Hardlink vs. Softlink in Linux or UNIX

[a] Hard links cannot link directories (cannot link /tmp with /home/you/tmp)
[b] Hard links cannot cross file system boundaries (cannot link /tmp mounted on /tmp to a 2nd hard disk mounted on /harddisk2)
[c] Symbolic links refer to a symbolic path indicating the abstract location of another file
[d] Hard links refer to the specific location of physical data.

UNIX Create Symbolic link Command

To create a symbolic link, enter
$ ln -s {/path/to/file-name} {link-name}
$ ln -s /shared/sales/data/file.txt sales.data.txt
$ vi sales.data.txt
$ ls -l sales.data.txt
To delete a link, enter
$ rm {link-name}
$ rm sales.data.txt
$ ls -l
$ ls -l /shared/sales/data/file.txt
If you delete the soft link itself (sales.data.txt), the data file would still be there (/shared/sales/data/file.txt). However, if you delete /shared/sales/data/file.txt, sales.data.txt becomes a broken link and the data is lost.

UNIX Create Hardlink Command

To create a hard link, enter (without the -s option):
$ ln {file.txt} {hard-link}
$ ln /tmp/file link-here
You can delete hard link with rm command itself:
$ rm {hard-link}
$ rm link-here
If you delete a hard link, your data is still there. If you delete /tmp/file, the data is still accessible via the link-here hard link file.
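One way to see that a hard link is just another name for the same data is to compare inode numbers with ls -i (a sketch; file names are arbitrary):
$ echo hello > /tmp/file
$ ln /tmp/file link-here
$ ls -li /tmp/file link-here   <== both names show the same inode number and a link count of 2
$ rm /tmp/file
$ cat link-here                <== the data is still reachable through the remaining name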

UNIX Create Soft-link between Directories

$ ln -s {path to actual directory} {Link-Name}
$ ls -ld {Link-Name} <== to verify the link

UNIQ command examples in unix and linux

The uniq command in Unix or Linux is used to suppress duplicate lines in a file. It discards all successive identical lines except one from the input and writes the output.
The syntax of uniq command is
# uniq [option] filename
The options of uniq command are:
  • -c : Counts the occurrences of each line.
  • -d : Prints only duplicate lines.
  • -D : Prints all duplicate lines.
  • -f : Avoids comparing the first N fields.
  • -i : Ignores case when comparing.
  • -s : Avoids comparing the first N characters.
  • -u : Prints only unique lines.
  • -w : Compares no more than N characters in lines.
Uniq Command Examples:
First create the following example.txt file in your unix or linux operating system.
# cat example.txt
unix operating system
unix operating system
unix dedicated server
linux dedicated server

1. Suppress duplicate lines

The default behavior of the uniq command is to suppress duplicate lines. Note that you have to pass sorted input to uniq, as it compares only successive lines.
# uniq example.txt
unix operating system
unix dedicated server
linux dedicated server
If the lines in the file are not in sorted order, then use the sort command and then pipe the output to the uniq command.
# sort example.txt | uniq

2. Count of lines

The -c option is used to find how many times each line occurs in the file. It prefixes each line with the count.
# uniq -c example.txt
      2 unix operating system
      1 unix dedicated server
      1 linux dedicated server
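A common combination is to sort first, count with -c, and then sort numerically to get a frequency table with the most common lines on top:
# sort example.txt | uniq -c | sort -rn
      2 unix operating system
      1 unix dedicated server
      1 linux dedicated server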

3. Display only duplicate lines

You can print only the lines that occur more than once in a file using the -d option.
# uniq -d example.txt
unix operating system

# uniq -D example.txt
unix operating system
unix operating system
The -D option prints all the duplicate lines.

4. Skip first N fields in comparison

The -f option is used to skip the first N columns in comparison. Here the fields are delimited by the space character.
# uniq -f2 example.txt
unix operating system
unix dedicated server
In the above example the uniq command just compares the last field. For the first two lines, the last field contains the string "system". Uniq prints the first line and skips the second. Similarly, it prints the third line and skips the fourth line.

5. Print only unique lines

You can skip the duplicate lines and print only unique lines using the -u option.
# uniq -u example.txt
unix dedicated server
linux dedicated server

Uninstalling on AIX using the command line

To uninstall Tivoli Access Manager for Operating Systems on AIX from the command line, follow this procedure:
  1. Log on as root.
  2. On the command line, enter:
     installp -u -g PDOS.rte 
  3. Reboot when the uninstall process is complete.
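The installp command can also preview the removal first; its -p flag reports what would be removed without actually changing anything:
     installp -pu -g PDOS.rte     (preview only)
     installp -u -g PDOS.rte      (actual removal)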

Uninstalling on AIX using SMIT

Follow this procedure to uninstall Tivoli Access Manager for Operating Systems on AIX using SMIT:
  1. Log on as root.
  2. Enter the following command:
    smit
    The System Management Interface Tool panel is displayed.
  3. From the System Management window, click Software Installation and Maintenance.
  4. From the Software Installation and Maintenance menu, click Software Maintenance and Utilities.
  5. From the Software Maintenance and Utilities menu, click Remove Installed Software. The Remove Installed Software pop-up panel is displayed.
  6. In the Software Name entry field, enter "PDOS.rte".
  7. Before uninstalling the selected software, SMIT determines whether it is possible to uninstall. PREVIEW only should be set to yes. Click OK, and then click OK on the confirmation window. During the preview, a split screen shows the uninstall command and the output log for the preview of the uninstallation.
  8. When the preview is complete, click Done.
  9. The Remove Installed Software window is displayed. Specify No in PREVIEW only. Click OK.
  10. Click OK on the confirmation window.
  11. During the uninstallation, a split screen shows the uninstall command and the output log for the uninstallation.
  12. When the uninstallation is complete, the Remove Installed Software panel is displayed. Click Done.
  13. Close the Remove Installed Software panel.
  14. Close the Software Maintenance Interface Tool panel.
  15. Reboot when uninstallation is complete if required.