GPFS cluster general commands on AIX

Terminology

GPFS is a concurrent file system. It is a product of IBM and is short for General Parallel File System. It is a high-performance shared-disk file system that provides fast data access from all nodes in a homogeneous or heterogeneous cluster of IBM UNIX servers running either the AIX or the Linux operating system.

All nodes in a GPFS cluster mount the same GPFS journaled file system, allowing multiple nodes to be active on the same data at the same time.

A specific use for GPFS is RAC, Oracle's Real Application Clusters. In a RAC cluster multiple database instances are active (sharing the workload), providing a near "Always-On" database operation. The Oracle RAC software relies on IBM's HACMP software to achieve high availability for the hardware and the AIX operating system. For storage it uses the concurrent file system GPFS.

Data availability

GPFS is fault tolerant and can be configured for continued access to data even if cluster nodes or storage systems fail. This is accomplished through robust clustering features and support for data replication. GPFS continuously monitors the health of the file system components; when failures are detected, appropriate recovery action is taken automatically. Extensive logging and recovery capabilities maintain metadata consistency when application nodes holding locks or performing services fail.

Data replication is available for journal logs, metadata, and data, allowing continuous operation even if a path to a disk, or a disk itself, fails. GPFS Version 3.2 further enhances clustering robustness with connection retries: if the LAN connection to a node fails, GPFS automatically tries to reestablish it before marking the node unavailable. This provides better uptime in environments experiencing network issues. Using these features together with a high-availability infrastructure ensures a reliable enterprise storage solution.
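Replication is configured per file system. A minimal sketch, assuming an existing file system whose device name is /dev/gpfs1 (the device name is an assumption; mmrestripefs can be I/O intensive on a live system):

```shell
# Assumed device name; substitute your own file system.
# -m = default metadata replicas, -r = default data replicas.
mmchfs /dev/gpfs1 -m 2 -r 2

# Re-replicate existing files so they comply with the new settings.
mmrestripefs /dev/gpfs1 -R

# Verify the current replication settings.
mmlsfs /dev/gpfs1 -m -r
```

Note that the maximum replication factors (-M and -R) are fixed when the file system is created, so the defaults can only be raised up to those limits.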

mm  Multimedia (the historical prefix of all GPFS commands)
NSD  Network Shared Disk
mmfsd  The GPFS daemon; it performs the I/O and buffer management (1191 is the default port)
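A quick way to confirm the daemon is alive on a node is a sketch like the following (standard AIX checks; the output depends on your cluster):

```shell
# Is the GPFS daemon running on this node?
ps -ef | grep -w mmfsd

# Is it listening on the default port 1191?
lsof -i :1191 -P

# netstat works too if lsof is not installed
netstat -an | grep 1191
```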

Location of files:

/var/adm/ras/mmfs.log.latest  GPFS log file
/usr/lpp/mmfs/bin  GPFS command location
/var/mmfs/gen/mmsdrfs  Configuration file
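For troubleshooting, the log file is usually the first stop; a minimal sketch:

```shell
# Follow the current GPFS log
tail -f /var/adm/ras/mmfs.log.latest

# Scan it for recent errors
grep -i error /var/adm/ras/mmfs.log.latest
```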


mmlscluster  To list the cluster configuration
mmgetstate -aLs  To view the status of the GPFS cluster nodes
mmlsconfig  Basic configuration information of the GPFS cluster, including the number of file systems
lsof -i :1191 -P  To check that the daemon port is in LISTEN state
mmlsmgr -c  To view the GPFS cluster manager
mmlsnsd -f <fsname> -m  To list the NSDs of a file system and their local device mappings
mmlsfs all  To check all GPFS file systems (lowercase flag letters show the current values)
mmdf  To check the GPFS file system size and free space
mmdsh  To run a command on multiple cluster nodes (distributed shell)
mmlsnsd  To list NSD disks
mmlsdisk <fsdevice> -d <diskname>  To view information for a specific disk
mmaddnode  To add a client node
mmchnode  To change node attributes, e.g. the client node name
mmcrcluster  To create a GPFS cluster
/usr/lpp/mmfs/samples  Location of sample files created by installing the base filesets
mmlslicense  To view the GPFS license
mmlsmgr  To check the cluster manager and the file system managers
mmfsadm dump version  Shows the version and the number of days the cluster is up
mmshutdown -a  Shuts down GPFS on all nodes
mmstartup -a  Starts GPFS on all nodes
mmfsadm dump config  GPFS attribute information
mmchcluster  To change the cluster configuration
mmlsnsd -M  Shows detailed NSD disk information from all nodes
mmlsdisk <fsdevice>  Shows the disks of a file system
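Putting the listing commands together, a basic health check might look like this sketch (<fsdevice> is a placeholder for your file system device):

```shell
# GPFS commands are not on the default PATH
export PATH=$PATH:/usr/lpp/mmfs/bin

mmlscluster        # cluster definition and node list
mmgetstate -aLs    # per-node daemon state and quorum summary
mmlsconfig         # configuration, including defined file systems
mmlsnsd -M         # NSD-to-device mapping on all nodes
mmdf <fsdevice>    # capacity and free space per disk
```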
