

GPFS issues troubleshooting

When you hit a GPFS issue

Got a problem? Don't panic!
Check for possible basic problems:
  • Is the network OK?
  • Check the status of the cluster: "mmgetstate -a"
  • Check the status of the NSDs: "mmlsdisk fsname"
Take a 5 minute break
  • In most cases GPFS will recover by itself, without any intervention from the administrator
If it has not recovered:
  • Ensure that you are the only person doing the work!
  • Check the GPFS logs (first on the cluster manager, then on the FS manager, then on the NSD servers)
  • Check the syslog (/var/log/messages) for any errors
  • Check disk availability (mmlsdisk fsname)
  • Consult the "Problem Determination Guide"

Some useful commands:

  • "mmfsadm dump waiters" helps to find long-lasting processes
  • "mmdiag --network | grep pending" helps to identify a non-responsive node
  • "mmdiag --iohist" lists the last 512 I/O operations performed by GPFS on the current node (helps to find a malfunctioning disk)
  • "gpfs.snap" gathers all logs and configurations from all nodes in the cluster
    • this is the first thing to send to IBM support when opening a service request
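As a worked example of hunting long waiters, the pipeline below sorts waiter lines by wait time. The `$waiters` sample output here is invented for illustration; on a real cluster, feed the pipeline from `/usr/lpp/mmfs/bin/mmfsadm dump waiters` instead.

```shell
# Hypothetical sample of "mmfsadm dump waiters" output; on a real cluster use:
#   waiters=$(/usr/lpp/mmfs/bin/mmfsadm dump waiters)
waiters='0x11A2 waiting 42.1931 seconds, NSDThread: for I/O completion
0x11B3 waiting 0.0007 seconds, RangeRevokeThread: on ThCond
0x11C4 waiting 17.5020 seconds, FsyncHandlerThread: for I/O completion'

# Prefix each line with its wait time, sort descending, show the worst first
printf '%s\n' "$waiters" \
  | awk '{ for (i = 1; i <= NF; i++) if ($i == "waiting") print $(i+1), $0 }' \
  | sort -rn \
  | cut -d' ' -f2-
```

The longest waiter ("42.1931 seconds" in the sample) is printed first; that thread, and the node it runs on, is usually where to start looking.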

GPFS V3.4 Problem Determination Guide:

NFS stale file handle:

When a GPFS mount point is in the "NFS stale file handle" status, for example:
[root@um-gpfs1 root]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/gpfs_um1 8125032448 8023801088 101231360 99% /storage/gpfs_um
df: `/storage/gpfs_um': Stale NFS file handle

Then check if there is any NSD with status "down":

[root@um-gpfs1 root]# mmlsdisk gpfs_um
disk         driver   sector failure holds    holds
name         type     size   group   metadata data  status        availability
------------ -------- ------ ------- -------- ----- ------------- ------------
disk21       nsd      512    4015    yes      yes   ready         up
disk22       nsd      512    4015    yes      yes   ready         down
disk23       nsd      512    4015    yes      yes   ready         down
disk24       nsd      512    4013    yes      yes   ready         up
Restart the NSDs (important: do it for all NSDs with status "down" in one command):
[root@um-gpfs1 root]# mmchdisk gpfs_um start -d "disk22;disk23"
Then re-mount the file systems.
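The restart step can be scripted so that all down disks always land in a single mmchdisk call. This is a sketch: the `$sample` variable stands in for real `mmlsdisk gpfs_um` data rows (availability is the last column); on a real cluster, pipe `mmlsdisk gpfs_um` into the awk instead.

```shell
# Hypothetical mmlsdisk data rows; availability is the last field
sample='disk21       nsd      512    4015    yes      yes   ready         up
disk22       nsd      512    4015    yes      yes   ready         down
disk23       nsd      512    4015    yes      yes   ready         down
disk24       nsd      512    4013    yes      yes   ready         up'

# Collect every disk whose availability is "down" into one ;-separated list
down=$(printf '%s\n' "$sample" \
  | awk '$NF == "down" { list = list sep $1; sep = ";" } END { print list }')
echo "$down"    # disk22;disk23

# then, on the real cluster:  mmchdisk gpfs_um start -d "$down"
```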

Recovery of GPFS configuration:

If a node of the cluster has lost its configuration (e.g. it has been re-installed) but is still present as a member of the cluster
("mmgetstate" lists it in the "unknown" state), use this command to recover the node:
/usr/lpp/mmfs/bin/mmsdrrestore -p diskserv-san-5 -R /usr/bin/scp

Checking existing NSD:

  • If you get the warning "Disk descriptor xxx system refers to an existing NSD" while creating a new NSD,
use this command to verify whether the device is actually used in one of the file systems:
mmfsadm test readdescraw /dev/emcpowerax
Viewing the GPFS disks:
# mmlsdisk /dev/slvdata01208
Use the mmlsfs command to view the attributes and values of a GPFS file system.
# mmlsfs /dev/slvdata01208
View the nodes in your GPFS nodesets:
# mmlsnode -a
view which GPFS file systems

To test if Oracle TDP (RMAN) is working properly:
# tdpoconf showenv

GPFS cluster general commands on AIX

Terminology

GPFS is a concurrent file system. It is a product of IBM and is short for General Parallel File System. It is a high-performance shared-disk file system that can provide fast data access from all nodes in a homogeneous or heterogeneous cluster of IBM UNIX servers running either the AIX or the Linux operating system.

All nodes in a GPFS cluster have the same GPFS journaled filesystem mounted, allowing multiple nodes to be active at the same time on the same data.

A specific use for GPFS is RAC, Oracle's Real Application Clusters. In a RAC cluster multiple instances are active (sharing the workload) and provide a near "Always-On" database operation. The Oracle RAC software relies on IBM's HACMP software to achieve high availability for the hardware and the AIX operating system. For storage it utilizes the concurrent file system GPFS.

Data availability

GPFS is fault tolerant and can be configured for continued access to data even if cluster nodes or storage systems fail. This is accomplished through robust clustering features and support for data replication. GPFS continuously monitors the health of the file system components; when failures are detected, appropriate recovery action is taken automatically. Extensive logging and recovery capabilities maintain metadata consistency when application nodes holding locks or performing services fail. Data replication is available for journal logs, metadata and data, and allows for continuous operation even if a path to a disk, or a disk itself, fails. GPFS Version 3.2 further enhances clustering robustness with connection retries: if the LAN connection to a node fails, GPFS will automatically try to reestablish the connection before making the node unavailable. This provides better uptime in environments experiencing network issues. Using these features along with a high-availability infrastructure ensures a reliable enterprise storage solution.

mm  "multimedia" (historical command prefix: GPFS evolved from IBM's multimedia file system)
NSD  Network Shared Disk
mmfsd (1191 is default port)  GPFS daemon (Daemon will do I/O and buffer management)

Location of files:

/var/adm/ras/mmfs.log.latest  gpfs log file
/usr/lpp/mmfs/bin  GPFS command location
/var/mmfs/gen/mmsdrfs  Configuration file


mmlscluster  to list the cluster
mmgetstate -aLs  to view the status of the GPFS cluster
mmlsconfig  Basic configuration information of the GPFS cluster, including the number of file systems
lsof -i :1191 -P  To check that the daemon port is in the LISTEN state
mmlsmgr  -c  to view the GPFS manager
mmlsnsd -f fsname -m  To check the NSD-to-device mapping
mmlsfs all  To check all GPFS file systems (lowercase letters mark current values)
mmdf  To check the gpfs file system size
mmdsh  To run a command on multiple cluster nodes (requires a working trust relation between cluster nodes)
mmlsnsd  To  List NSD disks
mmlsdisk -d  To view the disk information
mmaddnode  to add client node
mmchnode  To change client node name
mmcrcluster  To create gpfs cluster
/usr/lpp/mmfs/samples  Location of sample files created by installing the base filesets
mmlslicense  To view the GPFS license
mmlsmgr  To check the cluster Manager and File Manager
mmfsadm dump version  Shows the version and the number of days the cluster is up
mmshutdown -a  Shutdown the GPFS cluster in all nodes.
mmstartup -a  Starts the cluster in all nodes.
mmfsadm  dump config  GPFS attributes information
mmchcluster  To change the cluster
mmlsnsd -M  Show detailed NSD disk information

Oracle Determine HWM and reduce it by shrink space

Oracle uses the high water mark to identify the highest amount of space used by a particular segment. It acts as the boundary between used and unused space. As the amount of data grows due to row inserts and updates, the segment's high water mark grows accordingly. But when you delete rows, the HWM does not come down automatically; extra steps need to be done to bring the HWM down.

The HIGH WATER MARK is set at the beginning of the segment when the table is created. Deleting rows will never reset the HWM, even if all the rows are deleted from the table.

SELECT e.file_id, tablespace_name, file_name, bytes/1024/1024, user_bytes/1024/1024, hwm,
       CEIL((NVL(e.hwm, 1) * 8192)/1024/1024) AS "MB"   -- 8192 = 8 KB block size
FROM dba_data_files f
LEFT JOIN (SELECT file_id, MAX(block_id + blocks - 1) hwm FROM dba_extents GROUP BY file_id) e
  ON f.file_id = e.file_id
WHERE f.file_name = '/your/path/your_file01.dbf';
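The "MB" column in the query is plain block arithmetic: the highest used block times the 8 KB block size, rounded up to megabytes. A quick sanity check in the shell, with a hypothetical hwm value:

```shell
# Hypothetical highest-used-block number (the hwm column of the query)
hwm=131072
bytes=$((hwm * 8192))                             # 8 KB blocks, as in the query
mb=$(( (bytes + 1024*1024 - 1) / (1024*1024) ))   # CEIL(bytes / 1 MB)
echo "datafile must keep at least ${mb} MB to cover the HWM"   # 1024 MB
```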



SQL> analyze table TBS_DATA compute statistics ;

Table analyzed.

SQL> SELECT blocks, empty_blocks, num_rows FROM user_tables WHERE table_name = 'TBS_DATA';

SQL> SELECT TABLE_NAME, (BLOCKS * 8192/1024/1024) - (NUM_ROWS * AVG_ROW_LEN/1024/1024) "Data lower than HWM in MB" FROM USER_TABLES WHERE table_name = 'TBS_DATA';
TABLE_NAME Data lower than HWM in MB

To reduce the high water mark:
a. Take an export of the table and re-import it back
b. Truncate the table (only if you want to remove the data/rows completely)

Suppose we create an empty table; the high-water mark is at the beginning of the table segment:

[ X | ................ unused space ................ ]   (X = the high water mark)


When you insert rows into the table, the high watermark is bumped up step by step. This is done by the server process that performs the inserts.

[ used data/rows | X | ........ unused space ........ ]


Now suppose we insert 10,000 rows into the table, and later delete 5,000 of them.

[ used data | empty blocks | X | ..... unused space ..... ]


Full table scan

As you have seen above, deleting data does not move the HWM. The main disadvantage of this is that Oracle always reads all blocks up to the high water mark during a full table scan. You may have noticed that a count(*) on an empty table can take time to return 0 rows; the reason for the delay is the HWM being set at a high position.

NOTE: Whenever the optimizer chooses a full table scan, it scans all the blocks below the HWM. This degrades performance and slows down the full table scan. To avoid this, we need to shrink the table to reset the HWM.

So, how do we set the high-water mark at a lower position?

The simplest way to reset the HWM is to truncate the table. Let us see how truncate resets the HWM.

[ X | ................ unused space ................ ]   (no data in the segment)

The HWM is reset now, after truncating the data.

HWMBLOCKS = "blocks containing data" + "empty blocks"

          HWMBLOCKS - "blocks containing data"
HWM % = ( ------------------------------------ ) * 100
                      HWMBLOCKS


You can also use DBMS_SPACE.UNUSED_SPACE procedure to determine HWM.

High Watermark = total blocks - unused blocks - 1
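The formulas above, worked through with made-up block counts (shell arithmetic, purely illustrative):

```shell
# HWM % from the blocks below the high water mark (hypothetical numbers)
data_blocks=750     # "blocks containing data"
empty_blocks=250    # empty blocks below the HWM
hwmblocks=$((data_blocks + empty_blocks))                      # 1000
hwm_pct=$((100 * (hwmblocks - data_blocks) / hwmblocks))       # 25
echo "$hwmblocks blocks below the HWM, ${hwm_pct}% of them empty"

# HWM position from DBMS_SPACE.UNUSED_SPACE-style numbers (hypothetical)
total_blocks=1280   # dba_segments.blocks
unused_blocks=279   # unused blocks above the HWM
echo "HWM is at block $((total_blocks - unused_blocks - 1))"   # block 1000
```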

SQL> select blocks from dba_segments where owner = upper('HR') and segment_name = upper('JOBS');

SQL> analyze table hr.jobs estimate statistics;

SQL> select empty_blocks from dba_tables where owner = upper('HR') and table_name = upper('jobs');

Ways to reset the HWM:

Option 1
Alter table  tab_name  move tablespace

This option requires rebuilding the indexes, since they become invalid/unusable after running the above command. Users also cannot use the application or reports while the indexes are being rebuilt. Make sure you stop GoldenGate if this table is in an extract process; otherwise it will ABEND (because of the unusable indexes).

Option 2

  1. Export the data
  2. Truncate the table
  3. Import the table
  4. Analyze the table

Option 3
  1. Copy the table data
  2. Truncate the original table
  3. Insert the data back

Option 4
  Use the DBMS_REDEFINITION package to copy the table (redefining the table structure)

People are often confused about resetting the HWM with the ALTER TABLE ... DEALLOCATE UNUSED clause. This clause only frees unused space above the high water mark; it cannot reset the HWM position.

To determine the exact number of blocks that contain data i.e. space used by table below the high water mark, query rowid and get the first and last used block from the rowid.

SQL> select count(distinct dbms_rowid.rowid_block_number(rowid)) "used blocks" from table_name;

Starting with Oracle 10gR1 we can use a new feature for adjusting the high watermark:
it is called segment shrinking, and it is only possible for segments that use ASSM, in other words, that are located in tablespaces using Automatic Segment Space Management.

How to View High Water Mark – Step-by-Step Instructions (Doc ID 262353.1)


JDeveloper supports two mechanisms for embedding SQL in Java programs:

  • SQLJ: If you know the PL/SQL tables and columns involved at compile time (static application), you can use SQLJ. SQLJ is an industry standard for defining precompiled SQL code in Java programs.

    SQLJ allows you to code at a higher level than JDBC, by embedding SQL statements directly in your Java code. The SQLJ precompiler that is integrated into JDeveloper translates the SQL into Java plus JDBC code for you. SQLJ with JDeveloper lets you write and debug applications much faster than you can using just JDBC.

  • JDBC: If you require fine-grained control over database access, or if you are developing an application that requires precise information about database (or instance) metadata, you can code your application entirely in Java using the JDBC API.

You can mix JDBC calls with SQLJ statements in your program. One way to do this is through connection context sharing.

28.2.1 Using SQLJ

SQLJ is a standard way to embed static SQL statements in Java programs. SQLJ applications are portable and can communicate with databases from multiple vendors using standard JDBC drivers.

SQLJ provides a way to develop applications both on the client side and on the middle-tier that access databases using Java. Developing in SQLJ is fast and efficient, and JDeveloper completely supports SQLJ development. You can create or include SQLJ files in your JDeveloper projects. When you compile a project that contains SQLJ source files, JDeveloper automatically calls the SQLJ translator, or precompiler. The translator produces completely standard Java source code, with calls to JDBC methods to provide the database support. JDeveloper then compiles the Java that the SQLJ translator generates.

For more information, see the Oracle Database SQLJ Developer’s Guide.

28.2.2 Using Oracle JDBC Drivers

JDBC provides Java programs with low-level access to databases.

Oracle JDBC drivers can be grouped into two main categories with the following attributes:

  • Java-based drivers (thin client / Type 4 driver):
    • are implemented entirely in Java
    • are highly portable
    • can be downloaded from the server system to a web browser
    • can connect using the TCP/IP protocol
    • are the only option for applets (due to security restrictions)
  • OCI-based drivers (Type 2 driver):
    • are implemented using native method libraries (OCI DLLs)
    • have OCI libraries that must be available on the client system
    • cannot be downloaded to a browser
    • can connect using any Net8 protocol
    • deliver high performance

The following figure illustrates how JDBC components and the driver run in the same memory space as an applet.

Figure 28-1 JDBC Components (JDBC components running in the same memory space as an applet)

The following figure illustrates how the Oracle JDBC OCI drivers run in a separate memory space from your Java application. These JDBC drivers make OCI calls to a separately loaded file.

Figure 28-2 Oracle JDBC OCI Drivers (the JDBC OCI drivers run in a separate memory space)


Take care not to confuse the terms JDBC and JDBC drivers. All Java applications, no matter how they are developed or where they execute, ultimately use the JDBC-level drivers to connect to Oracle. However, coding using the pure JDBC API is low-level development, akin to using the Oracle Call Interface (OCI) to develop a database application. Like the OCI, the JDBC API provides a very powerful, but also very code-intensive, way of developing an application.

28.2.3 SQLJ versus JDBC

How does SQLJ compare to JDBC? Here are some of the advantages that SQLJ offers over coding directly in JDBC:

  • SQLJ programs require fewer lines of code than JDBC programs. They are shorter, and hence easier to debug.
  • SQLJ can perform syntactic and semantic checking on the code, using database connections at compile time.
  • SQLJ provides strong type-checking of query results and other return parameters, while JDBC values are passed to and from SQL without having been checked at compile time.
  • SQLJ provides a simplified way of processing SQL statements. Instead of having to write separate method calls to bind each input parameter and retrieve each select list item, you can write one SQL statement that uses Java host variables. SQLJ takes care of the binding for you.

However, JDBC provides finer-grained control over the execution of SQL statements and offers true dynamic SQL capability. If your application requires dynamic capability (discovery of database or instance metadata at runtime), then you should use JDBC.

28.2.4 Embedding SQL in Java Programs with SQLJ

You have to perform a number of tasks to embed SQL in Java programs with SQLJ.

How to Create SQL Files

You can create a new SQL (.sql) file and add it to the current project.

To create a SQL file:

  1. In the Application Navigator, select the project.
  2. From the main menu, choose File > New to open the New Gallery.
  3. In the New Gallery, in the Categories tree, select Database Tier then Database Files. In the Items list, double-click SQL File.
  4. In the Create SQL File dialog, provide the details to describe the new file.

    For more information at any time, press F1 or click Help from within the dialog.

  5. Click OK.

An empty SQL file is added to the current project and opened for editing.

How to Create SQLJ Classes

Create a new SQLJ (.sqlj) file and add it to the current project.

To create a new SQLJ file:

  1. In the Navigator, select the project.
  2. From the main menu, choose File > New to open the New Gallery.
  3. In the Categories tree, expand Database Tier and select Database Files.

    For more information at any time, press F1 or click Help from within the dialog.

  4. In the Items list, double-click SQLJ Class to open the Create SQLJ Class dialog.
  5. In the Create SQLJ File dialog, provide the details to describe the new file.

    For more information at any time, press F1 or click Help from within the dialog.

  6. Click OK.

A skeleton SQLJ class will be added to the current project and opened for editing.

How to Compile SQLJ Classes

You can compile SQLJ classes into Java .class files.

To compile a SQLJ class:

  1. Set the project’s SQLJ translator options to control how the file is compiled.
  2. In the Application Navigator, locate and select the SQLJ class.
  3. Right-click the class, and choose Make.

The status bar at the bottom of the JDeveloper window shows the result of the compilation. Errors, if any, are listed in the log window.

How to Use Named SQLJ Connection Contexts

A SQLJ executable statement can designate a connection context object that specifies the database connection where the SQL operation in that clause will execute. If the SQLJ statement omits the connection context clause, then the default connection context is used.

How to Declare a SQLJ Connection Context Class

A connection context is an object of a connection context class, which you define using a SQLJ connection declaration.

To declare a context class:

  1. Declare a context class.

    Named connection contexts are not required: SQLJ statements that omit the connection context name use the default connection context.

    For example, this statement declares the context class MyConnectionContext:

    #sql context MyConnectionContext; 

    Context classes extend sqlj.runtime.ref.ConnectionContextImpl and implement sqlj.runtime.ConnectionContext.

After you have declared a context class, create a context object.

How to Create a Connection Context Object

Before it can be used in an SQLJ statement, a declared connection context must be created.

To create a context object:

  1. Named connection contexts are not required: SQLJ statements that omit the connection context name use the default connection context.

    For example, use this statement to create an instance thisCtx for the connection context class MyConnectionContext:

    MyConnectionContext thisCtx = new MyConnectionContext(myPath, myUID, myPasswd, autocommit);

How to Debug SQLJ Classes

You debug SQLJ code by debugging the SQLJ source directly, not the generated Java code.

SQLJ is debugged in JDeveloper in the same manner as other source code.

For more information, see the Oracle Database SQLJ Developer's Guide.

How to Set SQLJ Translator Options

You can control the translation of SQLJ classes through the controls in the Project Properties dialog:

  • Provide syntactic as well as semantic checking of SQL code.
  • Provide syntax and type checking on the SQL statements.
  • Test the compatibility of Java and SQL expressions at compile time.
  • Specify a connection to a database server.
  • Check the semantics of your SQL statements against the database schemas specified by connection contexts.

To set the SQLJ translator options:

  1. In the Application Navigator, select the project that contains the SQLJ file.
  2. Choose Application > Project Properties > Compiler and select SQLJ.
  3. In the SQLJ panel, set the compilation options.
  4. Click OK.

You can set SQLJ translator properties for all projects by choosing Default Project Properties from the Application menu.

How to Use SQLJ Connection Options

SQLJ connection options specify the database connection for online checking. The general form for connection options is

-option[@context]=value

where option is one of the four options listed below (the bracketed context tag is optional).

The context tag is a connection context type, which permits the use of separate exemplar schemas for each of the connection contexts. If you omit the connection context type, the value will be used for any SQL statements that use the default connection context. The driver option does not allow a context tag.

The options are:

  • user This option specifies the user name for connecting to a database in order to perform semantic analysis of the SQL expressions embedded in a SQLJ program. It contains the user name, for example:

    The user command line option may include a connection context type. For example:


    Whenever a user name is required for the connection to a database context Ctx1, SQLJ uses the user option that was tagged with Ctx1. If it cannot find one, SQLJ issues a message and looks for an untagged user option to use instead.

    Specifying a user value indicates to SQLJ that online checking is to be performed. If you do not specify the user option, SQLJ does not connect to the database for semantic analysis. There is no default value for the user option.

    If you have turned on online checking by default (by specifying, for example, -user=hr), then in order to disable online checking for a particular connection context type Ctx2, you must explicitly specify an empty user name, for example:

  • password This option specifies a password for the user. The password will be requested interactively if it is not supplied. This option can be tagged with a connection context type. Examples of the two forms are:
  • url This option specifies a JDBC URL for establishing a database connection. The default is jdbc:oracle:oci9:@. This option can be tagged with a connection context type. For example:
    -url=jdbc:oracle:oci8:@ -url@Ctx1=jdbc:oracle:thin:@<local_host>:1521:orcl
  • driver This option specifies a list of JDBC drivers that should be registered in order to interpret JDBC connection URLs for online analysis. The default is oracle.jdbc.driver.OracleDriver. For example:

    This option cannot be tagged with a connection context type.

    (Src: Oracle® Fusion Middleware User's Guide for Oracle JDeveloper)

How To Create And Remove A Soft Link, Symlink Or Symbolic Link

A soft link, or more commonly a symlink, is like a shortcut to the targeted file or directory: when the link is removed, the original target stays present. A hard link, by contrast, is an additional directory entry for the same file; the file itself is removed only when its last hard link is deleted.

A symlink can be created like:

ln -s /path/ linkname

from the ln man pages:

ln [OPTION]... [-T] TARGET LINK_NAME (1st form)

-s, --symbolic
make symbolic links instead of hard links

to remove a symlink

rm linkname

What is important here is to note that the command must not have a trailing slash:

$ rm linkname/

will output the error:
rm: cannot remove `linkname/': Is a directory

$ rmdir linkname/
will output:
rmdir: linkname/: Not a directory
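A self-contained demonstration (in a scratch directory, safe to run anywhere) that removing the symlink leaves the target untouched:

```shell
tmp=$(mktemp -d)                       # scratch area
mkdir "$tmp/target"
ln -s "$tmp/target" "$tmp/linkname"    # create the symlink
readlink "$tmp/linkname"               # prints the target path
rm "$tmp/linkname"                     # remove the link (no trailing slash)
result=""
test -d "$tmp/target" && result="target survives"
echo "$result"                         # target survives
rm -r "$tmp"                           # clean up
```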

Maximum Tablespace Size And Database Limit For An Oracle DB


Oracle Database – Enterprise Edition – Version to [Release 10.1 to 11.2]
Oracle Database – Enterprise Edition – Version to [Release 11.2]
Information in this document applies to any platform.

What are the maximum values for the following in Oracle 10g and 11g:

database size
tablespace size
file size


Small file database

For a small file database, the Oracle database has the following limits :

maximum number of datafiles is : 65533
maximum data blocks per datafile : 2^22 -1 = 4194303
maximum datafile size = db_block_size * maximum number of blocks
maximum db_block_size = 32 K
maximum tablespace size = 1022 * maximum datafile size
Resulting values are:

maximum datafile size = 32 * 4194303 = 128 GB
maximum tablespace size = 1022 * 128 GB = 130816 GB
maximum database size = 128 GB * 65533 = 8388224 GB

Big file database

For a big file database, the Oracle database has the following limits :

maximum number of datafiles is : 65533
maximum data blocks per datafile : 2^32 = 4294967296
maximum datafile size = db_block_size * maximum number of blocks
max db_block_size = 32 K
maximum tablespace size = size of big datafile
Resulting values are:

maximum datafile size = 32 * 4294967296 = 128 TB
maximum tablespace size = 128 TB
maximum database size = 128 TB * 65533 = 8388224 TB

Note: If you try to increase the maxsize of a datafile beyond the limits explained in this document, the operation will fail with an ORA-3206 error.
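The limit arithmetic above, spelled out with bash arithmetic (32 KB block size assumed, as in the text):

```shell
bs=32768                        # maximum db_block_size: 32 KB
small_blocks=$((2**22 - 1))     # smallfile: 4194303 blocks per datafile
big_blocks=$((2**32))           # bigfile:   4294967296 blocks per datafile

small_df_gb=$((bs * small_blocks / 1024**3))   # 127 (just under the nominal 128 GB)
big_df_tb=$((bs * big_blocks / 1024**4))       # 128 TB exactly
echo "smallfile datafile: ~${small_df_gb} GB; tablespace: $((1022 * 128)) GB"
echo "bigfile   datafile: ${big_df_tb} TB (one datafile = the whole tablespace)"
```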

Consider Flash for Database Storage

If any of this is ticking boxes for you, it’s time to consider what flash could do for the performance of your database:

  • I/O wait times are high. Essentially we are looking for high latency from the existing storage system. Flash memory systems should deliver I/O with sub-millisecond latency, so if you see an average latency of 8 ms on random reads (db file sequential read), for example, you know there is potential for reducing latency to an eighth of its previous average value.
  • I/O forms a significant percentage of Database Time. If I/O is only responsible for 5% of database time, no amount of lightning-fast flash is going to give you a big performance boost; your problems are elsewhere. On the other hand, if I/O comprises a large portion of database time, you have lots of room for improvement.

Oracle Database Limits

The following is a list of database limits which are divided into four categories in addition to the PL/SQL compiler limits. It is an aggregation of 5 separate web pages from the Oracle Database 11g Release 2 documentation library. I put them all here on one page for convenience.

Datatype Limits

Datatypes Limit Comments
BFILE Maximum size: 4 GB Maximum size of a file name: 255 characters Maximum size of a directory name: 30 characters Maximum number of open BFILEs: see Comments The maximum number of BFILEs is limited by the value of the SESSION_MAX_OPEN_FILES initialization parameter, which is itself limited by the maximum number of open files the operating system will allow.
BLOB Maximum size: (4 GB - 1) * DB_BLOCK_SIZE initialization parameter (8 TB to 128 TB) The number of LOB columns per table is limited only by the maximum number of columns per table (that is, 1000).
CHAR Maximum size: 2000 bytes None
CHAR VARYING Maximum size: 4000 bytes None
CLOB Maximum size: (4 GB - 1) * DB_BLOCK_SIZE initialization parameter (8 TB to 128 TB) The number of LOB columns per table is limited only by the maximum number of columns per table (that is, 1000).
Literals (characters or numbers in SQL or PL/SQL) Maximum size: 4000 characters None
LONG Maximum size: 2 GB - 1 Only one LONG column is allowed per table.
NCHAR Maximum size: 2000 bytes None
NCHAR VARYING Maximum size: 4000 bytes None
NCLOB Maximum size: (4 GB - 1) * DB_BLOCK_SIZE initialization parameter (8 TB to 128 TB) The number of LOB columns per table is limited only by the maximum number of columns per table (that is, 1000).
NUMBER 999...(38 9's) x 10^125 maximum value; -999...(38 9's) x 10^125 minimum value Can be represented to full 38-digit precision (the mantissa)
Precision 38 significant digits None
RAW Maximum size: 2000 bytes None
VARCHAR Maximum size: 4000 bytes None
VARCHAR2 Maximum size: 4000 bytes None

Logical Database Limits

Item Type of Limit Limit Value
GROUP BY clause Maximum length The GROUP BY expression and all of the nondistinct aggregate functions (for example, SUM, AVG) must fit within a single database block.
Indexes Maximum per table Unlimited
Indexes Total size of indexed column 75% of the database block size minus some overhead
Columns Per table 1000 columns maximum
Columns Per index (or clustered index) 32 columns maximum
Columns Per bitmapped index 30 columns maximum
Constraints Maximum per column Unlimited
Subqueries Maximum levels of subqueries in a SQL statement Unlimited in the FROM clause of the top-level query; 255 subqueries in the WHERE clause
Partitions Maximum length of linear partitioning key 4 KB – overhead
Partitions Maximum number of columns in partition key 16 columns
Partitions Maximum number of partitions allowed per table or index 1024K – 1
Rows Maximum number per table Unlimited
Stored Packages Maximum size PL/SQL and Developer/2000 may have limits on the size of stored procedures they can call. The limits typically range from 2000 to 3000 lines of code.
Trigger Cascade Limit Maximum value Operating system-dependent, typically 32
Users and Roles Maximum 2,147,483,638
Tables Maximum per clustered table 32 tables
Tables Maximum per database Unlimited

Physical Database Limits

Item Type of Limit Limit Value
Database Block Size Minimum 2048 bytes; must be a multiple of operating system physical block size
Database Block Size Maximum Operating system dependent; never more than 32 KB
Database Blocks Minimum in initial extent of a segment 2 blocks
Database Blocks Maximum per datafile Platform dependent; typically 2^22 - 1 blocks
Controlfiles Number of control files 1 minimum; 2 or more (on separate devices) strongly recommended
Controlfiles Size of a control file Dependent on operating system and database creation options; maximum of 20,000 x (database block size)
Database files Maximum per tablespace Operating system dependent; usually 1022
Database files Maximum per database 65533; may be less on some operating systems. Limited also by the size of database blocks and by the DB_FILES initialization parameter for a particular instance
Database extents (more) Maximum per dictionary managed tablespace 4 GB * physical block size (with K/M modifier); 4 GB (without K/M modifier)
Database extents Maximum per locally managed (uniform) tablespace 2 GB * physical block size (with K/M modifier); 2 GB (without K/M modifier)
Database file size Maximum Operating system dependent. Limited by maximum operating system file size; typically 2^22 (4M) blocks
MAXEXTENTS Default value Derived from tablespace default storage or DB_BLOCK_SIZE initialization parameter
MAXEXTENTS Maximum Unlimited
Redo Log Files Maximum number of logfiles Limited by value of MAXLOGFILES parameter in the CREATE DATABASE statement; the control file can be resized to allow more entries; ultimately an operating system limit
Redo Log Files Maximum number of logfiles per group Unlimited
Redo Log File Size Minimum size 4 MB
Redo Log File Size Maximum Size Operating system limit; typically 2 GB
Tablespaces Maximum number per database 64 K; the number of tablespaces cannot exceed the number of database files because each tablespace must include at least one file
Bigfile Tablespaces Number of blocks A bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32 K blocks and 32 TB for a tablespace with 8 K blocks.
Smallfile (traditional) Tablespaces Number of blocks A smallfile tablespace is a traditional Oracle tablespace, which can contain 1022 datafiles or tempfiles, each of which can contain up to approximately 4 million (2^22) blocks.
External Tables file Maximum size Dependent on the operating system. An external table can be composed of multiple files.

Process and Runtime Limits

Item | Type of Limit | Limit Value
Instances per database | Maximum number of cluster database instances per database | Operating system dependent
Locks | Row-level | Unlimited
Locks | Distributed Lock Manager | Operating system dependent
SGA size | Maximum value | Operating system dependent; typically 2 to 4 GB for 32-bit operating systems, and > 4 GB for 64-bit operating systems
Advanced Queuing processes | Maximum per instance | 10
Job queue processes | Maximum per instance | 1000
I/O slave processes | Maximum per background process (DBWR, LGWR, etc.) | 15
I/O slave processes | Maximum per backup session | 15
Sessions | Maximum per instance | 32 K; limited by the PROCESSES and SESSIONS initialization parameters
Global Cache Service processes | Maximum per instance | 10
Shared servers | Maximum per instance | Unlimited within constraints set by the PROCESSES and SESSIONS initialization parameters for the instance
Dispatchers | Maximum per instance | Unlimited within constraints set by the PROCESSES and SESSIONS initialization parameters for the instance
Parallel execution slaves | Maximum per instance | Unlimited within constraints set by the PROCESSES and SESSIONS initialization parameters for the instance
Backup sessions | Maximum per instance | Unlimited within constraints set by the PROCESSES and SESSIONS initialization parameters for the instance
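The PROCESSES and SESSIONS ceilings above can be inspected in the same way; a hedged sketch assuming SELECT access to the V$ views:

```sql
-- Configured ceilings vs. current consumption; if the session count
-- approaches the SESSIONS parameter, new logins fail with ORA-00018.
SELECT name, value
  FROM v$parameter
 WHERE name IN ('processes', 'sessions');

SELECT COUNT(*) AS current_sessions
  FROM v$session;
```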

PL/SQL Compiler Limits

Item | Limit
bind variables passed to a program unit | 32768
exception handlers in a program unit | 65536
fields in a record | 65536
levels of block nesting | 255
levels of record nesting | 32
levels of subquery nesting | 254
levels of label nesting | 98
levels of nested collections | no predefined limit
magnitude of a PLS_INTEGER or BINARY_INTEGER value | -2147483648..2147483647
number of formal parameters in an explicit cursor, function, or procedure | 65536
objects referenced by a program unit | 65536
precision of a FLOAT value (binary digits) | 126
precision of a NUMBER value (decimal digits) | 38
precision of a REAL value (binary digits) | 63
size of an identifier (characters) | 30
size of a string literal (bytes) | 32767
size of a CHAR value (bytes) | 32767
size of a LONG value (bytes) | 32760
size of a LONG RAW value (bytes) | 32760
size of a RAW value (bytes) | 32767
size of a VARCHAR2 value (bytes) | 32767
size of an NCHAR value (bytes) | 32767
size of an NVARCHAR2 value (bytes) | 32767
size of a BFILE value (bytes) | 4 GB * value of the DB_BLOCK_SIZE parameter
size of a BLOB value (bytes) | 4 GB * value of the DB_BLOCK_SIZE parameter
size of a CLOB value (bytes) | 4 GB * value of the DB_BLOCK_SIZE parameter
size of an NCLOB value (bytes) | 4 GB * value of the DB_BLOCK_SIZE parameter
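A few of these compiler limits can be demonstrated directly in an anonymous block; a small sketch (requires DBMS_OUTPUT enabled, e.g. SET SERVEROUTPUT ON in SQL*Plus):

```sql
DECLARE
  big_string VARCHAR2(32767);             -- maximum PL/SQL VARCHAR2 size
  max_int    PLS_INTEGER := 2147483647;   -- upper bound of PLS_INTEGER
BEGIN
  big_string := RPAD('x', 32767, 'x');
  DBMS_OUTPUT.PUT_LINE('length = ' || LENGTH(big_string));  -- 32767
  -- max_int := max_int + 1;  -- would raise ORA-01426: numeric overflow
  DBMS_OUTPUT.PUT_LINE('max PLS_INTEGER = ' || max_int);
END;
/
```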

HA: Oracle RAC vs. RAC One Node vs. Data Guard

When asked to propose an HA solution for an Oracle database, these are the alternatives I would introduce. I have collected as much information as possible about the available technologies: RAC, RAC One Node and Data Guard. This article does not try to explain each of these technologies in depth; rather, it compares them with each other and summarizes their features, advantages and disadvantages. One more important note: the focus here is purely high availability, and other criteria are not considered. So, let's start with a basic introduction:



1. Oracle RAC

The idea is to use the additional resources of multiple machines to satisfy higher load demands (scalability) and, at the same time, to provide a higher level of availability, since connections can be directed to any available instance.
– servers connect to the same database (shared storage) simultaneously
– servers are usually in the same location, "in one room"
– RAC is mainly for load balancing and scalability
– the application should be "cluster ready"
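On a running cluster the participating instances are visible through the GV$ views; a minimal check (assumes a RAC database and SELECT access to GV$INSTANCE):

```sql
-- One row per open instance; a healthy two-node RAC returns two rows.
SELECT inst_id, instance_name, host_name, status
  FROM gv$instance
 ORDER BY inst_id;
```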

2. Oracle RAC One (one node RAC)

It uses the same infrastructure as classic RAC, but a single RAC instance runs on one node of the cluster while the second node is in cold-standby mode. It provides a cold failover solution for the Oracle database.
– built-in cluster failover for HA, but, unlike regular RAC, no load balancing
– useful for maintenance purposes such as rolling or proactive upgrades
– it can be upgraded online to a real (multi-node) RAC

3. Oracle Data Guard

Data Guard provides continuity of operations: if the room in which your RAC cluster resides "goes away" (fire, flood, major hardware failure, whatever), Data Guard is "somewhere else, ready to take over" (the failover site).
– designed as a cost-effective disaster recovery and business continuity solution
– DG provides many extra features for using the secondary-site database (reporting, testing, UAT environments, etc.)
– it is possible to build total (physical) replication or to replicate just some parts (logical)
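The role a database currently plays in a Data Guard configuration can be read from V$DATABASE; a small sketch:

```sql
-- On the primary this typically shows PRIMARY / MAXIMUM PERFORMANCE;
-- on the standby, PHYSICAL STANDBY (or LOGICAL STANDBY).
SELECT database_role, protection_mode, protection_level
  FROM v$database;
```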

Hardware/Network aspect

1. Oracle RAC

– the infrastructure is complex and requires many network interfaces
– servers communicate via a dedicated interconnect (latency between nodes is critical for Cache Fusion)
– long distances between node sites require special hardware components (stretch cluster)
– RAC generally has significant overhead compared with a classic infrastructure (not ideal in a cloud environment)
– it is recommended to use nodes of similar capacity

2. Oracle RAC One (one node RAC)

– it requires the same infrastructure components as a regular RAC environment
– it has less overhead than regular RAC

3. Oracle Data Guard

– servers connect to different databases at different sites
– servers only need a standard network connection to transfer the transaction logs (archived redo logs)
– shared storage can be used but is not required
– primary and secondary nodes can be different hardware


Data protection aspect

1. Oracle RAC

– this protects against instance failures, but not against data or storage failures (that needs storage-level replication)
– data recovery takes as long as in a single-instance setup (but parallel recovery is an option)
– limited protection against human errors

2. Oracle RAC One (one node RAC)

– RAC One provides fast server relocation, but not 100% continuous availability
– it is not designed for DR and is not suitable for mission-critical applications without a DR solution
– this technology sits somewhere between RAC and DG

3. Oracle Data Guard

– primarily a standby solution for failures at the primary database (human errors, corruption, etc.); recovery can take just a few minutes. Human error is responsible for more than 75 percent of Oracle outages (human error 79%, hardware error 21%)
– DG can provide protection against human errors (the ability to run in a delayed-apply mode)
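The delayed-apply protection mentioned above is configured through the DELAY attribute of the archive destination. A hedged sketch, where 'standby_tns' is a hypothetical TNS alias for the standby and 240 minutes is an arbitrary example delay:

```sql
-- Redo shipped to the standby is not applied for 240 minutes, leaving a
-- window in which a dropped table or bad batch job has not yet reached it.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby_tns DELAY=240 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
  SCOPE=BOTH;
```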

Switchover after failure

1. Oracle RAC

– zero downtime for instance-level failures
– open sessions are relocated automatically; open transactions have to be repeated

2. Oracle RAC One (one node RAC)

– switchover takes less than 5 minutes
– on failure, the service is first restarted on the primary node, then an automatic switchover follows if the restart was not successful

3. Oracle Data Guard

– switchover takes from under a minute up to a few minutes, depending on the settings
– switchover can be performed automatically, but it is usually a manual operation
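A manual switchover to a physical standby can be driven from SQL*Plus (the Data Guard broker's DGMGRL offers a one-command alternative); a hedged sketch of the classic two-step sequence:

```sql
-- Step 1, on the primary: convert it to a standby.
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

-- Step 2, on the (old) standby: promote it to primary, then open it.
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;
```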


Cost aspect

1. Oracle RAC

– RAC is usually the more expensive solution
– RAC is very popular in spite of its complexity, partly because the Oracle marketing machine is very effective

2. Oracle RAC One (one node RAC)

– Oracle has a separate pricing policy for this solution; it costs less than classic RAC but is still expensive

3. Oracle Data Guard

– no extra option cost is required, but a license has to be bought for the secondary site as well
– if we want to use the standby site for reporting, there is an option for that, but it has to be licensed separately (it is called "Active Data Guard")

Dynamic computing, cloud ready

1. Oracle RAC

– the clustering infrastructure required to run RAC is not trivial to provide
– virtualization has overhead and the clustering solution adds its own; together this means significant performance disadvantages (it remains to be seen what Oracle's answer will be in version 12c)

2. Oracle RAC One (one node RAC)

– it requires the same (complex) infrastructure as classic RAC, and performance can be an issue
(– still, it is a good idea to start the service on just one node and extend it with others on demand; that is a scalability benefit, though, not an availability one)

3. Oracle Data Guard

– more suitable in a cloud (dynamic) environment, e.g. for switching between data centres


Both Data Guard and RAC have their strengths and weaknesses, and some sites even use both; that is why Oracle recommends the "Maximum Availability Architecture". Put together, RAC + DG (+ a logical standby) can reach roughly 99.9999% system availability, but at a very high cost. To understand this result, we have to look at the strengths (these technologies complement each other):
–    RAC offers less downtime for OS/DB upgrades and instance-level failures
–    DG can protect against human and storage errors
–    a logical (DG) standby can eliminate the downtime for every OS/DB upgrade (but it must be supported by the applications)