Either you are interested in making datafiles larger than 32 gigabytes, or you've encountered a database with a datafile bigger than 32 gigabytes and are wondering how that is possible.
In 10g, Oracle introduced the ability to define tablespaces as either "smallfile" or "bigfile".
Smallfiles are the default, and their maximum size is determined by blocksize. Since we build a DB with an 8k blocksize, the maximum file size is 32GB; the maximum varies with blocksize. Because a database is allowed a finite number of datafiles due to an SGA limitation (64,000... who knew!), a database made up of smallfile tablespaces has a maximum size of 64,000 * 32GB, or 2,048,000GB (that's only 2 petabytes).
Bigfile tablespaces are bigger, shall we say: a single datafile with an 8k blocksize may contain 32TB.
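The arithmetic behind those limits can be sketched as follows. A smallfile datafile can address 2^22 blocks and a bigfile datafile 2^32 blocks, so maximum file size is just block count times blocksize (the function name here is ours, not Oracle's):

```python
# Sketch of Oracle datafile size limits, assuming the documented block
# addressing: 2**22 blocks per smallfile datafile, 2**32 per bigfile.

SMALLFILE_MAX_BLOCKS = 2**22   # 4,194,304 blocks
BIGFILE_MAX_BLOCKS = 2**32     # 4,294,967,296 blocks

def max_file_gb(block_size_kb: int, bigfile: bool = False) -> float:
    """Maximum datafile size in GB for a given blocksize in KB."""
    blocks = BIGFILE_MAX_BLOCKS if bigfile else SMALLFILE_MAX_BLOCKS
    return blocks * block_size_kb / (1024 * 1024)  # KB -> GB

print(max_file_gb(8))                # 32.0 GB  (smallfile, 8k blocks)
print(max_file_gb(8, bigfile=True))  # 32768.0 GB = 32 TB (bigfile)
```

With 8k blocks this reproduces the 32GB smallfile and 32TB bigfile ceilings described above.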
Here’s the rub…
-A Bigfile tablespace may have one and only one datafile
-That file must use Automatic Segment Space Management (ASSM) and should be striped or RAIDed across multiple disks.
-The file must be locally managed.
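As a hedged example, a bigfile tablespace satisfying the rules above might be created and verified like this (tablespace name and path are hypothetical; locally managed extents and ASSM are the defaults here):

```sql
-- Hypothetical names and paths; Oracle 10g+ syntax.
CREATE BIGFILE TABLESPACE big_data
  DATAFILE '/u01/oradata/ORCL/big_data01.dbf' SIZE 100G
  AUTOEXTEND ON;

-- Confirm which tablespaces are bigfile:
SELECT tablespace_name, bigfile FROM dba_tablespaces;
```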
Since a database may have 64,000 datafiles of 32TB each, the maximum for a "Bigfile" database is 2,048,000TB (2,097,152,000GB) or, roughly, 2 exabytes, or "foolishly large".
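Redoing the whole-database arithmetic as a quick sketch (assuming the article's 64,000-datafile ceiling and an 8k blocksize):

```python
# Whole-database size ceilings, assuming 64,000 datafiles (the limit
# cited above) and an 8k blocksize.

MAX_DATAFILES = 64_000
SMALLFILE_GB = 32          # 8k-block smallfile datafile cap
BIGFILE_GB = 32 * 1024     # 8k-block bigfile datafile cap (32 TB)

smallfile_db_gb = MAX_DATAFILES * SMALLFILE_GB  # 2,048,000 GB ~ 2 PB
bigfile_db_gb = MAX_DATAFILES * BIGFILE_GB      # 2,097,152,000 GB ~ 2 EB
print(smallfile_db_gb, bigfile_db_gb)
```

So moving from smallfile to bigfile tablespaces multiplies the ceiling by a factor of 1024 (32GB vs 32TB per file).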
Different blocksizes have different maximum file sizes, and, logically, the larger the blocksize (16k or 32k), the larger the maximum database size, which is effectively limited only by hardware and budget.
Lest it go unsaid:
Use of Bigfile tablespaces is not certified by Symantec DLP.
If a database has a datafile significantly greater than 32GB and is not using Bigfiles, check the file with Oracle's datafile verification utility (TECH221887), as it is likely corrupted.
Imported Document ID: TECH219959