Create a database in Oracle 11g on Linux (RHEL6)


In the previous tutorial I explained how to install Oracle 11g Database on a Linux machine. In this tutorial, I will explain how to create a new database. Last time we had created & configured the default database (orcl) that comes with Oracle. This time we will create our own database & then we can create tables there. Let’s start with this tutorial 🙂 Make sure you have at least 2GB of free space where your Oracle is installed.

Declare a new shell variable.

This will be the name of your new database. In this tutorial, I chose the name db_new as I had run out of nice names 😉 Please note that Oracle accepts database names of maximum 8 characters, so don’t go for longer names. Open your bash shell or terminal & issue ORACLE_SID=db_new (make sure you use the name you have chosen for your database). Then issue export ORACLE_SID. This makes the ORACLE_SID variable available to every program started from this shell, which means Oracle will now run targeting this database.
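Putting it together (substitute the name you chose):

ORACLE_SID=db_new
export ORACLE_SID
echo $ORACLE_SID      # optional: verify the variable is set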

Create a directory for your database.

Issue the command mkdir -p $ORACLE_BASE/oradata/db_new (see below). This is where the control & data files for your database will be kept. Refer to the previous post to learn about ORACLE_BASE & ORACLE_HOME.
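For example, with the ORACLE_BASE from the previous post. The second directory is only needed if your pfile sets audit_file_dest, as in the sketch further below:

mkdir -p $ORACLE_BASE/oradata/db_new          # control files & data files go here
mkdir -p $ORACLE_BASE/admin/db_new/adump      # audit destination, assuming your pfile sets audit_file_dest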

Create an Initialisation Parameter file.

This parameter file or pfile determines the database name, location of control files & a variety of other parameters. It’s a text file, so the easiest way to create it is to copy the sample init.ora file found in the $ORACLE_HOME/dbs location. Copy it to the same location & give it the name initdb_new.ora (the name of your database, just make sure you prefix init as it’s an initialisation file). Now edit it with the contents shown below. Make sure to comment out the lines mentioned in the screenshot (open the image in a new tab if you find the text too small), because we are preparing a minimal file; anything extra will result in failure. Be extra cautious about the control files location. Control files are the files that determine the number & location of database data files. Data files are the files that store the actual data.

pfile contents.
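If you find the screenshot text too small, here is a minimal sketch of what such a pfile typically contains. Treat the paths & sizes as illustrative; they must match your own ORACLE_BASE & the directories created above:

db_name='db_new'
memory_target=1G
processes=150
db_block_size=8192
audit_file_dest='/opt/oracledb/u01/app/oracle/admin/db_new/adump'
audit_trail='db'
undo_tablespace='UNDOTBS1'
control_files=('/opt/oracledb/u01/app/oracle/oradata/db_new/control01.ctl')
compatible='11.2.0'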

Create a Server Parameter File.

A server parameter file or spfile is a binary file built from our initialisation parameter file (pfile) & can’t be read using a text editor. This is what Oracle reads before mounting the database. So, start SQL*Plus by issuing the sqlplus /nolog command, then connect as sysdba (e.g. connect / as sysdba). This will connect you to an idle instance. From this point, we can create our spfile or server parameter file. Issue the command create spfile from pfile. This will create an spfile. Now, armed with an spfile, the database can be started in the nomount stage, where Oracle reads the spfile & starts the Oracle instance. Issue the startup nomount command to do this. The full sequence is sketched below.
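The whole thing at the prompt looks like this:

sqlplus /nolog
SQL> connect / as sysdba
SQL> create spfile from pfile;
SQL> startup nomount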

spfile created & instance started in nomount stage.

Issue the Create Database script.

Now we can execute a script that will create our database. This is the most important step, so be extra cautious here. This is the step that exhausted me the most 😦 I failed almost 8-10 times getting past it. Potential pitfalls are: lack of adequate storage space for data files, wrong tablespace names in the pfile, specifying inadequate sizes for various parameters, etc. If you are a novice in Oracle, I suggest strictly following the values mentioned in the screenshot. Grab the script from my GitHub gist or type it as shown below.
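For reference, a representative 11g CREATE DATABASE statement looks like the sketch below. The names, sizes, paths & passwords are illustrative (this is not my exact gist), & the undo tablespace name must match the undo_tablespace parameter in your pfile:

CREATE DATABASE db_new
   USER SYS IDENTIFIED BY Secret_123
   USER SYSTEM IDENTIFIED BY Secret_123
   LOGFILE GROUP 1 ('/opt/oracledb/u01/app/oracle/oradata/db_new/redo01.log') SIZE 50M,
           GROUP 2 ('/opt/oracledb/u01/app/oracle/oradata/db_new/redo02.log') SIZE 50M
   CHARACTER SET AL32UTF8
   EXTENT MANAGEMENT LOCAL
   DATAFILE '/opt/oracledb/u01/app/oracle/oradata/db_new/system01.dbf' SIZE 325M AUTOEXTEND ON
   SYSAUX DATAFILE '/opt/oracledb/u01/app/oracle/oradata/db_new/sysaux01.dbf' SIZE 325M AUTOEXTEND ON
   DEFAULT TEMPORARY TABLESPACE temp
      TEMPFILE '/opt/oracledb/u01/app/oracle/oradata/db_new/temp01.dbf' SIZE 50M
   UNDO TABLESPACE undotbs1
      DATAFILE '/opt/oracledb/u01/app/oracle/oradata/db_new/undotbs01.dbf' SIZE 200M;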

Database creation script executed.

Mount the database.

Now issue the alter database mount command. Mount is the stage where Oracle reads the contents of the control files to learn the size, number & location of the data files. Your database is now mounted.

Open the database.

Now issue the alter database open command to open the database. This is the stage where Oracle reads from & writes to the actual database files & the database becomes ready for transactions. Both commands are shown below.
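At the prompt:

SQL> alter database mount;
SQL> alter database open;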

Run Oracle scripts to build data dictionary views.

This is a time-consuming step (script #2, catproc.sql, takes the longest), so it’s better to execute these scripts & then go for a cup of tea or coffee or some snacks 😉 Run the first two as sysdba; Oracle’s docs recommend running pupbld.sql as the SYSTEM user.

@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/sqlplus/admin/pupbld.sql

scripts executed

Test the database.

Now that the database has been created, we can create our tables in it. You can also test it by querying data dictionary views. Below are two such queries (a sketch follows the screenshot).

querying database dictionary view
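The screenshots show my two queries; any pair of dictionary queries will do, for example:

SQL> select name, open_mode from v$database;
SQL> select count(*) from dba_objects;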

So, we have successfully created our new database to play with. In the next tutorial, I will explain how to back up & restore a database 🙂 See you later 😉


Installing Oracle 11g Database on Linux (RHEL6)


Installing, setting up & administering an Oracle database is one of the most overwhelming things I have ever come across during my short career as a Linux Admin. While it is somewhat easier on a Windows platform, it is a relatively tough nut to crack on Linux servers. A lot of tweaking needs to be done 😦 But with a careful approach, installing Oracle 11g can become a cake-walk 😉 In this tutorial I will show you how to install it on an RHEL 6.5 server. You are free to choose any similar Linux distribution, but a few things do vary. So, be careful 🙂 Let’s start with the tutorial. I assume you have downloaded the Oracle 11g zip files & extracted them to the hard-disk.

Lab Description : –

  • OS – Red Hat Enterprise Linux 6.5 64-bit
  • RAM – 3GB
  • Free Space on /opt – 25GB
  • Oracle Database version – 11.2.0 (11g)

Edit Kernel parameters.

Do it as root. Edit the /etc/sysctl.conf file (always take a backup of configuration files before editing them). Append it with the contents shown in the screenshot (a typical set is also sketched after it). I want you to type it manually & not paste it, so that you get an idea of what you are putting in. Once you have appended the file, save it & run the command sysctl -p to reload the kernel parameters. If these values are set incorrectly, the installer will throw an error & show you the expected value, so you can correct those values & continue with the installation.

Sysctl kernel parameters
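For reference, the 11gR2 installation guide asks for kernel parameters along these lines; treat them as a starting point, since the installer will flag any value it doesn’t like:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576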

Edit /etc/security/limits.conf file.

Append the below contents to the file.

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

Create required groups & oracle user.

Create the oinstall & dba groups & the oracle user.

groupadd oinstall
groupadd dba
useradd -g oinstall -G dba oracle

Log in as the oracle user by issuing su - oracle.

Run the Installer.

Go to the location where you extracted the zip files. It must be the Oracle11g/database directory. You will find the runInstaller script there (note: it’s runInstaller, without a .sh extension). Run it as sketched below & it will open the installer GUI.
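Assuming you extracted the zips under your home directory (adjust the path to your actual location):

cd ~/Oracle11g/database    # hypothetical extract location
./runInstaller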

Installer Location

Installation Screen

Click Next & you will be shown a warning message. Click Yes, as it’s just a warning that you have not entered an email address. You won’t need it anyway 😉

Install2

Click Next & then Skip Software Updates. Of course, this is optional; choose accordingly.

Install3

Click Next & proceed with selecting Create & Configure a database, since this is your first installation. I assume it to be on a test server or at home. This database can act as a guide to create other databases, so I prefer this option if this is your first ever experience of installing Oracle.

Install4

Click Next & choose Desktop Class as this allows for a minimal set-up.

Install5

Choosing Installation Location & Database Name.

This is a very important step, so take extra care here. I am explaining what needs to be done. Oracle base is where your Oracle is installed. Choose a partition with at least 12GB free space. While it can be moved to any directory later on, I recommend choosing your partition wisely to simplify things. The next 2 options will change dynamically based on option 1. Leave the other fields as default. Then type in the Administrator password. Do remember that Oracle doesn’t accept @ in passwords. Confirm it & click Next.

Install6

If the password chosen is weak, it will display a warning message. Click Yes to continue.

Install7

Now choose an Inventory directory. You may leave it as default & then click Next.

Install8

Now the step that taught me a few things 😉 If the kernel parameters were incorrectly set (step 1) & the prerequisite packages were not installed, you will see the error message shown below. Thank God, Oracle is intelligent enough to tell us (refer to the screenshot below) what the values in /etc/sysctl.conf should be 🙂 If you run into this issue, make sure to set the correct values & also to install the required packages (a typical package list is sketched after the screenshot). I could have mentioned this at the beginning, but I didn’t, since resolving errors is a good habit, rather than never seeing any error 😉 It helps in learning. Once the dependencies are resolved, click Next.

Install9
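The prerequisite packages for 11gR2 on RHEL6 are, roughly, the set below (per the install guide); run this as root & yum will skip whatever is already installed:

yum install -y binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ \
    glibc glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio \
    libaio-devel make sysstat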

The next screen shows a confirmation of the options we selected in the beginning. At this point, if you are satisfied with parameters like the installation directory, database name, etc., you can click Install to begin the installation process.

Install10

The below screenshots show the installation progress. If you are patient enough, you can sit & watch the proceedings 😉 If you’re not, go get a cup of coffee 🙂 It will take some time.

Install11

Install12

Just before installation completion, you might see a warning message as seen below. It tells you that, to use a database, you must set ORACLE_SID to the database name. I will cover the database creation part in the next post. For now, open your bash shell & issue:

ORACLE_SID=orcl
export ORACLE_SID
echo $ORACLE_SID

Install13

Execute Oracle Scripts. Now it’s time to run 2 scripts as root, as shown in the below screenshots. Click OK here & execute the scripts (sketched below).
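On a default 11g install the two scripts are orainstRoot.sh & root.sh; the exact paths depend on the inventory directory & ORACLE_HOME you chose earlier (mine shown as an example, the inventory path is an assumption):

/opt/oracledb/u01/app/oraInventory/orainstRoot.sh               # assumed inventory path; use yours
/opt/oracledb/u01/app/oracle/product/11.2.0/db_home1/root.sh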

Install17

Install15

Install16

That’s it. You will see the below screen after successful completion of the installation. Take a screenshot of it so that you have the web-console information handy. If your memory is sharp, you may skip it 😉

Install14

So, you can see we have successfully installed Oracle 11g on our Linux server. Now, before you can start using Oracle, you need to do one last thing. Edit your .bash_profile file, add the below text & save it. Skip what’s already there. Make sure to enter your own Oracle installation directory for ORACLE_BASE & ORACLE_HOME. Below is where mine was installed.

PATH=$PATH:$HOME/bin
export PATH
export ORACLE_BASE=/opt/oracledb/u01/app/oracle
export ORACLE_HOME=/opt/oracledb/u01/app/oracle/product/11.2.0/db_home1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:$ORACLE_HOME/bin
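To pick up these variables in your current shell without logging out & back in, source the file (assuming the default .bash_profile location):

source ~/.bash_profile
echo $ORACLE_HOME     # quick check that the variables are set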

After doing this, issue the sqlplus / as sysdba command. It will launch the SQL> prompt, telling you that you are connected to an Oracle Database (orcl). If these things are confusing, don’t worry. The next tutorial explains all of them in detail, along with how to create a new database 🙂 See you there 😉

Simple Shell Script To Calculate Total Storage On a Linux Server


Let’s face it. Linux doesn’t provide a very simple way to tell how many disks are attached to a server. There is no simple way to know the total storage if you don’t have the palimpsest tool installed, which is unlikely if you are accessing the server through PuTTY 😦

I was asked to check the total storage available on the server & it was an overwhelming task since my server has 36 disks attached (of course coming from a SAN)!! So, I decided to write a shell script that would allow me to get all the details in one go 😉

Below is my script : –

#!/bin/bash
# Needs root: fdisk -l only reports all disks when run with privileges.
# Count the whole disks (/dev/sd*) that fdisk reports.
DISKNUM=`fdisk -l | grep -e "^Disk /dev/sd" | wc -l`
i=1
while [ $i -le $DISKNUM ]
do
# Grab the size in GB of the i-th disk from fdisk's "Disk /dev/sdX: N GB, ..." line.
D=`fdisk -l | grep -e "^Disk /dev/sd" | awk '{print $3}' | cut -f1 -d, | head -$i | tail -1`
# Build a "+size1+size2+..." expression & let python do the floating-point sum.
DISK="$DISK+$D"
SUM=`python -c "print $DISK" | cut -f1 -d.`
TB=$((SUM/1024))
let i=$i+1
done
echo "Total Disks Attached Are : $DISKNUM."
echo "Total Storage Attached is : $SUM GB or $TB TB."

Upon execution, this is the output I got.

[shashank@server ~]$ sudo sh disk.sh
Total Disks Attached Are : 36.
Total Storage Attached is : 5411 GB or 5 TB.

How To Set-up a Conga Cluster (Red Hat Cluster) on RHEL6 Using 3 Virtual Machines


Conga is a server (luci) & client (ricci) based cluster-management solution for Linux. Luci, in simple language, is a Cluster Manager & offers a GUI to create and manage clusters. This tutorial requires 3 RHEL VMs (of course you can use any similar Linux distribution). 1 VM will offer shared storage through iSCSI & will also act as Cluster Manager (luci will be installed here). 2 VMs will act as cluster nodes. We can also set up a Red Hat Cluster using only 2 servers; 1 server can act as iSCSI target, Cluster Manager & cluster node. Click here to learn how to set up iSCSI.

Below are the steps to set-up our Red Hat Cluster on RHEL6.

Lab Description : –

Cluster Manager / Shared storage – server.shashank.com 192.168.0.1

Cluster Node1 – node1.shashank.com 192.168.0.2

Cluster Node2 – node2.shashank.com 192.168.0.3

1. Install required packages. Install the luci package on VM1. Let’s call this VM server (which happens to be the hostname of my server here). Also set the luci service to start at boot time. Install the ricci packages on the 2 cluster nodes, node1 & node2 (these are the host-names in my lab; choose any name you like!). Also set the ricci service to start at boot time.

  • yum install luci* -y
  • chkconfig luci on
  • yum install ricci* -y
  • chkconfig ricci on

2. Start & Configure required services. Start the luci service on server. Once the service is started, you can point your browser to localhost:8084. Log in using your root password. You will see a slightly overwhelming Luci Console 😉 Now, assign passwords for the user ricci on both the nodes (the commands are sketched below). We are not yet ready to create the cluster here. Please read on.
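The commands, using the host-names from the lab description:

service luci start      # on server
service ricci start     # on node1 & node2
passwd ricci            # on node1 & node2, set the ricci user's password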

3. Configure Shared Storage. Create a partition on server & export it as a LUN to both the nodes. This is done using iSCSI & has been discussed here. Issue fdisk -l on both the nodes to confirm you have an additional disk. If you can’t see it, reboot the nodes.

4. Create a Quorum Disk on any node. I created it on node1. What is a quorum disk? You may ask 😉 Well, it’s a storage device that holds the cluster configuration database. This database determines which node must be active at a given time, decided by votes cast by each node. Below is the command to create a quorum disk.

mkqdisk -c /device -l quorumdisk

/device is your LUN or iSCSI disk that you shared from server. Make sure you first create a small partition on the LUN & then create the quorum disk on that device/partition. It can be a 100 MB partition; it does not need to be big. -l stands for label; it can be anything meaningful, just be sure to give it a nice name 😉 Now check node2 to see whether the quorum disk is visible there (concrete commands are sketched below). If not, reboot.
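Concretely, assuming the LUN shows up as /dev/sdc (adjust to your set-up):

fdisk /dev/sdc          # create a small ~100 MB partition, e.g. /dev/sdc1
mkqdisk -c /dev/sdc1 -l quorumdisk
mkqdisk -L              # run on both nodes to verify the label is visible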

5. Create a GFS partition on node1. This is what we will use to demonstrate clustering. For this, first create another partition on the LUN & format it with the GFS2 filesystem. Below is how it is done. You may need to install the gfs2-utils package first, if mkfs.gfs2 is not available.

mkfs.gfs2 -p lock_dlm -t shashank:GFS -j 2 /device

-p specifies the locking protocol, which is lock_dlm in our case. -t takes clustername:fsname; the part before the colon must match the name you will give your cluster in Luci (I will come to this in a short while). -j 2 creates 2 journals, one for each node that mounts the filesystem. /device is the new LUN partition, which is /dev/sdc3 in my lab. Make changes accordingly (my exact commands are below).
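So, in my lab, the exact commands would be:

yum install gfs2-utils -y      # only if mkfs.gfs2 is missing
mkfs.gfs2 -p lock_dlm -t shashank:GFS -j 2 /dev/sdc3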

Now check on node2 whether you can see the GFS partition. If not, reboot. It’s better to create the quorum disk & the GFS partition & then reboot both nodes to ensure proper configuration.

GFS partition

6. Create a mount-point on both nodes. Choose any name, but make sure it’s the same on both of them. I created /gfs on both, as below.
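On both node1 & node2:

mkdir /gfs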

7. Now go back to the Luci Console. Click Cluster & then Create. Give it the name that you chose while creating the GFS partition (step 5); it’s shashank in my case. Select “Use the Same Password for All Nodes“. In Node Name, enter the hostname of node1 (which is node1 in my case). Type in the ricci password that we created in step 2. Click Add Another Node to add node2. Select “Enable Shared Storage Support” since we are using GFS. Now click Create Cluster to start creating your cluster. It will take some time & then you will see something similar to the 2nd screenshot 😉

Potential pitfalls – Check whether the cman & rgmanager services are running on both the nodes & configured to start at boot. If not, start them & enable them at boot time. Also stop the NetworkManager service & disable it at startup. The commands are sketched below.
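On both the nodes:

service cman start
service rgmanager start
chkconfig cman on
chkconfig rgmanager on
service NetworkManager stop
chkconfig NetworkManager off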

Creating Cluster

Cluster Nodes Added

8. Configure Failover Domain. Now click Failover Domains & then click Add a New Failover Domain. Give it a name, choose Prioritized. Then enable both the nodes in Members & set priority to 1 for both.

9. Configure Quorum Disk. For this, click Configure & then QDisk. Now select Use a Quorum Disk. Then select By Device Label & enter quorumdisk. This is the label of the quorum disk, so choose yours accordingly. Under Heuristics -> Path to Program, type ping -c2 -t1 node1. Choose Interval as 2 & Score as 1. Click Add Another Heuristic & do the same for node2, i.e. ping -c2 -t1 node2. This is a kind of test that both the nodes will take to vote. Click Apply.

10. Create Resources. For this, click Resources & then click Add. From the drop-down list, select GFS. Give it some nice name 😉 Provide the Mount Point as /gfs (this is what I created on both the nodes; choose yours accordingly). Choose Device, FS Label or UUID as /dev/sdc3 (again, change accordingly since /dev/sdc3 is my set-up). Select the Force Unmount option & click Submit.

11. Create a Service Group. For this, click the Service Groups tab & then click Add. Give it some really nice name 😉 (once again!) & ensure the Automatically Start This Service option is selected. Then choose the Failover Domain created above from the drop-down list. Set the Recovery Policy as Relocate. Now click Add Resource, select the resource you created in step 10 from the drop-down list & click Submit.

12. Refresh the browser tab & you will see our service running on the Luci Console.

Service Running

13. Now it’s time to test our Cluster. As we know, we have 2 cluster nodes, each having the /gfs mount-point. In case one node fails, data in /gfs will be available from the other node (which is the very meaning of clustering here). At the moment our service is running on node1. Type the clustat command and see the output. You will find the details there. Note that /gfs is mounted here but not on node2.

Cluster Running on Node1

Now, let’s relocate this service to node2. Type

clusvcadm -r GFS -m node2

Here -r means relocate, GFS is the service name & -m specifies the node to relocate to.

Cluster Running on Node2

Let’s go to node2 & check the status. Type the clustat command & notice the output. See the available mount-points as well.

Cluster Running on Node2 After Relocation

Cluster Running on Node2 & /gfs Mounted

So, that’s it 🙂 We can see the /gfs mount-point is clustered between the 2 nodes & is available at any time, even if one of the nodes goes down.