Setup Ansible Nodes on Linux (Ubuntu 14.04)


In my last post, you learnt about Ansible & how to install it. Now, let's take one step forward and set up the nodes that Ansible will manage. As we know, Ansible is agent-less, so it doesn't need any client package installed on the nodes it manages. We only need to define the nodes in the inventory file on the Ansible server itself, located at /etc/ansible/hosts.

Back up this file & then edit it to add the contents below.

[web]
192.168.0.51
192.168.0.61

You can see that I have defined a group called web that contains the IP addresses of 2 nodes. You can also use FQDNs if you have DNS set up or entries in your /etc/hosts file.
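For example, with DNS in place the same group could just as well list hostnames instead of IPs (the names here are only placeholders):

[web]
web1.example.com
web2.example.com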

Save this file & issue the command below to test whether the Ansible server is able to ping the nodes.

shashank@shashank-server:~$ ansible -m ping web --ask-pass
SSH password:
192.168.0.61 | FAILED! => {
"failed": true,
"msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."
}
192.168.0.51 | SUCCESS => {
"changed": false,
"ping": "pong"
}

You can see it results in an error for one node, because that node's fingerprint was not in the known_hosts file on the Ansible server. To fix this, add the fingerprint manually, or simply SSH into the node once from the Ansible server; it will ask you to save the fingerprint, & the command above will then work.
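If you have several nodes, one way to add their fingerprints in bulk is ssh-keyscan (only do this on a network you trust, since it skips manual verification):

ssh-keyscan -H 192.168.0.51 192.168.0.61 >> ~/.ssh/known_hosts

With the fingerprints in place, the same ping succeeds for both nodes: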

shashank@shashank-server:~$ ansible -m ping web --ask-pass
SSH password:
192.168.0.51 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.0.61 | SUCCESS => {
"changed": false,
"ping": "pong"
}

So, you can see now that Ansible is able to ping its nodes & hence it can manage them 🙂

One thing to note is that Ansible will SSH into the nodes as the user it was run with. I ran Ansible as the "shashank" user, which has root access, so make sure the user you run Ansible as has enough privileges on the nodes. You can also skip --ask-pass entirely if you use SSH keys instead of a password. You can follow this link to learn how to set up password-less SSH.
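In case that link isn't handy, the gist of password-less SSH is just two commands, run on the Ansible server (the user & IP here are from my lab, adjust them for yours):

ssh-keygen -t rsa                  # accept the defaults; leave the passphrase empty for automation
ssh-copy-id shashank@192.168.0.51  # repeat for every node in the inventory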

shashank@shashank-server:~$ ansible -m ping web
192.168.0.61 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.0.51 | SUCCESS => {
"changed": false,
"ping": "pong"
}

Since I have password-less SSH set up on my infrastructure, you can see that I didn't need the --ask-pass option.
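Once the ping works, the same pattern extends to any ad-hoc module. Just as an illustration, checking uptime across the whole group looks like this:

ansible -m command -a "uptime" web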


Bootstrapping Chef Node To Manage Under Chef Server


So, the last post discussed setting up Chef clients. Now it's time to finish the overall setup by bootstrapping the Chef nodes from the Chef workstation.

Bootstrapping means registering the Chef clients with the Chef server so that we can create & execute cookbooks for them from the workstation. Remember that 192.168.0.61 is the IP of my Chef client/node; replace the IP with each node you want to manage & repeat this step for all of them. --node-name is the name you want to give to your node; it's NOT necessarily the actual hostname. I used --sudo because it didn't connect without it. Replace shashank with your username.

root@chef-workstation:/home/shashank/chef-repo# knife bootstrap 192.168.0.61 -x shashank --sudo --node-name node1
Doing old-style registration with the validation key at /home/shashank/chef-repo/.chef/chef-validator.pem...
Delete your validation key in order to use your user credentials instead

Connecting to 192.168.0.61
shashank@192.168.0.61's password:
192.168.0.61 knife sudo password:
Enter your password:
192.168.0.61
192.168.0.61 -----> Existing Chef installation detected
192.168.0.61 Starting the first Chef Client run...
192.168.0.61 Starting Chef Client, version 12.6.0
192.168.0.61 Creating a new client identity for node1 using the validator key.
192.168.0.61 resolving cookbooks for run list: []
192.168.0.61 Synchronizing Cookbooks:
192.168.0.61 Compiling Cookbooks...
192.168.0.61 [2016-05-21T09:28:51+05:30] WARN: Node node1 has an empty run list.
192.168.0.61 Converging 0 resources
192.168.0.61
192.168.0.61 Running handlers:
192.168.0.61 Running handlers complete
192.168.0.61 Chef Client finished, 0/0 resources updated in 01 seconds

You can see the list of all managed/bootstrapped nodes by issuing the command below.

root@chef-workstation:/home/shashank/chef-repo# knife node list
node1
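You can also inspect an individual node in more detail with knife node show, run from the same chef-repo directory:

knife node show node1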

Below is the screenshot from my setup 🙂

Chef Nodes Bootstrapped

In future posts, I will explain how we can manage Chef nodes & how to create cookbooks. Till then, bye 🙂

Setup Chef Server on Ubuntu


Hi There! I am back after a long time 😉 Was really stuck with other things.

In this post, I will explain how you can set up your Chef Server on an Ubuntu machine. Chef, as we know, is an infrastructure automation platform that helps manage, maintain & housekeep a number of servers by keeping them in a desired state. Sounds complicated? OK, let me put it in an easier way 😉

Suppose you have 25 servers in your infrastructure. 10 are Apache web-servers, 5 are database servers, 2 are monitoring servers, 2 are LDAP servers & the rest are Tomcat servers. You are given the responsibility to set them all up 😀 You will have to install packages, create users & groups & make tons of modifications like editing the /etc/hosts or /etc/resolv.conf files. Doing all of this by hand wastes a lot of time & resources. That's where Chef or similar software helps. Chef allows you to do all these tasks in a much simpler & more efficient manner. You define what packages are to be installed on which servers & Chef does it. Add users to the passwd file & Chef populates this file to all the required servers. These definitions are what Chef calls recipes. Seems fun, right? 😉

Chef has 3 components.

  • Workstation : – The server on which you define all your modifications, like the contents of the passwd file, packages to be installed etc. In other words, this is where you create Chef recipes & cookbooks.
  • Server : – Where you manage all your nodes & where all the recipes are sent. The server then adjusts the nodes according to those recipes. It also has a web UI where you can see & manage your nodes. Chef Server can only be installed on Unix/Linux machines.
  • Nodes : – The individual servers that are to be managed by Chef, like your Apache or DB servers. They can run any OS.

Now that you know the basic terminology, let's set up our Chef Server 🙂

Lab Description : –

  • OS – Ubuntu 14.04
  • RAM – 4 GB
  • IP Address range – 192.168.0.XX
  • Chef Server version – 12.4.0
  • Chef Manage version – 2.3.0

Steps to perform : – 

1. Download & install the Chef Server package. Go to https://downloads.chef.io/chef-server and download the package for your OS. In this tutorial, I have chosen Ubuntu. Install it using the command below.

root@chef-server:/home/shashank# dpkg -i chef-server-core_12.4.0-1_amd64.deb
Selecting previously unselected package chef-server-core.
(Reading database ... 166216 files and directories currently installed.)
Preparing to unpack chef-server-core_12.4.0-1_amd64.deb ...
Unpacking chef-server-core (12.4.0-1) ...
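If you ever want to confirm that the package really landed before moving on, dpkg can tell you:

dpkg -l | grep chef-server-core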

2. Configure Chef Server. The next step is to configure it, so run the command mentioned below. It will dump output similar to what you see here. Please note that this step can take around 2-3 minutes or more.

root@chef-server:/home/shashank# chef-server-ctl reconfigure
Starting Chef Client, version 12.6.0
resolving cookbooks for run list: ["private-chef::default"]
Synchronizing Cookbooks:

Deprecated features used!
Cannot specify both default and name_property together on property path of resource yum_globalconfig. Only one (name_property) will be obeyed. In Chef 13, this will become an error. Please remove one or the other from the property. at 1 location:
- /opt/opscode/embedded/cookbooks/cache/cookbooks/yum/resources/globalconfig.rb:76:in `class_from_file'

Chef Client finished, 323/451 resources updated in 03 minutes 10 seconds
Chef Server Reconfigured!

3. Create a Chef user & its organisation. Issue the commands below to create a user & an organisation for Chef. This user will be used to log in to the web UI & perform other admin tasks. The .pem keys will be used for authentication & validation; save them to any location you like.

root@chef-server:/home/shashank# chef-server-ctl user-create chef-admin Chef Admin root@chef-server 'chefadmin' --filename /home/shashank/chef-admin.pem
root@chef-server:/home/shashank# chef-server-ctl org-create shashank 'Shashank Chef Server' --association_user chef-admin --filename /home/shashank/chef-validator.pem
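One small precaution worth taking (assuming you kept the keys under /home/shashank as above) is to lock down their permissions so only root can read them:

chmod 600 /home/shashank/chef-admin.pem /home/shashank/chef-validator.pem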

4. Install Chef Manage (web UI). The default approach is to install it using Chef itself by issuing the chef-server-ctl install chef-manage command, but it threw an error on my machine saying apt-get update was unable to retrieve the package. So I downloaded the package from Chef's site & installed it with dpkg. It will ask you to accept the licence agreement.

root@chef-server:/home/shashank# dpkg -i Downloads/chef-manage_2.3.0-1_amd64.deb

To use this software, you must agree to the terms of the software license agreement.
Press any key to continue.
Type 'yes' to accept the software license agreement, or anything else to cancel.
yes
Starting Chef Client, version 12.4.1

When the installation finishes, it will prompt you to issue another command.

Chef Client finished, 323/451 resources updated in 03 minutes 10 seconds
Chef Server Reconfigured!
Thank you for installing the Chef Management Console add-on!

The next step in the process is to run:

chef-manage-ctl reconfigure

5. Configure Chef Manage. Issue the command above (chef-manage-ctl reconfigure) to configure it. It will take some time; wait for it to finish.

6. Configure Chef Server again. Run chef-server-ctl reconfigure once more so the Chef server picks up the new add-on.
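For reference, steps 5 & 6 boil down to these two commands, run one after the other:

chef-manage-ctl reconfigure
chef-server-ctl reconfigure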

If everything goes well, you will have your Chef Server ready.

7. Log in to the web console (UI). Point your browser to https://localhost:443/login (you may also use the server's IP address). Enter the credentials that you created in step 3 above. And lo!! You are done 🙂
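If the login page doesn't come up, a quick first check is whether all the Chef services are actually running:

chef-server-ctl status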

Logging into Chef Server UI

Chef Server UI

Watch out for other Chef posts on my blog! Coming soon 😉

Automating PostgreSQL Installation Using Shell Script


This post taught you how to install the PostgreSQL database on Linux from source code. That approach is fine when the number of servers is small, but what if it has to be installed on a large number of servers 😉 such as in a clustered environment? Here's how to achieve it using a shell script. 🙂

It's better to enable password-less SSH from your master server/jumpbox to all destination servers so you don't have to keep typing passwords. Then copy the installer tarball from the jumpbox to all servers using the command below.

for hst in `cat /home/shashank/hosts.txt`; do scp /home/shashank/postgresql-9.3.5.tar.gz $hst:/home/shashank; done
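hosts.txt is nothing fancy, just one server per line, something like this (placeholder IPs, use your own):

192.168.0.51
192.168.0.61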

Now that the installer has been copied to the destination servers, it's time to run the script from the jumpbox & install PostgreSQL on all of them in one go 🙂 Issue the command below to execute the automation script on all our target servers.

for hst in `cat /home/shashank/hosts.txt`; do ssh -t $hst 'bash -s' < /home/shashank/postgre_installer.sh; done

Below is the installer script.

#PostgreSQL Automation Installer Script version 1
#Author - Shashank Srivastava
#set -x
echo "Checking availability of pre-required packages"
sleep 1s
sudo rpm -q readline-devel.x86_64
if [ $? != 0 ]
then
echo "Readline package not found. Installing it...."
sudo yum install readline-devel.x86_64 -y
fi
sudo rpm -q zlib-devel.x86_64
if [ $? != 0 ]
then
echo "zlib package not found. Installing it...."
sudo yum install zlib-devel.x86_64 -y
fi
echo "Unpacking PostgreSQL tarball..."
sleep 2s
sudo tar -zxf /home/shashank/postgresql-9.3.5.tar.gz
echo ""
echo "Tarball unpacked. Creating destination directory for Postgre...."
sudo mkdir -p /opt/PostgreSQL/9.3
cd /home/shashank/postgresql-9.3.5/
echo ""
echo "Destination directory created. Building installer from source coude...."
sleep 1s
sudo ./configure --prefix=/opt/PostgreSQL/9.3
sudo make
sleep 2s
echo ""
echo "Installer built. Installing it now...."
sudo make install
sleep 1s
id -a postgres
if [ $? != 0 ]
then
echo "postgres user not found. Adding it...."
sudo useradd postgres
else
echo "postgres user already exists. Proceeding to next step...."
fi
echo ""
echo "Configuring PostgreSQL...."
sudo mkdir -p /opt/PostgreSQL/9.3/data
sudo chown postgres /opt/PostgreSQL/9.3/data
echo ""
echo "Defining data directory for Postgre & starting it...."
sudo su - postgres -c "/opt/PostgreSQL/9.3/bin/initdb /opt/PostgreSQL/9.3/data/; sleep 2s; /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data/ -l logfile start"
echo ""
echo "Checking version...."
echo "-------------------------------------------"
/opt/PostgreSQL/9.3/bin/psql --version
/opt/PostgreSQL/9.3/bin/postgres --version
echo ""
echo "Script executed successfully!"

Automate Apache Tomcat Installation on Linux Server Using Shell Script


My work involves installing & managing a lot of Linux servers, & at times I have to do the same work on many of them. One such task is installing Tomcat. In these posts I showed you how to install Apache Tomcat & how to check its version on Linux servers.
In this post, I will show you how you can automate the installation process across any number of servers using a shell script 🙂

My approach is to : –

  • Set up password-less SSH on all servers from one jumpbox machine. The jumpbox is the server from which you will execute this script.
  • Create a list of servers in a text file, one server per line.
  • Copy the Java & Tomcat installers to all servers in one go from this jumpbox.
  • And finally, execute the script.

Below are the commands & script through which I accomplished this. All of these are self-explanatory 😉

for hst in `cat /home/shashank/hosts.txt`; do scp ~shashank/jdk-8u25-linux-x64.tar.gz $hst:/home/shashank; done
for hst in `cat /home/shashank/hosts.txt`; do scp ~shashank/apache-tomcat-8.0.15.tar.gz $hst:/home/shashank; done

What does the above do? The for loop reads the contents of the text file (the server IPs) line by line & copies the tarballs to each server.

for hst in `cat /home/shashank/hosts.txt`; do ssh $hst -t 'bash -s' < ~shashank/tomcat_installer.sh; done

The trick above is 'bash -s': the local script is fed to the remote bash over stdin (via the < redirection) & executed there, so nothing has to be copied across first. After execution, the loop moves on to the next server. Below is my installer script.

Do let me know if it was helpful or not 🙂 I will come up with some more posts soon 🙂

#Apache Tomcat Installer Script
#Author : Shashank Srivastava
#set -x
echo "Logged into `hostname`. Installing here."
#Checking if installer tarballs are present or not. If they are not found, script will print error message & quit.
if [ -f /home/shashank/jdk-8u25-linux-x64.tar.gz ] && [ -f /home/shashank/apache-tomcat-8.0.15.tar.gz ]
then
echo "Unpacking Java installer tarball.......";
sleep 2s
#waiting for 2 seconds to show you what is being done.
sudo tar -xzvf /home/shashank/jdk-8u25-linux-x64.tar.gz;
echo ""
echo "Java tarball unpacked.";
echo ""
echo "Unpacking Tomcat installer tarball.......";
sleep 2s;
sudo tar -xzvf /home/shashank/apache-tomcat-8.0.15.tar.gz;
echo ""
echo "Tomcat tarball unpacked.";
echo ""
echo "Installing Java & Tomcat to /opt/app directory....";
sudo cp -rp /home/shashank/jdk1.8.0_25 /opt/app;
sudo cp -rp /home/shashank/apache-tomcat-8.0.15 /opt/app;
echo ""
echo "Exporting necessary variables......";
export JAVA_HOME=/opt/app/jdk1.8.0_25;
export PATH=$PATH:/opt/app/jdk1.8.0_25/bin;
export CATALINA_HOME=/opt/app/apache-tomcat-8.0.15;
echo "Variables exported.";
echo ""
echo $JAVA_HOME;
echo ""
echo $PATH;
echo ""
echo $CATALINA_HOME;
sleep 2s;
echo ""
echo "Checking Java & Tomcat versions.";
echo ""
echo "Java is installed at `which java` Directory";
echo ""
java -version;
echo ""
java -cp $CATALINA_HOME/lib/catalina.jar org.apache.catalina.util.ServerInfo;
echo ""
echo "Starting Tomcat server"
echo ""
cd $CATALINA_HOME/bin
./startup.sh
echo "'
echo "Removing unpacked tarballs from PWD.";
sudo rm -rf /home/shashank/jdk1.8.0_25;
sudo rm -rf /home/shashank/apache-tomcat-8.0.15;
else
echo "Installer tarballs not found in /home/shashank. Please make sure they exist there. Exiting installation process now."
exit
fi
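After the loop has been through all the servers, a quick way to confirm Tomcat is answering on each one (assuming the default port 8080 & that curl is installed) is:

for hst in `cat /home/shashank/hosts.txt`; do curl -sI http://$hst:8080 | head -1; done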

Add Multiple Users to a Group in Linux


Ever found yourself needing to add multiple users to a single group? 😉 I know you have 😉

Just issue the command below as root & it will add all the required users to the group 😉

for user in user1 user2 user3 user4 user5; do usermod -a -G group_name $user; done

If the list of users is too long, create a text file containing all the users to be added (names can be white-space or newline separated; the for loop handles both) & then feed this file to the for loop. Like this 😉

for user in `cat ~shashank/users_list.txt`; do usermod -a -G group_name $user; done
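Either way, you can verify the memberships afterwards with getent (group_name being whatever group you used):

getent group group_name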

Using Expect Command To Automate User Creation on RHEL 6 – Part II


Continuing where I left off, today I finally managed to write a script that reads user names from a text file & adds them to the server. The whole process is automated & requires absolutely no human intervention. 😉 If you are new to expect, please read this post first to get a clear understanding of the expect command.

Below is how to add users to the server using a while loop & expect 🙂

Execute this script as :

[shashank@server ~]$ sudo sh ~shashank/scripts/useradd.sh

Here is the script.

while read user; do
{
/usr/bin/expect << EOF
spawn useradd $user
puts "$user added"
expect "?shashank \r"
spawn passwd $user
expect "?Changing password for user $user. \r"
send "Password\r"
expect "?Retype new password: \r"
send "Password\r"
expect eof
puts "Password for $user set."
EOF
}
done </home/shashank/userslist.txt
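For reference, the userslist.txt that produced the output below is just one username per line:

test5
test6
test7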

Below is the output with a list of 3 test users :-

spawn useradd test5
test5 added
spawn passwd test5
Changing password for user test5.
New password:
BAD PASSWORD: it is based on a dictionary word
Retype new password:
passwd: all authentication tokens updated successfully.
Password for test5 set.
spawn useradd test6
test6 added
spawn passwd test6
Changing password for user test6.
New password:
BAD PASSWORD: it is based on a dictionary word
Retype new password:
passwd: all authentication tokens updated successfully.
Password for test6 set.
spawn useradd test7
test7 added
spawn passwd test7
Changing password for user test7.
New password:
BAD PASSWORD: it is based on a dictionary word
Retype new password:
passwd: all authentication tokens updated successfully.
Password for test7 set.