Install & Setup SonarQube on Ubuntu for Code Analysis

SonarQube is a code-quality testing solution that lets you analyse the quality of your code, detect bugs and much more to improve the overall health of your codebase.

SonarQube comes in two variants: it can be accessed online using a URL, or it can be hosted on your own server. In this tutorial, I am demonstrating how you can install & set up SonarQube on your own Ubuntu server to check your code’s quality 🙂

Let's start!

Lab Description : –

Ubuntu 14.04 64 bit server with 2 GB RAM.

MySQL version 5.6.33 with InnoDB storage engine.

SonarQube version 6.2.

My PHP project located at DocumentRoot. You can choose any location for code analysis.

Please note that SonarQube needs at least 2 GB of RAM, so please make sure you have enough of it.

Steps to be followed : –

SonarQube uses its internal H2 database by default, but we will be using MySQL here. You can choose any supported database.

1. Download SonarQube & SonarQube Scanner.

Use the links provided to download both the products.

2. Unpack them.

Unpack both of them to any location where you can find them easily. I chose my home directory. This will create 2 directories: sonarqube-6.2 & sonar-scanner-2.8.

root@shashank-dbserver:/home/shashank# unzip Downloads/

root@shashank-dbserver:/home/shashank# unzip Downloads/

It will be good if you create aliases for the above 2 directories or add them to your PATH.
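As a minimal sketch of that, assuming you unzipped both archives into your home directory as above:

```shell
# Hedged sketch: make the SonarQube & scanner binaries easy to invoke.
# Paths assume the zips were unpacked into $HOME, as in the text above.
SONAR_HOME="$HOME/sonarqube-6.2"
SCANNER_HOME="$HOME/sonar-scanner-2.8"

# Add both bin directories to PATH for this session
export PATH="$SONAR_HOME/bin/linux-x86-64:$SCANNER_HOME/bin:$PATH"

# Or define aliases instead (put these in ~/.bashrc to persist them)
alias sonar="$SONAR_HOME/bin/linux-x86-64/sonar.sh"
alias sonar-scanner="$SCANNER_HOME/bin/sonar-scanner"
```

Either way, you can then invoke the tools without typing full paths.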

3. Create MySQL Database & User.

Create a new database called sonar in MySQL (or in any other DBMS of your choice). Then create a user sonarqube & grant it all privileges on the sonar database.

mysql> create database sonar;
Query OK, 1 row affected (0,01 sec)

mysql> use sonar;
Database changed
mysql> CREATE USER 'sonarqube'@'localhost' IDENTIFIED BY 'sonarqube';
Query OK, 0 rows affected (0,02 sec)

mysql> GRANT ALL PRIVILEGES ON sonar.* to 'sonarqube'@'localhost';
Query OK, 0 rows affected (0,00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0,00 sec)

4. Edit the sonar.properties file.

Edit the sonar.properties file inside the sonarqube-6.2/conf directory to enter the database details. Make sure to put in the user-name & password you created in the last step. Below is the snippet; edit the values accordingly.

# IMPORTANT: the embedded H2 database is used by default. It is recommended for tests but not for
# production use. Supported databases are MySQL, Oracle, PostgreSQL and Microsoft SQLServer.

# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.
sonar.jdbc.username=sonarqube
sonar.jdbc.password=sonarqube

#----- Embedded Database (default)
# H2 embedded database server listening port, defaults to 9092
#sonar.embeddedDatabase.port=9092

#----- MySQL 5.6 or greater
# Only InnoDB storage engine is supported (not myISAM).
# Only the bundled driver is supported. It can not be changed.
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfiguredCharset=true

5. Create & edit the sonar-project.properties file.

Create a sonar-project.properties file inside your project's root directory & enter the values accordingly. See the snippet below. Give your project a unique project key for SonarQube to uniquely identify it.

# must be unique in a given SonarQube instance
sonar.projectKey=my:project

# this is the name and version displayed in the SonarQube UI. Was mandatory prior to SonarQube 6.1.
sonar.projectName=My project
sonar.projectVersion=1.0

# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
# Since SonarQube 4.2, this property is optional if sonar.modules is set.
# If not set, SonarQube starts looking for source code from the directory containing
# the sonar-project.properties file.
sonar.sources=.

# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8

6. Start SonarQube.

Start it by executing the sonarqube-6.2/bin/linux-x86-64/sonar.sh start command.

root@shashank-dbserver:/home/shashank/sonarqube-6.2/bin/linux-x86-64# ./sonar.sh start
Starting SonarQube...
Started SonarQube.

7. Access SonarQube via browser.

Open your browser & go to localhost:9000. Then click login at the top-right corner. The default credentials are admin/admin.

8. Start SonarQube Scanner to analyse your code.

Execute the below command from within your project directory to start SonarQube Scanner.

root@shashank-dbserver:/var/www/bills/html/CabBIlls# /home/shashank/sonar-scanner-2.8/bin/sonar-scanner

It will start scanning your project’s code. Once it's done scanning, you will see output similar to the one below. Click the link provided there to see your report.

INFO: Analysis report uploaded in 240ms
INFO: ANALYSIS SUCCESSFUL, you can browse http://localhost:9000/dashboard/index/exclaimadeasy
INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
INFO: More about the report processing at http://localhost:9000/api/ce/task?id=AVqskPfd6DjWymbXBiOQ
INFO: Task total time: 18.806 s
INFO: ------------------------------------------------------------------------
INFO: ------------------------------------------------------------------------

Video Tutorial : –

I have also posted a video on my YouTube channel to demonstrate all the steps. You can watch it below.

Caveats : –

The video shown above only demonstrates basic code analysis, even though my project is PHP based. For PHP projects (or any other non-default languages), please download the relevant plugin(s) and place them in the SonarQube_HOME/extensions/plugins directory. After that, restart SonarQube by executing the sonar.sh restart command. The PHP plugin can be downloaded from the official SonarQube plugins page.

Also, in the video above, I missed uncommenting the MySQL JDBC connection URL, but the same can be seen uncommented in the snippet I pasted in step 4 😉

I hope you liked this post. See you later 🙂


Create a Server Health Report (HTML) Using Shell Script

Shell scripts are insanely powerful & convenient. We all know it 😉 Much of the beauty of shell scripts lies in the way they can be used to automate many aspects of system administration. As a SysAdmin, you might have been asked to prepare health reports on a regular basis. Today, I wrote one such script that generates an HTML health report containing some vital system information. Let's see how it works 🙂

Lab Description : –

Ubuntu 14.04 Server. Environment : – Bash shell

Instructions : –

Download or clone my GitHub repository from the location below.

Place the file anywhere you want. I prefer keeping it under my home directory, but any location works.

Make it executable (if not already).

You may either run it manually or put it in a CRON job. I have chosen to generate the report twice a day, but it's entirely up to you 🙂
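To give an idea of what such a script looks like, here is a minimal, hypothetical sketch (the real script lives in my GitHub repository and is more complete; the filename and report path below are assumptions):

```shell
#!/bin/bash
# Minimal health-report sketch (hypothetical; see the GitHub repo for the real script).
# Collects a few vital stats and writes them out as a simple HTML page.
REPORT="${1:-/tmp/health-report.html}"

{
  echo "<html><head><title>Health Report: $(hostname)</title></head><body>"
  echo "<h1>Server Health Report - $(date)</h1>"
  echo "<h2>Uptime</h2><pre>$(uptime)</pre>"
  echo "<h2>Disk Usage</h2><pre>$(df -h 2>/dev/null)</pre>"
  echo "</body></html>"
} > "$REPORT"

echo "Report written to $REPORT"
```

A CRON entry like `0 8,20 * * * /home/shashank/healthreport.sh` would then generate it twice a day (path hypothetical).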

Video Tutorial : –

To see the script in action, watch the video below on my YouTube Channel.

Additional Notes : –

I have kept the script & report minimal, since I only wrote it today. You may customise it further to suit your needs. Sky is the limit 😉

Setup MySQL Cluster/Load-balancing Using HAProxy

I have already explained how to set up streaming replication in MySQL in my previous posts, mentioned below. I have also shown you how you can load-balance Apache web-servers using HAProxy.

How To Setup Streaming Replication In MySQL – Slave Node

How To Setup Streaming Replication In MySQL – Master Node

In this post, I will demonstrate how we can put our MySQL cluster behind an HAProxy load-balancer so that our database continues to run even if the master database node crashes. This post is about master-slave load-balancing, so data will be written to the master only but can be retrieved from any node. I will write another post on master-master replication later. Let's start now 🙂

Lab Description : –

1 Load-balancer –

An Ubuntu 14.04 server running HAProxy 1.4.24. IP Address =

2 MySQL Database nodes (from previous posts, they are already under streaming replication)-

  • Master Node – running MySQL 5.6.17
  • Slave Node – running MySQL 5.6.17

Steps to be performed : –

1. Install HAProxy

root@haproxy-server:/home/shashank# apt-get install haproxy

This will install HAProxy, an open-source load-balancer, on our Ubuntu server.

2. Install MySQL client

Now we need to install mysql-client on the Ubuntu server to connect to our databases. So, issue the below command to install it.

root@haproxy-server:/home/shashank# apt-get install mysql-client

Note that if you have already installed MySQL on this server, you may skip this step as the client will already be present. You may issue the mysql command to check.

3. Create users on MySQL servers.

Now we need to add 2 database users to connect to our MySQL database servers from the HAProxy Ubuntu server. Fail-over needs root access to the database, hence one of these users will have equivalent privileges. You may continue with root, but that would require more configuration, and it's always safer to have a user other than root. Note that the below queries have to be run on both database nodes.

E:\wamp\bin\mysql\mysql5.6.17\bin>mysql -u root -p -e "INSERT INTO mysql.user (Host,User) values ('','haproxy_test'); FLUSH PRIVILEGES;"
Enter password: *****

E:\wamp\bin\mysql\mysql5.6.17\bin>mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'haproxy_root'@'' IDENTIFIED BY 'haproxy' WITH GRANT OPTION; FLUSH PRIVILEGES;"
Enter password: *****

You can see that the haproxy_root user has root access & haproxy_test was created just to log in to the database.

4. Configure HAProxy

It's time to edit HAProxy's configuration. By default, the service is disabled, so edit the file /etc/default/haproxy & change ENABLED=0 to ENABLED=1. Now, back up the existing HAProxy configuration file /etc/haproxy/haproxy.cfg & replace its contents with the below. I have put comments wherever necessary.

global
log local0 notice
user haproxy
group haproxy

defaults
log global
retries 2
timeout connect 3000
timeout server 5000
timeout client 5000

listen mysql-cluster #name of your mysql cluster.
mode tcp
option mysql-check user haproxy_test #db user created in last step
balance roundrobin
server host_name1 check #hostname & IP:port of DB node1
server host_name2 check #hostname & IP:port of DB node2

listen #port to bind HAProxy's web UI to.
mode http
stats enable
stats uri /
stats realm Strictly\ Private
stats auth user1:PASSWORD #user:password for authentication while opening web UI
stats auth user2:PASSWORD

frontend LB #optional & can be left

One main point to remember is to bind HAProxy to the proper host & port. Since my web-application runs on a different server, I used the listen address in the cluster properties above. HAProxy doesn't have a dedicated mode for MySQL the way it does for web-servers, so I chose tcp mode above. If there are errors in your HAProxy configuration, you will see errors like the below when starting the haproxy service.

root@haproxy-server:/home/shashank# service haproxy start
* Starting haproxy haproxy [ALERT] 243/081515 (22114) : parsing [/etc/haproxy/haproxy.cfg:22] : 'bind' expects [addr1]:port1[-end1]{,[addr]:port[-end]}... as arguments.
[ALERT] 243/081515 (22114) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 243/081515 (22114) : config : proxy '' has no listen address. Please either specify a valid address on the <listen> line, or use the <bind> keyword.
[ALERT] 243/081515 (22114) : Fatal errors found in configuration.

It was partly because I had earlier used port 8080 for the web UI, but that port was already in use, so I used 8090. Also, I had bound the cluster to the wrong address; it should have been an address reachable from the application/web-server, or its public IP.

Once this is done/fixed, start the service. It will start without any error.

5. Test load-balancer.

If you performed the steps correctly, you can now see your MySQL cluster being accessed through the HAProxy server in a round-robin manner, i.e. one node after another 🙂 You have to connect via the load-balancer here & not the public IP or hostname of a database node. You may also point your browser to the HAProxy server's IP on port 8090 (or any port you specified in the configuration above) to see the web UI. The credentials will be what you mentioned in its configuration.


root@haproxy-server:/home/shashank# mysql -h -u haproxy_root -p -e "SHOW DATABASES"
Enter password:
+--------------------+
| Database           |
+--------------------+
| information_schema |
| asset              |
| mysql              |
| performance_schema |
| test               |
| testdb             |
+--------------------+

Great 🙂 You can access your cluster from the load-balancer, as you can see above 🙂 Now, it's time to see which node is being accessed. So, issue the below command on the load-balancer server 2-3 times & you will see the server-ids alternating in round-robin manner 🙂

root@shashank-server:/home/shashank# mysql -h -u haproxy_root -p -e "show variables like 'server_id'"
Enter password:
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 2     |
+---------------+-------+
root@haproxy-server:/home/shashank# mysql -h -u haproxy_root -p -e "show variables like 'server_id'"
Enter password:
root@haproxy-server:/home/shashank# mysql -h -u haproxy_root -p -e "show variables like 'server_id'"
Enter password:
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
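The alternating server_id values above are exactly what round-robin means: HAProxy simply cycles through its backend list in order. A toy shell sketch of that scheduling (node names are placeholders, not real hosts):

```shell
# Toy illustration of round-robin scheduling (not HAProxy itself):
# successive connections are handed to each backend in turn.
for conn in 1 2 3 4; do
  if [ $((conn % 2)) -eq 1 ]; then
    echo "connection $conn -> db-node-1"
  else
    echo "connection $conn -> db-node-2"
  fi
done
```

With two healthy backends, odd connections land on one node and even connections on the other, which is the pattern the server_id queries show.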

6. Test your application

Now that basic testing is done, it's time to test our setup in a live scenario. To do this, stop the mysql service on any database node & execute the same command (that you ran above) on the load-balancer. You will see the server-id of the other node every time you run it. Now, bring the service back up and stop it on the other node. Again run the same query & see the result 🙂

Now, change the application code where you have hard-coded the database connection string & replace that name/IP with the load-balancer's IP 🙂 Check your application now by trying to read from the database. You will see that you can access the data even when one of your DB nodes is down 🙂

With this, you have successfully set up a MySQL cluster & load-balancing 🙂 See you soon!

How To Setup Streaming Replication In MySQL – Slave Node

In my last post, you read about configuring the master node. Here, you will learn how to configure the slave for streaming replication. Let's start this tutorial 🙂

Create the same database as on master.

As you know, our example slave node is the one that will run our replicated database. So, create the same database here.

mysql> create database your_db_name;

Import the dump file to populate data.

Now, exit the mysql prompt & import the dump file that you copied from the master node. This will populate our database with the data from the master.

C:\Program Files\MySQL\MySQL Server 5.6\bin>mysql -uroot -p your_db_name < C:\your_db_name.sql
Enter password: *****

Edit my.ini or my.cnf file.

Here we will edit the configuration file to give this node its own identity & related settings. Enter the below details in your file; read the file carefully & make changes accordingly. datadir is optional. Save the file & restart MySQL. On Windows, you need to go to Services & restart the MySQL service. After that, you will see the data populated from the dump file.

[mysqld]
server-id = 2
datadir=C:\Program Files\MySQL\MySQL Server 5.6\data

Stop slave.

Log in to MySQL & issue the below to stop the slave.

mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)

Configure slave to start replication from master. 

Issue the below query to start the replication process. Please change the values accordingly. In the last 2 properties, use the values that you noted down while configuring the master; it's the "File" & "Position" fields from that output that you need to enter. After this, start your slave.

mysql> CHANGE MASTER TO
    -> MASTER_HOST='<master-IP>',
    -> MASTER_USER='repl_user',
    -> MASTER_LOG_FILE='mysql-bin.000003',
    -> MASTER_LOG_POS=120;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

Now you are done 🙂 Just create new tables or add records in your master database. All of that will be mirrored to your slave database 🙂 Check it for yourself!


How To Setup Streaming Replication In MySQL – Master Node

Hey there 🙂 I am back today with a post on how to set up streaming replication in MySQL. Streaming replication is a process in which a database (or databases) is mirrored between 2 or more servers. In simple language, we have a MySQL master node running the database, which is kept in sync with one or more slaves. If the master node fails or crashes, one of the slaves takes over & the database remains available. Any data that is written to the master is automatically mirrored to its slaves. Sounds interesting? 😉 Here is how to do it. I have performed this on Windows servers because of my application, but the same steps apply to Linux servers as well; only the file locations will differ. I am assuming that a database already exists on the master. Let's start this tutorial 🙂

Lab Description : –

  • Master node – Windows Server running MySQL 5.6.17. IP Address
  • Slave node – Windows Server running MySQL 5.6.17. IP Address

Please ensure port 3306 (the default), or whatever port you specify in the MySQL configuration, is open so that the master & slave can communicate. I had to do this on my Windows servers.

Steps to be performed : –

Edit my.ini or my.cnf file.

Add the below section to your existing file. Note that server-id has to be 1 for the master. Change datadir to the location of your MySQL data directory. binlog-do-db has to be set to the database that you need to replicate. After saving the file, restart MySQL.

Please note that all these properties are thoroughly explained in my.ini or my.cnf files. Read that carefully and then make changes.

[mysqld]
server-id = 1
log-bin = mysql-bin
binlog-do-db = your_db_name
datadir = C:\Program Files\MySQL\MySQL Server 5.6\data

Create replication user & grant access.

Issue the below queries at the mysql prompt on the master node. Note that the IP mentioned is that of the slave node.

mysql> create USER repl_user@;
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO repl_user@;
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;

Flush tables with read lock.

Issue the below SQL queries at your already-open mysql prompt.

mysql> use your_db_name;
Database changed

mysql> flush tables with read lock;
Query OK, 0 rows affected (0.00 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000003 | 120      |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

Note down these values, i.e. "File" & "Position". You will need them to configure the slave.

Dump the database data to a file.

Now, it's time to dump all the database data to a file. This file will be used to populate the data on the slave.

E:\wamp\bin\mysql\mysql5.6.17\bin>mysqldump -uroot -p --opt your_db_name > your_db_name.sql
Enter password: *****

Copy this file to the slave node. Use below PowerShell command for Windows 🙂

PS C:\Windows\system32> copy E:\wamp\bin\mysql\mysql5.6.17\bin\your_db_name.sql \\\c$\your_db_name.sql

Unlock the tables.

It's time to unlock the tables.

mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)

mysql> quit;

Here, we are done with the configuration of the master node. I will explain the slave configuration in my next post. Stay tuned 🙂

Set Up A Centralised Log Server On Linux (Ubuntu 14.04)

Server logs are a wealth of useful information; every SysAdmin knows it. Logs act as our only means to troubleshoot critical issues when nothing else helps. Logs are so important that they must be backed up properly & efficiently. While every Linux distribution has this facility built in, it's always good to have a centralised log server that captures the logs from all other client nodes. It serves many purposes: it acts as a central point of contact whenever we need to check the logs, with no need to log in to individual servers. It also reduces the load on the storage media of individual servers, since all the logging is recorded on one central server with huge storage 🙂 Let's learn how to set up our own centralised log server on Linux. I have shown this using Ubuntu, but the same applies to Red Hat based servers as well.

Lab Description : – 

Log Server – Ubuntu 14.04

Log Client Node – Ubuntu 14.04

Server Configuration : –

Enable the UDP/TCP ports. Edit the /etc/rsyslog.conf file. There are properties for UDP & TCP under the MODULES directive; uncomment both of them. It should look like the below after uncommenting. 514 is the port. This will enable UDP/TCP communication from clients to the server.

# provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

Define a template for the logs. The template defines the filename & location of the log-files. Just above the GLOBAL directives, add the below lines to define a template.

$template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogs
& ~

The 1st line is self-explanatory: it defines the name & location of the log-file.

The 2nd line tells the rsyslog daemon to apply this template to all log messages.

The 3rd line tells rsyslog to stop processing each message after it has been written, so it is not also logged the old way.
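To see what the template produces, here is a toy shell illustration of the expansion, with shell variables standing in for the rsyslog %HOSTNAME% and %PROGRAMNAME% properties (host/program names are just examples):

```shell
# Toy illustration (not rsyslog itself): how the RemoteLogs template
# expands per message. A message from program "sshd" on host
# "shashank-client" would land in the path printed below.
HOSTNAME_PROP="shashank-client"
PROGRAMNAME_PROP="sshd"
echo "/var/log/${HOSTNAME_PROP}/${PROGRAMNAME_PROP}.log"
```

This is exactly the directory-per-host, file-per-program layout shown in the listing further below.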

Restart rsyslog daemon.

service rsyslog restart

Client Configuration.

Ensure your client nodes can communicate with the server. Adjust the firewall to allow UDP/TCP port 514 if needed.

Edit the configuration file. We only need to define the IP address or FQDN of our log server in the /etc/rsyslog.conf file. So open the file & add the below line at the end. No explanation needed here, right? 😉 It's the IP of the log server & the UDP/TCP port.

#Defining the Central Log Server
*.* @

After this, restart rsyslog daemon.

service rsyslog restart

You will see new directories inside /var/log named after your client(s). Inside these, there will be many log files named for their respective programs, like sudo.log.

drwx------  2 syslog            syslog     4096 jul  6 09:48 shashank-server/
drwx------  2 syslog            syslog     4096 jul  6 09:47 shashank-client/
root@shashank-server:/var/log# ll shashank-server/
50mounted-tests.log           avahi-autoipd(eth0).log       gnome-keyring-daemon.log      pkexec.log                    sudo.log
accounts-daemon.log           avahi-daemon.log              jenkins.log                   polkitd(authority=local).log  su.log
acpid.log                     colord.log                    kernel.log                    polkitd.log                   udisksd.log
anacron.log                   cracklib.log                  lightdm.log                   postfix.log                   useradd.log
AptDaemon.log                 cron.log                      ModemManager.log              pulseaudio.log                whoopsie.log
AptDaemon.PackageKit.log      CRON.log                      mtp-probe.log                 rsyslogd-2207.log             xinetd.log
AptDaemon.Trans.log           crontab.log                   NetworkManager.log            rsyslogd-2307.log             
AptDaemon.Worker.log          dbus.log                      ntpdate.log                   rsyslogd.log                  
audispd.log                   dhclient.log                  os-prober.log                 rtkit-daemon.log              
auditd.log                    failsafe.log                  passwd.log                    sshd.log

Setup Ansible Nodes on Linux (Ubuntu 14.04)

In my last post, you learnt about Ansible & how to install it. Now, let's take one step forward and set up the nodes that Ansible will manage. As we know, Ansible is agent-less, hence it doesn't need any client package to be installed on the nodes it manages. So, we only need to define the nodes in the inventory file on the Ansible server itself, located at /etc/ansible/hosts.

Back up this file & edit it with the below contents (node addresses are shown as placeholders here).

[web]
<node1-IP>
<node2-IP>

You can see I have defined a group called web that contains the IP addresses of 2 nodes. You can also use FQDNs if you have DNS set up or entries in the /etc/hosts file.

Save this file & issue the below command to test whether the Ansible server is able to ping the nodes or not.

shashank@shashank-server:~$ ansible -m ping web --ask-pass
SSH password: | FAILED! => {
"failed": true,
"msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."
} | SUCCESS => {
"changed": false,
"ping": "pong"
}
You can see it results in an error for one node, because that node's fingerprint was not in the known_hosts file on the Ansible server. To fix this, add the fingerprint manually, or simply SSH into the node once from the server; it will ask to save the fingerprint & the above command will then work.

shashank@shashank-server:~$ ansible -m ping web --ask-pass
SSH password: | SUCCESS => {
"changed": false,
"ping": "pong"
} | SUCCESS => {
"changed": false,
"ping": "pong"
}

So, you can now see that Ansible is able to ping its nodes & hence it can manage them 🙂

One thing to note is that Ansible will SSH into the nodes as the user it was run as. I ran Ansible as the shashank user, which has root access, so make sure the user you run Ansible as has enough privileges. You can also drop --ask-pass entirely if you use SSH keys instead of passwords. You can follow this link to learn how to set up password-less SSH.
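As a rough sketch of that password-less setup (the key path is the usual default, the node address is a placeholder, and ssh-copy-id is shown commented out since it needs a reachable node):

```shell
# Hedged sketch: generate a key pair on the Ansible server, then copy the
# public key to each managed node so --ask-pass is no longer needed.
KEYDIR="${KEYDIR:-$HOME/.ssh}"
mkdir -p "$KEYDIR"
# Generate a key only if one does not already exist
[ -f "$KEYDIR/id_rsa" ] || ssh-keygen -t rsa -b 2048 -N "" -f "$KEYDIR/id_rsa" -q

# Copy the public key to a managed node (replace the placeholder address):
# ssh-copy-id -i "$KEYDIR/id_rsa.pub" shashank@<node-ip>
```

After the public key is on each node, `ansible -m ping web` works without prompting for a password, as shown next.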

shashank@shashank-server:~$ ansible -m ping web | SUCCESS => {
"changed": false,
"ping": "pong"
} | SUCCESS => {
"changed": false,
"ping": "pong"
}

You can see above that, since I have password-less SSH set up, I don't need to use the --ask-pass option.