Install Elasticsearch 5 on CentOS 7.x

Elasticsearch is a distributed storage and real-time search engine.
  • Distributed storage – you just need to set up and add Elasticsearch nodes; it keeps the data distributed across the cluster nodes. The distribution also makes the data durable and highly available.
  • Real-time search engine – you can query the data the moment it’s been written.
Due to these two attributes, you keep hearing and reading about Elasticsearch wherever real-time data analysis is discussed. It would not be an overstatement to say technologies like Elasticsearch set the foundation for any efficient and reliable search engine.

Step 0 – Pre-requisite

Elasticsearch is built using Java, and requires a recent Java version installed, i.e. Java 8. If you have CentOS 7.x Everything (the one with the GUI), it already has Java 8 installed:
Output of java -version:
[nahmed@localhost ~]$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-b15)
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
In case you are using CentOS 7 Minimal, you may need to install it – here’s a guide: Install Java 8 on CentOS 7.
Note: The same JVM version should be used on all Elasticsearch nodes and clients, in case of cluster setup.
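For scripted installs, the version check above can be automated. A minimal sketch that extracts the major version from java -version output (which uses the 1.x.y_zzz scheme for Java 8 and earlier); the java_major helper name is an assumption, not a standard tool:

```shell
# Parse the Java major version out of `java -version` output.
# Works for the 1.x.y_zzz version scheme (Java 8 and earlier).
java_major() {
  sed -n 's/.*version "1\.\([0-9]*\)\..*/\1/p' | head -n 1
}

# Typical usage (note: java -version prints to stderr):
#   java -version 2>&1 | java_major
```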

Step 1 – Elasticsearch Installation

Download the RPM
It’s a good practice to download packages in your /opt directory:
sudo wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.0.2.rpm
The exact output:
[nahmed@localhost opt]$ sudo wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.0.2.rpm
HTTP request sent, awaiting response... 200 OK
Length: 32866827 (31M) [application/octet-stream]
Saving to: ‘elasticsearch-5.0.2.rpm’
100%[===============================================================================>] 32,866,827   834KB/s   in 49s
2016-11-30 23:04:40 (660 KB/s) - ‘elasticsearch-5.0.2.rpm’ saved [32866827/32866827]
Import the PGP key – for download integrity verification:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Install the Elasticsearch rpm
rpm --install elasticsearch-5.0.2.rpm
The exact output:
[nahmed@localhost opt]$ sudo rpm --install elasticsearch-5.0.2.rpm
Creating elasticsearch group... OK
Creating elasticsearch user... OK
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service

Step 2 – Starting Elasticsearch

Reload the system daemon
systemctl daemon-reload
Enable the Elasticsearch service

To automatically start the service on system reboots

systemctl enable elasticsearch.service
Finally, start the Elasticsearch service
systemctl start elasticsearch.service
The exact output:
[nahmed@localhost opt]$ sudo systemctl daemon-reload
[nahmed@localhost opt]$ sudo systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[nahmed@localhost opt]$ sudo systemctl start elasticsearch.service
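Elasticsearch takes a few seconds to start accepting connections after systemctl start, so scripts that talk to it right away may fail. A hedged sketch of a retry helper; wait_for is a hypothetical name, not part of any tool:

```shell
# Retry a command up to N times, one second apart, until it succeeds.
wait_for() {
  local tries=$1; shift
  local i
  for i in $(seq 1 "$tries"); do
    "$@" && return 0   # success: stop retrying
    sleep 1
  done
  return 1             # gave up
}

# Typical usage after `sudo systemctl start elasticsearch.service`:
#   wait_for 30 curl -s http://localhost:9200
```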

Step 3 – Verifying if Elasticsearch is running

To test if Elasticsearch is up and running, send an HTTP request to localhost on port 9200 – simply hit http://localhost:9200 in any browser of your choice.
If you’re using a minimal CentOS 7.x (no GUI), execute the following command instead:
curl -X GET 'http://localhost:9200'
The response will be as follows:
"name" : "H5fcpdg",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "Nub7-l2sRE-TNQPeH8JrZg",
"version" : {
"number" : "5.0.2",
"build_hash" : "f6b4951",
"build_date" : "2016-11-24T10:07:18.101Z",
"build_snapshot" : false,
"lucene_version" : "6.2.1"
"tagline" : "You Know, for Search"
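If you want to check the node from a script rather than eyeballing the JSON, the version number can be extracted from the response. A minimal sed-based sketch; the es_version function name is an assumption:

```shell
# Pull the "number" field (the Elasticsearch version) out of the
# root-endpoint JSON read from stdin.
es_version() {
  sed -n 's/.*"number" *: *"\([^"]*\)".*/\1/p' | head -n 1
}

# Typical usage against a running node:
#   curl -s http://localhost:9200 | es_version
```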

Step 4 – Using Elasticsearch

To get started with Elasticsearch you can use the provided APIs: mainly the native Java API, and the HTTP/JSON RESTful API. The RESTful API is accessed over HTTP, e.g. using the curl command-line tool; for a simple GET request the usual browser will do the job.
Elasticsearch’s RESTful API makes the basic CRUD operations possible i.e. Create (POST), Read (GET), Update (PUT), and Delete (DELETE).
Adding data to Elasticsearch
curl -X POST 'http://localhost:9200/devopspy/helloworld/1' -d '{ "message": "Hello World!" }'
The exact output
[nahmed@localhost ]$ curl -X POST 'http://localhost:9200/devopspy/helloworld/1' -d '{ "message": "Hello World!" }'
Above we sent an HTTP POST request to the Elasticsearch server to add the data provided after the -d flag, where:
  • devopspy is the index of the data in Elasticsearch.
  • helloworld is the type.
  • 1 is the id of our entry under the above index and type.
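The index/type/id triplet maps directly onto the URL path, and the other CRUD verbs reuse the same document URL. A small sketch; es_doc_url is a hypothetical helper, not an Elasticsearch tool:

```shell
# Build the document endpoint URL from index, type, and id.
es_doc_url() {
  echo "http://localhost:9200/$1/$2/$3"
}

# Typical usage for the remaining CRUD operations (against a running node):
#   curl -X PUT    "$(es_doc_url devopspy helloworld 1)" -d '{ "message": "Hello again!" }'
#   curl -X DELETE "$(es_doc_url devopspy helloworld 1)"
```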
Retrieving the just added data:
curl -X GET 'http://localhost:9200/devopspy/helloworld/1'
The exact output:
[nahmed@localhost]$ curl -X GET 'http://localhost:9200/devopspy/helloworld/1'
{"_index":"devopspy","_type":"helloworld","_id":"1","_version":1,"found":true,"_source":{ "message": "Hello World!" }}
Response in a better format, using the pretty flag:
[nahmed@localhost]$ curl -X GET 'http://localhost:9200/devopspy/helloworld/1?pretty'
{
  "_index" : "devopspy",
  "_type" : "helloworld",
  "_id" : "1",
  "_version" : 1,
  "found" : true,
  "_source" : {
    "message" : "Hello World!"
  }
}

Install Java 8 on CentOS/RHEL 7.x

If you have a fresh installation, it is recommended to run the update first
yum update
Java usually comes installed on CentOS 7 Everything; on CentOS 7 Minimal you may need to install it yourself for various setups. On CentOS 7 Everything, you can verify it by simply checking the version:
java -version

The output:

# java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-b15)
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
Java 1.8.0_111-b15 is the latest available – ‘1.8.0’ states it is Java 8, ‘_111’ means update 111, and ‘b15’ means build 15.
For CentOS 7 minimal, the same command will give a different output:
# java -version
-bash: java: command not found
This means Java is not installed. The latest Java version currently is Java 8.

Installing Java 8 using yum

This will install the latest Java 8 update, i.e. 1.8.0_111-b15:
yum install java-1.8.0-openjdk
The exact output:
# yum install java-1.8.0-openjdk
...
Installed:
  java-1.8.0-openjdk.x86_64

Dependency Installed:
  java-1.8.0-openjdk-headless.x86_64           javapackages-tools.noarch 0:3.4.1-11.el7
  lksctp-tools.x86_64 0:1.0.13-3.el7           python-javapackages.noarch 0:3.4.1-11.el7
  python-lxml.x86_64 0:3.2.1-4.el7             ttmkfdir.x86_64 0:3.0.9-42.el7
  tzdata-java.noarch 0:2016h-1.el7             xorg-x11-font-utils.x86_64 1:7.5-20.el7
  xorg-x11-fonts-Type1.noarch 0:7.5-9.el7

Complete!

Verify if Java has been installed

[nahmed@localhost ~]# java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-b15)
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)

Installing Java 8 manually

Download the latest Java 8

Use the following command to download the Oracle Java 8 RPM (the download URL is omitted here – grab the current one from Oracle’s Java download page):
For 32 bit
wget --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" "<jdk-8u111-linux-i586.rpm download URL>"
For 64 bit
wget --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" "<jdk-8u111-linux-x64.rpm download URL>"
The exact output:
# cd /opt/
[nahmed@localhost opt]# wget --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" "<jdk-8u111-linux-x64.rpm download URL>"
HTTP request sent, awaiting response... 302 Moved Temporarily
HTTP request sent, awaiting response... 302 Moved Temporarily
HTTP request sent, awaiting response... 200 OK
Length: 166040563 (158M) [application/x-redhat-package-manager]
Saving to: ‘jdk-8u111-linux-x64.rpm’
100%[========================================================================================================================================================>] 166,040,563 14.1MB/s   in 12s
2016-11-30 06:13:12 (12.7 MB/s) - ‘jdk-8u111-linux-x64.rpm’ saved [166040563/166040563]

Installing Java 8 rpm

Use the below command to install Oracle Java 8 (jdk-8u111) on your system using the RPM file.
For 64 bit
# rpm -ivh jdk-8u111-linux-x64.rpm
For 32 bit
# rpm -ivh jdk-8u111-linux-i586.rpm
The exact output:
[nahmed@localhost opt]# rpm -ivh jdk-8u111-linux-x64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
1:jdk1.8.0_111-2000:1.8.0_111-fcs  ################################# [100%]
Unpacking JAR files...
Verifying the Java version
After finishing the installation, check the Java version using the below command:
[nahmed@localhost opt]# java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

Multiple Java versions

You may have more than one version of Java installed; to choose between the versions:
alternatives --config java
The exact output:
[nahmed@localhost]# alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
   1           /usr/lib/jvm/java-1.7.0-openjdk-.../bin/java
*+ 2           /usr/lib/jvm/java-1.8.0-openjdk-.../bin/java

Enter to keep the current selection[+], or type selection number:
Just type the number 1 (Java 7) or 2 (Java 8) and hit Enter.
In case you want to keep only a single version of Java, remove the rest, e.g. to remove Java 7:
yum remove java-1.7.0-openjdk

Install Puppet Master-Agent on CentOS 7

Puppet is a configuration management tool for Unix-like and Microsoft Windows systems – basically provisioning automation, i.e. the steps you want to perform on your freshly spawned virtual machine. Puppet uses a declarative language for specification, and these configuration declaration files are termed “Puppet manifests”. Puppet treats anything configurable as a “resource”, e.g. file, service, package, user, cron, etc. A Puppet manifest describes the resources and their required states. Puppet gets dynamic (e.g. OS-dependent) data using the Facter utility – for example, the Apache web server package is named ‘apache2’ on Ubuntu and ‘httpd’ on CentOS systems; Puppet allows variables in a manifest (.pp file) to get such info on the fly and set the right installation command.
If the term configuration management is already familiar to you, you can skip the following 2 posts and jump right to the Puppet Master-Agent installation section.

1. Pre-installation tasks

Before setting up any Puppet Agent nodes, we need to perform some pre-installation steps and have the Puppet Server ready.
Disable SELinux
sudo setenforce 0
The above command disables SELinux for the session, i.e. until the next reboot – to permanently disable it, set SELINUX=disabled in the /etc/selinux/config file.
Stop firewalld
Agents must be able to connect to the master node on port 8140.
sudo systemctl stop firewalld
Resolve hostnames
Do this now, as changing hostnames later would most probably break the installation.
Setting Puppet server hostname
Setting it to puppet will be helpful, as a Puppet Master-agent installation by default expects the master to be named puppet, hence saving ourselves some trivial troubleshooting.
# hostnamectl set-hostname puppet
# hostname -s
Setting Puppet agent hostname
# hostnamectl set-hostname puppet-agent1
# hostname -s
Adding Puppet Master in /etc/hosts
All the nodes (Puppet server and agents) must have a unique hostname, and forward and reverse DNS must both be configured correctly. The simplest approach is to add the Puppet server entry on each Puppet agent you’ll be installing. On CentOS you do this in the /etc/hosts file on each node. The format for adding a host is IP_address host_name aliases. Using a file editor of your choice, e.g. gedit, vi, vim, add the following line to your /etc/hosts file, substituting your Puppet server’s IP:
<puppet-server-IP> puppet puppet-master
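If you are scripting agent setup, the /etc/hosts edit can be made idempotent so re-running the script never duplicates the entry. A sketch; the function name and the 192.168.0.10 address are assumptions – substitute your Puppet server’s real IP:

```shell
# Append an entry to a hosts file only if the exact line is not already there.
add_host_entry() {
  local file=$1 entry=$2
  grep -qxF "$entry" "$file" || echo "$entry" >> "$file"
}

# Typical usage (as root, with your server's real address):
#   add_host_entry /etc/hosts "192.168.0.10 puppet puppet-master"
```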
Test the network
ping <puppet-server-IP> or ping puppet
The output of the above mentioned preliminary tasks/commands on the puppet agent:
[nahmed@puppet-agent1 ~]$ sudo setenforce 0
[nahmed@puppet-agent1 ~]$ sudo systemctl stop firewalld
[nahmed@puppet-agent1 ~]$ hostnamectl set-hostname puppet-agent1
[nahmed@puppet-agent1 ~]$ sudo vi /etc/hosts
[sudo] password for nahmed:
[nahmed@puppet-agent1 ~]$ cat /etc/hosts
<puppet-server-IP> puppet puppet-master
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
[nahmed@puppet-agent1 ~]$ ping puppet
PING puppet (<puppet-server-IP>) 56(84) bytes of data.
64 bytes from puppet (<puppet-server-IP>): icmp_seq=1 ttl=64 time=0.235 ms
64 bytes from puppet (<puppet-server-IP>): icmp_seq=2 ttl=64 time=0.551 ms
Syncing time on all the nodes is important
The Puppet server’s time is important because it’s the certificate authority for the puppet agents: if the Puppet server’s time is wrong, it might issue agent certificates from the distant past or future, which other nodes will treat as expired. The recommended and widely used approach to keep time synced is NTP.
Here’s a guide for installing NTP and syncing time across your nodes – Install and configure ntpd.

2. Puppet Server

At this point you must have completed the pre-installation requirements; let’s move to the real work, installing the puppet server. As mentioned earlier, it’s a good practice (and recommended) to install the puppet server before setting up or installing any puppet agent.
Add the Puppet repo
$ sudo rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
The output:
[nahmed@puppet ~]$ sudo rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
warning: /var/tmp/rpm-tmp.5HvpZd: Header V4 RSA/SHA512 Signature, key ID 4bd6ec30: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
1:puppetlabs-release-pc1-1.1.0-2.el################################# [100%]
Verify if the repo has been added
$ yum repolist | grep puppet
The output:
[nahmed@puppet ~]$  yum repolist | grep puppet
puppetlabs-pc1/x86_64      Puppet Labs PC1 Repository el 7 - x86_64
Install puppet server using yum
$ sudo  yum -y install puppetserver
The output:
[nahmed@puppet ~]$ sudo yum -y install puppetserver
...
Installed:
  puppetserver.noarch 0:2.7.0-1.el7

Dependency Installed:
java-1.8.0-openjdk-headless.x86_64 1:
libyaml.x86_64 0:0.1.4-11.el7_0
lksctp-tools.x86_64 0:1.0.13-3.el7
puppet-agent.x86_64 0:1.8.0-1.el7
ruby.x86_64 0:
ruby-irb.noarch 0:
ruby-libs.x86_64 0:
rubygem-bigdecimal.x86_64 0:1.2.0-25.el7_1
rubygem-io-console.x86_64 0:0.4.2-25.el7_1
rubygem-json.x86_64 0:1.7.7-25.el7_1
rubygem-psych.x86_64 0:2.0.0-25.el7_1
rubygem-rdoc.noarch 0:4.0.0-25.el7_1
rubygems.noarch 0:2.0.14-25.el7_1

Dependency Updated:
tzdata-java.noarch 0:2016h-1.el7

Start and enable (to start on reboots) the Puppet Server
$ systemctl start puppetserver
$ systemctl enable puppetserver
The output:
[nahmed@puppet ~]$ sudo systemctl start puppetserver
[nahmed@puppet ~]$ sudo systemctl status puppetserver
puppetserver.service - puppetserver Service
Loaded: loaded (/usr/lib/systemd/system/puppetserver.service; disabled)
Active: active (running) since Mon 2016-11-21 02:49:40 PST; 1min 53s ago
Process: 3511 ExecStart=/opt/puppetlabs/server/apps/puppetserver/bin/puppetserver start (code=exited, status=0/SUCCESS)
Main PID: 3518 (java)
CGroup: /system.slice/puppetserver.service
└─3518 /usr/bin/java -Xms1g -Xmx1g -XX:MaxPermSize=256m -Djava.sec...

Nov 21 02:47:14 puppet systemd[1]: Starting puppetserver Service...
Nov 21 02:47:14 puppet puppetserver[3511]: OpenJDK 64-Bit Server VM warning:...0
Nov 21 02:49:40 puppet systemd[1]: Started puppetserver Service.
Hint: Some lines were ellipsized, use -l to show in full.
[nahmed@puppet ~]$ sudo systemctl enable puppetserver
ln -s '/usr/lib/systemd/system/puppetserver.service' '/etc/systemd/system/multi-user.target.wants/puppetserver.service'

Memory Allocation for Puppetserver (Optional)

With the above steps your puppet server will be up and waiting for puppet agents to connect. There’s one extra step I’d like to talk about: adjusting the memory allocation for the puppet server.
The default allocated memory is 2GB of RAM, good enough for most use-cases. However, if you have some requirement to increase or decrease the memory allocation for your Puppet Server, you can do so by editing the config file.
Location of config file
  • /etc/sysconfig/puppetserver — RHEL
  • /etc/default/puppetserver — Debian
Open the config file using editor of your choice – you’ll find the following line in it.
# Modify this if you'd like to change the memory allocation, enable JMX, etc
JAVA_ARGS="-Xms2g -Xmx2g"
The 2g part is where the memory allocation is defined, i.e. 2GB. For example, for 1GB of memory the line becomes JAVA_ARGS="-Xms1g -Xmx1g"; similarly, for 512MB it becomes JAVA_ARGS="-Xms512m -Xmx512m".
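The edit can also be scripted with sed instead of opening an editor. A hedged sketch; set_puppetserver_heap is a hypothetical helper name, and the RHEL config path is the /etc/sysconfig/puppetserver file mentioned above:

```shell
# Rewrite the JAVA_ARGS line in a puppetserver config file to the given
# heap size (e.g. 512m, 1g, 2g). Sets -Xms and -Xmx to the same value.
set_puppetserver_heap() {
  local conf=$1 size=$2
  sed -i "s/^JAVA_ARGS=.*/JAVA_ARGS=\"-Xms${size} -Xmx${size}\"/" "$conf"
}

# Typical usage (run as root; restart puppetserver afterwards):
#   set_puppetserver_heap /etc/sysconfig/puppetserver 1g
```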
Restart the puppetserver service after making any changes to the config
$ systemctl restart puppetserver

3. Puppet Agent(s)

By now you should have the puppet server installed and up; you can now proceed with the puppet agent installation.
Note: Using the following steps you can set up as many puppet agents as you want, i.e. execute the steps on each puppet agent machine.
The installation is quite similar to what you did for installing the puppet server.
Add the Puppet repo
$ sudo rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
The output:
[nahmed@puppet-agent1 ~]$ sudo rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
warning: /var/tmp/rpm-tmp.AlTNZt: Header V4 RSA/SHA512 Signature, key ID 4bd6ec30: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
1:puppetlabs-release-pc1-1.1.0-2.el################################# [100%]
Verify if the repo has been added
$ yum repolist | grep puppet
The output:
[nahmed@localhost ~]$ yum repolist | grep puppet
puppetlabs-pc1/x86_64      Puppet Labs PC1 Repository el 7 - x86_64         102
Install puppet agent using yum
$ sudo  yum -y install puppet-agent
The output:
[nahmed@puppet-agent1 ~]$ sudo yum -y install puppet-agent
  Installing : puppet-agent-1.8.0-1.el7.x86_64                              1/1
  Verifying  : puppet-agent-1.8.0-1.el7.x86_64                              1/1

Installed:
  puppet-agent.x86_64 0:1.8.0-1.el7

Complete!
telnet the Puppet Master on port 8140
To verify that the puppet master is listening on port 8140 and a connection is possible from the puppet agent:
[nahmed@puppet-agent1 ~]$ telnet puppet 8140
Connected to puppet.
Escape character is '^]'.
Connection closed by foreign host.
Start the puppet agent
$ sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
The output:
[nahmed@puppet-agent1 ~]$ sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
Notice: /Service[puppet]/ensure: ensure changed 'stopped' to 'running'
service { 'puppet':
  ensure => 'running',
  enable => 'true',
}

Registering Puppet agent on the Puppet server

The very first time you start a puppet agent, it attempts to register itself with the Puppet master: it generates an SSL certificate and sends a certificate signing request (CSR) to the Puppet server, which is the certificate authority (CA). To sign the agent’s CSR we need to execute a command on the Puppet server. On the Puppet master, check for any pending requests:
sudo /opt/puppetlabs/bin/puppet cert list
The output:
[nahmed@puppet ~]$ sudo /opt/puppetlabs/bin/puppet cert list
"puppet-agent1.localdomain" (SHA256) 46:C5:97:49:70:16:61:5C:08:B0:23:C0:A3:82:E6:AD:B0:3F:94:A0:60:39:CA:AE:A4:ED:5C:5D:D0:C9:6B:61
For signing, execute the following command, replacing <NAME> (the FQDN of the puppet agent) with the name you got when you ran the above cert list command:
sudo /opt/puppetlabs/bin/puppet cert sign <NAME>
The output:
[nahmed@puppet ~]$ sudo /opt/puppetlabs/bin/puppet cert --sign "puppet-agent1.localdomain"
Signing Certificate Request for:
"puppet-agent1.localdomain" (SHA256) 46:C5:97:49:70:16:61:5C:08:B0:23:C0:A3:82:E6:AD:B0:3F:94:A0:60:39:CA:AE:A4:ED:5C:5D:D0:C9:6B:61
Notice: Signed certificate request for puppet-agent1.localdomain
Notice: Removing file Puppet::SSL::CertificateRequest puppet-agent1.localdomain at '/etc/puppetlabs/puppet/ssl/ca/requests/puppet-agent1.localdomain.pem'
In case you have multiple agents and want to sign their requests at once, execute the following command:
sudo /opt/puppetlabs/bin/puppet cert sign --all
The output:
[nahmed@puppet ~]$ sudo /opt/puppetlabs/bin/puppet cert list --all
+ "puppet.localdomain"        (SHA256) C4:9F:EF:B4:57:38:F6:C8:C5:81:C1:2A:A3:8F:9C:14:57:A9:B9:10:0D:6B:1A:70:28:9B:35:98:07:75:1D:0D (alt names: "DNS:puppet", "DNS:puppet.localdomain")
+ "puppet-agent1.localdomain" (SHA256) 83:C5:F5:8A:61:2C:70:C8:BA:C5:B8:6B:71:15:6B:69:14:7E:B7:46:D6:A9:45:FC:9B:E4:B6:C8:A4:A5:03:9E
Note: /opt/puppetlabs/ is basically the installation directory; you can verify it with the value of the INSTALL_DIR param in the /etc/sysconfig/puppetserver file.
Congrats! Your Puppet Master-agent deployment is ready. Once the Puppet master signs its certificate, the agent node gets listed and the Puppet master can communicate with it, i.e. the Puppet agent can fetch and apply the configuration catalogs set or changed for it on the Puppet server. To add any other Puppet agent node, just execute the same set of commands you did for puppet-agent1, and sign the certificate from the Puppet Master.

Writing Catalogs (Optional)

The purpose of this installation is to have a Puppet master-agent setup, so you can manage your nodes/machines (Puppet agents) from a single point (the Puppet master). In Puppet’s lexicon the configuration files are called “manifests”, and they have the *.pp extension. In a Puppet Master-agent setup, the master by default keeps manifests at /etc/puppetlabs/code/environments/production/manifests.
Let’s create a placeholder file for now:
sudo touch /etc/puppetlabs/code/environments/production/manifests/puppet-agents.pp
Note that the main manifest is empty right now, so Puppet won’t perform any configuration on the agent nodes.
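To see the master-agent cycle actually do something, the placeholder can later be replaced with a real resource declaration. A hypothetical example manifest (assuming you want ntp managed on every agent) – not part of the original setup:

```puppet
# puppet-agents.pp -- hypothetical example: keep ntp installed and running
# on every node that doesn't match a more specific node definition.
node default {
  package { 'ntp':
    ensure => installed,
  }
  service { 'ntpd':
    ensure  => running,
    enable  => true,
    require => Package['ntp'],   # install the package before managing the service
  }
}
```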
Manifest execution
The Puppet agents periodically check the manifest on the Puppet server (every 30 minutes by default). During each check, the Puppet agent sends facts about itself to the master and pulls its manifest, i.e. the list of resources (service, file, etc.) and their desired states. The agent then performs the necessary provisioning steps to bring itself in line with the manifest just pulled from the master. This cycle continues as long as the agent’s certificate is not revoked and the Puppet master is running and communicating with the agent nodes.
The typical manifest-sync interval is 30 minutes; in case you want to execute the desired changes on a particular agent node immediately, run the following command on that node:
/opt/puppetlabs/bin/puppet agent --test
The output:
[nahmed@puppet-agent1 ~]$ puppet agent -t
Info: Caching certificate for puppet-agent1.localdomain
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for puppet-agent1.localdomain
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Notice: /File[/home/nahmed/.puppetlabs/opt/puppet/cache/facts.d]/mode: mode changed '0775' to '0755'
Info: Retrieving plugin
Info: Caching catalog for puppet-agent1.localdomain
Info: Applying configuration version '1479729237'
Info: Creating state file /home/nahmed/.puppetlabs/opt/puppet/cache/state/state.yaml
Notice: Applied catalog in 0.03 seconds

Syncing Date and Timezone – Install and configure ntpd

For various cluster or distributed application setups it’s an explicit requirement to have the date and timezone (TZ) on each node synced. A difference in date or TZ can cause serious issues: for example, in a Puppet Master/Agent setup it’s a must to have the date and TZ synced across all the nodes – if not, the Puppet master server, being the certificate authority, may issue agent certificates from the distant past or future, which other nodes will treat as expired.
For syncing date and timezone across all the nodes, the tool at hand is ntpd.
The Network Time Protocol daemon (ntpd) is an operating system program that maintains the system time in synchronization with time servers using the Network Time Protocol (NTP).
First, let’s get the current date and timezone on the system:
[vagrant@localhost ~]$ timedatectl status
Local time: Mon 2016-11-07 17:02:22 MSK
Universal time: Mon 2016-11-07 14:02:22 UTC
RTC time: Mon 2016-11-07 14:02:22
Time zone: Europe/Moscow (MSK, +0300)
NTP enabled: n/a
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
Now change the timezone to UTC – you must select the TZ of your choice:
$ sudo timedatectl set-timezone UTC
Install ntpd
A simple yum install command:
$ sudo yum install ntp ntpdate ntp-doc -y
After installing ntpd, it’s a good practice to run the sync manually once (pool.ntp.org here is the generic NTP pool – any reachable time server will do):
$ sudo ntpdate -u pool.ntp.org
Start ntpd, and enable it on system reboots:
$ sudo systemctl start ntpd
$ sudo systemctl enable ntpd
Verify that NTP enabled and NTP synchronized are now set to ‘yes’:
[vagrant@localhost etc]$ timedatectl status
Local time: Mon 2016-11-07 14:17:11 UTC
Universal time: Mon 2016-11-07 14:17:11 UTC
RTC time: Mon 2016-11-07 14:17:09
Time zone: UTC (UTC, +0000)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a

Specify pool zones of choice  (Optional)
You can specify the time servers, i.e. pool zones, for ntpd. Set pool zones geographically closer to the datacenter your machines are in; you can find the available pool zones at the NTP Pool Project. For example, if your datacenter is in the US, you would pick from the list of available US pool zones.
Open the ntpd conf using a file editor of your choice, e.g. vi or gedit:
$ sudo vi /etc/ntp.conf
Comment out the currently set pool zones (lines like server 0.centos.pool.ntp.org iburst) and add the new ones, e.g. for the US:

server 0.us.pool.ntp.org
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
server 3.us.pool.ntp.org

Save and exit.
Re-start ntpd to start syncing using the new time servers:
$ sudo systemctl restart ntpd

Installing open-source standalone Puppet on CentOS 7

Puppet is an open-source configuration management tool – for infrastructure orchestration, automated provisioning, configuration automation, and a lot more. The simplest use case is automated provisioning, i.e. the tasks we need to perform once our machine/VM comes up for the first time (or even after that), like installing a web server, DB server, etc. Instead of performing these tasks manually, we can use any of the available configuration management tools (like Puppet) to automate the boring repetitive work, which also makes configuration consistent across all the servers.
Just to give you an idea (going into detail is out of scope – I’ll cover it later) – Puppet can be setup in 2 different modes, as per requirement:
  • Standalone setup

    where each machine/node has the puppet software installed and running. Each node also has its own copy of the puppet configuration (puppet manifests), which you apply using puppet apply on the node.

  • Agent-Master setup

    For this the distributed puppet packages are used: the nodes you need to manage run the puppet-agent software as a background service, and the management node (the puppet server/master node) has the puppet server software installed. The Puppet master pushes configurations to the managed nodes, i.e. Puppet agents, and the Puppet agents periodically send facts back to the Puppet master.

Puppet is written in the Ruby programming language, and it is available for Linux, Mac, BSD, Solaris and Windows-based computer systems.
Puppet (previously Puppet Labs), the company behind the development and distribution of Puppet software, ships Puppet as an open-source software released under Apache License, and separately as an enterprise release i.e. Puppet Enterprise. For this tutorial we’ll be installing the standalone open-source Puppet.


Check if Puppet is already installed
[vagrant@localhost ~]$ which puppet
/usr/bin/which: no puppet in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/vagrant/.local/bin:/home/vagrant/bin)
Installing standalone Puppet is as simple as running yum install – for the newbies, yum is basically the package manager for CentOS, i.e. the command/utility to install or remove packages. yum looks for the requested packages in the available package repositories, so the first thing we need to do for installing Puppet is add Puppet’s repo.
(If you have CentOS 6, change the el-7 to el-6 – use sudo if you’re not root.)

Adding the Puppet’s repo
# rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
The exact output:
$ sudo rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
warning: /var/tmp/rpm-tmp.3dEsUA: Header V4 RSA/SHA512 Signature, key ID 4bd6ec30: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
1:puppetlabs-release-22.0-2        ################################# [100%]
You can verify if the Puppet repo has been added successfully using yum repolist:
# yum repolist | grep puppet
The exact output:
$ yum repolist | grep puppet
puppetlabs-deps/x86_64         Puppet Labs Dependencies El 7 - x86_64        17
puppetlabs-products/x86_64     Puppet Labs Products El 7 - x86_64           225

Installing Puppet
Now execute the yum install command and you’ll be done. The yes piped into the command answers the set of questions you’d otherwise be asked during installation:
# yes | yum -y install puppet
The exact output:
[vagrant@localhost ~]$ yes | sudo yum -y install puppet
Loaded plugins: fastestmirror [Errno 14] curl#7 - "Failed connect to; Connection refused"
Trying other mirror.
base                                                     | 3.6 kB     00:00
extras                                                   | 3.4 kB     00:00
puppetlabs-deps                                          | 2.5 kB     00:00
puppetlabs-products                                      | 2.5 kB     00:00
updates                                                  | 3.4 kB     00:00
(1/2): puppetlabs-deps/x86_64/primary_db                   | 8.4 kB   00:01
(2/2): puppetlabs-products/x86_64/primary_db               |  69 kB   00:02
Loading mirror speeds from cached hostfile
* base:
* extras:
* updates:
Resolving Dependencies
--> Running transaction check
--> Finished Dependency Resolution

Dependencies Resolved

Installed:
  puppet.noarch 0:3.8.7-1.el7

Dependency Installed:
  augeas-libs.x86_64 0:1.4.0-2.el7             facter.x86_64 1:2.4.6-1.el7
  hiera.noarch 0:1.3.4-1.el7                   libselinux-ruby.x86_64 0:2.2.2-6.el7
  libyaml.x86_64 0:0.1.4-11.el7_0              pciutils.x86_64 0:3.2.1-4.el7
  ruby.x86_64 0:                               ruby-augeas.x86_64 0:0.4.1-3.el7
  ruby-irb.noarch 0:                           ruby-libs.x86_64 0:
  ruby-shadow.x86_64 1:2.2.0-2.el7             rubygem-bigdecimal.x86_64 0:1.2.0-25.el7_1
  rubygem-io-console.x86_64 0:0.4.2-25.el7_1   rubygem-json.x86_64 0:1.7.7-25.el7_1
  rubygem-psych.x86_64 0:2.0.0-25.el7_1        rubygem-rdoc.noarch 0:4.0.0-25.el7_1
  rubygems.noarch 0:2.0.14-25.el7_1

Complete!

Verify the puppet installation and version:
[vagrant@localhost ~]$ which puppet
/usr/bin/puppet
[vagrant@localhost ~]$ puppet --version
3.8.7
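With the standalone setup verified, manifests are applied locally with puppet apply. A minimal hypothetical smoke test – save the following as test.pp and run sudo puppet apply test.pp:

```puppet
# test.pp -- prints a notice when the manifest is applied
notify { 'Standalone Puppet is working': }
```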

What is DevOps and Configuration Management


First of all I want to address the biggest confusion here: DevOps is not a tool, a technology, or some product one can use to make and do things better. DevOps is an idea, a management and operations approach, emphasizing cohesiveness between development and operations teams. In the simplest words, it’s about gluing together development and IT operations, hence the name DevOps:
  • Dev – comes from development (developers/software engineers), people who make the system/software and update it during its lifetime.
  • Ops – from IT operations (sysadmins), who take care of the system once it’s developed, i.e. in production.
As per Wikipedia:
A software development method that stresses communication, collaboration and integration between software developers and information technology (IT) operations professionals.
The motive behind DevOps is keeping the communication gap between the developers and the sysadmins to a minimum – to improve, standardize, and automate deployments and infrastructure orchestration, for consistency between development and production environments, better QA, efficient operations, accelerated time to market, and more.
Implementing DevOps in practice relies on various tools and technologies:
  • Automated build tools – Jenkins, Travis CI, etc.
  • Provisioning and configuration management – Puppet, Chef, Ansible.
  • Orchestration – ZooKeeper, Apache Mesos.
  • Monitoring & alerts – Amazon CloudWatch, the ELK stack, Graphite, etc.
  • VMs and containers for development/production environment consistency – Vagrant, Docker.

Configurations Management

As said, DevOps is just an operational approach, so let's move on to clarifying the second buzzword, 'configuration management' – in the simplest words, configuration management tools are what make it possible to implement or practice DevOps.
Loosely speaking, configuration management is about installing and updating system packages, setting network configurations – in short, making a machine/server ready for deployment once it comes live, or at any point later.
Before DevOps and the availability of mature configuration management tools, sysadmins had to perform this provisioning manually on each machine/server – an operational inefficiency that was laborious and carried a very high chance of introducing configuration inconsistency across servers. The most common example is configuration inconsistency between development and production environments, which has serious consequences.
“Configuration management is the process of standardizing resource configurations and enforcing their state across IT infrastructure in an automated yet agile manner.” [Puppet]

“The purpose of Software Configuration Management is to establish and maintain the integrity of the products of the software project throughout the project’s software life cycle. Software Configuration Management involves identifying configuration items for the software project, controlling these configuration items and changes to them, and recording and reporting status and change activity for these configuration items” [SEI 2000a].

“Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product’s performance, functional, and physical attributes with its requirements, design, and operational information throughout its life.” – Wikipedia 
In the present, technical sense, configuration management (CM) tools enable Ops (i.e. sysadmins) to define their infrastructure as code: all the available CM tools (Puppet, Ansible, Chef, etc.) let you declare the required system state or provisioning (package installation, updates, configuration settings, etc.) in the form of code or a declaration. The type and form (syntax) of this declaration is what varies between these CM tools; otherwise they are one and the same thing.
With these CM tools, all that is required from you is the configuration or provisioning declaration file. You can specify the packages to install and configure, services to stop or start, etc. Declaring the system state in a code file also brings other benefits: by making the file part of the project/repository you get a history of the system configuration through file versioning, the configuration is available to both Dev and Ops, and the provisioning is consistent across servers.
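To make the "declare the desired state, let the tool enforce it" idea concrete, here is a toy Python sketch – not any real CM tool's API, just an illustration of computing the actions needed from a desired-state declaration:

```python
# Toy illustration of declarative configuration management:
# you declare the desired package states, and the "tool" works out
# which changes are actually needed (idempotent by construction).

def plan(desired, current):
    """Return the (package, state) actions needed to move `current` to `desired`."""
    actions = []
    for pkg, state in desired.items():
        if current.get(pkg) != state:
            actions.append((pkg, state))
    return actions

desired = {"httpd": "installed", "telnet": "absent"}
current = {"httpd": "absent", "telnet": "absent"}

print(plan(desired, current))  # [('httpd', 'installed')]
```

Running the same plan twice changes nothing the second time – that idempotence is exactly what real CM tools give you across a whole fleet of servers.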

Puppet Example

To dig further, I am sharing a Puppet example here (to feed your curiosity – Getting Started with Puppet). The declared configurations are called Puppet manifests, and the convention is to create a /manifests directory at the project root and place all the manifests inside it, e.g. your_project/manifests/default.pp – default.pp is the manifest file; all Puppet manifests have the .pp extension. System resources and their state can be described either using Puppet's declarative language or a Ruby DSL (domain-specific language).
Example – a simple use-case: install the Apache web server on CentOS and Ubuntu machines. The package is named apache2 on Ubuntu, while on CentOS it's available as httpd.
  • Get the type of operating system,
  • Install apache and start the service
case $operatingsystem {
  centos: { $apache = 'httpd' }
  ubuntu: { $apache = 'apache2' }
}

package { 'apache':
  name   => $apache,
  ensure => present,
}

service { 'apache':
  name    => $apache,
  ensure  => running,
  enable  => true,
  require => Package['apache'],
}
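For illustration only, the distro-to-package-name selection that the manifest's case statement performs can be mirrored in a few lines of plain Python (a hypothetical helper, not part of Puppet):

```python
# Mirror of the Puppet case statement: pick the Apache package name
# based on the operating system (illustrative only).

def apache_package(operatingsystem):
    mapping = {
        "centos": "httpd", "redhat": "httpd",
        "ubuntu": "apache2", "debian": "apache2",
    }
    return mapping[operatingsystem.lower()]

print(apache_package("CentOS"))  # httpd
print(apache_package("Ubuntu"))  # apache2
```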

what’s the difference between pyenv, virtualenv, virtualenvwrapper

My first post was about Python 'virtualenv' – it started with what a virtual environment is, why we need it, and a minimal example. The purpose was to clarify for beginners what they get to see in almost every Python example. After that I also wrote about pyenv and virtualenvwrapper, and felt I may have added to the confusion – all the starters might be left with the question: what's the difference between pyenv, virtualenv, and virtualenvwrapper? Hence today's post title.
As you all know already, a virtual environment is "a separate Python interpreter with its own set of installed packages."


1. virtualenv

is basically a separate Python interpreter with its own set of installed packages. It is the Python interpreter along with the installed packages (other than the standard libraries) that makes the environment – so we can have multiple Python environments on a single machine; the environments other than the default are termed virtual environments, which we need to create and activate before we can use them (as demonstrated here). A note from the official docs:
” A virtual environment (also called a venv) is a Python environment such that the Python interpreter, libraries and scripts installed into it are isolated from those installed in other virtual environments, and (by default) any libraries installed in a “system” Python, i.e. one which is installed as part of your operating system.”
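A quick way to see this isolation from within Python itself: in a virtual environment, `sys.prefix` points at the environment while `sys.base_prefix` still points at the base installation. A small generic check:

```python
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points at the environment directory,
    # while sys.base_prefix (Python 3.3+) still points at the base
    # system installation; outside a venv the two are equal.
    base = getattr(sys, "base_prefix", sys.prefix)
    return sys.prefix != base

print(in_virtualenv())
```

Run it with and without an activated environment and you'll see the answer flip.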


2. pyvenv

is the same as virtualenv; it comes with the Python standard distribution since version 3.4, and it uses the venv module underneath.


3. virtualenvwrapper

is just a wrapper (an extension) around virtualenv, intended to provide better, cross-platform management of venvs. For more, read – "Better management of Python virtual environments with virtualenvwrapper".

4.  pyenv

(previously known as pythonbrew) – to put it simply, it's a Python version management tool. pyenv lets you have multiple Python installations, i.e. multiple Python versions, from which you can set the global Python version (the default one to use) and also a local, project/application-specific Python version. Apart from this, you can also create virtualenvs – it has a separate sub-command for it.
Once installed, pyenv is a bash command that does not need Python for execution – and it's a user-level command, so no need to use sudo.
To sum up, pyenv is a superset of virtualenv. There's a detailed post about pyenv setup and usage – 'pyenv – managing multiple Python versions'.
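When juggling multiple interpreters (with pyenv or otherwise), it's handy to confirm from within Python which interpreter actually resolved – a small generic check, nothing pyenv-specific:

```python
import sys

# Which interpreter did "python" resolve to, and what version is it?
# Useful sanity check when several Python versions are installed.
print(sys.executable)
print("{}.{}.{}".format(*sys.version_info[:3]))
```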

Vagrantfile explained

Post’s pre-requisites:

You must understand each and every word in ‘vagrant up‘, otherwise the following text won’t make much sense to you.

A minimal Vagrantfile

Let’s start with a minimal Vagrantfile, which you’ll get on executing vagrant init hashicorp/precise64:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation.

  # Every Vagrant development environment requires a box. You can search for
  # boxes online. = "hashicorp/precise64"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # "forwarded_port", guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # "private_network", ip: ""

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push
  # strategies such as FTP and Heroku are also available. See the
  # documentation for more information.
  # config.push.define "atlas" do |push|
  # end

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
end
The above file has so many scary lines, but in fact it contains only two configuration lines (the rest are comments), i.e. removing the commented part leaves:

Vagrant.configure("2") do |config| = "hashicorp/precise64"
end
Vagrant API version
Line 1 is the Vagrant API version specification – Vagrant requires explicit mention of the API version in the configuration file to stay backward compatible, so in every Vagrantfile we need to specify which version to use. The current one is version 2, which works with Vagrant 1.1 and up.
Lines 1 and 3 enclose all the configuration in the Vagrantfile, i.e. the first line specifies the version and starts a block, and 'end' closes the block. = "hashicorp/precise64"

This specifies what Vagrant box to use. precise64 by hashicorp is a publicly available basic Ubuntu 12.04 (32 and 64-bit) box, good for minimal use cases (you see it in almost every Vagrant post).
The config namespace is mainly about config.vm, i.e. the required configuration parameters for the VM are of the form config.vm.*
Config namespace: config.vm
The settings within config.vm modify the configuration of the machine that Vagrant manages.

Looking further – into the commented configuration parameters – The above two lines are enough to get started with your VM, But in case you’re curious, or simply want to check the effect of other configuration options – below is the explanation of all the commented configuration parameters in the file: – You can set anything related to VM’s network configuration using the ‘’ variable. Configures networks on the machine. Please see the networking page for more information. "forwarded_port", guest: 80, host: 8080
Creates port forwarding – guest/VM’s port 80 will be accessible on host’s port 8080 i.e. if you have webserver running Apache, or NginX (using port 80), it’ll be accessible from host using localhost:8080. "private_network", ip: ""
Use this setting to create a private network for VM i.e. will only be accessible via host, specifically using the (above) set IP. "public_network"
The other type of network we can set for the VM is public network, quite like bridged network – will make the your VM a separate node on your network.
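Once the VM is up with the forwarded-port line uncommented, you can sanity-check the mapping from the host side – a small Python sketch (port 8080 as in the example above; it simply returns False if nothing is listening yet):

```python
import socket

def port_open(host, port, timeout=1.0):
    # Try a TCP connection to the host-side port; if the guest's web
    # server is up behind the forwarded port, the connect succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost", 8080))
```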

config.vm.synced_folder "../data", "/vagrant_data"

Shares a folder between the host and VM: the first argument '../data' is the path to the host folder, and the second argument '/vagrant_data' (the name is an example – you can set whatever you want) is the mount point on the guest/VM. This can be super helpful for sharing files between host and guest; you can even work on files while the VM is powered off, i.e. simply edit the host copy of a file and it'll get synced to the guest once powered on.

Provider specific configuration

Provider is the virtualization software Vagrant uses to create a VM. As mentioned in the previous post, Vagrant itself is just a wrapper on top of providers, i.e. VirtualBox, libvirt, VMware, etc. The minimal file has the provider configuration commented out even though a provider is required – the reason is that Vagrant takes VirtualBox as the default provider, in case you don't specify otherwise.

# config.vm.provider "virtualbox" do |vb|
#   vb.gui = true
#   vb.memory = "1024"
# end
config.vm.provider “virtualbox” do |vb| – instructs Vagrant to use VirtualBox as the provider (virtualization software). All the provider-specific settings are to be specified after this line and before the block’s end.
vb.gui = true – enables the VirtualBox GUI (graphical user interface) for this machine. Not setting it, or setting it to false, runs the machine in headless mode, i.e. there’ll be no UI for the virtual machine visible on the host machine.
vb.memory = “1024” – explicitly allocates 1 GB of RAM for the VM.
Apart from this, we can specify these customizations in VBoxManage format (Vagrant itself uses the VBoxManage utility for setting VM-specific parameters before booting it up):
config.vm.provider :virtualbox do |v|
  v.customize ["modifyvm", :id, "--memory", 2048]
  v.customize ["modifyvm", :id, "--cpus", 4]
end
Where modifyvm is the VBoxManage command name, which takes the VM id (the UUID or name of the VM) as a parameter, followed by the options ‘--memory’ and ‘--cpus’ and the values you want to set – the equivalent VBoxManage command would be:
$ VBoxManage modifyvm 'id/name' --memory 2048

Provisioner-specific configuration

Vagrant requires a provisioner to finally provision the machine; this happens after you run ‘vagrant up‘. The provisioner can be any of the openly available configuration management tools, like Chef or Puppet, or simply a shell script. Vagrant uses provisioners to automatically install software, alter configurations, and more on the machine as part of bringing it up.

# config.vm.provision "shell", inline: <<-SHELL
#   apt-get update
#   apt-get install -y apache2
# SHELL

config.vm.provision “shell” – specifies the simple shell provisioner in inline mode, i.e. the commands to run are given right here. Vagrant has been instructed to run an update, followed by the Apache 2 installation. You can also use a separate shell script and specify the path to it in this block:

config.vm.provision "shell" do |sh|
  sh.path = "provision/"
end



And the provision/ script itself:

#!/usr/bin/env bash

echo "Provisioning First VM ..."

echo "Updating ...."
apt-get update

echo "Installing Apache2 ..."
apt-get install -y apache2

In case of any confusion, let me know in the comment section.

Vagrant: VirtualBox and Guest Additions version

One last thing about Vagrant (setup and installation) I want to share with you is syncing the VirtualBox and Guest Additions versions. I hope you’re here after going through the previous Vagrant posts ‘Installing Vagrant on CentOS 7‘ and ‘Vagrant 101‘. On setting up the first VM (i.e. precise64), when you executed ‘vagrant up’ for the first time, you may have ended up getting the following warning message:
$ vagrant up
default: Checking for guest additions in VM...
default: The guest additions on this VM do not match the installed version of
default: VirtualBox! In most cases this is fine, but in rare cases it can
default: prevent things such as shared folders from working properly. If you see
default: shared folder errors, please make sure the guest additions within the
default: virtual machine match the version of VirtualBox you have installed on
default: your host and reload your VM.
default: Guest Additions Version: 4.2.0
default: VirtualBox Version: 5.1
This post is about resolving this difference of versions. No need to worry, it’s just a matter of executing one more command, i.e. installing the vagrant-vbguest plug-in. Shut down the machine with ‘vagrant halt‘ and then execute the following command in the project directory, i.e. where the Vagrantfile is located:
$ vagrant plugin install vagrant-vbguest
We can verify the current version of Guest Additions, using the ‘vbguest’ command:
$ vagrant vbguest
[default] GuestAdditions 5.1.6 running --- OK.

Verification: on next ‘vagrant up‘ after the ‘Machine booted and ready!‘ it will start downloading the missing packages:
[nahmed@localhost ~]$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'hashicorp/precise64' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address:
    default: SSH username: vagrant
    default: SSH auth method: private key
==> default: Machine booted and ready!
[default] GuestAdditions 5.1.6 running --- OK.
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => /home/nahmed
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
And that’s it. It hasn’t just fixed the version conflict for this one VM – on any future ‘vagrant up‘ it’ll check & install the correct Guest Additions right after booting.

Better management of Python virtual environments with virtualenvwrapper

After writing about ‘what is virtualenv‘ and pyenv, I have been feeling a compulsion to write about the one remaining tool in the family, i.e. virtualenvwrapper.
As the name suggests, virtualenvwrapper is just a wrapper (an extension) around virtualenv. The sole motive behind writing it was to cover the features virtualenv lacks – the biggest issue being managing the virtualenvs themselves. As stated in the official docs, the main value-adding features of virtualenvwrapper are:
  • Organizes all of the virtual environments at a single location.
  • Provides better management of virtualenvs – intuitive commands for creating, deleting, copying virtualenvs.
  • A single command to switch between environments i.e. workon (demonstrated later in the post)
  • User-configurable hooks for all operations (see Per-User Customization).
  • Plugin system for creating more shareable extensions (see Extending Virtualenvwrapper).

Installation and setup: like virtualenv, virtualenvwrapper is a Python package and can be installed via pip:

$ sudo pip install virtualenvwrapper
It’ll also install the dependencies:
For Windows, use virtualenvwrapper-win instead:
pip install virtualenvwrapper-win
Some one-time-only initialization: you need to add the path to the main file (wrapper script) to your shell startup file. You can get the path to it:

$ which
Next, add the source path to your shell startup file, i.e. ~/.bashrc – you can use any file editor or simply execute the following in your shell:
$ echo 'export WORKON_HOME=$HOME/.virtualenvs' >> ~/.bashrc
$ echo 'export PROJECT_HOME=$HOME/projects' >> ~/.bashrc
$ echo 'source /usr/bin/' >> ~/.bashrc
Restart your shell or simply reload the .bashrc file:
$ source ~/.bashrc
WORKON_HOME is the directory where virtualenvwrapper will keep all the venvs.
The above initialization has activated the script and made the virtualenvwrapper commands available.

Using virtualenvwrapper: let’s verify the setup and get our hands on the new venv tool. Start with creating our first venv:

$ mkvirtualenv test_venv

(test_venv) $
The mkvirtualenv command creates and activates a new venv, i.e. test_venv. For exiting the venv, use deactivate:
(test_venv) $ deactivate
To choose among the venvs we have the workon <venv_name> command – if we execute it without specifying any venv, it’ll list all the available venvs:
$ workon
To start using/activate a venv, simply:
$ workon test_venv
(test_venv) $
For deleting a venv:
$ rmvirtualenv test_venv
Removing test_venv...

If I have missed something, here is the official documentation for the setup and usage – documentation.

For a full fledged sand-boxed development environment, check out Vagrant.