I don’t know about other programming languages, but if you are using Python or Django, you must have heard about Celery quite a few times, and if not, you’d better look into it. As stated on the Celery project website:
Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well.
In the case of a web service (the most common use case), an asynchronous task queue is a utility for pushing time-consuming tasks into the background while promptly sending back the response to a user request. These delegated tasks can be anything from sending a few notifications and dispatching emails to updating system logs or an internal ERP. Running such tasks inline with request processing can delay the response to the user considerably.
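For a concrete sense of the pattern, here is a minimal sketch of a Celery task; the broker URL, task name, and task body are illustrative assumptions, not taken from the original post:

from celery import Celery

# The broker URL is an assumption; any supported broker (RabbitMQ, Redis, ...) works.
app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def send_welcome_email(user_email):
    # Stand-in for a time-consuming job, e.g. dispatching an email.
    print('Sending welcome email to %s' % user_email)

The request handler only enqueues the job with send_welcome_email.delay('user@example.com') and returns immediately; a Celery worker picks the task up from the broker and runs it in the background.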
Install R and RStudio on CentOS 7
R is a free programming environment, mainly (but not only) used for statistical analysis. R is maintained by the R Foundation. RStudio is a free integrated development environment (IDE) for the R programming language.
This guide requires:
- A CentOS 7 machine.
- A sudo user, or the root user.
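A minimal sketch of the installation, assuming R is pulled from the EPEL repository (RStudio itself is installed separately, from an RPM provided on the RStudio website):

$ sudo yum install epel-release
$ sudo yum install R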
Python decorators for dummies
If you’re going through interviews for a Python developer position, preparing for one, or are just a curious developer, you’d better have a clear understanding of decorators in the Python programming language.
I won’t be delving into what design patterns are, or why you should make use of them whenever possible. This post is merely about understanding and writing decorators in Python. You can find a plethora of posts about Python decorators; my motivation is that everyone has their own way of explaining things, especially a technical concept.
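As a taste of what the post covers, here is a minimal sketch of a decorator that logs calls to the function it wraps; the names are purely illustrative:

import functools

def log_calls(func):
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        print('Calling %s with %r, %r' % (func.__name__, args, kwargs))
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(a, b):
    return a + b

add(2, 3)  # prints the call details, then returns 5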
Deploy Django with NginX, Gunicorn, PostgreSQL, virtualenv
Step 0 – Update and upgrade
We are using Ubuntu 16.04 LTS for this tutorial.

apt-get update updates the list of available packages and their versions, but it does not install or upgrade any packages. apt-get upgrade actually installs newer versions of the packages you have. After updating the lists, the package manager knows about the available updates for the software you have installed.
$ sudo apt-get update
$ sudo apt-get upgrade
Continue reading “Deploy Django with NginX, Gunicorn, PostgreSQL, virtualenv”
Deploy Django using Docker Compose
With the adoption of microservice architecture, i.e. various components running as independent services, the Docker community came up with Docker Compose (previously Fig). A single YAML configuration file (docker-compose.yml) specifies all the components, which Docker Compose builds and spawns as independent services, i.e. Docker containers.
Use case: you have a web project with the web application developed using Django, a Postgres database, Redis as the caching engine, and Nginx serving it over the web. Using Docker Compose you can deploy this stack with a single command:
docker-compose build --no-cache && docker-compose up
The complete project is available here.
This blog post is about using Docker Compose to deploy your Django application with Postgres, Redis, and Nginx. It is presumed that you already have your Django project and want to deploy the full stack.
High-level steps
- Install and start Docker Compose
- Set up the project – presume you already have a Django project
- Create Dockerfile(s) and docker-compose.yml
- Build the service images – docker-compose build
- Create the database and run database migrations – docker-compose run web python manage.py migrate
- Start the service containers – docker-compose up
- View in the browser at http://127.0.0.1
OpenStack all-in-one setup on CentOS
OpenStack is an open-source cloud operating system for setting up IaaS (infrastructure as a service). OpenStack provides a flexible solution for both public and private clouds, covering two important requirements: a cloud must be simple to implement and massively scalable. For production, a minimal OpenStack setup requires at least two separate machines, one controller node and one compute node. To get started with OpenStack, a common practice is to set up an all-in-one deployment, i.e. everything on a single machine.
This guide is about setting up an all-in-one deployment of OpenStack Queens, the latest release at the time of writing.
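For reference, one common route to an all-in-one Queens deployment on CentOS 7 is the RDO Packstack installer; this is only a sketch of that route, and the guide itself may take a different approach:

$ sudo yum install -y centos-release-openstack-queens
$ sudo yum update -y
$ sudo yum install -y openstack-packstack
$ sudo packstack --allinone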
Configure static IP address on CentOS
After a fresh installation, CentOS uses DHCP (dhclient -v) to assign an IP address to the machine, which may keep changing across reboots, service restarts, etc.
Use case: in various service setups, especially those involving a clustered configuration, we need to set a fixed IP address for each machine so that they can communicate with each other; with DHCP the installation may break on a reboot, as any of the machines can get a new IP address. So the first step is to set a static IP address. Continue reading “Configure static IP address on CentOS”
Install XAMPP stack on Ubuntu 16.04 using terminal
Apache is a widely used web server, and PHP is a dominant technology when it comes to CMS frameworks, i.e. WordPress, Drupal, etc. For this reason, deployment of the stack has been made effortless with the XAMPP PHP development environment. XAMPP is an acronym, where X stands for any operating system (WAMP for Windows, LAMP for Linux), A for the Apache web server, M for the MySQL or MariaDB database engine, and PP stands for PHP and Perl. This post is about setting up the XAMPP PHP development environment on Ubuntu 16.04, using the terminal.
Step 0 – Login and update
First of all, log in to your Ubuntu machine using SSH – for a regular user it’s recommended to add your SSH public key.
ssh <username>@<hostname/IP>
Continue reading “Install XAMPP stack on Ubuntu 16.04 using terminal”
All you need to know about SSH
Introduction
SSH stands for Secure Shell, a tool developed by SSH Communications Security Ltd for secure remote login and command execution. It’s a secure alternative to its predecessors rlogin, rsh, etc. SSH has become the industry de facto standard for communicating securely with remote machines, i.e. the entire session is encrypted.
SSH is based on public-key cryptography (also known as asymmetric cryptography), a cryptographic system employing a key pair: a public key, which is meant to be shared, and a private key, which has to be kept safe and secret, known only to the owner. The pair serves two purposes: 1. authentication – the public key verifies the owner of the paired private key, and 2. encryption – the public key encrypts a message, and only the paired private key can decrypt it. In simple words, you can share your public key (the content of ~/.ssh/id_rsa.pub) with anyone, via email for example. To access a remote machine securely and without a password, all you need to do is copy your public key into its authorized_keys file (by default ~/.ssh/authorized_keys).
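A minimal sketch of that workflow, assuming an RSA key pair and a reachable remote machine (the user and host are placeholders):

$ ssh-keygen -t rsa
$ ssh-copy-id <username>@<hostname/IP>
$ ssh <username>@<hostname/IP>

ssh-keygen creates the key pair under ~/.ssh/, ssh-copy-id appends the public key to the remote authorized_keys file, and the final login should no longer prompt for a password.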
DB partition trigger with PostgreSQL
Database partitioning is about logically splitting one large table into smaller physical pieces, in order to improve query performance. DB partitioning is a good alternative to indexing multiple columns, reducing index size and hence the memory in use. A few common pros of database partitioning:
- Improved performance – data operations (CRUD) can be performed on a smaller volume of data; for example, when collecting data over time, putting old data in a separate partition might help with performance.
- Bulk creates and deletes can be made efficient by adding or removing whole partitions.
- Time-based partitioning can be helpful in cleaning up old, seldom-used data; e.g. with month-based partitions we can simply set a cron job to clean the 12-month-old partition, without affecting the portion of the table heavily in use for INSERT, UPDATE, etc.
- Improved scalability – in the case of very large tables, you can partition them and host the partitions on separate servers.
There are 2 main approaches to database partitioning:
- Horizontal partitioning (sharding) – a table is split by rows, such that each partition is a subset of the table with the same schema (i.e. the same fields/columns).
- Vertical partitioning – a table is split on its fields/columns, such that each subset has a separate schema. A common use case for vertical partitioning is to partition table fields on the basis of their pattern of use, i.e. frequently accessed fields are grouped together, and the less frequently accessed ones are put in a separate partition.