Monthly Archives for July 2020

My Thoughts from the Singapore General Election 2020

personal - derrick - July 12, 2020

For fellow Singaporeans at home and around the world, the past week has been an exciting one. I fondly remember the days in 2011 and 2015, texting with old friends, wrapping up my work early and rushing off in excitement to experience the lively scenes of political rallies once again. The night wouldn't end without a long discussion on what the outcome of the polls would be, over a nice big pot of bak kut teh and teh-o-kosong at a random kopitiam.

Fast forward to 2020: here I am in a distant Hong Kong village amidst the Covid-19 pandemic, watching the elections live on CNA. I must say this is quite an experience. Even though I am far away from home, I participate actively in Whatsapp groups with friends who share the same passion for being 5-yearly political analysts, exchanging memes and articles that get all of us talking. With all that, I am reminded that I am a true blue Singaporean, born and bred in the little red dot I call home.

Regretfully, it is logistically impossible for me to vote this round (my bad, I registered as an overseas voter too late). However, I say this with a little tinge of silly pride, having endured the pain and suffering of staying awake until 4am at my age: I am proud of the outcome of the elections, and it is my belief that the results have made us stronger as a whole.

I believe that in every society there must be progressive change, even in a system as stable as ours. It is what makes us better people. Some people or media may perceive change as instability; let them think that way. At least where I am concerned, we have achieved change together, with minimal variation introduced this round. To me, that is the best of both worlds.

The new generation of voters is more knowledgeable and savvy, which is a good sign in a knowledge-based economy: they are able to think critically to make and influence decisions early. It is an indication that our society is evolving. Unlike us, who grew up in an era where we were better off doing what we were told, they are now better empowered to make decisions over how our story should progress. What we should be doing is to continue to adapt and evolve. The only constant is change.

The following are some important points I noted for my personal records while following the elections; they are not directed at any events in particular:

1. If you are going to be ambitious, be humble and don't be mean to others, no matter your achievements.
2. Always be fair to others, but don't expect others to be fair to you.
3. Accept that we are not the people who came before us, and we should not expect the ones who come after us to be the same.
4. If self-actualisation is more important than resources, take the route less travelled.
5. Be patient; tough times don't last, tough people do.
6. Be open to alternative views, or one day you might become the alternative view.
7. It is important to step back a little sometimes, to listen and see what our environment is telling us.
8. Always be thankful; what may seem so little to you may mean the world to someone else.
9. Failure is only failure if you stop trying; it is an opportunity to understand the mechanisms to make things right.
10. The biggest change is what the pandemic has forced upon us: if even elections can go online, why not working from home?

Change is always an opportunity to improve; the moment we stop doing that, we are done for. There is no such thing as failure, a bad outcome or a lousy report card for any Singaporean or for any of the political parties. Those are just perceptions that cloud our ability to transform. Let us assess our takeaways from this event, solve problems, overcome our weaknesses and amplify our strengths. The pandemic is temporary; we will get through this together.

Majulah Singapura.

This is my first personal post on this blog site, so please overlook the bad writing and emotional outbursts; I will continue to change and improve.


Guide to Learning DevOps Part #3 (Building Something to Deploy)

databases, devops, docker, programming, redis - derrick - July 8, 2020

At this point, we have an idea of how a couple of things work, and you might be feeling the pressure that comes with the diverse skillset needed to become a DevOps engineer. I cannot stress enough that coding skills are important in DevOps, so I am putting together one guide focused on them. If you are new to coding, do take your time to learn more about Python first. If you already have a preferred programming language, try to adapt this guide, or drop me a comment and I shall see if I can help.

“It does not matter how slowly you go as long as you do not stop.”
– Confucius

Prerequisites

  • Python with pip installed

Starting out

In this part of the guide, I will walk through the development of a simple REST application in Python. The application will communicate with a Redis instance running in a Docker container.

Setting up a simple database

Including a database simplifies the design: all of our web service instances can persist and share common data through it. The database of choice here is Redis, run from a Docker image. The first thing to do is to get redis-tools installed on the development machine for testing and verification.

$ sudo apt-get install redis-tools

A single command downloads the image without having to install a Redis database instance on the machine. Note that this way of using Redis has its strengths and limitations from an architectural perspective.

$ docker pull redis:latest

latest: Pulling from library/redis
8559a31e96f4: Pull complete 
85a6a5c53ff0: Pull complete 
b69876b7abed: Pull complete 
a72d84b9df6a: Pull complete 
5ce7b314b19c: Pull complete 
04c4bfb0b023: Pull complete 
Digest: sha256:800f2587bf3376cb01e6307afe599ddce9439deafbd4fb8562829da96085c9c5
Status: Downloaded newer image for redis:latest
docker.io/library/redis:latest

You can run the following command to check that the latest redis image has been obtained.

$ docker images

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
redis                                latest              235592615444        3 weeks ago         104MB

You can now run an instance of the database. To simplify the tutorial, I will name the instance “db”, publish port 6379 (the Redis default) and run it in detached mode.

$ docker run --name "db" -e ALLOW_EMPTY_PASSWORD=yes -p 6379:6379 -d redis:latest

8387216a90a24d2bf11580930a5f545927cce3a12bd69a0e579ec1ad22554801

You can now use Redis from the command line. Do note that any data on this Redis instance will not persist if the instance is shut down. Connect to it simply by typing redis-cli.

$ redis-cli

127.0.0.1:6379> 

For the benefit of learners who may have an existing Redis instance on their machines, the IP and port of the Redis instance can be specified on the command line.

$ redis-cli -h localhost -p 6379

localhost:6379>

If you didn't already know this and have learnt something new, perhaps take some time to play with your new virtual toys! How about initialising a key called “counter” and assigning it a value of 0? Fetch the value of counter to verify it is indeed what you set it to.

localhost:6379> set counter 0
OK
localhost:6379> get counter
"0"
localhost:6379> 

There you have it: a working copy of Redis on your machine with which you can persist test data.
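If you prefer to script such checks rather than reach for redis-cli, a quick connectivity test can be sketched with Python's standard socket module (the port_open helper below is my own illustration, not part of any library):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, unreachable hosts and timeouts
        return False

if __name__ == "__main__":
    # Expect True while the "db" container from the previous step is running
    print("redis reachable:", port_open("localhost", 6379))
```

This only tells you that something is listening on the port, not that it speaks the Redis protocol, but it is a handy first step when troubleshooting.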

Setting up the code environment

Now we get to the exciting part: building a simple Python backend application in your preferred IDE (for me, it's Visual Studio Code).

Before we start building in Python, it is always good practice to prepare a virtual development environment so that packages downloaded for the application are isolated. This helps to keep the Python installation on your machine clean. My favorite guide is https://docs.python-guide.org/dev/virtualenvs/, but I shall walk you through the process.

We begin by ensuring we have python and pip installed.

$ python --version
Python 2.7.17

$ python3 --version
Python 3.6.9

So I have Python 2.7 and Python 3.6 installed by default; the next step is to check for pip.

$ pip --version
pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7)

$ pip3 --version
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)

As we can see, we have two versions of pip as well: one for Python 2.7 and another for Python 3.6.

The following command installs virtualenv on your machine. Just to make sure, verify the installation by checking the virtualenv version.

$ pip install virtualenv

$ virtualenv --version
virtualenv 20.0.21 from /home/work/.local/lib/python2.7/site-packages/virtualenv/__init__.pyc

Different people like to organise their environments differently; my personal preference is to create an environment specifically for the Python version I am currently coding with.

$ virtualenv -p /usr/bin/python3.6 venv36

created virtual environment CPython3.6.9.final.0-64 in 2115ms
  creator CPython3Posix(dest=/home/work/Documents/workspace/venv36, clear=False, global=False)
  seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, via=copy, app_data_dir=/home/work/.local/share/virtualenv/seed-app-data/v1.0.1)
  activators PythonActivator,FishActivator,XonshActivator,CShellActivator,PowerShellActivator,BashActivator

You should now be able to activate the virtual environment. Your environment name in brackets should appear in your terminal prompt.

$ source venv36/bin/activate
(venv36) $ 

[OPTIONAL] You can deactivate the virtual environment simply by running the command deactivate; the environment name in brackets will disappear from the prompt.

$ deactivate
$

Writing the code

Now I am going to build a simple backend REST application using Python Tornado. Before we do that, a few Python packages need to be installed in the environment we created earlier. Create a file in your project folder and name it requirements.txt. The content of the file shall be the following (version numbers are as at the time of writing this entry).

tornado==6.0.4
sockets==1.0.0
redis==3.5.3

The following command will install the required packages into your development environment.

$ pip install -r requirements.txt

Collecting tornado
  Downloading tornado-6.0.4.tar.gz (496 kB)
     |████████████████████████████████| 496 kB 2.3 MB/s 
Collecting sockets
  Downloading sockets-1.0.0-py3-none-any.whl (4.5 kB)
Collecting redis
  Using cached redis-3.5.3-py2.py3-none-any.whl (72 kB)
Building wheels for collected packages: tornado
  Building wheel for tornado (setup.py) ... done
  Created wheel for tornado: filename=tornado-6.0.4-cp36-cp36m-linux_x86_64.whl size=427645 sha256=dc57464c4dc13181fbd9d4d60787e50018e86efd80ce4d08b42dfe85da574b9b
  Stored in directory: /home/derrick/.cache/pip/wheels/37/a7/db/2d592e44029ef817f3ef63ea991db34191cebaef087a96f505
Successfully built tornado
Installing collected packages: tornado, sockets, redis
Successfully installed redis-3.5.3 sockets-1.0.0 tornado-6.0.4

You should now have all the required packages to run the script. Since this guide is all about learning DevOps and not Python programming, I shall simply give away the Python code.

To make the web application a little more interactive than just returning hello world, a simple visit counter and IP address check are included; these will make our lives easier later.

import tornado.ioloop
import tornado.web

import redis
import socket

class MainHandler(tornado.web.RequestHandler):

    def get(self):
        # Connect to the Redis container started earlier (default port 6379)
        self.database = redis.Redis(host='localhost', port=6379, db=0)

        # Initialise the counter on the very first visit
        if self.database.get('counter') is None:
            self.database.set("counter", 0)

        visitCount = int(self.database.get('counter')) + 1
        hostName = socket.gethostname()
        ipAddr = socket.gethostbyname(hostName)
        self.database.set("counter", visitCount)
        self.write("Hello! This page is visited %d time(s)<br>current recorded IP is %s" % (visitCount, ipAddr))


def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

Start up the Python script (save it as app.py) by running this simple command.

(venv36) $ python app.py 

Open a web browser and navigate to http://localhost:8888, refreshing the page will increment the visit counter stored in redis.
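If you want to understand the counter logic without a live Redis around, it can be exercised in isolation with a dict-backed stand-in for the client (FakeRedis and record_visit below are hypothetical helpers of mine for illustration, not part of redis-py or the app):

```python
class FakeRedis:
    """Minimal in-memory stand-in mimicking the two redis-py calls the handler uses."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        # redis-py returns None for missing keys, bytes otherwise
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = str(value).encode()


def record_visit(db):
    """The same get-or-initialise-then-increment pattern used in MainHandler.get()."""
    if db.get("counter") is None:
        db.set("counter", 0)
    count = int(db.get("counter")) + 1
    db.set("counter", count)
    return count


db = FakeRedis()
print(record_visit(db))  # prints 1
print(record_visit(db))  # prints 2
```

This kind of substitution is also handy later when writing unit tests for the service, since the tests then need no running database.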

Conclusion

Now that you have created a simple application, we have a base that we can modify or use for deployment in our DevOps environment. Hooray! All of the code for this guide can be found in my github repository: https://github.com/snowfoxwoo/blog_gldp03.git


Guide to Learning DevOps Part #2 (Cool Stuff to Do with Ansible)

devops, docker - derrick - July 1, 2020

The knowledge bar for DevOps is set pretty high: good DevOps engineers need to acquire skills from a combination of many different technology categories. These may include programming (Java, Python, etc.), virtualisation/container tech, shell scripting, configuration management tools (which sometimes require learning a Domain Specific Language, or DSL for short) and even release management. In Part #2 of the guide, I will briefly talk about the use of a convenient configuration management tool, Ansible, that can help you on your DevOps journey.

In my last guide (Part #1), we talked about setting up a DevOps learning environment. I am going to use that environment to jump straight into a pretty cool configuration management tool, Ansible.

Prerequisites

  • Access to multiple Linux machines/VMs through ssh or a multi-instance DevOps environment.
  • Python installed

Installing Ansible on Ubuntu

For the purpose of this guide, I will refer to the machine you are working on as the control node (I'm using Ubuntu 18.04) and the Docker instances as the worker nodes. The first step is to add the PPA to your system's sources list. Do run an update after, to ensure the package list is refreshed.

$ sudo apt-add-repository ppa:ansible/ansible

$ sudo apt-get update

Simply run the apt-get install command to install Ansible, then verify the installation by checking the Ansible version on your machine.

$ sudo apt-get install ansible

$ ansible --version
ansible 2.9.10

You will get a bunch of other information as well (such as the Python version, not shown above).

Setting up inventory hosts

Start up your favorite text editor (my favorite is vi) and edit the Ansible hosts file (create one if it doesn't exist).

$ sudo vi /etc/ansible/hosts

If a file was created by the installation, it will contain some instructions. Fill in the following to connect to the multi-instance DevOps environment previously created.

[servers]
server1 ansible_host=172.10.0.1
server2 ansible_host=172.10.0.2
server3 ansible_host=172.10.0.3
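As an aside, this INI-style inventory format is simple enough to sanity-check from Python if you ever generate it programmatically (parse_inventory below is purely my own illustration; Ansible has its own inventory tooling):

```python
def parse_inventory(text):
    """Parse a minimal Ansible INI-style inventory into {group: {host: vars}}."""
    groups, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";")):
            continue  # skip blanks and comments
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]          # a [group] header starts a new group
            groups[current] = {}
        elif current is not None:
            # each host line is "name key=value key=value ..."
            host, *pairs = line.split()
            groups[current][host] = dict(p.split("=", 1) for p in pairs)
    return groups


inventory = """\
[servers]
server1 ansible_host=172.10.0.1
server2 ansible_host=172.10.0.2
server3 ansible_host=172.10.0.3
"""
print(parse_inventory(inventory)["servers"]["server2"]["ansible_host"])  # prints 172.10.0.2
```

This handles only the subset of the format used above; real inventories support group variables, children groups and more.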

You can check whether the connection to your servers succeeds with the following command.

$ ansible all -m ping -u root

server1 | UNREACHABLE! => {
    "changed": false, 
    "msg": "Failed to connect to the host via ssh: root@172.10.0.1: Permission denied (publickey,password).", 
    "unreachable": true
}
server2 | UNREACHABLE! => {
    "changed": false, 
    "msg": "Failed to connect to the host via ssh: root@172.10.0.2: Permission denied (publickey,password).", 
    "unreachable": true
}
server3 | UNREACHABLE! => {
    "changed": false, 
    "msg": "Failed to connect to the host via ssh: root@172.10.0.3: Permission denied (publickey,password).", 
    "unreachable": true
}

As the output shows, the request will most likely fail, as we are still missing one step. I usually go through some possible basic failures, as it helps with knowing what to expect and how to troubleshoot.

In the case above, the reason we cannot connect to our instances is that we are still relying on password authentication to ssh into our containers. What we need to do first is generate ssh keys on our control node.

$ ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/home/work/.ssh/id_rsa): 

Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/work/.ssh/id_rsa.
Your public key has been saved in /home/work/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:LY5Eqsz9vSMmTE6eI+ZQN7xHtTRyRBRvYF7fo76Ux3g work@control-node
The key's randomart image is:
+---[RSA 2048]----+
|       oB..      |
|       + + . .   |
|      o * o . o  |
|   . o = =   . . |
|  . = o S . .    |
| + +o= o . . +   |
|. +*o.o .   = E  |
| .o Boo..  . +   |
| o.. +..oo  .    |
+----[SHA256]-----+

The next thing we need to do is copy the public key onto our worker nodes; this eliminates the need for password authentication (note this useful command if your job requires maintaining lots of servers).

$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.10.0.1

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/work/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@172.10.0.1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@172.10.0.1'"
and check to make sure that only the key(s) you wanted were added.

Now try to ssh into the server where the key was copied; you should no longer require a password to log in.

$ ssh root@172.10.0.1

Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-106-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

root@5ff00d540729:~# 

Now do the same for the other two instances.

$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.10.0.2

$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.10.0.3

Your ansible ping should now return successes, yay!

$ ansible all -m ping -u root

server1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false, 
    "ping": "pong"
}
server3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false, 
    "ping": "pong"
}
server2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false, 
    "ping": "pong"
}

Useful ad-hoc commands to try

The Ansible ping example is actually one of the ad-hoc commands you can execute on the fly to obtain information from all the servers listed in your Ansible hosts file. A more detailed introduction to ad-hoc commands can be found in the official Ansible documentation here: https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html

Let's say I wish to know the Python versions on each of my worker nodes.

$ ansible all -m setup -a "filter=ansible_python_version" -u root

server3 | SUCCESS => {
    "ansible_facts": {
        "ansible_python_version": "3.5.2", 
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false
}
server2 | SUCCESS => {
    "ansible_facts": {
        "ansible_python_version": "3.5.2", 
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false
}
server1 | SUCCESS => {
    "ansible_facts": {
        "ansible_python_version": "3.5.2", 
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false
}
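Output in this "host | SUCCESS => {...}" shape is also easy to post-process, since each payload is a JSON object once the prefix is stripped. A sketch (facts_by_host is my own illustrative helper, and the sample string is trimmed from the run above):

```python
import json
import re

def facts_by_host(output):
    """Extract the JSON payload of each 'host | SUCCESS => {...}' block."""
    result = {}
    # Match "server3 | SUCCESS => {" through the closing "}" at column 0
    for match in re.finditer(r"^(\S+) \| SUCCESS => (\{.*?^\})", output,
                             re.MULTILINE | re.DOTALL):
        host, payload = match.group(1), match.group(2)
        result[host] = json.loads(payload)
    return result


sample = """\
server3 | SUCCESS => {
    "ansible_facts": {
        "ansible_python_version": "3.5.2"
    },
    "changed": false
}
server2 | SUCCESS => {
    "ansible_facts": {
        "ansible_python_version": "3.5.2"
    },
    "changed": false
}
"""
for host, facts in facts_by_host(sample).items():
    print(host, facts["ansible_facts"]["ansible_python_version"])
```

For anything serious you would use Ansible's JSON stdout callback or facts caching instead, but this trick is handy for quick one-off reports.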

Or say I want to have a quick look at how much memory each of my container instances is using.

$ ansible all -m command -a "free -m" -u root

server1 | CHANGED | rc=0 >>
              total        used        free      shared  buff/cache   available
Mem:           7896        3114         607        1054        4174        3263
Swap:          2047          29        2018
server3 | CHANGED | rc=0 >>
              total        used        free      shared  buff/cache   available
Mem:           7896        3094         627        1054        4174        3283
Swap:          2047          29        2018
server2 | CHANGED | rc=0 >>
              total        used        free      shared  buff/cache   available
Mem:           7896        3093         628        1054        4174        3284
Swap:          2047          29        2018

These are just some of the possibilities to explore for the command-line fans out there. Honestly, you shouldn't need to do much of this, so I shall not go into too much detail. In my next post, we shall look into some built-in server configuration tricks that Ansible provides to DevOps junkies like you and me.
