
Guide to Learning DevOps Part #3 (Building Something to Deploy)

databases, devops, docker, programming, redis - derrick - July 8, 2020

At this point, we have an idea of how a couple of things work, and you might be feeling the pressure that comes with the diverse skill set you need to become a DevOps engineer. I cannot stress enough that coding skills are important in DevOps, so I am putting together one guide focused on them. If you are new to coding, do take your time to perhaps learn more about python first. If you already have a preferred programming language, try to adapt this guide, or drop me a comment and I shall see if I can help.

“It does not matter how slowly you go as long as you do not stop.”
– Confucius

Prerequisites

  • python with pip installed

Starting out

In this part of the guide I will walk through the development of a simple REST application in python. The application will communicate with redis running in a docker container.

Setting up a simple database

Including a database simplifies the design: all of our web service instances can persist and share common data. The database of choice here is redis, run from a docker image. The first thing to do is install redis-tools on the development machine for testing and verification.

$ sudo apt-get install redis-tools

A simple command will download the image, without having to install a redis database instance on the machine. Note that this way of using redis has its strengths and limitations from an architectural perspective.

$ docker pull redis:latest

latest: Pulling from library/redis
8559a31e96f4: Pull complete 
85a6a5c53ff0: Pull complete 
b69876b7abed: Pull complete 
a72d84b9df6a: Pull complete 
5ce7b314b19c: Pull complete 
04c4bfb0b023: Pull complete 
Digest: sha256:800f2587bf3376cb01e6307afe599ddce9439deafbd4fb8562829da96085c9c5
Status: Downloaded newer image for redis:latest
docker.io/library/redis:latest

You can run the following command to check that the latest redis image has been obtained.

$ docker images

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
redis                                latest              235592615444        3 weeks ago         104MB

You can now run an instance of the database. To keep the tutorial simple, I will name the instance “db”, publish port 6379 (the redis default) and run it in detached mode.

$ docker run --name "db" -e ALLOW_EMPTY_PASSWORD=yes -p 6379:6379 -d redis:latest

8387216a90a24d2bf11580930a5f545927cce3a12bd69a0e579ec1ad22554801

You can now use redis from the command line. Do note that any data on the redis instance will not persist if the instance is shut down. Connect to redis simply by typing redis-cli.

$ redis-cli

127.0.0.1:6379> 

For the benefit of learners who may have an existing redis instance on their machines, the IP and port of the redis instance can be specified on the command line.

$ redis-cli -h localhost -p 6379

localhost:6379>

If you didn’t already know this and have learnt something new, perhaps take some time to play with your new virtual toys! How about initialising a key called “counter” and assigning it a value of 0? Fetch the value of counter to verify it is indeed what you set it to be.

localhost:6379> set counter 0
OK
localhost:6379> get counter
"0"
localhost:6379> 

There you have it, a working copy of redis on your machine with which you can persist test data.
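
As a preview of how our application will talk to redis, here is the same counter exchange sketched in python. A tiny in-memory stand-in replaces the real server so the shape of the calls is easy to see (the real app will use the redis-py client, which exposes the same set/get interface); note that, just like real redis, values come back as bytes.

```python
# Sketch of the counter logic, with a tiny in-memory stand-in for redis
# (illustration only -- the application itself will use the redis-py client).
class FakeRedis:
    """Mimics the two redis commands we need: SET and GET.

    Like real redis, stored values come back as bytes, which is why
    the application code will cast them with int() later on.
    """
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = str(value).encode()
        return True

    def get(self, key):
        return self._data.get(key)  # None when the key does not exist

db = FakeRedis()
if db.get("counter") is None:     # first visit: initialise the counter
    db.set("counter", 0)
count = int(db.get("counter")) + 1
db.set("counter", count)
print(count)  # 1
```

The important detail is the None check plus the int() cast: redis has no integer type for GET, so everything round-trips through strings/bytes.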

Setting up the code environment

Now we get to the exciting part: building a simple python backend application in your preferred IDE (for me it’s Visual Studio Code).

Before we start writing python, it is always good practice to prepare a virtual development environment so that packages downloaded for the application are isolated. This helps keep the python installation on your machine clean. My favorite guide is https://docs.python-guide.org/dev/virtualenvs/, but I shall walk you through the process.

We begin by ensuring we have python and pip installed.

$ python --version
Python 2.7.17

$ python3 --version
Python 3.6.9

So I have python 2.7 and python 3.6 installed by default, next step is to check for pip.

$ pip --version
pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7)

$ pip3 --version
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)

As we can see, we have two versions of pip as well, one for python 2.7 and another for python 3.6.

The following command will install virtualenv on your machine. Just to make sure, test your installation by checking the virtualenv version.

$ pip install virtualenv

$ virtualenv --version
virtualenv 20.0.21 from /home/work/.local/lib/python2.7/site-packages/virtualenv/__init__.pyc

Different people like to organise their environments differently; my personal preference is to create an environment named after the python version I am currently coding with.

$ virtualenv -p /usr/bin/python3.6 venv36

created virtual environment CPython3.6.9.final.0-64 in 2115ms
  creator CPython3Posix(dest=/home/work/Documents/workspace/venv36, clear=False, global=False)
  seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, via=copy, app_data_dir=/home/work/.local/share/virtualenv/seed-app-data/v1.0.1)
  activators PythonActivator,FishActivator,XonshActivator,CShellActivator,PowerShellActivator,BashActivator

You should now be able to activate the virtual environment. Your environment name in brackets should appear in your terminal prompt.

$ source venv36/bin/activate
(venv36) $ 

[OPTIONAL] You can deactivate the virtual environment simply by running the command deactivate; the environment name in brackets will disappear.

$ deactivate
$

Writing the code

Now I am going to build a simple backend REST application using python tornado. Before we do that, a few python packages need to be installed in the environment we created earlier. Create a file in your project folder, name it requirements.txt, and give it the following content (version numbers are correct at the time of writing).

tornado==6.0.4
sockets==1.0.0
redis==3.5.3

The following command will install the required packages into your development environment.

$ pip install -r requirements.txt

Collecting tornado
  Downloading tornado-6.0.4.tar.gz (496 kB)
     |████████████████████████████████| 496 kB 2.3 MB/s 
Collecting sockets
  Downloading sockets-1.0.0-py3-none-any.whl (4.5 kB)
Collecting redis
  Using cached redis-3.5.3-py2.py3-none-any.whl (72 kB)
Building wheels for collected packages: tornado
  Building wheel for tornado (setup.py) ... done
  Created wheel for tornado: filename=tornado-6.0.4-cp36-cp36m-linux_x86_64.whl size=427645 sha256=dc57464c4dc13181fbd9d4d60787e50018e86efd80ce4d08b42dfe85da574b9b
  Stored in directory: /home/derrick/.cache/pip/wheels/37/a7/db/2d592e44029ef817f3ef63ea991db34191cebaef087a96f505
Successfully built tornado
Installing collected packages: tornado, sockets, redis
Successfully installed redis-3.5.3 sockets-1.0.0 tornado-6.0.4

You should now have all the required packages to run the script. Since this guide is about learning DevOps and not python programming, I shall simply give away the python code.

To make the web application a little more interactive than just returning hello world, a simple visit counter and an IP address check are included; these will make our lives easier later.
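
The IP check boils down to two calls from python’s standard socket module. Here they are in isolation (a quick standalone sketch):

```python
import socket

host_name = socket.gethostname()              # this machine's (or container's) hostname
loopback = socket.gethostbyname("localhost")  # resolve a name to its IPv4 address
print(host_name, loopback)
```

In the application code, the two calls are chained: socket.gethostbyname(socket.gethostname()) reports the machine’s own address.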

# app.py
import tornado.ioloop
import tornado.web

import redis
import socket


class MainHandler(tornado.web.RequestHandler):

    def get(self):
        # Connect to the redis instance we started earlier
        self.database = redis.Redis(host='localhost', port=6379, db=0)

        # Initialise the counter on the very first visit
        if self.database.get('counter') is None:
            self.database.set("counter", 0)

        # redis returns bytes, so cast before incrementing
        visitCount = int(self.database.get('counter'))
        visitCount = visitCount + 1
        hostName = socket.gethostname()
        ipAddr = socket.gethostbyname(hostName)
        self.database.set("counter", visitCount)
        self.write("Hello! This page is visited %d time(s)<br>current recorded IP is %s" % (visitCount, ipAddr))


def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])


if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

Start up the python script by running this simple command:

(venv36) $ python app.py 

Open a web browser and navigate to http://localhost:8888, refreshing the page will increment the visit counter stored in redis.

Conclusion

Now that you have created a simple application, we have a base that we can modify or use for deployment in our DevOps environment. Hooray! All of the code for this guide can be found in my github repository: https://github.com/snowfoxwoo/blog_gldp03.git


Guide to Learning DevOps Part #2 (Cool Stuff to Do with Ansible)

devops, docker - derrick - July 1, 2020

The knowledge bar for DevOps is set pretty high: good DevOps engineers need to acquire skills from a combination of many different technology categories. These may include programming (Java, Python, etc), virtualisation/container tech, shell scripting, configuration management tools (which may sometimes require learning a Domain Specific Language, or DSL for short), release management, and more. In Part #2 of the guide, I will briefly talk about a convenient configuration management tool (Ansible) that can help you on your DevOps journey.

In my last guide (Part #1), we talked about setting up a DevOps learning environment. I am going to use that environment to jump straight into a pretty cool configuration management tool, Ansible.

Prerequisites

  • Access to multiple Linux machines/VMs through ssh or a multi-instance DevOps environment.
  • Python installed

Installing Ansible on Ubuntu

For the purpose of this guide, I will refer to the machine you are working on as the control node (I’m using Ubuntu 18.04), and the docker instances as the worker nodes. The first step is to include the PPA in your system’s sources list. Run an update afterwards to ensure the package list is refreshed.

$ sudo apt-add-repository ppa:ansible/ansible

$ sudo apt-get update

Simply run the apt-get install command to install Ansible. Verify the installation by viewing the ansible version installed on your machine.

$ sudo apt-get install ansible

$ ansible --version
ansible 2.9.10

You will get a bunch of other information as well (such as the python version, not listed above).

Setting up inventory hosts

Start up your favorite text editor (my favorite is vi) and edit the ansible hosts file (create one if it doesn’t exist).

$ sudo vi /etc/ansible/hosts

If a file was created during installation, it will contain some instructions. Fill in the following to connect to the multi-instance DevOps environment previously created.

[servers]
server1 ansible_host=172.10.0.1
server2 ansible_host=172.10.0.2
server3 ansible_host=172.10.0.3
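
As an aside, ansible also accepts inventories written in YAML; the same three hosts could be expressed as follows (an equivalent sketch, same group name and addresses):

```yaml
all:
  children:
    servers:
      hosts:
        server1:
          ansible_host: 172.10.0.1
        server2:
          ansible_host: 172.10.0.2
        server3:
          ansible_host: 172.10.0.3
```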

You can check whether the connection to your servers succeeds with the following command.

$ ansible all -m ping -u root

server1 | UNREACHABLE! => {
    "changed": false, 
    "msg": "Failed to connect to the host via ssh: root@172.10.0.1: Permission denied (publickey,password).", 
    "unreachable": true
}
server2 | UNREACHABLE! => {
    "changed": false, 
    "msg": "Failed to connect to the host via ssh: root@172.10.0.2: Permission denied (publickey,password).", 
    "unreachable": true
}
server3 | UNREACHABLE! => {
    "changed": false, 
    "msg": "Failed to connect to the host via ssh: root@172.10.0.3: Permission denied (publickey,password).", 
    "unreachable": true
}

However, your request will most likely fail, as we are still missing one step. I usually walk through some basic failures, since it helps with knowing what to expect and how to troubleshoot.

For the above, the reason we cannot connect to our instances is that we are still relying on password authentication to ssh into our containers. What we need to do first is generate ssh keys on our control node.

$ ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/home/work/.ssh/id_rsa): 

Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/work/.ssh/id_rsa.
Your public key has been saved in /home/work/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:LY5Eqsz9vSMmTE6eI+ZQN7xHtTRyRBRvYF7fo76Ux3g work@control-node
The key's randomart image is:
+---[RSA 2048]----+
|       oB..      |
|       + + . .   |
|      o * o . o  |
|   . o = =   . . |
|  . = o S . .    |
| + +o= o . . +   |
|. +*o.o .   = E  |
| .o Boo..  . +   |
| o.. +..oo  .    |
+----[SHA256]-----+

The next thing we need to do is copy the public key onto each of our worker nodes; this eliminates the need for password authentication (note this useful command if your job requires maintaining lots of servers).

$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.10.0.1

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/work/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@172.10.0.1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@172.10.0.1'"
and check to make sure that only the key(s) you wanted were added.

Now try to ssh into the server where the key was copied; you should no longer be prompted for a password.

$ ssh root@172.10.0.1

Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-106-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

root@5ff00d540729:~# 

Now do the same for the other two instances.

$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.10.0.2

$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.10.0.3

Your ansible ping should now return successes, yay!

$ ansible all -m ping -u root

server1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false, 
    "ping": "pong"
}
server3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false, 
    "ping": "pong"
}
server2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false, 
    "ping": "pong"
}

Useful ad-hoc commands to try

The ansible ping example is actually one of the ad-hoc commands you can execute on the fly to obtain information from all the servers listed in the ansible hosts file. A more detailed introduction to ad-hoc commands can be found in the official ansible documentation: https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html

Let’s say I wish to know the python version on each of my worker nodes.

$ ansible all -m setup -a "filter=ansible_python_version" -u root

server3 | SUCCESS => {
    "ansible_facts": {
        "ansible_python_version": "3.5.2", 
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false
}
server2 | SUCCESS => {
    "ansible_facts": {
        "ansible_python_version": "3.5.2", 
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false
}
server1 | SUCCESS => {
    "ansible_facts": {
        "ansible_python_version": "3.5.2", 
        "discovered_interpreter_python": "/usr/bin/python3"
    }, 
    "changed": false
}

Or say I want to have a quick look at how much memory each of my container instances is using.

$ ansible all -m command -a "free -m" -u root

server1 | CHANGED | rc=0 >>
              total        used        free      shared  buff/cache   available
Mem:           7896        3114         607        1054        4174        3263
Swap:          2047          29        2018
server3 | CHANGED | rc=0 >>
              total        used        free      shared  buff/cache   available
Mem:           7896        3094         627        1054        4174        3283
Swap:          2047          29        2018
server2 | CHANGED | rc=0 >>
              total        used        free      shared  buff/cache   available
Mem:           7896        3093         628        1054        4174        3284
Swap:          2047          29        2018

These are just some of the possibilities the command-line fanboys out there can explore. Honestly, you shouldn’t need to do much of this, so I shall not go into too much detail. In my next post, we shall look into some built-in server configuration tricks that ansible provides to DevOps junkies like you and me.


Guide to Learning DevOps Part #1 (Setting up a Local Multi-Server Environment)

devops, docker - derrick - June 24, 2020

Most DevOps beginners have problems starting out. The best time to start on DevOps is when you have acquired some experience in backend application development (a REST API using a framework such as flask or tornado would be great). If you have not built such applications, maybe try out some basic python frameworks to begin with.

So! I assume you already know some basic REST API development. You want to learn or try out some DevOps stuff but don’t know where to start? Let’s see if I can help. To me, the most important thing about learning DevOps is the environment. If you have a proper environment set up to learn in, then you are halfway there!

In every project there is always a need to balance the resources required to achieve a specific goal against how much knowledge you currently have, which means this setup isn’t going to be easy. I plan to set up 3 ubuntu server instances on a single machine which I can bring up and down whenever I want. This, of course, can be achieved using VMware or Virtualbox, but to make it slightly more challenging and create new opportunities to learn new skills, I am going to do this using docker instead.

Prerequisites

  • Basic Linux Machine (Mine has Ubuntu 18.04 installed)
  • Docker installed
  • Docker-compose installed

Creating new Docker networks

My first task will be to create a bridged docker network; this is done so I can assign IP addresses to each ubuntu (docker) instance. In this example, I choose to create my network in the 10.10.x.x subnet.

$ docker network create --subnet 10.10.0.0/24 --gateway 10.10.0.254 mynetname
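
If the /24 suffix is unfamiliar: it fixes the first 24 bits of the address, leaving 8 bits for hosts (256 addresses, 254 of them usable). Python’s standard ipaddress module offers a quick way to sanity-check a subnet choice before creating the network:

```python
import ipaddress

net = ipaddress.ip_network("10.10.0.0/24")
print(net.num_addresses)                            # 256 addresses in the block
print(ipaddress.ip_address("10.10.0.254") in net)   # True -- the chosen gateway fits
print(ipaddress.ip_address("10.10.1.1") in net)     # False -- outside the /24
```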

[OPTIONAL] If you screw up here, you can always list the docker networks and remove those that you don’t want.

$ docker network list

NETWORK ID          NAME                DRIVER              SCOPE
6b52e672b288        bridge              bridge              local
a47adaebb9ea        host                host                local
60b3086fa33e        mynetname           bridge              local
3d4bf0836766        none                null                local

[OPTIONAL] In this case, I am going to remove “mynetname”.

$ docker network rm mynetname

mynetname

Creating SSH Ubuntu Docker

You first need to create an Ubuntu image that has SSH installed. Specific instructions can be found on this link.

https://docs.docker.com/engine/examples/running_ssh_service/

For the benefit of those who want more specific instructions without having to click the link above, I will walk through the process of creating the Docker image. The first thing you need to do is create a directory containing a Dockerfile, written with your favorite text editor.

FROM ubuntu:16.04

RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:THEPASSWORDYOUCREATED' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Paste the code above and save the Dockerfile; remember to replace THEPASSWORDYOUCREATED with a password of your choice.

In the same directory where you placed your Dockerfile, run the following command to build the image.

$ docker build -t ubuntu_ssh .

You can now try to run the docker using the following command.

$ docker run -d -P --name test_server ubuntu_ssh

[OPTIONAL] To ensure that your Docker is running properly and you can ssh successfully, you can first retrieve your port number through a port test.

$ docker port test_server 22

0.0.0.0:32772

[OPTIONAL] Note the port number; in my case it is 32772. The next step is to retrieve the IP address of your Docker daemon.

$ ip address

3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether b8:81:98:bd:2d:a1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.158/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp2s0
       valid_lft 76907sec preferred_lft 76907sec
    inet6 fe80::fc44:8201:3675:45a3/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:f6:12:8b:b8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f6ff:fe12:8bb8/64 scope link 
       valid_lft forever preferred_lft forever

[OPTIONAL] In my case, the Docker daemon has its ip address on 172.17.0.1. I can now ssh into the Ubuntu Docker instance with the ip address and port numbers.

$ ssh root@172.17.0.1 -p32772

[OPTIONAL] Put in your password and you are good to go!

root@172.17.0.1's password: 
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-106-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@523dd9345851:~# 

Spinning up the servers

Next, I am going to run 3 instances of ubuntu 16.04 in detached mode on the network I have created. The first server will be created with the name “server1” and IP address “10.10.0.1”, on the mynetname network we created earlier.

$ docker run --rm -it -d --name server1 --expose 22 --net mynetname --ip 10.10.0.1 ubuntu_ssh:latest

You should now have your first ubuntu instance running; you can ssh into it using the following command.

$ ssh root@10.10.0.1

Now repeat the process and set up 2 more servers in detached mode, each given a different name and ip address.

$ docker run --rm -it -d --name server2 --expose 22 --net mynetname --ip 10.10.0.2 ubuntu_ssh:latest

$ docker run --rm -it -d --name server3 --expose 22 --net mynetname --ip 10.10.0.3 ubuntu_ssh:latest

You should now have 3 ubuntu instances running on Docker that you can play with. Congratulations, give yourself a small pat on the back and feel the adrenaline rush! Now let’s clean up and put our toys back in the box, starting by inspecting all the containers.

$ docker ps

CONTAINER ID        IMAGE               COMMAND               CREATED             STATUS              PORTS                   NAMES
4219ab627bb0        ubuntu_ssh:latest   "/usr/sbin/sshd -D"   38 hours ago        Up 38 hours         22/tcp                  server3
adae0537c745        ubuntu_ssh:latest   "/usr/sbin/sshd -D"   38 hours ago        Up 38 hours         22/tcp                  server2
aa0073e1b137        ubuntu_ssh:latest   "/usr/sbin/sshd -D"   38 hours ago        Up 38 hours         22/tcp                  server1

We can now bring down each container individually.

$ docker stop server1
server1

$ docker stop server2
server2

$ docker stop server3
server3

Now check that we have successfully freed up all our resources for the next step.

$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Cleanup complete! I hope you now have a much better understanding of setting up a low-cost, low-resource container environment. But we are not done yet; remember, the goal is to be able to spin up all the servers and bring them down easily. We are going to make use of a docker-compose script to do this.

Making a docker-compose script

To keep this post simple, I am just going to give you the configuration script. In a future post, I will explain in detail how docker-compose works.

version: '2'

services:
  server1:
    image: ubuntu_ssh:latest
    networks:
        devops_env:
            ipv4_address: 172.10.0.1

  server2:
    image: ubuntu_ssh:latest
    networks:
        devops_env:
            ipv4_address: 172.10.0.2

  server3:
    image: ubuntu_ssh:latest
    networks:
        devops_env:
            ipv4_address: 172.10.0.3

networks:
    devops_env:
        driver: bridge
        ipam:
            config:
                - subnet: 172.10.0.0/16
                  gateway: 172.10.0.254
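
One caveat worth knowing about the subnet above: 172.10.0.0/16 is not one of the RFC 1918 private ranges (those are 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16), so these addresses technically belong to public internet space. It works fine on a local bridge, but a private range is the safer pick. Python’s ipaddress module makes the difference easy to see:

```python
import ipaddress

print(ipaddress.ip_address("172.10.0.1").is_private)  # False -- public address space
print(ipaddress.ip_address("172.16.0.1").is_private)  # True  -- inside 172.16.0.0/12
print(ipaddress.ip_address("10.10.0.1").is_private)   # True  -- inside 10.0.0.0/8
```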

Once you have saved your docker-compose file, run the following command in the same directory to bring everything up in detached mode.

$ docker-compose up -d

Creating network "dcmpdevopsenv_devops_env" with driver "bridge"
Creating dcmpdevopsenv_server2_1 ... 
Creating dcmpdevopsenv_server3_1 ... 
Creating dcmpdevopsenv_server1_1 ... 
Creating dcmpdevopsenv_server2_1
Creating dcmpdevopsenv_server3_1
Creating dcmpdevopsenv_server1_1 ... done

I have created a new subnet for this docker-compose file so that it can run in parallel with what you have already created. You can now ssh into any of the servers with the IP addresses you specified in the docker-compose file; in my case, the following command will suffice. Enter your password and you are good to go!

$ ssh root@172.10.0.1

The authenticity of host '172.10.0.1 (172.10.0.1)' can't be established.
ECDSA key fingerprint is SHA256:Br0w5T0GWmJyRdjNHUx2JR6nVQ0L3ln1gP4xWT7ao+0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.10.0.1' (ECDSA) to the list of known hosts.
root@172.10.0.1's password: 
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-106-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@60a45a4f5105:~#

If at any point you need to bring it down again, just exit the ssh session and use the following command.

$ docker-compose down

Stopping dcmpdevopsenv_server1_1 ... done
Stopping dcmpdevopsenv_server3_1 ... done
Stopping dcmpdevopsenv_server2_1 ... done
Removing dcmpdevopsenv_server1_1 ... done
Removing dcmpdevopsenv_server3_1 ... done
Removing dcmpdevopsenv_server2_1 ... done
Removing network dcmpdevopsenv_devops_env

The docker-compose file makes it really easy whenever you need to spin up a couple of servers to try something new! So do give it a try, and feel free to let me know how this post can be improved. The docker-compose file and Dockerfile can be accessed via my github link: https://github.com/snowfoxwoo/dcmp-devops-env

Stay calm and keep coding! – Derrick Woo
