Sunday, 29 December 2019

Packaging MySQL in a container

Now that I have apache and PHP running in separate containers, I wanted to run MySQL in a container too. A Google search revealed that MySQL uses port 3306, so I created a dockerfile inside the mysql folder. This is how it looked

FROM ubuntu
RUN ["apt-get","update"]
RUN apt-get install mysql-server -y
EXPOSE 3306
ENTRYPOINT service mysql start && tail -f /dev/null

I also placed a db.php inside the public_html folder with the following lines

<h1>DB TEST</h1>
<h4>Attempting MySQL connection from php...</h4>
<?php
$host_name = 'mysql';
$user = 'root';
$passwd = '';
$conn = new mysqli($host_name, $user, $passwd);

if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
echo "Connected to MySQL successfully!";
?>

So now to build and run it

docker build -t mysql  ./mysql
docker run -d --network testnet --network-alias mysql --name mysqlcontainer mysql

Based on my experience with the PHP container, I gave mysql a network-alias as well, connected it to my testnet network, and then started up the apachecontainer and the phpcontainer. Then I browsed to http://localhost:100/db.php but got an error that said connection refused.

Error 1:

I couldn't find more details so I connected to the apachecontainer using

docker exec -it apachecontainer /bin/bash
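Inside the container I first installed the MySQL command-line client. The package name below is the one on Ubuntu 18.04; it may differ on other releases:

apt-get update && apt-get install mysql-client -y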

With the client installed, I ran the following command

mysql -u root -h mysql -P 3306 mysql

It said "ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql' (111)". Searching for that in google revealed that MySQL by default accepts connections only from localhost and to change it I need to make changes to the cnf file used by it. I found that MySQL was using the cnf file found at the following location '/etc/mysql/mysql.conf.d/mysqld.cnf'. So I copied it out

docker cp  mysqlcontainer:/etc/mysql/mysql.conf.d/mysqld.cnf mysql/mysqld.cnf

I commented out the line that said "bind-address = 127.0.0.1" so that MySQL would listen on all interfaces instead of just the loopback, and changed my dockerfile to

FROM ubuntu
RUN ["apt-get","update"]
RUN apt-get install mysql-server -y
COPY mysqld.cnf /etc/mysql/mysql.conf.d/mysqld.cnf
EXPOSE 3306
ENTRYPOINT service mysql start && tail -f /dev/null
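As an aside, the same change could probably be made in place during the build with a sed line instead of keeping a copy of mysqld.cnf next to the dockerfile, something along these lines (I stuck with the COPY approach here):

RUN sed -i 's/^bind-address/# bind-address/' /etc/mysql/mysql.conf.d/mysqld.cnf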

I then built the image and ran it as before. This time I got a 'not allowed to connect' error.

Error 2:

I checked it using mysql-client in the apachecontainer and this was the error I got: "ERROR 1130 (HY000): Host 'apachecontainer.testnet' is not allowed to connect to this MySQL server". Based on the solution here, I modified the dockerfile as

FROM ubuntu
RUN ["apt-get","update"]
RUN apt-get install mysql-server -y
COPY mysqld.cnf /etc/mysql/mysql.conf.d/mysqld.cnf
RUN service mysql start
RUN mysql -u root -D mysql -e "update user set host='%' where host='localhost';"
EXPOSE 3306
ENTRYPOINT service mysql start && tail -f /dev/null

and when I built it I got the following warning and error

[Warning] World-writable config file '/etc/mysql/mysql.conf.d/mysqld.cnf' is ignored.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

To fix the warning I had to change the permissions of the mysqld.cnf file. The error was more confusing, since it is supposed to mean that MySQL is not running at all, yet I had started it on the previous line. It turns out each RUN command is executed in a separate intermediate container, so the server started in one RUN is gone by the time the next one runs. I changed my dockerfile as follows to remove both the warning and the error

FROM ubuntu
RUN ["apt-get","update"]
RUN apt-get install mysql-server -y
COPY mysqld.cnf /etc/mysql/mysql.conf.d/mysqld.cnf
RUN chmod 0444 /etc/mysql/mysql.conf.d/mysqld.cnf
RUN service mysql start && mysql -u root -D mysql -e "update user set host='%' where host='localhost';"
EXPOSE 3306
ENTRYPOINT service mysql start && tail -f /dev/null

I put the mysql start command and the query on the same RUN line. Now I was able to build the image and run it, but again, when I connected to it, I got an access denied error.

Error 3:

I checked it using mysql-client in the apachecontainer and this was the error I got: "ERROR 1698 (28000): Access denied for user 'root'@'apachecontainer.testnet'". I needed to set a password for the root user, so I changed my dockerfile again

FROM ubuntu
RUN ["apt-get","update"]
RUN apt-get install mysql-server -y
COPY mysqld.cnf /etc/mysql/mysql.conf.d/mysqld.cnf
RUN chmod 0444 /etc/mysql/mysql.conf.d/mysqld.cnf
RUN service mysql start && mysql -u root -D mysql -e "update user set host='%' where host='localhost';"
RUN service mysql start && mysql -u root -D mysql -e "ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'newpass';"
EXPOSE 3306
ENTRYPOINT service mysql start && tail -f /dev/null
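One caveat about the UPDATE above: it changes the host for every account that currently has host='localhost', not just root (system accounts like debian-sys-maint get changed too). A more targeted version would probably be something like:

RUN service mysql start && mysql -u root -D mysql -e "UPDATE user SET host='%' WHERE user='root' AND host='localhost'; FLUSH PRIVILEGES;"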

I set a new password, newpass, for the root user. As far as I can tell, the ALTER USER also switches root from the default auth_socket plugin to mysql_native_password, which is what error 1698 was really about. I also updated the password in db.php

$host_name = 'mysql';
$user = 'root';
$passwd = 'newpass';

I built the image and ran the container with the commands used earlier, and this time when I browsed to http://localhost:100/db.php, I got the connection success message.


Securing MySQL Installation:

I am sure this is not the right way to do it and there must be better ways, but this was the only way I could think of.


docker exec -it mysqlcontainer /bin/bash
mysql_secure_installation

I entered 'n' for the 'Disallow root login remotely' option and didn't set a new root password since I already had one. I guess I could use 'docker commit' to save the changes to an image.
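If that commit idea works, something like this should snapshot the running container into a new image (the mysql:secured tag is just a name I made up):

docker commit mysqlcontainer mysql:secured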

Configuring phpMyAdmin:

First I downloaded the phpMyAdmin zip file from here and extracted all of its contents into the public_html/phpMyAdmin folder.
  1. Inside the phpMyAdmin folder I copied config.sample.inc.php to config.inc.php.
  2. Edited config.inc.php and replaced the line $cfg['Servers'][$i]['host'] = 'localhost'; with $cfg['Servers'][$i]['host'] = 'mysql'; (i.e. the network-alias of the mysqlcontainer).
  3. Set $cfg['blowfish_secret']. I used a blowfish secret generator found here.
  4. Then I created a folder named tmp inside the phpMyAdmin folder and set its permissions so that anyone can write to it (a condensed shell version of these steps follows the list).
  5. Now I could log in with username root and password newpass at http://localhost:100/phpMyAdmin. It showed a warning like "The phpMyAdmin configuration storage is not completely configured, some extended features have been deactivated"; clicking the "Find out why" option told me to create a database for phpMyAdmin, and clicking the Create option created it.
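In shell terms, steps 1, 2 and 4 boil down to roughly the following, run from inside public_html (the wide-open 777 permission is the blunt option; something tied to the web server user would be tighter):

cd phpMyAdmin
cp config.sample.inc.php config.inc.php    # then edit host and blowfish_secret by hand
mkdir tmp
chmod 777 tmp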

Using docker-compose file:

I was also able to set up the services using the following docker-compose file (the image names phptest, apache2 and mysql refer to the images I built earlier, which already exist locally)

version: "3.7"
services:
  php:
    image: "phptest"
    networks:
      - backend
    volumes:
      - ./public_html/:/var/www/html/
  apache:
    image: "apache2"
    depends_on:
      - php
    networks:
      - backend
    ports:
      - "8080:80"
    volumes:
      - ./public_html/:/var/www/html/
  mysql:
    image: mysql
    networks:
      - backend
networks:
  backend:

and started the services using 'docker-compose up -d'. I was able to connect to the database without any problems. With docker-compose there is no need for an explicit network-alias, since each service name (php, apache, mysql) is already resolvable as a hostname on the backend network. So, with a bit of difficulty, I have set up Apache, PHP and MySQL each running in a different container on an ubuntu base image.
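To double-check that name resolution, something like this run against the apache service should print the address Docker's internal DNS hands out for mysql:

docker-compose exec apache getent hosts mysql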

Saturday, 28 December 2019

Packaging PHP in a Container

I started my attempt to package PHP after reading this. There they had used a docker-compose file to achieve the required result, but I wanted to do it without docker-compose. We also need php-fpm for this, so I read an article here to find the command needed to install it. Having seen all these tutorials, I thought it would be simpler than I expected. Well, this is how 'simple' it was.
I realized that the dockerfile for apache I created in my last post wouldn't be enough, since it won't forward PHP requests to the php container. This is how I set up my folder structure

.
├── apache
|   ├── dockerfile
|   └── demo.apache.conf
├── php
|   └── dockerfile
└── public_html
    ├── index.php
    └── test.html

I modified the apache dockerfile so that it replaces the default 000-default.conf with my custom demo.apache.conf using the COPY command. This is how the dockerfile for apache looks now

FROM ubuntu
RUN ["apt-get","update"]
RUN ["apt-get","install","apache2","-y"]
RUN apache2ctl start
RUN a2enmod proxy_fcgi
COPY demo.apache.conf /etc/apache2/sites-available/000-default.conf
EXPOSE 80
ENTRYPOINT apache2ctl start && tail -f /dev/null

I have just enabled the proxy_fcgi module and copied in the conf file. This is the demo.apache.conf file

ServerName localhost
<VirtualHost *:80>
 ServerAdmin webmaster@localhost
 DocumentRoot /var/www/html
     ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://php:9000/var/www/html/$1
     <Directory /var/www/html/>
          DirectoryIndex index.php index.html
          Options Indexes FollowSymLinks
          AllowOverride All
         Require all granted
     </Directory>
 ErrorLog ${APACHE_LOG_DIR}/error.log
 CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

With this, I have set up the dockerfile for apache. I removed old containers using

docker system prune -a

Next, I moved on to creating the dockerfile for php container.

FROM ubuntu
RUN ["apt-get","update"]
RUN apt-get install php7.2-fpm php7.2-mysql php7.2-mbstring php7.2-curl php7.2-dom -y
EXPOSE 9000
ENTRYPOINT service php7.2-fpm start &&  tail -f /dev/null

Seems simple, right? That's because this doesn't work. At the time I thought everything was fine, so I went ahead and ran it all with the following commands

docker build -t apache2 ./apache
docker build -t phptest ./php
docker run -d -v /home/user/Desktop/DockerTesting/public_html:/var/www/html -p 100:80 --name apachecontainer apache2
docker run -d -v /home/user/Desktop/DockerTesting/public_html:/var/www/html -p 9000:9000 --name phpcontainer phptest

and browsed to http://localhost:100/ to run the index.php located in the public_html folder. This is what I got


As I was searching the internet for a solution, I found that I have to make php-fpm listen on port 9000 and that the setting lives in the www.conf file. So now I had to find the www.conf file in the container, for which I used

docker exec -it phpcontainer /bin/bash

to get a shell in the container, and after a bunch of cd and ls commands I found the www.conf file at '/etc/php/7.2/fpm/pool.d/'. I copied the file out of the container using

docker cp phpcontainer:/etc/php/7.2/fpm/pool.d/www.conf  www.conf
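In hindsight, a find inside the container would have located it in one step instead of all the cd and ls hunting:

docker exec phpcontainer find /etc -name www.conf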

Sure enough, listen was not set to a TCP port (it pointed at a unix socket). So I changed that line to 'listen = 9000', saved the www.conf file inside the php folder, and modified the dockerfile as

FROM ubuntu
RUN ["apt-get","update"]
RUN apt-get install php7.2-fpm php7.2-mysql php7.2-mbstring php7.2-curl php7.2-dom -y
COPY www.conf /etc/php/7.2/fpm/pool.d/www.conf
EXPOSE 9000
ENTRYPOINT service php7.2-fpm start &&  tail -f /dev/null
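Once the new image is running, a quick sanity check that the setting actually made it into the container (assuming the same container name and path) would be:

docker exec phpcontainer grep '^listen = ' /etc/php/7.2/fpm/pool.d/www.conf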

Now I removed the phpcontainer and the phptest image and rebuilt them using the commands mentioned above. With all the hope in the world, I opened http://localhost:100/ and was again greeted with the same error. I had no idea what went wrong or how to fix it, so I tried again, this time using docker-compose.

Trial Using Docker-Compose:

Since my attempt to package PHP and Apache in separate containers without docker-compose had failed miserably, I decided to try it with docker-compose. This is my folder structure; everything is the same as before except for the new docker-compose.yml file

.
├── apache
|   ├── dockerfile
|   └── demo.apache.conf
├── php
|   ├── dockerfile
|   └── www.conf
├── public_html
|   ├── index.php
|   └── test.html
└── docker-compose.yml

I created the docker-compose file based on the example given here

version: "3.7"
services:
  php:
    build: './php/'
    networks:
      - backend
    volumes:
      - ./public_html/:/var/www/html/
  apache:
    build: './apache/'
    depends_on:
      - php
    networks:
      - backend
    ports:
      - "8080:80"
    volumes:
      - ./public_html/:/var/www/html/
networks:
  backend:

I set the version to 3.7 based on the documentation and ran the file using

docker-compose up -d

and this time I was able to access the PHP file at http://localhost:8080. I was confused: I had used the same dockerfiles as before, but now everything worked like magic. So either the way I was building the images was wrong, or the way I was running them was wrong, and I decided to test both. First I stopped the services using

docker-compose down

First I tested the way I build the images. I built them the same way I did last time

docker build -t apache2 ./apache
docker build -t phptest ./php

and then modified my docker-compose file as

version: "3.7"
services:
  php:
    image: "phptest"
    networks:
      - backend
    volumes:
      - ./public_html/:/var/www/html/
  apache:
    image: "apache2"
    depends_on:
      - php
    networks:
      - backend
    ports:
      - "8080:80"
    volumes:
      - ./public_html/:/var/www/html/
networks:
  backend:

Now I started them with 'docker-compose up -d', and this time it worked too; I was able to access the PHP page. So building the images was not the problem, which meant I had to look at the run command and play with it a bit.

Trial Without Docker-Compose:

Since I was able to run both containers through docker-compose without any issues, I went back to trying to run them without it. The first thing I noticed in the docker-compose.yml file is the network that gets created, so I tried this

docker build -t apache2 ./apache
docker build -t phptest ./php
docker network create testnet

docker run -d -v /home/user/Desktop/DockerTesting/public_html:/var/www/html -p 9000:9000  --network testnet --name phpcontainer phptest

docker run -d -v /home/user/Desktop/DockerTesting/public_html:/var/www/html -p 100:80 --network testnet --name apachecontainer apache2

I created a new network based on the documentation and attached both containers to it, but it didn't work either. Then I noticed that in the yml file the port of the php container was not mapped, so I also tried this for the php container

docker run -d -v /home/user/Desktop/DockerTesting/public_html:/var/www/html  --network testnet --name phpcontainer phptest

Nope, that didn't work either. I was at my wit's end, to the point that I even posted a StackOverflow question. It was then that I thought of inspecting the containers created with docker-compose and with docker run. So I ran both sets of containers, the two working ones with docker-compose and the other two with docker run, and then inspected them using

docker container inspect apachecontainer
docker container inspect phpcontainer

and did the same for the other two containers. Then I went through the output, comparing the working versions of apache and php with the non-working ones. The working versions had a few docker-compose labels, which seemed unimportant, but then I found something interesting under the Networks node: both working containers had an alias, apache and php respectively. It was then that it struck me that the demo.apache.conf file I was using had the following line

ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://php:9000/var/www/html/$1

The fcgi://php:9000/ referred to the alias "php". So I set a network alias for both containers and ran them with the commands

docker run -d -v /home/user/Desktop/DockerTesting/public_html:/var/www/html  --network testnet --network-alias php --name phpcontainer phptest

docker run -d -v /home/user/Desktop/DockerTesting/public_html:/var/www/html -p 100:80 --network testnet --network-alias apache --name apachecontainer apache2

And this time I was able to access the PHP pages by going to http://localhost:100. And finally, I was able to achieve what I had wanted to do. Next, I plan to install MySQL in a separate container and link it with these two. I will write about it in my next post
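For what it's worth, the aliases can also be pulled straight out of the inspect output with a format template instead of scrolling through the whole JSON (container name as above):

docker container inspect --format '{{json .NetworkSettings.Networks}}' apachecontainer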

Wednesday, 25 December 2019

Packaging apache in a container

Now that I have installed Docker, I decided to try to create an image for apache2. I found some references here. This is how it works: I create a file called dockerfile and put in a series of instructions; Docker uses those instructions to build the image from which containers can be created. I would need a base image on which apache2 can be installed. After some googling around I found that alpine is usually preferred as a base image due to its smaller size, but I decided to stick with ubuntu. Based on the documentation, I guessed I would need instructions like FROM, RUN, EXPOSE and CMD/ENTRYPOINT in my dockerfile.
I created a folder called DockerTesting and a subfolder apache2 inside which I created a file called dockerfile with the following content

FROM ubuntu
RUN ["apt-get","update"]
RUN ["apt-get","install","apache2","-y"]
EXPOSE 80
ENTRYPOINT systemctl start apache2.service

My understanding is that the FROM command retrieves a base image, in this case a slimmed-down version of ubuntu. The RUN command runs commands inside that image, so 'apt-get install apache2 -y' installs apache2 in the ubuntu image. The EXPOSE command exposes a port of the image outside the container; here I am exposing port 80 since apache serves HTTP requests on port 80 (and I didn't plan on implementing HTTPS for now). The ENTRYPOINT command is run once the container starts. I think I could have used the CMD command and got a similar result, but the documentation seems to prefer this method, especially since we can pass parameters to apache this way.

Now that everything seemed fine, it was time to build the image and create the container. These commands require sudo each time, hence I ran 'sudo su' at the start so that I don't have to type sudo every time. I cd'd into DockerTesting/apache2 and used the following command to build the image

docker build -t apache2

and ended up with the error '"docker build" requires exactly 1 argument.' along with 'Usage:  docker build [OPTIONS] PATH | URL | -'. So it requires a path. It took me some time to realize I had missed a dot at the end

docker build -t apache2 .

So if I am in the same directory as the dockerfile I need to use a dot at the end, or else give the path. Now the image got created, and I could verify it using the command 'docker images'


docker images command

I got confused seeing the ubuntu image at first, but a Google search cleared it up: it is just the base image my image is built on top of. Now, to create the container, I used the following command

docker run -p 100:80 --name apachecontainer apache2

I mapped port 80 of the container to port 100 of my machine, so that I could access the apache server at http://localhost:100, and gave my container a fancy name too. But on running the command I got the error "/bin/sh: 1: systemctl: not found". It seems the base ubuntu image does not have systemctl installed, so I either had to install systemctl while creating the image or find another way to start apache2 (I found a few ways here). I went with the second approach and decided to use apache2ctl instead of systemctl. To fix it I would have to rebuild the image and then recreate the container.
Running 'docker ps -a' shows that the container 'apachecontainer' was created but didn't run. Before changing my dockerfile I wanted to remove the faulty apache2 image, but to remove the image I first had to remove the container that uses it. I did both by running the following commands

docker rm apachecontainer
docker rmi apache2            

Now I made the following changes to the dockerfile

FROM ubuntu
RUN ["apt-get","update"]
RUN ["apt-get","install","apache2","-y"]
EXPOSE 80
ENTRYPOINT apache2ctl start

Now I built the image and created the container

docker build -t apache2 .                                            
docker run -p 100:80 --name apachecontainer apache2

This time I got the following output: "AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message". This is just a warning, and apache should have started, but I couldn't access it at http://localhost:100. What now? Running "docker ps -a" showed the following


The container started but was shut down immediately. A Google search revealed that the main process should run in the foreground, or else the container shuts down as soon as that process exits. The solution is to run the program in the foreground or to use a command that never ends, like 'tail -f /dev/null'. So I changed my dockerfile as follows

FROM ubuntu
RUN ["apt-get","update"]
RUN ["apt-get","install","apache2","-y"]
EXPOSE 80
ENTRYPOINT apache2ctl start && tail -f /dev/null 
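A common alternative to the tail trick, which I did not try here, is to run apache itself in the foreground so that it becomes the long-running process, roughly:

ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]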

Now recreate the image and run

docker rm apachecontainer
docker rmi apache2
docker build -t apache2 .      
docker run -p 100:80 --name apachecontainer apache2

And this time I was able to access apache at http://localhost:100.


Now there is only one thing left to do: replace the apache default page and add my own pages. To do that I need to use the -v parameter while running the container (as given here). I also found that the -d parameter runs the container in the background, detached from the terminal. So I removed the container and ran it again

docker rm apachecontainer
docker run -d -v  /home/user/Desktop/DockerTesting/htmlFiles:/var/www/html -p 100:80 --name apachecontainer apache2


Here I have linked the folder /DockerTesting/htmlFiles to the folder /var/www/html in the container. So now apache will be serving the files from /DockerTesting/htmlFiles. To verify I placed an index.html file in that location and tried to access http://localhost:100, and this is what I got


I also learned the following. To stop the container I can use

docker stop apachecontainer

and most importantly, to restart the container I should not use the run command again but rather the start command

docker start apachecontainer

So I guess I have installed apache2 in a container successfully. Next, I plan to install PHP. I want to install it in a separate container and link it with this one, but I am not sure if that is possible or whether I would have to install it in the same container as apache. I will try it out and post about it.

Tuesday, 24 December 2019

My attempt to learn Docker



Disclaimer: This and the upcoming posts don't try to teach about Docker, they are just a record of my attempt to learn Docker

I have recently heard a lot about Docker, so I thought why not give it a try, and here I am documenting my effort. To get some basic idea, I started by watching some YouTube videos; I liked the tutorials by Raghav Pal (I started at video 12, btw). Since videos 12 and 13 seemed to give a pretty good idea about creating a container, I felt I was ready to jump in. My idea is to set up a LAMP stack with apache, mysql and php containerized. Based on the videos I watched, I decided to create a separate image for each service and then run them together using docker-compose. But I didn't want to try it on my local machine since I was pretty sure I would mess it up, so I created a VM in VirtualBox and installed lubuntu in it.

I found the installation guide here and followed the procedure to set up Docker. Most of the commands need to be run with sudo, so I opened a terminal and ran 'sudo su' at the start

Updating the System:

I started by updating the system
apt-get update && apt-get upgrade


Install Prerequisites:

The documentation lists the prerequisites that must be installed, so once the update was done I ran the following command

apt-get install  apt-transport-https  ca-certificates curl  gnupg-agent software-properties-common


Add the Docker Repo:


curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

The documentation page has the commands to add the Docker repo for different architectures. Luckily mine was the one on the first (default) tab, so I added it with

add-apt-repository  "deb [arch=amd64] https://download.docker.com/linux/ubuntu  $(lsb_release -cs) stable"

Install docker:

Finally, I reached the stage where I am supposed to install Docker itself. I ran the following command to install it

apt-get install docker-ce docker-ce-cli containerd.io

and checked the version using the following command, just to verify that it was indeed installed

docker -v

It said Docker version 19.03.5, which was the latest version at the time. That seemed good enough, since I had installed it without having to reset my VM, but it wasn't over yet: I still had to install docker-compose.
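The install docs also suggest running the hello-world image as a fuller check that the engine can pull and run containers:

docker run hello-world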

Install docker-compose:

I read the documentation here and used the following command to get the file

curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose


and made it executable using

chmod +x /usr/local/bin/docker-compose

and checked the version using the following command,

docker-compose -v

It said docker-compose version 1.25.0. So I have managed to install docker and docker-compose successfully.

Next, I plan on creating an image for apache2 and run it. I will try it out and post about it.
