12/06/2019

Working with Ansible and Docker




As shown in the diagram, we provision a Docker container on a remote host using an intermediate Docker container (Docker-Ansible) that has Ansible installed.

The Ansible playbook therefore runs inside a Docker container on the local host.

This way you don't have to install Ansible on the local machine.

We can also execute multiple playbooks at the same time.

Please note that I am using a single machine for this demonstration. You can use two virtual machines with different IP addresses instead.


Steps


1. Install the OpenSSH server on Ubuntu.

Please follow the article below:

https://www.cyberciti.biz/faq/ubuntu-linux-install-openssh-server/

2. Set up SSH keys in Ubuntu.

Please follow the articles below:

https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys-on-ubuntu-1604

https://www.ssh.com/ssh/copy-id

3. Install Docker

https://dhanuka84.blogspot.com/2019/02/install-docker-in-ubuntu.html

4. Install Python

https://tecadmin.net/install-python-2-7-on-ubuntu-and-linuxmint/

5. Clone the two GitHub projects below

git clone https://github.com/dhanuka84/docker-ansible.git
git clone https://github.com/dhanuka84/docker-ansible-playbook.git


6. Build each project

cd docker-ansible/master-ubuntu16.04
docker-ansible/master-ubuntu16.04$ docker build -t ansible-docker:master-ubuntu16.04 .

cd ../../docker-ansible-playbook
docker-ansible-playbook$ docker build -t docker-ansible-intermediate:latest .

7. Provision the container using the intermediate container

cd   docker-ansible-playbook
docker-ansible-playbook$  ./start.sh
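Once the script finishes, you can verify the provisioned container on the remote host; docker-test is the container name set in the docker.yml playbook explained below:

docker ps --filter name=docker-test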















Explanation


1. Docker image ansible-docker:master-ubuntu16.04


If you look at the Dockerfile, you can see that it sets up everything described in the diagram for Docker-Ansible. This is the generic Ubuntu-based image.


2. docker-ansible-intermediate:latest



We use the previous image as the parent image here. This image mainly sets up the SSH keys and known hosts.

Note that defining ssh-keyscan 192.168.0.114 > /root/.ssh/known_hosts at the image level is not the best approach; we did it this way only for demo purposes.

The better approach is to do the same at the container level (in start.sh) instead of at the image level, as sketched below.
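A minimal start.sh-style sketch of the container-level approach; the host IP is the demo value, and the inventory file name hosts is an assumption, not the repo's actual script:

#!/bin/bash
# Populate known_hosts at container start instead of baking it into the
# image, so the image stays host-agnostic.
ssh-keyscan 192.168.0.114 > /root/.ssh/known_hosts
exec ansible-playbook -i hosts docker.yml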



3. The docker.yml playbook (docker-ansible-playbook$ vim docker.yml)


---
- hosts: docker
  # gather_facts: no
  tasks:
    - name: Create container
      docker_container:
        name: docker-test
        # docker_host: "tcp://localhost:22"
        # nitincypher/docker-ubuntu-python-pip is a pre-built Ubuntu-based image
        # with python-pip installed, as explained in the diagram. We use this
        # image as the provisioned container.
        image: nitincypher/docker-ubuntu-python-pip:latest
        # image: ansible:ubuntu16.04
        command: sleep 1d
        detach: true
        interactive: true
        tty: true
        # tls_hostname: localhost
        tls_verify: yes
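For hosts: docker to resolve, the container also needs an inventory that defines a docker group. A minimal sketch, assuming the demo host IP and root SSH access (the repo's actual inventory may differ):

cat > hosts <<'EOF'
[docker]
192.168.0.114 ansible_user=root
EOF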



References:


https://developer.ibm.com/tutorials/cl-provision-docker-containers-ansible/






12/02/2019

Setup Kubernetes Cluster with VirtualBox








Steps


1. Install Oracle VirtualBox & Extension Pack


Download from here - https://www.virtualbox.org/wiki/Download_Old_Builds_5_2

2. Download the Ubuntu ISO image (ubuntu-18.04.3-desktop-amd64.iso) from the link below


https://ubuntu.com/download/desktop

3. Create a virtual machine in VirtualBox.

The minimum requirement is 2 GB RAM and 1 vCPU. (Note: kubeadm's preflight checks expect at least 2 CPUs on the master, so give the master VM 2 vCPUs if possible.)

You can follow the blog below for more details.

https://www.wikihow.com/Install-Ubuntu-on-VirtualBox


4. Sample virtual machine




After successfully setting up the virtual machine, you need to install the VirtualBox Guest Additions as described below.
https://www.virtualbox.org/manual/ch04.html

5. Key points when creating a virtual machine


Network Settings:

  • You need internet access, so use a NAT network adapter.
  • A bridged adapter is used as the cluster communication network, with its IP set as a static IP.
  • Configure the adapters as shown in the image.

6. Once the virtual machine is set up, you need to install Docker and disable swap


7. Set up the virtual machine. Execute the following commands in a terminal:


sudo apt-get update

sudo apt install net-tools

Change the hostname:

sudo hostnamectl set-hostname "k8s-master"

exec bash

8. Install Docker


sudo dpkg --configure -a

Install the relevant packages, add Docker's GPG key, and add the Docker repository:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

sudo apt update

apt-cache policy docker-ce

sudo apt install docker-ce


Use Docker as a normal user:

sudo usermod -aG docker ${USER}

su - ${USER}

sudo systemctl start docker

sudo systemctl enable docker


9. Create a static IP


Type the command below and find the enp0s9 entry:

ifconfig -a


enp0s9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.166  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::a00:27ff:fe7c:faba  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:7c:fa:ba  txqueuelen 1000  (Ethernet)
        RX packets 21115  bytes 8269767 (8.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14770  bytes 1610853 (1.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

sudo vim /etc/network/interfaces

Insert the code block below into that file:


auto enp0s9
iface enp0s9 inet static
    address 192.168.0.166
    netmask 255.255.255.0
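Note that Ubuntu 18.04 manages networking with Netplan by default, so /etc/network/interfaces only takes effect if the ifupdown package is installed. An alternative sketch using Netplan (the file name 99-static.yaml is arbitrary):

sudo tee /etc/netplan/99-static.yaml <<'EOF'
network:
  version: 2
  ethernets:
    enp0s9:
      addresses: [192.168.0.166/24]
EOF
sudo netplan apply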

Add the IP entries below to the /etc/hosts file:

sudo vim /etc/hosts

192.168.0.166      k8s-master
192.168.0.178      k8s-worker-node2


10. Install Kubernetes


Install the relevant packages and add the Kubernetes repository (the apt.kubernetes.io packages are signed with Google's key):

sudo apt install apt-transport-https ca-certificates curl software-properties-common

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

sudo apt update


Disable swap

sudo swapoff -a

sudo vim /etc/fstab

Comment out the line(s) that refer to swap, for example:

#/swapfile                                 none            swap    sw              0       0
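You can confirm swap is now off; the Swap line should show zero totals:

free -h | grep -i swap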

sudo apt-get install kubeadm -y


11. Clone the virtual machine as the worker/slave node

In VirtualBox, right-click the virtual machine and choose Clone, making sure to select a FULL clone rather than a linked clone.


12. Setup Kubernetes Master



sudo kubeadm init  --apiserver-advertise-address=192.168.0.166 --pod-network-cidr=192.168.0.0/24

(Note: Flannel's manifest defaults to the 10.244.0.0/16 pod network, and 192.168.0.0/24 overlaps the host network here, so consider --pod-network-cidr=10.244.0.0/16 unless you have customized kube-flannel.yml.)

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Save the output below to a file:

kubeadm join 192.168.0.166:6443 --token pa36ca.1e073aktjf9rac6m \
    --discovery-token-ca-cert-hash sha256:b850d754741005a6d7f2a096441d3c229dd23bc3a0078157c5d815a22216bc34 
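If you lose this output or the token expires (tokens are valid for 24 hours by default), you can regenerate the full join command on the master:

sudo kubeadm token create --print-join-command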

13. Setup Kubernetes on the slave node

On the cloned machine, repeat steps 9 and 10 and change the hostname. Then execute the command below to join the slave node to the cluster:

sudo kubeadm join 192.168.0.166:6443 --token pa36ca.1e073aktjf9rac6m \
    --discovery-token-ca-cert-hash sha256:b850d754741005a6d7f2a096441d3c229dd23bc3a0078157c5d815a22216bc34

14. Deploy Flannel as the pod network

Applying the manifest once from the master is enough; it creates a DaemonSet that runs a Flannel pod on every node.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
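You can confirm a Flannel pod is running on each node:

kubectl -n kube-system get pods -o wide | grep flannel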


15. Verify the cluster from master node


kubanetes@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-vwnfx             0/1     Running   0          8m37s
kube-system   coredns-5644d7b6d9-zwgvn             0/1     Running   0          8m37s
kube-system   etcd-k8s-master                      1/1     Running   0          7m59s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          8m1s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          7m51s
kube-system   kube-flannel-ds-amd64-wk5s6          1/1     Running   0          27s
kube-system   kube-proxy-xgcjh                     1/1     Running   0          8m37s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          7m56s



kubanetes@k8s-master:~$ kubectl get nodes
NAME               STATUS   ROLES    AGE    VERSION
k8s-master         Ready    master   26m    v1.16.3
k8s-worker-node2   Ready    <none>   114s   v1.16.3



6/09/2019

Flume with Docker





Use case

Send a file to Flume over TCP and log its content to the console.
  • Assumption: Docker is already installed.

Steps


1. Clone the GitHub project below.

https://github.com/dhanuka84/docker-flume

2. Build the Docker image using the following command

docker build -t my-flume-image .

3. Change the configuration in the following files to match your local machine (a sketch of a typical agent configuration follows the list).

config/*
run-fl.sh
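For reference, a netcat-source-to-logger-sink agent configuration matching this use case might look like the sketch below; the agent/component names and file path are illustrative, and the repo's actual config/* may differ:

cat > config/flume.conf <<'EOF'
# Netcat source -> memory channel -> logger sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4444
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
EOF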

4. Execute the bash script

sh run-fl.sh

5. Send a file through the TCP tunnel

netcat localhost 4444 < README.md

6. Output from the Docker container
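You can follow the agent's console output with docker logs; the container name below is an assumption, so check what run-fl.sh assigns:

docker logs -f flume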




References:


1. https://flume.apache.org/releases/content/1.8.0/FlumeUserGuide.html
2. https://github.com/mrwilson/docker-flume
3. https://blog.probablyfine.co.uk/2014/08/24/using-docker-with-apache-flume-2.html



2/09/2019

Rule Execution as Streaming Process with Flink


As explained in the diagram above, the rule creator (Desktop) creates JSON-based rules and pushes them to Kafka (the rule topic). The event source sends events to Kafka (the testin topic). Flink consumes both rules and events as streams and processes the rules keyed by driver ID. Rules are stored in Flink as an in-memory collection and can be updated in the same manner. Finally, the output results are sent to Kafka (the testout topic).



Setup Flink



  • Download Apache Flink 1.6.3 from the location below and extract the archive file.

          https://www.apache.org/dyn/closer.lua/flink/flink-1.6.3/flink-1.6.3-bin-scala_2.11.tgz


  • Download the dependency JAR files below and place them in the flink-1.6.3/lib folder.





  • Configure Flink via flink-1.6.3/conf/flink-conf.yaml.


         Change the JobManager and TaskManager heap sizes to fit your machine, for example:

         jobmanager.heap.size: 1g
         taskmanager.heap.size: 2g
         taskmanager.numberOfTaskSlots: 20

  • Change State Backend to RocksDB

Create the folder under your home location:

         ~$ mkdir -p data/flink/checkpoints

Edit the configuration:

          state.backend: rocksdb
          state.checkpoints.dir: file:///home/dhanuka/data/flink/checkpoints


  • Start Flink cluster in standalone mode.

           :~/software/flink-1.6.3$ ./bin/start-cluster.sh

Setup Kafka & Zookeeper


  • Here I am using Docker and Docker Compose, so you can follow the blog post below to install them on Ubuntu.

          http://dhanuka84.blogspot.com/2019/02/install-docker-in-ubuntu.html


  • Check out the Docker project below from GitHub.

          https://github.com/dhanuka84/my-docker.git 


  • Change the IP address to your machine's IP address in:

         https://github.com/dhanuka84/my-docker/blob/master/kafka/kafka-hazelcast.yml


  • Bootup Kafka and Zookeeper with docker-compose
          my-docker/kafka$ sudo docker-compose -f kafka-hazelcast.yml up


  • Check the Docker containers
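For example, list the running containers; you should see the Kafka and ZooKeeper containers from the compose file:

sudo docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'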





  • Create Kafka Topics

         Download Confluent Platform - https://www.confluent.io/download/

  • Go to the extracted Confluent Platform location and run the commands below



bin/kafka-topics  --create --zookeeper localhost:2181 --replication-factor 1 --partitions 6 --topic  testin
bin/kafka-topics  --create --zookeeper localhost:2181 --replication-factor 1 --partitions 6 --topic  testout
bin/kafka-topics  --create --zookeeper localhost:2181 --replication-factor 1 --partitions 6 --topic  rule
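You can confirm the topics were created:

bin/kafka-topics --list --zookeeper localhost:2181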

Create the Java-based Rule Job


  • Check out the project from GitHub and build it with Maven.

          https://github.com/dhanuka84/stream-analytics
       
          stream-analytics$ mvn clean install -Dmaven.test.skip=true


          Please note that both rules and events are keyed by driver ID:

           KeyedStream<Event, String> filteredEvents = events.keyBy(Event::getDriverID);
           rules.broadcast().keyBy(Rule::getDriverID)

           Rules are stored in ListState in the flatMap1 method, and each rule is executed against the relevant events in the flatMap2 method.

           Finally, the results are transformed to JSON strings.
  • Prepare the fat JAR to upload, using the command below from within the checked-out project home.

stream-analytics$ jar uf target/stream-analytics-0.0.1-SNAPSHOT.jar application.properties consumer.properties producer.properties

  • Upload the Flink job JAR file

         Copy stream-analytics-0.0.1-SNAPSHOT.jar to the Flink home folder and run the command below:

         bin/flink run stream-analytics-0.0.1-SNAPSHOT.jar
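You can confirm the job is running from the command line (or via the Flink web UI at http://localhost:8081):

         bin/flink list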

Testing

  • Run the Producer.java class from your favorite IDE. This generates both events and rules and publishes them to Kafka.
  • To verify at the Kafka level, use the commands below from the Confluent home location.


Get the current offset:

bin/kafka-run-class kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic testout --time -1

Read from the Kafka topic:

bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic testout

2/08/2019

Install Docker in Ubuntu

1. Go to the link below
https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/

2. Download the Debian packages below
docker-ce-cli_18.09.1_3-0_ubuntu-xenial_amd64.deb
containerd.io_1.2.2-1_amd64.deb
docker-ce_18.09.1_3-0_ubuntu-xenial_amd64.deb


3. Install them using the command below (containerd.io first, then docker-ce-cli, then docker-ce, since dpkg does not resolve dependencies)
sudo dpkg -i /path/to/package.deb

4. Download docker-compose (sudo is needed to write to /usr/local/bin)

sudo curl -L https://github.com/docker/compose/releases/download/1.23.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose


5. Make docker-compose executable
sudo chmod +x /usr/local/bin/docker-compose

6. Check Versions
docker --version
docker-compose --version

7. Check whether Docker is running

service docker status


8. Use Docker as a normal user

sudo systemctl stop docker
sudo usermod -aG docker ${USER}
su - ${USER}
sudo systemctl start docker
sudo systemctl enable docker
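
To confirm that Docker works for the non-root user (after logging out and back in so the group change takes effect), run a quick smoke test:

docker run hello-world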