1/19/2020

Elasticsearch Hot Warm Cold Setup




We are going to deploy one master node, one hot data node, one warm data node, one cold data node, and one Kibana node on the same local machine. Then we will play with the Index Lifecycle Management (ILM) UI.

Steps:


1. Clone the GitHub project below and go to the cloned directory


https://github.com/dhanuka84/es-hot-warm-cold.git
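
For example:

git clone https://github.com/dhanuka84/es-hot-warm-cold.git
cd es-hot-warm-cold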

2. You need to change my local machine's configuration values to match your machine


Local IP: 192.168.0.114
Project Folder: /home/dhanuka/research/elastic/docker2/es-hot-warm-cold/

How to change: use the Linux commands below to replace all occurrences (new_ip and new_folder are placeholders for your own values)

find ./ -type f -exec sed -i 's/192.168.0.114/new_ip/g' {} \;

find ./ -type f -exec sed -i 's#home/dhanuka/research/elastic/docker2#new_folder#g' {} \;

Install Docker:

http://dhanuka84.blogspot.com/2019/02/install-docker-in-ubuntu.html

Change system settings: run the command below in a terminal (a plain sudo echo ... >> /etc/sysctl.conf would fail, because the output redirection is performed by your unprivileged shell, so we pipe through tee instead):

echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
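
To apply the setting immediately without rebooting:

sudo sysctl -w vm.max_map_count=262144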

3. Run the run-all.sh script as below

sh run-all.sh

You can check the terminal output in the nohup.out file.
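
For example:

tail -f nohup.out

Assuming the script starts the nodes as Docker containers (we installed Docker above), you can also check that they are all up with:

docker ps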

4. Then log in to Kibana as below.
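
Kibana's default port is 5601, so assuming the default port mapping the UI should be reachable at http://192.168.0.114:5601 (replace the IP with your own).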






5. Then go to the Monitoring tab

You can see the number of Elasticsearch and Kibana nodes along with their status.


6. Now let's create an ILM policy


Go to the Management tab and click Index Lifecycle Policies as below.


7. Let's create the policy as shown above and below.

Here we will create the hot, warm, cold, and delete phases of the index lifecycle.
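
If you prefer the API over the UI, an equivalent policy can be created with a request like the one below. The phase ages and rollover thresholds here are illustrative values only; adjust them to your needs. The policy name hotwarm-policy matches the one attached to the template in step 9, and my_node_type matches the node attribute used for allocation.

PUT _ilm/policy/hotwarm-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "30d" }
        }
      },
      "warm": {
        "min_age": "1d",
        "actions": {
          "allocate": { "require": { "my_node_type": "warm" } }
        }
      },
      "cold": {
        "min_age": "7d",
        "actions": {
          "allocate": { "require": { "my_node_type": "cold" } }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}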




8. Then you need to create an index template as below.


PUT _template/my_template
{
  "index_patterns": ["kibana_sample_data*"],
  "settings": {
    "index.routing.allocation.require.my_node_type": "hot"
  }
}

9. Finally, you need to attach the previously created ILM policy to the index template as below
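
If you want to do this through the API instead, the template update would look like the following; this mirrors exactly what the GET response in step 10 shows:

PUT _template/my_template
{
  "index_patterns": ["kibana_sample_data*"],
  "settings": {
    "index.lifecycle.name": "hotwarm-policy",
    "index.lifecycle.rollover_alias": "my_index_alias",
    "index.routing.allocation.require.my_node_type": "hot"
  }
}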





10. Now, whenever you create a new index based on that template, the ILM policy will be applied to it. You can verify the template as below.


Request:

GET _template/my_template

Response:

{
  "my_template" : {
    "order" : 0,
    "index_patterns" : [
      "kibana_sample_data*"
    ],
    "settings" : {
      "index" : {
        "lifecycle" : {
          "name" : "hotwarm-policy",
          "rollover_alias" : "my_index_alias"
        },
        "routing" : {
          "allocation" : {
            "require" : {
              "my_node_type" : "hot"
            }
          }
        }
      }
    },
    "mappings" : { },
    "aliases" : { }
  }
}
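
Note: because the template sets a rollover_alias, the rollover action needs a bootstrap index whose name matches the template's pattern and which holds the alias as its write index. A request like the one below would do it (the index name here is illustrative; it should end in a number so that rollover can increment it):

PUT kibana_sample_data-000001
{
  "aliases": {
    "my_index_alias": {
      "is_write_index": true
    }
  }
}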



1/07/2020

Setup Kubernetes Cluster with Ansible and VirtualBox VMs (Red Hat)




This is an improvement on the previous blog post. Here we are going to use Red Hat 7.6 as the OS for the virtual machines and Ansible as the Kubernetes (K8s) setup automation tool.


Steps:


1. You need to create three Red Hat virtual machines in VirtualBox. You can follow the article below for this.


user: dhanuka
user_password: root

root_password: root

https://developers.redhat.com/products/rhel/hello-world#fndtn-rhel



After successfully setting up the virtual machine, you need to install Guest Additions as described below.
https://www.virtualbox.org/manual/ch04.html

2. Once you have set up one virtual machine, you need to configure the network and other OS settings as we did in the previous blog post.



  • Please note that we install Docker with Ansible, so you don't have to install it manually. 
  • Also, we are using a host-only adapter and a NAT adapter for networking.









3. Disable SWAP


vim /etc/fstab

Comment out the lines that refer to swap, for example:

#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
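
The fstab change only takes effect on the next boot; to turn swap off immediately as well (Kubernetes requires swap to be disabled), run:

sudo swapoff -a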


4. Create a static IP


Type the command below and find the host-only adapter entry (enp0s17 in this example)

ifconfig -a


enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::1a2e:75d3:a38b:aac6  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:76:52:cd  txqueuelen 1000  (Ethernet)
        RX packets 73480  bytes 96132054 (91.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11196  bytes 2431057 (2.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s17: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.20  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::64f6:2e06:bb4a:ee00  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:af:7a:5f  txqueuelen 1000  (Ethernet)
        RX packets 88747  bytes 51715147 (49.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 67547  bytes 7154603 (6.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


sudo vim /etc/sysconfig/network-scripts/ifcfg-enp0s17

Insert the code block below into the above file (this example is for k8s-worker-node1):


HWADDR=08:00:27:AF:7A:5F
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.56.20
PREFIX=24
GATEWAY=192.168.56.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s17
UUID=1ab88d1d-3f78-47ab-b516-394c288a3f6f
ONBOOT=yes



Update the /etc/hosts file with the IP entries below

sudo vim /etc/hosts

192.168.56.10   k8s-worker-node2
192.168.56.20   k8s-worker-node1
192.168.56.30   k8s-master1 

Restart the VM

5. Now you can clone the VM as explained in the previous blog post and repeat steps 3 and 4 for each clone accordingly


6. Copy your SSH key to each virtual machine


ssh-copy-id -i ~/.ssh/id_rsa root@192.168.56.20
ssh-copy-id -i ~/.ssh/id_rsa root@192.168.56.10
ssh-copy-id -i ~/.ssh/id_rsa root@192.168.56.30
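
If you don't already have an SSH key pair on the host machine, generate one first:

ssh-keygen -t rsa -b 4096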


7. Clone this GitHub project and go to that directory



8. Here I am using a Python pip virtual environment. My host machine is Ubuntu, so the following commands will set up the virtual environment.


sudo apt install python-pip

python -m pip install --upgrade pip setuptools wheel

sudo apt-get install python3-venv

python3 -m venv k8s_env

source k8s_env/bin/activate
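
You can confirm the virtual environment is active by checking which Python binary is in use (it should point into k8s_env/bin):

which python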





9. Install Ansible 2.8.7


pip install 'ansible==2.8.7'
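
Verify the installation:

ansible --version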


10. Install Docker and set up Kubernetes: run the command below


ansible-playbook -i inventory/virthost/virthost.inventory playbooks/kube-install.yml -vvvvv
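
Once the playbook completes, you can verify the cluster from the master node (this assumes the playbook sets up kubectl and the kubeconfig there; the hostname is the one we added to /etc/hosts):

ssh root@192.168.56.30
kubectl get nodes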


11. Uninstall Kubernetes


ansible-playbook -i inventory/virthost/virthost.inventory playbooks/kube-teardown.yml -vvv