The purpose of this post is to give a high-level overview of how to secure and control network traffic in AWS EKS.
As seen in the above diagram, AWS EKS is a managed service, which means the control plane is AWS's responsibility, while the components on the worker nodes are the responsibility of the users/customers.
It's a shared responsibility model.
There are a couple of ways to control incoming and outgoing traffic to and from an EKS cluster.
Cluster VPC and subnet considerations
There are a couple of ways to set up a VPC and subnets.
1. Public and private subnets
2. Only public subnets
3. Only private subnets
Out of these, option one is the most common VPC and subnet setup for an EKS cluster.
This way we can deploy web services in public subnets while keeping backend services in private subnets.
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
This is the Kubernetes way of applying compliance rules to network traffic within the cluster.
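As a minimal sketch, a NetworkPolicy such as the following restricts ingress to backend pods so that only pods labeled `app: web` can reach them on a given port. The names, labels, and port here are illustrative, not taken from a real cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-web        # illustrative name
  namespace: default
spec:
  # Apply this policy to the backend pods
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    # Allow traffic only from pods labeled app: web, on TCP 8080
    - from:
        - podSelector:
            matchLabels:
              app: web
      ports:
        - protocol: TCP
          port: 8080
```

Note that NetworkPolicies only take effect if the cluster's CNI plugin supports them (on EKS, for example, via the VPC CNI's network policy support or a plugin such as Calico).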
The above example uses a PodAntiAffinity rule with topologyKey: "kubernetes.io/hostname" to deploy the Redis cluster so that no two instances are scheduled on the same host. See the ZooKeeper tutorial for an example of a StatefulSet configured with anti-affinity for high availability using the same technique.
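As a sketch, the relevant part of such a pod template looks like this (the `app: redis` label is illustrative):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      # Do not schedule two pods with the app: redis label
      # onto the same node (hostname defines the topology domain)
      - labelSelector:
          matchLabels:
            app: redis
        topologyKey: "kubernetes.io/hostname"
```

With `requiredDuringSchedulingIgnoredDuringExecution`, a pod stays Pending if no eligible node remains; use the `preferred...` variant instead if co-location should merely be discouraged rather than forbidden.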
The quickest solution is to edit the elasticsearch-master StatefulSet.
$ kubectl edit statefulset elasticsearch-master
We are going to make this a single-node Elasticsearch cluster by editing the configuration.
We have to comment out the cluster.initial_master_nodes environment variable and add discovery.type as an environment variable:
# - name: cluster.initial_master_nodes
#   value: 'elasticsearch-master-0,'
- name: discovery.type
  value: single-node
After editing, we can see a single-node Elasticsearch cluster.
4. Port forwarding ingress-controller, zeebe-gateway and elasticsearch
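The services can be exposed locally with kubectl port-forward. The service names and ports below are illustrative assumptions (26500 is Zeebe's default gateway port, 9200 Elasticsearch's HTTP port); check `kubectl get svc` for the actual values in your cluster:

```shell
# Forward the ingress controller to localhost:8080
kubectl port-forward svc/ingress-controller 8080:80 &

# Forward the Zeebe gateway so local clients can reach localhost:26500
kubectl port-forward svc/zeebe-gateway 26500:26500 &

# Forward Elasticsearch's HTTP API to localhost:9200
kubectl port-forward svc/elasticsearch-master 9200:9200 &
```

Each command runs in the background and keeps forwarding until it is killed or the pod behind the service restarts.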
Two workflow instances are created: one for a person and one for a building on fire.
At this point, you will see that they are both stuck at the Classify Emergency task. This is because there are no workers for such tasks yet, so the process will wait in that state until we provide one.
Starting a simple Spring Boot Zeebe Worker
cd src/zeebe-worker-spring-boot/
mvn clean package
mvn spring-boot:run
The worker is configured by default to connect to localhost:26500 to fetch Jobs. If everything is up and running the worker will start and connect, automatically completing the pending tasks in our Workflow Instances.
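If the gateway is not on localhost:26500, the address can be overridden in application.properties. This sketch assumes the project uses the spring-zeebe starter, whose client reads these properties; the values shown are the defaults for a local, unsecured gateway:

```properties
# Address of the Zeebe gateway the worker connects to
zeebe.client.broker.gatewayAddress=localhost:26500
# Disable TLS for a local development gateway
zeebe.client.security.plaintext=true
```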
You can see the completed events.
Once the tasks are completed, there won't be any active instances.
Understanding the BPMN workflow
In Camunda Operate, once you click one of the instance IDs, it navigates to the Instance History view.