28 January 2023

Enable Container Insights for EKS Fargate in CloudWatch

In this article, we will demonstrate how to implement Container Insights metrics using an AWS Distro for OpenTelemetry (ADOT) collector on an EKS Fargate cluster, to visualize your cluster & container data at every layer of the performance stack in Amazon CloudWatch. This presumes you already have an EKS cluster with a Fargate profile up and running (if not, follow the article to set one up); then it involves only the following things -

- ADOT IAM service account
- ADOT collector
- Fargate profile for the ADOT collector

Create the ADOT IAM service account


eksctl create iamserviceaccount \
--cluster btcluster \
--region eu-west-2 \
--namespace fargate-container-insights \
--name adot-collector \
--attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
--override-existing-serviceaccounts \
--approve
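
Once created, you can verify that eksctl annotated the service account with the IAM role it provisioned:

 kubectl describe serviceaccount adot-collector -n fargate-container-insights
 # look for the eks.amazonaws.com/role-arn annotation pointing to the generated role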


In case you need to delete the IAM service account -




eksctl delete iamserviceaccount --cluster btcluster --name adot-collector -n fargate-container-insights

Deploy the ADOT collector


wget https://github.com/punitporwal07/kubernetes/blob/master/monitoring/cloudwatch-insight/eks-fargate-container-insights.yaml

kubectl apply -f eks-fargate-container-insights.yaml
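
This creates the adot-collector StatefulSet in the fargate-container-insights namespace; on a Fargate-only cluster its pod will stay Pending until the Fargate profile from the next step exists:

 kubectl get statefulset,pods -n fargate-container-insights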



Create a Fargate profile for the adot-collector pod, which is deployed as a StatefulSet


eksctl create fargateprofile --name adot-collector --cluster btcluster -n fargate-container-insights 
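
Once the profile is active, the collector pod should get scheduled onto Fargate (Fargate nodes show up with fargate-ip-* names):

 kubectl get pods -n fargate-container-insights -o wide
 # adot-collector-0 should be Running on a fargate-ip-... node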


Navigate to AWS CloudWatch

Services > CloudWatch > Logs > Log groups & search for insight


Services > CloudWatch > Insights > Container Insights > Resources


Services > CloudWatch > Insights > Container Insights > Container map


Some helpful commands

To scale down the StatefulSet (pin it to a non-existent node so no pod is scheduled) -

kubectl -n fargate-container-insights patch statefulset.apps/adot-collector -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'

To scale the StatefulSet back up (remove that nodeSelector) -


kubectl -n fargate-container-insights patch statefulset.apps/adot-collector --type json -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector/non-existing"}]'


ref - https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-container-insights-eks-fargate/

02 November 2022

RBACs in Kubernetes

Security should be our top priority. In Kubernetes, role-based access control (RBAC) is used to grant users access to API resources. RBAC is a security design that restricts access to Kubernetes resources based on the roles assigned to users or service accounts.

There are two ways we can restrict and apply RBAC policies

1. Namespace-wide RBAC policies

Define manifests to implement RBAC policies at the namespace level, as sketched below.
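
A minimal sketch of such a manifest (the Role and ServiceAccount names are illustrative):

 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
   name: myapp-reader               # illustrative name
   namespace: myapp
 rules:
 - apiGroups: [""]
   resources: ["pods", "services"]
   verbs: ["get", "list", "watch"]
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
   name: myapp-reader-binding
   namespace: myapp
 subjects:
 - kind: ServiceAccount
   name: istio-sa                   # illustrative ServiceAccount deployed in myapp
   namespace: myapp
 roleRef:
   kind: Role
   name: myapp-reader
   apiGroup: rbac.authorization.k8s.io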

With the above example, you have restricted access to a specific namespace (myapp) for a specific application (Istio) linked with a ServiceAccount deployed in the myapp namespace, but if you wish to access the resources of another namespace, say myapp2, you will get a 403 Forbidden error.

2. Cluster-wide RBAC policies
ClusterRoles are similar to Roles; however, when assigned to a ServiceAccount they can grant cluster-wide permissions to access resources.

Define manifests to implement RBAC policies at the cluster-wide level, as sketched below.
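
A minimal sketch of such manifests (resource names are illustrative):

 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: app-reader                 # illustrative name
 rules:
 - apiGroups: [""]
   resources: ["pods", "services", "namespaces"]
   verbs: ["get", "list", "watch"]
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: app-reader-binding
 subjects:
 - kind: ServiceAccount
   name: istio-sa                   # the same ServiceAccount from the myapp namespace
   namespace: myapp
 roleRef:
   kind: ClusterRole
   name: app-reader
   apiGroup: rbac.authorization.k8s.io

Note that a ClusterRole can also be reused through namespaced RoleBindings; that is how you would limit the same ServiceAccount to just the myapp and myapp2 namespaces instead of the whole cluster.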

With the above example, we have bound ClusterRoles to a ServiceAccount to access resources at the cluster-wide level.


If the user tries to access another namespace or a system namespace like kube-system, it will certainly throw a Forbidden error, because the user/ServiceAccount we created only has access to the myapp & myapp2 namespaces. Usually, developers do not need access to the system namespace (kube-system).
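
You can confirm these boundaries without deploying anything by impersonating the ServiceAccount (names as in the sketches above):

 kubectl auth can-i list pods -n myapp --as=system:serviceaccount:myapp:istio-sa        # yes
 kubectl auth can-i list pods -n kube-system --as=system:serviceaccount:myapp:istio-sa  # no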

29 March 2021

Minikube - lightweight Kubernetes cluster

 Launching Kubernetes as a single node cluster locally

Minikube is a tool that allows you to launch Kubernetes locally. Minikube runs as a single-node k8s cluster inside a VM on your local machine; before you install kubectl, do as below



 # Install minikube on Ubuntu
 # setup minikube using the script here -
   https://github.com/punitporwal07/minikube/blob/master/install-minikube-v2.sh

 # Install minikube on Linux
 # use this script to launch a k8s-cluster locally and interact with Minikube: install-minikube.sh
 $ git clone https://github.com/punitporwal07/minikube.git
 $ cd minikube
 $ chmod +x install-minikube.sh

 Now add your localuser as a sudo-user, with root do the following -
 $ vi /etc/sudoers
   next to root ALL=(ALL) add as below for your user

   localuser ALL=(ALL) NOPASSWD:ALL

 $ su - localuser
 $ ./install-minikube.sh
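
Once the script completes, a quick sanity check:

 $ minikube status
 $ kubectl config current-context   # should return minikube
 $ kubectl get nodes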





Basic minikube commands

Function                                    Command
verify kubectl can talk to the cluster      kubectl config current-context   (should return minikube)
start/stop the cluster with resources       minikube start / minikube stop
                                            minikube start --cpus 6 --memory 8192
delete the minikube node/cluster            minikube delete
start a version-specific kube node          minikube start --vm-driver=none --kubernetes-version="v1.20.0"
check node info                             kubectl get nodes
kubernetes cluster-info                     kubectl cluster-info
kubectl binary for Windows                  kubectl.exe
minikube 64-bit installer                   minikube-installer.exe


31 January 2021

Launch EKS using eksctl in AWS

There are several ways to run kubernetes to deploy applications. When you are running out of resources to manage the kubernetes cluster yourself, you have the option to use a kubernetes service backed by a cloud provider that takes all the burden of managing the cluster on your behalf, known as a managed kubernetes service; when deployed on AWS it is called Elastic Kubernetes Service (EKS).

In this article, I will call kubernetes+cluster=Kluster

1. To start with EKS you need to fulfil the following prerequisites.

    a. Own an AWS account (free tier will work)
    b. Create a VPC (a virtual private cloud that will not affect other components in your account)
    c. Create an IAM role with a security group (an AWS user with a list of permissions to set up EKS)

2. To create a Kluster Control Plane you should have the following in place

    a. Kluster name, kubernetes version
    b. Region & VPC for kluster
    c. Security for kluster

3. Create worker nodes for your kluster (a set of EC2 instances)

    a. Create as a Node Group (autoscaling enabled)
    b. Choose kluster it will attach to
    c. Define security group, select instance type, resources
    d. Define max & min number of nodes.

Then we use kubectl from our local machine to access the kluster and deploy resources on it.

Alternatively, we can use eksctl, the official CLI tool for creating & managing klusters on EKS; it is written in Go & uses CloudFormation to set up EKS quickly & effectively.

 
  # the command below will do all the work mentioned above
  # at runtime, using default values
  $ eksctl create cluster 
  

Let's demonstrate how this is done.

First, install eksctl; to use this utility, follow the instructions here.

NOTE: Before creating a cluster using the eksctl utility, it is important to connect and authenticate to your AWS account; follow the instructions to connect.


 $ aws configure // that will prompt you to provide 
   AWS Access Key ID [*********SIXH]: AKIAUYDUMMY73NXMF
   AWS Secret Access Key [*********6oB6]: mbJBDUMMYfm09oqONa
   Default region name [ap-south-1]:
   Default output format [None]: json
  
   // this user will be able to access the cluster
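
 To confirm which identity the CLI will use before creating anything:

 $ aws sts get-caller-identity   // returns the UserId, Account and Arn of the authenticated user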


Once this is done, start creating your cluster using eksctl

 
 $ eksctl create cluster \
   --name first-kluster \
   --version 1.20 \
   --region ap-south-1 \
   --nodegroup-name linux-worker-nodes \
   --node-type t2.micro \
   --nodes 2


Alternatively, you can create a node group by running the following command
 
 # create public node group
 $ eksctl create nodegroup --cluster=first-kluster \
   --region=ap-south-1 \
   --name=node-group-name \
   --node-type=t2.micro \
   --nodes=2 \
   --nodes-min=2 \
   --nodes-max=4 \
   --node-volume-size=20 \
   --ssh-access \
   --ssh-public-key=ssh-key-pair-name \
   --managed \
   --asg-access \
   --external-dns-access \
   --full-ecr-access \
   --appmesh-access \
   --alb-ingress-access

 # List NodeGroups
 $ eksctl get nodegroup --cluster=<clusterName>

 # List Nodes
 $ kubectl get nodes -o wide

 # if it fails to list nodes -
   export KUBECONFIG=$LOCATIONofKUBECONFIG/kubeconfig_myEKSCluster
   export AWS_DEFAULT_REGION=eu-west-2
   export AWS_DEFAULT_PROFILE=dev

 $ aws eks update-kubeconfig --name myEKSCluster --region=eu-west-2
 Added new context arn:aws:eks:eu-west-2:295XXXX62576:cluster/myEKSCluster to $PWD/kubeconfig_myEKSCluster


eksctl also provides an option to create a customized cluster using a YAML file


 apiVersion: eksctl.io/v1alpha5
 kind: ClusterConfig

 metadata:
   name: first-kluster
   region: ap-south-1

 nodeGroups:
   - name: linux-worker-nodes-1
     instanceType: t2.micro
     desiredCapacity: 5
   - name: linux-worker-nodes-2
     instanceType: m5.large
     desiredCapacity: 2

 $ eksctl create cluster -f cluster.yaml    


This will automatically create and assign a VPC and subnets.


Once this is done, you will be able to see your cluster in the AWS console and at the same time see its nodes in the CLI using kubectl get nodes






Once the cluster is created, you will be able to find its credential file under ~/.kube/config, which can further be used with different tools and products for integration and monitoring of your kluster.
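
For a quick check that kubectl is pointing at the new kluster:

  $ kubectl config current-context
  $ kubectl config get-contexts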

  
  # In-order to delete a cluster
  $ eksctl delete cluster --name first-kluster
  

--

08 April 2020

Kubernetes cheatsheet

Kubernetes is an orchestration framework for containers that gives you portability for managing containerized workloads and services in the form of pods. There are two types of CLI commands -
Imperative - imperative commands are one-liners.
Declarative - declarative commands use a definition of objects in a file that a developer can refer to again.
There are different CLI tools that allow you to run commands against a Kubernetes cluster, some of which I have attempted to collect and put together below.
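
As a quick illustration of the two styles, the same deployment can be created either way (names here are illustrative):

 # imperative: a one-liner against the cluster
 kubectl create deployment web --image=nginx --replicas=2

 # declarative: describe the object in a file, then apply it
 # web-deploy.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: web
 spec:
   replicas: 2
   selector:
     matchLabels:
       app: web
   template:
     metadata:
       labels:
         app: web
     spec:
       containers:
       - name: web
         image: nginx

 kubectl apply -f web-deploy.yaml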

# kubectl auto-complete
echo "source <(kubectl completion bash)" >> ~/.bashrc

# initialize cluster
kubeadm init --apiserver-advertise-address=MASTERIP --pod-network-cidr=192.168.0.0/16

# verify cluster-info
kubectl cluster-info --minify

# verify kluster + components
kubectl version --short && \
kubectl get componentstatus && \
kubectl get nodes --show-labels && \
kubectl cluster-info

# reset cluster
kubeadm reset -f && rm -rf /etc/kubernetes/

# delete tunl0 iface
modprobe -r ipip

# delete pods forcefully
kubectl delete pods --all -n kube-system --grace-period=0 --force

# deregister a node from the cluster (unscheduling enabled)
kubectl cordon nodeName
kubectl drain nodeName
kubectl drain nodeName --ignore-daemonsets --delete-local-data --force
kubectl delete node nodeName

# scheduling enabled again
kubectl uncordon nodeName

# add a taint to a node
kubectl taint nodes node01 key1=value1:NoSchedule

# remove a taint from a node
kubectl taint nodes node01 key1=value1:NoSchedule-

# label a node
kubectl label nodes node01 key=value

# setting namespace preference
kubectl config set-context --current --namespace=<namespace-name>

# validate current namespace
kubectl config view --minify | grep namespace

# list everything in the cluster
kubectl get all --all-namespaces

# investigate any object
kubectl describe node/deployment/svc <objectName>

# investigate kubelet service
sudo journalctl -u kubelet

# exposing deployment as a service
kubectl expose deploy/web --type=NodePort --name=my-svc
kubectl expose deploy/web --port=9443 --target-port=61002 --name=mysvc --type=LoadBalancer

# patch a svc from ClusterIP to NodePort
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'

# port forwarding in svc
kubectl port-forward svc/my-service -n myNamespace 8080:443
kubectl proxy --port=61000 --address=0.0.0.0 --accept-hosts '.*' &

# scaling your deployment
kubectl scale --current-replicas=3 --replicas=4 deployment/my-deployment
kubectl scale deployment/my-deployment --replicas=2 -n my-namespace

# use a service deployed in another namespace
DNS name: <service-name>.<namespace>.svc.cluster.<domain>

# all possible attributes of an object
kubectl explain pod --recursive

# wide details of running pods
kubectl get pods -o wide

# delete a pod forcefully
kubectl delete pod mypodName --grace-period=0 --force --namespace myNamespace

# delete bulk resources from a namespace
kubectl delete --all po/podName -n my-namespace

# open a bash terminal in a pod
kubectl exec -it podName -- bash

# run a shell command
kubectl exec -it podName -- cat /etc/hosts

# create a yaml manifest without sending it to the cluster [imperative way]
kubectl create deploy web --image=nginx --dry-run -o yaml > web.yaml

# apply a folder of yaml files
kubectl apply -R -f .

# validate a yaml
kubectl create --dry-run=client --validate -f file.yaml

# create a deployment
kubectl create deploy --image=nginx web --replicas=3

# edit deployment web at runtime
kubectl edit deploy/web

# autoscale deployment
kubectl autoscale deploy/web --min=2 --max=5 --cpu-percent=10

# rolling update
kubectl set image deploy/web web=registry/web:2.0

# validate the rollout
kubectl rollout status deploy/web

# undo the rollout
kubectl rollout undo deploy/web

# passing a configmap string
kubectl create configmap my-config --from-literal=MESSAGE="hello from configmap"

# passing a cm as a properties file
kubectl create cm my-config --from-file=my.properties

# query health check endpoint
curl -L http://localhost:8080/healthz

# dump logs
kubectl logs podName
kubectl logs podName -c containerName

# run kubectl against pods using xargs
kubectl get pods -o name | xargs -I{} kubectl exec {} -- command

# fetch 1st column from output of multiple pods
kubectl get pods -n ns | grep -v NAME | sed 's/\|/ /' | awk '{print $1}'

# refine output with a specific value
kubectl get pods -n ns | grep -v NAME | awk '{print $1}' | cut -c8-14

# calculate max pods in an EC2 instance type
curl -O https://gist.githubusercontent.com/punitporwal07/4dccce3e51503b8fc786d754e64fbe6f/raw/0e4d130db82ee953ce9f93366a35b803cac39faa/max-pods-calculator.sh
chmod +x max-pods-calculator.sh
aws configure
./max-pods-calculator.sh --instance-type m5.large --cni-version 1.9.0-eksbuild.1



06 March 2019

Different API versions to use in your manifest file

According to Kubernetes: the API server exposes an HTTP API that lets end users, different parts of your cluster, and external components communicate with one another. The Kubernetes API lets you query and manipulate the state of objects in the Kubernetes API (for example: Pods, Namespaces, ConfigMaps, and Events).


                                  APIs are the gateway to your kubernetes cluster

Kind                          apiVersion
CertificateSigningRequest     certificates.k8s.io/v1beta1
ClusterRoleBinding            rbac.authorization.k8s.io/v1
ClusterRole                   rbac.authorization.k8s.io/v1
ComponentStatus               v1
ConfigMap                     v1
ControllerRevision            apps/v1
CronJob                       batch/v1beta1
DaemonSet                     extensions/v1beta1
Deployment                    extensions/v1beta1
Endpoints                     v1
Event                         v1
HorizontalPodAutoscaler       autoscaling/v1
Ingress                       extensions/v1beta1
Job                           batch/v1
LimitRange                    v1
Namespace                     v1
NetworkPolicy                 extensions/v1beta1
Node                          v1
PersistentVolumeClaim         v1
PersistentVolume              v1
PodDisruptionBudget           policy/v1beta1
Pod                           v1
PodSecurityPolicy             extensions/v1beta1
PodTemplate                   v1
ReplicaSet                    extensions/v1beta1
ReplicationController         v1
ResourceQuota                 v1
RoleBinding                   rbac.authorization.k8s.io/v1
Role                          rbac.authorization.k8s.io/v1
Secret                        v1
ServiceAccount                v1
Service                       v1
StatefulSet                   apps/v1



alpha
API versions with ‘alpha’ in their name are early candidates for new functionality coming into Kubernetes. These may contain bugs and are not guaranteed to work in the future.

beta
‘beta’ in the API version name means that testing has progressed past alpha level, and that the feature will eventually be included in Kubernetes. Although the way it works might change, and the way objects are defined may change completely, the feature itself is highly likely to make it into Kubernetes in some form.

stable
Those do not contain ‘alpha’ or ‘beta’ in their name. They are safe to use.
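
To see which API groups, versions, and kinds your own cluster actually serves, you can query the API server directly:

 kubectl api-versions
 kubectl api-resources -o wide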

30 December 2018

PROBES - Health check mechanism for applications running inside a Pod's container in Kubernetes

Kubernetes provides a health-checking mechanism to verify if a container inside a pod is working or not, using probes.
Kubernetes gives two types of health checks, performed by the kubelet.

Liveness probe
k8s checks the status of the container via the liveness probe.
If the liveness probe fails, then the container is subjected to its restart policy.

Readiness probe
The readiness probe checks whether your application is ready to serve requests.
If the readiness probe fails, the pod's IP is removed from the endpoint list of the service.

We can define a liveness probe with three types of actions that the kubelet performs on a container:
  • Executes a command inside the container
  • Checks for the state of a particular port on the container
  • Performs a GET request on the container's IP

# Define a liveness command
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - /tmp/status.sh; sleep 10; rm /tmp/status.sh; sleep 600
  initialDelaySeconds: 10
  periodSeconds: 5

# Define a liveness HTTP request 
livenessProbe:
  httpGet:
    path: /healthz
    port: 10254 
  initialDelaySeconds: 5
  periodSeconds: 3

# Define a TCP liveness probe
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20

Readiness probes are configured similarly to liveness probes.
The only difference is that you use the readinessProbe field 
instead of the livenessProbe field.

# Define readiness probe
readinessProbe:
  exec:
    command:
    - sh
    - /tmp/status_check.sh
  initialDelaySeconds: 5
  periodSeconds: 5 




Configure Probes
Probes have a number of fields that one can use to control the behavior of liveness and readiness checks more precisely:

initialDelaySeconds: Number of seconds after the container starts before liveness or readiness probes are initiated.
Defaults to 0 seconds. Minimum value is 0.
periodSeconds: How often (in seconds) to perform the probe.
Defaults to 10 seconds. Minimum value is 1.
timeoutSeconds: Number of seconds after which the probe times out.
Defaults to 1 second. Minimum value is 1.
successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed.
Defaults to 1. Must be 1 for liveness. Minimum value is 1.
failureThreshold: Number of consecutive failures after which the probe gives up; for a liveness probe this means restarting the container, for a readiness probe the Pod is marked Unready.
Defaults to 3. Minimum value is 1.
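
Putting these fields together, a liveness probe with explicit values might look like this (the numbers are only illustrative):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10   # wait 10s after the container starts
  periodSeconds: 5          # probe every 5s
  timeoutSeconds: 2         # each attempt fails after 2s without a response
  successThreshold: 1       # must be 1 for liveness
  failureThreshold: 3       # restart the container after 3 consecutive failures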


# example deployment with liveness & readiness probes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  labels:
    app: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: app1
          image: punitporwal07/apache4ingress:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 3
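
After applying the manifest (the file name below is illustrative), probe results show up in the pod's events:

kubectl apply -f app1-deployment.yaml
kubectl describe pod -l app=webserver
# failed probes appear as Unhealthy events; repeated liveness failures restart the container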


httpGet has additional fields that can be set:
path: Path to access on the HTTP server.
port: Name or number of the port to access the container. The number must be in the range of 1 to 65535.
host: Hostname to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
httpHeaders: Custom headers to set in the request. HTTP allows repeated headers.
scheme: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP

keep probing!