21 July 2023

AWS secrets scanning

Were you aware? With GitHub's secret scanning, we can add an additional layer of security to our repositories.

When we make a repository public, or push changes to a public repository, GitHub's secret scanning feature is triggered.
It scans the code for any secrets that match predefined partner patterns.

When a potential secret is detected, the investigation does not end there. GitHub notifies the service provider that issued the secret (this could be a third-party service like AWS). The provider then assesses the situation and decides whether to revoke the secret, issue a new one, or reach out to us directly. Its response depends on the level of risk involved for all parties.

Within minutes you will get an email from AWS about the exposure, and your access key will have a quarantine policy attached to it.

👉 Key takeaway: It is important to note that, while AWS is a key component of our technology stack, it is not the one that scans GitHub repositories for secrets. It is GitHub's secret scanning feature that protects us against inadvertent disclosures.

Furthermore, thanks to this GitHub feature, AWS detects any exposed/compromised keys online, attaches the "AWSCompromisedKeyQuarantineV2" AWS managed policy ("Quarantine Policy") to the IAM user whose keys are exposed, and triggers an email notification with the details to your registered account. So every time you try to use any resources with the exposed key, you will get an authorization error.

Example:
 
 FAILED! => {"changed": false, "msg": "Instance creation failed => UnauthorizedOperation:
 You are not authorized to perform this operation. Encoded authorization failure message: 
 mw4pJJXTCly9BRXiEEzZhmPvanjwTNMCJ0MRAsFGw-jSRJyUwRz9tgdKjQF_S_d3IspWq_d4-LL1
 

The "UnauthorizedOperation" error indicates that permissions attached to the AWS IAM role or user trying to perform the operation does not have the required permissions to launch EC2 instances. Because the error involves an encoded message, use the aws-cli to decode the message. 

 
 # <encoded-message> is the encrypted value you get in your error msg
 $ aws sts decode-authorization-message --encoded-message <encoded-message>
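
If a key is confirmed exposed, the usual remediation is to rotate it and, once done, detach the quarantine policy. A minimal sketch with the AWS CLI; the user name and access key ID below are placeholders, not values from this incident:

 # deactivate or delete the exposed key (placeholder user/key)
 $ aws iam update-access-key --user-name devops-user --access-key-id AKIAXXXXXXXXXXXXXXXX --status Inactive
 $ aws iam delete-access-key --user-name devops-user --access-key-id AKIAXXXXXXXXXXXXXXXX

 # issue a fresh key pair for the user
 $ aws iam create-access-key --user-name devops-user

 # after rotation, detach the quarantine policy that AWS attached
 $ aws iam detach-user-policy --user-name devops-user --policy-arn arn:aws:iam::aws:policy/AWSCompromisedKeyQuarantineV2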


--

28 January 2023

Enable Container Insights for EKS Fargate in CloudWatch

In this article, we will demonstrate how to implement Container Insights metrics using an AWS Distro for OpenTelemetry (ADOT) collector on an EKS Fargate cluster, so you can visualize your cluster and container data at every layer of the performance stack in Amazon CloudWatch. This presumes you already have an EKS cluster with a Fargate profile up and running; if not, follow the article to set one up first. After that, it involves only the following things:

- adot IAM service account
- adot-collector
- fargate profile for adot-collector

Create adot iamserviceaccount 


eksctl create iamserviceaccount \
--cluster btcluster \
--region eu-west-2 \
--namespace fargate-container-insights \
--name adot-collector \
--attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
--override-existing-serviceaccounts \
--approve


In case you need to delete the iamserviceaccount:




eksctl delete iamserviceaccount --cluster btcluster --name adot-collector -n fargate-container-insights

Deploy Adot-collector


wget https://github.com/punitporwal07/kubernetes/blob/master/monitoring/cloudwatch-insight/eks-fargate-container-insights.yaml

kubectl apply -f eks-fargate-container-insights.yaml



Create a Fargate compute profile for the adot-collector pod, which is deployed as a StatefulSet:


eksctl create fargateprofile --name adot-collector --cluster btcluster -n fargate-container-insights 
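
To confirm the collector came up on Fargate compute (a quick optional check):

 kubectl get statefulset adot-collector -n fargate-container-insights
 kubectl get pods -n fargate-container-insights -o wide    # node name should start with fargate-ip-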


Navigate to AWS cloudwatch

services > cloudWatch > logs > log groups & search for insight 


services > cloudWatch > insights > Container insights > resources


services > cloudWatch > insights > Container insights > container map


Some helpful commands

to scale down statefulset -

kubectl -n fargate-container-insights patch statefulset.apps/adot-collector -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'

to scale up statefulset -


kubectl -n fargate-container-insights patch statefulset.apps/adot-collector --type json -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector/non-existing"}]'


ref - https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-container-insights-eks-fargate/

13 November 2022

Setup Site to Site VPN Connection in AWS

Basic Architecture




Image Source: AWS

 

Typically the Site to Site VPN setup looks like the above diagram, where one end is an AWS VPC and the other end is the customer network with an edge router.
However, as we don't have access to a customer network, for this exercise we will simulate it by using another AWS VPC in another AWS region.
We will configure an EC2 instance in this VPC which acts as the router at the customer end. For this router we will use the Openswan software.
The AWS network diagram would look like the following:
VPC A acts as the AWS side of the network and VPC B acts as the customer network


Our Goal

On a successful VPN connection, we should be able to reach the EC2-A instance from our simulated corporate network (EC2-B) using EC2-A's private IP address.

Follow these steps to configure the IPSec VPN connection:

In this exercise, we will create 2 VPCs: one acts as the AWS side of the VPN connection and the other VPC acts as a customer on-premise network with a router configured on an EC2 instance.

VPCs

1. VPC-A (CIDR 10.100.0.0/16) – This is AWS side of the network
a. Hosts the AWS VPN gateway

2. VPC-B (CIDR 10.200.0.0/16) - This acts as Customer data center network
a. Hosts Openswan VPN server (router)

 

Steps to setup IPSec VPN between AWS VPC and Customer Network with Static Routing

1. Create AWS VPC-B which acts as Customer datacenter end of VPN connection
a. Create VPC in N. Virginia Region
(Name: VPC-B, CIDR: 10.200.0.0/16, Tenancy: Default)

b. Create an Internet Gateway (Name: VPC-B-IGW)
c. Attach an Internet Gateway to VPC-B

d. Create a Public subnet in VPC-B
i. Create Subnet (Name: VPC-B-Public-Subnet, VPC: VPC-B, AZ: us-east-1a, CIDR: 10.200.0.0/24)
ii. Enable “Auto Assign Public IP” for the Subnet

Select Subnet > Actions > Modify auto-assign IP settings > Enable auto-assign public IPv4 address

e. Create a Route Table (Name: VPC-B-Public-RT, VPC: VPC-B)

i. Add a route entry for destination 0.0.0.0/0 and target as Internet Gateway
Select Route table > Routes > Edit Routes > Add Route > Save

ii. Associate route table with the subnet
Select Route table -> Subnet Associations -> Edit Subnet Associations -> Select Subnet VPC-B-Public-Subnet -> Save

f. Launch an EC2 instance (EC2-B)
i. Select VPC-B and VPC-B-Public-Subnet, Type: t2.micro, Storage: Default, Tags – Name: EC2-B, Keypair: your existing key pair or create new if you don’t have existing keypair

After successful launch of EC2 instance:
Let’s call EC2 Public IP = EC2_B_PUBLIC_IP
Let’s call EC2 Private IP = EC2_B_PRIVATE_IP

g. Disable Source-Destination Check for this instance as it acts as a router
i. Go to console -> Select EC2-B -> Action -> Networking ->

Change Source/Destination check > Disable

h. Configure security group to allow inbound traffic for
i. Port 22 for your IP address so that you can login and configure software VPN. (Select source “My IP” from the dropdown)
ii. Open “All TCP” for Source as 10.100.0.0/16
iii. Open “All ICMP - IPV4” for Source 10.100.0.0/16
iv. If you have this instance behind NAT then you should also open UDP port 4500 for the Public IP of the VPN. (Not applicable in this use case)

i. Login to VPC-B EC2 machine using SSH and configure software VPN
Change to root user
$ sudo su

ii. Install openswan
$ yum install openswan -y

iii. In /etc/ipsec.conf uncomment following line (if not already uncommented)
include /etc/ipsec.d/*.conf

iv. Update /etc/sysctl.conf to have following
net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0

v. Restart network service
$ service network restart
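
Alternatively (optional), the kernel parameters above can be applied in place without restarting networking:
$ sysctl -p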

2. Create VPC-A which acts as AWS end of VPN tunnel

a. Create VPC-A in Mumbai Region
(Name: VPC-A, CIDR: 10.100.0.0/16, Tenancy: Default)

b. Create a Private subnet in VPC-A
i. Create Subnet (Name: VPC-A-Private-Subnet, VPC: VPC-A, AZ: ap-south-1b, CIDR: 10.100.0.0/24)

c. Create a Route Table (Name: VPC-A-Private-RT, VPC: VPC-A)
i. Associate route table with the subnet

Select Route table > Subnet Associations > Edit Subnet Associations > Select Subnet VPC-A-Private-Subnet > Save

d. Launch EC2 instance in this subnet

Select VPC-A and VPC-A-Private-Subnet, Type: t2.micro, Storage: Default, Tags – Name: EC2-A, Keypair: your existing key pair or create new if you don’t have existing keypair

i. Let’s call EC2 Private IP=EC2_A_PRIVATE_IP
ii. Configure Security group to allow

1. Open “All TCP” for Source 10.200.0.0/16
2. Open “All ICMP - IPV4” for Source 10.200.0.0/16

3. Create a Virtual Private Gateway (Name: VPC-A-VGW) and attach it to VPC-A
4. Create Customer Gateway (VPC-A-CGW)
a. Go to Customer Gateway and Create new customer gateway
b. Select routing as “Static”

c. Provide the Customer end Public IP as the IP address (in this case EC2_B_PUBLIC_IP; see step 1.f.i above)

d. Leave rest of the fields as default and Create Customer Gateway

5. Create VPN Connection
a. Go to Site-to-Site VPN Connections ->Create VPN Connection
b. Provide Name: VPC-A-VPC-B-VPN
c. Select Target Type -> Virtual Private Gateway
d. Select newly created VGW and CGW
e. Select Static routing -> Enter IP Prefix range of VPC-B (10.200.0.0/16)
f. Leave rest of the fields as default
g. Create VPN Connection

h. At this point, the VPN connection should be created. Wait for some time until its state becomes “available”
i. After the VPN connection is created, go to the “Tunnel Details” tab where you should see 2 tunnel IPs
i. Assuming Tunnel1 IP=TUNNEL_1_PUBLIC_IP
ii. Assuming Tunnel2 IP= TUNNEL_2_PUBLIC_IP

j. Download the VPN configuration as “Openswan” and save it as a text file locally. Open the file with an editor like Notepad++.
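
For reference, the gateway and VPN-connection steps above can also be scripted with the AWS CLI. A minimal sketch; the vgw-/cgw-/vpn- IDs are placeholders for the resources created above:

$ aws ec2 create-vpn-connection --type ipsec.1 --vpn-gateway-id vgw-xxxxxxxx --customer-gateway-id cgw-xxxxxxxx --options '{"StaticRoutesOnly": true}'
$ aws ec2 create-vpn-connection-route --vpn-connection-id vpn-xxxxxxxx --destination-cidr-block 10.200.0.0/16
# the API returns the generic configuration XML; the console's device-specific (Openswan) download remains the easier option
$ aws ec2 describe-vpn-connections --vpn-connection-ids vpn-xxxxxxxx --query "VpnConnections[0].CustomerGatewayConfiguration" --output text > vpn-config.xml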

6. Enable Route Propagation for VPC-A Route table
a. Select Route Table (VPC-A-Private-RT) -> Route Propagation -> Edit Route Propagation -> Select Virtual private gateway -> Save

7. Login over SSH on VPC-B-EC2 instance, configure OpenSWAN as below
a. sudo su
b. Create a file /etc/ipsec.d/aws.conf and paste the Tunnel1 configuration from the VPN configuration file you downloaded. The section looks like the following

conn Tunnel1
authby=secret
auto=start
left=%defaultroute
leftid=<Customer end VPN Public IP>
right=<AWS VPN Tunnel 1 Public IP>
type=tunnel
ikelifetime=8h
keylife=1h
phase2alg=aes128-sha1;modp1024
ike=aes128-sha1;modp1024
keyingtries=%forever
keyexchange=ike
leftsubnet=<Customer end VPN CIDR>
rightsubnet=<AWS end VPN CIDR>
dpddelay=10
dpdtimeout=30
dpdaction=restart_by_peer

 
Note: Remove the auth=esp line from the above section if it exists.

 

Replacing values from our example:

conn Tunnel1
authby=secret auto=start left=%defaultroute
leftid=EC2_B_PUBLIC_IP
right=TUNNEL_1_PUBLIC_IP
type=tunnel
ikelifetime=8h keylife=1h
phase2alg=aes128-sha1;modp1024
ike=aes128-sha1;modp1024
keyingtries=%forever
keyexchange=ike
leftsubnet=10.200.0.0/16
rightsubnet=10.100.0.0/16
dpddelay=10
dpdtimeout=30
dpdaction=restart_by_peer

 

c. Create a new file /etc/ipsec.d/aws.secrets and add the pre-shared key to it. You can find the pre-shared key details in the VPN configuration file you downloaded; refer to the Tunnel 1 section.
Example:
EC2_B_PUBLIC_IP TUNNEL_1_PUBLIC_IP: PSK "xxxxxxxxxxxxxxxxxxxxxxxxxxx"

d. Configure the ipsec service to start on reboot
$ chkconfig ipsec on

e. Start the ipsec service
$ systemctl start ipsec

f. Check status of the service

$ systemctl status ipsec
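
Optionally, you can also confirm from the Openswan side that the tunnel has established (the exact output wording varies by openswan/libreswan version):

$ ipsec verify
$ ipsec status | grep Tunnel1
$ ip xfrm state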

If you have completed all the steps properly, your VPN connection should be set up at this point.

Verify VPN Connectivity:

1. Check the VPN Connection tunnel status on AWS. You should see 1 tunnel up. Sometimes it takes time to detect the tunnel status, so wait for ~5 mins if you see the tunnel down.



2. From the VPC-B EC2 instance, you should be able to connect to the instance in VPC-A on its private IP

[root@ip-10-200-0-166 ipsec.d]# ping 10.100.0.42
PING 10.100.0.42 (10.100.0.42) 56(84) bytes of data.
64 bytes from 10.100.0.42: icmp_seq=1 ttl=254 time=1.43 ms
64 bytes from 10.100.0.42: icmp_seq=2 ttl=254 time=1.52 ms

THAT’S IT!! YOU HAVE SUCCESSFULLY SET UP THE VPN CONNECTION


Cleanup the AWS Resources
After successful completion of VPN setup
o Terminate all the EC2 instances in both the VPCs
o Delete the VPN Connection from VPC console in Mumbai
o Delete the VGW and Customer Gateway
o Delete VPC-A and VPC-B

 

11 June 2022

Configure an EventBridge rule to stop an EC2 instance

Sometimes it is hard to keep track of your EC2 instances' status, and when you fail to do so you end up
with high bills based on their usage.

You can cut down your bills by scheduling EventBridge rules for the instances you are not using.

Go to Amazon EventBridge > Create Rule

  1. Create a new rule and select Rule type as Schedule.

  2. Select Schedule Pattern as - A fine-grained schedule that runs at a specific time

  3. Define the cron expression for instance shutdown schedule.

     For example, cron(30 17 * * ? *) will stop your EC2 instance at 17:30 every day

    Select the time zone as either UTC or your local time zone

  4. For Target1, select:

    • Target Type as "AWS service"
    • Target as "EC2 StopInstances API call", and
    • provide Instance ID to shut down as per schedule.
    • Finally, select Create a new role for this specific resource

  5. Review the schedule and instance detail and finish creating the rule.
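
The same schedule can also be created from the CLI with EventBridge Scheduler. A minimal sketch; the schedule name, execution role ARN and instance ID are placeholders:

aws scheduler create-schedule \
  --name stop-ec2-nightly \
  --schedule-expression "cron(30 17 * * ? *)" \
  --schedule-expression-timezone "UTC" \
  --flexible-time-window '{"Mode": "OFF"}' \
  --target '{
    "Arn": "arn:aws:scheduler:::aws-sdk:ec2:stopInstances",
    "RoleArn": "arn:aws:iam::123456789012:role/ec2-scheduler-role",
    "Input": "{\"InstanceIds\": [\"i-0123456789abcdef0\"]}"
  }'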



    You can use the same approach (or a Lambda function) with the EC2 StartInstances API call to start the instance again on a schedule.

Known errors -


While adding an EventBridge rule you might get an execution role error, as shown below -


This can be fixed by updating the trust relationship for the role attached to your instance:

update the role's trust policy to include the scheduler service (scheduler.amazonaws.com) as a trusted principal, as below.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ec2.amazonaws.com",
                    "scheduler.amazonaws.com"
                ]
             },
             "Action": "sts:AssumeRole"
        }
     ]
}



to be continued...

20 March 2021

EKS with Fargate Profile

Getting started with Amazon EKS is easy:

1. Create an Amazon EKS cluster in the AWS Management Console or with the AWS CLI or one of the AWS SDKs.

2. Launch managed or self-managed Amazon EC2 nodes, or deploy your workloads to AWS Fargate (which is what we are doing in this article).

3. When your cluster is ready, you can configure your favorite Kubernetes tools, such as kubectl, to communicate with your k8s-cluster.

4. Deploy and manage workloads on your Amazon EKS cluster the same way that you would do with any other Kubernetes environment. You can also view information about your workloads using the AWS Management Console.

5. Here we are going to use the eksctl utility, which will help you create your cluster from AWS CloudShell; you can also do the same from a Bastion host.

NOTE - When configuring the aws CLI, make sure the user whose access key and secret key you configure has an admin access role attached, so that it can define and describe resources. Otherwise you might end up getting errors like -
Error: operation error EC2: DescribeSubnets, https response error StatusCode: 403, RequestID: 310f662c-799c-4c78-99ff-dec773557025, api error UnauthorizedOperation: You are not authorized to perform this operation.

ex of devops user -

To create your first cluster with a Fargate profile in EKS, you are required to have the following things in place:

OIDC provider
IAM roles
IAMServiceAccount
Addons - (VPC-CNI, CoreDNS, kube-proxy)
ALB-Controller
Fargate Compute Profile

For more details, consider referring to the AWS documentation - https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html

To begin with, we will install eksctl & kubectl


 curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
 sudo cp -rp /tmp/eksctl /usr/local/bin

 OR

 wget https://github.com/eksctl-io/eksctl/releases/download/v0.162.0/eksctl_Linux_amd64.tar.gz
 tar -xpf eksctl_Linux_amd64.tar.gz && sudo cp -rp eksctl /usr/local/bin

 curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
 sudo chmod +x kubectl &&  sudo cp -rp kubectl /usr/local/bin
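
 A quick sanity check that both binaries are on the PATH (optional):

 eksctl version
 kubectl version --client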


Cluster creation using eksctl; if you are doing this from CloudShell, the following commands will work


 # with EC2 as the underlying compute (nodeGroup)

 Simplest form -
 eksctl create cluster --name ekscluster --region region-code 

 Complex form with nodeGroup -
 eksctl create cluster \
 --name ekscluster \
 --region eu-west-2 \
 --vpc-private-subnets subnet-0b10bDEMOfbdd235,subnet-0d9876DEMOa41fba,subnet-0b1168dDEMO1e66a2 \
 --version 1.27 \
 --nodegroup-name standard-eks-workers \
 --node-type t2.small \
 --nodes 2 --nodes-min 1 --nodes-max 2 \
 --ssh-access=true \
 --ssh-public-key=devops-kp 

 without nodeGroup - (preferred method; add a nodeGroup once the cluster is up)
 eksctl create cluster \
 --name ekscluster \
 --region eu-west-2 \
 --vpc-private-subnets subnet-0b10bDEMOfbdd235,subnet-0d987DEMO1a41fba,subnet-0b1168dDEMO1e66a2 \
 --version 1.27 \
 --without-nodegroup
                                                                                                          
 # with fargate profile as the underlying compute

 Simplest form -
 eksctl create cluster --name ekscluster --region region-code --fargate

 Complex form -
 eksctl create cluster \
 --name eksCluster \
 --region eu-west-2 \
 --version 1.27 \
 --vpc-private-subnets subnet-0b10be78a0fbdd235,subnet-0d9876a68e1a41fba,subnet-0b1168d86d71e66a2
 
 update following values

 # cluster name
 # region code
 # subnets (private)

If doing this from a Bastion host, you need to authorize the instance to perform this operation -
(create an IAM policy that allows creating an EKS cluster using eksctl (optional))

Accessing your cluster

on Bastion host do the following -


 # export the necessary variables as below & set the cluster context

 $ export KUBECONFIG=~/.kube/config/kubeconfig_eksCluster
 $ export AWS_DEFAULT_REGION=eu-west-2
 $ export AWS_DEFAULT_PROFILE=dev

 $ aws eks update-kubeconfig --name eksCluster --region=eu-west-2
 Added new context arn:aws:eks:eu-west-2:295XXXX62576:cluster/eksCluster to $PWD/kubeconfig_eksCluster

 $ kubectl cluster-info
 Kubernetes control plane is running at https://01C5E459355DEMOFDC1E8FB6CA7.gr7.eu-west-2.eks.amazonaws.com
CoreDNS is running at https://01C5E459355DEMOFDC1E8FB6CA7.gr7.eu-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
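
At this point you can also check what compute the cluster has; a quick optional check (cluster name/region as used above):

 eksctl get fargateprofile --cluster eksCluster --region eu-west-2
 kubectl get nodes    # with Fargate, nodes named fargate-ip-... only appear once pods are scheduled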

with EC2 instance in nodeGroup

Add the networking add-ons via the GUI; consider looking at the next point before adding them -

1. coredns - DNS server used for service-discovery
2. vpc-cni  -  networking plugin for pod networking
3. kube-proxy - It maintains network rules on your nodes and enables network communication to your pods. (only applicable when using EC2 in node group)

---> Add a coredns compute profile via CLI (if working with Fargate)
For coredns, you are required to create a Fargate profile named coredns for the kube-system namespace, either under the compute tab in the console or via the CLI as below -


 eksctl create fargateprofile --name coredns --cluster eksCluster --namespace kube-system

update the value for # cluster name

Optional (if there is any issue with the addons, try patching the deployment; usually coredns goes into a Degraded state)

Ex - for coredns deployment


 kubectl patch deployment coredns -n kube-system --type json \
-p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
kubectl rollout restart -n kube-system deployment coredns

NOTE -
If the coredns addon remains in a degraded state, scale the coredns deployment down to 0 (removing the existing replicas and their pods) and back up, so that the new pods look for Fargate compute instead of EC2 when they are redeployed (by default coredns attempts to deploy on EC2):


 kubectl scale deployment/coredns --replicas=0 -n kube-system
 kubectl scale deployment/coredns --replicas=3 -n kube-system
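
You can verify that the rescheduled coredns pods landed on Fargate (quick check; k8s-app=kube-dns is the label EKS uses for coredns):

 kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide    # NODE column should show fargate-ip-...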

Setup IAM Policy


curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json

aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

Create an IAM Role with the below policies attached that we use for the Fargate profile (the pod execution role) & update its trust relationship


# TRUST RELATIONSHIP SHOULD BE UPDATED AS BELOW
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "eks-fargate-pods.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:eks:eu-west-2:481xxxx1953:fargateprofile/MyEKSCluster/*"
                }
            }
        }
    ]
}
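
If you prefer the CLI, here is a minimal sketch of creating such a role; the role name is a placeholder and trust.json is the trust policy above saved locally (AmazonEKSFargatePodExecutionRolePolicy is the managed policy normally attached to a Fargate pod execution role):

aws iam create-role --role-name eksFargatePodExecutionRole --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name eksFargatePodExecutionRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy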

Associate IAM OIDC provider  with your cluster


oidc_id=$(aws eks describe-cluster --name eksCluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)

aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4

eksctl utils associate-iam-oidc-provider --cluster eksCluster --approve
2022-10-26 12:46:52 [ℹ] will create IAM Open ID Connect provider for cluster "eksCluster" in "eu-west-2"
2022-10-26 12:46:52 [✔] created IAM Open ID Connect provider for cluster "eksCluster" in "eu-west-2"


Create IAM ServiceAccount


eksctl create iamserviceaccount \
--cluster eksCluster \
--region eu-west-2 \
--namespace kube-system \
--name aws-load-balancer-controller \
--attach-policy-arn arn:aws:iam::4812xxxx1953:policy/AWSLoadBalancerControllerIAMPolicy \
--override-existing-serviceaccounts \
--approve
2022-10-26 12:50:33 [ℹ]  1 iamserviceaccount (kube-system/aws-load-balancer-controller) was included (based on the include/exclude rules)
2022-10-26 12:50:33 [!]  metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
2022-10-26 12:50:33 [ℹ]  1 task: {
    2 sequential sub-tasks: {
        create IAM role for serviceaccount "kube-system/aws-load-balancer-controller",
        create serviceaccount "kube-system/aws-load-balancer-controller",
    } }
2023-10-26 12:50:33 [ℹ]  building iamserviceaccount stack "eksctl-eksCluster-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2022-10-26 12:50:33 [ℹ]  deploying stack "eksctl-eksCluster-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2022-10-26 12:50:33 [ℹ]  waiting for CloudFormation stack "eksctl-eksCluster-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2022-10-26 12:51:04 [ℹ]  waiting for CloudFormation stack "eksctl-eksCluster-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2022-10-26 12:51:04 [ℹ]  created serviceaccount "kube-system/aws-load-balancer-controller"


update following values

# cluster name
# region name
# caller identity

Install & Setup Helm for aws-lb-controller


 wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
 tar -xzpf helm-v3.10.3-linux-amd64.tar.gz
 sudo cp -rp linux-amd64/helm /usr/local/bin/

 helm repo add eks https://aws.github.io/eks-charts

 helm repo update

Deploy aws-load-balancer-controller


 helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
 -n kube-system \
 --set clusterName=eksCluster \
 --set serviceAccount.create=false \
 --set serviceAccount.name=aws-load-balancer-controller \
 --set image.repository=602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon/aws-load-balancer-controller \
 --set region=eu-west-2 \
 --set vpcId=vpc-0d417VPC-ID7694dd7

update following values

# cluster name
# region name
# repository name
# vpc-ID
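
Once the chart is installed, check that the controller deployment is healthy; note it will stay Pending until a Fargate profile covering kube-system exists (see Issue 3 below):

 kubectl get deployment aws-load-balancer-controller -n kube-system
 kubectl logs -n kube-system deployment/aws-load-balancer-controller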

Deploy compute for your app i.e. FARGATE PROFILE


 eksctl create fargateprofile --name sampleapp-ns --namespace sampleapp-ns --cluster eksCluster

Deploy your application


 kubectl create deployment sampleapp --image=punitporwal07/sampleapp:3.0 -n sampleapp-ns

[deploy all the resources of my sampleapp from here - full-sampleapp-for-eks-fargate.yaml ]

Deploy Ingress which will do the rest of the magic

# INGRESS RESOURCE/ROUTE THAT WILL BE DEPLOYED AS APP LOAD-BALANCER IN AWS ---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: sampleapp-ns
  name: sampleapp-ing
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: sampleapp-svc
              port:
                number: 80
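
After the Ingress is applied, the controller provisions an ALB; its DNS name appears in the ADDRESS column (namespace/name as per the manifest above):

 kubectl get ingress sampleapp-ing -n sampleapp-ns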


Ingress works with the ClusterIP service type as well; however, to keep the application private, deploy an NLB as a LoadBalancer service type instead.

Test your application by hitting the DNS name of the Application Load Balancer; to test an NLB, test it from/within an instance in your VPC.

Some helpful commands during the setup 

To delete resources 


 helm delete aws-load-balancer-controller -n kube-system

 eksctl delete iamserviceaccount --cluster eksCluster --name aws-load-balancer-controller --namespace kube-system // when using CLI options, the --namespace option defaults to default, so if the service account was created in a different namespace, the filter would fail to match and, as a result, the deletion would not proceed.

 eksctl delete fargateprofile --name coredns --cluster eksCluster

 eksctl delete cluster --name eksCluster --region eu-west-2

Scale down your deployments to save some cost


 kubectl scale deploy --replicas=0 aws-load-balancer-controller -n kube-system
 kubectl scale deploy --replicas=0 coredns -n kube-system 
 kubectl scale deploy --replicas 0 --all -n namespace

Switch your cluster context in aws cloud-shell


 aws eks update-kubeconfig --region eu-west-2 --name eksCluster

To decode encoded error message

  
 aws sts decode-authorization-message --encoded-message "message"


Known Issues

Issue 1 - When you add a nodeGroup with a private subnet, you might get an error -

NodeCreationFailure

Fix - Your launched instances are unable to register with your Amazon EKS cluster. Common causes of this failure are insufficient node IAM role permissions or a lack of outbound internet access for the nodes. Your nodes must meet either of the following requirements:

- Able to access the internet using a public IP address (your private subnet should have a NAT gateway associated with it).
- The security group associated with the subnet the node is in must allow the communication.


Issue 2 - Resolve the single subnet discovery error



Fix - Add the appropriate tags on your subnets to allow the AWS Load Balancer Ingress Controller to create a load balancer using auto-discovery.

for a public load balancer, define these tags on at least 2 [public] subnets:


 Key                                         Value
 kubernetes.io/role/elb                      1
 kubernetes.io/cluster/your-cluster-name     shared or owned

  for [private subnets] tags:

 Key                                         Value
 kubernetes.io/role/elb                      1 or empty tag value for internet-facing load balancers
 kubernetes.io/role/internal-elb             1 or empty tag value for internal load balancers
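
These tags can also be added from the CLI; a small sketch where the subnet ID and cluster name are placeholders:

 aws ec2 create-tags --resources subnet-1xxxxx --tags Key=kubernetes.io/role/elb,Value=1 Key=kubernetes.io/cluster/your-cluster-name,Value=shared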


  Note: You can manually assign subnets to your load balancer using the annotation -

 alb.ingress.kubernetes.io/subnets: subnet-1xxxxx, subnet-2xxxxx, subnet-3xxxxx



However, it is advisable to let EKS discover the subnets automatically, so tagging the subnets for the cluster is the better approach if you are hitting this bug, which usually occurs when any of the following is true:
- You have multiple clusters running in the same VPC (use the shared value on your next cluster).
- You have multiple AWS services that share subnets in a VPC.
- You want more control over where load balancers are provisioned for each cluster.

Issue 3 - Resolve nodes are not available

0/2 nodes are available: 2 Too many pods, 2 node(s) had untolerated taint {eks.amazonaws.com/compute-type: fargate}.
preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.



Fix - Add a Fargate compute profile so that the aws-lb-controller pods can be scheduled on it, and then delete the pending pods:

 
 eksctl create fargateprofile --name aws-lb --cluster eksCluster --namespace kube-system






From the above, you can see the alb-controller is using the aws-lb compute profile that we created and has started its pod on it.