
AWS Certified Solutions Architect – Associate with 5.5+ years of experience as a DevOps Engineer.
Proficient in building fully automated Continuous Integration/Continuous Deployment (CI/CD) pipelines, monitoring, and infrastructure management using GitHub, GitHub Actions, Dynatrace, scripting, and Kubernetes.
Skilled in writing Linux shell scripts to automate tasks and streamline the Software Development Life Cycle (SDLC) processes.
Experience in configuring Identity and Access Management (IAM) users, roles, and policies.
Proficient in version control software such as Git and Bitbucket for code management.
Demonstrated ability in providing production support, including resolving high-priority tickets, ensuring customer satisfaction, and meeting SLAs.
Well versed in containerization technologies such as Docker and Kubernetes, with knowledge of developing Terraform configurations.
Hands-on experience with provisioning, maintaining, and deploying Kubernetes clusters across Development, Testing, Acceptance, and Production (DTAP) stages.
Experienced in working with EKS Fargate clusters integrated with GitHub Actions workflows for CI/CD pipelines.
Designing and implementing scalable solutions in a cloud environment, leveraging AWS services with IaC tools such as the AWS CLI and CloudFormation.
DevOps Engineer
Feuji Software Solutions Private Limited
Cloud Consultant
Planon Software Solutions Private Limited
Cloud Engineer
Infosys Limited
Visual Studio Code

Linux

Nginx

Agile

Scrum

Bash Scripting

Python Scripting
YAML

GitHub Actions
Jenkins

GitHub

Bitbucket

EC2

S3

ECS

IAM

EBS

RDS

WAF

DMS

ECR

VPC

CloudWatch

Batch

ELB

Lambda
Dynatrace

Prometheus

Terraform

CloudFormation
Okay, I'm a full-time DevOps engineer, mostly concentrated on the Azure and AWS cloud platforms, and my key skills are AWS DevOps and Azure DevOps. Coming to the CI/CD part, GitHub Actions and Jenkins are the tools I use. Currently we're dealing with a .NET application framework, Microsoft-based development, deployed into the Kubernetes environment. So I'm more into Kubernetes, Docker, the different AWS and Azure services, and various enhancements with those services as well. I have much experience building CI/CD pipelines from scratch to end, with automation included.
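A minimal sketch of the kind of pipeline described above, as a GitHub Actions workflow for a containerized .NET application; the registry path, image name, and deployment name are assumptions, and the deploy step assumes the runner already has cluster credentials configured.

```yaml
# Hypothetical CI/CD workflow sketch; names and paths are placeholders.
name: build-and-deploy
on:
  push:
    branches: [main]
permissions:
  contents: read
  packages: write
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build .NET application
        run: dotnet build --configuration Release
      - name: Log in to the container registry
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build and push the image
        run: |
          docker build -t ghcr.io/example-org/example-app:${{ github.sha }} .
          docker push ghcr.io/example-org/example-app:${{ github.sha }}
      - name: Roll the deployment to the new image
        # assumes kubeconfig for the target cluster is already set up on the runner
        run: kubectl set image deployment/example-app app=ghcr.io/example-org/example-app:${{ github.sha }}
```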
So, there's a Kubernetes object called the Horizontal Pod Autoscaler (HPA). Using that object, I'll configure the autoscaler accordingly, and I will also mention the required resource requests and limits in the deployment, so that autoscaling occurs based on those requests and limits and the threshold percentage given in the HPA. The minimum and maximum replica counts should be mentioned there too. Also, the corresponding RBAC roles and role bindings should be created for the autoscaling to work properly. All of this configuration is done in the Kubernetes manifest files; the HPA and deployment files can be maintained separately or clubbed into the same file, based on the maintainability needs of the project.
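As a concrete sketch of the configuration just described; the names and thresholds here are illustrative, not from the original:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app        # the target deployment must declare resource requests/limits
  minReplicas: 2             # minimum replica count, as mentioned above
  maxReplicas: 10            # maximum replica count
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # threshold percentage, measured against the CPU request
```

The utilization percentage is computed against each pod's CPU request, which is why the requests and limits in the deployment matter for the scaling to occur.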
What I'll say is, in Kubernetes there are different states of pods: Running status, ImagePullBackOff or ErrImagePull status, CrashLoopBackOff status, or, if the pod is still not started, Pending status, where it may take a long time to get into the Running state. And the updates of the pods will be rolling updates, which means once we deploy a new replica of the pod, it will wait for the new pod to be in the Running state, and the existing pod will be terminated only after the new pod gets running.
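A sketch of the rolling-update behavior described, with maxUnavailable set to 0 so an old pod is terminated only after its replacement is running; the names and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # one extra pod may be created during the rollout
      maxUnavailable: 0  # old pods are removed only after new pods are ready
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: ghcr.io/example-org/example-app:latest
```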
So I will deploy my application in two regions, say us-east-1 and us-east-2. Whenever an upgrade should happen to one of the two deployments, I will use the load balancer to redirect the traffic to the other region, which will become the green deployment; at that time, the environment which is stable will become the blue.
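One common way to implement the traffic switch described above is DNS-weighted routing; this Terraform sketch uses Route 53 weighted records (the zone, hostname, and variable names are assumptions, and an ALB-level switch would work similarly):

```hcl
# Blue/green traffic switch sketch via Route 53 weighted routing.
resource "aws_route53_record" "blue" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "CNAME"
  ttl            = 60
  set_identifier = "blue"
  records        = [var.blue_alb_dns_name]

  weighted_routing_policy {
    weight = 100   # all traffic to the stable (blue) region
  }
}

resource "aws_route53_record" "green" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "CNAME"
  ttl            = 60
  set_identifier = "green"
  records        = [var.green_alb_dns_name]

  weighted_routing_policy {
    weight = 0     # raised to 100 once the green region is verified
  }
}
```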
So Terraform — we are using it as the infrastructure automation tool. For the Kubernetes integration, suppose we need to create an EKS cluster using Terraform. The required prerequisite for EKS is that the VPCs and subnets should be created first. So before going into EKS, I will create the VPC and the corresponding resources in Terraform, and whatever resources are created, I will use those VPC IDs in the EKS Terraform template. This will be managed in separate modules for reuse purposes.
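A sketch of the module layout described, assuming hypothetical local reusable modules under ./modules; the paths, output names, and CIDR are placeholders:

```hcl
module "vpc" {
  source = "./modules/vpc"   # reusable VPC module, as described above

  cidr_block = "10.0.0.0/16"
}

module "eks" {
  source = "./modules/eks"   # reusable EKS module

  # outputs of the VPC module feed the cluster definition
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnet_ids
}
```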
A namespace is a logical differentiation between resources in the cluster. For example, suppose you don't want a team to have access to everything in Kubernetes, only to their respective pods or respective resources. We will differentiate those resources and create them in different namespaces, so that one set of people will have access only to their namespace and the access can be restricted. So for some dev team, we can create a namespace with the team's name. This is just an example of where we can use namespaces for the customer.
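A sketch of that restriction, assuming a hypothetical dev-team group and the built-in edit ClusterRole; all names are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit
  namespace: dev-team
subjects:
  - kind: Group
    name: dev-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit          # built-in role granting read/write within this namespace only
  apiGroup: rbac.authorization.k8s.io
```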
Then, regarding Helm charts: for Kubernetes deployments, any set of Kubernetes objects has to be described, so we need to create a lot of manifest files. Helm will club all the required manifest files together; it's like a package of whatever Kubernetes objects we need for a successful deployment, and that is what Helm charts are used for. For managing dependencies and customizations in the chart, we can change the values.yaml file for the required dependencies and any modifications to customize the chart. The values.yaml file can be configured accordingly, and by using that file, we can deploy our customized Helm charts.
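A sketch of a values.yaml override of the kind described; the keys depend entirely on the chart in question, and these names are assumptions:

```yaml
# Hypothetical values.yaml overriding a chart's defaults.
replicaCount: 3
image:
  repository: ghcr.io/example-org/example-app
  tag: "1.2.0"
resources:
  requests:
    cpu: 250m
    memory: 256Mi
```

Applied with, for example: helm upgrade --install example-app ./chart -f values.yaml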
So first and foremost, we can use different namespaces for different sets of teams. We can also have network policies assigned to the pods, so that one pod will access another pod only under a defined set of network policies and rules. Also, clients outside the cluster should be able to access Kubernetes only with the specified IAM roles and service accounts, and using service accounts for this purpose is very helpful for following security measures. Role-based access control should also be implemented to prevent unauthorized access. These are the things we can look into.
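A minimal network-policy sketch of the pod-to-pod restriction mentioned above; the labels and namespace are illustrative:

```yaml
# Only pods labelled app=frontend may reach pods labelled app=backend.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: dev-team
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```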
So, for applications — it's going to be in a cluster anyway, so for the Kubernetes cluster, at present I'm using Prometheus and Grafana, and also Datadog. I can explain the Datadog implementation: Datadog will capture the real-time logs of the Kubernetes cluster, and it also has a feature for application performance monitoring (APM). You can also have synthetic testing with Datadog, and if you have to monitor any HTTP errors regarding an application, we can monitor that too.
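On the Prometheus side, a minimal scrape sketch for Kubernetes pods; the job name and annotation convention are common defaults, shown here as assumptions:

```yaml
# Fragment of prometheus.yml using Kubernetes service discovery.
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```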
So for application performance, as I have said in the previous answer, Datadog would be best for the application side too, because it gives very detailed views; even with millions of application log lines, it will render them seamlessly. Based on the latency seen in the flow of logs, and on the application performance overall, we can decide on our infrastructure planning.