Vetted Talent

G Kesava

AWS Certified Solutions Architect - Associate with over 5.5 years of experience as a DevOps Engineer.

Proficient in fully automated Continuous Integration/Continuous Deployment (CI/CD) pipelines, monitoring, and infrastructure management using GitHub Actions, Dynatrace, scripting, and Kubernetes.

Skilled in writing Linux shell scripts to automate tasks and streamline the Software Development Life Cycle (SDLC) processes.

Experience in configuring Identity and Access Management (IAM) users, roles, and policies.

Proficient in version control software such as Git and Bitbucket for code management.

Demonstrated ability in providing production support, including resolving high-priority tickets, ensuring customer satisfaction, and meeting SLAs.

Well versed in containerization techniques such as Docker and Kubernetes, with knowledge of developing Terraform configurations.

Hands-on experience with provisioning, maintaining, and deploying Kubernetes clusters across Development, Testing, Acceptance, and Production (DTAP) stages.

Experienced in working with EKS Fargate clusters integrated with GitHub Actions workflows for CI/CD pipelines.

Experienced in designing and implementing scalable solutions in cloud environments, leveraging AWS services and IaC tools such as the AWS CLI and CloudFormation.

  • Role

    DevOps Consultant / Architect

  • Years of Experience

    6.1 years

Skillsets

  • CloudFormation
  • Containerization - 6 Years
  • Identity and Access Management - 6 Years
  • Lambda - 3 Years
  • Python - 3 Years
  • YAML - 5 Years
  • Azure DevOps - 3 Years
  • CI/CD Pipelines
  • AWS Batch
  • EKS Fargate
  • IAM
  • CloudFront
  • Grafana - 1 Year
  • DevOps - 6 Years
  • CI/CD - 5 Years
  • Bitbucket
  • CloudWatch
  • Docker
  • Git
  • AWS - 6 Years
  • Kubernetes - 5 Years
  • Terraform - 6 Years
  • Prometheus - 1 Year

Vetted For

14 Skills

  • Senior Kubernetes Support Engineer (Remote) - AI Screening
  • Result: 50%
  • Skills assessed: CI/CD Pipelines, Excellent problem-solving skills, Kubernetes architecture, Strong communication skills, Ansible, Azure Kubernetes Service, Grafana, Prometheus, Tanzu, Tanzu Kubernetes Grid, Terraform, Azure, Docker, Kubernetes
  • Score: 45/90

Professional Summary

6.1 Years
  • Jan 2023 - Present (2 yr 9 months)

    DevOps Engineer

    Feuji Software Solutions Private Limited
  • Aug 2021 - Jan 2023 (1 yr 5 months)

    Cloud Consultant

    Planon Software Solutions Private Limited
  • Apr 2018 - Aug 2021 (3 yr 4 months)

    Cloud Engineer

    Infosys Limited

Applications & Tools Known

  • Visual Studio Code
  • Linux
  • Nginx
  • Agile
  • Scrum
  • Bash Scripting
  • Python Scripting
  • YAML
  • GitHub Actions
  • Jenkins
  • GitHub
  • Bitbucket
  • EC2
  • S3
  • ECS
  • IAM
  • EBS
  • RDS
  • WAF
  • DMS
  • ECR
  • VPC
  • CloudWatch
  • Batch
  • ELB
  • Lambda
  • Dynatrace
  • Prometheus
  • Terraform
  • CloudFormation

Work History

6.1 Years

DevOps Engineer

Feuji Software Solutions Private Limited
Jan 2023 - Present (2 yr 9 months)
  • Orchestrated the deployment of EKS Fargate clusters, optimizing resource allocation and scalability within AWS infrastructure.
  • Established secure private registries within Elastic Container Registry (ECR), ensuring the safe storage and retrieval of Docker images.
  • Streamlined code builds and deployment workflows by leveraging GitHub Workflow files, specifically targeting EKS Fargate pods for efficient deployment (a hedged workflow sketch follows this list).
  • Designed and implemented Ingress files to facilitate the deployment of load balancers, alongside Service YAMLs for the smooth deployment of pods.
  • Enhanced system security by deploying and configuring WAF rules, effectively filtering and managing inbound traffic.
  • Spearheaded the automation of environment replication processes using CloudFormation templates and custom GitHub Actions scripts, ensuring consistency and reliability across deployments.
  • Implemented horizontal autoscaling for EKS Fargate pods, enhancing system resilience and resource utilization in response to fluctuating workload demands.
  • Devised and implemented weekend shutdown protocols for EKS Fargate pods, optimizing resource allocation and cost efficiency during periods of reduced activity.
  • Implemented infrastructure as code (IaC) using Terraform to automate the provisioning of AWS resources, resulting in a reduction in deployment time.
  • Leveraged AWS Batch to efficiently manage and execute long-running ETL (Extract, Transform, Load) jobs, ensuring seamless data processing and workflow automation.
  • Played a key role in configuring and optimizing AWS Database Migration Service (DMS) setups, facilitating smooth and efficient data migration across diverse environments.
  • Contributed to the setup and optimization of CloudFront distributions, enhancing content delivery performance and reliability for web applications and services.
  • Experienced in writing GitHub Workflows, Kubernetes manifest files, and Dockerfiles.
  • Fair knowledge of observability, alerting, and tracing of distributed systems using tools like Prometheus and Grafana.
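A minimal, hypothetical sketch of such a GitHub Actions workflow: build a Docker image, push it to a private ECR registry, and apply Kubernetes manifests to an EKS cluster. Repository, cluster, region, and file names are invented for illustration, not taken from the actual project.

```yaml
# Hypothetical CI/CD workflow sketch; all names and paths are placeholders.
name: deploy-to-eks-fargate
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Log in to Amazon ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push image
        run: |
          IMAGE="${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }}"
          docker build -t "$IMAGE" .
          docker push "$IMAGE"

      - name: Deploy manifests to EKS
        run: |
          aws eks update-kubeconfig --name my-fargate-cluster --region us-east-1
          kubectl apply -f k8s/deployment.yaml -f k8s/service.yaml -f k8s/ingress.yaml
```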

Cloud Consultant

Planon Software Solutions Private Limited
Aug 2021 - Jan 2023 (1 yr 5 months)
  • Collaborated closely with development teams as a Cloud Consultant to architect and implement DevOps CI/CD pipelines using Jenkins, facilitating seamless code deployment across multiple environments.
  • Played a key role in deploying, automating, maintaining, and managing AWS-based DTAP (Development, Testing, Acceptance, Production) systems, ensuring optimal availability and performance.
  • Implemented robust server monitoring solutions using both internal application dashboards and third-party tools such as Dynatrace, ensuring proactive identification and resolution of potential issues.
  • Integrated application code/image scanning tools such as SonarQube into the Jenkins CI/CD pipeline to automatically flag security vulnerabilities or policy violations to developers before their code is deployed.
  • Executed deployment strategies for internal applications using the OpenShift platform on Cloud1, and the Kubernetes platform via Terraform code within Jenkins on Cloud2, ensuring consistent and reliable deployment processes.
  • Collaborated closely with development teams to facilitate code changes and commits in Bitbucket, seamlessly integrating them with the corresponding Jira tickets to maintain clear traceability and accountability throughout the development lifecycle.
  • Undertook various CloudOps tasks including customer environment upgrades, troubleshooting of backup processes, and resolution of disk space issues, ensuring uninterrupted operation of cloud-based systems.
  • Managed the release process by updating new software releases in S3 buckets and orchestrating their deployment, ensuring smooth transitions and minimal downtime.
  • Gained exposure to Agile methodology and the full SDLC, including design, testing, and deployment.

Cloud Engineer

Infosys Limited
Apr 2018 - Aug 2021 (3 yr 4 months)
  • Proficient in utilizing a wide array of AWS services including IAM, VPC, EC2, S3, RDS, ALB, WAF, and CloudWatch to architect and manage cloud infrastructure.
  • Managed and optimized scalable distributed systems in cloud environments, ensuring high performance and reliability.
  • Crafted IAM policies, roles, and user management strategies to facilitate delegated access control within AWS environments.
  • Provided L1 production support, using the Jira ticketing tool to address and resolve system issues promptly and accurately.
  • Well versed in Atlassian tools including Jira, Confluence, and Bitbucket, leveraging their functionality to streamline collaboration and project management.
  • Delivered training and Knowledge Transfer (KT) sessions to newly onboarded employees, ensuring smooth integration into the team and proficiency in relevant technologies and processes.
  • Created and managed Elastic Block Store (EBS) volumes and S3 buckets, and enabled versioning and lifecycle management.

Major Projects

3 Projects

Healthcare Rewards platform

Jan 2023 - Present (2 yr 9 months)
    Sunny Rewards' vision is to build a Healthcare Rewards platform that connects healthcare providers with external partners that provide task-tracking data and rewards-redemption facilities. The platform allows health plan consumers to enroll in tasks, track and complete them, earn associated rewards, and redeem those rewards through external partners.

Software product life cycle

Aug 2021 - Jan 2023 (1 yr 5 months)
    Planon is a global market-leading Smart Sustainable Building Management software company. It connects buildings, people, and processes by eliminating data silos and aligning solutions into one shared information platform.

Courier Service Platform

Apr 2018 - Aug 2021 (3 yr 4 months)
    Provided AWS cloud solutions for their infrastructure management and maintenance.

Certifications

  • AWS Certified Solutions Architect - Associate

AI-interview Questions & Answers

Okay. I'm a full-time DevOps engineer, mostly concentrated on the Azure and AWS cloud platforms, and to scope the skills: it is AWS DevOps and Azure. Coming to the CI/CD part, GitHub Actions and Jenkins are the CI/CD tools and all. Currently we're dealing with the .NET application framework, Microsoft-based development, into the Kubernetes environment. So I'm more into Kubernetes, Docker, different AWS services and Azure services, and various enhancements with the AWS services as well. Much experience in building CI/CD pipelines from scratch to end, with automation included.

So, there's a Kubernetes object called the Horizontal Pod Autoscaler (HPA). Using that object, I'll configure the autoscaler accordingly, and I will also mention the required resource requests and limits in the deployment, so that autoscaling occurs based on the requests and limits and the percentage threshold given in the horizontal autoscaler. The minimum and maximum replicas should be mentioned there. Also, the corresponding RBAC roles and role bindings should be created for the autoscaling to work properly. All this configuration will be done in the Kubernetes manifest files. The HPA and deployment files can be maintained separately, or they can be clubbed into the same file, based on the maintainability of the project.
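A minimal sketch of the configuration described above, with illustrative names and thresholds: an HPA targeting a Deployment, plus the resource requests/limits in the Deployment that the utilization percentage is computed against.

```yaml
# Illustrative HPA: scales the "web" Deployment between 2 and 10 replicas
# when average CPU utilization (relative to the pod's request) exceeds 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
# The target Deployment must declare requests/limits so the utilization
# percentage has a baseline; these values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:1.27
          resources:
            requests: { cpu: 250m, memory: 256Mi }
            limits: { cpu: 500m, memory: 512Mi }
```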

So what I'll say in Kubernetes is there are different states of pods: a Running status, an ImagePullBackOff status, or a CrashLoopBackOff status. Or it is still not started, in a Pending status, where it may take a long time to get into the Running state. The updates of the pods will be rolling updates, which means once we deploy the new replica of the pod, it will wait for the new pod to be in the Running state, and the existing pod will be terminated only after the new pod is running.
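A hedged sketch of the rolling-update behavior described: with maxUnavailable set to 0 and a readiness probe in place, a new pod must be Running and Ready before an old one is terminated. Names and values are illustrative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:1.27
          readinessProbe:                  # gates termination of old pods
            httpGet: { path: /, port: 80 }
```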

So I will deploy my application in two regions, say us-east-1 and us-east-2. Whenever an upgrade should happen to one of the two applications in the regions, using the load balancer I will redirect the traffic to the other region, which becomes the green deployment; at that time, the environment which is stable becomes the blue.
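The answer frames blue-green at the region and load-balancer level; inside a single cluster, the same cutover is often expressed by flipping a Service selector between two deployment "slots". A minimal sketch with hypothetical names:

```yaml
# Two parallel Deployments (labeled slot: blue and slot: green) would run
# side by side; the Service selector decides which one receives traffic.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
    slot: blue   # flip to "green" to cut over; back to "blue" to roll back
  ports:
    - port: 80
      targetPort: 8080
```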

So Terraform is an infrastructure automation tool. For the Kubernetes integration, suppose we need to create an Azure AKS cluster using Terraform. The required prerequisite for the AKS cluster is that the VPCs and subnets should be created first. So before getting to the cluster, I will create the VPCs and the corresponding resources in Terraform, and whatever resources are created, I will use those VPC IDs in the Kubernetes Terraform template. This will be managed in separate modules for reuse purposes.
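As a hedged illustration of automating such Terraform runs (the module composition itself, with the network module's outputs feeding the cluster module's inputs, would live in HCL): a minimal GitHub Actions workflow with invented paths and names.

```yaml
# Hypothetical workflow: infra/ is assumed to be a root module composing
# reusable "network" and "cluster" child modules; cloud credentials would
# be supplied via repository secrets (omitted here).
name: provision-cluster
on: workflow_dispatch

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Plan and apply
        run: |
          terraform -chdir=infra init
          terraform -chdir=infra plan -out=tfplan
          terraform -chdir=infra apply tfplan
```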

A namespace is a logical differentiation between resources in the cluster. Suppose, for example, you want a team to have access in Kubernetes only to their respective pods or respective resources. We will differentiate those resources and create them in different namespaces, so that one set of people will have access only to that namespace and access can be restricted. So for, say, a dev team, we can create a namespace with the respective name. This is just an example of where we can use namespaces.
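A minimal sketch of that restriction, with a hypothetical team name: a namespace plus a Role/RoleBinding that limits a group to resources inside that namespace only.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team            # hypothetical team namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-team-edit
  namespace: dev-team       # permissions apply only inside this namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit-binding
  namespace: dev-team
subjects:
  - kind: Group
    name: dev-team          # as asserted by the cluster's authenticator
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-team-edit
  apiGroup: rbac.authorization.k8s.io
```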

For Kubernetes deployments, or for any Kubernetes objects, we need to create a lot of manifest files. Helm charts club all the required manifest files together; it's like a package of whatever resources, whatever Kubernetes objects, we need for a successful Kubernetes deployment. Helm is used for this purpose. And for managing dependencies and customizations in Helm, we can change the values.yaml file for the required dependencies and any modifications the custom charts require. The values.yaml file can be configured accordingly, and by using that file we can deploy our customized Helm charts.
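An illustrative values.yaml override for a hypothetical chart, showing the usual customization points (image, replica count, ingress, and a subchart toggle); it would be applied with something like `helm upgrade --install web ./chart -f values.yaml`.

```yaml
# Hypothetical overrides; keys must match what the chart's templates expect.
replicaCount: 3
image:
  repository: registry.example.com/web
  tag: "1.4.2"
ingress:
  enabled: true
  host: web.example.com
# A dependency declared in Chart.yaml (e.g. a bundled redis subchart)
# can typically be toggled and configured from here as well:
redis:
  enabled: false
```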

So first and foremost, we can use different namespaces for different sets of teams. We can also have network policies assigned to the pods, so that one pod will access another pod only under the set network policies and rules. And clients outside the cluster should be able to access Kubernetes only with the specified IAM roles and service accounts; using service accounts for this purpose is very helpful for following security measures. Role-based access control (RBAC) should also be implemented to prevent unauthorized access. These are the things we can look into.
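A minimal sketch of the network-policy part (namespace and labels hypothetical): default-deny ingress for every pod in the namespace, then an explicit allow from "frontend" pods to "api" pods.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dev-team
spec:
  podSelector: {}           # empty selector = every pod in the namespace
  policyTypes: [Ingress]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: dev-team
spec:
  podSelector:
    matchLabels: { app: api }
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels: { app: frontend }
      ports:
        - protocol: TCP
          port: 8080
```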

So step for applications.

It's going to be a cluster anyway, the Kubernetes cluster. At present I'm using Prometheus and Grafana, and also Datadog, for the Kubernetes cluster. I can explain regarding the Datadog implementation. Datadog will capture the real-time logs of the Kubernetes cluster, and it also has a provision for application performance monitoring. You can also have synthetic testing with Datadog, and any HTTP errors we have to monitor regarding an application, we can monitor those too.
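Since Prometheus and Grafana are mentioned alongside Datadog, here is an illustrative Prometheus alerting rule for the restart states discussed earlier, assuming kube-state-metrics is installed; the threshold and labels are assumptions.

```yaml
groups:
  - name: kubernetes-pods
    rules:
      - alert: PodCrashLooping
        # kube-state-metrics exposes container restart counts; a sustained
        # nonzero restart rate usually indicates CrashLoopBackOff.
        expr: rate(kube_pod_container_status_restarts_total[15m]) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting repeatedly"
```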

For application performance, as I have said in the previous answer, Datadog will be best for it, for the application side too, because it gives very detailed views; even with millions of application log lines, it will render them seamlessly. Based on the latency seen in the flow of the logs, and also on the application performance, we can decide on our infrastructure planning.