
Cloud engineering expert who helps clients with the architecture, design, development, migration, and delivery of IT solutions on the cloud using DevOps methodologies.
Member of the Cloud Engineering Team responsible for automating infrastructure provisioning in the AWS cloud using open-source software (the DevOps toolchain), and for working side by side with application development teams to rapidly deploy highly available infrastructure that ensures the success of agile-managed business projects.
• Experienced DevOps, Build and Release, and Cloud Engineer on Amazon Web Services (AWS), with a major focus on Continuous Integration, Continuous Delivery/Deployment, and infrastructure automation.
• Manage the team by assigning the right tasks to team members and ensuring the right deliverables within the proposed ETA, meeting the agreed standards from both an application and a security perspective.
• Experience with Docker, Kubernetes, and EKS clustering frameworks; scripting experience with Terraform, Ansible, Groovy (Jenkins), and Helm.
• Experience building, designing, and implementing scalable cloud-based web applications on AWS across PaaS, IaaS, and SaaS models.
• Build and design secure, well-instrumented, highly available, and fully automated, reproducible infrastructure; optimize infrastructure deployments for speed, cost, availability, and scale.
• Work with open-source configuration management tools to automate the deployment of virtual server instances and environments in AWS, using a DevOps toolchain that includes Jenkins, Terraform, Ansible, ServiceNow, Docker, and native AWS tools.
• Conduct research on emerging Cloud and DevOps technologies in support of infrastructure development efforts and recommend technologies that increase cost effectiveness and infrastructure flexibility.
• Troubleshoot system and software failures and make changes to improve automated performance, following established change management practices.
• Assist with defining the implementation schedule for selected products for production rollout; deploy governance and monitoring tools to ensure the security and proper use of cloud resources; enforce dormancy requirements and IAM policies.
• Oversee usage charges and billing accuracy, eliminating wasteful resources such as developer systems not made dormant when not in use.
• Update and maintain cloud documentation, including strategy, DevOps engineering documents, change control, trouble tickets, procedures, and other documentation.
• Cost management and optimization in cloud environments.
Assistant Consultant
Tata Consultancy Services

Technology Analyst
R R Donnelley India Outsourcing Pvt. Ltd.

Engineer - Technical Support
Computer Science Corporation India Pvt. Ltd.

FMS Engineer
CMS Computer Ltd.
Terraform

AWS

EKS

Jenkins

Ansible

Ansible Tower

Nexus

DevOps

DevSecOps

CI/CD

YAML

Docker

Kubernetes

Helm Charts

Amazon RDS

Amazon S3

Amazon Lambda

Amazon VPC

EC2

ArgoCD

Qualys

VMware ESXi
Yeah, hi. This is Ganesh. I have a total of 16 years of IT experience. Currently, I'm working in the cloud infrastructure unit, and I have been working on the AWS platform for around 8 years.
Securing sensitive data: to keep it from being committed with the code, we can declare it in a separate variables file and put the values in a separate .tfvars file, passing it via the Terraform variable files without committing that file to VCS. Alternatively, if we want to pass the AWS access and secret keys through Jenkins, we can use the Jenkins Credentials plugin: we store and update the passwords there, and Terraform can fetch those credentials from the plugin at run time. In that way, we can secure the sensitive data.
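A minimal sketch of the approach described above (the variable and file names here are illustrative, not from the original):

```hcl
# variables.tf — committed to VCS; no secret values here
variable "db_password" {
  type      = string
  sensitive = true   # Terraform masks this value in plan/apply output
}

# secrets.tfvars — NOT committed (add it to .gitignore)
# db_password = "..."
```

The value is then supplied at run time with `terraform apply -var-file=secrets.tfvars`, or via an environment variable such as `TF_VAR_db_password` injected by the Jenkins Credentials plugin.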
Sure. By using a StatefulSet, we can accomplish this for a distributed application. Here we are using Argo CD: with Argo CD it is continuous deployment, and we can manage the state from the GitOps code. Using that, the same state will be available throughout, and the application state will not change without a corresponding deployment.
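A minimal StatefulSet sketch along those lines (all names and the image are illustrative); each replica gets a stable network identity and its own volume, while Argo CD keeps the manifest in Git as the source of truth:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db          # hypothetical name
spec:
  serviceName: demo-db   # headless Service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: db
          image: postgres:16   # example image
  volumeClaimTemplates:        # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```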
How did you choose between Azure Blob Storage and Azure Files? I do have some knowledge of Azure: Azure Blob Storage is for storing objects, the same kind of service as an S3 bucket in AWS, and there is effectively no practical limitation on it. Azure Files is file storage, which we can mount on Azure virtual machine instances. But I'm not mostly into Azure; I have mainly worked on the AWS cloud.
What would be the considerations for choosing between AWS Fargate and EC2 instances? EC2 instances are like dedicated machines on which applications run continuously. Fargate suits on-demand workloads: for example, if we want to run cron jobs on EKS, Fargate will create capacity on demand, execute the job, and then terminate it. So for that type of situation we use Fargate, and for applications running continuously on a server we choose EC2 instances.
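One way to express the cron-jobs-on-Fargate idea above, assuming the cluster is managed with eksctl (cluster, profile, and namespace names are illustrative): a Fargate profile makes pods in the selected namespace run on on-demand Fargate capacity instead of dedicated EC2 nodes:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster     # hypothetical cluster name
  region: us-east-1
fargateProfiles:
  - name: batch-jobs
    selectors:
      - namespace: cron-jobs   # pods scheduled here run on Fargate
```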
How does containerization with Docker enhance application deployment in the cloud? It simplifies application deployments: we can maintain the image in an artifact repository, pull the Docker image from there, and build and run the application anywhere, on EKS, Docker Swarm, or similar orchestration tools.
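A minimal example of that build-once, run-anywhere idea (base image, registry, and entry point are illustrative): the image built here can be pushed to a registry and pulled by EKS, Docker Swarm, or any other orchestrator:

```dockerfile
FROM python:3.12-slim          # example base image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]       # hypothetical entry point
```

Built and published with, for example, `docker build -t registry.example.com/demo-app:1.0 .` followed by `docker push registry.example.com/demo-app:1.0`.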
So the URL is down here; maybe the git clone will not work, so we need to have a separate command for the git clone.
Analyze this Python function designed to list all the unique security groups attached to running EC2 instances. What logic issue can you spot, and how would you fix it? For security groups, we need to check which OS it is: if it's a Windows machine, we need to allow the RDP protocol port, and in the same way the required application port numbers; if it's a Linux machine, we need SSH access and then the application ports. In the security groups, we need to open only the specific ports that are required.
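The function under discussion is not reproduced in the transcript, so as a hedged sketch: a common logic issue in such a function is collecting groups into a list (which keeps duplicates) and forgetting to filter on instance state. A corrected pure-Python version, operating on data shaped like the EC2 `describe_instances` response (sample data only; no AWS call is made):

```python
def unique_security_groups(reservations):
    """Return sorted unique security-group IDs attached to running instances.

    `reservations` mirrors the "Reservations" list from EC2
    describe_instances; a set deduplicates groups shared across
    instances, and the state check skips non-running ones.
    """
    groups = set()
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            if instance.get("State", {}).get("Name") != "running":
                continue  # the common bug: counting stopped instances too
            for sg in instance.get("SecurityGroups", []):
                groups.add(sg["GroupId"])
    return sorted(groups)


sample = [
    {"Instances": [
        {"State": {"Name": "running"},
         "SecurityGroups": [{"GroupId": "sg-aaa"}, {"GroupId": "sg-bbb"}]},
        {"State": {"Name": "stopped"},
         "SecurityGroups": [{"GroupId": "sg-ccc"}]},
    ]},
    {"Instances": [
        {"State": {"Name": "running"},
         "SecurityGroups": [{"GroupId": "sg-aaa"}]},  # duplicate of sg-aaa
    ]},
]
print(unique_security_groups(sample))  # ['sg-aaa', 'sg-bbb']
```

With real AWS data, the same function can be fed the `Reservations` list from a boto3 `describe_instances` call.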
Design, with Terraform, a multi-tier web application using AWS services. Multi-tier here in the sense of a 3-tier application: we can put an ALB (Application Load Balancer) in front instead of exposing EC2 directly. When a user tries to access our application, the traffic first hits the ALB; from there it goes to the EC2 instances, which we register in a target group. In the target group we assign a port number, port 80, for accessing the web application. Behind the web tier we can enable another load balancer for the application/API tier, and from the API tier the databases are accessed. That is how we can design it in Terraform and build the application using AWS services: an Application Load Balancer, EC2 instances, security groups, and, if anything needs to be attached to the EC2 instances, IAM roles with the required permission policies.
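A partial Terraform sketch of the web tier described above (resource names are illustrative, and the `var.public_subnet_ids`/`var.vpc_id` references are assumed to be defined elsewhere in the configuration):

```hcl
resource "aws_lb" "web" {
  name               = "web-alb"             # hypothetical name
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "web" {
  name     = "web-tg"
  port     = 80                              # web tier listens on port 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.web.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}
```

The API tier would repeat the same pattern with an internal load balancer, with security groups allowing traffic only from the tier above.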
If required to build a CI/CD pipeline in the Kubernetes ecosystem, which tools would you review? For the CI/CD pipeline, we can use Jenkins as the orchestration tool: from there we build the Docker image and scan it with security tools. After building the image, we push it to a repository such as Nexus. From there, we can have a GitOps tool like Argo CD; using Argo CD, we do continuous deployment to the Kubernetes workloads. In that way, we can build a CI/CD pipeline for Kubernetes applications.
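For the deployment half of that pipeline, an Argo CD Application manifest along these lines (the repo URL, paths, and names are illustrative) keeps the cluster synced to the Git state after Jenkins pushes a new image tag:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app           # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/demo/deploy.git   # assumed GitOps repo
    targetRevision: main
    path: k8s/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true
      selfHeal: true   # revert manual drift back to the Git state
```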
Which AWS service would you use to create a centralized routing solution? If it's for security and governance purposes, we can use AWS Control Tower. In Control Tower we can have default audit and logging: a log archive account is created automatically, and by using that logging we can have a centralized logging solution. Control Tower is useful for security and works together with AWS Organizations; once we enable Control Tower, we get separate audit and log archive accounts, which contain centralized details of all the accounts created via Control Tower. Beyond that, multiple AWS security and monitoring services are available, like CloudWatch.
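For the routing part of the question specifically, the service usually chosen is AWS Transit Gateway, which acts as a central hub connecting multiple VPCs (and on-premises networks). A minimal Terraform sketch, with the VPC and subnet references assumed to be defined elsewhere:

```hcl
resource "aws_ec2_transit_gateway" "hub" {
  description = "central routing hub"
}

resource "aws_ec2_transit_gateway_vpc_attachment" "app_vpc" {
  transit_gateway_id = aws_ec2_transit_gateway.hub.id
  vpc_id             = var.app_vpc_id      # assumed variable
  subnet_ids         = var.app_subnet_ids  # assumed variable
}
```

Each attached VPC then adds routes pointing at the transit gateway, so inter-VPC routing is managed in one place.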