
Hello, my name is Ashish Goel. As a DevOps/Cloud manager with over 10 years of experience in the tech industry, I have honed my skills in leading cross-functional teams and implementing efficient, innovative solutions. My passion for driving continuous improvement and delivering high-quality products has allowed me to successfully navigate complex projects and exceed business objectives.
This passion for automation and streamlining processes has led me to specialize in DevOps and SRE, where I have successfully implemented and managed client projects across multiple industries. My expertise in these areas has allowed me to optimize operations, reduce downtime, and increase business efficiency.
My expertise lies in developing and implementing DevOps practices, optimizing processes, and fostering collaboration between development and operations teams. I have a strong technical background, with a deep understanding of cloud platforms such as Azure and AWS, tools such as Terraform and Kubernetes, CI/CD, cost optimization, and scripting languages like Shell, PowerShell, and Python. This, combined with my leadership skills, allows me to drive a culture of continuous integration and delivery, resulting in faster delivery times and improved product quality.
One of my key strengths is my ability to adapt to new technologies and environments quickly. This has enabled me to stay ahead of the curve and continuously enhance my skills to provide the best solutions for my clients. I am always eager to learn and stay updated with the latest trends and techniques in the industry.
In addition to my technical skills, I am also well-versed in cost optimization strategies, helping businesses save resources and increase their bottom line. My strong analytical skills and attention to detail have enabled me to identify areas for improvement and implement cost-saving measures.
I am a strong team player and thrive in collaborative environments. If you're looking for a dedicated DevOps/SRE professional who can drive operational excellence and deliver tangible results, I would love to connect. Let's explore how we can work together to achieve your business objectives.
Principal Architect, Abinbev
DevOps Engineer, Cerner Healthcare Services Pvt Ltd
AWS Linux Engineer, Tata Consultancy Services Ltd
Linux Admin, Infosys Ltd
Azure

AWS

Terraform

Azure Defender for Cloud

Prometheus
Grafana
Jenkins

Gitlab CI/CD
Docker

EC2

ELB

CloudWatch

SonarQube

Maven

ArgoCD

VPC

S3

Route53

SNS

EBS
I have around 10 years of experience. I started my career as a Linux admin, where I worked for 3 years on a lot of Linux-related tasks, be it NFS-related issues or server performance problems. Then I moved more into the cloud and DevOps world. Initially, I worked on AWS and CI/CD Jenkins pipelines. In my current role, I'm mainly working on Azure and AWS cloud with Terraform and Kubernetes, and I also do scripting in PowerShell and shell.
So if we have to securely transmit data between two AWS services, we can use a VPC gateway endpoint or a private (interface) endpoint. We have to create those endpoints between the services and make sure the traffic passes through them, so we create the endpoints and then create a route so that all the traffic goes via those endpoints only.
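A minimal Terraform sketch of this setup (the region, CIDR, and resource names are illustrative placeholders, not from the original answer):

```hcl
provider "aws" {
  region = "ap-south-1"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
}

# Gateway endpoint: S3-bound traffic from this VPC stays on the AWS
# network instead of traversing the public internet.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.ap-south-1.s3"
  vpc_endpoint_type = "Gateway"

  # Attaching the endpoint adds the S3 prefix-list route to this table,
  # so traffic is routed through the endpoint only.
  route_table_ids = [aws_route_table.private.id]
}
```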
Yeah. In order to have fault tolerance for an AWS RDS database, we can have another replica of the database in another region. We can also have a read replica, maybe in another availability zone, so that all the read queries are routed to that read replica. We basically have to create redundancy of that database in another region.
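A hedged Terraform sketch of a Multi-AZ primary with a read replica, assuming an AWS provider is already configured (engine, instance class, and identifiers are assumptions; a cross-region replica would additionally need a second provider alias and the source ARN):

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

# Primary instance with Multi-AZ: AWS keeps a synchronous standby in
# another availability zone and fails over automatically.
resource "aws_db_instance" "primary" {
  identifier          = "app-db"
  engine              = "mysql"
  instance_class      = "db.t3.medium"
  allocated_storage   = 50
  username            = "dbadmin"
  password            = var.db_password
  multi_az            = true
  skip_final_snapshot = true
}

# Read replica: read queries can be pointed at this endpoint to
# offload the primary and add redundancy.
resource "aws_db_instance" "read_replica" {
  identifier          = "app-db-replica"
  replicate_source_db = aws_db_instance.primary.identifier
  instance_class      = "db.t3.medium"
  skip_final_snapshot = true
}
```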
So to automatically scale a high-traffic application, we can use an auto scaling group, in front of which we create a load balancer. The virtual machines in the back end are part of the auto scaling group, so the application load balancer routes the traffic to the virtual machines, and based on the metrics defined in the auto scaling group, the scaling happens. To monitor this setup, we can use CloudWatch or a third-party tool like Datadog, Nagios, or Prometheus.
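A rough Terraform sketch of an ALB fronting an auto scaling group with a CPU target-tracking policy, assuming an AWS provider is configured (the AMI, VPC/subnet IDs, instance type, and the 60% CPU target are placeholder assumptions):

```hcl
variable "vpc_id" { type = string }
variable "ami_id" { type = string }
variable "public_subnet_ids" { type = list(string) }
variable "private_subnet_ids" { type = list(string) }

resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id # approved AMI, supplied externally
  instance_type = "t3.medium"
}

resource "aws_lb" "app" {
  name               = "app-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "app" {
  name     = "app-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}

# Instances register behind the ALB and scale between 2 and 10.
resource "aws_autoscaling_group" "app" {
  min_size            = 2
  max_size            = 10
  vpc_zone_identifier = var.private_subnet_ids
  target_group_arns   = [aws_lb_target_group.app.arn]

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}

# Scale out/in automatically to hold average CPU around 60%.
resource "aws_autoscaling_policy" "cpu" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.app.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60
  }
}
```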
Yeah. In order to use encryption, we can use either a platform-managed key or a customer-managed key, but a customer-managed key is always the better option. So we can use a customer-managed key to do the encryption in S3 and RDS. We have to create our own key, of which we are the owner, so it is not managed by AWS but by us only. We can keep that key in the key vault (KMS), and then S3 or RDS can be configured to use that key.
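As a sketch, the Terraform below creates a customer managed KMS key and points S3 default encryption and RDS storage encryption at it (bucket and database names are placeholders, assuming an AWS provider is configured):

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

# Customer managed key: we own the key policy and rotation,
# rather than relying on the AWS-managed default key.
resource "aws_kms_key" "data" {
  description         = "CMK for S3 and RDS encryption"
  enable_key_rotation = true
}

resource "aws_s3_bucket" "secure" {
  bucket = "my-secure-data-bucket" # placeholder name
}

# Default bucket encryption using the CMK instead of SSE-S3.
resource "aws_s3_bucket_server_side_encryption_configuration" "secure" {
  bucket = aws_s3_bucket.secure.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.data.arn
    }
  }
}

# RDS storage encrypted with the same CMK.
resource "aws_db_instance" "encrypted" {
  identifier          = "secure-db"
  engine              = "mysql"
  instance_class      = "db.t3.medium"
  allocated_storage   = 20
  username            = "dbadmin"
  password            = var.db_password
  storage_encrypted   = true
  kms_key_id          = aws_kms_key.data.arn
  skip_final_snapshot = true
}
```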
So in order to manage traffic and security policies in an AWS VPC, we need a VPC in which we define a route table with the internet gateway for the public subnet and the NAT gateway for the private subnet. We have to define the route tables, configure network ACLs at the subnet level and, similarly, security groups at the instance level. We can also have endpoints so that the services communicate with each other privately, and maybe load balancers so that the traffic is evenly distributed among the back ends.
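A compact Terraform sketch of that layout, assuming an AWS provider is configured (the CIDRs and the HTTPS-only security group rule are illustrative assumptions):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# Public subnet routes 0.0.0.0/0 to the internet gateway.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}

# NAT gateway so private instances can reach out without being reachable inbound.
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.private.id
}

# Instance-level firewall: allow only HTTPS in, everything out.
resource "aws_security_group" "app" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```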
Expiration in days can be changed from 365 to 30 or 60 days from a cost optimization point of view. We can also enable bucket policies on the bucket from a security point of view.
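For example, an S3 lifecycle rule along those lines could look like the Terraform sketch below (the bucket name and the 30/60-day values are illustrative):

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "app-logs-bucket-example" # placeholder name
}

# Move objects to a cheaper tier after 30 days and expire them at 60,
# instead of keeping everything in STANDARD for 365 days.
resource "aws_s3_bucket_lifecycle_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    id     = "expire-old-objects"
    status = "Enabled"

    filter {} # apply to all objects in the bucket

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 60
    }
  }
}
```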
So we can use an approved AMI instead of some random AMI. Similarly, we can have the virtual networks, subnets, and security groups configured for the EC2 instance from a security point of view. And also, we can use an instance type that is sized to the requirement instead of picking one arbitrarily, from a cost point of view.
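A hedged Terraform sketch of such an instance, assuming an AWS provider is configured (the approved AMI, VPC/subnet IDs, and the t3.small sizing are assumptions supplied from outside):

```hcl
variable "vpc_id" { type = string }
variable "private_subnet_id" { type = string }

variable "approved_ami_id" {
  description = "AMI vetted/hardened by the platform team, not a random marketplace image"
  type        = string
}

# Restrictive security group: HTTPS from the internal network only.
resource "aws_security_group" "web" {
  vpc_id = var.vpc_id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "app" {
  ami                    = var.approved_ami_id
  instance_type          = "t3.small" # sized to the workload, not oversized
  subnet_id              = var.private_subnet_id
  vpc_security_group_ids = [aws_security_group.web.id]

  root_block_device {
    encrypted = true # encrypt the EBS root volume
  }
}
```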
So in order to do a migration, we need to check whether we can migrate onto virtual machines, in which case we basically do a lift-and-shift migration to some virtual machines. And if we have to use PaaS services, then we can use PaaS components like Lambda, into which we can deploy the .NET code of the application in AWS.
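As an illustration of the PaaS route, here is a Terraform sketch of a .NET Lambda function (the handler string, zip path, runtime version, and role name are hypothetical placeholders for the real application artifact):

```hcl
# IAM role that Lambda assumes to run the function.
resource "aws_iam_role" "lambda_exec" {
  name = "dotnet-app-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

# Hypothetical packaged .NET function deployed as a Lambda.
resource "aws_lambda_function" "dotnet_app" {
  function_name = "dotnet-app"
  runtime       = "dotnet6"
  handler       = "MyApp::MyApp.Function::FunctionHandler" # placeholder handler
  filename      = "publish/myapp.zip"                      # placeholder artifact path
  role          = aws_iam_role.lambda_exec.arn
  memory_size   = 512
  timeout       = 30
}
```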
So with CI/CD pipelines, the standards are maintained. The code goes through various stages, like unit testing and integration testing. Then we can have SonarQube for code coverage and code analysis. We can also push the JAR file created (or, for a .NET application, the built artifact) into a Nexus repo or a JFrog repo; then with the help of Docker we create an image, and then we do the deployment onto the Kubernetes cluster via the YAML files. We can also have a scanning tool, something like Trivy, to scan the images. So if we follow a CI/CD pipeline, we can adhere to a lot of standards and have scanning tools in between, and everything is automated, whereas with a manual process some things may be missed and may not follow a proper structure. Moreover, this is fast compared to a manual effort: we just push the code, the pipeline runs, and all the stages happen. We can also catch more errors, since we have tools like SonarQube for code analysis, and similarly unit testing and integration testing, so it is a good practice.
So we can use CI/CD pipelines in which the pipeline first goes through unit and integration testing, followed by the SonarQube analysis; then the image is created by Docker and pushed to a container registry, and the deployments use that image for the orchestration. We can do the deployment via the ArgoCD tool into the Kubernetes cluster.