Parteek Adlakha

Vetted Talent
With over three years of hands-on experience and continuous learning in my role, I have developed a deep understanding of its intricacies and the expertise needed to navigate challenges effectively. My ability to adapt to changing circumstances, communicate clearly with stakeholders, and solve problems creatively makes me a valuable asset to any team, and I am eager to keep expanding my skills to achieve even greater success.
  • Role

    Golang Developer

  • Years of Experience

    4 years

  • Professional Portfolio

    View here

Skillsets

  • PromQL
  • Spring Boot
  • MongoDB
  • Linux
  • Kafka
  • Java 17+
  • Hibernate
  • Golang
  • Firebase
  • DynamoDB
  • C++
  • AWS Event Bridge
  • Windows
  • Sumo Logic
  • SQL
  • Python 3
  • Coralogix
  • Prometheus
  • PHP
  • Oracle
  • Node.js
  • MySQL
  • macOS
  • Laravel
  • Kubernetes
  • gRPC
  • Grafana
  • Git
  • Docker

Vetted For

9 Skills
  • Role: Senior Golang Engineer (Remote), AI Screening
  • Result: 69% (Score: 62/90)
  • Skills assessed: Communication, API development, Database Design, AWS, Golang, Kubernetes, Problem Solving Attitude, Redis, Security

Professional Summary

4 Years
  • Jul 2024 - Present (1 yr 3 months)

    Senior Software Development Engineer

    Cars24
  • Apr 2021 - Jul 2024 (3 yr 3 months)

    Senior Software Development Engineer

    Razorpay

Applications & Tools Known

  • PHP
  • AWS
  • SFTP
  • Golang
  • MySQL
  • Java
  • Postgres
  • Grafana
  • Oracle
  • Redis
  • Git
  • Prometheus
  • Coralogix
  • Kubernetes
  • Docker
  • AWS EventBridge

Work History

4 Years

Senior Software Development Engineer

Cars24
Jul 2024 - Present (1 yr 3 months)
  • Designed a fully automated Self-Serve Loan Application Flow, reducing user drop-off by 70%.
  • Accelerated product development by 20% by building SDKs for reusable components and standardizing internal service integrations.
  • Developed a centralized Data Management Service to manage configurations, empowering non-tech teams to make updates.
  • Led the development of a Used Car Loan product from scratch, now serving 10,000+ customers across multiple channels.
  • Engineered a high-conversion C2C Digital Loan Experience that now powers 80% of all loans disbursed through the Cars24 platform.

Senior Software Development Engineer

Razorpay
Apr 2021 - Jul 2024 (3 yr 3 months)
  • Led Razorpay’s Malaysia launch, driving 60Mn MYR GMV within the first quarter.
  • Enhanced automated SEA payout systems generating 10Mn MYR/month, cutting go-live time by 70% and boosting merchant activation.
  • Redesigned refund infrastructure, cutting merchant onboarding time by 50% and supporting 7Mn MYR/month in refunds.
  • Pioneered PCI-compliant tokenized payments, scaling to 120K+ secure transactions/month.
  • Integrated EMI payments with 4 partner banks, boosting GMV by 12%.
  • Improved anomaly detection rates by 20% and reduced mean time to resolution.
  • Streamlined sprint planning, increasing project delivery efficiency by 20%.
  • Guided junior engineers, raising team productivity by 40% through coaching and code quality efforts.

Achievements

  • Reduced onboarding time on payouts by 70%
  • Established payment system in Malaysia with a GMV of 60Mn MYR monthly
  • Led the automated payout systems setup generating GMV of 10Mn MYR
  • Designed and implemented seamless refund flow with a GMV of 7Mn MYR
  • Reduced onboarding time for refunds by 50%
  • Elevated error detection rates by 20% by deploying advanced monitoring
  • Revolutionized saved card payments via Tokenized Payment Flow
  • Forged integration with 4 banks increasing GMV of EMI card payments by 12%
  • Contributed to a 20% increase in project delivery efficiency
  • Facilitated the mentorship of junior engineers, increasing productivity by 40%
  • Contributed 1 test case to LeetCode Open-source Community
  • Certificate of Appreciation for war room volunteering and timely tokenization completion
  • SPOT award for demonstrating ownership and autonomy for setting up Payment System

Major Projects

2 Projects

URL Shortener Service

    Built a scalable service handling 10K+ redirects/day (validated via load testing), achieving 99.9% uptime with sub-millisecond responses using Redis caching.

Weather Forecast App

    Created a real-time weather app using the OpenWeatherMap API, serving accurate forecasts for 50+ global cities.

Education

  • Bachelor of Engineering - Computer Science Engineering

    Chitkara University (2022)

Certifications

  • Complete Guide to Protocol Buffers 3 [Golang] (Udemy)

  • Programming Foundations with JavaScript, HTML5, and CSS, Duke University (Coursera)

  • The Joy of Computing with Python 3, IIT Madras (NPTEL)

AI-interview Questions & Answers

Hi, my name is Parteek, and I joined Razorpay around April 2021, so it's been around 3 years as a full-time employee. Before that there was a 15-month internship, but it was an internship in name only; I was working like a full-time employee. There, I have worked mainly on three things. First, the cards team, where we maintained the card payment features and fixed any bugs reported by merchants. The second was tokenization, where we migrated the saved-card flow from a local vault to network tokenization. And the third team is internationalization, where we are working on taking Razorpay outside India; by that I mean generalizing the Razorpay code base so that it can support multi-currency payments and we are able to onboard merchants outside India as well, with their local payment methods, local currencies, local settlements, everything. So that's the kind of work I have done at Razorpay. Apart from work, my hobbies are playing badminton and going trekking and cycling on weekends, and reading books is a hobby I'm trying to develop. I'm from a very small town called Sadora, on the border of Haryana and Himachal; it lies in Haryana, and it's a privilege that I can go to Himachal anytime, 15 or 20 minutes from my home. So yeah, that's it from me.

I guess, to achieve this, there are a few things we can do. On the infra side, with AWS we can use HPA, Horizontal Pod Autoscaling, so we can scale the pods whenever required on the basis of load itself; that's autoscaling at the infra level. On the coding side, we can apply multithreading as well, and if we combine autoscaling with multithreading it's a very good combination to support the scale. We should also take care of which processes are synchronous and which are asynchronous: if we can distinguish between them, we can save a lot of time, because for an asynchronous process we can return a 2xx response to our consumer or client right away and keep completing the task in the background. So as I said, three options: autoscaling, multithreading, and distinguishing between synchronous and asynchronous tasks. One more thing we can do is separate the DBs, a read DB and a write DB: some DBs have good write speed while others have good read times, so we can split on that basis as well. I guess that will fulfill our requirement.
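A rough Go sketch of the synchronous/asynchronous split described above: a hypothetical /submit handler returns 202 immediately while a background goroutine finishes the work (a real system would use a durable queue such as Kafka or SQS rather than an in-process channel).

    package main

    import (
        "fmt"
        "net/http"
    )

    // tasks is a small in-process work queue, purely for illustration.
    var tasks = make(chan string, 100)

    // worker drains the queue in the background.
    func worker() {
        for t := range tasks {
            fmt.Println("processing:", t) // stand-in for the real async work
        }
    }

    // handler acknowledges the request immediately with a 2xx response
    // and leaves the heavy lifting to the background worker.
    func handler(w http.ResponseWriter, r *http.Request) {
        tasks <- r.URL.Query().Get("job")
        w.WriteHeader(http.StatusAccepted) // 202: accepted for async processing
    }

    func main() {
        go worker()
        http.HandleFunc("/submit", handler)
        http.ListenAndServe(":8080", nil)
    }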

Here we have to optimize a slow API caused by database bottlenecks. I would like to mention one thing here: normally, whenever we trigger an API, it hits the DB, but in certain scenarios we can make use of Redis. If we use Redis, we can decrease the system's reliance on the main DB, so a Redis cache is one of the solutions. The second is using offsets while fetching items from the DB. Suppose we are loading a page that requires only 50 to 60 items; we don't need to fetch all the items from the table and then take the first 50 or 60. In those scenarios we should use OFFSET, which makes sure we take only the items we need and stay aligned with the page numbering. So the solutions are a Redis cache, separating the read and write DBs as I mentioned, and using offsets; that reduces the load on the DB, and then the API can work comparatively faster.
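A minimal Go sketch of both ideas, assuming a hypothetical "items" table, a made-up cache key layout, and the go-redis client: the cache-aside lookup hits Redis first, and the fallback query uses LIMIT/OFFSET so only the needed page is ever fetched.

    package pagecache

    import (
        "context"
        "database/sql"
        "fmt"
        "strings"
        "time"

        "github.com/redis/go-redis/v9"
    )

    // GetPage returns one page of item names, checking Redis first
    // (cache-aside) and falling back to a LIMIT/OFFSET query so the
    // whole table is never loaded.
    func GetPage(ctx context.Context, rdb *redis.Client, db *sql.DB, page, size int) (string, error) {
        key := fmt.Sprintf("items:page:%d:%d", page, size)

        // Cache hit: skip the database entirely.
        if cached, err := rdb.Get(ctx, key).Result(); err == nil {
            return cached, nil
        }

        // Cache miss: fetch only the rows this page needs.
        rows, err := db.QueryContext(ctx,
            "SELECT name FROM items ORDER BY id LIMIT ? OFFSET ?", size, page*size)
        if err != nil {
            return "", err
        }
        defer rows.Close()

        var names []string
        for rows.Next() {
            var n string
            if err := rows.Scan(&n); err != nil {
                return "", err
            }
            names = append(names, n)
        }

        // Populate the cache with a short TTL so hot pages stay fast.
        result := strings.Join(names, ",")
        rdb.Set(ctx, key, result, time.Minute)
        return result, nil
    }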

The question is about when to prefer mutexes over channels in Go, and vice versa. I would prefer mutexes here, because they help in synchronizing the resource we are using in the process. If we use channels, we rely on the channel to give us some output, and on the basis of that we close the goroutine; if anything fails in between, that can create a state of inconsistency. But if we go with mutexes, we can do this resource management much better than with channels: we start by locking the resource with the mutex and put the unlock in a defer, so even if the function or some code executes incompletely, that part of the code will execute every time and free the resource. In the case of a channel, we are relying on the channel alone; only once we get some input from it do we perform an action. So I believe using a mutex is better than using channels in Go here.
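A minimal Go sketch of the defer-unlock pattern described in the answer; Counter and Inc are hypothetical names.

    package main

    import (
        "fmt"
        "sync"
    )

    // Counter guards shared state with a mutex.
    type Counter struct {
        mu    sync.Mutex
        count int
    }

    // Inc locks the shared state and defers the unlock, so the mutex is
    // released even if the code between Lock and return panics.
    func (c *Counter) Inc() {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.count++
    }

    func main() {
        var c Counter
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.Inc()
            }()
        }
        wg.Wait()
        fmt.Println(c.count) // always 100: no lost updates
    }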

There are two practices I normally follow, and I'll divide this problem into two parts: when to throw an error and how to throw it. Normally, in the case of any validation failure or exception handling, I prefer to return a custom error that defines what the reason for the error is; not a full explanation, just a headline, so that a person reading it can understand what the likely root cause is. I also try to add proper tracing in those scenarios, so if there is some failure and I need to check the logs, the logs should clearly tell me the reason for the failure, instead of me going into the DB, checking multiple entries, and doing multiple iterations to find the cause. The second practice is for external calls: there is a possibility that the external gateway or API is down, so I always make sure I'm handling panics and similar scenarios in a deferred recover, so that if some unhandled exception is thrown I can catch it, and where possible we can use a default value so we don't fail the whole process. This makes sure that if an external API is down, we get alerts; there might be a state of data inconsistency for some time, but we can fix that. Depending on the importance of the operation, we decide whether we need to fail it or not, but handling such errors in the defer makes sure we get alerts in proper time, at run time, so we are not relying on some third party; we get the alert in real time and can work on it proactively.
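A small Go sketch of both practices, with hypothetical names (ValidationError, callExternal): a custom error carrying a log-friendly headline, and a deferred recover that turns a panic from an external call into an ordinary error.

    package main

    import (
        "errors"
        "fmt"
    )

    // ValidationError is a custom error with a short headline for the logs.
    type ValidationError struct {
        Field  string
        Reason string
    }

    func (e *ValidationError) Error() string {
        return fmt.Sprintf("validation failed on %q: %s", e.Field, e.Reason)
    }

    // callExternal wraps a flaky third-party call; the deferred recover
    // converts an unexpected panic into an error so the caller gets a
    // real-time signal instead of a crashed process.
    func callExternal(fn func() string) (result string, err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("external call panicked: %v", r)
            }
        }()
        return fn(), nil
    }

    func main() {
        // Custom error with a clear headline for the logs.
        var err error = &ValidationError{Field: "amount", Reason: "must be positive"}
        var vErr *ValidationError
        if errors.As(err, &vErr) {
            fmt.Println("log:", vErr)
        }

        // A panic from the "gateway" is caught and surfaced as an error.
        _, err = callExternal(func() string { panic("gateway down") })
        fmt.Println("log:", err)
    }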

Coming to the SOLID design principles, there are three I mainly follow: single responsibility, open-closed, and dependency inversion. If we follow two or three of the SOLID principles well, we end up following all of them, but these are the three I focus on. With single responsibility, if I am creating a function, I don't put too many responsibilities into it; if my business logic is very complex, I make sure the parent function only calls the functions one layer below it, and the parent shouldn't care about anything at the second or third layer down. That's how I apply single responsibility. With open-closed, I try to make sure I am using interfaces together with structs, so I can create a hierarchy of parent and child where there are multiple children and one parent, and whatever object I assign to the parent type is substitutable; I assign the functions accordingly. Coming to dependency inversion: whenever I need to create a core service, or any encapsulated object, instead of depending on a concrete object the receiver depends on an interface, while the object passed in is a struct. So we don't need to change the structure of that encapsulated object, and we can pass whatever struct we want into its constructor. Whatever new class we introduce, the higher-level module does not depend on the lower-level module; we can change the lower-level module and the higher-level module won't be affected. Those are the main three principles I use from the SOLID design rules.
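A minimal Go sketch of the constructor-takes-an-interface idea described above, with hypothetical Notifier/Service types: the higher-level Service depends only on the interface, so concrete senders can be swapped without touching it.

    package main

    import "fmt"

    // Notifier is the abstraction the higher-level module depends on
    // (dependency inversion): callers never see a concrete sender.
    type Notifier interface {
        Send(msg string) error
    }

    // EmailNotifier and SMSNotifier are interchangeable low-level details.
    type EmailNotifier struct{}

    func (EmailNotifier) Send(msg string) error {
        fmt.Println("email:", msg)
        return nil
    }

    type SMSNotifier struct{}

    func (SMSNotifier) Send(msg string) error {
        fmt.Println("sms:", msg)
        return nil
    }

    // Service receives the interface in its constructor, so new notifier
    // types can be added (open-closed) without modifying Service.
    type Service struct {
        notifier Notifier
    }

    func NewService(n Notifier) *Service { return &Service{notifier: n} }

    // Process only orchestrates (single responsibility); sending is delegated.
    func (s *Service) Process() error {
        return s.notifier.Send("done")
    }

    func main() {
        NewService(EmailNotifier{}).Process()
        NewService(SMSNotifier{}).Process() // swapped without changing Service
    }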

Okay, here we are mainly checking first what response we get from the HTTP package's Get method, and if it contains a proper response, it means the API is working fine and we are good to go. If we receive a proper response, the code has the deferred response.Body.Close(), so whenever the function exits we will close the body as well. If we don't close the body, the resource remains open, and there might be a situation where a new goroutine or process won't be able to access that resource; skipping this step would be an issue, so it's good that the code does it. Next, it checks whether the error is non-nil; if it is, we return the error, and in that case nothing was opened, so we don't need to close anything. Then we check whether the body we received from the API can be decoded, that is, whether the API sent the data in the format we expect. If we get an error while decoding the data, we return the error, but the defer will still execute in that case and the body will be closed, so the resource can be accessed afterwards.
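A reconstruction of the pattern being reviewed, as a Go sketch (the URL and the fetchJSON name are placeholders): no close when the request itself fails, a deferred close otherwise, and the defer still running when decoding fails.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // fetchJSON mirrors the reviewed code: if the request errors there is
    // no body to close; on success the close is deferred so the connection
    // is released even when decoding fails.
    func fetchJSON(url string, out any) error {
        resp, err := http.Get(url)
        if err != nil {
            return err // no response, so nothing to close
        }
        defer resp.Body.Close() // always release the connection

        if err := json.NewDecoder(resp.Body).Decode(out); err != nil {
            return fmt.Errorf("decode: %w", err) // deferred close still runs
        }
        return nil
    }

    func main() {
        var data map[string]any
        if err := fetchJSON("https://example.com/api", &data); err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println(data)
    }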

The problem here is that the append function returns a new slice, so we need to store that new slice somewhere. We are appending n to events, but we are not storing the returned slice anywhere: append returns a new slice with n appended to the events array, and we need to assign it back to a variable that holds the reference to that slice. So it should be events = append(events, n). That's the problem in this code, and that's how we fix it.
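The fix, as a compilable Go sketch (variable names assumed from the description).

    package main

    import "fmt"

    func main() {
        events := []int{1, 2}
        n := 3

        // Bug from the snippet: `append(events, n)` on its own discards
        // the new slice header (in Go this is even a compile error,
        // since the result of append is unused).

        // Fix: assign the returned slice back to the variable.
        events = append(events, n)
        fmt.Println(events) // [1 2 3]
    }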

For optimizing Golang garbage collection in a service with high memory usage patterns, there are a few things we can do. First, make sure that whenever we open any resource, we close it; if we don't, that's a very big issue and, as mentioned here, leads to high memory usage. Second, use a cache, so that we rely on the main DB as little as possible. Third, we can make use of a CDN: if we know what kind of data is used by our consumers in each region, we can deploy CDNs in those places, so consumers first go to the CDN for the data and fall back to the DB only if it isn't available there. It's the same way AWS works: requests go to CloudFront first, and only if the content isn't present there do we go to S3 to get the data. We can also apply certain DB practices like partitioning and sharding: if we divide the DB into multiple pieces and, at run time, route by pattern (if the request matches this pattern, go to this partition), we decrease the load on the DB and reduce the time for any DB operation. I guess these are the practices we can use, and they will reduce the high memory usage pattern on a microservice.
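A small Go illustration of the shard-routing idea from the answer (hash-based routing rather than literal key patterns; shardFor is a hypothetical helper).

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // shardFor routes a key to one of n database shards, so each shard
    // holds only a slice of the data and no single DB takes the full load.
    func shardFor(key string, n uint32) uint32 {
        h := fnv.New32a()
        h.Write([]byte(key)) // FNV-1a hashing; Write never fails here
        return h.Sum32() % n
    }

    func main() {
        for _, k := range []string{"user:42", "user:99", "order:7"} {
            fmt.Printf("%s -> shard %d\n", k, shardFor(k, 4))
        }
    }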

Kubernetes is mainly a container orchestrator. With the help of Kubernetes we can create pods, services, network layers, all the infra that Kubernetes provides, and it works really well with AWS. If we use Kubernetes integrated with AWS for our microservices, then whenever the load increases, the number of pods can scale, and that gives a big advantage: we are able to reduce cost and serve more load whenever required. At Razorpay we also did that: during IPL season we would increase the number of pods and reduce them again when no longer required. That dynamic handling helps us reduce cost and increase scale whenever it's time to serve high traffic. Kubernetes also provides multiple other facilities, like storing secrets and creating a network layer between multiple pods, and if we have multiple pods, the endpoints we expose can work like a load balancer as well. So Kubernetes provides many facilities, and combined with AWS it can be a very good asset for any workload to scale and to make use of HPA to serve the traffic.

On my experience with the AWS SDK: actually, I haven't worked with that till now.