Sourabh Singh Dhyania

Vetted Talent
Sourabh Singh is a backend engineer with expertise in scalable, high-performance systems. With a B.Tech + M.Tech in Mathematics and Computing from IIT Delhi, he has worked with Shopify (Deliverr), Expedia, and Zupee, driving system architecture, integrations, and optimization.

At Deliverr (Shopify, Flexport), he led returns integration, generating $100K/month revenue, and built inventory transfer systems. Previously, at Expedia, he optimized flight sustainability features and latency. Skilled in Node.js, Python, AWS, Redis, and microservices, Sourabh excels in system design and backend development.

  • Role

    Mid Backend Engineer

  • Years of Experience

    7.2 years

Skillsets

  • Database management
  • Shopify
  • MySQL
  • Lambda
  • GitLab
  • Github
  • Design
  • Algorithms
  • Express Js
  • Python
  • WebSocket
  • Heroku
  • Spring
  • Type Script
  • RabbitMQ
  • Redux
  • GraphQL
  • Slack
  • JS
  • Microservices Architecture
  • Spring Boot
  • Kubernetes
  • Mathematics
  • Angular
  • Node Js
  • S3
  • Bootstrap
  • SNS
  • Database
  • Django
  • Kotlin
  • Redis
  • Node
  • SQS
  • Backend
  • New Relic
  • AWS
  • Flask
  • API
  • Java
  • Docker
  • CloudWatch
  • Grafana
  • Git
  • Mongo DB
  • nginx

Vetted For

10 Skills

  • Role: Software Developer II - Express JS and Node JS (Onsite, Bangalore), AI Screening
  • Result: 51%
  • Skills assessed: CI/CD, DevOps, AWS, Docker, Express Js, MySQL, Node Js, Postgre SQL, Redis, Strong Attention to Detail
  • Score: 46/90

Professional Summary

7.2 Years
  • Oct, 2021 - Present 4 yr

    Mid Backend Engineer

    Deliverr
  • Feb, 2021 - Oct, 2021 8 months

    Software Engineer

    Expedia
  • Jul, 2020 - Jan, 2021 6 months

    Backend Developer

    Zupee
  • Jun, 2018 - Jun, 2020 2 yr

    Software Engineer

    Scrumdo

Applications & Tools Known

  • Node.js
  • Expressjs
  • Angular
  • MongoDB
  • Redis
  • RabbitMQ
  • MySQL
  • Python
  • Django
  • Celery
  • Bootstrap
  • Redux
  • Git
  • AWS Lambda
  • SQS
  • SNS
  • S3
  • Cloudwatch
  • Kubernetes
  • Docker
  • Twilio
  • nginx
  • SSL
  • MSG91
  • Instamojo
  • Heroku
  • Grafana

Work History

7.2 Years

Mid Backend Engineer

Deliverr
Oct, 2021 - Present 4 yr
    Responsible for leading the integration of returns with platforms like Shopify and Returnly, and building transfer systems between storage and fulfillment centers.

Software Engineer

Expedia
Feb, 2021 - Oct, 2021 8 months
    Developed features for flight sustainability and implemented horizontal slicing to reduce latency by 10%.

Backend Developer

Zupee
Jul, 2020 - Jan, 2021 6 months
    Worked on pagination, PnL calculation, WebSocket connection management, Kubernetes config auto-reload, and MySQL connection pooling.

Software Engineer

Scrumdo
Jun, 2018 - Jun, 2020 2 yr
    Integrated Slack API, developed a Slack-like application, and integrated GitLab self-managed instances.

Achievements

  • Led the integration of returns with other RMTs like Shopify and Returnly, increasing revenue from $0 to $100,000/month.
  • Built a system to transfer inventory from Deliverr Reserve storage to fulfillment centers like FBA and Flexport.
  • Reduced average response time of API by 88% through MySQL pooling implementation.

Major Projects

4 Projects

Onlease

    Involved in IVR and programmable voice setup with Twilio, production server setup, and payment integrations.

WebSocket connection management in a distributed environment

Onlease (IVR and programmable voice setup with Twilio).

Real time Chat Application

Education

  • B.Tech + M.Tech in Mathematics and Computing

    Indian Institute of Technology, Delhi (2018)

AI-interview Questions & Answers

Could you help me to understand more about your background by giving a brief introduction of yourself? I graduated from IIT Delhi in Mathematics and Computing in 2018. After that, I started working for Scrumdo as a full-stack developer, using Python and Django with an Angular front end. Then I worked with Zupee and Expedia: at Zupee I worked as a backend developer on Node.js, and at Expedia I worked for around 6 months in JavaScript and Golang. Then I joined Deliverr as a backend developer and was made backend engineer; there I worked on Node.js, AWS, and MySQL.

How does connection pooling in Node.js improve database performance? Connection pooling helps by creating multiple connections up front and then reusing a free connection for each query, so several operations can run in parallel and we avoid the network overhead of repeatedly opening new connections.
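
A minimal sketch of that idea, assuming the mysql2 driver; the connection details and table are hypothetical:

    // Connection pooling with mysql2 (assumed driver).
    const mysql = require('mysql2/promise');

    // The pool keeps a set of open connections and hands a free one to each
    // query, so requests run in parallel without a fresh handshake every time.
    const pool = mysql.createPool({
      host: 'localhost',        // hypothetical connection details
      user: 'app',
      password: 'secret',
      database: 'app_db',
      connectionLimit: 10,      // maximum parallel connections
      waitForConnections: true, // queue queries when all connections are busy
    });

    async function getOrders(userId) {
      // pool.query borrows a free connection and releases it automatically.
      const [rows] = await pool.query('SELECT * FROM orders WHERE user_id = ?', [userId]);
      return rows;
    }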

I would recommend using an SOA framework, because it gives us all the request and response types as well as a layered architecture. What we can do is create functions that call other functions, so that in this manner we can keep our authorization and authentication layer, then a logging layer, and so on.
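
A minimal sketch of that kind of layering using Express.js middleware, which is one way to realize it; the checks and route are hypothetical:

    const express = require('express');
    const app = express();

    // Logging layer: runs for every request before the API layer.
    app.use((req, res, next) => {
      console.log(`${req.method} ${req.url}`);
      next();
    });

    // Authentication/authorization layer: blocks the request or passes it on.
    app.use((req, res, next) => {
      if (!req.headers.authorization) {
        return res.status(401).json({ error: 'Unauthorized' });
      }
      next();
    });

    // API layer: handlers run only if the layers above called next().
    app.get('/orders', (req, res) => res.json({ orders: [] }));

    app.listen(3000);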

We can ensure data integrity by using the locks provided by Postgres. In that manner, transactions will have ACID properties: they will be atomic, data integrity will be maintained, and they will behave as if they are being performed in isolation even when they run in parallel.
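
A minimal sketch of a transaction that takes a Postgres row lock, using the pg library (assumed); the accounts table and amounts are hypothetical:

    const { Pool } = require('pg');
    const pool = new Pool(); // connection settings come from environment variables

    async function debitAccount(accountId, amount) {
      const client = await pool.connect();
      try {
        await client.query('BEGIN');
        // FOR UPDATE locks the row, so parallel transactions on the same
        // account wait here instead of reading stale data (isolation).
        const { rows } = await client.query(
          'SELECT balance FROM accounts WHERE id = $1 FOR UPDATE', [accountId]);
        if (rows[0].balance < amount) throw new Error('Insufficient funds');
        await client.query(
          'UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, accountId]);
        await client.query('COMMIT');   // all-or-nothing: atomicity
      } catch (err) {
        await client.query('ROLLBACK'); // keep the data consistent on failure
        throw err;
      } finally {
        client.release();
      }
    }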

We can handle it with the next function that Express JS provides. We can create a layer above the API layer, and in that layer call next to handle any error at a global level. If the error thrown by the API layer is a known error, we can wrap it in a suitable error response code; otherwise we can return a generic error such as a 500 server error.
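
A minimal sketch of that pattern, where a route delegates errors to a single error-handling middleware via next(err); the KnownError class and lookup stub are hypothetical:

    const express = require('express');
    const app = express();

    class KnownError extends Error {
      constructor(status, message) { super(message); this.status = status; }
    }

    const findUser = async (id) => null; // hypothetical lookup stub

    app.get('/users/:id', async (req, res, next) => {
      try {
        const user = await findUser(req.params.id);
        if (!user) throw new KnownError(404, 'User not found');
        res.json(user);
      } catch (err) {
        next(err); // hand the error to the global handler below
      }
    });

    // Global error handler: four arguments, registered after the routes.
    app.use((err, req, res, next) => {
      if (err instanceof KnownError) {
        return res.status(err.status).json({ error: err.message });
      }
      res.status(500).json({ error: 'Internal Server Error' }); // unknown errors
    });

    app.listen(3000);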

For implementing a caching layer, we can create a caching layer above the API layer. What it will do is cache data keyed by the API route: if the data already exists in the cache, it can return it directly; otherwise it will call the underlying API function, cache the result, and then return it.
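
A minimal sketch of a route-keyed cache in front of the API layer, assuming the node redis client; the TTL, key scheme, and data source are hypothetical:

    const express = require('express');
    const { createClient } = require('redis');

    const app = express();
    const cache = createClient();
    cache.connect().catch(console.error);

    const loadProductsFromDb = async () => [{ id: 1 }]; // hypothetical slow call

    // Cache middleware: key the entry by the API route, return on a hit,
    // otherwise let the handler run and store its JSON response.
    const cacheByRoute = (ttlSeconds) => async (req, res, next) => {
      const key = `cache:${req.originalUrl}`;
      const hit = await cache.get(key);
      if (hit) return res.json(JSON.parse(hit));
      const originalJson = res.json.bind(res);
      res.json = (body) => {
        cache.set(key, JSON.stringify(body), { EX: ttlSeconds });
        return originalJson(body);
      };
      next();
    };

    app.get('/products', cacheByRoute(60), async (req, res) => {
      res.json(await loadProductsFromDb());
    });

    app.listen(3000);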

Maybe we can use cursors here. Also, we are not handling the case when a user is not found; in that case we are just returning rows. The database can throw an error, and we should also throw an error if a user is not found, but currently it does not return an error and will return empty rows.

It should be req.params.id; instead of i, it should be id, because id is the parameter name provided in the route.
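
A minimal sketch of the corrected handler being discussed in the two review comments above: read the route parameter as req.params.id and return an error instead of empty rows when the user is missing; the pool setup and table are hypothetical:

    const express = require('express');
    const mysql = require('mysql2/promise');

    const app = express();
    const pool = mysql.createPool({
      host: 'localhost', user: 'app', password: 'secret', database: 'app_db',
    });

    app.get('/users/:id', async (req, res) => {
      const { id } = req.params; // 'id' matches the parameter name in the route path
      const [rows] = await pool.query('SELECT * FROM users WHERE id = ?', [id]);
      if (rows.length === 0) {
        return res.status(404).json({ error: 'User not found' }); // no silent empty rows
      }
      res.json(rows[0]);
    });

    app.listen(3000);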

How can you utilize Docker's network features to isolate N services running in separate containers? If these are running in separate containers, what we can do is expose a different port from each container: internally each service can listen on a common port like 3000 or 8000, and externally each container will be mapped to a different host port. In that way, all the services and containers are exposed on different ports and can only be reached through those ports.

For scaling an Express JS application on AWS infrastructure: if we are using AWS Lambda, then logging is an important factor. If we are scaling up, we should allow multiple instances of the Lambda to be created at the same time, and we should configure a minimum number of Lambda instances that are kept running so that latency is not high for the first few requests. We can also configure a time limit for the Lambda, like 15 or 30 seconds, after which it is killed, so latency should not be greater than that. And we should have a good monitoring system with which we can monitor all the Lambda instances, so that if they go down at some point we get notified, and we should also monitor the error logs so that we get to know if something is going wrong.
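
A minimal sketch of applying the Lambda settings mentioned above with the AWS SDK for JavaScript v3; the function name, alias, region, and numbers are hypothetical:

    const {
      LambdaClient,
      UpdateFunctionConfigurationCommand,
      PutProvisionedConcurrencyConfigCommand,
    } = require('@aws-sdk/client-lambda');

    const lambda = new LambdaClient({ region: 'ap-south-1' });

    async function configureScaling() {
      // Bound how long one invocation may run, so latency stays capped.
      await lambda.send(new UpdateFunctionConfigurationCommand({
        FunctionName: 'orders-api',
        Timeout: 30, // seconds
      }));

      // Keep a minimum number of warm instances so the first requests
      // do not pay a cold-start penalty.
      await lambda.send(new PutProvisionedConcurrencyConfigCommand({
        FunctionName: 'orders-api',
        Qualifier: 'live', // alias or version
        ProvisionedConcurrentExecutions: 5,
      }));
    }

    configureScaling().catch(console.error);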

For deploying a Node.js application: AWS has its own registry for container images. First, an image would be pushed to that registry, and if we are deploying on AWS Lambda, Lambda will pull that image from the registry and run the service on a virtual machine. It also depends on when we want to trigger the deployment. We may want to deploy through CircleCI for continuous integration and deployment, so that if any changes are merged to a branch, they are deployed automatically instead of taking the deployment manually. There are a few steps we can add to that CircleCI config file, like DB migrations that should run before the deploy, and a few integration tests that should pass. We can also add conditions so that a step runs only if the previous step has passed, creating a dependency between steps. So deploying through CircleCI is a better approach.
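
A minimal sketch of the deployment step such a CI job could run once the image has already been pushed to ECR; the image URI, function name, and region are hypothetical:

    const { LambdaClient, UpdateFunctionCodeCommand } = require('@aws-sdk/client-lambda');

    const lambda = new LambdaClient({ region: 'ap-south-1' });

    async function deploy(imageTag) {
      // Point the container-image-based Lambda at the freshly pushed ECR image.
      await lambda.send(new UpdateFunctionCodeCommand({
        FunctionName: 'orders-api',
        ImageUri: `123456789012.dkr.ecr.ap-south-1.amazonaws.com/orders-api:${imageTag}`,
      }));
      console.log(`Deployed image tag ${imageTag}`);
    }

    // In CI this would run only after the migration and integration-test steps pass.
    deploy(process.env.CIRCLE_SHA1 || 'latest').catch((err) => {
      console.error(err);
      process.exit(1);
    });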