
I am a Senior Consultant, Application Developer at Thoughtworks with more than 7 years of experience building highly efficient, scalable applications for large enterprises using agile methodologies, clean coding, and engineering best practices. I develop applications using microservices and event-driven architecture, and consult with companies on OO design, design patterns, testing techniques, and development methodologies. Passionate about XP and agile practices.
Technical Lead, Avalara Technologies
Senior Software Engineer, Thoughtworks
Senior Research Engineer, Hyundai Mobis
PostgreSQL
AWS (Amazon Web Services)

Java

Apache Kafka

Azure Cosmos DB
Docker

Java 11

Java 8

Spring Cloud

Spring Boot

Azure Pipelines

GitHub

IntelliJ IDEA

Azure Active Directory

New Relic
Grafana

Splunk

Cucumber
Jenkins

REST API
Web API

Azure

MySQL

Restful API

Kafka

Kubernetes

Gradle

Maven

Postgres

MongoDB

CosmosDB

Nomad

Rio

Amazon EKS

PCF

Azure DevOps

Azure Cloud

AWS Cloud

Consul
Datadog

Hi, I'm Akash. I'm working as a senior software engineer at Avalara Technologies. I have around nine years of overall experience, and I use Java, Spring Boot, microservices, AWS, and Kafka-related technologies in my day-to-day work. I've worked mainly in the ecommerce, banking, and healthcare domains.
If very large inputs are provided to this function, a stack overflow could occur, because programs are given a limited stack area and deep recursion can exhaust it, since every call adds a frame. A better solution would be to replace the recursive calls with an explicit, heap-allocated stack: push the pending work onto that stack and keep popping elements off in a loop, so the number of nested function calls stays constant. That way the implementation remains safe even when there are very large inputs.
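A minimal Java sketch of that idea, assuming the function in question is recursive (the original code isn't shown, so the summation here is a stand-in workload): the recursive version consumes one JVM stack frame per element, while the iterative version keeps pending work on the heap in an `ArrayDeque`.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackSafeSum {

    // Recursive version: one stack frame per element.
    // Throws StackOverflowError for large n (e.g. n = 1_000_000).
    static long sumRecursive(long n) {
        if (n == 0) return 0;
        return n + sumRecursive(n - 1);
    }

    // Iterative version: pending work lives on the heap in an ArrayDeque,
    // so the JVM call stack stays at a constant depth.
    static long sumIterative(long n) {
        Deque<Long> pending = new ArrayDeque<>();
        for (long i = n; i > 0; i--) pending.push(i);
        long total = 0;
        while (!pending.isEmpty()) total += pending.pop();
        return total;
    }

    public static void main(String[] args) {
        // Safe even for inputs that would blow the call stack recursively.
        System.out.println(sumIterative(1_000_000)); // prints 500000500000
    }
}
```

For a simple accumulation like this a plain loop would suffice; the explicit stack matters when the recursion has real branching, such as tree traversals.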
To arrive at a stronger system design when the requirements are not clear enough, I think the first thing we can do is list the functional requirements we do know. Based on those, we can write down the non-functional requirements, which are often hidden inside the business use case and the user journeys. Then we can break the solution into small steps and implement functionality one by one. For larger solutions, we usually try to build a minimum viable product first, focus on one core functionality, and keep adding more. We can also keep the overall design modular, so that it is easy to integrate with other modules and functionalities later.
Okay. In a microservice architecture, when we want to ensure scalability and fault tolerance while services communicate via REST or Kafka, we try to introduce as little coupling as possible. We make the calls asynchronous where we can, and we keep the services stateless, not maintaining any state or sessions between calls. We also ensure idempotency, so that if multiple calls arrive with the same data or the same request, the work is not duplicated. On top of that, we introduce retries and error handling at every integration point, because in a loosely coupled distributed system there is a real chance of losing events or missing communication, so resiliency depends on retrying and handling failures everywhere. Finally, we make sure there is no single point of failure: we replicate data, keep operations idempotent, build in retries, and keep coupling as low as possible across the whole system.
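A small framework-free sketch of two of the points above: an idempotency check keyed on a request or message ID, and a bounded retry loop around a flaky call. The names (`ResilienceSketch`, `processOnce`, `callWithRetry`) are illustrative, not from any library; a production version would persist processed IDs and add backoff between attempts.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Supplier;

public class ResilienceSketch {

    private final Set<String> processedIds = new HashSet<>();

    // Runs the work only the first time a given message ID is seen,
    // so duplicate deliveries become harmless no-ops.
    public synchronized boolean processOnce(String messageId, Runnable work) {
        if (!processedIds.add(messageId)) return false; // duplicate: skip
        work.run();
        return true;
    }

    // Retries a call up to maxAttempts times before rethrowing the last error.
    public static <T> T callWithRetry(Supplier<T> call, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e; // a real system would back off and log here
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        ResilienceSketch sketch = new ResilienceSketch();
        boolean first = sketch.processOnce("order-42", () -> {});
        boolean dup = sketch.processOnce("order-42", () -> {});
        System.out.println(first + " " + dup); // true false
    }
}
```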
The difference between a primary key and a unique key is that the primary key identifies each row and is often generated by the database itself: if we have configured an auto-incrementing column, the database automatically creates a unique primary key on every insert. A unique key, on the other hand, is a column the user chooses that must be unique across records, so it is an additional field in the row that the user needs to provide. A primary key is always unique, but a unique key is not necessarily the primary key: a table can have only one primary key but several unique constraints, and the primary key can be auto-generated or supplied by the user.
The design consideration between throwing a checked exception versus wrapping it in an unchecked exception is this: when we know exactly what can fail, we use a checked exception. For example, in this case we are reading from a file path and the file might not be found; we know that possibility exists, so we use an IOException and handle it in a try-catch block. But when we are not sure what unwanted behavior a particular piece of code might produce during execution, or the caller cannot reasonably recover, we wrap it in an unchecked exception. So that's the key differentiator: when you know the type of failure and where exactly it could occur, you use a checked exception (here, IOException); when you are not sure whether a piece of code will throw at all, you rely on an unchecked exception.
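A compact sketch of that distinction, using the file-reading case mentioned above: the same missing-file failure surfaces as a checked `IOException` when the caller is expected to handle it, and gets wrapped in the JDK's `UncheckedIOException` when it isn't a recoverable, user-facing case.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ExceptionChoice {

    // Checked: the signature advertises that the file may be missing,
    // forcing callers to catch it in a try-catch block or declare it.
    static String readChecked(Path path) throws IOException {
        return Files.readString(path);
    }

    // Unchecked: wraps the IOException, for call sites where a missing
    // file is a deployment/programming error rather than a user-facing case.
    static String readUnchecked(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            throw new UncheckedIOException("could not read " + path, e);
        }
    }

    public static void main(String[] args) {
        try {
            readChecked(Path.of("missing.txt"));
        } catch (IOException e) {
            System.out.println("handled checked exception");
        }
    }
}
```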
Similarly, when we are designing an API in Java and we know that a particular use case or request can fail, and the failure reason is clear, we create a checked exception and return a proper error code. But when the failure is something unexpected, like a component crashing internally or a server failure at the back end, we are relying on the system's availability, and those are scenarios we don't need to explain in detail to the API caller. Those exceptions usually fall into the category of unchecked exceptions, and in a REST API we typically club them together and report them as an internal server error. So that's the key differentiator: when you are sure of the use case, or the inputs are not correct, those cases fall under checked exceptions with proper error handling communicated back to the user via the API; but when unknown or unwanted events occur, those all fall under unchecked exceptions.
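An illustrative sketch of that mapping, not tied to any framework: known, caller's-fault failures map to a 4xx code, while everything unexpected collapses to a generic 500. The `InvalidRequestException` type is hypothetical, standing in for a project's own checked domain exceptions.

```java
public class ApiErrorMapper {

    // Hypothetical checked domain exception for bad input.
    static class InvalidRequestException extends Exception {
        InvalidRequestException(String msg) { super(msg); }
    }

    // Top-level handler: known validation failures become 400,
    // everything else is reported as an internal server error.
    static int toHttpStatus(Throwable t) {
        if (t instanceof InvalidRequestException) return 400; // known, caller's fault
        if (t instanceof IllegalArgumentException) return 400;
        return 500; // unknown/unchecked failures collapse to 500
    }

    public static void main(String[] args) {
        System.out.println(toHttpStatus(new InvalidRequestException("bad id"))); // 400
        System.out.println(toHttpStatus(new NullPointerException()));            // 500
    }
}
```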
I have often used a design pattern called the strategy pattern. We had to create accounts for users in different ways: one with a user ID and password, another using OAuth with Google, and another using their social media accounts. These were all different ways a user could create an account, but in the end we wanted to gather the same user details: user ID, name, address, age, gender, and so on. The strategy pattern helped us decouple the logic of each particular way of creating an account. From the user's point of view the experience is seamless; they don't know where the call goes or how it is handled. By applying the strategy pattern, we created an interface that receives the call and, based on the type of request, invokes a particular account-creation implementation. The actual implementations stay clean, there is no tight coupling, and at run time we can choose which account-creation strategy to follow.
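A minimal sketch of that structure; the strategy names (`PasswordSignup`, `OAuthSignup`) and the string results are illustrative, not the project's real classes. The caller picks a strategy by request type at run time, and each signup flow stays behind the common interface.

```java
import java.util.Map;

public class AccountCreation {

    interface SignupStrategy {
        String createAccount(String userId);
    }

    static class PasswordSignup implements SignupStrategy {
        public String createAccount(String userId) { return "password-account:" + userId; }
    }

    static class OAuthSignup implements SignupStrategy {
        public String createAccount(String userId) { return "oauth-account:" + userId; }
    }

    // Run-time selection: the request type chooses the strategy,
    // so no caller is coupled to a concrete signup flow.
    static final Map<String, SignupStrategy> STRATEGIES = Map.of(
            "password", new PasswordSignup(),
            "oauth", new OAuthSignup());

    static String signUp(String type, String userId) {
        SignupStrategy s = STRATEGIES.get(type);
        if (s == null) throw new IllegalArgumentException("unknown signup type: " + type);
        return s.createAccount(userId);
    }

    public static void main(String[] args) {
        System.out.println(signUp("oauth", "akash")); // oauth-account:akash
    }
}
```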
Sure. If our back-end system is handling millions of jobs daily, then to prevent OOM errors, the simplest and quickest step is to get feedback that the system is getting overloaded. So one part is to implement observability and monitoring for all your services at every level: the microservice level, the database level, and the infrastructure level. With thresholds and alerting in place, we know when the system is approaching a limit and we get alerted. Then we can scale, and in particular scale horizontally: spin up new instances rather than waiting for one instance to get overwhelmed and overloaded. That's the higher-level view. From a lower-level point of view, keep the services lightweight; if their memory footprint is low, they can be horizontally scaled easily. From the database point of view, read and write requests can be separated by following the CQRS pattern, where different replicas of the database cater to reads and writes separately, so no single system gets overloaded and overburdened. Another way is to implement circuit breaker and bulkhead patterns, which contain a failure in one region of the system so it does not leak into other components and bring the whole system down.
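A minimal sketch of the circuit breaker idea mentioned above: after a few consecutive failures the breaker "opens" and fails fast with a fallback instead of letting a struggling downstream dependency drag the whole system down. This is deliberately simplified; a real breaker (e.g. Resilience4j) also has a half-open state and timed recovery, omitted here.

```java
import java.util.function.Supplier;

public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    SimpleCircuitBreaker(int failureThreshold) { this.failureThreshold = failureThreshold; }

    boolean isOpen() { return consecutiveFailures >= failureThreshold; }

    // Calls the downstream service, returning the fallback when the
    // breaker is open or the call fails.
    <T> T call(Supplier<T> downstream, T fallback) {
        if (isOpen()) return fallback;      // fail fast, protect the rest of the system
        try {
            T result = downstream.get();
            consecutiveFailures = 0;        // a success resets the breaker
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }

    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(3);
        for (int i = 0; i < 3; i++) {
            breaker.call(() -> { throw new RuntimeException("downstream down"); }, "fallback");
        }
        System.out.println(breaker.isOpen()); // true: subsequent calls fail fast
    }
}
```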
These are some of the ways to prevent out-of-memory errors and keep the whole system from going down because of too many requests. Coming to simplicity, a few simpler measures exist too, like rate limiting at the gateway level. If too many requests are coming from one user and threatening to bring the system down, rate limiting can slow those requests. There are various strategies for it, like the leaky bucket, token bucket, and sliding window algorithms, plus exponential backoff when there are too many retries. These are some strategies by which out-of-memory errors can really be avoided.
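A sketch of gateway-style rate limiting with a token bucket, one of the strategies named above: each request consumes a token, tokens refill over time, and requests are rejected when the bucket is empty. Refilling here is driven by an explicit call rather than a timer, purely to keep the example deterministic.

```java
public class TokenBucket {
    private final int capacity;
    private int tokens;

    TokenBucket(int capacity) {
        this.capacity = capacity;
        this.tokens = capacity; // start full
    }

    // In a real gateway this runs on a timer; tokens never exceed capacity.
    synchronized void refill(int n) { tokens = Math.min(capacity, tokens + n); }

    // Returns true if the request is allowed, false if rate-limited.
    synchronized boolean tryAcquire() {
        if (tokens == 0) return false;
        tokens--;
        return true;
    }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(2);
        System.out.println(bucket.tryAcquire()); // true
        System.out.println(bucket.tryAcquire()); // true
        System.out.println(bucket.tryAcquire()); // false: bucket empty
        bucket.refill(1);
        System.out.println(bucket.tryAcquire()); // true again after refill
    }
}
```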
The problems likely to occur in this implementation, when thousands of messages per second are arriving at this consumer method, are around scalability and failure handling. To ensure scalability, first make sure the consumers themselves can scale: don't rely on a fixed number of consumers, but create a pool of consumers that can grow when there are more messages. Another part is to introduce idempotency. When there are many retries and duplicate messages coming in, idempotency lets you avoid reprocessing the same message; you can simply discard or ignore duplicates. And for handling failures, you can use a dead-letter queue (DLQ) pattern: even when the system is down or too overloaded, those messages are not lost. They sit in the DLQ as a backup and can be replayed later.
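A framework-free sketch of the consumer-side ideas above (the names are illustrative, and a real version would persist the seen-ID set and use the broker's DLQ topic): duplicates are detected by message ID and skipped, and messages whose processing fails are parked on a dead-letter queue for later replay instead of being lost.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.function.Consumer;

public class IdempotentConsumer {

    private final Set<String> seenIds = new HashSet<>();
    final Queue<String> deadLetterQueue = new ArrayDeque<>();

    void handle(String messageId, String payload, Consumer<String> processor) {
        if (!seenIds.add(messageId)) return;   // duplicate delivery: ignore
        try {
            processor.accept(payload);
        } catch (RuntimeException e) {
            seenIds.remove(messageId);         // allow a later replay to retry
            deadLetterQueue.add(messageId);    // park the failure, don't lose it
        }
    }

    public static void main(String[] args) {
        IdempotentConsumer consumer = new IdempotentConsumer();
        consumer.handle("m1", "ok", p -> {});
        consumer.handle("m1", "ok", p -> {});                              // skipped: duplicate
        consumer.handle("m2", "boom", p -> { throw new RuntimeException(); });
        System.out.println(consumer.deadLetterQueue); // [m2]
    }
}
```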
The key differences between Java 7 and Java 21/22 are that Java 7 was very straightforward but required a lot of boilerplate code for any operation, whereas in the later versions of Java there is a drastic difference in how concise and precise the code can be; the language has become a lot more functional. The focus of the code has shifted to what should be done rather than how it should be done. We got var, lambdas, and a functional style of programming that is more declarative and a lot simpler. There are also ways to limit which classes can extend a type and define the scope of a class hierarchy (sealed classes), along with local variable type inference and a lot of memory and JVM-level improvements that make it faster. For asynchronous programming, APIs like virtual threads and other optimizations were introduced, which make the code faster, use less memory, and scale easily even on smaller hardware.
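A small before/after sketch of that shift: the Java 7 style spells out the "how" with an explicit loop and mutable accumulator, while the modern style (`var` plus a stream pipeline) declares the "what". Sealed classes and virtual threads arrived later (Java 17 and 21) and are left out to keep the example compilable on older toolchains.

```java
import java.util.List;
import java.util.stream.Collectors;

public class OldVsNewJava {

    // Java 7 style: explicit types, manual loop, mutable accumulator.
    static List<String> shoutOld(List<String> words) {
        List<String> result = new java.util.ArrayList<String>();
        for (String w : words) {
            result.add(w.toUpperCase());
        }
        return result;
    }

    // Modern style: var + a declarative stream pipeline saying what, not how.
    static List<String> shoutNew(List<String> words) {
        var result = words.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        return result;
    }

    public static void main(String[] args) {
        System.out.println(shoutNew(List.of("java", "kafka"))); // [JAVA, KAFKA]
    }
}
```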