
Creative software professional with over 11 years of experience, enthusiastic about developing forward-thinking solutions to tomorrow's productivity problems.
Project Lead, R Systems International
Technical Lead, Sopra Banking Software
Senior Software Engineer, R Systems International
Software Developer, Instant Systems Inc
Developer, Instant Systems Inc
Consultant, Capgemini
Software Engineer, Instant Systems Inc
Developer, National Informatics Centre (Contractual)
Software Developer, National Informatics Centre (Contractual)
Software Engineer, National Informatics Centre
IntelliJ IDEA
Jenkins

Microservices

REST APIs

Spring Boot

Spring MVC

Spring Security

Spring Data JPA

Hibernate

Mockito
Docker

Kubernetes

Apache Kafka

Apache Storm

Apache Camel

Microsoft Azure

Google Cloud Platform

AWS S3

AWS CloudFront

AWS Lambda

SQL Server

Apache Solr

Apache Cassandra

Scrum
Yeah, sure. So currently I'm working as a project lead at R Systems International Limited, and I'm based out of Noida. I have around 11 years of overall experience. My roles and responsibilities include designing and developing microservices using Java and the associated tech stack, and also helping fellow team members if they get stuck anywhere in their respective tasks. I work as an individual contributor: I pick up tasks myself, get clarification from the business if needed, design the solution, and then develop it. I'm also involved in deploying the solution. For example, I work on ETLs using Apache Storm and Kafka. We use Kafka to ingest live streaming data, and the ETLs process that data. Once the data is processed, we insert it into our databases; we have both SQL and NoSQL kinds. On the SQL side we have MySQL, and on the NoSQL side we have Cassandra and Apache Solr. Once the data is inserted, we develop microservices over it using Java and the associated stack. There are two ways of consuming these microservices. One is using Apache Camel: we have one Camel endpoint that can front multiple microservices working behind the scenes. The other way, which I'm currently working on, is a data pipeline in GCP; I'm working on a cloud-based data pipeline project as of now, and I designed it as well. We ingest data from multiple types of sources, say a file or a direct input; these are basically vehicles that we process, as we work in the automobile domain. Based on the input, we call some of the microservices we have already developed.
We do that using Python tasks that we write in this data pipeline. Once we get the data back from the microservices, we write it to BigQuery so that we can structure the data a bit more. Then, in the next phase, we populate around 50 tables from the data we got from these microservices. Once the SQL part is done, in the following phase we export that data from the cloud to an on-prem SQL Server, where we run around 50 stored procedures in a sequential manner. I designed this entire system, with the approval of an onshore architect. So these are the kinds of things I'm working on right now.
I'm actually doing this right now. As I said in a previous answer, I'm working on a cloud-based data pipeline solution where we make use of GCP. There we have a workflow, which is nothing but a combination of multiple pipelines arranged in some logical order. In one of those pipelines, we write Python tasks that call the Java-based microservices, and we get data back from them. As for potential pitfalls: if the service is down, we don't get any data back in those Python tasks, and the data frame we create can end up empty, which might lead to pipeline failures. Also, if a Java-based microservice takes a long time to respond to a particular request while concurrent requests keep coming in for it, the requests pile up and the entire pipeline slows down. Personally, I've also felt that the error handling in this kind of setup is not very graceful. So availability, and the time these microservices take over the network to return the data, can definitely slow the pipeline down. Those are the pitfalls I can think of as of now.
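One way to make the pipeline's calls to those microservices fail fast instead of stalling is to put explicit timeouts on the HTTP client. This is a minimal sketch using the standard Java 11 `HttpClient`; the service URL is a hypothetical placeholder, not an endpoint from the project described above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class ResilientClient {
    // Hypothetical endpoint; stands in for one of the Java microservices.
    private static final String SERVICE_URL = "http://vehicle-service.example.com/api/decode";

    public static HttpRequest buildRequest() {
        return HttpRequest.newBuilder()
                .uri(URI.create(SERVICE_URL))
                .timeout(Duration.ofSeconds(3))   // fail fast instead of stalling the pipeline
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))  // bound time spent reaching a down service
                .build();
        HttpRequest request = buildRequest();
        // client.send(request, ...) would throw HttpTimeoutException past 3 seconds;
        // the caller can then fall back to a default instead of an empty data frame.
        System.out.println(request.timeout().orElseThrow());
    }
}
```

With bounded timeouts, a slow or down service produces a quick, catchable exception rather than an empty data frame discovered later in the pipeline.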
I have not done this yet; I haven't integrated a Python-based AI model into a Java microservice architecture. But I can think of something like this: say you have an API where you want to run some data through a model. You write a normal Java-based microservice and expose an endpoint. Then, behind the scenes, you can have a Java library, configured in the project's POM file, that gives you a handle to the AI model you want to integrate. Once you have that handle, you can call the methods available on that model, provide your data, and get insights back from it. That's a pretty basic take; there could be a lot more involved, but as of now what I can think of is: get the library configured in the Java project through the POM, create the object of the handler class that provides the capability of using the model, provide appropriate data when calling the relevant function, and get the data back, whether it's an insight, a pattern, a prediction, or anything else.
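The library-handle idea above can be sketched in plain Java. Everything here is hypothetical: `ModelHandle` stands in for whatever handle class a real model library (added via the POM) would expose, and the "model" is a trivial stub so the sketch is self-contained.

```java
// Sketch of wiring a model library into a Java microservice: the library
// exposes a handle class, and the service simply delegates to it.
public class ModelIntegrationSketch {

    // Stands in for the handle class a real model library would provide.
    interface ModelHandle {
        double predict(double[] features);
    }

    // Stub implementation; a real library would load weights exported
    // from the Python side (e.g. via an interchange format like ONNX).
    static class StubModel implements ModelHandle {
        @Override
        public double predict(double[] features) {
            double sum = 0;
            for (double f : features) sum += f;   // trivial stand-in for inference
            return sum / features.length;
        }
    }

    // The microservice endpoint method would delegate to the handle.
    static double scoreVehicle(ModelHandle model, double[] features) {
        return model.predict(features);
    }

    public static void main(String[] args) {
        System.out.println(scoreVehicle(new StubModel(), new double[]{1, 2, 3}));
    }
}
```

Keeping the handle behind an interface like this also means the stub can be swapped for the real library class without touching the endpoint code.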
Zero-downtime deployments for microservices built with Spring Boot can be achieved by deploying them using Kubernetes. Let's say you have one microservice, and there's just one pod on which you've deployed it, and it's getting a lot of load. What we can do using Kubernetes is, based on the amount of traffic and load coming in, increase the number of pods for this particular microservice, and the load balancer would then distribute the load across the pods. This ensures zero downtime for the microservice. Another way of handling this would be when there's a version change in the microservice: say, a rolling update, where pods with the new version come up before the old ones are taken down.
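The rolling-update approach can be expressed directly in the Kubernetes Deployment spec. This is a sketch with placeholder names and image; the key settings are `maxUnavailable: 0`, which keeps the full replica count serving while new pods come up, and the readiness probe, which ensures traffic only shifts to a new pod once it is healthy.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vehicle-service            # hypothetical microservice
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0            # never take an old pod down before a new one is ready
      maxSurge: 1                  # bring up one extra pod at a time
  selector:
    matchLabels:
      app: vehicle-service
  template:
    metadata:
      labels:
        app: vehicle-service
    spec:
      containers:
        - name: vehicle-service
          image: registry.example.com/vehicle-service:2.0   # placeholder image
          readinessProbe:          # traffic shifts only after this passes
            httpGet:
              path: /actuator/health
              port: 8080
```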
Alright, so, distributed caching. I'll tell you about my project, because we have this implemented there. We have a Redis server that is distributed, so there are multiple nodes of it. In the application.properties file, we specify all the nodes we want to connect to: we define a property, and in that property we specify the addresses of all the nodes that have caching implemented. We create a connection from a connection pool for those caching nodes, and for whatever data we want to cache, we can simply use Jedis as the client implementation. We can dump data into the cache and read it back up until a certain time, because caching gives us the ability to set a time to live. We don't want everything to be cached forever, so we specify that some data should be cached only for a certain amount of time. I'll give you an example of what we're doing. We work on ETLs using Apache Storm. From one of the bolts, say bolt number one, we make a call to an API. Once we get the response back from that API, it's a very large response; it's a logic tree, basically an XML response, and we don't want to carry that response from one bolt to another, because then the Apache Storm ETL's throughput would be very slow.
So what we did to boost the performance: we implemented a cache, and when we get that XML back from the service, we dump it into the Redis cache and just acknowledge that particular tuple. Once we move to the second bolt, we go and fetch that information from the cache, get the XML, parse it, read the data from it, run whatever business process is there, and then move along. This way the ETL throughput is better. That's how we've implemented caching in a distributed manner; it's not just on one node, it's distributed across three nodes in our project, so we specify all three nodes in the properties file. In the case of the ETLs we do that in the YAML file, and in microservices you do it in application.properties.
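The store-with-TTL, fetch-in-the-next-stage pattern described above can be illustrated without a Redis server. This is an in-memory stand-in, not the Jedis client: with Jedis the equivalent calls would be `setex(key, ttl, xml)` and `get(key)`; this class only mimics those TTL semantics so the sketch is self-contained.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In-memory stand-in for the Redis usage described above: store a large
// payload under a key with a time-to-live, then fetch it from the next bolt.
public class TtlCache {
    private static class Entry {
        final String value;
        final long expiresAtMillis;
        Entry(String value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    // Mirrors Redis SETEX: cache 'value' under 'key' for ttlMillis.
    public void setex(String key, long ttlMillis, String value) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Mirrors Redis GET: returns null once the entry has expired.
    public String get(String key) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAtMillis) {
            store.remove(key);   // lazy eviction, enough for a sketch
            return null;
        }
        return e.value;
    }
}
```

In the Storm setup above, bolt one would effectively do `setex(tupleId, ttl, xml)` and bolt two `get(tupleId)`, so only the small key travels between bolts instead of the whole XML.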
Yeah, so, along the same lines, a load balancer would be very helpful in this case. We can say we only want to entertain 100 requests per pod of a particular microservice, and the moment the request count passes 100, another pod should be spun up at that instant, or, if there's a pod already available, the requests should simply be redirected to that second pod. By doing this we can manage the load during peaks. That's the infrastructure side, using deployment techniques. If we talk about how to minimize this technically, while building the microservice, we would have to ensure we're not creating unnecessary lists or other collections iteratively inside a loop; that would be one thing to check in the code. Second, if there are computations, we try to do them using primitives rather than wrapper types: we'd simply use int instead of Integer when adding two variables or doing similar arithmetic. Third, don't create a lot of new objects iteratively inside a loop either. I'd try to see whether those objects really need to be created inside the loop or iteration, or whether we can move them out, because as long as an object stays in memory it keeps it occupied, and then there would be a spike in the memory footprint of that particular microservice. Maybe I'm drifting from the topic a little here, but these are the things I can think of.
So what can happen in this case is, say one thread, T1, comes in and calls the getServiceInstance method, and we check whether the service instance is null. Let's say this is the very first invocation, so it is null, and T1 gets a new service instance; fine. But if a second or third thread comes in at the exact same time, it could also get a new service instance rather than the one the earlier thread got. This is basically a singleton pattern we're trying to implement here, but concurrent access to this method breaks the pattern, because the idea of the singleton pattern is to provide one single object throughout the application, and in this case, when multiple threads come in, there's a possibility that the accessing threads get different service instances. So we can do two things here. First, we can synchronize the entire method, so that only one thread can acquire the lock over the method and get the instance at a time. That solves the problem, but it would be very slow, as the lock is acquired over the whole method. The second approach is applying a double check: we declare the service instance field as volatile, and we put the "if service instance equals null" check inside a synchronized block, with another "if service instance equals null" check before it. So there are two checks; that's why it's called the double-checked singleton pattern. By doing this, we can solve the problem.
Since we only apply the lock on the part where we create the service instance, the cost of the operation is lower, and again only one thread can create the instance at a time. So the problem is resolved with better performance.
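The double-checked approach described above looks like this; `ServiceInstance` is a placeholder for whatever service class the question's snippet used:

```java
public class ServiceInstanceHolder {

    static class ServiceInstance { }   // placeholder for the real service class

    // volatile ensures a fully constructed instance is visible to all threads
    private static volatile ServiceInstance instance;

    public static ServiceInstance getServiceInstance() {
        if (instance == null) {                           // first check, no lock
            synchronized (ServiceInstanceHolder.class) {
                if (instance == null) {                   // second check, under lock
                    instance = new ServiceInstance();
                }
            }
        }
        return instance;
    }
}
```

After the first call, every subsequent call passes the unsynchronized first check and returns immediately, which is where the performance win over a fully synchronized method comes from.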
First of all, for every exception that occurs, this REST API will always return an internal server error, so we lose the ability to return any custom exceptions from the user service. Say there are no users in the system yet; we might want to simply return an empty 200, and we wouldn't be able to distinguish that from a real failure. Even if we try to throw a custom exception, it just lands in the catch block and gets returned as an internal server error. One more thing: it's only the error status code we're returning. There's no exception trace or message being returned or logged, so we would never get to know from the logs what went wrong. You see, we're just passing HttpStatus.INTERNAL_SERVER_ERROR, and the e that was holding the exception details in the catch block is gone. We're losing track of what happened here.
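The fix described above is to map specific exceptions to proper status codes and keep the message. In a Spring Boot service this mapping would typically live in an @RestControllerAdvice class with @ExceptionHandler methods; this plain-Java sketch just shows the mapping logic itself. `UserNotFoundException` and the status choices are hypothetical.

```java
public class ErrorMapping {

    static class UserNotFoundException extends RuntimeException {
        UserNotFoundException(String msg) { super(msg); }
    }

    static final class ErrorResponse {
        final int status;
        final String message;
        ErrorResponse(int status, String message) {
            this.status = status;
            this.message = message;
        }
    }

    // Map each exception type to an appropriate status, preserving the message
    // so it can be both logged and returned to the caller.
    static ErrorResponse toResponse(Exception e) {
        if (e instanceof UserNotFoundException) {
            return new ErrorResponse(404, e.getMessage());   // custom case survives
        }
        if (e instanceof IllegalArgumentException) {
            return new ErrorResponse(400, e.getMessage());
        }
        // fallback: still a 500, but the message is no longer thrown away
        return new ErrorResponse(500, e.getMessage());
    }
}
```

The important difference from the original snippet is that the exception's details are carried into the response (and available for logging) instead of being discarded in the catch block.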
Backup strategy for a microservices ecosystem that spans multiple data stores. I'm assuming we're talking about configuring multiple types of databases, or multiple databases, in a microservice; I'm not sure I'm fully understanding the question. If you're talking about the connections we'd create for multiple databases from a microservice, we'd do that using a connection pool; that's one thing. If that's not what's being asked, is it related to distributed transactions? Because in that case, if a transaction spans multiple databases, we'd implement a pattern like Saga to roll back the transactions if there's a failure in one of the microservices. I'm sorry, I'm really not able to pin the question down, so I'm giving different answers; I hope one of them is what you're looking for. A backup strategy for different data stores could be: don't rely on just a single database; have sharded databases spread across the network, or use a cluster with multiple database nodes, so that even if one database is down, the application can still connect to the databases on the other nodes. You must configure your application against a cluster instead of just one data store, so that if one of the databases in that cluster goes down, your application is not impacted. Plus, there would be additional replication of the data, which ensures data availability at all times, and data safety as well. So, yes, we have to ensure the data is available at all times.
It must not be lost, and there has to be integrity in the data. It shouldn't be that, in a particular cluster of three nodes, we update the data on one node and the other two nodes don't get it reflected; we have to manage that. All three nodes in a particular cluster should have identical data. I think that would be the best answer to this question. Thank you.