
Senior Consultant 1
Hitachi Vantara Software Services India

Senior Developer
GalaxE.Solutions

Software Developer
Enthuons Technologies
Node.js

ELK

Express.js

AWS services

Bitbucket

API integration
Could you help me understand more about your background by giving a brief introduction? Yes, definitely. My name is Rahul Rai, and in total I have 5 years 8 months of experience. I am currently working at GalaxE.Solutions as a Senior Developer. With the tech stack, I have mainly worked on Node.js for 5 years, and I also have 5 years of experience with AWS services. On the database side, I have mostly worked with DynamoDB, also for 5 years, and in parallel I have worked with SQL for 2 years. Within AWS, I have worked with EC2 instances, S3, SQS, DynamoDB, API Gateway, and AWS Cognito. Apart from all that, I have knowledge of Docker and Kubernetes; I would not call it experience, because I have not used them in any organization. Other than that, I have also worked on microservices. I believe that is a brief walk through my background, so that is it for my introduction.
How would Node.js streams affect ETL task performance compared to traditional methods? Okay, how do Node.js streams affect ETL task performance? I am not sure what ETL is, so I would rather answer for the streams part. Node.js streams basically enhance performance through the way data is streamed: there is an event emitter, and you provide the data in the form of streams, which is then consumed by the client. It is like a producer and consumer. You send your data in chunks with the help of streams, and it is consumed by the user; we also call this buffering. For example, whenever a user's internet is slow, it shows that the content is loading: there is buffering, and the data is being fetched from the back end. So instead of sending the whole data at once, we send it in streams, and the performance is better, because we do not have to download the whole file before we can start playing a video or reading any file.
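For illustration, here is a minimal sketch of that chunk-by-chunk idea using only Node's built-in stream module; the file names are placeholders. The point is that each chunk is processed as it arrives, so memory stays bounded instead of holding the whole file, which is the performance benefit described above.

```js
const fs = require('fs');
const { Transform, pipeline } = require('stream');

// Transform each chunk as it arrives instead of loading the whole file.
const upperCase = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  },
});

pipeline(
  fs.createReadStream('input.csv'),   // read in chunks
  upperCase,                          // transform chunk by chunk
  fs.createWriteStream('output.csv'), // write as chunks are ready
  (err) => {
    if (err) console.error('Pipeline failed:', err);
    else console.log('Pipeline succeeded.');
  }
);
```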
How do you apply the single responsibility principle in modularizing a RESTful API? With the single responsibility principle, what happens is: if I am defining a class or an object, its responsibility should be limited to that particular class only. For example, if I am writing the properties of a vehicle, then whatever I write inside that class should be the properties of the vehicle, such as its color. Now, if someone wants to buy a car, that buy function should be in a different class. It should not be inside the same class, because buying will involve the cart, payment systems, and invoicing, and those should each be in a different class as well. That is what single responsibility means: one class should have one responsibility, nothing more than that. This way, the code is also easier to understand, and this is what the SOLID design principles say.
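A small sketch of the vehicle example from the answer; the class and method names here are hypothetical, not from a real codebase. Vehicle holds only vehicle properties, while the buying flow lives in its own class, so each class has exactly one reason to change.

```js
// Vehicle holds only vehicle properties -- nothing about purchasing.
class Vehicle {
  constructor(make, color, price) {
    this.make = make;
    this.color = color;
    this.price = price;
  }
}

// PurchaseService owns the buying flow: cart, payment, invoicing hooks.
class PurchaseService {
  constructor(paymentGateway) {
    this.paymentGateway = paymentGateway;
  }

  buy(vehicle, customer) {
    // Payment and invoicing live here, not inside Vehicle.
    return this.paymentGateway.charge(customer, vehicle.price);
  }
}
```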
Devise a strategy to manage state in an asynchronous Node.js back end while integrating with various APIs. To manage state in an asynchronous Node.js back end while integrating various APIs: you can use cookies, sessions, or global variables. I think these are the things we can do.
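As a concrete version of the session option mentioned above, here is a minimal sketch assuming the express and express-session packages; the secret is a placeholder, and the default in-memory store is only suitable for a demo, not production.

```js
const express = require('express');
const session = require('express-session');

const app = express();

app.use(session({
  secret: 'replace-with-a-real-secret', // placeholder
  resave: false,
  saveUninitialized: false,
}));

app.get('/visits', (req, res) => {
  // Per-user state survives across asynchronous requests.
  req.session.visits = (req.session.visits || 0) + 1;
  res.json({ visits: req.session.visits });
});

app.listen(3000);
```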
What practices would ensure high availability and fault tolerance for a Node.js back end RESTful API system with frequent deployments? Okay, with frequent deployments, what practices would ensure high availability? For high availability, first of all, you can use EC2 instances, because they are managed by AWS, so that is good. Other than that, there is the CI/CD pipeline, continuous integration and deployment, which is a good option for frequent deployments. For fault tolerance, I would say it is better to have a load balancer: you should have different servers serving the same thing, and you add a load balancer on top, which distributes the calls from users. Even if one server goes down, the second one is up and running and keeps serving the users, so that can be used for fault tolerance. Other than that, if you do not want two or three servers and you have a CPU with 8 cores, you can create your own cluster on the other 7 cores, so the same code runs on 7 worker processes while the 8th one, being the main or master node, sends requests to the other 7. That is one more thing you can use for fault tolerance and high availability. You can also implement an API gateway with a rate limiter, or use a rate limiter in general, so the system does not go down whenever attackers launch a DDoS attack. Other than that, using the helmet package helps avoid security attacks, for example XSS attacks. I believe these are the practices we generally perform for high availability and fault tolerance. For high availability, definitely, having more than one server is good, and you can use scalability as well: attach auto scaling groups to your EC2 instances so that whenever there is load, a new server is attached. So, yes, I would say these are the things I would recommend.
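The clustering approach described above can be sketched with Node's built-in cluster module: the primary process forks one worker per CPU core (on older Node versions cluster.isPrimary is cluster.isMaster) and re-forks any worker that dies, which is the fault-tolerance part.

```js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) {
  // Fork one worker per core; the primary distributes connections.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  // Fault tolerance: replace a worker if it crashes.
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} died; restarting.`);
    cluster.fork();
  });
} else {
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}\n`);
  }).listen(3000);
}
```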
Describe the process of normalizing an SQL database to optimize for complex data queries from multiple RESTful APIs. Okay, normalizing an SQL database. Basically, there are different normalization stages: 1NF, 2NF, 3NF, and BCNF. Generally, with BCNF we say it is the optimum: whenever we are searching for something and our database is defined in BCNF, that is the best, I would say. If the data you are getting is in BCNF, it is fully correct; there is no partial dependency, and we have primary keys. These are the things used in normalizing: there should not be any partial dependency, and if you are querying using a primary key, you can fetch the data on the basis of that key. Other than that, to optimize complex data queries, you can use indexes; you can implement indexing on the table. These are the things coming to my mind right now.
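As an illustrative sketch of removing a partial dependency and adding an index, in plain SQL; all table and column names here are hypothetical.

```sql
-- Denormalized: orders repeats customer data on every row.
--   orders(order_id, customer_name, customer_email, product, quantity)
-- Normalized toward 3NF/BCNF: customer facts live in one place.
CREATE TABLE customers (
  customer_id INT PRIMARY KEY,
  name        VARCHAR(100),
  email       VARCHAR(100)
);

CREATE TABLE orders (
  order_id    INT PRIMARY KEY,
  customer_id INT REFERENCES customers(customer_id),
  product     VARCHAR(100),
  quantity    INT
);

-- Index the foreign key to speed up the joins the APIs will run.
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
```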
Please explain what the following piece of Python code is supposed to do, and identify any errors in the logic that might prevent it from functioning as intended: check credentials; if the user is authenticated and the user has permission to access data, return "access granted"; elif the user is authenticated and has permission to access data, return "access limited". Yes. Here, basically, what we are doing is authentication and authorization. You can see there is a check credentials function: the user is provided, and if the user is authenticated, then after that authentication it checks the permissions for accessing data. If the permission is there, it returns "access granted". Else, if the user is authenticated and, again, the same thing is written, that the user has permission to access data, we return "access limited". But we already handled that exact condition in the previous if statement, so there is an issue here; it is going to prevent the function from working. In the elif condition, "user is authenticated" is okay, but instead of "has permission", it should say that the user does not have permission to access data, and then we return "access limited". Then it will function properly. The final else, returning "access denied", is okay. So the "user has permission to access data" check in the elif statement is what prevents this function from functioning properly, and it should be changed.
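A plausible reconstruction of the snippet as read aloud, with the fix the answer describes applied; the exact function and attribute names are assumptions.

```python
def check_credentials(user):
    if user.is_authenticated and user.has_permission("access_data"):
        return "access granted"
    # Bug in the original: this branch repeated the same condition as the
    # first if, so it could never run. The fix, per the answer, is to
    # negate the permission check.
    elif user.is_authenticated and not user.has_permission("access_data"):
        return "access limited"
    else:
        return "access denied"
```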
Given the SQL, it seems like there is an issue in the join operation; help identify the error and explain the necessary correction. Okay: select users.name and orders.quantity from users, inner join orders on user.id equals order.user_id, where orders.quantity is greater than 1. There is an issue in the second line, the one saying "inner join orders on user.id": basically, it should be users.id equals orders.user_id. "user.id" should be "users.id", and "order.user_id" should be "orders.user_id", matching the table names. This is the issue.
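For reference, the corrected query as the answer describes it; the users and orders tables come from the transcript, and the rest is as read aloud.

```sql
SELECT users.name, orders.quantity
FROM users
INNER JOIN orders
  ON users.id = orders.user_id   -- was: user.id = order.user_id
WHERE orders.quantity > 1;
```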
How do you implement an idempotent back end service in Node.js to handle repeated API calls without duplicating data? How would I implement a back end service to handle repeated API calls without duplicating data? What we can do here is use caching: if a particular user is repeatedly calling the same API, caching would definitely be a good option. Other than that, if you want, you can implement a rate limiter as well, if the calls are too frequent. Those two things are what I am thinking right now. And if you do not want to duplicate the data, then a read replica would be a good option: basically, we fetch from the read replica and provide it to the user, so the load on the primary database is less. Other than that, I would say caching is a good option if the data is not changing; if it is changing, then a read replica; and if you want to limit calls, a rate limiter is okay. These are the things we can implement.
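One common way to make such an endpoint idempotent, not mentioned explicitly in the answer, is a client-supplied Idempotency-Key header; a minimal sketch assuming Express, where the in-memory Map stands in for a shared store such as Redis in a real deployment.

```js
const express = require('express');
const app = express();
app.use(express.json());

const responses = new Map(); // idempotency key -> cached response

app.post('/orders', (req, res) => {
  const key = req.get('Idempotency-Key');
  if (key && responses.has(key)) {
    // Repeated call: return the stored result, create nothing new.
    return res.json(responses.get(key));
  }
  const order = { id: Date.now(), items: req.body.items }; // placeholder
  if (key) responses.set(key, order);
  res.json(order);
});

app.listen(3000);
```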
Which marketing API endpoints would be most effective for custom audience segmentation, and how do you avoid rate limiting? Which would be most effective for custom audience segmentation? Okay, for custom audience segmentation, which API endpoints? You can use an API gateway, where you implement authorization and authentication. You can also use AWS Cognito to get your custom audience; it will be there in the identity pool. So that is one idea. If you want to avoid rate limiting, the user should definitely be authenticated, and according to their authorization we can decide that these users do not need to fall under the rate limiting. I believe that should be okay for now.
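The exemption idea at the end can be sketched with the express-rate-limit package's skip option; the isTrustedUser check below is a placeholder for real authentication and authorization logic.

```js
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

const isTrustedUser = (req) =>
  req.get('Authorization') === 'Bearer trusted-token'; // placeholder

app.use(rateLimit({
  windowMs: 60 * 1000, // 1 minute window
  max: 100,            // anonymous callers: 100 requests per window
  skip: isTrustedUser, // authorized callers bypass the limiter
}));

app.get('/audiences', (req, res) => res.json({ segments: [] }));

app.listen(3000);
```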
How can you leverage GA4's new features to optimize website back end performance when implementing custom analytics? GA4, Google Analytics 4, when implementing custom analytics. Yes, okay. Although I have not used any analytics tool till now, we have our own analytics, a custom analytics I would say, and from there we analyze the performance of our back end. As for leveraging the features of GA4: I do not know, because I have not used it, so nothing comes to mind right now. But with the custom analytics I created, I used to analyze the performance: at which times CPU usage was high and for how long it was idle; how many users hit my APIs and which APIs were hit most of the time; any errors; and any particular times at which traffic went high or low. All of that analytics is present, and you can implement it yourself. If Google Analytics has any more features, then we would definitely have to check and try them. Thank you.
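A minimal sketch of the kind of home-grown analytics described above: Express middleware counting hits and total latency per route. The in-memory stats object is a stand-in for a real metrics store.

```js
const express = require('express');
const app = express();

const stats = {}; // route -> { hits, totalMs }

app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const s = (stats[req.path] ||= { hits: 0, totalMs: 0 });
    s.hits += 1;
    s.totalMs += Date.now() - start;
  });
  next();
});

// Expose the collected numbers: which APIs are hit most, and how slow.
app.get('/metrics', (req, res) => res.json(stats));

app.listen(3000);
```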