
As a MERN Stack and chatbot developer skilled in TypeScript, Node.js, Azure DevOps, and a variety of databases, I specialize in leveraging OpenAI's models for advanced AI solutions. With over 3 years of extensive IT experience, I possess a diverse and comprehensive skill set with a strong focus on chatbot development and integration. My professional journey encompasses the entire lifecycle of chatbot creation, from conceptual design to deployment and maintenance. Utilizing advanced technologies such as Artificial Intelligence, Natural Language Processing (NLP), and cognitive machine learning frameworks, I deliver sophisticated virtual assistants tailored to meet complex requirements.
MERN Stack and Chatbot Developer (AI)
Celebal Technologies

Network Expert
Chegg Inc.
TypeScript
Node.js
Azure DevOps
RabbitMQ
Redis
Microsoft Teams
Hi, thanks for this opportunity. I'm Pushkar, and I started my career at Celebal Technologies in January 2021 as a back-end chatbot developer. Initially I worked on Node.js and Python, and on the front end with React.js, where I built several different kinds of chatbots using Dialogflow and the Azure Bot Framework. We used multiple Azure cognitive services, such as CLU (Conversational Language Understanding) for intent detection and QnA Maker for the question-and-answer bank. We built conversational-flow chatbots for clients such as Tata Motors, Axis Bank, and other organizations. Beyond that, I worked on an AI chatbot platform as a senior developer for CapitaLand Singapore, where I built a completely AI-powered chatbot end to end, from the front end through processing, using prompting in Python. The engine of that AI chatbot was developed in Python, and the bot was exposed through the Azure Bot Service. We created a microservice for AI-based answer generation that connects to multiple kinds of databases: portal data is stored in an embedding database and fetched through semantic search. On top of that, we built an orchestrator using LangChain master agents and tools, and we also implemented guardrails with LangChain for the AI question-answering part.
We also did R&D on multi-intent handling, where we identify multiple intents within a single query, generate an answer for each intent, and finally combine all the answers into a single answer to send to the front end. To reduce the response time, we created multiple master agents that can be invoked simultaneously; each generates its answer, we combine them, pass the combined answer through the guardrails to check whether it satisfies our conditions, and then return it to the front end. This is the kind of AI chatbot work I have been doing for the past three and a half years of my career.
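A minimal sketch of the fan-out pattern described above, assuming hypothetical `detect_intents` and `answer_intent` helpers (the real system used LangChain agents and an LLM, not keyword matching):

```python
import asyncio

def detect_intents(query: str) -> list[str]:
    # Hypothetical multi-intent detector; the real system used an LLM/CLU model.
    intents = []
    if "leave" in query:
        intents.append("leave_balance")
    if "policy" in query:
        intents.append("hr_policy")
    return intents or ["fallback"]

async def answer_intent(intent: str) -> str:
    # Stand-in for one sub-agent answering a single intent.
    await asyncio.sleep(0)  # simulates an async model call
    return f"[answer for {intent}]"

async def handle_query(query: str) -> str:
    intents = detect_intents(query)
    # Invoke one sub-agent per intent concurrently, then merge the answers.
    answers = await asyncio.gather(*(answer_intent(i) for i in intents))
    return " ".join(answers)
```

In the production flow the merged answer would additionally pass through the guardrail check before being returned.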
Yeah. So, when considering a machine learning model to predict user intent in a chatbot conversation: before generative AI came along, we generally used Conversational Language Understanding, which identifies the intents and entities in a query using the Azure CLU model (previously this was the LUIS model, now Conversational Language Understanding). It is a machine learning model to which we provide training and testing datasets, and based on that it identifies the intent in the chatbot conversation.
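As a toy illustration of that training-data-driven prediction step (this is a word-overlap stand-in, not the Azure CLU service itself):

```python
def train_intents(examples: dict[str, list[str]]) -> dict[str, set[str]]:
    # Build a bag-of-words "model" per intent from labelled training utterances,
    # a toy stand-in for the training step a service like Azure CLU performs.
    return {intent: {w for utt in utts for w in utt.lower().split()}
            for intent, utts in examples.items()}

def predict_intent(model: dict[str, set[str]], query: str) -> str:
    # Score each intent by word overlap with the query; highest overlap wins.
    words = set(query.lower().split())
    return max(model, key=lambda intent: len(model[intent] & words))

model = train_intents({
    "book_flight": ["book a flight", "reserve a plane ticket"],
    "check_leave": ["how many leaves do i have", "check my leave balance"],
})
```

In practice CLU returns the top intent plus a confidence score and extracted entities, which the bot's dialog logic then routes on.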
Yeah. So, when we talk about maintaining consistent AI chatbot performance as the user base scales up, the first thing that comes up is the input and output limits of the AI models; there are also tokens-per-minute limits on many models. So we create a load balancer on top of multiple AI model deployments, which is responsible for routing queries to a specific instance, for example a GPT-3.5 Turbo deployment or an embedding model deployment. Generally we used GPT-3.5 Turbo, which is cost-effective and responsive. For scalability we provision multiple instances of the GPT models and set up a load balancer on top, or route through an API Management (APIM) service from Azure or AWS integrated in front of the models; that handles scaling on the model side. On the chatbot side, the best approach is Docker containerization so we can scale the services up and down: we replicate the container image and run multiple instances to handle a large user base.
And on the back end, because of the tokens-per-minute limit, we create multiple deployments in different regions. The limits keep being raised, but if 200 to 1,000 users hit the bot at the same time, we still need multiple instances in different locations; using API Management we distribute the requests across them, and that is how we handle scalability on the chatbot side.
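The routing idea above can be sketched as a simple round-robin router over several regional deployments (the endpoint URLs are illustrative; in the real setup APIM or a load balancer does this):

```python
from itertools import cycle

class ModelRouter:
    """Round-robin requests across several model deployments to stay under
    per-deployment tokens-per-minute limits."""

    def __init__(self, endpoints: list[str]):
        self._endpoints = cycle(endpoints)

    def next_endpoint(self) -> str:
        return next(self._endpoints)

router = ModelRouter([
    "https://eastus.example.com/gpt35",   # hypothetical regional deployments
    "https://westeu.example.com/gpt35",
])
```

A production router would also track per-deployment token usage and retry on 429 responses rather than rotating blindly.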
So, on the method for securely integrating a newly developed chatbot with an existing user authentication database: in this scenario, we integrate the database with our back-end service. The user authentication database stores tokens, or an ID and password, plus the session token with its expiry date. Whenever a request comes in from the chatbot, we first check authentication by fetching the data from the DB. We can also set up a caching mechanism, such as a Redis cache or any built-in cache, to reduce the time spent fetching from the database, keeping scalability and containerization in mind. Then, for the chatbot itself, two scenarios come up. The first is a chatbot open to the public: there we have to enforce at least a session token and the best security we can integrate. But when the bot runs on a private network, there are multiple authentication options such as single sign-on or OAuth 2.0; generally, we use token-based authentication for a newly developed chatbot.
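A minimal sketch of that cache-then-database token check, with in-memory dicts standing in for Redis and the existing auth database (names and TTL are illustrative):

```python
import time

# In-memory stand-ins for Redis and the existing auth database.
_cache: dict[str, tuple[float, bool]] = {}      # token -> (cached_at, valid)
_auth_db = {"tok-123": time.time() + 3600}      # token -> session expiry time
CACHE_TTL = 60  # seconds

def is_authenticated(token: str) -> bool:
    now = time.time()
    # 1) Check the cache first to avoid a database round trip.
    cached = _cache.get(token)
    if cached and now - cached[0] < CACHE_TTL:
        return cached[1]
    # 2) Fall back to the auth database and verify the session has not expired.
    valid = _auth_db.get(token, 0) > now
    _cache[token] = (now, valid)
    return valid
```

With Redis, the cache entry would instead be written with an `EXPIRE`/TTL so eviction happens server-side.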
What is a viable method to introduce multilingual support in an AI chatbot using NLP techniques? Yeah. So, the best way to create a multilingual chatbot is to develop the back-end system in a single language, like English. Then, in a middleware layer in front of the back end, we use a natural language processing (translation) service to convert the user's input from any language into English, and send that English input to the back-end service, which generates the answer. When the AI service returns the response, we translate the English response back into the language of the original query and return it to the user. That is how we can achieve multilingual support in an AI chatbot.
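The translate-in / translate-out middleware can be sketched as follows. The translator here is a hard-coded stub; in practice this would be a call to a translation service such as Azure Translator:

```python
# Stubbed translation table so the sketch is self-contained.
_STUB_TRANSLATIONS = {("hola", "en"): "hello", ("hello", "es"): "hola"}

def translate(text: str, target_lang: str) -> str:
    # Stand-in for an NLP translation service call.
    return _STUB_TRANSLATIONS.get((text.lower(), target_lang), text)

def backend_answer(english_query: str) -> str:
    # Stand-in for the English-only back end / AI service.
    return "hello" if english_query == "hello" else "sorry"

def handle(user_text: str, user_lang: str) -> str:
    english_in = translate(user_text, "en")    # any language -> English
    english_out = backend_answer(english_in)   # English-only processing
    return translate(english_out, user_lang)   # English -> user's language
```

Keeping the core pipeline monolingual this way means prompts, intents, and the knowledge base only ever need to exist in one language.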
So, how do we ensure the analysis components of an AI chatbot are optimized for performance when dealing with a large dataset? When we interact with a large dataset, the best approach is to store it in embedding form, so we can run semantic search over it, or to expose it as a tool that fetches the data at query time. We also create a feedback mechanism: when a response generated from that dataset is correct, or the user gives positive feedback, we store that feedback alongside the dataset. The second thing is caching, so we are not fetching the same data again and again: we add a caching layer on top of the dataset, and when the same request comes a second time, we serve it from the cache instead. For better optimization we also add an expiration to the cache, so that stale entries are evicted and the system stays fast.
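The expiring cache layer described above might look like this minimal sketch (a real deployment would typically use Redis with a TTL rather than an in-process dict):

```python
import time

class TTLCache:
    """A minimal expiring cache layer over an expensive dataset fetch."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.time() - stored_at > self.ttl:  # expired: evict and miss
            del self._store[key]
            return None
        return value

    def set(self, key: str, value) -> None:
        self._store[key] = (time.time(), value)
```

Callers check `get` first and only hit the large dataset (or the semantic-search index) on a miss, then `set` the result for subsequent identical queries.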
Uh-huh. Reading data from a PDF, what exception-handling practices should be improved, and how would you update this method accordingly? The code read out was roughly: `PdfReader reader = null; reader = new PdfReader(filePath); String text = PdfTextExtractor.getTextFromPage(reader, 1); ... catch (IOException e) { e.printStackTrace(); } finally { if (reader != null) reader.close(); }`. So, in this error handling, only an IOException is currently caught, and the reader is closed in the finally block. One improvement is to add validation: first check whether the file path exists at all, since the code lacks any file-path validation. It should also handle multiple error types: right now it only catches IOException, but the value might be null, the path might be undefined, or the PDF might simply not exist. Such conditions and validations are missing from this code, so I would add those validations and proper catch handling for each of those error cases.
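In Python terms, the defensive version of that method might look like the sketch below. `extract_first_page` is a hypothetical stand-in for a real PDF extractor (such as iText's `PdfTextExtractor.getTextFromPage`); here it just reads the file so the sketch is self-contained:

```python
import os

def extract_first_page(path: str) -> str:
    # Hypothetical stand-in for a real PDF text-extraction call.
    with open(path, "rb") as f:
        return f.read().decode(errors="replace")

def read_pdf_first_page(path: str) -> str:
    # 1) Validate the input before touching the filesystem.
    if not path:
        raise ValueError("file path must be non-empty")
    if not os.path.isfile(path):
        raise FileNotFoundError(f"no such file: {path}")
    # 2) Catch specific, expected error types instead of one broad handler.
    try:
        return extract_first_page(path)
    except (OSError, UnicodeError) as exc:
        # Wrap and re-raise with context instead of swallowing via printStackTrace.
        raise RuntimeError(f"failed to read PDF {path!r}") from exc
```

The `with` block plays the role of the Java `finally { reader.close(); }`, guaranteeing the handle is released on both success and failure.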
Here is code that's supposed to connect to a database and fetch some data for the chatbot. Can you spot any potential problem with this code that could lead to an exception or connection issue? Yes, sure. The code is roughly: `try: connection = database.connect(host="localhost", user=..., password=...); cursor = connection.cursor(); cursor.execute("SELECT * FROM chatbot_data"); result = cursor.fetchall(); except Exception as e: print("database connection failed"); finally: connection.close()`. So, the host is localhost; generally this code is fine and handles the connection. But if the user or password is wrong, the connection can fail, and note that if `connect` itself fails, `connection` is never assigned, so the `finally` block will raise its own error when it tries to close it. Second, since it uses localhost, we have to make sure that before this goes to production we replace localhost with the proper endpoint. We should also double-check how the query is executed on the cursor and fetched via `fetchall`, since a mistake there can surface as a connection-style error. Finally, the connection is closed in `finally` on every call, so each subsequent call reconnects to the database and closes again. Once a connection is set up and we are hitting the database repeatedly, it should not be closed per call; it should close when the whole process completes, or we can add an idle timer so the connection is closed only after the DB has gone unused for some time.
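A sketch of the fixed pattern, reusing one connection instead of reconnecting per call. `sqlite3` stands in for the real database driver so the example is self-contained:

```python
import sqlite3

_connection = None  # module-level handle, reused across calls

def get_connection(dsn: str = ":memory:") -> sqlite3.Connection:
    # Lazily open one shared connection instead of connect/close per request.
    global _connection
    if _connection is None:
        _connection = sqlite3.connect(dsn)
    return _connection

def fetch_chatbot_data() -> list:
    conn = get_connection()
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM chatbot_data")
        return cursor.fetchall()
    except sqlite3.Error as exc:  # catch driver errors specifically
        print(f"database query failed: {exc}")
        return []
    # deliberately no finally: conn.close() -- the shared connection stays
    # open for reuse and is closed only at process shutdown (or by an idle timer)
```

In production the same idea is usually delivered by a connection pool rather than a single global handle.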
If offered the opportunity, how could you employ a visualization tool to represent chatbot conversations? So, if I got that opportunity, I would use something like Application Insights, or any other tool that can store all the logs. For every conversation between the user and the chatbot, we store the user message, the response, the intent identified from the query, the entities extracted from it, and the specific details related to it. We store each user request in our system, assemble the user's conversation turns into a payload, and push that payload into Application Insights or any service that can represent the data on a dashboard. Alternatively, we can store it in our own storage table or database, and from there surface it on any dashboard showing the communication between the user and the chatbot and the details around each query. We also store the conversation context with each query, since we have to maintain context for the chatbot anyway, and that context helps in the visualization part as well.
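A sketch of assembling one conversation-turn payload for such a telemetry sink. The field names are illustrative, not an Application Insights schema:

```python
from datetime import datetime, timezone

def build_log_record(user_msg: str, bot_reply: str, intent: str,
                     entities: list[str], context_id: str) -> dict:
    """Assemble one conversation-turn record for a logging/dashboard sink."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_message": user_msg,
        "bot_response": bot_reply,
        "intent": intent,
        "entities": entities,
        "context_id": context_id,  # lets the dashboard group turns per session
    }
```

With a shared `context_id` per session, the dashboard can reconstruct whole conversations, not just isolated turns.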
Beyond the core skills required, how would you leverage knowledge of API design to improve chatbot functionality? So, talking about API design beyond the core skills: in a chatbot there are generally multiple APIs. One fetches the details of my profile, one fetches from my knowledge base, one fetches my leave, attendance, and policy details. I can leverage those APIs by wrapping each one as a tool and integrating all of those tools with my master agent. Whenever specific details are required, the GPT model decides which tool it needs to call for that particular query. So we convert the APIs into tools, provide the tools to our master agent, and, based on our prompt and each tool's description, the master agent decides how to proceed and which tool to use in the chatbot, generating a better answer for that specific use case.
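A toy registry-and-dispatch sketch of that "APIs as tools" idea. In the real system an LLM (e.g. via LangChain) picks the tool from its description; here a word-overlap match stands in for that decision, and the API bodies are stubs:

```python
TOOLS: dict[str, dict] = {}

def tool(name: str, description: str):
    # Decorator that registers an API wrapper as a named, described tool.
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("profile_api", "fetch details of the user profile")
def profile_api() -> str:
    return "profile: chatbot developer"       # stub for the real API call

@tool("leave_api", "fetch leave balance and attendance details")
def leave_api() -> str:
    return "leave balance: 12 days"           # stub for the real API call

def master_agent(query: str) -> str:
    # Pick the tool whose description best overlaps the query, then call it.
    words = set(query.lower().split())
    name = max(TOOLS,
               key=lambda n: len(set(TOOLS[n]["description"].split()) & words))
    return TOOLS[name]["fn"]()
```

The key design point is that each tool carries a natural-language description, which is exactly what lets an LLM-based master agent choose among them.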