
Resourceful Technical Project Manager/Solution Architect with 11 years of experience as a team lead. Successful at leading cross-functional teams and delivering products on time. Demonstrated strong understanding of the tradeoffs between products, applications, and costs. Proven ability to shepherd projects from initial concept and specification through all development phases to shipping on time and within budget. Proven ability to analyze an organization's critical business needs, identify deficiencies and potential opportunities, and develop innovative, cost-effective solutions for enhancing competitiveness, increasing revenue, and improving customer service offerings.
TechLead, Solution Architect and Technical Project Manager
PPInfotech Technologies - Software Developer and Lead
Mantra Softech - Software Developer and Lead
Tops Technologies - Software Developer
Kunsh Technologies
IIS

T-SQL
Jira
Azure
Ionic

GitHub

BitBucket

TFS

SVN

AWS

.NET Core

VB.NET

Visual Studio

Spring MVC

Angular

Git

HTML/CSS

jQuery

Bootstrap

JavaScript

TypeScript

JSON

DevOps

Microsoft Project

Worked well with the team as a technical project manager and ASP.NET lead; a capable troubleshooter.
Okay, so to talk about myself: my name is Krutarch Shah, and I have more than 12 years of experience. Throughout my career I have worked with ASP.NET technologies, and I have been involved in SQL Server database development as well. I started as a junior .NET developer and was then promoted to senior software developer. I also interact with overseas clients: I am involved in requirement gathering, converting non-technical requirements into technical requirements, technical documentation, and defining the software architecture. I work with legacy applications as well, migrating them from older technologies to newer ones based on the customer's needs. Apart from the technical side, I am involved in people management, and I also work with Azure DevOps. With Azure DevOps I follow agile methodologies for managing and maintaining projects, keeping deliverables on time and on track, breaking work down into modules and tickets, and working through them release by release, so I am also involved in delivery and release management. Regarding the technology stack in depth: within ASP.NET I started with VB.NET and Web Forms, then worked with MVC as well as ASP.NET Core, so I have good hands-on experience across .NET. In one of my recent projects I worked with both ASP.NET Core and Web Forms. Altogether, I have a good understanding of ASP.NET, SQL Server databases, and Azure DevOps.
Okay, so there are two ways of handling exceptions in C#. One is global exception handling, which we define at the application level, for example in the global configuration files, so that every unhandled error is logged in one place. That is a common approach to follow in any project, because we do not want to disclose sensitive database or server information to the user; instead we can show a custom error page, so in case of an unexpected error the user sees a friendly message and can come back and try again. The other is local exception handling, managed with try/catch. The try block contains the statements or logic we have written, and the catch block handles the exception: if an exception occurs at runtime, it is thrown to the catch block, where we can handle it in different ways, such as retrying the operation, writing and maintaining logs, finding the root cause, and resolving the issue if it is a case-by-case business problem. There is one more block, finally. The finally block executes whether or not an exception is generated; it is not conditional, it is mandatory. For example, if we have written something in the try block and we want certain code to run regardless of whether an exception is thrown, we put it in the finally block. So, to summarize, there are two approaches: global exception handling, and local handling using try, catch, and finally statements.
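As a rough illustration of the local try/catch/finally approach described above, here is a minimal sketch; the class, method, and file names are hypothetical, and the global handler (which would normally be registered once at application startup, e.g. in middleware or Global.asax) is not shown.

```csharp
using System;
using System.IO;

public static class OrderProcessor
{
    // Hypothetical example: run some logic, log failures locally,
    // and always release resources in the finally block.
    public static void ProcessOrder(string orderFile)
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader(orderFile);      // may throw FileNotFoundException
            string firstLine = reader.ReadLine();
            Console.WriteLine($"Processing: {firstLine}");
        }
        catch (FileNotFoundException ex)
        {
            // Local handling: log the root cause instead of exposing details to the user.
            Console.Error.WriteLine($"Order file missing: {ex.Message}");
        }
        catch (Exception ex)
        {
            // Fallback for unexpected errors.
            Console.Error.WriteLine($"Unexpected error: {ex.Message}");
        }
        finally
        {
            // Runs whether or not an exception was thrown.
            reader?.Dispose();
        }
    }
}
```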
Okay, so this question is about SQL Server with .NET: what steps would I take to ensure consistency and reliability of data in a distributed .NET application using SQL Server. If we talk about a distributed .NET application, I'll give one example: an e-commerce system with multiple applications, say a web application, a desktop application, perhaps a back-end admin panel, and a seller application. So we are managing several different applications against a single database. SQL Server can serve multiple .NET applications, or applications on different technology stacks, from one centralized database. My approach to maintaining consistency and reliability is, first, to read the data in real time. Real-time data is the most accurate, because if application one is reading some rows while application two is inserting or modifying rows in the same table, there is a conflict. I won't say the conflict necessarily leads to an error or a bug, but the user might be seeing stale data rather than the latest inserted or modified data. So at the time of each request we should check the data in real time, so that when the user opens a list they see the updated data, not cached or older data. That is one thing. Secondly, when concurrent access to SQL Server occurs, we can use locking hints, for example the NOLOCK hint on SELECT statements, to avoid blocking and deadlock situations. That will not usually happen in a smaller application, but at larger scale we may face deadlocks, and the hint helps avoid them. These are the steps I would follow to maintain consistency and reliability of data in a distributed .NET application using SQL Server.
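A minimal sketch of the real-time read mentioned above, using ADO.NET; the connection string, table, and column names are made up for illustration. Note that the NOLOCK hint reduces blocking at the cost of possibly reading uncommitted rows, so it is a trade-off rather than a default.

```csharp
using System;
using Microsoft.Data.SqlClient;

public static class ProductReader
{
    // Hypothetical connection string; in a real system this comes from configuration.
    private const string ConnectionString =
        "Server=.;Database=ShopDb;Integrated Security=true;TrustServerCertificate=true";

    // Reads the latest rows directly from SQL Server on every call (no caching),
    // so all applications sharing the database see current data.
    public static void PrintLatestProducts()
    {
        const string sql =
            "SELECT TOP (10) Id, Name, Price " +
            "FROM dbo.Products WITH (NOLOCK) " +   // reduces blocking; may read uncommitted data
            "ORDER BY ModifiedAt DESC";

        using var connection = new SqlConnection(ConnectionString);
        using var command = new SqlCommand(sql, connection);
        connection.Open();

        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)} ({reader.GetDecimal(2):C})");
        }
    }
}
```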
Okay, so if we talk about optimization, there are a few steps to follow. Ideally, while defining the structure, we need to make sure the application will be scalable, so we should design it with proper OOP concepts and the SOLID principles, just to avoid delays and a bad user experience, in other words to keep the code performant. That is one thing. Now, for a heavily loaded Azure SQL database: SQL Server can store a very large amount of data, millions of rows and more, and fetching that data into the .NET application will take time, so we need to take certain steps to optimize performance. First, we need proper indexing for the SELECT statements. If we are querying a table with millions of records, indexing on the primary key comes first. There are two types of indexes, clustered and non-clustered, and I would use both as needed. For example, if we use a JOIN in SQL Server, we should have an index on the join column, typically a non-clustered index on the foreign key that relates to the clustered index (primary key) of the other table. That effectively keeps an ordered copy of that column, so the comparison between the two tables during the join runs as a faster operation and the data loads more quickly. Apart from that, when binding data to the .NET application, we should fetch only what is needed. For example, if we are binding a table with millions of records, we should use paging and show, say, 100 records at a time, fetching the next 100 for the next page. That is one approach I would definitely follow. Secondly, for keyword search against SQL Server, a LIKE query will work, but only on selected columns, and we should take care to avoid implicit conversions, for example converting between varchar and int inside the search predicate.
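A rough sketch of the server-side paging idea, assuming a hypothetical dbo.Orders table; OFFSET/FETCH keeps each request to one page of rows instead of streaming millions.

```csharp
using System.Collections.Generic;
using Microsoft.Data.SqlClient;

public static class OrderPager
{
    // Fetches one page of rows (pageSize records) from a hypothetical Orders table.
    // Paging happens in SQL Server, so only the requested page travels over the wire.
    public static List<string> GetOrderPage(string connectionString, int pageNumber, int pageSize = 100)
    {
        const string sql =
            "SELECT OrderNumber " +
            "FROM dbo.Orders " +
            "ORDER BY OrderId " +                       // ORDER BY is required for OFFSET/FETCH
            "OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY";

        var results = new List<string>();
        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand(sql, connection);
        command.Parameters.AddWithValue("@Offset", (pageNumber - 1) * pageSize);
        command.Parameters.AddWithValue("@PageSize", pageSize);

        connection.Open();
        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            results.Add(reader.GetString(0));
        }
        return results;
    }
}
```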
So, yes, there are multiple approaches to a branching strategy with Git, whether the host is Bitbucket, TFS/Azure Repos, or anything else. I would recommend a feature-branch approach organized by release and module. If we are working on different releases simultaneously, my approach is this: we have a master branch for the project, which is the main branch, and then we create feature branches from master, each tied to a particular release and carrying the related task numbers in its description. Then, as developers, for each task within a feature (say there are 20 tasks under feature 1), we take a separate branch off the feature branch. If I am a developer, I take a branch from feature 1, name it after task 1, complete my work, merge it back into the feature branch, and then take a new branch from the feature for task 2. I delete the task 1 branch if it is no longer needed, or keep it for future reference. If multiple developers are working on multiple features simultaneously, we have multiple feature branches and many task branches, each merged back into its feature branch. Once a feature is complete, it is ready for release, and the feature branch is merged into the main branch. Ideally we maintain three long-lived branches: development for development work, UAT for QC, and master for production. Depending on where we want to release, we merge the feature branch into that branch, and if CI/CD is set up, it is deployed automatically. Apart from that, there is one more approach where each developer simply has their own branch and merges to the main branch, but that suits a single-release setup rather than this example. For multiple parallel releases, the feature-branch strategy is the most efficient way to work. A command-line sketch of this flow follows below.
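A minimal command-line sketch of the task/feature flow described above; the branch names are illustrative, not from any specific project.

```
# create a feature branch for a release from master
git checkout master
git checkout -b feature/release-1

# each developer branches per task off the feature branch
git checkout -b task/1-add-invoice feature/release-1
# ...do the work, then commit...
git add .
git commit -m "Task 1: add invoice generation"

# merge the task back into the feature branch
git checkout feature/release-1
git merge task/1-add-invoice
git branch -d task/1-add-invoice    # optional clean-up

# when the feature is complete, merge it into the release target (development, UAT, or master)
git checkout master
git merge feature/release-1
git push origin master
```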
Okay, so for Azure DevOps: how to improve the code review process, and how it works with C#/.NET. For example, if we are using Azure DevOps with Git for development, continuing the previous example with feature branches: when someone completes a task, they raise a pull request, a PR in other words. There is an approval step, so they need to select an approver. For instance, if I am the developer and my manager or team lead is the approver, I commit my code and create a pull request to that manager or team lead. The reviewer then sees which lines of code I have changed and judges whether the change is relevant, whether it can be optimized, and whether it will have side effects on other parts of the project. If there is a problem, they reject the pull request and assign the task back to me with comments. That helps us build quality code, and quality control effectively happens at the same time. I make the necessary changes, raise the pull request again, and once the reviewer is satisfied the change is merged into the feature branch or main branch as needed. Those are the strategies to improve the code review process. As for Azure DevOps itself, it supports agile methodology with defined steps: requirements gathering, then analysis, then defining modules, tickets, and stories (we can also work with epics), and then development. Once development is done, we create a PR; if it is approved by the manager or team lead it moves on to QC, and if not it goes back to the developer. It is a loop that continues until the code is done properly, with one exit point where the code is merged, and the manager or team lead decides when to exit. Once they give the green signal and merge the code, we move to the next step. That is the strategy, and that is the process to improve code review.
Okay, so I'll just read through the code. We have one class, named Order, with one method whose return type is decimal: CalculateTotalDiscount. It takes two input parameters, one a list of products and the other a Customer object. The list parameter is the issue: ideally it should be a typed list, List<Product>, written as List, then Product inside angle brackets (there is a Product class defined below), rather than the untyped list as it stands. That is one issue. Reviewing the rest of the code: decimal discount = 0m, that is fine. If customer.IsPremium, which is a boolean property, is true, then the loop runs: foreach product in products, so they are iterating the products list, and discount += product.Price * 0.1m, so a 10% discount is accumulated per product, and then the discount is returned. Beyond that I don't see any other issue in this block of code. The main issue I see is in the input parameters of CalculateTotalDiscount: it should be List<Product> products in place of the untyped list. Apart from that, the code is fine.
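For reference, a hedged reconstruction of what the reviewed snippet appears to look like after the suggested fix; the original code is not shown in this transcript, so the member names below (Price, IsPremium) are inferred from the discussion rather than quoted.

```csharp
using System.Collections.Generic;

public class Product
{
    public decimal Price { get; set; }
}

public class Customer
{
    public bool IsPremium { get; set; }
}

public class Order
{
    // Parameter typed as List<Product> (the fix discussed above) instead of an untyped list.
    public decimal CalculateTotalDiscount(List<Product> products, Customer customer)
    {
        decimal discount = 0m;
        if (customer.IsPremium)
        {
            foreach (Product product in products)
            {
                discount += product.Price * 0.1m;   // 10% discount per product for premium customers
            }
        }
        return discount;
    }
}
```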
Again, a SQL statement: we are trying to retrieve all unique pairs of customers who have ordered the same product. However, there is a subtle flaw present: can you spot it, explain how it would impact the result, and how would you rectify the SQL query while maintaining the intended purpose? Okay, so this is the SQL statement we have; I'll just go through it. We select the two customer IDs, one from each of the two order aliases, order 1 and order 2, which are joined on product ID, and then there is a condition that order 1's customer ID is not equal to order 2's customer ID. So when the customers are different and have orders on the same product, the query returns the pair of customer IDs; that is the approach, and the first join on product ID covers the "same product" requirement. Now, about the subtle flaw: in the select statement the customer IDs we are getting look fine, and we are taking them from order 1 and order 2. One thing I would point out is that customers who have not ordered anything will not appear in the result set at all. Other than that, the query looks fine to me; I don't find any other error or issue here.
Okay, so the Interface Segregation Principle is part of the SOLID principles, and it is very important for defining the architecture and keeping the code maintainable, especially when we are dealing with third-party services. If we are integrating third-party services, what we do is build a layer around our core development using interfaces. An interface gives you only the signatures, not the method bodies; there is no implementation in it. Interfaces also give us a form of multiple inheritance: a class can implement multiple interfaces, which we cannot do with class inheritance, and that is a primary difference between the two. The blocks of statements and the logic live in the classes; the interface is associated with the class and mirrors its signatures. So ideally we create a separate class library project that holds the integration with the third-party services, and other applications or projects within the same solution reference that library and interact only with the interface, which exposes signatures only, restricting access to the underlying block of code. That also helps from a security point of view: a third-party caller hits the interface instead of the class object, so the logic is not exposed up front, only the signatures. In case of a hacking attempt or unauthorized access, the request is handled without revealing the logic, or any secrets, keys, or authentication and authorization handling, which stay hidden inside the class methods. So exposing narrow, purpose-specific interfaces rather than the whole class is how I would apply the Interface Segregation Principle in C# and avoid that ambiguity.
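A small illustrative sketch of interface segregation in C#; the service and member names are hypothetical. Instead of one wide interface for a third-party payment provider, each caller depends only on the narrow interface it actually needs.

```csharp
using System;

// Segregated interfaces: each consumer depends only on the members it uses.
public interface IPaymentCharger
{
    void Charge(decimal amount);
}

public interface IPaymentRefunder
{
    void Refund(string transactionId, decimal amount);
}

// The wrapper around the hypothetical third-party SDK implements both,
// but its internal logic and credentials stay hidden behind the interfaces.
public class ThirdPartyPaymentGateway : IPaymentCharger, IPaymentRefunder
{
    public void Charge(decimal amount) =>
        Console.WriteLine($"Charging {amount:C} via external provider...");

    public void Refund(string transactionId, decimal amount) =>
        Console.WriteLine($"Refunding {amount:C} for {transactionId}...");
}

// A checkout module only ever needs to charge, so it takes IPaymentCharger,
// not the whole gateway class.
public class CheckoutService
{
    private readonly IPaymentCharger _charger;

    public CheckoutService(IPaymentCharger charger) => _charger = charger;

    public void CompleteOrder(decimal total) => _charger.Charge(total);
}
```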
What strategies would you suggest for implementing zero-downtime deployment? Okay, so in terms of deployment, yes, we definitely need a strategy. The first thing I would suggest is having separate servers for the application and for SQL Server, and, for zero downtime, a standby server for each. For example, say the application server is server A and the database server is server D. If we need to restart the application server, we prepare a backup server, call it A.1, that holds an identical copy of the code and points to the same SQL Server database. With DNS management, say our URL is www.xyz.com, we temporarily switch the DNS from server A to server A.1 (this can be scheduled) and recycle the IIS application pool; that takes about a second, as long as the server configuration is good. Then we restart server A, which takes a few minutes, but the application keeps running on the temporary server A.1. Once server A is back, or if any issue occurs, we switch the DNS from A.1 back to A. For deployments we do the same: keep a replica, or a scheduled job that replicates the code from server A to A.1. Similarly for server D, the database server: if we want to restart it, we need regular backups and the same backup-and-restore approach applied at the same time, because moving the database from server D to a server D.1 can affect the data captured in between. Any data entered during the switchover should be stored with a flag, or we take a fresh backup of that server and move the latest data back to the main server, so that no data is lost. That is the procedure I would follow to implement zero-downtime deployment.
What strategies do you use to manage state and data consistency across distributed services? Okay, so for a cloud or cloud-native application in ASP.NET, the question is what strategies to use to maintain state and data consistency across distributed services. My answer: take the example of a system where multiple applications share a single database. The synchronization should be real time, that is one thing. Say the same database serves multiple applications: one hosted in the cloud, one a desktop application, and one a back-end application also in the cloud. If we are running three different applications and we upload something through the back end, it needs to be reflected at the same time in the front-end website, the cloud application, and the desktop application. That real-time sync needs to be there, and with a SQL Server database behind an ASP.NET application it largely is by default. Secondly, to work with this distributed architecture, we need to read the real-time data: for example, if we insert data through the back-end application, then when another application fetches data it should get the fresh rows from SQL Server, with no latency, no delays, and no scheduled batch sync, so that whatever was entered in the back end is consistent across the other applications at the same time. That is how we manage it across the distributed services, and with a centralized database serving distributed services hosted in the cloud or natively, this approach can be managed fairly easily.