
Good exposure to each phase of the Software Development Life Cycle (SDLC), taking projects from the concept stage through full development. Hands-on experience in FICO Blaze Advisor, SQL, JIRA, and Power BI. Experience in the banking domain. Detail-oriented Project Coordinator and Order Processing Specialist with 5+ years of experience in order management, process tracking, vendor coordination, and project execution. Proficient in NetSuite, Microsoft Office Suite (Excel, Word, Outlook), and Power BI, ensuring seamless order processing, accurate invoicing, and efficient workflow automation. Adept at collaborating with cross-functional teams, including sales, operations, recruiting, and marketing, to streamline customer order tracking, vendor management, and logistics coordination. Recognized for ensuring high accuracy in order execution, maintaining well-organized project documentation, and enhancing operational efficiency.
Analyst
Career Launcher
MS-Office

SQL
So I'm currently working as a Senior Business Analyst with Value Momentum, and I have 5.2 years of experience. My role and responsibilities cover a variety of areas. Firstly, there is stakeholder management, which involves stakeholder communication both internally and externally. I handle a team of 7 people. I work mainly with FICO Blaze Advisor, along with BRD work and UAT and fraud testing. We do the documentation, the BRD and FSD; after that, we analyze the requirements, and I hand them over to my developers. Once the developers execute them, we move to UAT and fraud testing, and there is data analysis involved as well. I work mainly on Jira in an Agile setup, and that has been my experience so far: a mix of team management and team handling, stakeholder management, and learning new skills. I have exposure to Jira, Agile, and SQL. I did my MBA from Symbiosis in the 2018-2020 batch. I belong to Gwalior, Madhya Pradesh; currently I live here with my wife, and my parents live in Gwalior.
Discuss your method for validating the accuracy of data after transforming and loading it into Snowflake. We can start by defining the validation rules, such as source-to-target mapping, so there is a clear trace of where the data comes from and how each transformation is applied. We can apply business rules to validate that the data aligns with expectations, and data quality checks are very important as well. Then we can do count validation: run a query to compare row counts between source and target. We can run data consistency and hygiene checks, column-level validation, and SQL queries such as SELECT statements to compare values, and we can define threshold validation. Data accuracy validation itself includes spot-checking the key metrics to confirm they make sense. Then there is transformation logic validation, where we reapply the transformation and compare results, and finally audit trail and metadata validation.
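As a rough illustration of the row-count and key-metric spot checks described above, here is a minimal Python sketch using the Snowflake Python connector; the connection details, table names, and the order_amount column are hypothetical assumptions, not taken from the original.

    import snowflake.connector  # assumes the Snowflake Python connector is installed

    # Hypothetical connection details; replace with real credentials/config.
    conn = snowflake.connector.connect(account="my_account", user="my_user", password="...")
    cur = conn.cursor()

    def scalar(sql):
        """Run a query that returns a single value and fetch it."""
        cur.execute(sql)
        return cur.fetchone()[0]

    # Row-count validation: source staging table vs. transformed target table (names are assumptions).
    source_count = scalar("SELECT COUNT(*) FROM staging.orders_raw")
    target_count = scalar("SELECT COUNT(*) FROM analytics.orders")

    # Spot check a key metric after transformation (column name is an assumption).
    source_revenue = scalar("SELECT SUM(order_amount) FROM staging.orders_raw")
    target_revenue = scalar("SELECT SUM(order_amount) FROM analytics.orders")

    print("row counts match:", source_count == target_count)
    print("revenue totals match:", source_revenue == target_revenue)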
What considerations would you make when creating a Tableau dashboard intended for use by executives versus one for data analysts? When we create a Tableau dashboard, the design and the content should align with the needs and preferences of the audience. For executives, the focus is on high-level insights and strategic decision making: the dashboard provides an overview of the overall KPIs and prioritizes data tied to business goals, such as revenue growth, cost reduction, or market performance. For data analysts, the focus is on granular detail, with the flexibility to explore and drill down, and it can include more comprehensive data sources. Executives need a few critical, clearly defined KPIs presented as aggregated, summarized data, whereas data analysts need more specific and detailed views, including raw and calculated fields, which executives generally do not need. Executives also need comparisons and trends to understand growth and to strategize. In short, for executives we can use a simple, clean, intuitive design with minimal text and a top-down structure: start with an overview and allow drill-downs only if necessary. For data analysts, a more flexible layout with multiple tabs or sections for deep exploration works better.
How would you set up a monitoring system for a data pipeline to catch and alert on failures proactively? I'm not completely sure about this, but we can focus on the monitoring objectives: the health of the pipeline, ensuring all steps are running as expected. We can set timelines and verify that each job completes within the accepted window. We should check that data quality is not breached, that sufficient quality is maintained, and monitor for anomalies such as missing, duplicate, or clearly incorrect data. We can also verify that the data is extracted correctly and in the correct format, that the expected number of records is extracted, that the transformation happens correctly, and that the load into the target system completes. We can implement detailed logging at each step of the pipeline and set up alerts, such as error alerts that trigger on any job failure, and data quality alerts. There are various tools and platforms that can be used to analyze the logs and automate this.
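A minimal sketch of the checks described above, assuming a generic pipeline run record; the thresholds, job names, and the send_alert stub are illustrative assumptions, and a real setup would route alerts to email, Slack, or a paging tool.

    import logging
    from datetime import datetime, timedelta

    logging.basicConfig(level=logging.INFO)

    # Hypothetical thresholds; real values would come from the pipeline's SLAs.
    MAX_RUNTIME = timedelta(hours=1)
    MIN_ROW_COUNT = 1000

    def send_alert(message):
        # Stub: in practice this could post to email, Slack, or a paging tool.
        logging.error("PIPELINE ALERT: %s", message)

    def check_run(job_name, started_at, finished_at, rows_loaded, failed):
        """Apply the basic health checks to one pipeline run."""
        if failed:
            send_alert(f"{job_name} failed outright")
            return
        if finished_at - started_at > MAX_RUNTIME:
            send_alert(f"{job_name} exceeded the accepted runtime window")
        if rows_loaded < MIN_ROW_COUNT:
            send_alert(f"{job_name} loaded only {rows_loaded} rows (possible missing data)")

    # Example run record (values are made up for illustration).
    check_run("daily_orders_load",
              started_at=datetime(2024, 1, 1, 2, 0),
              finished_at=datetime(2024, 1, 1, 3, 30),
              rows_loaded=250,
              failed=False)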
In what ways could the principles of SOLID impact the maintenance and scalability of a data analytics pipeline? In general, if we talk about SOLID, the S is the Single Responsibility Principle. The impact is that it becomes easier to debug and test individual stages, it reduces the risk that a change in one stage breaks another, and it promotes the reusability of individual components. The O is the Open/Closed Principle: it enhances flexibility, allowing new features, such as support for an additional file format, to be added without modifying existing code, which reduces risk during updates. The L is the Liskov Substitution Principle, but I'm not really familiar with it. Then there is I, the Interface Segregation Principle, which I'm also not very aware of, and D is the Dependency Inversion Principle. That is all I am aware of about SOLID.
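A small sketch of how single responsibility and open/closed might look in a pipeline; the class names and the demo file are illustrative assumptions rather than anything from the original answer.

    import json
    from abc import ABC, abstractmethod

    class Extractor(ABC):
        """Single responsibility: an extractor only knows how to read raw records."""
        @abstractmethod
        def extract(self):
            ...

    class CsvExtractor(Extractor):
        def __init__(self, path):
            self.path = path
        def extract(self):
            with open(self.path) as f:
                return [line.strip().split(",") for line in f]

    class JsonExtractor(Extractor):
        # Open/closed: new formats are added as new classes, without touching the pipeline.
        def __init__(self, path):
            self.path = path
        def extract(self):
            with open(self.path) as f:
                return json.load(f)

    def run_pipeline(extractor: Extractor):
        # The pipeline depends on the Extractor abstraction, not a concrete format.
        records = extractor.extract()
        return len(records)

    # Tiny demo file so the example can run standalone.
    with open("orders.csv", "w") as f:
        f.write("1,apple\n2,banana\n")

    print(run_pipeline(CsvExtractor("orders.csv")))  # -> 2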
Describe your approach to identifying and fixing a data discrepancy in a multistage Snowflake data pipeline. We can again start by understanding the data pipeline: document the pipeline stages, whether extraction, transformation, or loading, and understand the dependencies of each, whether high or low. We identify the discrepancy: define the problem, collect evidence, and define the scope of it. Then we validate each stage of the pipeline: source data validation (where the source is and how it behaves), transformation logic validation, and logging validation. We have to analyze the logs and work out what happened at which stage, what logs were created, and what they recorded. We have to do an RCA, a root cause analysis: backtrack through what happened, starting from the final table and working backwards. We can compare snapshots, if available, against historical data to confirm where things were still fine, and we have to spot patterns, for example if this happens regularly for null values or duplicate records, so we can identify exactly what the pattern is. Then we fix the issue: correct the source data issue, update the transformation logic, reload the data, and push it forward again.
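A rough sketch of the "start from the final table and work backwards" step, comparing row counts stage by stage to localize where a discrepancy is introduced; the stage/table names and connection details are hypothetical assumptions.

    import snowflake.connector  # assumes the Snowflake Python connector

    conn = snowflake.connector.connect(account="my_account", user="my_user", password="...")
    cur = conn.cursor()

    # Ordered from the final reporting table back to the raw landing table (names are assumptions).
    stages = ["analytics.orders_report", "analytics.orders", "staging.orders_clean", "staging.orders_raw"]

    counts = {}
    for table in stages:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        counts[table] = cur.fetchone()[0]

    # Walking backwards, the first pair whose counts diverge points at the stage to investigate.
    for downstream, upstream in zip(stages, stages[1:]):
        if counts[downstream] != counts[upstream]:
            print(f"Discrepancy introduced between {upstream} and {downstream}: "
                  f"{counts[upstream]} vs {counts[downstream]} rows")
            break
    else:
        print("Row counts are consistent across stages; check column-level values next.")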
A part of a Tableau calculated field is given here to categorize revenue into different classes. It seems that the classification isn't working correctly. What is the flaw in the logic presented in this code segment? CASE WHEN SUM([Revenue]) <= 1000 THEN 'Low' WHEN SUM([Revenue]) > 1000 AND SUM([Revenue]) <= 5000 THEN 'Medium' WHEN SUM([Revenue]) > 5000 THEN 'High' ELSE 'Undefined' END. I think it's the improper use of the parentheses, and there may be a missing THEN keyword for some of the conditions, which could be the problem, because Tableau requires a specific syntax for CASE statements and the incorrect placement of the parentheses has led to this. That is what I feel is the issue, though I'm not really sure about it.
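To make the intended banding explicit, here is a small pandas sketch of the same classification logic; the data values are made up, and this simply mirrors the thresholds dictated in the formula rather than representing the Tableau fix itself.

    import pandas as pd

    revenue = pd.DataFrame({"Revenue": [800, 3200, 7500]})

    def classify(total):
        # Mirrors the calculated field: <=1000 Low, 1000-5000 Medium, >5000 High, else Undefined.
        if total <= 1000:
            return "Low"
        if total <= 5000:
            return "Medium"
        if total > 5000:
            return "High"
        return "Undefined"  # only reached for NaN-like inputs

    revenue["Class"] = revenue["Revenue"].apply(classify)
    print(revenue)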
Below is a Python snippet that is meant to filter out all numbers in an array that are greater than 10 and then print the sum of the remaining numbers. However, it contains a bug. Can you identify and explain the issue? numbers = [5, 11, 2, 16, 7, 10]; sum_of_numbers = sum(filter(lambda x: x < 10, numbers)); print(sum_of_numbers). Give it a try. I don't have that much exposure to Python, but what I feel is that the issue is the variable sum_of_numbers: it is not defined anywhere in the code before it is used, so I think there will be an error there. That is what I could identify.
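For reference, a runnable reconstruction of the snippet with one likely reading of the intended fix: the dictated "semicolon" is written as the colon that lambda syntax requires, and the comparison is changed to <= 10 so that 10 itself is kept when filtering out numbers greater than 10. This interpretation of the bug is an assumption, not part of the original answer.

    numbers = [5, 11, 2, 16, 7, 10]

    # Keep only the numbers that are NOT greater than 10, then sum them.
    sum_of_numbers = sum(filter(lambda x: x <= 10, numbers))

    print(sum_of_numbers)  # 5 + 2 + 7 + 10 = 24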
Describe the steps to migrate a data analysis workflow from a traditional database system to Snowflake, ensuring minimal downtime. First we have to assess and plan the migration: understand the current workflow, the key components in the existing workflow, and all the data sources, and then document it. Once that analysis and documentation is done, the next step is defining the migration goals: the objective of the migration, why we have to migrate, and the success criteria, which can include query performance and data accuracy. The third step is establishing a migration team, which involves data engineers and so on. The next phase involves preparing the Snowflake environment: set up a Snowflake account, configure virtual warehouses, configure the networking, optimize the storage, and set up data governance. And once that is done, you migrate the whole system.
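A very rough sketch of the "prepare the environment and load" portion using the Snowflake Python connector; the warehouse, database, stage, and table names are all hypothetical, and a real cutover would add parallel-run validation against the legacy system before switching traffic.

    import snowflake.connector  # assumes the Snowflake Python connector

    conn = snowflake.connector.connect(account="my_account", user="my_user", password="...")
    cur = conn.cursor()

    # Prepare the Snowflake environment (object names are assumptions).
    cur.execute("CREATE WAREHOUSE IF NOT EXISTS migration_wh WAREHOUSE_SIZE = 'SMALL'")
    cur.execute("CREATE DATABASE IF NOT EXISTS analytics")
    cur.execute("USE DATABASE analytics")

    # Bulk-load data that was exported from the legacy database into a stage.
    cur.execute("""
        COPY INTO analytics.public.orders
        FROM @legacy_export_stage/orders/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)

    # Basic cutover check: row count in Snowflake vs. the count recorded from the legacy system.
    cur.execute("SELECT COUNT(*) FROM analytics.public.orders")
    print("rows loaded:", cur.fetchone()[0])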
Can you discuss an approach to use Python's data visualization tools to supplement Tableau's dashboards with advanced analytics? We can first identify which advanced analytics are needed and then set up a flow between Python and Tableau. There might be multiple options; the one I am aware of is using the Tableau-Python integration with Jupyter notebooks, where you develop and run the advanced analytics or visualizations in Jupyter using Python libraries, export the results as a dataset, and bring that into the Tableau workbook. This allows an interactive analysis workflow.
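A small sketch of that flow: run the heavier analysis in Python (here a 7-day rolling average, purely as a stand-in for "advanced analytics") and export the result as a CSV that Tableau can connect to as a data source. The data, file name, and column names are assumptions for illustration.

    import pandas as pd
    import numpy as np

    # Synthetic stand-in for a daily sales extract (in practice this would come from the warehouse).
    dates = pd.date_range("2024-01-01", periods=30, freq="D")
    sales = pd.DataFrame({"order_date": dates,
                          "revenue": np.random.default_rng(0).integers(500, 1500, size=30)})

    # "Advanced" step done in Python: a 7-day rolling average of revenue.
    sales["revenue_7d_avg"] = sales["revenue"].rolling(window=7).mean()

    # Export the enriched dataset; Tableau connects to this file as a data source.
    sales.to_csv("daily_sales_enriched.csv", index=False)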
What Python libraries would you choose for advanced data analysis and why? I know a couple of them, though I'm not completely sure this is the answer you're looking for. One is Pandas, for data manipulation and analysis, because Pandas is, we can say, the backbone of data manipulation: it lets you handle large datasets, and its key features include finding and handling missing data and working with time series.
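A short pandas example of the two features mentioned, missing-data handling and time series support; the series is a small made-up daily dataset used only for illustration.

    import pandas as pd
    import numpy as np

    # Small made-up daily series with gaps.
    dates = pd.date_range("2024-01-01", periods=6, freq="D")
    df = pd.DataFrame({"revenue": [100.0, np.nan, 120.0, 130.0, np.nan, 150.0]}, index=dates)

    # Handling missing data: fill gaps by carrying the last known value forward.
    df["revenue"] = df["revenue"].ffill()

    # Time series support: resample the daily figures to a weekly total.
    weekly = df["revenue"].resample("W").sum()
    print(weekly)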