
Engineering graduate in Information Science and Engineering with over 8 years of experience in the software industry. Possess 2 years of expertise in ERP (SAP Business One), SQL Server, and Crystal Reports, coupled with 6 years of extensive experience in Business Intelligence (BI) concepts such as data warehousing, data modelling, data reporting, and data marts; in technologies including Power BI, SQL Server, Azure SQL DB, Azure Synapse / MS Fabric, Databricks, and Azure Data Factory; and in languages including T-SQL, DAX, M Query, PySpark, and YAML. Also proficient in database development, multidimensional analysis, and CI/CD pipeline deployment using Azure DevOps.
Pulled data from business tools such as SAP ERP, Salesforce (CRM), SuccessFactors (HR), and Oracle EPM (Financial Planning).
Business Intelligence Engineer
Elastacloud
Business Intelligence Analyst
Sapiens Technologies India Pvt. Ltd.
SAP B1 Consultant & BI Analyst
SM Squares Technologies
SAP B1 Consultant
Dynamo Infotech
SAP Business One

SQL Server

Crystal Reports

Power BI

Azure Synapse

Databricks

Azure Data Factory

T-SQL

DAX

YAML

MS Fabric

SSIS

SSAS

SSRS

Azure Boards
Hi, I'm Amit. I have around 8 years of experience in the IT industry, with 6 years in Power BI, data modeling, data analysis, and data warehousing. I started my career creating reports with SQL Server, where I worked on stored procedures, functions, and views, and used T-SQL scripts to build reports such as sales reports. Later I got interested in Power BI, so I moved to creating reports and dashboards there, while also working on the back-end data for the data warehouse.

I have mainly worked on projects in the finance and HR domains. We collect the data from the multiple data sources the company uses and load it into a landing zone; then we transform and clean the data, improving its quality, removing nulls, and indexing it. Once that configuration is done, the data moves to the staging space, where we create the facts and dimensions and build the model, and then to the curated space, from which we build the reports and dashboards. In this process I used Azure Data Factory for orchestrating the data, and SQL Server for the incremental load and full load processes and for manipulating and transforming the data. I used Databricks as well, for the same kind of transformations alongside ADF. For reporting I used Power BI, with DAX queries for the measures and M Query for the ETL steps. For storage we used Azure SQL DB and Azure Synapse, and since recently earning a Microsoft certification I have also used Microsoft Fabric for a few things.

We follow an agile methodology with two-week sprints: every two weeks there is planning, a retrospective, and a demonstration, and based on the input from the demo we make improvements in the next sprint. That is how we run the project. Thank you.
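As a minimal sketch of the incremental-load process mentioned above, assuming a watermark approach and hypothetical landing.Sales, staging.Sales, and etl.Watermark tables (none of these names come from the original):

```sql
-- Watermark-based incremental load: copy only rows changed since the
-- previous run, then advance the watermark. All names are illustrative.
DECLARE @LastLoad datetime2;

-- Read the high-water mark recorded by the previous run.
SELECT @LastLoad = WatermarkValue
FROM etl.Watermark
WHERE TableName = 'Sales';

-- Move only new or modified rows from landing into staging.
INSERT INTO staging.Sales (SalesID, Amount, ModifiedDate)
SELECT SalesID, Amount, ModifiedDate
FROM landing.Sales
WHERE ModifiedDate > @LastLoad;

-- Advance the watermark so the next run starts from here.
UPDATE etl.Watermark
SET WatermarkValue = (SELECT MAX(ModifiedDate) FROM landing.Sales)
WHERE TableName = 'Sales';
```

A full load, by contrast, would truncate staging.Sales and reload everything, which is simpler but heavier on large tables.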
Okay. First I'll check the volume of the data, what the data is related to, and how quickly we can connect to it from Power BI. We'll also check what type of data it is: streaming data, or data that is updated day to day. Then we decide. The main thing is that really only the data model will affect performance; other than that, I think we can easily copy the data to the database and create the report.
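As a quick sketch of the volume check mentioned above (my illustration, with dbo.FactSales as a hypothetical table name), one might look at storage size and row count before choosing a connection mode:

```sql
-- Inspect the storage footprint and row count of a candidate source
-- table (dbo.FactSales is illustrative) before picking import vs. DirectQuery.
EXEC sp_spaceused 'dbo.FactSales';

SELECT COUNT_BIG(*) AS TotalRows
FROM dbo.FactSales;
```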
What techniques could you use to troubleshoot a report that won't load in the intended application? For the intended application, meaning the external application, we will check the XML endpoint and see whether the endpoint is correct. I think the XML endpoint is the main thing to check when the data is not loading in the external application.
Okay, so we take the dashboards into other applications using embedded analytics. We use the XML endpoint: we use that endpoint in the external application so that the users can see the Power BI reports in the Power BI Embedded space.
When would you choose to build a new table or a view in the database for a report? Okay, so if we are pulling the data from different tables, we can write a query that joins those tables and create it as a view, which is virtual. That way the processing is done in the data source itself, not in the Power BI engine; those are the times we use views. Only if data entries are happening in a particular table, that is, particular users are inserting rows and it needs physical storage, do we create it as a table. Otherwise we create it as a view.
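A minimal sketch of the view approach described above, assuming hypothetical dbo.Orders and dbo.Customers tables joined on CustomerID (names are illustrative, not from the original):

```sql
-- A virtual (non-materialized) view joining two source tables,
-- so the join is evaluated in the database, not in Power BI.
CREATE VIEW dbo.vOrdersWithCustomer
AS
SELECT o.OrderID,
       o.OrderDate,
       o.Amount,
       c.CustomerName,
       c.Country
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;
```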
For version control, yes. For Power BI, we can integrate with Git through Azure DevOps: create an organization, create a project, and link it to Power BI. You can set it up as three or four stages, like dev, test, UAT, and prod, or just dev, test, and prod, depending on the company's hierarchy and requirements. You create a deployment pipeline there, create three workspaces (dev, test, and prod), and assign those three workspaces to the three stages of the deployment pipeline. Then you integrate it with Git through the project. After that, every time you upload to the Power BI service, if you have mapped the workspace to a branch, your changes are committed against that branch, versions are managed, and you can see the history of all your previous changes.

That is one approach. The other is to integrate with Azure DevOps directly: you clone the Azure repos to your local system and create a service principal for Power BI. Whenever you make changes locally in your branch, then commit and push them to the main branch, you can see the changes reflected in Power BI as well. Once everything has moved through the development and test stages and the user approves the changes, you merge them into the master branch. So you can use Azure DevOps, or you can use the Git integration inside the Power BI service; there are two options to do it. Thank you.
Here we have a measure with a FILTER on the Sales table for the year 2022. In this formula, the FILTER function applies only to that particular table: you have selected ALL over Sales and are then filtering on sales year 2022. We don't need to use the FILTER function here. We can pass ALL(Sales) and the sales-year-2022 condition directly as filter arguments, because FILTER only concentrates on that one particular table, not on all the tables behind that visual. So you can use the SUM of the sales amount with ALL(Sales) and the year-2022 filter applied directly.
So we start from Orders, join Customer, and filter on Customer.Country equal to 'Germany'. Okay, the issue here is the star: instead of selecting *, we can select only the required columns inside the query. Then the performance will improve, because SELECT * fetches all the columns from Orders and all the columns from Customer. So instead we take only the required columns in the SELECT statement. Thank you.
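A sketch of the column-pruning fix described above, with assumed column names (OrderID, OrderDate, Amount, CustomerName) and a CustomerID join key, since the original query isn't shown in full:

```sql
-- Before: SELECT * pulls every column from both tables.
-- After: project only the columns the report actually needs.
SELECT o.OrderID,
       o.OrderDate,
       o.Amount,
       c.CustomerName
FROM Orders AS o
INNER JOIN Customer AS c
    ON c.CustomerID = o.CustomerID
WHERE c.Country = 'Germany';
```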
Design a performant reporting solution that can handle complex business logic. Okay, so for complex business logic in DAX, especially, we can use variables. Variables play an important role in DAX: using them in a formula improves performance, since an expression is computed once and reused. You can also use Tabular Editor, which is a very good option for checking query performance and improving it through trial and error across dev and test. The data model matters most of all: the relationships between the tables should be properly designed, with referential integrity, so that they don't hurt report performance. And at the visual level, you can use Performance Analyzer in Power BI Desktop to see how much time each visual takes to refresh, and improve the DAX formula based on what you see there. So one lever is the data model stage, one is using variables in the DAX queries, and one is visual performance via Performance Analyzer. You can also use DAX Studio to check the schema, how it looks and which columns are actually required, and improve Power BI performance based on that as well.
What strategies would you apply to ensure Power BI reports remain accessible during database maintenance activities? First, we should not connect Power BI directly to the operational database; we should have a staging space and curated data, and connect Power BI from there. And of course, if maintenance is planned, import mode is the best option while database maintenance activities are running, though it only works for smaller data volumes, not for big data. Second, if you have a data warehouse, you can create replicated data and point the connection at that, or use a separate environment: with dev, test, and production environments, you can keep the data in one environment while you run the maintenance activities in another. We use a separate database for reporting, so maintenance issues in the operational data won't affect the analytical data. If you are doing maintenance on the analytical data itself, then import mode is one option to keep the reports running continuously, because the data is cached inside Power BI itself; the other is to stand up a different environment with the same name and connect to that data source until the maintenance activities are finished.
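One possible way to implement the "different environment with the same name" idea (my illustration, not the original answer) is a synonym repoint, assuming hypothetical ReportingDB and ReportingReplica databases and a dbo.FactSales table; reports keep querying the same object name while the synonym is switched to the replica for the duration of maintenance:

```sql
-- Reports query dbo.FactSales_Current. Before maintenance, repoint the
-- synonym at the replica; afterwards, point it back. Names are illustrative.
DROP SYNONYM IF EXISTS dbo.FactSales_Current;
CREATE SYNONYM dbo.FactSales_Current
    FOR ReportingReplica.dbo.FactSales;

-- ...run maintenance on the primary reporting database...

DROP SYNONYM IF EXISTS dbo.FactSales_Current;
CREATE SYNONYM dbo.FactSales_Current
    FOR ReportingDB.dbo.FactSales;
```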
How do you incentivize your team to maintain good quality in the development of DAX expressions and SQL queries? Yes, so SQL queries and DAX expressions are different ways of producing the same report using different syntax. We use Tabular Editor: just as SSMS is the tool for SQL Server, Tabular Editor is the tool for DAX. Of course, in newer versions of Power BI you can do this inside Power BI itself, but maintaining and managing DAX expressions in Tabular Editor is much better than in Power BI. We check what formula is actually used in the SQL queries on the back end and how it should be carried over to DAX, because in DAX the functions and the syntax are completely different, but the underlying logic is the same: first we filter the particular data, and then we apply the aggregate functions to that data for the reporting. The same scenario applies in DAX: we check the particular requirement first, capture the filter in a variable, and then use that variable in the calculation function. That is how we do things.
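A minimal sketch of the "filter first, then aggregate" back-end pattern described above, with assumed table and column names; on the DAX side, the same WHERE condition would be captured in a variable and passed as a filter argument to the calculation:

```sql
-- Back-end pattern: restrict the rows first, then aggregate.
-- The DAX equivalent would capture the filter in a VAR and use it
-- inside CALCULATE.
SELECT c.Country,
       SUM(o.Amount) AS TotalSales
FROM dbo.Orders AS o
INNER JOIN dbo.Customer AS c
    ON c.CustomerID = o.CustomerID
WHERE c.Country = 'Germany'
GROUP BY c.Country;
```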