Manmohan Tyagi

Vetted Talent

Manmohan Tyagi is a seasoned Cloud Architect and Big Data Engineer with 15 years of hands-on experience. He is a certified expert in AWS, GCP, and Azure cloud technologies, holding AWS Developer, AWS Solutions Architect, and GCP Cloud Architect/DevOps certifications. His expertise spans cloud migration, data engineering, and big data technologies such as Hadoop, Spark, Kafka, and Flink. He has extensive experience designing and implementing data ingestion pipelines, data lakes, and data warehousing solutions on cloud platforms. His strong background in NoSQL databases and data modeling for data warehousing, coupled with proficiency in DevOps practices, CI/CD pipelines, and infrastructure as code, makes him a well-rounded technical leader. Manmohan is also adept in programming languages such as Java, Scala, and Python, and has developed expertise in microservices and serverless architectures. Additionally, he has demonstrated excellence in cloud security, compliance, and governance, as well as in leading and mentoring teams on complex enterprise-level projects.

  • Role

    Architect, Big Data and Cloud AWS/GCP

  • Years of Experience

    15 years

Skillsets

  • Architectural Design
  • Containerization
  • Cross-functional collaboration
  • ETL processes
  • Leadership & collaboration
  • Performance Optimization
  • Project Management
  • Real-time data streaming
  • Security & compliance
  • Team management
  • Managed Services

Vetted For

11 Skills

  • Staff Engineer, Engineering - Payments Economics (AI Screening)
  • Result: 68%
  • Skills assessed: Communication, Fintech, Backend Development, Cloud Computing, Design Pattern, Microservices Architecture, Payment System, Programming, Software Architecture, Database, Problem Solving Attitude
  • Score: 61/90

Professional Summary

15 Years
  • Aug 2023 - Present (2 yr 1 month)

    Architect, Big Data and Cloud AWS/GCP

    LTIMINDTREE
  • May 2021 - Jun 2023 (2 yr 1 month)

    Architect, Cloud Security

    CAPGEMINI
  • Dec 2018 - May 2021 (2 yr 5 months)

    Architect, Big Data and Cloud AWS/GCP

    EPAM SYSTEM
  • Sep 2015 - Nov 2018 (3 yr 2 months)

    Lead Consultant, Big Data and Cloud AWS

    SAAMA TECHNOLOGIES
  • Sep 2011 - Sep 2015 (3 yr 11 months)

    Technical Lead: Java, Spring Boot, Hibernate, Big Data/Hadoop

    SYMPHONY TELECA (HARMAN)
  • Feb 2011 - Aug 2011 (5 months)

    Technical Lead: Java, Spring (MVC, IoC, AOP), Hibernate, RESTful services, Android

    INFOGAIN PVT. LTD
  • Jul 2010 - Feb 2011 (6 months)

    Software Developer: Java, Spring (MVC, IoC, AOP), Hibernate, RESTful services, WebLogic Portal

    IBM PVT. LTD
  • Dec 2009 - Apr 2010 (4 months)

    Software Developer: Java, Spring (MVC, IoC, AOP), Hibernate, RESTful services

    INFOPRO PVT. LTD
  • Apr 2008 - Nov 2008 (6 months)

    Software Developer: Java, WebLogic Portal

    TELESOFT PVT. LTD

Applications & Tools Known

  • Hadoop
  • Spark
  • Kafka
  • Hive
  • Flink
  • NoSQL
  • Apache Beam
  • Impala
  • Presto
  • Matillion
  • Airbyte
  • Talend
  • Pub/Sub
  • Dataflow
  • Terraform
  • Terragrunt
  • Airflow
  • Cloud Composer
  • Docker
  • Kubernetes
  • SQL
  • Python
  • Scala
  • Java
  • Spring
  • Hibernate
  • ORM
  • Redshift
  • BigQuery
  • Snowflake
  • AWS
  • GCP
  • Microservices
  • Lambda
  • AWS CloudFormation
  • AWS Glue
  • Iceberg

Work History

15 Years

Architect, Big Data and Cloud AWS/GCP

LTIMINDTREE
Aug 2023 - Present (2 yr 1 month)
    Actively preparing for upcoming projects and enhancing skills in CRM & HCM migration on cloud, while engaging in continuous learning and training programs.

Architect, Cloud Security

CAPGEMINI
May 2021 - Jun 2023 (2 yr 1 month)
    Worked as Cloud Security Architect for the VMware client on the VSS and CloudHealth products. Designed and developed cloud service configurations and authored security rules.

Architect, Big Data and Cloud AWS/GCP

EPAM SYSTEM
Dec 2018 - May 2021 (2 yr 5 months)
    Worked with multiple EPAM clients, including UBS Bank, DBS Bank, and Novartis. Provided technical and process leadership for projects.

Lead Consultant, Big Data and Cloud AWS

SAAMA TECHNOLOGIES
Sep 2015 - Nov 2018 (3 yr 2 months)
    Provided technical and process leadership for projects, defining and documenting information integrations between systems.

Technical Lead: Java, Spring Boot, Hibernate, Big Data/Hadoop

SYMPHONY TELECA (HARMAN)
Sep 2011 - Sep 2015 (3 yr 11 months)
    Developed server-side business logic in Java and Spring, along with UI code and the persistence layer's ORM interactions.

Technical Lead: Java, Spring (MVC, IoC, AOP), Hibernate, RESTful services, Android

INFOGAIN PVT. LTD
Feb 2011 - Aug 2011 (5 months)
    Worked as a Java developer, coding the project in Java with the Google Maps API. Implemented unit and integration testing.

Software Developer: Java, Spring (MVC, IoC, AOP), Hibernate, RESTful services, WebLogic Portal

IBM PVT. LTD
Jul 2010 - Feb 2011 (6 months)
    Worked with IBM's client Airtel on the RSA Portal. Implemented the application's business logic and flows.

Software Developer: Java, Spring (MVC, IoC, AOP), Hibernate, RESTful services

INFOPRO PVT. LTD
Dec 2009 - Apr 2010 (4 months)
    Worked as a Java/J2EE developer, implementing server-side code, JUnit testing, RDBMS queries, and business workflows.

Software Developer: Java, WebLogic Portal

TELESOFT PVT. LTD
Apr 2008 - Nov 2008 (6 months)
    Worked with the Vodafone client on the CMS portal project. Implemented server-side code and JUnit testing.

Major Projects

8 Projects

AGGR

UBS Bank

DATA PLATFORM

DBS Bank

NBS

Novartis

M2M STUS

Verizon

eDiscovery

EMC2 Data Storage Systems

CMS

Vodafone

Education

  • Master of Computer Science

    H.I.M.T Greater Noida (2006)

Certifications

  • AWS Certified Developer

  • AWS Certified Solutions Architect (Associate/Professional)

  • GCP Professional Cloud Architect

  • Certified Cloud Security Professional (CCSP), A Cloud Guru

AI Interview Questions & Answers

Hi, this is Manmohan. I have 15 years of overall experience in IT, and I have had the opportunity to work across different domains: healthcare, banking, retail, and payment transactions. I have expertise in cloud technology, big data technology, microservices, payment transactions, and the storage domain. Language-wise, I have experience with Python, Java, Golang, and Scala. Right now I am working in solution architect and cloud architect roles, so my daily activities include designing and reviewing architectures, enhancing the security posture of cloud and on-premise deployments, talking to stakeholders, optimizing and monitoring products, deploying new features to production servers, maintaining infrastructure security, and team management. On the technical front, that also includes big data technologies, artificial intelligence, and generative AI built on cloud services. As I mentioned, I have about 10 years of experience on AWS, 5 to 6 years on GCP, and 2 years on Azure. Most recently we were developing a security product with VMware called VSS (VMware Secure State). It runs as a security posture service: for example, once you have deployed workloads on the cloud, it tells you the security posture of those workloads and applications. So that was a high-level introduction about me.

There is no single rule for how you implement PCI DSS controls in a project, but let me try, because I have worked on many projects that required PCI DSS compliance throughout. In my last project with EPAM Systems we implemented a payment gateway. At a very high level, there are 12 key requirements for PCI DSS compliance in any project, whether it goes to the cloud or stays on-premises; an on-premises system gives you some advantage because you are in your own private network. The PCI DSS security controls document has around 300 sub-controls, so it is not possible to cover all of them completely, but these are the most important ones at the first layer. First, network security: firewalls come into the picture, so there should be very careful firewall installation and configuration in our environment. On-premises you have the leverage of a private network and can do more fine-tuning; on the cloud you have to choose and configure your firewall properly. Second, vendor defaults: if a vendor provides default passwords or keys, changing and managing these is completely your responsibility. Third, protect your data: wherever your data lives, physically or virtually, there should be security measures for how you protect it. Fourth, encryption: encrypt your data at rest where it is stored, and in transit when you transmit it from one point to another (HTTPS, SSL/TLS). Next, antivirus and updates: our systems no longer work as individual monoliths but as microservices, with many components and modules working collectively, so there should be update policies governing how quickly updates reach the entire system. Next comes securing your systems and applications, which is also very critical. Then access: there are three kinds of access to control. Access to cardholder data must be protected; access to system components needs fine-grained access control policies defining who is responsible and authorized to access a particular system, application, or cardholder data; and physical access must be restricted, for example who is allowed to reach on-premises servers.
A couple of things to add: monitor and log all access to network resources and cardholder data for auditing purposes, and clearly define your security policies, covering, according to the organization's policies and the different people working there, what least privilege each gets and how encryption is handled. These are the key things. And of course, in any project there are a lot of challenges; it is not just a matter of picking up a checklist and implementing it. We also faced many challenges in this area: how to protect the different points, how to protect access, and so on.

What steps would you apply to ensure data security and PCI compliance? Data security and PCI compliance are, as I mentioned, critical for protecting sensitive information in financial transactions. First of all, when you are cloud-based, keep in mind that whichever cloud you choose (AWS, Azure, GCP), the services you use should themselves be PCI compliant. Then network segmentation: it is always better to keep the payment system segregated from the non-payment systems; complete segregation improves both your security and your investigations. Then implement encryption at rest and in transit, use tokenization to hide sensitive cardholder information, and apply the least-privilege principle with role-based access control. Multi-factor authentication (MFA) is also very important to implement in your system. Regular security auditing and monitoring is a critical part, and logging and monitoring matter here as well. One more important point: when working on PCI compliance and data security, PCI compliance assessments should be conducted on the system regularly. These are the things we should keep in mind when working on this kind of project.
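To make the encryption-at-rest point concrete, here is a minimal sketch using the JDK's standard javax.crypto API with AES-GCM. The class name and sample card number are illustrative assumptions, and in production the key would come from a KMS/HSM rather than being generated locally.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;

    public class CardDataEncryptor {
        private static final int GCM_TAG_BITS = 128;
        private static final int IV_BYTES = 12;
        private final SecretKey key;
        private final SecureRandom random = new SecureRandom();

        public CardDataEncryptor(SecretKey key) { this.key = key; }

        // Encrypts a sensitive field (e.g., a PAN) with a fresh random IV prepended.
        public String encrypt(String plaintext) throws Exception {
            byte[] iv = new byte[IV_BYTES];
            random.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
            byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
            return Base64.getEncoder().encodeToString(out);
        }

        public static void main(String[] args) throws Exception {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256); // assumption: locally generated key for demo; use a KMS/HSM in production
            CardDataEncryptor enc = new CardDataEncryptor(kg.generateKey());
            System.out.println(enc.encrypt("4111111111111111"));
        }
    }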

How would you design a rate-limiting system for a high-demand payment API to prevent abuse? Before explaining the design, consider why rate limiting exists: when we have a public-facing payment API or payment gateway, it is quite normal that some kind of cyber attack or misuse will happen. For example, someone may hit the payment API again and again, intentionally or unintentionally, to overload the system. Rate limiting is the fundamental control here. If I am accessing a particular payment API from a particular IP address or geography, the rate limiter can cap me at, say, four requests per minute. That way the system keeps functioning properly; it does not get overloaded and does not scale out unnecessarily under abusive load. Rate limiting lets us keep scalability proportional to real demand rather than burdening the system in such scenarios. There is also the case of bot-style or DDoS attacks: of course the system sits behind a load balancer with auto-scaling features to meet demand, but to prevent these unexpected side effects we again apply rate limiting at that layer to stop such traffic, or admit only a very limited amount of it, into the system.
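A minimal sketch of the per-client limit described above, using an in-memory token bucket. The class name, the 4-requests-per-minute limit, and the sample IP are illustrative assumptions; a real deployment would keep this state in a shared store such as Redis so all gateway nodes agree.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Token-bucket rate limiter keyed by client (e.g., IP address or API key).
    public class TokenBucketRateLimiter {
        private final int capacity;           // max burst size
        private final double refillPerMillis; // tokens added per millisecond

        private static final class Bucket {
            double tokens;
            long lastRefill;
            Bucket(int capacity) { this.tokens = capacity; this.lastRefill = System.currentTimeMillis(); }
        }

        private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

        public TokenBucketRateLimiter(int requestsPerMinute) {
            this.capacity = requestsPerMinute;
            this.refillPerMillis = requestsPerMinute / 60_000.0;
        }

        // Returns true if the request is allowed, false if the client should get HTTP 429.
        public boolean allow(String clientKey) {
            Bucket b = buckets.computeIfAbsent(clientKey, k -> new Bucket(capacity));
            synchronized (b) {
                long now = System.currentTimeMillis();
                b.tokens = Math.min(capacity, b.tokens + (now - b.lastRefill) * refillPerMillis);
                b.lastRefill = now;
                if (b.tokens >= 1.0) { b.tokens -= 1.0; return true; }
                return false;
            }
        }

        public static void main(String[] args) {
            TokenBucketRateLimiter limiter = new TokenBucketRateLimiter(4); // 4 requests/minute, as in the answer
            for (int i = 0; i < 6; i++) {
                System.out.println("request " + i + " allowed=" + limiter.allow("203.0.113.7"));
            }
        }
    }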

What process would you implement to validate and deploy massive changes to the payment system without affecting users? This is very critical, and two or three things come into the picture. The first is user experience; the second is that the system should remain functional through any deployment, massive or otherwise. Here are some techniques we can use. First, intermediate environments: if we have a production system and a massive deployment coming, the better strategy is not to deploy directly to production but first to a staging, development, or testing environment, where we deploy and monitor the system's behavior, watching whether we detect errors or functional failures. Second, canary deployment: when we want to test a massive deployment in a real environment, we redirect a small percentage of traffic to the newly added feature or deployment; that way, if something fails, the entire system does not go down and the user experience is not completely negative. Third, once we deploy, monitoring and logging become even more important, and security as well. Another major thing is having a rollback policy in place: if anything fails, we need to know how to roll the entire system back to its previous state, so it keeps functioning, the business is not hurt, and the user experience is not impacted. We can also use feature flags (toggles): when a massive deployment ships several features, we can switch individual features on or off, so if one feature breaks a particular part of the system we turn just that feature off and the rest keeps working fine. Finally, for whatever goes through deployment, impact and risk analysis should be done before production and properly documented; teams should be aligned, and stakeholders should be informed about what changes are being deployed, what the known or hidden issues might be, and what rollback policies will return the system to its previous state. That is the summary.
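A minimal sketch of the feature-flag idea mentioned above. The registry, flag name, and call sites are illustrative assumptions; a real system would back the flags with a config service so they can be flipped without redeploying.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal in-memory feature-flag registry.
    public class FeatureFlags {
        private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

        public void set(String name, boolean enabled) { flags.put(name, enabled); }

        public boolean isEnabled(String name) { return flags.getOrDefault(name, false); }

        public static void main(String[] args) {
            FeatureFlags flags = new FeatureFlags();
            flags.set("new-settlement-engine", true); // hypothetical flag name

            // The risky new code path is guarded by the flag, so it can be
            // switched off instantly if the canary shows errors.
            if (flags.isEnabled("new-settlement-engine")) {
                System.out.println("routing payment through the new settlement engine");
            } else {
                System.out.println("routing payment through the stable legacy path");
            }
        }
    }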

How would you optimize database transactions in high-throughput payment processing to avoid contention? There are several techniques. In this scenario we already have a large user base across different continents and geographies, so of course there is local demand. First, we can use database replicas: a load balancer distributes the load across multiple replicas, and read replicas in particular absorb the read-intensive commands. We can also run a master/standby architecture, so if the master fails the standby takes over. Read replicas are a way of scaling the system so that read commands hit the replicas rather than the actual primary. Second, distributed caching: a cache reduces the load on our system because many answers are already there. Then there are proxies and connection pools. After all of this, what we are really doing is reducing the load the actual database server has to handle: some load is handled by the distributed cache, some by the read replicas, the proxy does its part, and we have the standby. Only the necessary requests, the reads, updates, or deletes that truly need the database, reach it, so the system keeps a buffer of resource capacity available. That is the way to optimize. Additionally, for fault tolerance and high availability, we can deploy the system in parallel across multiple regions or zones, with disaster recovery and complete data backup policies, so the system stays functional through any kind of failure.
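A minimal sketch of the read-replica routing idea above; the endpoint names and the round-robin policy are illustrative assumptions.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Routes writes to the primary and spreads reads round-robin across replicas.
    public class ReadWriteRouter {
        private final String primary;
        private final List<String> readReplicas;
        private final AtomicInteger next = new AtomicInteger();

        public ReadWriteRouter(String primary, List<String> readReplicas) {
            this.primary = primary;
            this.readReplicas = readReplicas;
        }

        public String endpointFor(boolean isWrite) {
            if (isWrite || readReplicas.isEmpty()) return primary; // writes always hit the primary
            int i = Math.floorMod(next.getAndIncrement(), readReplicas.size());
            return readReplicas.get(i);
        }

        public static void main(String[] args) {
            ReadWriteRouter router = new ReadWriteRouter(
                    "payments-primary.db.internal",   // hypothetical endpoints
                    List.of("payments-replica-1.db.internal", "payments-replica-2.db.internal"));
            System.out.println("balance lookup  -> " + router.endpointFor(false));
            System.out.println("capture payment -> " + router.endpointFor(true));
            System.out.println("balance lookup  -> " + router.endpointFor(false));
        }
    }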

Given this Java code snippet, explain why the class might not be following the Single Responsibility Principle, and how you would refactor it to adhere to better design. Here we have a PaymentProcessor with a process-payment method, a log-transaction method, and a send-receipt method operating on the payment data. According to the SOLID principles, single responsibility says each responsibility should be kept separate, in its own interface or abstract class, which concrete classes then implement. If the system is extended later, a class should not be forced to implement three or four unrelated methods, because that is what unnecessarily breaks the system. Single responsibility means exactly one responsibility, and here we have three methods: process payment, log transaction, send receipt. So we separate them: payment processing goes into one interface, receipt sending into another, and transaction logging into a third, so each implementing class does exactly one job. One class is responsible for processing payments, another handles receipt sending, and a third manages transaction logging. Then, if tomorrow we extend the system and only need receipt sending, we are not forced to implement payment processing and transaction logging as well, and nothing else is affected. Keeping each responsibility in its own interface satisfies the single-responsibility principle of SOLID design.
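Since the original snippet is not reproduced in the transcript, here is a hedged sketch of the refactoring described, with illustrative interface and class names.

    // Each responsibility from the original PaymentProcessor gets its own interface.
    interface PaymentProcessing {
        void processPayment(String paymentId, long amountCents);
    }

    interface TransactionLogging {
        void logTransaction(String paymentId, String status);
    }

    interface ReceiptSending {
        void sendReceipt(String paymentId, String email);
    }

    // Concrete classes each implement exactly one concern.
    class CardPaymentProcessor implements PaymentProcessing {
        public void processPayment(String paymentId, long amountCents) {
            System.out.println("processing " + paymentId + " for " + amountCents + " cents");
        }
    }

    class ConsoleTransactionLogger implements TransactionLogging {
        public void logTransaction(String paymentId, String status) {
            System.out.println("txn " + paymentId + " -> " + status);
        }
    }

    class EmailReceiptSender implements ReceiptSending {
        public void sendReceipt(String paymentId, String email) {
            System.out.println("receipt for " + paymentId + " sent to " + email);
        }
    }

    // A thin coordinator composes the three collaborators instead of doing everything itself.
    public class CheckoutService {
        public static void main(String[] args) {
            PaymentProcessing processor = new CardPaymentProcessor();
            TransactionLogging logger = new ConsoleTransactionLogger();
            ReceiptSending receipts = new EmailReceiptSender();

            processor.processPayment("pay-42", 1999);
            logger.logTransaction("pay-42", "CAPTURED");
            receipts.sendReceipt("pay-42", "user@example.com");
        }
    }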

Given this code, it is a strategy pattern, and the question is whether it satisfies the open/closed principle. We have a PaymentContext class and an IPaymentStrategy interface with a pay method. In PaymentContext there is a private IPaymentStrategy variable and two methods: processPayment, which performs the payment, and setPaymentStrategy, which lets us change the strategy dynamically at runtime, for example credit card today, PayPal or Bitcoin tomorrow; there can be different kinds of implementations. The interface is a single clean one, which is correct, and CreditCardPayment implements it as its own separate class, which is also fine. More payment strategies, such as Bitcoin or PayPal, can be added the same way: the interface is clear, and the implementation for each strategy is clear. Coming to the PaymentContext class, it has the variable that gets set dynamically at runtime, and processPayment calls pay, which is right. But I think one thing is missing: there should be a default strategy in PaymentContext, and we can achieve that with a constructor. We should implement a constructor in PaymentContext that sets the paymentStrategy variable, so that when a client creates the context it passes an instance, such as CreditCardPayment or some other strategy, to make the payment with. My understanding is that the constructor that sets the payment strategy is what is missing here, and adding it would fix the design.
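A hedged sketch of the pattern as the transcript describes it, including the constructor-injected default strategy the answer says is missing; implementation details beyond the names mentioned are assumptions.

    // Strategy interface, as described in the transcript.
    interface IPaymentStrategy {
        void pay(long amountCents);
    }

    class CreditCardPayment implements IPaymentStrategy {
        public void pay(long amountCents) { System.out.println("credit card: " + amountCents); }
    }

    class PayPalPayment implements IPaymentStrategy {
        public void pay(long amountCents) { System.out.println("paypal: " + amountCents); }
    }

    // Context is closed for modification but open for extension: new strategies
    // are added as new classes, never by editing this one.
    public class PaymentContext {
        private IPaymentStrategy paymentStrategy;

        // The constructor the answer says is missing: it guarantees a default strategy.
        public PaymentContext(IPaymentStrategy defaultStrategy) {
            this.paymentStrategy = defaultStrategy;
        }

        public void setPaymentStrategy(IPaymentStrategy strategy) { this.paymentStrategy = strategy; }

        public void processPayment(long amountCents) { paymentStrategy.pay(amountCents); }

        public static void main(String[] args) {
            PaymentContext context = new PaymentContext(new CreditCardPayment());
            context.processPayment(2500);                  // uses the constructor-set default
            context.setPaymentStrategy(new PayPalPayment());
            context.processPayment(2500);                  // strategy swapped at runtime
        }
    }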

What measures would you take to prevent transactional data loss during a system failure? There are many ways we can protect transaction data. First, we have to understand what kind of payment system we have implemented: is it a distributed transactional system, does it use a distributed transaction coordinator, and are compensating transactions available so that in any situation we can roll the system back? Next, is the system idempotent? What kind of communication do we have, asynchronous or synchronous? What monitoring and logging strategy is in place? There should also be retries and timeouts. For a system failure specifically, the major thing for preventing data loss is having a proper backup-and-restore strategy in place, so that even if the system partially fails our data is not lost. There are two types of data: the transaction log data and the actual transaction data. Whatever storage we use, a relational database or any other store, critical data should be replicated across multiple storage systems. And the storage system itself, the database as I mentioned, should be duplicated across different regions or zones so that both systems update state synchronously. Then in any failure scenario, whether a zonal failure or a regional failure, our system is in a condition to remain functional and recoverable. So: backup policies, database replication, multi-regional deployment, and storage replication, plus load balancing and DNS routing. These are the things that prevent data loss during a partial or complete system failure, because with database backup and restore we can quickly bring the system back into a functional state; in other words, we can recover the system.
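One concrete illustration of the idempotency point above, as a hedged sketch; a real handler would persist the key-to-result mapping in a durable database in the same transaction as the charge, not in an in-memory map.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Idempotent payment handler: retries of the same request (same idempotency key)
    // never double-charge, which makes retry-after-failure safe.
    public class IdempotentPaymentHandler {
        private final Map<String, String> processed = new ConcurrentHashMap<>();

        public String charge(String idempotencyKey, long amountCents) {
            return processed.computeIfAbsent(idempotencyKey, key -> {
                // In a real system this block would write the charge and the key
                // together durably before acknowledging the client.
                System.out.println("charging " + amountCents + " cents for key " + key);
                return "txn-for-" + key;
            });
        }

        public static void main(String[] args) {
            IdempotentPaymentHandler handler = new IdempotentPaymentHandler();
            System.out.println(handler.charge("order-1001", 4999)); // performs the charge
            System.out.println(handler.charge("order-1001", 4999)); // retry: same txn, no second charge
        }
    }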

How would you use cloud services to manage and analyze payment transaction logs for performance tuning? Many services are offered by both public and private cloud providers; here I will take the example of AWS. First, data ingestion: stream all of your transaction logs to Amazon S3, in real time or in batches, whatever you have, and keep in mind that the source systems should support an API or SDK to ingest the data into a single centralized place like S3. Second comes cleansing and transforming the data: AWS provides the AWS Glue service, which includes a data catalog as well, so we can use Glue for ETL cleansing and for the data catalog. Third is the analysis part, with Athena: you can run interactive queries to find insights in your logs, perform whatever queries you have, and understand your transaction log system. If you want visualization dashboards, there is QuickSight, again an AWS service, which helps with dashboards and visualization. If you want advanced analytics, there is SageMaker, for machine learning models such as predictive analysis, or anomaly detection if we have security logs. Then there are the monitoring and logging services like CloudWatch: for optimization, you can create performance metrics on CloudWatch based on parameters such as utilization and response time, whatever metrics you want. And there is CloudTrail for auditing what kinds of requests are hitting the system. These are the services; I just took the example from the AWS cloud itself.
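A hedged sketch of the Athena step using the AWS SDK for Java v2; the Glue database name, table name, partition column, and results bucket are illustrative assumptions.

    import software.amazon.awssdk.services.athena.AthenaClient;
    import software.amazon.awssdk.services.athena.model.QueryExecutionContext;
    import software.amazon.awssdk.services.athena.model.ResultConfiguration;
    import software.amazon.awssdk.services.athena.model.StartQueryExecutionRequest;
    import software.amazon.awssdk.services.athena.model.StartQueryExecutionResponse;

    // Kicks off an Athena query over Glue-cataloged transaction logs in S3.
    public class TransactionLogQuery {
        public static void main(String[] args) {
            try (AthenaClient athena = AthenaClient.create()) {
                String sql = "SELECT status, COUNT(*) AS cnt, AVG(latency_ms) AS avg_latency "
                           + "FROM payment_transactions "            // assumed table name
                           + "WHERE dt = '2024-01-15' GROUP BY status";

                StartQueryExecutionRequest request = StartQueryExecutionRequest.builder()
                        .queryString(sql)
                        .queryExecutionContext(QueryExecutionContext.builder()
                                .database("payments_logs")           // assumed Glue database name
                                .build())
                        .resultConfiguration(ResultConfiguration.builder()
                                .outputLocation("s3://my-athena-results/") // assumed results bucket
                                .build())
                        .build();

                StartQueryExecutionResponse response = athena.startQueryExecution(request);
                System.out.println("Athena query started: " + response.queryExecutionId());
            }
        }
    }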

Evaluate methods to incorporate automated security scans within the payment processing application's deployment life cycle. In the application development life cycle there are many phases where security applies: the authentication part, the authorization part, least privilege, what database privileges a particular module needs when the application accesses the database, and, when you store your data somewhere, how you make it compliant, for example PCI DSS compliant, how it is secured, and which controls you implement. This kind of automation can be implemented; in my previous project we implemented it using the Robot Framework with Python libraries. Whether you deploy on-premises or on some cloud, there should be encryption, role-based access control, physical security and system component access controls, tokenization and masking, and TLS/SSL. You can write test scenarios for all of these in Robot Framework so that they automatically scan your application when you deploy it to the dev, staging, or testing environment. That way we can evaluate the security posture and scan end to end whether our system is compliant or not, what security level we should have, and the existing guiding roles and responsibilities. The way to automate this is at build time, in the CI/CD pipeline: at that step we can make security scanning automatic throughout.