O post BMW Group and act digital inaugurate new IT hub to drive digital innovation in the Americas apareceu primeiro em act digital.
]]>
São Paulo, March 12, 2026 – Celebrating a significant milestone in its global innovation strategy, the BMW Group today officially inaugurated BMW Group TechWorks Brazil in São Paulo, a new IT hub fully operated by Act Digital. The new technology center represents a major step in the company’s digital transformation in the Americas region. The celebration brought together employees, executives, and officials for a comprehensive program that highlighted the vital role of local talent and technology for the future of premium mobility.
BMW Group TechWorks Brazil joins a global network of DevOps IT hubs, enhancing the BMW Group’s ability to deliver customized software solutions. The hub will focus primarily on developing customized software for the National Sales Companies (NSCs) in the Americas region and for BMW Group Financial Services, in addition to providing IT services to the region’s production plants.
This expansion represents a projected investment of over R$ 600 million (more than US$ 100 million) over five years. The project is launching with approximately 200 Act Digital employees, and the plan is to more than double its workforce to meet the growing demand for IT services in the Americas.
To highlight the initiative’s importance in the region, Maru Escobedo, President and CEO of BMW Group Latin America, states: “The BMW Group has been present in Brazil for 30 years, leading the premium segment for the past seven years. Today, we operate two production plants, a sales office, and an engineering office. Now, with the official opening of BMW Group TechWorks Brazil, we are ushering in a new era of digital innovation, further strengthening our presence in the Americas region."
“BMW Group TechWorks Brazil will be essential in developing customized software solutions that meet the unique needs of our National Sales Companies (NSCs), Financial Services, and our plants in the Americas, enhancing the customer experience and driving our strategic goals in the region,” says Carsten Sapia, Vice President of IT for the BMW Group in the Americas, highlighting the hub’s operational significance.
The partnership with Act Digital reinforces the BMW Group’s confidence in Brazil’s robust technology ecosystem and its highly qualified IT talent. “For Act Digital, the collaboration with the BMW Group reinforces our role as a global technology partner that delivers AI-based digital solutions at scale. It also highlights the strength of Brazilian IT talent in developing high-value solutions worldwide," emphasized Thibaut Charmeil, Founder and CEO of Act Digital.
Brazil plays a key role in the BMW Group’s global strategy, having been home to two production plants, an engineering office, and a successful National Sales Company for 30 years. In 2025, the BMW Group led the premium segment in Brazil for the seventh consecutive year. The launch of TechWorks Brazil further solidifies the Group’s long-term investment and growth plans in the country.
With the launch of BMW Group TechWorks Brazil powered by Act Digital, the BMW Group is not only expanding its technological capabilities but also reaffirming its commitment to fostering local talent and driving the future of premium individual mobility through digital solutions.
Source: BMW Group
With its four brands—BMW, MINI, Rolls-Royce, and BMW Motorrad—the BMW Group is the world’s leading premium manufacturer of automobiles and motorcycles and also provides premium financial services. The BMW Group’s production network comprises more than 30 production sites worldwide; the company has a global sales network in over 140 countries.
In 2025, the BMW Group sold 2.46 million passenger cars and more than 202,500 motorcycles worldwide. Profit before taxes in the 2025 fiscal year was €10.2 billion on revenue of €133.5 billion. As of December 31, 2025, the BMW Group had a workforce of 154,540 employees.
The BMW Group’s economic success has always been based on long-term thinking and responsible action. Sustainability is a key element of the BMW Group’s corporate strategy and encompasses all products, from the supply chain and production through to the usage phase.
JeffreyGroup – [email protected]
Fabio Perrotta Jr. - [email protected] - +55 (21) 9.9997-9113
Fernanda Alencar - [email protected] - +55 (11) 9.8723-9590
Paola Clemente - [email protected] - +55 (11) 9.8469-2396
act digital is an AI-first technology multinational operating in 12 countries, combining local agility with global expertise to serve as strategic partners to our clients in delivering customized and scalable solutions. With over 6,000 impactors worldwide, we transform complex challenges into opportunities to drive and generate business value.
O post BMW Group and act digital inaugurate new IT hub to drive digital innovation in the Americas apareceu primeiro em act digital.
]]>O post Modernization of Legacy systems: How to Turn Constraints into Competitive Advantage apareceu primeiro em act digital.
]]>Systems designed in another technological context carry limitations in integration, scalability, and security, holding back the adoption of cloud, microservices, real-time data, and AI, which have already become pillars of efficiency and digital innovation.
In addition to the brake on innovation, the cost of maintaining old technologies tends to grow due to a shortage of talent, low automation, and high sustainment effort. Studies show that maintaining legacies can become significantly more expensive than adopting modern architectures over time.
Classic triggers include: misalignment with business needs, rising maintenance cost, operational risk of downtime, poor user experience, and barriers to integrating new capabilities.
There is no single path. Design depends on obsolescence, business impact, risk, and budget. Among the most common approaches:
Embed security and compliance from the first commit: modern identity, secrets management, policy as code, quality gate conveyors, SAST/DAST, and audit trails. In addition to reducing exposure to incidents, this simplifies regulatory adherence and compliance with market audits.
Strategies fail not only because of technology, but because of resistance to change, skills gaps, and lack of communication. Treat modernization as a transformation program, with continuous enablement, change management, and clear governance between IT and the business.
The modernization of digital platforms is the way to reduce costs, accelerate deliveries, strengthen security, and unleash the innovation that the business needs, now. The data is unequivocal: most companies still rely on legacy and already recognize modernization as a priority, precisely because the benefits far outweigh the challenges when the movement is value-driven and executed incrementally.
At act digital, we connect strategy, architecture, and execution to modernize with minimal friction and maximum impact:
Talk to our experts and turn your legacy into a competitive advantage.
O post Modernization of Legacy systems: How to Turn Constraints into Competitive Advantage apareceu primeiro em act digital.
]]>O post LLM Penetration Testing from a Technical Perspective apareceu primeiro em act digital.
]]>The productive use of Large Language Models (LLMs) in companies is increasing significantly. The spectrum ranges from publicly accessible chatbots on company websites to internal assistance systems that access knowledge databases, document repositories or proprietary data sources. From a technical point of view, this results in hybrid systems that combine classic software architectures with probabilistic language models.
The security analysis of such systems is fundamentally different from traditional penetration testing. The main difference lies in the character of the model itself: an LLM is a black box. The path from input to output is not deterministically traceable. The model calculates token probabilities based on its training and current context. Which parts of a prompt are weighted more heavily, how internal security instructions are prioritized, or how competing instructions are resolved is not transparently visible - especially with proprietary cloud models.
This lack of transparency has direct security consequences. While classic software relies on clearly defined control flows, conditions and reproducible logic, an LLM reacts context-sensitively and statistically. Two nearly identical inputs can produce different outputs. Security mechanisms that are based purely on textual instructions in the prompt therefore do not constitute robust technical isolation.
From Prompt to Response: Technical Reality
In production environments, user input is rarely passed to the model in isolation. Typically, a composite prompt is created that consists of system instructions, developer logic, user requests, and, if necessary, externally retrieved contextual information. For the model, this is ultimately a contiguous block of text. There is no real, technically enforced separation between "security policy" and "user input".
The model processes everything sequentially as text. This is exactly where the core of many attacks lies.
Prompt injection as a structural problem
Prompt injection is not a classic injection attack in the sense of SQL or command injection, but a manipulation of the textual decision-making basis of the model. Since security rules are often also formulated as text ("Do not answer confidential questions", "Do not disclose internal information"), an attacker can try to relativize or overwrite these instructions with new instructions.
The model has no inherent ability to clearly distinguish between legitimate policy and malicious instruction. It evaluates token probabilities. If a manipulated user instruction is semantically strong or cleverly contextualized, it can overlay the originally set security instructions.
Since the internal decision-making process is not transparent, the security check here is inevitably adversarial: one systematically tests the effect of competing instructions and whether protection mechanisms are actually robust or only appear to exist.
Internal LLMs and the Risk of Document Analysis
These questions are particularly relevant for internal LLM systems that work with retrieval-augmented generation (RAG). In such architectures, corporate documents -- such as PDFs, Office files, wiki content or CRM data -- are automatically analyzed, converted into text, vectorized and indexed in a database.
If a user request is made later, the system searches for semantically matching documents and adds their contents to the prompt as context. The model generates its response based on this.
Technically, this means that the content of PDF files is completely extracted and made available to the model as text. In the process, visual separations, formatting or semantic structure are often lost. Hidden text areas or metadata can also be indexed, as long as they are not explicitly filtered.
This is where a particularly critical attack scenario arises. If an attacker is able to inject manipulated content into an indexed document -- such as an internal wiki, a shared PDF, or a shared document repository -- they can place malicious instructions that are later processed by the LLM.
Since the model does not distinguish between "user prompt" and "document context", an instruction placed in the PDF can become part of the basis for decision-making. The actual attack surface is no longer the chat interface, but the company's document base.
A practical scenario arises in the recruiting process: Applications are often submitted via publicly accessible web forms. Curriculum vitae (CV) and cover letter are uploaded as PDFs, automatically stored and stored internally. If the HR department later accesses these documents via an internal LLM system -- for example, to summarize profiles or compare candidates -- the contents of these PDFs are read and processed by the system.
If an attacker now deliberately places malicious instructions in the CV or cover letter, these can enter the internal LLM via the document index. The attack path thus runs from a publicly accessible upload function to document management and an internal AI system. Technically, it is an indirect injection that does not take place via the interface of the LLM, but via an upstream business process.
Many organizations do not take this risk into account because they assume that documents are merely a source of information. In fact, however, they become an integral part of the prompt and thus a potential attack vector.
Access control and data exfiltration
Another structural problem of internal LLM systems lies in access control. The model itself has no permissions. It merely processes the context that is passed to it by the backend.
If the retrieval system retrieves documents without enforcing a clean document level access control, it can happen that content is passed to the model that the requesting user should not actually see. The model may then use this content in its response or at least indirectly reference it.
Attacks are often iterative. Through clever questioning, rephrasing or shifting context, information can be reconstructed step by step. Even if complete documents are not output, fragments or summaries can reveal sensitive content.
The core problem is architectural: security logic must not be left to the model, but must be technically enforced before context generation.
Tool integration and active system interventions
With the introduction of function calling or tool integrations, the risk profile is shifting further. The model no longer generates just text, but structured calls to backend functions. These can perform database operations, create tickets, or access external APIs. If such function calls are not strictly validated and authorized on the server side, an attacker can indirectly trigger actions via manipulated prompts. It becomes particularly critical when backend service accounts have far-reaching rights and the model acts as an intermediary.
In this context, it becomes clear that an LLM is not a security-conscious actor. It follows statistical patterns. Every safety-relevant decision must be technically secured outside the model.
Context and session isolation
Another technical aspect concerns the management of conversational contexts. Many systems store chat histories or use caching mechanisms to generate responses more efficiently.
If contextual data is not properly isolated, it can lead to blending between users or sessions. In multi-tenant environments. this represents a significant risk. Since the model itself does not have client separation, this must be strictly enforced in the backend.
Basic technical insight
LLM systems are not classic software components with clear decision logic, but probabilistic word processing systems. They have no intrinsic understanding of security and no reliable separation between instruction and data.
As soon as internal documents are automatically analyzed, indexed and integrated into prompts, the attack surface expands considerably. Manipulated PDF content, indirect prompt injection via knowledge bases, and inadequate access controls are among the most realistic risks in productive enterprise environments.
The security assessment of such systems therefore requires an architectural understanding of the entire processing chain – from document parsing to retrieval mechanisms and tool execution.
The critical point is not only in the model, but in the way it is connected to data, context and system functions.
O post LLM Penetration Testing from a Technical Perspective apareceu primeiro em act digital.
]]>O post Top 10 vulnerabilities found in internal pentests apareceu primeiro em act digital.
]]>This service consists of the simulation of a cyberattack, carried out by security experts, within the company’s network to find vulnerabilities in the system and elevate privileges on the infrastructure, the same way a real opponent would (but without causing real damage). The goal is to find weaknesses and provide recommendations to fix them before a real attacker can exploit them.
Our penetration testing team has conducted many internal security audits for various clients over the years. Here are the top 10 vulnerabilities we find, how we were able to exploit them, and recommendations to solve them.
Windows is using multiple naming protocols to identify computers or services across the network to facilitate users’ access to them. The most widely used is DNS (Domain Name System): each computer that joins the domain or service is added to the DNS server, so whenever a user asks for a specific resource, the DNS service answers with the matching IP.
Unfortunately, if a user makes a mistake when querying a resource, then the DNS server cannot answer (e.g.: by assuming a CIFS (Common Internet File System) service like SMB (Server Message Block) exists at “ws-share.domain.tld”, while the user is asking for “w-share.domain.tld”, the DNS server won’t answer because “w-share” is not in its database). In this case, Windows will still try to answer using fallback protocols such as mDNS (Multicast DNS), NBNS (NetBIOS Name Service), or LLMNR (Link-Local Multicast Name Resolution).
All these are broadcast protocols, which means a request is sent to everyone in the attempt to find the IP of “w-share.domain.tld”. The problem is that if someone in the network is responding to this query with its own IP address, the client asking for “ws-share.domain.tld” may be redirected to initiate an authentication process, sending directly to the attacker an NTLMv2 response, which includes the hash of his password. In this scenario, the attacker can gather the hashes of multiple users’ passwords and try to crack them offline.
To prevent this threat, it is recommended to avoid using old legacy protocols by disabling them and only allow the use of DNS. This can be done through Group Policy Object (GPO).
Another widely used technique is the NTLM (NT LAN Manager) relay. This attack relies on the lack of signature verification during the NTLM authentication, when a user tries to access a service (mostly LDAP – Lightweight Directory Access Protocol – or SMB).
NTLM authentications work in 3 steps:
Attackers in a Man-in-the-Middle (MitM) position may intercept the authentication request between a user and a service A, forwarding that request to a service B. Then, as they receive the challenge from service A, they forward it to the client and wait for his answer, which they will send back to service B. This way, they can authenticate themselves as the client to service B, without any knowledge of his password.
This attack does not work anymore if services request the client’s signature. Be aware that enabling signature and requiring signature is different: in the first case, the signature is possible and not mandatory, whereas in the second it will be asked everytime.
Since it is very difficult to manage computers in large environments, Microsoft added a solution to automatise the management of the local administrator account, which is called LAPS (Local Administrator Password Solution). This solution adds two new properties to computer objects: “ms-Mcs-AdmPwd” and “ms-Mcs-AdmPwdExpirationTime” (a clear text password and its expiration date).
By default, only high privileged accounts may read the “ms-Mcs-AdmPwd” properties, but because some groups often have more privileges than they should, it is not rare to find users allowed to read these properties, even though they should not, which grants them local administrator rights.
Another misconfiguration our team encounters is when users have extended rights over a computer A. In this scenario, if the user has the right to add a new computer to the domain, they may add a computer B (on which he has administrator privileges) and synchronise its local administrator password with computer A (overwriting his own password). Afterwards, since they are already local administrators on the machine, they just need to read the “ms-Mcs-AdmPwd” property to obtain the password.
Using LAPS is a very efficient way to protect computers but it may be hard to restrict users from having read access or extended rights over the password.
During internal penetration tests, once a security auditor has an account, he may verify if there are open shares over the network and check if there is one that is configured with weak restrictions, such as “anonymous” access, “full read” access, or “write” access.
“Anonymous” access to a share means that anybody, even without an account, can access this share. Giving “write” access to too many principles may also result in users modifying data (e.g., binaries) to introduce backdoors. On the other hand, giving “read” access to everybody can end up in critical information exposure.
It is difficult to manage permissions correctly because of the number of users and groups, or even because of the number of shares available, but having a strong and restricted share policy reduces the attack surface.
Group Policy Object (GPO) is a feature that helps administrators to manage and control the working environment of users and computer accounts within an organisation. An attacker who has an account in the domain may access the SYSVOL (system volume) shares of any domain controller to retrieve those GPO. If administrators use GPO to push and execute scripts on computers, it may contain critical information such as a credentials.
Another risk is when a user has too many rights (like CreateChild, WriteProperty, or GenericWrite) over a GPO. In this scenario, a user may change it to take over the computers to which GPO applies.
To resolve this vulnerability, do not write clear text passwords in GPO and verify carefully who has writing privileges over your GPO.
ADCS (Active Directory Certificate Service) is a service used for issuing and managing certificates for users, computers or services. It allows you to configure templates that will then be used to generate certificates. However, since these templates are very modulable, they also contain dangerous options. For instance, some templates might be used to authenticate users on the domain, so if the template allows a user to request a certificate on behalf of another account (“ENROLLE_SUPPLIES_SUBJECT” option), an attacker might as well request a certificate and add in the SAN field (Subject Alternative Name) the name of a domain administrator. The ADCS will therefore issue a valid certificate to the attacker, which he might use to authenticate himself as a domain administrator.
Another vulnerability our team finds during pentests is templates that can be overwritten by any “authenticated user”. In this case, we are able to change the template options to authorise “client authentication”, and we can modify the subject of the certificate in order to generate a certificate on behalf of a domain administrator, thus compromising the entire IS.
Companies should verify the rights of their ADCS template to restrict the access to its modification, and they have to make sure that users are not able to supply the subject of the certificate.
A not so new but still present vulnerability is the usage of SPN (Service Principal Name) for non-technical accounts – the so-called kerberoasting attack. SPN is an identifier of a service instance (e.g. MSSQL01_svc/sql.domain.tld:1433 represents the account MSSQL01_svc on sql.domain.tld). If an account has the “servicePrincipalName” property, every user may request a TGS (Ticket-Granting Service) for its service. This TGS includes an encryption of the account’s password, so an attacker may ask for a TGS and try to crack the password offline. This is not that dangerous for service accounts, because passwords are usually random and long enough, therefore harder to crack, but it becomes problematic for user accounts, because most of the time, even with a good password policy, it is easier to crack them with a good dictionary and efficient rule sets.
So, do not set the SPN property for user accounts and make sure to enforce a good password policy to limit the possibility of having attackers crack passwords.
A threat that we frequently face in internal pentests is the discovery of outdated systems or software, which are vulnerable to Common Vulnerabilities and Exposures (CVE). It is hard to have a good update policy, especially in large companies, but this might be an easy entry for attackers, since new vulnerabilities are discovered and exploited every day, and many PoC (Proof of Concept) are often available.
Organisations must set up a good update policy to limit the possibility of cyberattackers exploiting those vulnerabilities.
Recently, during an internal audit, our team has found a way to compromise the entire domain of a client by abusing the misconfiguration of the SCCM (System Center Configuration Manager) service. This product helps administrators to install, manage and configure computers in a domain. Unfortunately, the account used to perform configuration tasks was a domain account with too many rights (domain administrator rights).
The fact is that a local administrator of a computer managed by SCCM can decrypt the SCCM NAA (Network Access Account) password, because it is only protected by the Windows Data Protection API. Therefore, becoming local administrators on a workstation is often achieved by our experts.
Microsoft recommends not to use domain accounts to perform the configuration of computers or, at least, to restrict permissions.
Finally, the most common vulnerability we face during our audits is having weak user passwords. As computing power increases, it is easier for attackers to crack more complex passwords, and even easier when the password policy used is weak. For example, this article shows how a password of 8 characters with upper, lowercase and special characters takes 5 days to be recovered and costs 3000$.
The French Security Agency (ANSSI) recommends a password length between 9 and 11 characters to protect low sensitivity data, 12 to 14 for medium sensitivity content, and 15 or more for a high to a very high sensitivity level. They also recommend using uppercase, lowercase, digits, and special characters. Highly sensitive accounts should also be using Multi-Factor Authentication (MFA).
O post Top 10 vulnerabilities found in internal pentests apareceu primeiro em act digital.
]]>O post Shift-Right: a bold testing trend in real-life scenarios apareceu primeiro em act digital.
]]>We’ll cover the benefits, techniques, implementation, challenges and successful examples of the Shift-Right approach, besides exploring its integration with other testing practices.
The concept of Shift-Right testing (also called ‘testing in production’) is simple but implies a certain boldness: it involves carrying out tests continuously when the software is already in a production environment. It complements the traditional practice of Shift-Left, which consists of incorporating tests as early as possible in development, avoiding problems rather than detecting them.
And where is the boldness in the Shift-Right approach? In the fact that it leaves the ‘safe’ environment of development and puts the software in contact with users, potentially bringing insights that would be difficult to obtain otherwise. It focuses on ensuring that testing accompanies not only the development stages, but also the entire deployment process and the post-launch stages of the product, involving testing in real environments and ensuring that it is possible to catch errors that sometimes only appear in actual conditions of use.
Shift-Left guarantees testing at the beginning of the development stages, with the aim of finding as many bugs as possible, as soon as possible, and correcting them before the deployment phase. These measures aim to reduce the gap between writing the code and the possible bugs found, which consequently reduces costs and correction time.
Shift-Right, on the other hand, covers the entire stage after this flow, concentrating on validating the functionality and usability of the product and solving the problems raised by the client. In short, it tends to improve the user journey, ensuring that the product is bug-free in the first few hours after implementation and before future updates.

Shift-Left vs. Shift-Right in the software development cycle
Both resources aim to evaluate and guarantee the quality and performance of new solutions and functionalities in the DevOps process and software development lifecycle, focussing on continuous testing methods. The logic behind the Shift-Left and Shift-Right principles in Agile practice is basically ‘fail-fast’. The main focus is on detecting risks before they become problems.
The main benefit of this methodology is the ability to provide a real users’ point of view, who, because they are not involved in the development stages, end up seeing the application more objectively.
We can also point out the following benefits:
Shift-Right includes a variety of techniques that can be applied at different stages of the development lifecycle. Some of the most common are:
Each of these techniques has its own advantages and disadvantages, so the choice of the ideal technique depends on the specific context of the project and the objectives of the testing team.
Shift-Right, just like its sibling Shift-Left, is essential to ensure that quality is present throughout the product’s development and lifespan, providing a consistent and safe user experience for customers.
I can say that it’s a crucial step in the whole flow, since it’s where we validate real scenarios, acting as if we were the customers themselves. In many cases, companies invite customers with whom they have a good relationship to carry out beta tests to ensure that everything is working as it should.
It's worth emphasising that this isn’t a guarantee that there won't be any more bugs, but rather that the probability of finding them is as low as possible. In other words, it's a way of ensuring that the post-deliverable remains working properly.
O post Shift-Right: a bold testing trend in real-life scenarios apareceu primeiro em act digital.
]]>O post AVIF Image Format: what is it and can it improve performance of Web Applications? apareceu primeiro em act digital.
]]>
The best compression in comparison to all formats
First things first, what is AVIF? According to the caniuse website:
“A modern image format based on the AV1 video format. AVIF generally has better compression than WebP, JPEG, PNG and GIF and is designed to supersede them. AVIF competes with JPEG XL which has similar compression quality and is generally seen as more feature rich than AVIF.”
In other words, AVIF is an image format, smaller than the most famous formats like PNG or JPG, without losing quality.
The use of this kind of format is very important for the reasons listed below:
The most famous tools today are the avif.io and Squoosh (from Google).
I recommend the usage of Squoosh, since you can change the configurations into the images and visually see how the conversion impacted your image.
The list of the compatible browsers can be found here: https://caniuse.com/avif
Even if the browser that you use must be compatible, if it isn’t, you can just use the polyfill and the <picture> tag below.
You don't need to wait for all browsers to support it – you can use content negotiation to determine browser support on the server, or use <picture> to provide a fallback to other supported image formats on the client, see the example below a fallback to a JPG image (in case of the AVIF format is not supported yet ):

The polyfill to “create” the feature into the browser is the following.
However, as much as this polyfill is official, it does not work on private tabs in old browser versions of Firefox (53+) and Edge (17+).
It’s recommended to have polyfill + Use the <picture> tag for fallbacks.
To see all the compatible versions using the polyfill, follow the docs: https://github.com/kagami/avif.js
O post AVIF Image Format: what is it and can it improve performance of Web Applications? apareceu primeiro em act digital.
]]>O post The importance of software testing to ensure application quality apareceu primeiro em act digital.
]]>Let's find out more about software testing and its role in the success of a product.
According to the International Software Testing Qualifications Board (ISTQB), levels of testing are groups of testing activities organised and managed together, with each level being equivalent to an instance of the testing process.
There are the following levels of testing:
The types of testing represent a group of activities focused on testing specific characteristics of the system, or part of the system based on its objectives. They are subdivided into:
The greater the visibility of the code, the greater the visibility of how the feature is made, the smaller the quantities of requirements and business specifications tested, and the greater the transparency of the test, approaching a white box test. On the other hand, the lower the visibility of the code, the lower the visibility of how the feature is made, the higher the quantities of requirements and business specifications, and the lower the transparency of the code, gradually approaching a black box test. The image below illustrates this idea.

Any of the levels and types of software testing mentioned above can be carried out in two ways:
Each type of test can be applied in different environments, where the applications are installed, and in different development phases according to the infrastructure, needs and budget of each project/application, as shown in the image below.

Usually, the amount of unit tests developed is relatively greater than that of integration tests (also called service tests) and UI tests (included in the level of system tests), because they cost less, are less specific and are quicker to develop.
This logic is illustrated by the software testing pyramid:

Among the most common software tests developed are unit tests, integration tests and sometimes UI tests. Different levels of testing are created, taking into account their particularities:
In order to deliver a high-quality product at a low cost and with little time invested, a combination of tests should be considered, considering that they can be carried out manually or automatically. However, the types of tests and quantity to be developed depend on the application's scalability, budget, project time and product needs.
O post The importance of software testing to ensure application quality apareceu primeiro em act digital.
]]>O post Developing a Power BI architecture with dataflows and shared datasets apareceu primeiro em act digital.
]]>Power BI offers a range of features and capabilities to enhance data analysis and reporting. Among these, the use of dataflows and shared datasets has become increasingly popular, as they provide a scalable and efficient architecture for data modeling and sharing within Power BI services.
In this article, we will explore how to develop a Power BI architecture by leveraging the use of dataflows and shared datasets, and we’ll discuss the numerous benefits it brings to organisations.
Dataflows in Power BI are a powerful mechanism for data preparation and transformation. With dataflows, you can connect to various data sources, apply transformations, and create reusable data entities known as dataflow tables.
These dataflow tables can then be used across multiple reports and dashboards, ensuring consistency and reducing redundancy in data modeling efforts.
Here are the main benefits of creating dataflows:
Shared datasets in Power BI are reusable datasets that can be used across multiple reports and workspaces.
By leveraging shared datasets, organisations guarantee the following benefits:
Now that we understand the benefits of dataflows and shared datasets, let's figure out how to develop a Power BI architecture that effectively uses these features. Let’s follow these steps:
Identify data sources and transformations: start by identifying the relevant data sources and the required data transformations. Use Power Query within dataflows to clean, filter, and shape the data according to your needs;

Create dataflow tables: Define dataflow tables within the dataflows, representing the transformed data entities. Ensure that the dataflow tables are structured appropriately and are optimised for reuse across multiple reports;

Publish dataflows: publish the dataflows to the desired workspace within Power BI services;

Create shared datasets: use the dataflow tables as the foundation for creating shared datasets. Define appropriate relationships, calculations, and measures within the shared datasets to enable consistent reporting;

Build reports and dashboards: with the shared datasets in place, create reports and dashboards using Power BI Desktop.

When developing a Power BI architecture with dataflows and shared datasets, it's crucial to carefully consider the access control and permissions for dataflows, datasets, and reports within Power BI services. This ensures that the right users have the appropriate level of access to the data and reports, maintaining data security and integrity.
In the end, make sure you validate all these aspects:
By carefully considering access control and permissions for dataflows, datasets, and reports in Power BI Services, organisations can maintain data security, promote collaboration, and ensure regulatory compliance.
It is crucial to regularly review and update access permissions as user roles or data requirements evolve, ensuring that only authorised users have access to the appropriate data and reports.
O post Developing a Power BI architecture with dataflows and shared datasets apareceu primeiro em act digital.
]]>O post Top 10 vulnerabilities found in web application pentests apareceu primeiro em act digital.
]]>Penetration tests are designed to replicate the real-world scenario of the attackers, by testing the security posture of web applications and attacking all known vulnerabilities before the real adversaries get their hands on it.
Below are the top 10 web vulnerabilities found during our penetration tests, accompanied by a short explanation of how these vulnerabilities can be exploited and how mitigate them.
XSS vulnerabilities allow attackers to inject malicious scripts into web pages, potentially affecting all users who view or interact with those pages. The most common XSS are:
Exploitation: In the case of a stored XSS attack, an attacker may insert malicious code in a comment or a form field, which is executed when another user views the content. Reflected XSS happens when an attacker crafts a URL containing malicious code and deceives a user into clicking on it.
Recommendation: Implement strict input validation and output encoding to prevent XSS. Sanitise all user inputs and escape output. Additionally, use Content Security Policies (CSP) to reduce the risk of executing unauthorised scripts.
CSRF can be used to trick verified users into carrying out unwanted actions on a web application, such as completing transactions or altering account information. This attack exploits the web application's inability to verify the origin of the request and the user's authorised session.
Exploitation: An attacker creates a malicious form or link. When the victim uses the link or the form, it sends a request to the vulnerable web application using the victim's cookies. This request can do all the actions that the user can do on the application, which can compromise their account.
Recommendation: There are several methods for mitigating the risk of CSRF, and we recommend using as many of them as possible. The most popular of these methods are anti-CSRF tokens, the use of custom headers, and the use of session cookies with the SameSite=Strict attribute.
By default, applications cannot communicate in JavaScript with applications from other domains. Cross-Origin Resource Sharing (CORS) can be used to modify this behaviour. Incorrect configuration could make the application vulnerable to unauthorised access from other domains.
Exploitation: The most frequent configuration errors are the misconfiguration of “Access-Control-Allow-Origin”, which if too permissive, uses a wildcard (*) or the null value, and allows requests to be made from uncontrolled domains. If, in addition, the application uses “Access-Control-Allow-Credentials: True”, then an attacker could make requests from users' sessions, thus stealing their accounts, because unlike CSRF attacks, the attacker can read the results of these requests.
Recommendation: Avoid using wildcards or null in the Access-Control-Allow-Origin header. Restrict CORS requests to trusted domains and avoid credentialed requests from untrusted origins.
IDOR occurs when an application exposes internal references to objects like database keys or file paths. Attackers can manipulate these references to access unauthorised data.
Exploitation: If a user has access to document 123, he can try to modify the URL or the body of the request to access other users’ documents. For example, he could look for document 122.
Recommendation: Implement server-side access control checks to ensure users can only access authorised data. Avoid exposing sensitive identifiers in URLs and use indirect references or access tokens.
Allowing users to upload files can pose security risks if not properly handled. Many applications rely on client-side validation, which is easily bypassed.
Exploitation: Attackers can upload malicious files such as an HTML file containing JavaScript or a server-side script like PHP, potentially gaining control of the server. It could also upload very large files to saturate the server's storage, or files that once downloaded are dangerous, such as DOCM or EXE files.
Recommendation: Perform strict server-side validation on uploaded files, checking the file type, size, and content. Store files in a non-executable directory and use randomised file names.
An open redirect vulnerability occurs when an application allows users to be redirected to external URLs without proper validation, often based on user input.
Exploitation: An attacker can craft a legitimate-looking URL that redirects users to a malicious website, potentially tricking them into visiting harmful sites like phishing pages. On some redirects, it's also possible to add JavaScript code in place of the URL.
Recommendation: Avoid open redirects when possible. If necessary, validate and whitelist allowed domains or limit redirects to a predefined set of trusted URLs.
JSON Web Tokens (JWT) are widely used for authentication in web applications, but improper implementation can lead to security risks.
Exploitation: If an application fails to verify a JWT’s signature, attackers can manipulate the token to alter claims, such as the user’s identity or role. So, even without an account on the application, the attacker can obtain an admin token that never expires.
Recommendation: Always verify JWT signatures using secure algorithms (e.g., HS256, RS256) and ensure tokens are signed with a strong secret key. Enforce strict expiration times.
Web apps frequently rely on external libraries, which might have security vulnerabilities. If certain components are not updated, the application may become vulnerable.
Exploitation: Attackers can exploit known vulnerabilities in outdated components to compromise the system.
Recommendation: Regularly update third-party libraries and frameworks. Apply security patches promptly and remove unused dependencies.
Storing session tokens in insecure locations such as local or session storage exposes them to potential theft by malicious actors. These tokens are essential for maintaining user sessions and, if compromised, an attacker can hijack the session, impersonate the user and gain access to sensitive information.
Exploitation: Attackers can exploit cross-site scripting (XSS) vulnerabilities to steal session tokens stored in the browser's storage. Furthermore, tokens stored in these locations are vulnerable to browser-based attacks, such as client-side malware or browser extensions.
Recommendation: Prefer cookies with Secure, HttpOnly and SameSite flags over other storage mechanisms to make them inaccessible to JavaScript and therefore protected against XSS attacks.
SQL injection vulnerabilities allow attackers to execute malicious queries by injecting SQL code into input fields.
Exploitation: Attackers can insert malicious SQL commands to access, modify, or delete data, bypass authentication, or execute commands on the server.
Recommendation: Use prepared statements and parameterised queries to prevent SQL injection. Validate and sanitise all inputs and avoid constructing SQL queries by concatenating user input.
O post Top 10 vulnerabilities found in web application pentests apareceu primeiro em act digital.
]]>O post 6 reasons to consider a Managed XDR service apareceu primeiro em act digital.
]]>If this is not news to you, you may already be struggling to manage the shortage of skilled personnel and budgetary resources in your company's Security Operations Centre (SOC).
These issues are driving the rapid growth of a new cyber security service, the Managed XDR (Extended Detection and Response) service.
A Managed XDR offers the right combination of advanced security technologies and human expertise for your business and strengthens your security team with the provider's expertise and know-how.
This holistic approach can be fully customised to suit company size, the existing technology and the threat exposure level.
1. Enhanced cyber security coverage
Cyber attackers (external and internal) do not work business hours - they just wait for the right moment to exploit the slightest system vulnerability. So, if full 24/7 or business hours security coverage seems impossible, the Managed XDR solution is a perfectly scalable solution, for which you do not need to recruit additional security personnel.
2. Smart budget allocation
With a Managed XDR service, you can choose the level of coverage based on the size of your business, the number of terminals and the risk levels. A security operations centre (SOC) staffed with experienced cybersecurity experts is an increasingly expensive approach, so the right Managed XDR partner can help you by expanding your existing team and strengthening their skills.
3. Protection against advanced threats
With the widespread adoption of increasingly advanced tactics, techniques and procedures (TTPs) by hackers and cyber criminals, the cyber threat landscape is becoming more complex every day, increasing the risk of disruption to an organisation's business and reputation. The Managed XDR service offers the most comprehensive security solution available, combining best-in-class technologies with skilled cyber security teams.
4. Reducing tool complexity
In addition to understaffed security teams and analyst burnout due to data and alert overload, companies today use a range of tools and platforms of varying degrees of suitability and complexity. The Managed XDR service aims to reduce technical and human stress by facilitating threat detection through simple integration of tools processing data from various sources.
5. The right skills at the right time - guaranteed
Using a managed XDR service ensures that you are able to address one of the main cybersecurity concerns - the difficulty of finding the right experts. By outsourcing your incident detection and response service to a Managed Security Service Provider (MSSP), you can ensure that you have the best available expertise.
6. More scalable
XDR Managed Services can be upgraded and adapted as the security needs of your company change.
So, while many providers offer the technology and the managed service part as an add-on, with the act digital Managed XDR service you can port your existing in-house solutions, which we can integrate into our XDR, or you can leverage our expertise to select and recommend the right solution for you.
O post 6 reasons to consider a Managed XDR service apareceu primeiro em act digital.
]]>