Are the algorithms in your game subject to the AI Act? Most game development studios assume they aren’t. Most of them are wrong.
In 2018, the Belgian Gaming Commission classified loot boxes as gambling and banned their sale for real money. Lawyers argued for a long time about whether they were actually illegal under gambling law, but the Commission's reasoning was simpler: a mechanism that requires payment for a random chance of winning, aimed at children, is in practice a slot machine. Major studios, including EA, Blizzard and Nintendo, withdrew paid loot boxes from the Belgian market rather than risk heavy financial penalties or even criminal liability. Research from 2021–2022 suggested that the ban was never fully enforced across mobile games, and loot boxes continue to operate in Belgium to this day, but the precedent was clear: a regulator had stated that the gaming industry's business model could constitute gambling, and it was right.
In a few months’ time, a similar debate about the limits of the law will take place across the European Union, this time not about gambling but about the AI Act. A system that matches opponents to a player’s profile, a shop that displays offers at the perfect moment, a mechanism that calibrates difficulty so the player does not give up: if any of these analyses user data and makes decisions on that basis, it constitutes an AI system within the meaning of the AI Act and falls within the scope of that legislation.
When a game algorithm ceases to be a mechanic and becomes an AI system in the legal sense
The AI Act defines an AI system by what it does, not by what it is called. If a system analyses user data and, on that basis, generates predictions, recommendations or decisions that influence the user’s behaviour, we have an AI system within the meaning of the law – regardless of whether the studio has called it an algorithm, a recommendation engine, or anything else. The line is not drawn where the industry is accustomed to drawing it, and this is precisely the source of a problem that most studios have yet to recognise.
Over the years, player matching algorithms have evolved from simply comparing statistics to models that analyse behavioural data, a player’s frustration levels, their spending history and susceptibility to a loss of control. As long as they served to improve gameplay quality, nobody had a problem with such mechanisms. The problem arises when the same data begins to be used to influence purchasing decisions, because that is when we enter an area that is explicitly prohibited by the AI Act. Systems designed to alter a user’s behaviour without their knowledge in a way that could cause them harm have been banned since February 2025, and the line between personalisation and manipulation is rarely clear-cut; it is precisely this line that will be the subject of the first major disputes between the regulator and the industry.
Dynamic pricing has been used in games for years and is rarely treated as a legal issue; yet the player should be made aware that the offer they see is not the same as the one someone else sees, and that an algorithm made that decision based on their data. At the moment the player makes their choice, no one informs them of any of this.
As research published by BeGambleAware shows, just 5% of players generate half of the market revenue from loot boxes, suggesting that these systems exploit the behavioural biases of a specific group of users rather than relying on the informed purchasing decisions of the player base as a whole. If the algorithm adjusts the probability of items being drawn to match a player’s profile and spending history, the studio must be able to prove that the system does not employ manipulative techniques.
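To make this concrete, here is a purely hypothetical sketch of the kind of per-player drop-rate adjustment described above; the profile fields, weights and function names are invented for illustration, not taken from any real title. The point is not that studios write exactly this code, but that as soon as a function like this exists, the odds are no longer a fixed game mechanic: they are a prediction derived from user data, which is what pulls the system into the AI Act's scope and puts the burden of proof on the studio.

```python
# Hypothetical illustration only: a drop rate nudged by behavioural data.
# All names and weights are invented for the example.
from dataclasses import dataclass
import random

@dataclass
class PlayerProfile:
    recent_spend_eur: float        # spending history
    days_since_last_purchase: int  # lapse signal

def rare_drop_probability(profile: PlayerProfile, base_rate: float = 0.01) -> float:
    """Return a per-player drop rate derived from the player's data."""
    rate = base_rate
    if profile.days_since_last_purchase > 14:
        rate *= 1.5   # sweeten the odds to win back a lapsed spender
    if profile.recent_spend_eur > 100:
        rate *= 0.8   # high spenders keep buying at worse odds
    return min(rate, 0.05)

def open_loot_box(profile: PlayerProfile) -> str:
    return "rare item" if random.random() < rare_drop_probability(profile) else "common item"
```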

Children’s games – a separate category of risk that must not be ignored
This is the most challenging area and the one where the consequences could be the most serious. Games aimed at children, or played by large numbers of children, which use algorithms to optimise engagement and retention may require a fundamental rights impact assessment before they are placed on the EU market, as the AI Act treats systems that may affect minors particularly strictly. An algorithm that analyses a child’s behaviour in order to extend their gaming session and increase the likelihood of a purchase is a system that may affect the fundamental rights of a minor. The situation is further complicated by the fact that the GDPR and the DSA each impose their own obligations in relation to users under the age of eighteen. A studio developing a children’s game with an advanced retention system can therefore find itself subject to three different regulatory regimes at the same time, and none of them will be willing to accept that the system was merely an innocent part of the game.
What do game developers need to document and report?
For low-risk AI systems the requirements are modest, but they do exist: users interacting with a generative AI system must be informed that they are doing so, and a shop powered by a recommendation algorithm should operate in a way that is understandable to the user. For systems classified as high-risk, the scope of obligations is much broader: implementing a risk management system throughout the product lifecycle, maintaining technical documentation, ensuring automatic event logging and human oversight of important system decisions, and registering the system in the EU database before deployment. All of this requires planning at the level of the entire game development process.
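As a rough illustration of what “automatic event logging” can mean in practice, the sketch below shows one way a studio might record the decisions of a recommendation or matchmaking system so that a human can later review them. The schema, field names and example values are assumptions made for this sketch, not an official AI Act template.

```python
# Minimal sketch of automatic event logging for algorithmic decisions.
# Field names are illustrative assumptions; the aim is traceability:
# record the inputs, the output and the model version behind each decision.
import json
import logging
from datetime import datetime, timezone

decision_log = logging.getLogger("ai_decisions")
decision_log.setLevel(logging.INFO)
decision_log.addHandler(logging.FileHandler("ai_decisions.jsonl"))

def log_decision(system_name: str, model_version: str,
                 player_id: str, inputs: dict, output: dict) -> None:
    """Append one decision record so it can later be reviewed by a human."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,           # e.g. "matchmaking", "store_offers"
        "model_version": model_version,  # ties the decision to its documentation
        "player_id": player_id,          # pseudonymised ID, per GDPR data minimisation
        "inputs": inputs,
        "output": output,
    }
    decision_log.info(json.dumps(record))

# Example: recording which personalised offer the recommendation engine chose.
log_decision("store_offers", "offer-model-2.3", "p-48213",
             inputs={"session_length_min": 42, "last_purchase_days": 9},
             output={"offer_shown": "starter_bundle", "discount_pct": 30})
```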
There is one further issue that the industry regularly overlooks. A studio that takes off-the-shelf AI tools from an external supplier and implements them under its own brand, or substantially adapts them to its own purposes, may be considered a provider of the AI system rather than merely a deployer, and so bears the full scope of a provider’s responsibilities, including conformity assessment and registration. A studio that configures a ready-made AI tool for its own use cannot assume that responsibility for regulatory compliance stays with the original supplier; under the AI Act it does not.

How can a game development studio prepare for the requirements of the AI Act before they begin to apply?
The starting point is to identify which elements of the game actually analyse player data and do something with it, as this is where risk classification begins. For each such element, it is worth checking whether it could influence the player’s financial decisions, whether it is aimed at children, whether it analyses behavioural or emotional data, and whether it might operate in ways of which the player is unaware. The more affirmative answers there are, the higher the risk classification and the broader the scope of obligations.
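One way to start that inventory is a simple internal checklist that asks exactly those four questions for every data-driven system in the game. The sketch below is an illustrative self-assessment aid, not a legal classification method; the system names and the scoring heuristic are invented for the example.

```python
# Illustrative self-assessment sketch: count affirmative answers to the four
# questions above for each system, to prioritise which ones need proper review.
from dataclasses import dataclass

@dataclass
class GameSystem:
    name: str
    influences_spending: bool    # can it affect the player's financial decisions?
    targets_children: bool       # is the game aimed at, or widely played by, minors?
    uses_behavioural_data: bool  # does it analyse behavioural or emotional data?
    opaque_to_player: bool       # does it operate in ways the player is unaware of?

def risk_flags(system: GameSystem) -> int:
    """More 'yes' answers means a higher likely risk classification."""
    return sum([system.influences_spending, system.targets_children,
                system.uses_behavioural_data, system.opaque_to_player])

inventory = [
    GameSystem("matchmaking", False, False, True, True),
    GameSystem("personalised store offers", True, False, True, True),
    GameSystem("dynamic difficulty adjustment", False, True, True, True),
]

for s in sorted(inventory, key=risk_flags, reverse=True):
    print(f"{s.name}: {risk_flags(s)}/4 risk flags -> review priority")
```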
Added to this is a review of contracts with external AI tool providers, as those signed before 2025 almost certainly do not regulate the division of liability in the event of a query from the regulator. This is a mistake that is easy to rectify now but difficult to explain later. For years, the game development industry had plenty of scope for innovation before regulation caught up with technology. In the case of the AI Act, that window has closed; the regulations are in force and penalties can reach 7% of global annual turnover. A studio that carries out an analysis now has time to adapt calmly. One that waits will only discover the extent of the problem when it already has to act quickly.
