
AI Is Not Just an Opportunity—It’s a Responsibility

Interview with Tatjána Turányi, AI Regulatory Expert at Quadron


As artificial intelligence becomes more deeply embedded in our systems, the stakes keep rising. From cybersecurity threats to ethical grey zones, AI is not just transforming how we work: it’s redefining what accountability means in the digital age.

We spoke with Tatjána Turányi, AI Regulatory Expert at Quadron, about the realities of implementing the EU AI Act, why traditional IT risk frameworks fall short for AI, and how organisations can move from confusion to compliance—responsibly.


How did you get involved with cybersecurity and AI? What led you to Quadron?

Tatjána Turányi: I started exploring cybersecurity and cyberpsychology about four or five years ago, during the pandemic. I became interested in how digital environments shape behavior, and how we can build resilience against digital threats. I joined Quadron as an educational consultant, where I integrated a cyberpsychology perspective into our training. I was looking not just at technical threats like phishing, but also at the human reactions behind them.

Later, I shifted toward advisory roles, especially around compliance and risk management. That’s where I saw the biggest impact: understanding and addressing risk needs to start as early as system design, especially in the age of AI.


Tell us a bit about your academic background.

Tatjána Turányi: I’m originally from Transcarpathia (in Ukraine) and have a social sciences background. I studied political science in Ukraine, then moved to Budapest over a decade ago to pursue a PhD in sociology at ELTE. My focus gradually moved from social psychology to cyberpsychology and decision-making under risk. I also completed cybersecurity fundamentals courses through ISACA and ISC2, after taking a cyberpsychology course with the Information Sharing and Analysis Center (ISAC).


In what ways has AI escalated cybersecurity risks?

Tatjána Turányi: AI has fundamentally changed the game. You no longer need technical skills to create harmful content. AI tools can generate malicious code or deepfakes on command.

I recently read a study which found that AI models are already outperforming human experts in some research domains by over 90%. That’s a massive leap forward for things like disease detection. But the same capabilities can be weaponised if they fall into the wrong hands.

Disinformation has also become more dangerous. AI can generate and spread false narratives at scale, creating societal instability or even fueling political manipulation. This goes beyond system outages—we’re talking about eroding people’s sense of reality.

The Russia-Ukraine war was a turning point: the first publicly visible cyberwar. Infrastructure such as utilities and mobile networks was targeted, even before the invasion. At the same time, waves of disinformation hit social media. And these tactics are spreading globally.


What frameworks guide your AI risk assessment work at Quadron?

Tatjána Turányi: We follow three main frameworks. First, the EU AI Act, which provides legally binding obligations, not just suggestions. Like NIS2, it sets clear compliance expectations for both providers and users.

For technical risk management, we use the NIST AI RMF, which gives detailed guidance on how to assess threats and ensure governance. Then there’s ISO/IEC 42001, which helps organisations responsibly manage AI across its entire lifecycle.

Of course, real-world implementation is never one-size-fits-all: risk depends on the system’s context, use case, and design.


How has the EU AI Act influenced your approach?

Tatjána Turányi: It has introduced a clear shift toward risk-based classification: AI systems are grouped into minimal, limited, high, and unacceptable risk categories, and different rules apply to each level. This doesn’t just affect developers; users must comply too.

Many organisations didn’t previously assess AI risks at all. Now, AI is everywhere, even if it’s not acknowledged. Our job at Quadron is to help clients align their tools with this new reality and make sure their systems are secure and compliant by design.


What does AI regulation and risk assessment look like in practice?

Tatjána Turányi: The regulatory side focuses on turning the AI Act into internal policy: helping clients understand their obligations and avoid prohibited practices. It’s a multi-phase rollout, with some rules already in effect and others coming into force in 2025 and 2027.

Risk assessment is more technical. We look at how systems are trained, what data they use, and how transparent and secure they are. One client project involved mapping potential AI use cases and assessing risks per scenario. In many cases, even the clients themselves don’t fully know how they want to use AI yet, so collaboration and clarity are key.
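To make that per-scenario mapping a little more concrete, here is a minimal, hypothetical sketch of what a use-case risk inventory might look like. It is not Quadron’s methodology or an official AI Act tool: the UseCase fields, the classify() rules, and the example use cases are all illustrative assumptions, and a real assessment would weigh many more factors, such as training data, transparency, security, and human oversight.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One candidate AI use case identified with the client (illustrative fields only)."""
    name: str                        # what the client wants the AI system to do
    uses_personal_data: bool         # does it process personal data?
    affects_individual_rights: bool  # e.g. hiring, credit, access to services
    is_banned_practice: bool         # e.g. social scoring, workplace emotion monitoring

def classify(use_case: UseCase) -> str:
    """Roughly map a use case onto the AI Act's risk levels.
    'Unacceptable' practices are prohibited outright; the other levels
    carry progressively lighter obligations."""
    if use_case.is_banned_practice:
        return "unacceptable (prohibited)"
    if use_case.affects_individual_rights:
        return "high"
    if use_case.uses_personal_data:
        return "limited"
    return "minimal"

# Hypothetical inventory gathered in a client workshop
inventory = [
    UseCase("Internal document summarisation", False, False, False),
    UseCase("CV pre-screening for recruitment", True, True, False),
    UseCase("Workplace emotion monitoring", True, True, True),
]

for uc in inventory:
    print(f"{uc.name}: {classify(uc)} risk")
```

Each level then maps to concrete obligations, from basic transparency requirements up to stopping the use of a prohibited system, which is where the banned practices discussed below come in.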


What types of practices are banned under the AI Act?

Tatjána Turányi: Since February 2025, the AI Act has explicitly and legally prohibited several practices, and these rules apply to both developers and users.

The most critical banned practices include:

Use of subliminal, deceptive, or manipulative techniques
AI systems must not apply techniques that influence a person’s behavior without their awareness or distort their ability to make decisions to such an extent that they take actions they would not have otherwise taken.
Example: A video game may not use hidden frames that trigger aggression or other behavioral responses.

Exploitation of personal vulnerability
AI systems must not be developed or used in ways that exploit vulnerabilities arising from a person’s age, social background, or health condition to alter their behavior.
Example: A game may not bombard a 9-year-old child with loot box offers in an attempt to persuade them to spend their parents’ money.

Social scoring
AI may not be used to create profiles that rank or score individuals based on their social status, behavior, or past, in ways that result in unfair treatment in contexts unrelated to the original data collection—such as credit applications or access to social benefits.

Criminal profiling
AI may not be used to assign individuals a criminal profile solely based on their background or behavioral patterns.

Creation of facial recognition databases
It is prohibited to use AI systems that generate facial recognition databases from images or data collected online.

Emotion recognition
AI systems may not be used to monitor an individual’s emotional state in workplaces or educational institutions—except in situations involving immediate danger (e.g., when someone poses a risk to themselves or others).

Biometric categorisation
AI must not use biometric data—such as iris scans or facial features—to draw conclusions about a person’s health or other sensitive characteristics.

Real-time biometric identification in public spaces
Real-time biometric surveillance in public areas is prohibited, unless authorised by competent authorities—for example, in cases involving kidnapping or serious criminal investigations.

If an organisation discovers they’re using a non-compliant system, they must report it and stop using it immediately. That’s why constant monitoring is essential.


Where does ethics come in?

Tatjána Turányi: A lot of people fear AI will replace them. But it’s not about replacing people; it’s about supporting them.

What makes AI different from traditional IT systems is that it learns and evolves. It behaves in ways we can’t always predict. So we need to go beyond technical safeguards and consider social impacts: What data is being used? Who is being affected? What decisions are being automated?

Bias is a huge concern. Poor-quality data leads to poor outcomes, which, at scale, can reinforce discrimination. It’s our responsibility to make sure AI isn’t just efficient, but fair.


What are clients struggling with the most when it comes to the AI Act?

Tatjána Turányi: Most often? They simply don’t know about it. Or if they do, they don’t understand how it applies to them. The NIS2 directive was already a challenge, and now the AI Act adds another layer. Some larger organisations are building AI compliance teams. But many are still in the dark about what systems they’re using, what risks they face, or even where to start.

Our role is to translate legal language into business terms. It’s not just about managing technical risk; it’s about understanding long-term business and ethical impact.


How do you support decision-makers in understanding what’s at stake?

Tatjána Turányi: We need to connect the dots between IT teams and executive leadership. Many leaders see AI as a performance boost, but they’re less aware of the legal obligations and reputational risks. The AI Act requires continuous reporting and collaboration across the ecosystem. This isn’t about ticking a box and moving on. It’s about maintaining trust, accountability, and transparency over time.

My job is to help organisations shift their perspective: AI should empower people, not replace them. Once that mindset takes hold, AI becomes a much more strategic—and sustainable—tool.


Can you share a real-world example of an AI compliance project?

Tatjána Turányi: Last November, we delivered our first full AI policy for a client. The project began when they told us a subsidiary was developing a custom AI tool. We first clarified their role under the AI Act: they were the provider, which meant more responsibility.

Next, we mapped out the tool’s use cases: internal only, or external too? That matters, because the law treats these differently.

We then drafted a 50-page policy covering banned practices, data principles, transparency obligations, and output limitations. In parallel, my colleague Henrik Gál conducted a technical risk assessment of the system’s architecture.

It was an intense collaboration; even the client was still figuring things out as we went. But by the end, we’d created a policy they now use as their internal compass.