AI Alignment & Governance

Technical Rigour. Regulatory Clarity.

I help European organisations navigate AI governance, EU AI Act compliance, and technical implementation risks — drawing on decades of C-level leadership and hands-on engineering.

Steve Ball

Steve Ball

AI Governance & Strategy Consultant

Serving clients in
Brussels • Remote

About Me

Background

I've spent decades as a senior technology leader, combining deep hands-on engineering with responsibility for risk, compliance, and large-scale systems delivery. My roles have included Staff Engineer, CTO, investment-banking COO, and Senior Director of AI. I've led multiple AI product initiatives from design through to deployment.

Current Focus

I partner with executive teams to shape AI strategy, and work hands-on with engineering teams to build the first working prototypes together with the operational safeguards they require. This ensures the organisation can adopt AI in ways that are technically robust, trustworthy, and properly governed.

Working Style

Substance Over Performance

AI is powerful, but it isn't magic. I build working prototypes that teams can validate early, so that organisations can see how a system behaves, where risks may arise, and what operational safeguards will be needed.

Safety as a First Principle

My work is grounded in early risk visibility, close collaboration with domain experts, and the principles of trustworthy AI. I recognise the profound impact GenAI can have, and I am committed to maximising its benefits while minimising its potential for harm.

AI Readiness Assessment

Assess your organisation's AI readiness in just a few minutes.

This tool offers a clear, structured snapshot of where you stand—across strategy, execution capability, and organisational preparedness.

It is free, anonymous, and needs no sign-up.

AI Due Diligence & Risk Assessment

AI is powerful, and responsibility is the first requirement.

GenAI introduces opaque, non-deterministic components into human-facing systems. These are components whose behaviour cannot be inspected or reliably predicted. This makes structured monitoring and evaluation essential.

Unlike traditional software, generative models cannot be reasoned about through code, inputs, or deterministic rules. Safety guardrails must be designed into GenAI implementations from the beginning; and meaningful oversight requires inspectability, automated evaluation, and human-in-the-loop escalation, after deployment.

Why GenAI requires a new standard of risk management

  • Unpredictable outputs. LLMs can generate harmful, misleading, or unexpected responses — including confident fabrications — and the same prompt cannot be relied on to behave consistently over time.
  • Prompt injection. Malicious or unintentional inputs can override system instructions, bypass safeguards, reveal internal logic, or coerce the model into unsafe behaviour, creating a direct integrity and security risk.
  • Impersonation risk. Generative systems can convincingly mimic historical, living, or fictional individuals, creating reputational, ethical, and legal exposure.
  • Information leakage. Models may inadvertently reveal confidential or sensitive information unless carefully constrained, isolated, and monitored.
  • Agentic autonomy. Systems with the ability to take action — trigger workflows, evaluate cases, send messages — can amplify harm significantly without strong governance boundaries.
  • Invisible failures. Without structured oversight, unsafe or anomalous interactions may go unnoticed, undermining organisational and regulatory obligations.
  • Unintended bias. Subtle asymmetries in training data can manifest as skewed, inappropriate, or discriminatory outputs in sensitive contexts.

The more powerful the capability, the more rigorous the safeguards must be.

GenAI risk is real, but it is not unmanageable. The same properties that make these systems unpredictable also enable strong technical support for mitigation: AI-based monitoring of interactions with alerting pathways to human-in-the-loop review.

Most of the safeguards organisations need are well-established patterns in modern AI architecture — and, when designed in, they bring non-deterministic systems back within the boundaries of accountable, inspectable behaviour.

I help organisations implement these safeguards in a way that aligns with their obligations under the EU AI Act. My role is to make risks visible early, recommend proportionate controls, and support teams in building systems that are safe, operationally robust, and trustworthy.

AI Strategy & Governance

Effective strategy = precise scoping.

Strategy for AI integration and governance starts with defining boundaries: what you're building, what you're explicitly not building, who owns decisions, and where oversight occurs.

This includes the governance frameworks themselves: what oversight is required, which safeguards are proportionate, and how accountability functions in practice. Ambiguity creates risk, and clear scope prevents ambiguity.

What organisations need

  • Regulatory clarity. Understanding which EU AI Act requirements apply, what conformity means in practice, and how to structure documentation and oversight.
  • Governance frameworks. Designing accountability structures, decision rights, and approval processes that meet regulatory obligations.
  • Operational safeguards. Building technical safeguards including human-in-the-loop review mechanisms, automated monitoring, and prompt engineering guardrails that protect against injection attacks and other risks to keep systems reliable.
  • Value assessment. Scoping anticipated benefits and success criteria against implementation costs, operational impact, and risk.
  • Technical assessment. Assessing vendor claims dispassionately, evaluating technical feasibility, and estimating realistic implementation complexity and constraints.
  • Implementation sequencing. Phasing deployments to match organizational capacity and coordinate across technical and non-technical teams.

Clear boundaries enable confident action.

When organisations know what they're building, what they're not building, and what governance exists around those choices, teams can direct their energy to solving problems rather than debating scope. Far from constraining thinking, boundaries create the necessary conditions for it.

I work with organisations to establish this clarity early on: defining regulatory obligations, designing operational frameworks, and ensuring the technical safeguards are in place for trustworthy deployment. The goal is to help teams move forward with confidence, knowing their approach is both technically sound and institutionally defensible.

GenAI Prototyping & Exploration

Most people can't form a realistic mental model of GenAI until they try it in context

A simple working prototype often achieves in minutes what hours of conversation cannot: clarity, alignment, and a shared sense of what's possible. It turns abstract thinking into something concrete — a model leaders and domain experts can evaluate, question, and refine together.

Benefits of early hands-on interaction

  • Feasibility. Shows what a foundation model can realistically achieve with your tasks, data, constraints, and context.
  • Value. Gives an early sense of where GenAI delivers meaningful organisational benefit — and where it does not.
  • Risks. Surfaces behavioural, safety, and governance concerns early, before time or budget are committed.
  • Data. Clarifies what information is needed, how it must be handled, and which privacy or confidentiality issues arise.
  • Alignment. Provides a shared, concrete reference point teams can refine, question, and build upon together.

A good prototype is an instrument of discovery — it aims to engage and reveal.

With each quick, contained iteration, assumptions become clearer, risks surface, and the shape of a workable proposition emerges. This early clarity strengthens every subsequent stage: architecture, data preparation, safety design, governance decisions, and the conversations that determine whether a project should proceed at all.

I support organisations through this process in a steady, structured way — building simple, well-bounded prototypes, refining them with domain experts, and ensuring the insights feed directly into practical next steps. The goal is not to race ahead heedlessly, but to help teams move forward with confidence, guided by the realisations that emerge through the process.

Get in touch

+31 610 422 333
© AI Align, 2025. All rights reserved. Photos courtesy of Unsplash.