AI Policy and Objectives

We use AI to work faster, to make some tasks possible at all, and to explain ideas more clearly.

What we mean by AI

When we say “AI”, we mean large language models (LLMs) from large, reasonably trusted platforms, used through their consumer tools: no custom models, no in-depth fine-tuning, no enterprise-tuned deployments. We use generally available tools at their entry-level paid subscription tiers, which give us just enough capability to make this project possible.

We are generally not talking about custom machine-learning (ML) models or artificial general intelligence (AGI). Custom models are starting to show up in government and research settings, but they are still hard for most people to access and come with their own issues.

Most claims about today’s AI need qualifiers like “mostly,” “usually,” or “generally.” We try to stay aware of these limits. We use AI for the things it does well and avoid the areas where it struggles.

Why we use AI

AI is best at working with natural language: it can often turn long or messy text into shorter, clearer, or more structured forms. That makes it easier to apply natural-language processing and automation to complex problems.
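As a purely illustrative sketch of that pattern, the snippet below asks a model to turn messy notes into a structured action list. We normally work through consumer chat tools rather than code, so the API, model name, sample text, and prompt wording here are assumptions for illustration, not a description of our actual workflow.

```python
# Illustrative only: we normally use consumer chat tools, not the API.
# Demonstrates the general pattern of turning messy free text into
# structured output. Assumes the openai package is installed and
# OPENAI_API_KEY is set; the model name and sample text are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messy_notes = """
mtg w/ residents re: bus route changes -- complaints abt missed stops,
someone also asked abt library opening hours?? follow up friday
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "Rewrite the notes as a short bullet list of action items."},
        {"role": "user", "content": messy_notes},
    ],
)

print(response.choices[0].message.content)
```

The same shape, unstructured text in and structured text out, is what makes automation on top of natural language tractable.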

We also see places where the government’s rapid push for AI in public services such as the NHS could unintentionally harm citizens. The government does run sensible trials and can respond quickly when problems show up, but we believe it faces the same trade-off as everyone else: tools can speed up work while weakening understanding. Our approach lets us test how to use AI responsibly and try to set a good example for using AI in deeper workflows.

AI also lets us:

How we use AI

What we are trying to learn

Limits and risks we recognise

Guardrails

Roadmap experiments

If you spot a problem with our use of AI, or have ideas for safer, better practice, please open an issue or pull request on the BBB GitHub repository.