Our Ethical AI Principles
Technology that serves the story — and the people behind it.
Why It Matters
Artificial intelligence is reshaping how we discover and interpret information.
But too often, AI is used to manipulate attention, reinforce bias, and serve commercial interests over public understanding. At Bulletin, we believe technology must amplify truth, not distortion.
Our AI is designed to support independent journalism — not replace it — and to make access to credible information simple, fair, and transparent for everyone.
This isn’t just a technical choice. It’s an ethical stance.
1️⃣ Human-Centred, Not Machine-Led
Our AI assists; it doesn’t decide. Journalists, editors, and readers always remain at the centre of the process. Automation is used to enhance clarity, speed, and accessibility — never to suppress or rewrite human work.
- We treat AI as an assistant for understanding, not an authority on truth.
- Human judgement remains the final layer of curation and accountability.
2️⃣ Transparency Over Black Boxes
You deserve to know how information reaches you. Every recommendation can be traced back to its source and the logic behind it. We publish clear documentation on how Bulletin’s discovery engine ranks, links, and translates content.
- No secret influence, no algorithmic bias buried out of sight.
- We believe transparency builds trust — and trust is the foundation of democracy.
3️⃣ Privacy by Design
Your data is yours. Always. No profiling, no surveillance-based advertising, no resale of personal data. Our systems are built with minimal data collection — only what’s needed for function, never for exploitation.
- You can view, control, or delete your data at any time.
- We’d rather build slower, better systems than faster, invasive ones.
4️⃣ Fairness and Accessibility
AI should open doors, not close them. Our interfaces are built to WCAG 2.2 AA accessibility standards, and inclusive design ensures compatibility with screen readers, Braille displays, text scaling, and high-contrast modes.
- We actively audit for bias — linguistic, cultural, and geographic — to make sure our translation and search systems serve diverse audiences equitably.
- Access to truth should never depend on who you are, where you live, or what language you speak.
5️⃣ Open, Auditable, Explainable
We’re committed to open research and verifiable systems. We use open-source models (Mistral, LLaMA, etc.) that can be inspected, tested, and challenged by independent experts. Our codebase for AI decision logic will be progressively opened for peer review.
- Academic partners and civil society groups will be invited to evaluate our fairness, explainability, and performance metrics.
- We believe accountability is impossible without openness.
6️⃣ Low-Energy, Sustainable AI
We design for efficiency, both ecological and computational. We run lightweight, self-hosted models trained locally to minimise our carbon footprint, and we prioritise small, interpretable models over large, opaque ones.
- Ethical technology also means environmental responsibility.
7️⃣ Continuous Research and Oversight
Ethical AI isn’t a static checklist — it’s an ongoing process. We partner with Goldsmiths University and other research institutions to study media ethics, algorithmic bias, and user trust. We will publish regular AI Transparency Reports documenting performance, inclusivity metrics, and real-world impact.
- A mixed advisory board — including technologists, journalists, and ethicists — will review these findings and guide further development.
Our Commitment
We build AI that helps people see clearly — not think less.
We believe journalism and technology can coexist without compromise.
And we commit to proving, through every update and experiment, that intelligence can serve integrity.