Challenges of public trust in models


Summary

The challenges of public trust in models refer to the difficulties people face in believing that complex AI systems and algorithms are fair, reliable, and safe. Building trust in these technologies requires transparency, accountability, and real-world evidence that models are designed and used responsibly.

  • Share how it works: Make it easy for people to understand what data shapes an AI model and how its key decisions are made; that understanding builds comfort and confidence.
  • Invite public input: Give users chances to engage with AI systems, ask questions, and see safeguards in action so they feel included in shaping technology.
  • Test in real life: Demonstrate the real-world value and safety of AI solutions through pilot programs and clear feedback channels.
Summarized by AI based on LinkedIn member posts
  • Katerina Budinova

    MBA, CMgr MCMI | HVAC Product Director | Innovation & Sustainability Leader | Driving Growth, Performance & High-Impact Teams | Founder – Women Front Network & Green Clarity Sprint

    12,816 followers

    Trust in AI isn't a PR problem. It's an engineering one.

    Public trust in AI is falling fast. In the UK, 87% of people want stronger regulation on AI, and a majority believe current safeguards aren't enough. We can't rebuild that trust with ethics statements, glossy videos, or "trust centers" that nobody reads. We need to engineer trust into AI systems from day one. That means:
    - Designing for transparency and explainability (not just performance)
    - Piloting high-benefit, low-risk use cases that prove value (and safety)
    - Embedding value-alignment into system architecture using standards like ISO/IEEE 24748-7000

    Engineers can no longer afford to be left out of the trust conversation. They are the trust conversation. Here's how:

    🔧 1. Value-Based Engineering (VBE): Turning Ethics into System Design
    Most companies talk about AI ethics. Few can prove it. Value-Based Engineering (VBE), guided by ISO/IEEE 24748-7000, helps translate public values into system requirements. It's a 3-step loop:
    - Elicit values: fairness, accountability, autonomy
    - Translate them into constraints: e.g., <5% error-rate disparity across groups
    - Implement and track them across the development lifecycle
    This turns "fairness" from aspiration into implementation (a sketch of such a check appears after this post). The UK's AI Safety Institute can play a pivotal role in defining and enforcing these engineering benchmarks.

    🔍 2. Transparency Isn't a Buzzword. It's a Stack
    Explainability has layers:
    - Global: what the system is designed to do
    - Local: why this output, for this user, right now
    - Post hoc: full logs and traceability
    The UK's proposed AI white paper encourages responsible innovation, but it's time to back guidance with technical implementation standards. The gold standard? If something goes wrong, you can trace it and fix it with evidence.

    ✅ 3. Trust Is Verifiable, Not Assumed
    Brundage et al. offer the blueprint:
    - External audits and third-party certifications
    - Red-team exercises simulating adversarial misuse
    - Bug-bounty-style trust challenges
    - Compute transparency: what was trained, how, and with what data
    UK regulators should incentivise these practices with procurement preferences and public reporting frameworks. This isn't compliance theater. It's engineering maturity.

    🚦 4. Pilot High-Impact, Low-Risk Deployments
    Don't go straight to AI in criminal justice or benefits allocation. Start where you can:
    - Improve NHS triage queues
    - Explainable fraud detection in HMRC
    - Local council AI copilots with human-in-the-loop override
    Use these early deployments to build evidence and public trust.

    📐 5. Build Policy-Ready Engineering Systems
    Public trust is shaped not just by what we build but by how we prove it works. That means:
    - Engineering for auditability
    - Pre-wiring systems for regulatory inspection
    - Documenting assumptions and risk mitigation
    Let's equip Ofcom, the ICO, and the AI Safety Institute with the tools they need, and ensure engineering teams are ready to deliver.

    The public is asking: can we trust this? The best answer isn't a promise. It's a protocol.
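
The "translate into constraints" step above is where a value becomes testable. Below is a minimal sketch, in Python, of how the post's example constraint (<5% error-rate disparity across groups) could be encoded as an automated check in a test or CI pipeline. The function names, threshold constant, and toy data are illustrative assumptions, not part of ISO/IEEE 24748-7000 or any specific toolchain.

```python
from collections import defaultdict

def error_rate_disparity(y_true, y_pred, groups):
    """Return the largest gap in error rate between any two groups, plus per-group rates."""
    errors = defaultdict(int)
    counts = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        errors[group] += int(truth != pred)
    rates = {g: errors[g] / counts[g] for g in counts}
    return max(rates.values()) - min(rates.values()), rates

DISPARITY_LIMIT = 0.05  # the post's illustrative "<5%" fairness constraint

def check_fairness_constraint(y_true, y_pred, groups):
    """Fail loudly if the model violates the agreed error-rate disparity constraint."""
    disparity, rates = error_rate_disparity(y_true, y_pred, groups)
    assert disparity < DISPARITY_LIMIT, (
        f"Error-rate disparity {disparity:.3f} exceeds {DISPARITY_LIMIT}; "
        f"per-group error rates: {rates}"
    )

if __name__ == "__main__":
    # Toy predictions for two groups; in practice these come from a held-out evaluation set.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    check_fairness_constraint(y_true, y_pred, groups)
    print("Fairness constraint satisfied on the toy data.")
```

Run as part of the development lifecycle (step 3 of the VBE loop), a check like this turns an elicited value into a gate that a release cannot pass silently.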

  • Shubham Tripathi

    Helping organizations become AI-First | Expand revenue. Protect the core. Deliver value

    3,659 followers

    Over the last 3 days, we explored:
    1. The Trust Paradox: we trust LLMs more than people we know, without knowing how they work.
    2. The Input Dilemma: AI is only as good as what we feed it, yet we hesitate to be honest with a black box.
    3. The Systemic Risk: the models were trained on nearly everything public. We can't audit the past, and regulations may favor those already in power.

    So where does that leave us? Do we simply opt out? Wait for someone else to fix it? Or demand better?

    Trust won't be rebuilt through statements and safety boards. It has to be earned, structurally. Here are a few ways we could start (a sketch of one of them follows this post):
    - Open-source training summaries: not the entire dataset, but at least show what types of data shaped the model.
    - Transparent evaluation benchmarks: let users see what "good" looks like, and what trade-offs were made.
    - User-side audit tools: allow people to trace or delete their own data interactions if stored.
    - Model behavior disclosures: tell users what the model is optimized for: accuracy, engagement, or speed.

    AI doesn't need to be perfect to be trusted. But it needs to be transparent enough to be questioned, and accountable enough to be corrected. This isn't about making LLMs safe for society. It's about making society safe around LLMs.

    👉 If you had to design one feature that builds trust into AI, what would it be? Let's stop just using the models and start shaping the systems around them.

    #AItrust #ResponsibleAI #ModelTransparency #LLM #HumanAIInteraction #DigitalGovernance #AIadoption #TrustByDesign #AITrustandSafety
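
One way to picture the "model behavior disclosures" idea is a small, machine-readable manifest shipped alongside a model. The sketch below is a hypothetical schema: field names such as optimized_for and user_data_retention, and all of the example values, are assumptions rather than any existing standard or vendor format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDisclosure:
    """Hypothetical machine-readable 'model behavior disclosure'."""
    model_name: str
    optimized_for: list[str]       # e.g. accuracy, engagement, latency
    training_data_summary: dict    # coarse categories, not the raw dataset
    evaluation_benchmarks: dict    # benchmark name -> headline score
    known_tradeoffs: list[str]     # plain-language caveats for users
    user_data_retention: str       # what user interactions are stored, if any

disclosure = ModelDisclosure(
    model_name="example-assistant-v1",
    optimized_for=["helpfulness", "response latency"],
    training_data_summary={"web text": "60%", "code": "25%", "licensed books": "15%"},
    evaluation_benchmarks={"internal factuality eval": 0.87},
    known_tradeoffs=["may prefer fluent answers over uncertain but accurate ones"],
    user_data_retention="prompts stored for 30 days for abuse monitoring; deletable on request",
)

# Publishing this JSON alongside the model gives users something concrete to question.
print(json.dumps(asdict(disclosure), indent=2))
```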

  • Vaibhava Lakshmi Ravideshik

    AI Engineer | LinkedIn Learning Instructor | Titans Space Astronaut Candidate (03-2029) | Author - “Charting the Cosmos: AI’s expedition beyond Earth” | Knowledge Graphs, Ontologies and AI for Genomics

    17,687 followers

    Like a fortress growing taller but keeping the same cracks, large language models may be expanding without becoming safer. A collaborative study between the UK AI Security Institute, Anthropic, the University of Oxford, and the Alan Turing Institute exposes this unsettling symmetry.

    The study demonstrates that data poisoning does not dilute with scale. Even as models and datasets grow by orders of magnitude, the absolute number of poisoned samples required to implant a backdoor remains roughly constant. In their experiments, 250 poisoned documents were sufficient to compromise models ranging from 600M to 13B parameters, despite the largest model being trained on nearly twenty times more clean data (a short worked illustration follows this post).

    This overturns the long-held belief that increasing data volume would naturally "average out" adversarial noise. Instead, larger models appear to be more sample-efficient learners, capable of internalizing both useful and malicious signals with equal precision.

    For those of us working on trust layers over model training, through Knowledge Graphs, ontology-driven provenance, and dynamic data vetting, this finding reinforces a critical point: robustness is not an emergent property of scale; it must be deliberately engineered.

    Key implications include:
    1) Scaling laws for capability may mirror scaling laws for vulnerability.
    2) Fine-tuning or alignment processes cannot reliably erase deeply embedded backdoors; they often only suppress them.
    3) Graph-based reasoning layers may become essential for tracing data lineage and identifying subtle poisoning patterns before training.

    In the pursuit of larger and more capable models, the real challenge is ensuring that every data point shaping them remains interpretable, auditable, and trusted. Scaling safety will demand more than data volume; it will require transparency, traceability, and semantic intelligence across the entire data pipeline.

    Full-length article: https://lnkd.in/gmMNdFgF

    #AISafety #DataPoisoning #ModelRobustness #BackdoorAttacks #AdversarialAI #AICybersecurity #LLMSecurity #AITrust #AIIntegrity #ResponsibleAI #ScalingLaws #FoundationModels #LargeLanguageModels #ModelAlignment #AIAlignment #ModelScaling #AIResearch #MachineLearningResearch #KnowledgeGraphs #OntologyEngineering #DataLineage #DataProvenance #TrustworthyAI #ExplainableAI #InterpretableAI #SemanticAI #AIEthics #AIGovernance #SafeAI #AITransparency #AIForGood #TechPolicy #DigitalTrust #FutureOfAI #AI #MachineLearning #DeepLearning #GenerativeAI #TechInnovation #EmergingTech
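
To see why a roughly constant absolute number of poisoned documents is so striking, here is a back-of-the-envelope illustration. Only the 250-document figure comes from the study described above; the dataset sizes are hypothetical placeholders chosen solely to reflect the "nearly twenty times more clean data" ratio.

```python
POISONED_DOCS = 250  # reported as sufficient across model scales

# Hypothetical training-set sizes (in documents), chosen only to mirror the
# ~20x spread between the smallest and largest models in the study.
datasets = {
    "600M-parameter model": 10_000_000,
    "13B-parameter model": 200_000_000,
}

for name, n_docs in datasets.items():
    fraction = POISONED_DOCS / n_docs
    print(f"{name}: {POISONED_DOCS} poisoned docs = {fraction:.6%} of the training data")

# The point: the poisoned *fraction* shrinks ~20x with scale, yet the backdoor
# still takes hold, so adding clean data does not dilute the attack.
```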

  • Catherine McMillan

    Founding Director @ The AI Collective | Scaling global AI ecosystems

    7,165 followers

    Global trust in AI is collapsing. So we tested the question policymakers keep asking: will national regulation fix it?

    In studying 47 countries, our team at The AI Collective Institute discovered something surprising:
    - National AI regulation does not increase public trust.
    - The strongest predictor of trust? Daily use.

    That means the crisis of trust in AI isn't going to be solved by laws on paper alone. It has to be solved in people's lives. Through education, exposure, and deliberate, inclusive design.

    For me, this feels deeply personal. If we want AI to serve humanity, people need to believe in it. They need to see it working for them, not just being regulated above them.

    This is why we built the Institute: to bring evidence into the conversation, to challenge assumptions, and to give policymakers, builders, and citizens the tools to close the trust gap. You can explore the full report + an interactive map of 47 countries in the comments.

    I'm so proud of our team (Elizabeth Farrell, Stanley C., Katherine Vittini), our Policy Director Liel Zino, MPP, and the entire AI Collective community. This is just the beginning! The future of AI won't be decided by technology alone. It will be decided by us.

  • Jakob Mökander

    Director - Science & Technology Policy

    7,032 followers

    What does the UK think about AI? And how can we build trust to accelerate adoption? New research from TBI and Ipsos 📊 👇

    In January 2025, PM Keir Starmer announced the AI Opportunities Action Plan, setting out a bold vision to make the UK a global AI leader. But there is a clear gap between the government's narrative and public attitudes.

    New polling by the Tony Blair Institute for Global Change and Ipsos shows that:
    👫 While 25% of UK adults use generative AI weekly, nearly half have never used it
    📉 More UK adults view AI as a risk for the economy (39%) than an opportunity (20%)
    🛟 38% of UK adults cite lack of trust in AI content as the main barrier to adoption

    This matters. Without broad public support, the government will struggle to deliver on the AI Opportunities Action Plan and its wider growth agenda.

    Yet the picture is not all bleak. That 25% of UK adults already use generative AI weekly is remarkable, given that the technology is only a few years old (the internet took over 20 years to reach widespread adoption). Our survey also shows strong public support for AI use-cases with tangible benefits, like cancer screening. Further, comfort with AI is linked to the presence of safeguards.

    Five recommendations to build public trust in AI:
    1. Strategic communication: focus on use-cases that matter to people, not only efficiency gains. AI should be a tool to tackle real-world problems.
    2. Real-world evaluation: assess AI with human-centric benchmarks. AI systems that cause harm should be improved based on feedback.
    3. Responsible governance: strengthen the UK's sector-specific approach to AI regulation to ensure that AI systems are reliable, safe, and transparent.
    4. Digital upskilling: invest in training programs to build AI literacy, risk awareness, and confidence in real-world applications across the population.
    5. Public engagement: create opportunities for people to engage with AI practitioners and co-design public sector AI use cases.

    Read more 👉 https://bit.ly/4ndnu3d

    In short, the government has a role to play not only in ensuring AI systems are ethical, legal, and safe, but also in fostering healthy public attitudes to AI.

    HUGE thanks to the incredible team behind this report, including TBI colleagues Jo Puddick, Naman Goel, Alan Wager and Tim Rhydderch, as well as collaborators Daniel Cameron, Ben Roff and Helen Margetts! Thanks also to all experts who have provided valuable input, incl. Markus Anderljung, Elizabeth Seger, Karen Mcluskie, Ray Eitel-Porter, Daisy McGregor, Chris Jones, Tim Flagg, Kevin Luca Zandermann, Keegan McBride, Dr. Laura Gilbert CBE and many others!
