Comments for BRAID UK: Bridging Responsible AI Divides
https://braiduk.org/
Last updated: Mon, 08 Dec 2025 15:10:52 +0000

Comment on "The Responsible AI Ecosystem: Seven Lessons from the BRAID Landscape Study" by Taking Unbelievably Creative to the Lowry (Digital Skills Education)
https://braiduk.org/the-responsible-ai-ecosystem-seven-lessons-from-the-braid-landscape-study#comment-12
Mon, 30 Jun 2025 15:05:50 +0000
[…] of ‘responsible’, so I’ve enjoyed reading Fabio Tollon and Shannon Vallor’s Landscape Study looking at the development of the R-AI Ecosystem. I’ve uploaded the 7 key lessons they […]

Comment on "Machining Sonic Identities" by The Responsible AI Ecosystem: Seven Lessons from the BRAID Landscape Study (BRAID UK)
https://braiduk.org/machining-sonic-identities#comment-11
Wed, 18 Jun 2025 06:01:07 +0000
[…] funding, and shared community values. BRAID-funded projects like ‘Muted Registers’, ‘Machining Sonic Identities’, and ‘Sustainable AI Futures’ aim to broaden R-AI’s vision beyond harm reduction, toward […]

Comment on "CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures" by The Responsible AI Ecosystem: Seven Lessons from the BRAID Landscape Study (BRAID UK)
https://braiduk.org/creaatif-crafting-responsive-assessments-of-ai-and-tech-impacted-futures#comment-10
Wed, 18 Jun 2025 06:00:52 +0000
[…] interests of artists, designers, and writers, and early scoping work by the BRAID-funded project CREAATIF suggests that these tools are already harming more than they are helping creatives in the […]

Comment on "Medical AI and Sociotechnical Harm" by AI Regulation vs. Innovation: How Much Should The UK Let AI Run Free in 2025? (AI Is)
https://braiduk.org/regulatory-guidelines-informing-the-societal-and-ethical-factors-shaping-medical-ai-adoption#comment-9
Tue, 18 Feb 2025 13:24:52 +0000
[…] While existing content has discussed ethical concerns and public safety, this section focuses specifically on algorithmic bias and its societal implications. Algorithmic bias occurs when AI systems produce outcomes that unfairly disadvantage certain groups, often due to biased training data. For instance, studies have shown that facial recognition systems are less accurate for individuals with darker skin tones, leading to potential discrimination in law enforcement and hiring processes (BRAID UK). […]

Comment on "Creating a dynamic archive of responsible ecosystems in the context of creative AI" by Using AI for the Documentation of Intangible Cultural Heritage (University of Reading Digital Humanities Hub)
https://braiduk.org/creating-a-dynamic-archive-of-responsible-ecosystems-in-the-context-of-creative-ai#comment-8
Fri, 12 Jul 2024 09:06:20 +0000
[…] to the introduction of novel technologies. At the heart of the BRAID/AHRC-funded project ‘Creating a Dynamic Archive of Responsible Ecosystems in the Context of Creative AI’ (2023-4) has been the question of what archives might become in the future when AI will most […]

Comment on "A shrinking path to safety: how a narrowly technical approach to align AI with the public good could fail" by Model alignment protects against accidental harms, not intentional ones (CybAI news)
https://braiduk.org/a-shrinking-path-to-safety-how-a-narrowly-technical-approach-to-align-ai-with-the-public-good-could-fail#comment-7
Sat, 25 May 2024 23:33:13 +0000
[…] There are a couple of important caveats. Model alignment, especially RLHF, is hard to get right, and there have been aligned chatbots that were nonetheless harmful. And alignment doesn’t matter if the product concept is itself creepy. Finally, for combatting more serious kinds of accidental harms, such as those that might arise from autonomous agents, a narrowly technical approach is probably not enough. […]

Comment on "Responsible AI in International Public Service Media" by New partnership to research responsible AI in international public media (Public Media Alliance)
https://braiduk.org/responsible-ai-in-international-public-service-media#comment-6
Thu, 16 May 2024 09:01:29 +0000
[…] by the BRAID Fellowship and led by Dr Kate Wright (University of Edinburgh), the project will use PMA’s extensive network to research best practices in the deployment of AI in an […]

Comment on "A shrinking path to safety: how a narrowly technical approach to align AI with the public good could fail" by Our Researchers Favourite Articles 2023 (LCFI)
https://braiduk.org/a-shrinking-path-to-safety-how-a-narrowly-technical-approach-to-align-ai-with-the-public-good-could-fail#comment-5
Wed, 01 May 2024 11:23:38 +0000
[…] A shrinking path to safety: how a narrowly technical approach to align AI with the public good could… […]

Comment on "A shrinking path to safety: how a narrowly technical approach to align AI with the public good could fail" by Gentle Machines: The Case for the Humanities in an AI World (AI Spirituality)
https://braiduk.org/a-shrinking-path-to-safety-how-a-narrowly-technical-approach-to-align-ai-with-the-public-good-could-fail#comment-4
Sat, 27 Apr 2024 00:48:22 +0000
[…] of experts across law and ethics, design, and the history and philosophy of technology. Similarly, the creators of responsible AI tools were an amalgamation of computer and data scientists, social scientists, philosophers and […]

Comment on "A shrinking path to safety: how a narrowly technical approach to align AI with the public good could fail" by On the promise of arts and humanities research and the BRAID Fellowships (BRAID UK)
https://braiduk.org/a-shrinking-path-to-safety-how-a-narrowly-technical-approach-to-align-ai-with-the-public-good-could-fail#comment-3
Sun, 10 Dec 2023 10:26:23 +0000
[…] comprises a complex ecosystem of actors, stakeholders, researchers and publics. Yet as explained in our earlier blog post, too often the evaluation of excellence […]
