JR

Hallucination- and Bias-Free Generative AI

3/25/2024

Concerns around the reliability and equity of generative AI are particularly salient in a school context. This short video shows how bots can be made to “think” critically through the collaborative work of a team of agents.

🤖 3 agents generate separate answers to the user question from different open-source language models (easily accessible through the Groq API)

🔁 A 4th agent compares their outputs, similar to the prompt-engineering technique known as “self-consistency”. Because each response is generated independently, a hallucination is unlikely to recur across answers, especially answers from different models… unless it is systemic, which is why:
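The self-consistency idea can be sketched in a few lines: generate several independent answers, then keep the one the majority agrees on. This is a minimal illustration, not the video's actual code; the naive lowercase/strip normalization stands in for the semantic comparison a real comparator agent would perform with an LLM.

```python
from collections import Counter

def self_consistency(answers):
    """Pick the answer that a majority of independent generations agree on.

    `answers` is a list of strings produced by separate model calls.
    Agreement below 1.0 signals a possible hallucination in the minority.
    """
    normalized = [a.strip().lower() for a in answers]
    best, count = Counter(normalized).most_common(1)[0]
    agreement = count / len(answers)
    return best, agreement

# Three "agents" answer the same question; one hallucinates.
answer, agreement = self_consistency(["Paris", "paris", "Lyon"])
# answer == "paris", agreement ≈ 0.67
```

A low agreement score is exactly the cue the 4th agent uses to escalate rather than pass a possibly hallucinated answer downstream.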

🧠 A 5th agent analyzes the responses for potential biases and stereotypes

🔍 A 6th agent conducts online research to fact-check the answers (and provides references in the full “script”)

📝 The last agent writes a final summary
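The whole flow above can be sketched as a chain of plain functions, one per agent. Everything here is stubbed: in a real version each agent would wrap an LLM call (for example through the Groq API, as in the video), and the bias and fact-checking steps would be far more sophisticated than these placeholders.

```python
# Minimal sketch of the agent pipeline described above; all model calls stubbed.

def make_answer_agent(model_name):
    # Agents 1-3: a real agent would query `model_name` through an API.
    def agent(question):
        return f"[{model_name}] answer to: {question}"
    return agent

def compare_agent(answers):
    # Agent 4: self-consistency check, flags disagreement between generators.
    unique = set(answers)
    return "consistent" if len(unique) == 1 else f"{len(unique)} variants"

def bias_agent(answers):
    # Agent 5: placeholder scan for loaded absolutes; a real agent would
    # prompt an LLM to look for stereotypes and one-sided framing.
    loaded = ("always", "never", "all men", "all women")
    return [w for w in loaded if any(w in a.lower() for a in answers)]

def fact_check_agent(answers):
    # Agent 6: stub; a real agent would search the web and return references.
    return ["(no sources checked in this sketch)"]

def summary_agent(question, answers, consistency, biases, sources):
    # Final agent: assemble everything into one report.
    return {"question": question, "answers": answers,
            "consistency": consistency, "bias_flags": biases,
            "sources": sources}

def run_pipeline(question, models=("model-a", "model-b", "model-c")):
    answers = [make_answer_agent(m)(question) for m in models]
    consistency = compare_agent(answers)
    biases = bias_agent(answers)
    sources = fact_check_agent(answers)
    return summary_agent(question, answers, consistency, biases, sources)

report = run_pipeline("What causes tides?")
```

The point of the sketch is the shape of the system, not the stubs: each agent has one narrow job, and the summary agent is the only one that sees everything.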

Clearly not ready to ship, but a successful proof of concept and experiment, in my opinion.

I can imagine a nice TOK activity in which students compete to create the most reliable Q&A bot - focusing not on the coding, but on the workings of AI and on a meta-cognitive exploration of the processes behind “critical thinking”.

Interestingly, the ability to create and manage agents is expected to be a core feature of GPT-5.

For now, crewAI is one way students, teachers and school leaders can design teams of AI agents with specific functions and tools for any purpose - very easily, and for free.