This one started with a job application. I was looking into Temporal as part of the process and, as tends to happen, the learning took on a life of its own. The result was a playable game rather than a tidy tutorial project, which probably says something about how my brain works.
The Hallucinated Truth is based on The Unbelievable Truth, the BBC radio panel show in which a contestant delivers a story that is almost entirely nonsense with a handful of genuine facts hidden inside it. The other players have to spot the truths buried among the lies. I thought it would be interesting to recreate that format using a large language model, partly because it is a genuinely fun concept and partly because the discourse around AI hallucinations tends to be relentlessly gloomy. If the model is going to make things up, you might as well build a game around it.
The stack brings together Temporal for workflow orchestration, LangChain and Ollama running llama3 locally, and the Google Custom Search API to ground the hidden facts in real, verifiable information. You pick a subject, the LLM generates an elaborate and largely fictional story about it, and your job is to identify which statements are actually true. The LLM then evaluates your answers semantically, so you do not need to quote the story back verbatim.
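To make the orchestration concrete, here is a minimal sketch of how a round could be wired up with Temporal's Python SDK. The workflow, signal, and activity names here are my own inventions, and the activities are stubbed out; the real project's shapes will differ.

```python
from datetime import timedelta

from temporalio import activity, workflow


@activity.defn
async def fetch_true_facts(subject: str) -> list[str]:
    # In the real project this would call the Google Custom Search API;
    # stubbed here to keep the sketch self-contained.
    return [f"A verifiable fact about {subject}"]


@activity.defn
async def generate_story(subject: str, facts: list[str]) -> str:
    # Would prompt llama3 via LangChain/Ollama to bury the facts in fiction.
    return f"An elaborate tale about {subject} hiding {len(facts)} truths."


@activity.defn
async def evaluate_answers(facts: list[str], guesses: list[str]) -> dict:
    # Would ask the LLM whether each guess semantically matches a hidden fact.
    return {"score": 0, "matched": []}


@workflow.defn
class HallucinatedTruthRound:
    def __init__(self) -> None:
        self._guesses: list[str] | None = None

    @workflow.signal
    def submit_guesses(self, guesses: list[str]) -> None:
        self._guesses = guesses

    @workflow.run
    async def run(self, subject: str) -> dict:
        facts = await workflow.execute_activity(
            fetch_true_facts,
            subject,
            start_to_close_timeout=timedelta(seconds=30),
        )
        story = await workflow.execute_activity(
            generate_story,
            args=[subject, facts],
            start_to_close_timeout=timedelta(minutes=5),
        )
        workflow.logger.info("Story ready: %s", story)
        # The workflow parks here, durably, until the player submits guesses.
        await workflow.wait_condition(lambda: self._guesses is not None)
        return await workflow.execute_activity(
            evaluate_answers,
            args=[facts, self._guesses],
            start_to_close_timeout=timedelta(minutes=5),
        )
```

Structuring each step as a retryable activity means a flaky search call or a slow local model never corrupts the game state, and the durable wait on the signal is exactly the sort of thing Temporal makes trivial.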
The whole thing runs locally in Docker. It is primarily a proof of concept for how orchestration, local LLMs, and live web search can be combined into something structured and inspectable, but it is also genuinely quite good fun to play.
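As for the semantic answer-checking, the LangChain side can stay very small. Here is a sketch, assuming the langchain-ollama package and a running Ollama daemon with llama3 pulled; the prompt wording is illustrative, not the project's actual prompt.

```python
from langchain_ollama import ChatOllama

# Temperature 0 keeps the judge as deterministic as a local model gets.
llm = ChatOllama(model="llama3", temperature=0)


def guess_matches_fact(guess: str, fact: str) -> bool:
    """Ask the local model whether a player's guess expresses the same fact."""
    reply = llm.invoke(
        "Answer YES or NO only. Does the statement "
        f"'{guess}' express the same fact as '{fact}'?"
    )
    return reply.content.strip().upper().startswith("YES")
```

Constraining the model to a bare YES or NO keeps the parsing trivial, which is what lets players paraphrase rather than quote the story back verbatim.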