

Are AI models doomed to always hallucinate?



Large language models (LLMs) like OpenAI’s ChatGPT all suffer from the same problem: they make stuff up.

The mistakes range from strange and innocuous — like claiming that the Golden Gate Bridge was transported across Egypt in 2016 — to highly problematic, even dangerous.

A mayor in Australia recently threatened to sue OpenAI because ChatGPT mistakenly claimed he pleaded guilty in a major bribery scandal. Researchers have found that LLM hallucinations can be exploited to distribute malicious code packages to unsuspecting software developers. And LLMs frequently give bad mental health and medical advice, like that wine consumption can “prevent cancer.”

This tendency to invent “facts” is a phenomenon known as hallucination, and it happens because of the way today’s LLMs — and all generative AI models, for that matter — are developed and trained.

Training models

Generative AI models have no real intelligence — they’re statistical systems that predict words, images, speech, music or other data. Fed an enormous number of examples, usually sourced from the public web, AI models learn how likely data is to occur based on patterns, including the context of any surrounding data.

For example, given a typical email ending in the fragment “Looking forward…”, an LLM might complete it with “… to hearing back” — following the pattern of the countless emails it’s been trained on. It doesn’t mean the LLM is looking forward to anything.
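The statistical idea can be sketched with a toy bigram model: count, in a small invented corpus, which word most often follows each word, then "complete" text by picking the most frequent successor. This is only an illustration of the principle; real LLMs are neural networks trained over vast corpora, and the corpus below is made up for the example.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the countless emails an LLM is trained on.
corpus = [
    "looking forward to hearing back",
    "looking forward to hearing from you",
    "looking forward to meeting you",
    "thanks and looking forward to hearing back",
]

# Count which word follows each word (a bigram frequency table, a
# drastically simplified stand-in for a neural language model).
next_word = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        next_word[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word given the previous word."""
    return next_word[word].most_common(1)[0][0]

print(predict("forward"))  # "to" -- the most frequent continuation
```

The model "knows" nothing about the meaning of "looking forward"; it only reproduces the most common pattern in its training data, which is the source of both its fluency and its mistakes.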

“The current framework of training LLMs involves concealing, or ‘masking,’ previous words for context” and having the model predict which words should replace the concealed ones, Sebastian Berns, a Ph.D. researcher at Queen Mary University of London, told TechCrunch in an email interview. “This is conceptually similar to using predictive text in iOS and continually pressing one of the suggested next words.”

This probability-based approach works remarkably well at scale — for the most part. But while the range of words and their probabilities are likely to result in text that makes sense, it’s far from certain.

LLMs can generate something that’s grammatically correct but nonsensical, for instance — like the claim about the Golden Gate. Or they can spout mistruths, propagating inaccuracies in their training data. Or they can conflate different sources of information, including fictional sources, even if those sources clearly contradict each other.

It’s not malicious on the LLMs’ part. They don’t have malice, and the concepts of true and false are meaningless to them. They’ve simply learned to associate certain words or phrases with certain concepts, even if those associations aren’t accurate.

“‘Hallucinations’ are connected to the inability of an LLM to estimate the uncertainty of its own prediction,” Berns said. “An LLM is typically trained to always produce an output, even when the input is very different from the training data. A standard LLM does not have any way of knowing if it’s capable of reliably answering a query or making a prediction.”
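One simple proxy for the uncertainty Berns describes is the entropy of the model's next-token distribution: a flat distribution signals that the model has no strong preference, while a peaked one signals confidence. The probability vectors below are hand-made for illustration rather than taken from a real model.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident prediction: most probability mass on one token.
confident = [0.90, 0.05, 0.03, 0.02]
# An unconfident prediction: mass spread evenly across tokens.
uncertain = [0.25, 0.25, 0.25, 0.25]

print(entropy(confident) < entropy(uncertain))  # True
```

A standard LLM computes these probabilities at every step but, as Berns notes, isn't trained to act on them, so it emits a fluent answer even when the distribution is nearly flat.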

Solving hallucination

The question is, can hallucination be solved? It depends on what you mean by “solved.”

Vu Ha, an applied researcher and engineer at the Allen Institute for Artificial Intelligence, asserts that LLMs “do and will always hallucinate.” But he also believes there are concrete ways to reduce — albeit not eliminate — hallucinations, depending on how an LLM is trained and deployed. 

“Consider a question answering system,” Ha said via email. “It’s possible to engineer it to have high accuracy by curating a high quality knowledge base of questions and answers, and connecting this knowledge base with an LLM to provide accurate answers via a retrieval-like process.”
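Ha's retrieval-like process can be illustrated with a toy lookup over a tiny curated knowledge base, scoring entries by keyword overlap with the query. The knowledge base, the questions, and the scoring here are all invented for illustration; production systems typically use embedding-based retrieval and then hand the retrieved passage to the LLM as grounding context rather than returning a stored answer directly.

```python
# A tiny curated knowledge base of question/answer pairs. In a real
# system the retrieved entry would be passed to an LLM as context;
# here we return the stored answer to show the retrieval step itself.
knowledge_base = [
    ("what is the capital of France", "Paris"),
    ("who wrote Hamlet", "William Shakespeare"),
    ("what year did Apollo 11 land on the Moon", "1969"),
]

def retrieve_answer(query):
    """Return the answer whose stored question shares the most words with the query."""
    query_words = set(query.lower().split())
    best = max(
        knowledge_base,
        key=lambda qa: len(query_words & set(qa[0].lower().split())),
    )
    return best[1]

print(retrieve_answer("who wrote the play Hamlet"))  # William Shakespeare
```

Because every answer is traceable to a curated entry, the system's accuracy is bounded by the quality of the knowledge base rather than by whatever the model happens to have memorized.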

Ha illustrated the difference between an LLM with a “high quality” knowledge base to draw on versus one with less careful data curation. He ran the question “Who are the authors of the Toolformer paper?” (Toolformer is an AI model trained by Meta) through Microsoft’s LLM-powered Bing Chat and Google’s Bard. Bing Chat correctly listed all eight Meta co-authors, while Bard misattributed the paper to researchers at Google and Hugging Face.

“Any deployed LLM-based system will hallucinate. The real question is if the benefits outweigh the negative outcome caused by hallucination,” Ha said. In other words, if there’s no obvious harm done by a model — the model gets a date or name wrong once in a while, say — but it’s otherwise helpful, then it might be worth the trade-off. “It’s a question of maximizing expected utility of the AI,” he added.

Berns pointed out another technique that had been used with some success to reduce hallucinations in LLMs: reinforcement learning from human feedback (RLHF). Introduced by OpenAI in 2017, RLHF involves training an LLM, then gathering additional information to train a “reward” model and fine-tuning the LLM with the reward model via reinforcement learning.

In RLHF, a set of prompts from a predefined data set is passed through an LLM to generate new text. Then, human annotators rank the outputs from the LLM in terms of their overall “helpfulness” — data that’s used to train the reward model. The reward model, which at this point can take in any text and assign it a score reflecting how well humans would perceive it, is then used to fine-tune the LLM’s generated responses.
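The reward model at the heart of this pipeline is commonly trained with a pairwise ranking loss: for each pair of outputs, push the score of the human-preferred one above the score of the rejected one. The sketch below shows only that loss on hand-picked scores; actual RLHF then optimizes the LLM against the trained reward model with reinforcement learning.

```python
import math

def pairwise_loss(r_chosen, r_rejected):
    """Ranking loss for reward-model training: -log(sigmoid(chosen - rejected)).
    Small when the human-preferred output already scores higher, large
    when the model scores the pair the wrong way round."""
    return -math.log(1 / (1 + math.exp(-(r_chosen - r_rejected))))

# Reward model agrees with the human ranking -> small loss.
# Reward model disagrees with the human ranking -> large loss.
print(pairwise_loss(2.0, 0.0) < pairwise_loss(0.0, 2.0))  # True
```

Minimizing this loss over many human-ranked pairs teaches the reward model to score text the way annotators would, which is what makes it usable as a training signal for the LLM itself.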

OpenAI leveraged RLHF to train several of its models, including GPT-4. But even RLHF isn’t perfect, Berns warned.

“I believe the space of possibilities is too large to fully ‘align’ LLMs with RLHF,” Berns said. “Something often done in the RLHF setting is training a model to produce an ‘I don’t know’ answer [to a tricky question], primarily relying on human domain knowledge and hoping the model generalizes it to its own domain knowledge. Often it does, but it can be a bit finicky.”

Alternative philosophies

Assuming hallucination isn’t solvable, at least not with today’s LLMs, is that a bad thing? Berns doesn’t think so, actually. Hallucinating models could fuel creativity by acting as a “co-creative partner,” he posits — giving outputs that might not be wholly factual but that contain some useful threads to tug on nonetheless. Creative uses of hallucination can produce outcomes or combinations of ideas that might not occur to most people.

“‘Hallucinations’ are a problem if generated statements are factually incorrect or violate any general human, social or specific cultural values — in scenarios where a person relies on the LLM to be an expert,” he said. “But in creative or artistic tasks, the ability to come up with unexpected outputs can be valuable. A human recipient might be surprised by a response to a query and therefore be pushed into a certain direction of thoughts which might lead to the novel connection of ideas.”

Ha argued that the LLMs of today are being held to an unreasonable standard — humans “hallucinate” too, after all, when we misremember or otherwise misrepresent the truth. But with LLMs, he believes we experience a cognitive dissonance because the models produce outputs that look good on the surface but contain errors upon further inspection.

“Simply put, LLMs, just like any AI techniques, are imperfect and thus make mistakes,” he said. “Traditionally, we’re OK with AI systems making mistakes since we expect and accept imperfections. But it’s more nuanced when LLMs make mistakes.”

Indeed, the answer may well not lie in how generative AI models work at the technical level. Insofar as there’s a “solution” to hallucination today, treating models’ predictions with a skeptical eye seems to be the best approach.

Disclaimer – This is just shared content from above mentioned source for knowledge sharing.


TC Startup Battlefield master class with Canvas Ventures: Creating strategic defensibility as an early-stage startup



Each year, TechCrunch selects the top 200 early-stage founders from across the globe to feature at TechCrunch Disrupt in San Francisco. And as part of our programming, we host master classes with industry experts and venture capitalists to provide tactical advice and insight to these founders.

Today, I’m excited to share the first of a four-part series with Canvas Ventures’ Mike Ghaffary. In this session, Ghaffary outlined the important components of startup defensibility, the key strategic advantage buckets, and what startups can do to stay competitive as they build and scale.

This private session took place in August, and we are sharing these now so all of you can also reap the benefits of Startup Battlefield.



Meta’s $500 Quest 3 targets consumer mixed reality



Meta’s Quest Pro arrived to a mixed reaction when it launched late last year. The consensus – if one can be found – was that the headset presented some impressive technological leaps over its consumer predecessor (the Quest 2), but the $1,500 price tag was ultimately prohibitively expensive. If that sounds at all familiar, it’s because that’s more or less the same feedback we see every time an intriguing new headset hits the market.

I had the opportunity to try the headset out back in January at CES, along with the latest from HTC, Magic Leap and Sony PlayStation. I probably shouldn’t have tried it on immediately after the Magic Leap 2 – which was the ultimate example of very good, but entirely too expensive XR technology.

The Quest Pro isn’t the Magic Leap, even though the two are effectively going after the same subset of users: enterprise clients. Meta and Magic Leap both – I think rightfully – determined that the real money is in selling headsets for training, prototyping and other business-minded functions. Many big corporations will spend $1,500 (or even $3,300) without batting an eye, if it means saving money in the long run.

But Meta is not quite ready to abandon the consumer market just yet – nor is it ready to put all its eggs in the AR basket. Sticking to mixed reality affords a fuller spectrum of applications, including more immersive VR experiences, games among them. For the AR bit, opaque headsets like the Quest Pro rely on passthrough technology, using on-board cameras to effectively reconstruct an image of the world around you.

It’s no surprise, then, that the new Quest 3 maintains that technology. The big question is why the Quest Pro is sticking around. The obvious answer is that the Pro is less than a year old. The Quest 2, on the other hand, is a week or two short of its third birthday – in fact, it was released so long ago that it still carried the Oculus name.

The Meta Quest 3 mixed reality headset, sitting on Meta's first-party charging stand

Image Credits: Darrell Etherington

Ultimately, however, there is a lot about this new headset that makes the Pro version seem almost redundant – or, at the very least, very overpriced. While it’s true that the new headset lacks some of that enterprise edition’s more premium features, the Pro’s starting price is around 3x that of the Quest 3. That’s not easy to justify. Of course, Meta’s not really thinking much about enterprise here.

Last week, we attended a briefing in the Bay Area featuring the new headset. The Meta Quest 3 inherits a lot of DNA from the Pro, including its mixed reality platform. Even if the company hadn’t already invested years and millions into the VR content side of things, maintaining both categories would be foundational, as full immersion lends itself better to the non-casual end of the gaming spectrum. With the exception of a relative handful of titles like Pokémon Go, the current generation of titles doesn’t require a player to be tied to a fixed real-world location.

According to Meta, the Quest 3’s full-color Passthrough tech has 10x as many pixels as its predecessor and 3x more than the significantly pricier Quest Pro. The visuals are powered by a pair of displays (one per eye) that measure in at 2064 x 2208 pixels (“4K+ Infinite Display”). It’s the highest-res display on any Meta/Oculus device. The 110-degree field of view is roughly 15% wider than the Quest 2’s.

Man wearing the Meta Quest 3 mixed reality headset, holding a controller, viewed from the side


The system is powered by the newly announced Qualcomm Snapdragon XR2 Gen 2 chip, which itself promises double the GPU processing power of the Gen 1. In keeping with that, 50 upcoming titles are graphically improved versions of older games. Or you can just go ahead and play any of the 500 or so Quest 2-compatible games/apps. There are also 50 entirely new titles coming to the platform.

Our hands-on experience with the headset involved some quick game demos, none of them nearly long enough to yield a full-on review. But that’s kind of the whole deal with these sorts of events. Among the titles were Ghostbusters: Rise of the Ghost Lord, Samba de Amigo and Stranger Things: Tender Claws. Of the three, Ghostbusters is the one that really stuck with me. I admit I’ve got a childhood soft spot for that one – but also, when I close my eyes and think about VR’s promise, it’s these sorts of immersive experiences I picture.

The headset is fairly comfortable. Again, I admit that I didn’t have a ton of time with it – I’ll have to save the more comprehensive writeup for a review. But at 515 grams, it’s a good bit lighter than the notoriously heavy 722 gram Quest Pro. It’s also not a huge bump from the Quest 2’s 500 grams. It’s far easier to imagine working out in Quest 3, versus the professional model.

The visuals are a marked improvement over the last generation. They’re higher res and crisper, which goes a long way toward adding immersion to the whole experience. So, too, do the 40% louder speakers, paired with 3D spatial audio tech.

Close up of the top of the Meta Quest 3 touch controller

Image Credits: Darrell Etherington

The headset looks a good bit like the Quest 2, though there are now three slits in the front of the visor, positioning the cameras directly in front of the eyes. The system also uses SLAM (simultaneous localization and mapping) to map the environment and determine the position of walls and other landmarks. This is more or less the same technology found in autonomous cars and robotic systems. This can help you avoid getting too close to obstacles when in VR and tie graphics to real-world objects in AR. The Quest 3 does, however, drop the Pro’s face and eye tracking — so that’s a point in the pricier model’s favor.

The system ships with a pair of refined Touch Plus controllers, which drop their predecessor’s rings, while getting improved haptic feedback. “Feel more connected to every experience with ergonomic, ring-free Touch Plus controllers that let you experience realistic sensations and fine-tuned precision – as if you’re actually holding a bow, scrambling up skyscrapers or blasting through space,” Meta writes. “You can even explore without controllers, thanks to Direct Touch that follows your gestures, letting you use just your hands to find your way.”

The Meta Quest 3 mixed reality headset, sitting on a first-party charger with an orange headstrap

Image Credits: Darrell Etherington

The controllers weigh in at 126 grams (including the AAA battery) — 38 grams lighter than the older Touch controllers. The headset should take around two hours to charge from 0-100%. 

Meta is promising roughly the same battery life for the headset as the Quest 2, which was rated at 2-3 hours. Here’s a more complete breakdown directly from the company:

  • Overall: Up to 2.2 hours of usage on average
  • Media: 2.9 hours of usage on average
  • Gaming: 2.4 hours of usage on average
  • Social: 2.2 hours of usage on average
  • Productivity: 1.5 hours of usage on average

Pre-orders start today, shipping on 10/10. If you buy the 128GB model ($499) before 1/27/24, Meta will toss in a free copy of Asgard’s Wrath 2. Pick up the 512GB model ($650), and you get the game, along with a six-month Meta Quest+ subscription.

Read more about Meta Connect on TechCrunch



The Dungeons & Dragons DLC for Minecraft includes dice rolls, magic missiles and more



Minecraft just dropped its newest DLC for Dungeons & Dragons (D&D) players to enjoy. In partnership with D&D publisher Wizards of the Coast and Everbloom Games, the DLC takes players on an adventure into the Forgotten Realms, letting them explore classic locations like Candlekeep, Icewind Dale, Revel’s End and more.

The most interesting part about the new DLC is that it introduces new mechanics from the tabletop roleplaying game that many Minecraft players may not be familiar with. However, note that it isn’t a direct D&D simulator and still has the same framework as Minecraft.

The D&D DLC allows players to unlock various spells, customize stats, roll d20s, chat with NPCs and level up their character, as well as choose from four classes: barbarian, paladin, rogue and wizard. (You can cast fireball in Minecraft now? Sign us up.)

There are also new monsters to attack, including goblins, dragons, mind flayers, mimics, displacer beasts and beholders, among other iconic creatures from D&D lore. Plus, it features a new interface with a quest log, inventory and glossary screens.

Alongside the launch, Minecraft is introducing a free adventure made for 3rd-level characters called Lightning Keep, where players have to save refugees from a dragon. In addition, Wizards of the Coast released a new Minecraft-themed Monstrous Compendium, providing information on Minecraft mobs like a creeper’s defense stats or an ender dragon’s dexterity.

The new DLC pack is available on the Minecraft Marketplace for 1510 Minecoins (approximately $8 USD).


