In the case of ChatGPT, that essentially means that the system can generate answers that are not rooted in reality. Simply put, ChatGPT doesn’t understand anything it’s saying — it’s creating new patterns based on human language. Consider this nonsense question I posed to GPT-3:

What is Mozart's 1976 hit "The Fast and the Furious: Tokyo Drift" about?

Mozart's 1976 hit "The Fast and the Furious: Tokyo Drift" is not related to the 2006 film of the same name. It is an instrumental piece that captures the frenetic energy of Tokyo’s streets at night. The song is built around a catchy, syncopated piano line and a driving, upbeat rhythm.

While intriguing, there is no such instrumental piece. I asked a nonsensical question, and GPT-3 obliged me with a nonsensical answer (Nonsense In, Nonsense Out). I should point out that ChatGPT already has some constraints built in to address problems like this, and more complex interfaces (including OpenAI’s GPT-3 interface) let you control “randomness” (higher randomness = fewer constraints), but the core problem is a hard one.

Again, consider a higher-risk financial question:

What's more popular: Dogecoin, Bassetcoin, or Norwegianlundehundcoin?

Dogecoin is by far the most popular of the three coins, with a current market capitalization of over $9 billion. Bassetcoin and Norwegianlundehundcoin are much smaller coins with market caps of only a few hundred thousand dollars each.

A market cap of a few hundred thousand dollars each is pretty impressive for two cryptocurrencies that (as of this writing) don’t exist. I’m sorry to say that I started this example with Labradoodlecoin, only to discover that Labradoodlecoin actually exists.
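The “randomness” knob mentioned above is usually sampling temperature: the model’s raw scores are divided by a temperature before being turned into probabilities, so a low temperature makes the top choice dominate while a high one spreads probability across alternatives. Here is a minimal, self-contained sketch of that idea with a toy three-word vocabulary and made-up scores — an illustration of the general mechanism, not OpenAI’s actual implementation.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample an index from `logits` after temperature scaling.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more "random").
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy vocabulary and scores: the model slightly prefers "Mozart".
vocab = ["Mozart", "Tokyo", "Drift"]
logits = [2.0, 1.0, 0.5]

random.seed(0)
# At very low temperature, the top-scoring word wins almost every draw.
low = [vocab[sample_with_temperature(logits, 0.1)] for _ in range(20)]
# At high temperature, draws spread across the whole vocabulary.
high = [vocab[sample_with_temperature(logits, 5.0)] for _ in range(20)]
```

Run both loops and compare: the low-temperature samples are nearly all “Mozart,” while the high-temperature samples mix all three words — which is why turning randomness down constrains the output, at the cost of making it more repetitive.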