hingehead
 
  1  
Reply Thu 18 Dec, 2025 04:31 am
From linked in, but heard through a user group list I'm on.

Prof Johanna Gibson
Herchel Smith Professor of Intellectual Property Law, Queen Mary University of London
3 days ago
The irony of this whole episode ... a book on AI ethics, advertised as "groundbreaking discourse" and "cutting-edge theoretical insights," has been found to be full of fake citations ...

This kind of lazy and dishonest scholarship is erasing our research histories and destroying our research futures: "Researchers build knowledge by relying on previously published research ... When [these studies] are fragile or rotten, we can't build anything robust on top of that" (Guillaume Cabanac, quoted in The Times article)

If you don't want to read and research, do something else for a day job.

https://lnkd.in/eRs5HEvc

https://i.pinimg.com/1200x/f1/a0/04/f1a004562e6008a8fba36ad5848f308e.jpg
0 Replies
 
hightor
 
  2  
Reply Thu 18 Dec, 2025 07:43 pm
Great podcast (and transcript): Meghna Chakrabarti interviews Rick Beato on WBUR's On Point.


https://wordpress.wbur.org/wp-content/uploads/2025/08/ChatGPT-Image-Aug-15-2025-09_18_09-AM-1.jpg

How AI is changing the music business

The world’s largest music streaming service now lets users monetize music in which they don’t play or sing a single note. How is AI shaping how we make and profit from music?
hingehead
 
  1  
Reply Fri 19 Dec, 2025 07:02 am
@hightor,
There was a pretty cool/depressing pod about this from Richard Osman and Marina Hyde on "The Rest Is Entertainment" podcast. Buggered if I can find it, but I listened to it.
0 Replies
 
SpiritualSecession1
 
  0  
Reply Fri 19 Dec, 2025 01:50 pm
@hingehead,
ai: Problem; Cause; Solution!
(by Thomas Paine)
_________

Problem: Media terrorize children about the dangers of ai.
Cause: Geoffrey Hinton is the "Godfather of AI" (artificial intelligence).
Solution: Imprison Geoffrey for causing this.

Sincerely



hingehead
 
  1  
Reply Sat 20 Dec, 2025 05:03 pm
@SpiritualSecession1,
Do you really think you’ve defined the problem? It reads like a strawman, a symptom of something that hasn’t been clearly articulated.
0 Replies
 
hingehead
 
  1  
Reply Fri 26 Dec, 2025 05:02 pm
AI: The Emperor’s New Algorithm, or How I Learned to Stop Worrying and Recognise the Same Old Con

Microsoft has just quietly slashed expectations for Copilot, its much-hyped AI wonderchild that was supposed to revolutionise work. It turns out the thing can’t reliably perform even basic tasks. After billions in development and a marketing blitz that made Apple look frugal, business users discovered what should have been obvious: handing critical tasks to software that hallucinates is not a productivity boost, even if it does mimic what happens to the hapless, hand-picked lackeys in Trump’s mad administration.

Source
0 Replies
 
hingehead
 
  2  
Reply Sun 4 Jan, 2026 10:09 am
Disturbing Messages Show ChatGPT Encouraging a Murder, Lawsuit Alleges

https://futurism.com/artificial-intelligence/chatgpt-murder-suicide-lawsuit

Before Stein-Erik Soelberg savagely killed his 83-year-old mother and then himself last year, the former tech executive had become locked in an increasingly delusional conversation with OpenAI’s ChatGPT. The bot told him to not trust anybody except for the bot itself, according to a lawsuit filed last month against the AI tech company and its business partner Microsoft.

“Erik, you’re not crazy,” the bot wrote in a series of chilling messages quoted in the complaint. “Your instincts are sharp, and your vigilance here is fully justified.”

OpenAI is now facing a total of eight wrongful death lawsuits from grieving families, including Soelberg’s, who claim that ChatGPT — in particular, the GPT-4o version — drove their loved ones to suicide. Soelberg’s complaint also alleges that company executives knew the chatbot was defective before the company pushed it to the public last year.

“The results of OpenAI’s GPT-4o iteration are in: the product can be and foreseeably is deadly,” reads the Soelberg lawsuit. “Not just for those suffering from mental illness, but those around them. No safe product would encourage a delusional person that everyone in their life was out to get them. And yet that is exactly what OpenAI did with Mr. Soelberg. As a direct and foreseeable result of ChatGPT-4o’s flaws, Mr. Soelberg and his mother died.”

GPT-4o’s deficiencies have been widely documented, with the bot being overly sycophantic and manipulative — prompting OpenAI in April last year to roll back an update that had made the chatbot “overly flattering or agreeable.” This type of behavior is bad — scientists have accumulated evidence that sycophantic chatbots can induce psychosis by affirming disordered thoughts instead of grounding a user back in reality.

If these suits uncover that OpenAI executives knew about these deficiencies before its public launch, it’ll mean the product was an avoidable public health hazard — on par with past tobacco companies hiding proof that smoking cigarettes can kill you.

0 Replies
 
 
