Reply Sat 18 Oct, 2025 07:42 pm
For professional reasons I keep a close eye on developments in AI, and the resulting fallout.

Creating this forum to share interesting news - pro and anti AI - with a focus on the socio-economic impact

 
hingehead
 
Reply Sat 18 Oct, 2025 07:43 pm
@hingehead,
First, an overview of the current state of play through Australian eyes:
https://i.pinimg.com/736x/16/f6/0a/16f60a1f43d192844bfec156b15b9b48.jpg
 
edgarblythe
 
Reply Sat 18 Oct, 2025 09:03 pm
As a writer, I have to be wary of AI. I do interact with a single AI, but I avoid literary references. I use it as an alternative to Google at times, when I'm not getting what I want. I don't converse with AI, and don't make any of it personal.
hingehead
 
Reply Sat 18 Oct, 2025 09:40 pm
@edgarblythe,
Wise. I'm not totally anti, in principle, but I have a stock of stories of professional misconduct due to the mistaken belief that AI is actually intelligent as opposed to a 'probability-based guessing machine'. Court documents, consultant reports to government, detection of student AI use - all unverified and unchecked before being acted on, and then blowing up in people's faces.

Even if it was as bright as your best intern, you'd check its work before submitting it, right? Gobsmacks me. And there's that sneaking suspicion it's all another Ponzi scheme - way too much money has been invested in it for the returns. If the 'trough of disillusionment' is deep enough, entire economies are going to be slugged hard.

That said, my institution is pushing forward with adopting it (fortunately not as zealots), but with the mindset that the students we graduate will need to know how to use it prudently.

It will hopefully get better - but right now the generic GenAI bots/engines are often ludicrously wrong.

To get a sense of how wrong, ask it a question you already know the answer to (rather than one you don't, which is much harder to check).
edgarblythe
 
Reply Sat 18 Oct, 2025 10:30 pm
@hingehead,
The one I use is definitely programmed to interpret certain things according to the programmer's politics.
hingehead
 
Reply Sat 18 Oct, 2025 10:54 pm
@edgarblythe,
All of the ones I've used have a sycophancy I find abhorrent. As someone said, "AI will never tell you that you're the asshole".

The AI engine deployed with one of our library discovery tools actually has trigger phrases. If you ask it to show you evidence that vaccines cause autism, it returns an error rather than providing any sort of answer. However, if you rephrase the question without an implied point of view, e.g. "Is there any evidence that points to a link between vaccination and autism", you get an answer.

BTW the tool I'm talking about uses our library collection as the LLM's source corpus, and the AI's main role is to turn your natural-language question into a boolean search of the library's collection, return the 5 most relevant results (relevancy is a function of the library's search tool, not the AI), and then summarise whatever full text was available in those five results.

To its credit, you can see the boolean search it created, the full citations and links for the top 5 results, AND a link to the entire search. It doesn't hallucinate in the sense that it doesn't make up citations.
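The pipeline described above can be sketched in miniature. This is purely illustrative: the function names, the stopword list, and the toy "collection" are all invented here, and the real tool's query rewriting, relevance ranking, and summarisation are its own (the summary step would be an LLM call, not a first-sentence grab).

```python
# Hypothetical sketch of the discovery-tool flow: natural-language question
# -> boolean search -> top-5 results -> summary of the retrieved text.

STOPWORDS = {"is", "there", "any", "a", "an", "the", "to", "of",
             "and", "between", "do", "does", "what"}

def to_boolean_query(question: str) -> str:
    """Keep the content words and join them with AND, standing in for
    whatever query rewriting the real tool performs."""
    terms = [w.strip("?.,").lower() for w in question.split()]
    keywords = [t for t in terms if t and t not in STOPWORDS]
    return " AND ".join(keywords)

def search(collection: list[dict], boolean_query: str, k: int = 5) -> list[dict]:
    """Naive relevance: count how many query terms appear in each record.
    As the post notes, relevancy belongs to the search tool, not the AI."""
    terms = boolean_query.split(" AND ")
    scored = []
    for rec in collection:
        text = (rec["title"] + " " + rec["abstract"]).lower()
        score = sum(1 for t in terms if t in text)
        if score:
            scored.append((score, rec))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [rec for _, rec in scored[:k]]

def summarise(results: list[dict]) -> str:
    """Stand-in for the LLM summary step: first sentence of each abstract."""
    return " ".join(r["abstract"].split(". ")[0] + "." for r in results)

# Toy collection, invented for the example.
collection = [
    {"title": "Vaccination and autism: a review",
     "abstract": "No link found. Large cohort studies show no association."},
    {"title": "Measles outbreaks",
     "abstract": "Vaccine hesitancy drives outbreaks. Coverage matters."},
]

query = to_boolean_query(
    "Is there any evidence of a link between vaccination and autism")
# query == "evidence AND link AND vaccination AND autism"
top = search(collection, query)
summary = summarise(top)
```

The transparency the post praises falls out naturally from this shape: because the boolean query and the ranked citation list are intermediate values, they can be shown to the user alongside the summary.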

I think it might be extremely useful to students, because you can use natural-language querying rather than having the library try to turn you into a mini-librarian who understands boolean logic, bibliographic metadata and search strategies.

A lot of commentators see this sort of specialisation as the real value of AI tools, rather than the ChatGPT/Claude/Gemini et al 'we can answer anything' approach.

My work also provides access to the corporate version of CoPilot - so the LLM's corpus is every corporate document you personally have permission to access. I've found that occasionally useful, but it's quite **** with Outlook emails.