@edgarblythe,
All of the ones I've used have a sycophancy I find abhorrent. As someone said, "AI will never tell you that you're the asshole".
The AI engine deployed with one of our library discovery tools actually has trigger phrases. If you ask it to show you evidence that vaccines cause autism, it delivers an error rather than provide any sort of answer. However, if you rephrase the question without an implied point of view, e.g. "Is there any evidence that points to a link between vaccination and autism?", you get an answer.
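I have no view into how that guard is actually built, but the behaviour I saw is consistent with something as simple as phrase matching before the question ever reaches the search step. A made-up sketch, not the vendor's code (the phrase list and function names are my invention):

```python
# Hypothetical guardrail: block questions containing loaded "trigger phrases"
# before they reach the search pipeline. Purely illustrative.

TRIGGER_PHRASES = [
    "evidence that vaccines cause autism",   # assumed example, matching the behaviour I observed
    "proof that vaccines cause autism",
]

def guarded_query(question: str) -> str:
    q = question.lower()
    if any(phrase in q for phrase in TRIGGER_PHRASES):
        # The real tool just returns an error at this point
        raise ValueError("This query cannot be answered.")
    return q  # otherwise the question passes through to the search step

# Rephrased neutrally, the same topic gets through:
print(guarded_query("Is there any evidence that points to a link between vaccination and autism?"))
```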
BTW, the tool I'm talking about uses our library collection as the LLM's corpus, and the AI's main role is to turn your natural language question into a boolean search of the library's collection, return the 5 most relevant results (relevancy is a function of the library's search tool, not the AI), and then summarise whatever full text was available in those five results.
To its credit, you can see the boolean search it created, the full citations and links for the top 5 results, AND a link to the entire search. It doesn't hallucinate in the sense that it doesn't make up citations.
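If I had to sketch the flow, it's roughly this. All the function names and bodies below are placeholders I've invented to show the shape of it, not the vendor's actual API:

```python
# Illustrative sketch of the query -> boolean search -> top-5 -> summary flow.
# Every function body here is a stub; the real tool's internals aren't public.

def generate_boolean_query(question: str) -> str:
    """Placeholder for the LLM step: translate natural language into a boolean search."""
    # A real implementation would call the LLM; this fakes one plausible output.
    return "(vaccin* OR immunisation) AND autism AND (evidence OR study)"

def library_search(boolean_query: str, limit: int = 5) -> list[dict]:
    """Placeholder for the library's own search engine, which does the relevancy ranking."""
    return [{"citation": f"Example citation {i}", "full_text": "..."} for i in range(1, limit + 1)]

def summarise(texts: list[str]) -> str:
    """Placeholder for the LLM summarisation of whatever full text was retrieved."""
    return "Summary of the retrieved full-text documents."

def answer_question(question: str) -> dict:
    boolean_query = generate_boolean_query(question)          # LLM: query translation only
    results = library_search(boolean_query, limit=5)          # ranking stays with the library's search tool
    summary = summarise([r["full_text"] for r in results])    # summarise only what was actually retrieved
    return {
        "boolean_query": boolean_query,                       # shown to the user, so the search is auditable
        "citations": [r["citation"] for r in results],
        "summary": summary,
    }

print(answer_question("Is there any evidence of a link between vaccination and autism?"))
```

The point of that shape is that the generative part never answers from "memory": it only writes the search and summarises what the library returned, which is why the citations are always real.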
I think it might be extremely useful to students because you can use natural language querying, rather than having the library try to turn you into a mini-librarian who understands boolean logic, bibliographic metadata and search strategies.
A lot of commentators see this sort of specialisation as the real value of AI tools, rather than the chatgpt/claude/gemini et al 'we can answer anything' approach.
My work also provides access to the corporate version of CoPilot - so the LLM is grounded in every corporate document you personally have permission to access. I've found that occasionally useful, but it's quite **** with Outlook emails.