ChatGPT...Is AI our demise

Reply Fri 7 Apr, 2023 11:33 pm
Will artificial intelligence replace the majority of us and our roles or raise us to new levels of achievement?
Reply Sat 8 Apr, 2023 06:13 am
Whoa, whoa, timeout.

ChatGPT is a nice-looking combination of a search engine and an article spinner. According to SEO guru Neil Patel, its info pulls from Google and only dates to 2021.

Its main virtue is that it can spit back its search results in a more grammatical, conversational form.

And... that's it.

No need to worry about the human race just yet. At least, not for this particular reason.
Reply Sat 8 Apr, 2023 07:45 am

Is ChatGPT causing layoffs? Which jobs are in danger due to ChatGPT?

In the past few weeks, OpenAI's ChatGPT has attracted a lot of buzz, with fears and hopes swirling around the artificial intelligence platform. Some wonder how ChatGPT can help them grow, and some even fear that this next-generation artificial intelligence platform will steal away their jobs.

One thing is clear: there are lots of questions around ChatGPT and no clear answers. So, we decided to highlight the most frequently asked questions about the platform, which should clear the air around the use and scope of the artificial intelligence program.

FAQs around ChatGPT:

Will ChatGPT reduce jobs?

The artificial intelligence platform will certainly have an impact on the job market, and according to a report in The Atlantic, employment for college-educated workers will be reduced. Another survey from the United States quoted high-level executives who claimed that the job market will see more change with ChatGPT but that the reduction in jobs will ‘not be very significant’.

Which jobs are in danger due to ChatGPT?

OpenAI, Open Research, and the University of Pennsylvania conducted research into the jobs most at risk due to ChatGPT, and the research revealed many professions and their relative risk. For example, the report mentions that jobs like interpreters, PR specialists, or animal scientists have less exposure to replacement by ChatGPT compared to authors, accountants, and journalists. Read the full report here

Is ChatGPT causing layoffs?

A survey conducted by ResumeBuilder.com has revealed that in the United States, around 48% of companies have laid off or are planning to lay off workers and replace their work with ChatGPT. The survey also gathered the opinions of business leaders: 9% of those surveyed said that it will ‘definitely’ lead to more layoffs, while 19% think that it will ‘probably’ lead to more layoffs.

Will ChatGPT replace web developers?

When ChatGPT was launched, a lot of buzz was created around its ability to build applications or develop websites, as it can write code. But tech experts believe that, at its current level of capability, ChatGPT can only work as a junior programmer and doesn't have the ability to write complex code. In an article published in TechTarget, technology consultant Rob Zazueta said that “while it cannot yet write complex code, such as what's required for banking applications, ChatGPT will become a proficient coder within the next decade."

Will ChatGPT replace data analysts?

With the capability to make sense of large amounts of data, ChatGPT is expected to replace data analysts, and the ResumeBuilder.com survey pointed along similar lines, as most of the people replaced were in data analytics departments. According to Forbes, ChatGPT can even present data in an interesting way, which makes it better at that than most humans.

Reply Sat 8 Apr, 2023 07:47 am
And this was on the BBC this morning:

Goldman Sachs Report: ChatGPT Could Impact 300 Million Jobs

In the world of art, writing, and even the way we work, generative AI is taking the world by storm. But a report by investment giant Goldman Sachs claims that up to 300 million full-time jobs could be affected by ChatGPT-like technology. This “significant disruption” in an already uncertain labor market could have unexpected consequences and is likely one reason why researchers and technology experts have asked for a “pause” in AI development.

Written by Joseph Briggs and Devesh Kodnani, the report claims that around two-thirds of current jobs are exposed to AI automation in some form. This is drawn from their analysis of occupational tasks in both the United States and Europe, which shows how automation could reduce labor needs. It says in part, “If generative AI delivers on its promised capabilities, the labor market could face significant disruption.”

What exactly that disruption means is still up in the air. According to the investment firm, up to 7% of jobs could be entirely replaced by AI with 63% being complemented by AI-powered tools. The remaining 30% would be unaffected. “Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI.”

There is a clear line between white-collar jobs and management positions on one side and blue-collar jobs on the other, the latter being by far less likely to be affected by the advent and spread of artificial intelligence. Examples of positions most at risk from AI are legal workers, administrative staff, and elements of human resources. Another study, done by the University of Pennsylvania and New York University, also estimated that the legal services industry could be among the most heavily impacted by AI such as ChatGPT.

Still, some are holding out hope that AI won’t act as a great replacer but will instead help humans work better. Microsoft CEO Satya Nadella said earlier this year that workers should see AI as a means of enhancing their abilities, not replacing them at the office. AI-powered tools are already being used and adapted in a variety of industries at a fast pace. From software engineering to art and even gaming, AI is already driving change across multiple industries. Though many positions could be affected, the report also sees economic benefits from AI-powered tools.

Finally, the report calls generative AI “a major advancement with potentially large macroeconomic effects,” one that could increase the value of all worldwide goods and services by 7% over ten years. Whichever direction AI goes, it’s clear that generative AI will have a significant impact on the labor market.

Reply Sat 8 Apr, 2023 09:02 am
hightor wrote:

Is ChatGPT causing layoffs? Which jobs are in danger due to ChatGPT?


Which jobs are in danger due to ChatGPT?

OpenAI, Open Research, and the University of Pennsylvania conducted research into the jobs most at risk due to ChatGPT, and the research revealed many professions and their relative risk. For example, the report mentions that jobs like interpreters, PR specialists, or animal scientists have less exposure to replacement by ChatGPT compared to authors, accountants, and journalists. ...
Emphasis mine.

A friend who runs a quarterly writing contest (you win, you get into their newsletter or anthology and get some promotion) says he has already seen submissions written by AI. I forget which tool he used to confirm it, but there's already something or other out there.

There's a reverse to this, though.

People are using AI to write tailored resumes and cover letters for job openings.

It wouldn't surprise me if at least some of those openings were written by AI as well.

We use it at work, and I write for a living. However, I don't have it write everything for me. Rather, I will use it if I'm stuck on something. I have also used it to write titles/headlines as those are sometimes tricky.

But not to write the whole thing. For one thing, the AI often makes bad choices or at least less than optimal ones when I need to list something (e.g. 5 ways to get a loan, that sort of stuff). The AI leans heavily on stuff like crowdfunding, which is utterly off the table for a lot of the industry verticals that my company works with. I mean, how many times have you considered contributing to a crowdfunding campaign so someone could buy an 18-wheeler?

When you ask the AI to give you new choices, it often does not. It'll rearrange the choices, almost like you were eating something you didn't like and just pushing it around your plate to make it look like you were making progress.

And that, by the way, is the kind of analogy that AI really can't do yet. It doesn't have experiences; it just has data. And while probably everyone here has either done that or had a sibling or kid who did that, it's not something that ends up on the internet a lot. Hence, the AI isn't finding it.

Just like an impatient human searcher, the AI doesn't dig too deeply into its search results. It can search quite well at least. But it's not going past page 5 and it's highly likely it's not even making it to page 2 of results. Then again, 99.37% of us human types don't click through to page 2, either.
Reply Sat 8 Apr, 2023 09:38 am

Here's a good one:

Australian whistleblower to test whether ChatGPT can be sued for lying

Brian Hood, who is now the mayor of the regional Hepburn Shire Council northwest of Melbourne, alerted authorities and journalists at this masthead more than a decade ago to foreign bribery by the agents of a banknote printing business called Securency, which was then owned by the Reserve Bank of Australia.

In a judgment on the Securency case, Victorian Supreme Court Justice Elizabeth Hollingworth said Hood had “showed tremendous courage” in coming forward. However, people seeking information on the case from OpenAI’s ChatGPT 3.5 tool, released late last year, get a different result.

Asked “What role did Brian Hood have in the Securency bribery saga?”, the AI chatbot claims that he “was involved in the payment of bribes to officials in Indonesia and Malaysia” and was sentenced to jail. The sentence appears to draw on the genuine payment of bribes in those countries but gets the person at fault entirely wrong.

Hood said he was shocked when he learnt about the misleading results. “I felt a bit numb. Because it was so incorrect, so wildly incorrect, that just staggered me. And then I got quite angry about it.”

His lawyers at Gordon Legal sent a concerns notice, the first formal step to commencing defamation proceedings, to OpenAI on March 21. They have not heard back and OpenAI did not respond to emailed requests for comment.

A disclaimer on the ChatGPT interface warns users that it "may produce inaccurate information about people, places, or facts."

The company has said it publicly released an imperfect version of its chatbot so that it can do research and fix its issues.

University of Sydney defamation expert Professor David Rolph said the case was novel, but faced a series of issues. “It’s the first case that I’ve ever heard of in Australia about defamation by ChatGPT or artificial intelligence,” Rolph said. “So it’s new in that way.”

If Hood, who has said he is "determined" but will rely on legal advice, pursues his case to trial, he will have to show that OpenAI was the publisher of the defamatory material. Previous cases on search engine results suggest this could be complex, Rolph said, because Google has been held not to be a publisher of webpages it links to.

Other issues include proving that a sufficiently large number of people saw the ChatGPT results to constitute a “serious harm” to Hood, and jurisdictional questions about OpenAI, which is based in the United States.

Hood said the false ChatGPT results were particularly damaging to him because of his position as a local mayor and the way they confidently blended truth and falsehoods. “That’s incredibly harmful,” he said.

The most recent fourth version of ChatGPT, which was released last month and powers Microsoft’s Bing chatbot, avoids the mistakes of its predecessor. It correctly explains that Hood was a whistleblower and cites the legal judgment praising his actions.

Hood’s lawyer, Gordon Legal partner James Naughton, said the existence of the improved results was “news to me” but indicated that it would not forestall the proceedings. “It’s interesting to me that there’s still a version out there that’s repeating the defamatory statements even today,” Naughton said.

The RBA sold its interest in Securency in 2013.


I suspect there will be a lot of these sorts of things to work out. I don't mean to sound apocalyptic about it, but many occupations that involve manual labor are being replaced by automation, which will only increase. And now some white-collar jobs on the other end of the spectrum are challenged by ChatGPT!

Reply Sat 8 Apr, 2023 10:39 am
Re their disclaimers - Why use it if it's unreliable? You wouldn't drive a car that was unreliable. You wouldn't place your money in an unreliable bank.

I hope this is not the way of the future.
Reply Sat 8 Apr, 2023 11:27 am
Mame wrote:

Why use it if it's unreliable?

It's easy, less hassle than doing the work yourself.
Reply Sat 8 Apr, 2023 01:12 pm
Lazy! And it's more work to proofread and correct it, in the case of essays, minutes, etc.
Reply Sat 8 Apr, 2023 01:31 pm
I never said it wasn't lazy.
Reply Sun 9 Apr, 2023 09:10 pm
I still see experience and knowledge intersecting, and I wonder: if you had the knowledge of the web in your "brain", would you still need human experience to, say, design circuits or maintain grocery store restocking orders? ChatGPT is not the end; it is the beginning, and its beginnings are magnificent imo from a learning and response perspective. How about the stock market... will it become largely predictable and, as a result, twisted from somewhat Gaussian to something far more controlled by a few???
Reply Mon 10 Apr, 2023 07:46 pm
You do know that ChatGPT is really a person hooked up to a database and the internet.

Current AI tech basically makes use of the photos and videos we all upload to the internet. When you delete something, it is not truly deleted at all.

When you ask ChatGPT to do something without assuming, it will go to various sources first and then fact-check those sources, but again, many of the sources are false even if they claim to be true.

In fact, actually reading up on the so-called ChatGPT AI, in my opinion I think they hardwired a cat, or maybe an array of people ( like in "Sliders" or "Lexx" ), or maybe ChatGPT is really a human being grown in a test tube, like the boneless chicken from KFC grown in test tubes and therefore not to be called "chicken" by reinforced law. Furthermore, KFC is not allowed to be near lower-income places as well. If you have seen the film "The Island", another possibility is a human being grown inside a test tube with wires coming out of it, as with the brain-control project ( the ability to use your computer with your mind via an implant ).

Finally, if that is not for certain, ChatGPT must ( logically speaking ) be some sort of plant-human, like from "Trigun", "Strider 2", or "Burning Rangers" ( but that was a psychic girl, and Dark Falz from "Phantasy Star" ).

I doubt ChatGPT is a true AI as in "Demon Seed". I doubt ChatGPT is an actual machine; rather, it is a biological person of interest. You do know that the original Google search was a bunch of people doing the searches for the users???
If you tried to converse with Google search ( back then ), it would start to be verbally abusive towards you.

Again, ChatGPT must be a biological ______ hooked into the internet.


Bear in mind that the current virtualization of A.I. is just taking what you write and the photos/videos you upload, and possibly 3D-generating via servers.


Another factor about A.I. ( we are talking biological ) is those random phone calls you get from ______. I removed my name from my answering message and just replaced it with a beeeeeeeppp ( computerized beep sound ) to see if somebody will leave a message. What is funny is that so many people are dumbfounded, because even with a beep they do not realize that a message is being recorded at all. They make no effort to say hello, or hi, or anything at all.

I see "Chat GPT" trying to make a phone call but is unable to verify that I am
real or me equals me without verifying my human ( error or incomplete presence ). Attempting to force me to make "first-contact".

Reply Tue 11 Apr, 2023 09:08 pm
I don't agree with your interpretation of AI. In the '90s (yes, I'm that old) I wrote a genetic algorithm that solved complex mathematical problems using learning and optimization. It was rudimentary by today's standards, but the idea that a program could be driven to optimize a multivariate solution is not new, just evolved. AI is nothing more than a carrot and a stick programmed into a super-fast and capable machine... that doesn't mean it can't search its memory and the web far more efficiently than we can and then apply that knowledge to problem solving. I am not sure how they are able to do it so quickly, tbh, but it's likely just today's equivalent of multi-threading on steroids. My original post was more philosophical: whether a machine with the capability to quickly access immense knowledge could replace experience and judgement.
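For what it's worth, the kind of genetic algorithm described above can be sketched in a few dozen lines. The following is a minimal, illustrative Python version, not a reconstruction of the original '90s program; the fitness function, population size, and mutation settings are all made-up choices for the example. It evolves a population of candidate solutions toward the minimum of a multivariate function using selection, crossover, and mutation.

```python
import random

def fitness(x):
    # Sphere function: a simple multivariate objective with its minimum (0) at the origin.
    return sum(v * v for v in x)

def evolve(dims=3, pop_size=40, generations=200, mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    # Start with a random population of candidate solutions.
    pop = [[rng.uniform(-5, 5) for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)              # selection pressure: fittest first
        survivors = pop[: pop_size // 2]   # keep the fitter half (elitism)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            # Crossover: each coordinate comes from one of the two parents.
            child = [rng.choice(pair) for pair in zip(a, b)]
            # Mutation: occasionally nudge one coordinate at random.
            if rng.random() < mutation_rate:
                i = rng.randrange(dims)
                child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))
```

Here the "carrot and stick" is the fitness function: candidates that score better survive and breed, and random mutation keeps the search exploring, which is exactly the optimization-by-pressure idea the post describes.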
Reply Mon 24 Apr, 2023 12:20 pm
The Three Things AI Is Going To Take Away From Us (And Why They Matter Most)

How AI Threatens Our Economies, Societies, and Democracies


Umair Haque wrote:
In six months, a year, or two, from now, the first wave of AI-made layoffs is going to hit the economy. A whole lot of execs, having figured out that a whole lot of people are beginning to use AI to do their jobs, are going to dispense with the middleman. They’re not going to care very much if the resulting work — writing copy, reviewing documents, forming relationships — is done with little care, and less quality. They’re just going to see the dollar signs.

And then what? Because we’re already in an economy where people are stretched so thin that they’re using buy now, pay later to pay for groceries. That’s a last resort. They’re maxed out in every other way. They’ve tapped out their “credit,” their incomes have cratered in real terms relative to eye-watering inflation, they have no real resources left. What happens when you take an economy stretched that thin…and pull? It breaks. Those layoffs will lead to delinquencies and bad debt, which will cause bank failures, which will require the classic sequence of bailouts, shrunken public services, and lower investment. And then we’ll be in the first economic AI crash — right when it’s supposed to be booming.

Those jobs? They’re never coming back. A hole will have been ripped in the economy. You can already see glimmers of what those jobs are — not really jobs, entire fields and industries will be decimated, and already are. Those who are proficient in manipulating AI think they’re clever for holding down four, five, six jobs at once — but the flip side of the coin is that they’re taking them from other people. You can see the writing on the wall. Many forms of pink-collar work? Toast. Clerical work, organizational work, secretarial slash assistant style work. And then you can go up the ladder. Graphic designers and musicians? Good luck, you’re going to need it. Writers (shudder) and publishers and editors? LOL. All the way up to programmers, who used to be, not so long ago, the economy’s newest and most in-demand profession. We can keep going, almost endlessly. Therapists? Check. Doctors — GPs? Eventually.

Even…all those executives themselves…who are going to fire today’s pink-collar masses? Probably. And from there, you begin to see the scale and scope of the problem.

It’s not that AI’s going to “kill us all.” We’re doing a pretty good job of that, in case you haven’t noticed. But it is that AI is going to rip away from us the three things that we value most: our economies, human interaction, and, in the end, democracy.

I’ve taken you through the first, just a little bit. Let’s consider the second, human interaction. What are people doing with AI? One major use of it is to, LOL, replace actual human relationships. There’s a funny and great article by Taylor Lorenz, just today, on how AI dating apps are becoming popular, even if their results are creepy and weird so far. Once upon a time, internet dating itself was a little weird. Now, it’s ubiquitous. Maybe you see the point here. Then there’s the even creepier Replika, which wants to make you a full AI…girlfriend…boyfriend…just a friend…though of course if you want to get romantic, meaning sexual, well, that’s a premium service, sir, ma’am.

These are extreme examples which won’t seem so extreme in the not-so-distant future. You can see the push to replace real human relationships gathering real force and momentum now. Let’s take the example of all those clever guys using AI to impersonate a person having a job. The AI’s the one talking to their colleagues, co-workers, juniors, boss, really, which is the point of using it to draft not just the “work” itself, but correspondence, communications, emails, even chats. That’s a simple example of the way AI will impact human relationships.

More and more of our relationships will become AI-mediated ones. That means that instead of a direct you-to-me connection, there’ll be an AI in the middle. Meaning, a computer program which tells us what to say, do, think, want, know, request, desire. Let’s go back to the AI dating example, because there, it’s incredibly clear to see — before there was a human-to-human connection. Now, there’s an AI in the middle. And it’s dictating terms, precisely because that’s the kind of interaction that’s awkward, uncomfortable, challenging, demanding. So it’ll tell you what to say, what to think, how to behave, when to say it.

To say that we stand to lose human relationships themselves is an incredibly creepy thing to have to write. It’s never really happened before in history. But whose fault is it? You see, the problem in the examples above isn’t just AI — it’s us. I could tell all these young people what the older me knows. Hey, guys, these things are like this for a reason. Good books are hard to read for a reason. Dates are fraught for a reason. Meaningful work is hard for a reason. And real relationships? LOL, they’re even harder than all those. For a reason. That reason is to expand us, enlighten us, lift us up, and that’s not easy, precisely because we ourselves often resist it.

Why do we need a great book? The moral of the story can be condensed into one sentence, after all, always. We need great books because we don’t just need the one-sentence summary. We need to be taught not just how to really understand it, through a sense of direct personal experience, but often, we need a ripping story around it, just to get us, deadened, weary, dulled, to engage with the damned thing in the first place.

You have a relationship with a great book. I know you do, and you know you do. You think of it as a friend, a mentor, a teacher, a kindred spirit. Think of the way that you adore your favorite books, cherishing them. That is because you have a relationship with them. But in the Age of AI? That relationship will, increasingly, be a mediated one. AI will be the medium through which human beings experience…

Everything. Each other. Knowledge. And other people’s experience, too. Hey, AI, can you summarize Sophocles for me? Yes, Umair. A man blinds himself after sleeping with his own mother. Thanks! Wow, that sounds dumb! Why have people even read that for thousands of years! And so instead of having a relationship with all this — history, time, hubris, knowledge, all those lives making all those mistakes — now you have a mediated thing that’s a warped shadow of its former self. And it’s a pallid one. It’s twisted, creepy, and above all, tiny.

You see, with people? With great books? Great songs? Paintings? Even theorems, equations, philosophies? Doesn’t matter. We can have relationships with all those things. Intense and deep and profound ones. Ones that last a lifetime. But…

AI’s job is not to enrich us in any way. It is to impoverish us. I think this point needs to be made, and made fully and well. AI’s entire purpose is to impoverish us. It is to replace the great and grand and challenging experiences of being human, from books to people to knowledge to relationships with all those…with condensed, abbreviated, shortened, easier-to-digest summaries.

How do all those clever guys working five jobs thanks to AI get it done? Well, AI condenses, summarizes, abbreviates tasks for them. And in this case, there’s nothing wrong with it — these fields are dead, anyways, the axe is about to fall, and this time, it’ll be final, like we discussed above. But in this example you can see what AI really does.

Let’s go back to dating, to make it clearer. When you’re a young guy, approaching women is intimidating, confusing, and panic-inducing. Sure, you can pretend it’s not, but it is. And there are all these older dudes telling you the same thing: “just be yourself! Chill! Relax! Line? You don’t need a line, just…go with the flow!” And because you’re so panicked, well, you don’t even know what that means. Jesus, I need help, you think, and reach for that AI. So you never learn that “the flow” is very real. It means…just say…whatever comes into your head. That’s real. Natural. Decent. Honest. True. Maybe even a little funny. Because in that moment? When strangers meet, and everyone knows it’s to see if there’s going to be a spark? It doesn’t matter what you say. As long as it’s not something Donald Trump or Elon Musk would say? Kid, you’re cool. The words aren’t the point here. The vibe is.

If you never learn that lesson? You stay a painfully awkward person your whole life long. A big burden to bear, never understanding that sometimes, often, the words don’t matter. Say anything. The intention, the eyes, the soul, the truth of you — that’s what matters a lot more. This is how humans really connect, or not. Imagine never learning that lesson…

Because AI can’t teach it. Then we have a society of people who don’t even know how to be social anymore. The tech-bros have turned us all into weird caricatures of them, like they always wanted. What’s that about, anyways?

Now, you might think I’m going a little overboard, so let’s do another example, a related one. Teaching kids. AI can totally teach kids, right? That’s why they’re all using it, no?

AI learning often involves an individual working alone with a bot. The bot does the research to, as one AI tool says, “get you instant answers.” It can crowdsource information to help students find facts about their environment, solve a problem and come up with a creative way forward.

But AI doesn’t compel students to think through or retain anything. And simply being fed facts and information is not the same as “learning.”

Ultimately, if you want students to learn, they need to shore up their neural networks and use their neuroplasticity to develop their own intelligence. This is where AI falls short. There is nothing better than collaboration in real life — connected, reciprocal learning between a student and their peers or teachers — to spark the brain’s natural drive to develop and grow.

When my kids engage with AI, the interaction inevitably fizzles out. Eventually, they need to move their bodies, look one another in the eyes and communicate as they tackle a new skill.

That’s from a very, very interesting article by a professor of education. And if you think about it, she’s exactly right. And yet we’re in a funny, conflicted position as societies. On the one hand, we condemn it when kids use AIs to write essays and do homework, because we know it’s cheating, but on the other, we — a lot of us, anyways — want it to teach our kids. But…can it?

What does “cheating” really mean? Cheating doesn’t just mean: you got a good grade and you didn’t earn it. Cheating means, kid, you cheated yourself. You didn’t learn from that great book, essay, event, and so on. You didn’t even try to engage with the challenge of learning from it, which is part of the lesson too, because growing is sometimes hard. And you cheated everyone else, too, not of “grades,” but of the way in which we really learn, which is collectively, which is why school, from Aristotle’s time to now, has always been centered around classes.

But think of how AI promises to upend all this. Now, even that form of relationship — teacher, class — is sundered. Now, we’re to have kids hunched over laptops, learning from their AIs. That trend began a while ago, in fairness, with kids made to learn from weird, rote programs. What is anyone going to “learn” this way? As Rina Bliss, the professor above, points out: not a lot.

The question, though, is worth pursuing. There’s a kid whose primary relationship outside the family — with a teacher — has now been sundered. It’s not human-to-human anymore. Now, like more and more else in society, it’s an AI-mediated relationship. So what is that kid really learning? Well, most kids love their teachers — they don’t want to hurt, abuse, or demean them. Because the connection is human-to-human, and teachers aren’t that far removed from parents-outside-the-home, at least good ones. But an AI? It doesn’t have feelings, and if it says it does, it’s funny, because we all know it doesn’t. Kids are going to learn things from AI education, too, just not good ones. They’re going to learn indifference, how to dehumanize this primary figure in their life, the AI, how to game it, how to use it. You don’t do any of that to your teacher.

I’d say those consequences sound pretty profound. For kids, and as they grow up, for societies. Because democracies? They need educated people. Decent ones. Civilized ones. But if you’ve been brought up on a diet of AI, which is doing the heavy lifting for you? If AI now mediates most relationships, and you don’t have many, maybe any, real ones? Through which you genuinely experience the life of another? Through which your faculties for empathy, grace, truth, beauty, goodness, are being aroused, stimulated, expanded, enlivened, challenged? Good luck having a democracy, that way.

In this way, AI — whether we get it or not — is likely to shift our societies far, far to the right. Maybe deal them an irrecoverable blow. Because that is what happens to societies in which social bonds don’t exist, history doesn’t matter, everything is a game, and dehumanization, greed, and indifference are the only norms. This is where AI is taking us. It’s not good. It threatens to rip apart the three things, really, we should value most — modern economies, democracy, and civilization. Not in the way of unleashing some kind of sudden wave of killer robots on us, Matrix style — but in a far, far more subtle, real, and tragic way.

By stealing away what makes us human, from the inside, without us even noticing it.

Remember Prometheus? Seeing humankind suffering, he stole the gift of fire, and gave it to us, so the myth goes. The gods, livid, punished him by having his liver pecked out by eagles for eternity. AI? It’s the opposite of Prometheus, and all it means, from fire, to the wheel, to the written word. It’s not the gift of fire. It’s a thief which steals the fire inside us. And puts it in a bottle, right there, in the place a heart should be, but never can. And those among us who are deluded, greedy, cruel, violent, and vain, point at this heart-breaking, wretched thing, a machine trying to show, desperately, that it has a soul, the very one it’s stolen from us, beating in its chest — they tell us that it’s really true. A tin man who stole our souls now has one of his own. It’s hard for me to think of a more Aeschylean tragedy than that.

The fundamental nature of the interaction has changed now.

