Google and its rivals are increasingly employing AI-generated summaries, but
research indicates their results are far from authoritative and open to
manipulation
Does aspartame cause cancer? The potentially carcinogenic properties of the
popular artificial sweetener, added to everything from soft drinks to children’s
medicine, have been debated for decades. Its approval in the US stirred
controversy in 1974, several UK supermarkets banned it from their products in
the 00s, and peer-reviewed academic studies have long butted heads. Last year,
the World Health Organization concluded aspartame was “possibly carcinogenic” to
humans, while public health regulators suggest that it’s safe to consume in the
small portions in which it is commonly used.
While many of us may look to settle the question with a quick Google search,
this is exactly the sort of contentious debate that could cause problems for the
internet of the future. As generative AI chatbots have rapidly developed over
the past couple of years, tech companies have been quick to hype them as a
utopian replacement for various jobs and services – including internet search
engines. Instead of scrolling through a list of webpages to find the answer to a
question, the thinking goes, an AI chatbot can scour the internet for you,
combing it for relevant information to compile into a short answer to your
query. Google and Microsoft are betting big on the idea and have already
introduced AI-generated summaries into Google Search and Bing.
Megan Garcia said Sewell, 14, used Character.ai obsessively before his death and
alleges negligence and wrongful death
The mother of a teenager who killed himself after becoming obsessed with an
artificial intelligence-powered chatbot now accuses its maker of complicity in
his death.
Megan Garcia filed a civil suit against Character.ai, which makes a customizable
chatbot for role-playing, in Florida federal court on Wednesday, alleging
negligence, wrongful death and deceptive trade practices. Her son Sewell Setzer
III, 14, died in Orlando, Florida, in February. In the months leading up to his
death, Setzer used the chatbot day and night, according to Garcia.
In the US, you can call or text the National Suicide Prevention Lifeline on 988,
chat on 988lifeline.org, or text HOME to 741741 to connect with a crisis
counselor. In the UK, the youth suicide charity Papyrus can be contacted on 0800
068 4141 or email pat@papyrus-uk.org, and in the UK and Ireland Samaritans can
be contacted on freephone 116 123, or email jo@samaritans.org or
jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14.
Other international helplines can be found at befrienders.org
ByteDance dismissed the person in August, saying they had ‘maliciously interfered’ with the training of artificial intelligence models
The owner of TikTok has sacked an intern for allegedly sabotaging an internal
artificial intelligence project.
ByteDance said it had dismissed the person in August after they “maliciously
interfered” with the training of artificial intelligence (AI) models used in a
research project.
The assistant, which has sparked privacy concerns, can also be accessed on £299
Ray-Ban Meta sunglasses
Meta, the owner of Facebook and Instagram, has launched its artificial
intelligence assistant in the UK, alongside AI-boosted sunglasses modelled by
Mark Zuckerberg.
Meta’s AI assistant, which can generate text and images, is now available on its
social media platforms in the UK and Brazil, having already been launched in the
US and Australia.
Journalists and other writers are employed to improve the quality of chatbot
replies. The irony of working for an industry that may well make their craft
redundant is not lost on them
For several hours a week, I write for a technology company worth billions of
dollars. Alongside me are published novelists, rising academics and several
other freelance journalists. The workload is flexible, the pay better than we
are used to, and the assignments never run out. But what we write will never be
read by anyone outside the company.
That’s because we aren’t even writing for people. We are writing for an AI.
With adjustments to the way we teach students to think about writing, we can
shift the emphasis from product to process
It’s getting close to the beginning of term. Parents are starting to fret about
lunch packs, school uniforms and schoolbooks. School leavers who have university
places are wondering what freshers’ week will be like. And some university
professors, especially in the humanities, will be apprehensively pondering how
to deal with students who are already more adept users of large language models
(LLMs) than they are.
They’re right to be concerned. As Ian Bogost, a professor of film and media and
computer science at Washington University in St Louis, puts it: “If the first
year of AI college ended in a feeling of dismay, the situation has now devolved
into absurdism. Teachers struggle to continue teaching even as they wonder
whether they are grading students or computers; in the meantime, an endless AI
cheating and detection arms race plays out in the background.”
Andrea Bartz, Charles Graeber and Kirk Wallace Johnson allege company misused
work to teach chatbot Claude
The artificial intelligence company Anthropic has been hit with a class-action
lawsuit in California federal court by three authors who say it misused their
books and hundreds of thousands of others to train its AI-powered chatbot
Claude, which generates texts in response to users’ prompts.
The complaint, filed on Monday by writers and journalists Andrea Bartz, Charles
Graeber and Kirk Wallace Johnson, said that Anthropic used pirated versions of
their works and others to teach Claude to respond to human prompts.
LLMs’ ‘reversal curse’ leads them to fail at drawing relationships between simple
facts. It’s a problem that could prove fatal
In 2021, linguist Emily Bender and computer scientist Timnit Gebru published a
paper describing the then-nascent generation of large language models as
“stochastic parrots”. A language model, they wrote, “is a system for haphazardly
stitching together sequences of linguistic forms it has observed in its vast
training data, according to probabilistic information about how they combine,
but without any reference to meaning.”
The phrase stuck. AI can still get better, even if it is a stochastic parrot,
because the more training data it has, the better it will seem. But does
something like ChatGPT actually display anything like intelligence, reasoning,
or thought? Or is it simply, at ever-increasing scales, “haphazardly stitching
together sequences of linguistic forms”?
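At toy scale, that stitching process can be made concrete. The sketch below is not how ChatGPT works internally (modern models use neural networks trained on vast corpora, not word counts), but a minimal bigram sampler shows what it means to generate text purely from probabilistic information about which words follow which, with no reference to meaning; the tiny corpus here is an invented placeholder.

```python
# Toy "stochastic parrot": sample the next word purely from how often words
# follow one another in a (made-up) corpus, with no notion of meaning.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    candidates = follows[prev]
    if not candidates:  # dead end: fall back to any corpus word
        return random.choice(corpus)
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

word = "the"
generated = [word]
for _ in range(8):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))  # a plausible-looking but meaning-free word sequence
```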
If a human learns the fact, “Valentina Tereshkova was the first woman to travel
to space”, they can also correctly answer, “Who was the first woman to travel to
space?” This is such a basic form of generalization that it seems trivial. Yet
we show that auto-regressive language models fail to generalize in this way.
This is an instance of an ordering effect we call the Reversal Curse.
We test GPT-4 on pairs of questions like, “Who is Tom Cruise’s mother?” and,
“Who is Mary Lee Pfeiffer’s son?” for 1,000 different celebrities and their
actual parents. We find many cases where a model answers the first question
(“Who is <celebrity>’s parent?”) correctly, but not the second. We hypothesize
this is because the pretraining data includes fewer examples of the ordering
where the parent precedes the celebrity (eg “Mary Lee Pfeiffer’s son is Tom
Cruise”).
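The probe described above is simple enough to sketch in a few lines. The snippet below is a hedged illustration, not the paper’s actual evaluation code: it assumes the OpenAI Python client, uses a single well-known celebrity/parent pair as a placeholder for the roughly 1,000 pairs the authors test, and scores answers with crude substring matching.

```python
# Hedged sketch of a reversal-curse probe: ask the model about the same fact in
# both directions and count how often each direction is answered correctly.
# Assumes the OpenAI Python client; model name and pairs are illustrative only.
from openai import OpenAI

client = OpenAI()

# (celebrity, parent) pairs; the paper uses ~1,000 real celebrity/parent pairs.
PAIRS = [("Tom Cruise", "Mary Lee Pfeiffer")]

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content or ""

forward_hits = reverse_hits = 0
for celebrity, parent in PAIRS:
    # Forward ordering (celebrity -> parent), common in training data.
    if parent.lower() in ask(f"Who is {celebrity}'s mother?").lower():
        forward_hits += 1
    # Reverse ordering (parent -> celebrity), which the Reversal Curse predicts
    # the model will answer correctly far less often.
    if celebrity.lower() in ask(f"Who is {parent}'s son?").lower():
        reverse_hits += 1

print(f"forward: {forward_hits}/{len(PAIRS)}, reverse: {reverse_hits}/{len(PAIRS)}")
```

The gap between the two counts is the asymmetry the authors call the Reversal Curse.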
The ChatGPT maker is betting big, while Google hopes its AI tools won’t replace
workers, but help them to work better
What if you build it and they don’t come?
It’s fair to say the shine is coming off the AI boom. Soaring valuations are
starting to look unstable next to the sky-high spending required to sustain
them. Over the weekend, one report from tech site the Information estimated that
OpenAI was on course to spend an astonishing $5bn more than it makes in revenue
this year alone:
If we’re right, OpenAI, most recently valued at $80bn, will need to raise more
cash in the next 12 months or so. We’ve based our analysis on our informed
estimates of what OpenAI spends to run its ChatGPT chatbot and train future
large language models, plus ‘guesstimates’ of what OpenAI’s staffing would cost,
based on its prior projections and what we know about its hiring. Our conclusion
pinpoints why so many investors worry about the profit prospects of
conversational artificial intelligence.
In this paper, we argue against the view that when ChatGPT and the like produce
false claims, they are lying or even hallucinating, and in favour of the
position that the activity they are engaged in is bullshitting … Because these
programs cannot themselves be concerned with truth, and because they are
designed to produce text that looks truth-apt without any actual concern for
truth, it seems appropriate to call their outputs bullshit.
Part of what’s tricky about us talking about it now is that we actually don’t
know exactly what’s going to transpire. What we do know is the first step is
going to be sitting down [with the partners] and really understanding the use
cases. If it’s school administrators versus people in the classroom, what are
the particular tasks we actually want to get after for these folks?
If you are a school teacher some of it might be a simple email with ideas about
how to use Gemini in lesson planning, some of it might be formal classroom
training, some of it one on one coaching. Across 1,200 people there will be a
lot of different pilots, each group with around 100 people.