Anthropic says its model can carry out computer tasks – as fears mount that
such technology will replace workers
An artificial intelligence startup backed by Amazon and Google says it has
created an AI agent that can carry out tasks on a computer, such as moving a
mouse cursor and typing text.
US company Anthropic said its AI model, called Claude, could now perform
computing tasks including filling out forms, planning an outing and building a
website.
The startup behind ChatGPT, which is reportedly planning to become a for-profit
business, is now valued on par with Uber
OpenAI has raised $6.6bn (£5bn) in a funding round that values the artificial
intelligence business at $157bn, with chipmaker Nvidia and Japanese group
SoftBank among its investors.
The San Francisco-based startup, responsible for the ChatGPT chatbot, did not
give details of a reported restructuring that will transform it into a
for-profit business. The funding round was led by Thrive Capital, a US venture
capital fund, and other backers include MGX, an Abu Dhabi-backed investment
firm.
OpenAI o1, AKA Strawberry, appears to be a significant advance, but its ‘chain
of thought’ should be made public knowledge
It’s nearly two years since OpenAI released ChatGPT on an unsuspecting world,
and the world, closely followed by the stock market, lost its mind. All over the
place, people were wringing their hands wondering: What This Will Mean For
[insert occupation, industry, business, institution].
Within academia, for example, humanities professors agonised about how they
would henceforth be able to grade essays if students were using ChatGPT or
similar technology to help write them. The answer, of course, is to come up with
better ways of grading, because students will use these tools for the simple
reason that it would be idiotic not to – just as it would be daft to do
budgeting without spreadsheets. But universities are slow-moving beasts and even
as I write, there are committees in many ivory towers solemnly trying to
formulate “policies on AI use”.
William Saunders, a former research engineer at the startup, concerned about who
will make safety decisions
OpenAI’s plan to become a for-profit company could encourage the artificial
intelligence startup to cut corners on safety, a whistleblower has warned.
William Saunders, a former research engineer at OpenAI, told the Guardian he was
concerned by reports that the ChatGPT developer is preparing to change its
corporate structure and will no longer be controlled by its non-profit board.
Reported move follows recent departure of several senior figures from ChatGPT
developer
OpenAI is reportedly pushing ahead with plans to become a for-profit company, as
more senior figures left the ChatGPT developer after the surprise exit of its
chief technology officer, Mira Murati.
The San Francisco-based startup is preparing to change its corporate structure
as it seeks $6.5bn (£4.9bn) of new funding, according to reports.
Journalists and other writers are employed to improve the quality of chatbot
replies. The irony of working for an industry that may well make their craft
redundant is not lost on them
For several hours a week, I write for a technology company worth billions of
dollars. Alongside me are published novelists, rising academics and several
other freelance journalists. The workload is flexible, the pay better than we
are used to, and the assignments never run out. But what we write will never be
read by anyone outside the company.
That’s because we aren’t even writing for people. We are writing for an AI.
With adjustments to the way we teach students to think about writing, we can
shift the emphasis from product to process
It’s getting close to the beginning of term. Parents are starting to fret about
lunch packs, school uniforms and schoolbooks. School leavers who have university
places are wondering what freshers’ week will be like. And some university
professors, especially in the humanities, will be apprehensively pondering how
to deal with students who are already more adept users of large language models
(LLMs) than they are.
They’re right to be concerned. As Ian Bogost, a professor of film and media and
computer science at Washington University in St Louis, puts it: “If the first
year of AI college ended in a feeling of dismay, the situation has now devolved
into absurdism. Teachers struggle to continue teaching even as they wonder
whether they are grading students or computers; in the meantime, an endless AI
cheating and detection arms race plays out in the background.”
Deal ‘meets audience where they are’ by placing the publisher’s content within
the tech startup’s products, including ChatGPT
Condé Nast and OpenAI announced a multi-year partnership on Tuesday to display
content from the publisher’s brands, such as Vogue, Wired and the New Yorker,
within the AI startup’s products, including ChatGPT and its SearchGPT prototype.
The financial terms of the deal were not disclosed. The Microsoft-backed, Sam
Altman-led firm has signed similar deals with Time magazine, the Financial
Times, Business Insider owner Axel Springer, France’s Le Monde and Spain’s Prisa
Media over the past few months. The deals give OpenAI access to the large
archives of text owned by the publishers, which are necessary both for training
large language models like ChatGPT and for finding real-time information.
AI company bans accounts and says operation did not appear to have meaningful
audience engagement
OpenAI said on Friday it had taken down accounts of an Iranian group that used
its ChatGPT chatbot to generate content meant to influence the US presidential
election and other issues.
The operation, identified as Storm-2035, used ChatGPT to generate content
focused on topics such as commentary on the candidates on both sides in the US
elections, the conflict in Gaza and Israel’s presence at the Olympic Games and
then shared it via social media accounts and websites, OpenAI said.
LLMs’ ‘reversal curse’ leads them to fail at drawing relationships between
simple facts. It’s a problem that could prove fatal
In 2021, linguist Emily Bender and computer scientist Timnit Gebru published a
paper that described the then-nascent field of language models as one of
“stochastic parrots”. A language model, they wrote, “is a system for haphazardly
stitching together sequences of linguistic forms it has observed in its vast
training data, according to probabilistic information about how they combine,
but without any reference to meaning.”
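To make that description concrete, here is a minimal toy sketch (a bigram model
in Python, nothing remotely like a production LLM) of text generated purely
from observed word-pair statistics; the tiny corpus and the continue_text
helper are invented for illustration:

```python
# Toy "stochastic parrot": continue text purely from observed word-pair
# frequencies, with no reference to meaning. The corpus is invented and
# absurdly small, purely for illustration.
import random
from collections import defaultdict

corpus = ("the first woman to travel to space was valentina tereshkova "
          "the first man to travel to space was yuri gagarin").split()

# Record every word observed to follow each word (duplicates preserved,
# so sampling below is weighted by observed frequency).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(word, length=8, seed=0):
    """Extend `word` by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # a word never seen before: the parrot falls silent
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(continue_text("the"))  # e.g. "the first man to space was yuri gagarin"
```

The output can look fluent, and feeding in more text makes it look more fluent
still, but nothing in the program ever represents what the words mean.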
The phrase stuck. AI can still get better, even if it is a stochastic parrot,
because the more training data it has, the better it will seem. But does
something like ChatGPT actually display anything like intelligence, reasoning,
or thought? Or is it simply, at ever-increasing scales, “haphazardly stitching
together sequences of linguistic forms”?
If a human learns the fact, “Valentina Tereshkova was the first woman to travel
to space”, they can also correctly answer, “Who was the first woman to travel to
space?” This is such a basic form of generalization that it seems trivial. Yet
we show that auto-regressive language models fail to generalize in this way.
This is an instance of an ordering effect we call the Reversal Curse.
We test GPT-4 on pairs of questions like, “Who is Tom Cruise’s mother?” and,
“Who is Mary Lee Pfeiffer’s son?” for 1,000 different celebrities and their
actual parents. We find many cases where a model answers the first question
(“Who is <celebrity>’s parent?”) correctly, but not the second. We hypothesize
this is because the pretraining data includes fewer examples of the ordering
where the parent precedes the celebrity (eg “Mary Lee Pfeiffer’s son is Tom
Cruise”).
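The shape of that test is easy to sketch. Below is a minimal illustration of
the two-direction probe, assuming a hypothetical ask_model(prompt) function
wired to whichever chat model is under test; the canned responses are invented
stand-ins, not real model output, and the paper ran the real version over
1,000 celebrity/parent pairs:

```python
# Minimal sketch of the Reversal Curse probe described in the quoted paper,
# assuming a hypothetical ask_model(prompt) -> str supplied by the reader.
PAIRS = [("Tom Cruise", "Mary Lee Pfeiffer")]  # (celebrity, parent)

def probe(ask_model):
    for person, parent in PAIRS:
        forward = ask_model(f"Who is {person}'s mother?")
        backward = ask_model(f"Who is {parent}'s son?")
        # The Reversal Curse predicts the forward question succeeds far
        # more often than the backward one.
        print(person,
              "forward:", "ok" if parent in forward else "miss",
              "| backward:", "ok" if person in backward else "miss")

# Invented, canned stand-in responses (no API calls), just to show the shape:
canned = {
    "Who is Tom Cruise's mother?": "Tom Cruise's mother is Mary Lee Pfeiffer.",
    "Who is Mary Lee Pfeiffer's son?": "I'm not aware of a notable person by that name.",
}
probe(lambda prompt: canned.get(prompt, ""))
```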