Tag - Chatbots

Technology
Society
Artificial intelligence (AI)
OpenAI
ChatGPT
Journalists and other writers are employed to improve the quality of chatbot replies. The irony of working for an industry that may well make their craft redundant is not lost on them.

For several hours a week, I write for a technology company worth billions of dollars. Alongside me are published novelists, rising academics and several other freelance journalists. The workload is flexible, the pay better than we are used to, and the assignments never run out. But what we write will never be read by anyone outside the company. That’s because we aren’t even writing for people. We are writing for an AI.
September 7, 2024 / The Guardian | Technology
Technology
Artificial intelligence (AI)
OpenAI
ChatGPT
Computing
With adjustments to the way we teach students to think about writing, we can shift the emphasis from product to process.

It’s getting close to the beginning of term. Parents are starting to fret about lunch packs, school uniforms and schoolbooks. School leavers who have university places are wondering what freshers’ week will be like. And some university professors, especially in the humanities, will be apprehensively pondering how to deal with students who are already more adept users of large language models (LLMs) than they are.

They’re right to be concerned. As Ian Bogost, a professor of film and media and computer science at Washington University in St Louis, puts it: “If the first year of AI college ended in a feeling of dismay, the situation has now devolved into absurdism. Teachers struggle to continue teaching even as they wonder whether they are grading students or computers; in the meantime, an endless AI cheating and detection arms race plays out in the background.”
August 24, 2024 / The Guardian | Technology
Technology
Books
Culture
Artificial intelligence (AI)
Chatbots
Andrea Bartz, Charles Graeber and Kirk Wallace Johnson allege the company misused their work to teach its chatbot Claude.

The artificial intelligence company Anthropic has been hit with a class-action lawsuit in California federal court by three authors who say it misused their books and hundreds of thousands of others to train its AI-powered chatbot Claude, which generates text in response to users’ prompts. The complaint, filed on Monday by writers and journalists Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, said that Anthropic used pirated versions of their works and others to teach Claude to respond to human prompts.
August 20, 2024 / The Guardian | Technology
Technology
Google
Elon Musk
Artificial intelligence (AI)
OpenAI
LLMs’ ‘reversal curse’ leads them to fail at drawing relationships between simple facts. It’s a problem that could prove fatal.

In 2021, linguist Emily Bender and computer scientist Timnit Gebru published a paper that described the then-nascent field of language models as one of “stochastic parrots”. A language model, they wrote, “is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning.” The phrase stuck. AI can still get better, even if it is a stochastic parrot, because the more training data it has, the better it will seem. But does something like ChatGPT actually display anything like intelligence, reasoning or thought? Or is it simply, at ever-increasing scales, “haphazardly stitching together sequences of linguistic forms”?

As the researchers behind the reversal-curse paper put it: “If a human learns the fact, ‘Valentina Tereshkova was the first woman to travel to space’, they can also correctly answer, ‘Who was the first woman to travel to space?’ This is such a basic form of generalization that it seems trivial. Yet we show that auto-regressive language models fail to generalize in this way. This is an instance of an ordering effect we call the Reversal Curse. We test GPT-4 on pairs of questions like, ‘Who is Tom Cruise’s mother?’ and, ‘Who is Mary Lee Pfeiffer’s son?’ for 1,000 different celebrities and their actual parents. We find many cases where a model answers the first question (‘Who is <celebrity>’s parent?’) correctly, but not the second. We hypothesize this is because the pretraining data includes fewer examples of the ordering where the parent precedes the celebrity (eg ‘Mary Lee Pfeiffer’s son is Tom Cruise’).”
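The evaluation the excerpt describes can be sketched as a simple loop: ask the same fact in both directions and count the cases where only the forward phrasing succeeds. This is only an illustrative sketch, not the paper's actual code; `ask` is a stand-in for whatever function queries a model, and the stub below merely mimics the asymmetry the researchers report.

```python
# Sketch of a reversal-curse check: query a fact in both directions
# and count forward-only successes. `ask` is any question -> answer
# function; here we use a stub that "knows" only the forward ordering.

def reversal_gap(pairs, ask):
    """pairs: (forward_q, backward_q, forward_answer, backward_answer) tuples.
    Returns how many facts the model answers forward but not backward."""
    forward_only = 0
    for fwd_q, bwd_q, fwd_a, bwd_a in pairs:
        fwd_ok = fwd_a.lower() in ask(fwd_q).lower()
        bwd_ok = bwd_a.lower() in ask(bwd_q).lower()
        if fwd_ok and not bwd_ok:
            forward_only += 1
    return forward_only

# Stub model mimicking pretraining data where the celebrity-first
# ordering is common but the parent-first ordering is rare.
KNOWN = {"Who is Tom Cruise's mother?": "Mary Lee Pfeiffer"}

def stub_model(question):
    return KNOWN.get(question, "I don't know.")

pairs = [("Who is Tom Cruise's mother?",
          "Who is Mary Lee Pfeiffer's son?",
          "Mary Lee Pfeiffer", "Tom Cruise")]

print(reversal_gap(pairs, stub_model))  # prints 1: one forward-only fact
```

In the paper itself the same loop runs over roughly 1,000 celebrity/parent pairs against GPT-4; the stub simply makes the asymmetry reproducible here.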
August 6, 2024 / The Guardian | Technology
Technology
Google
Alphabet
Artificial intelligence (AI)
OpenAI
The ChatGPT maker is betting big, while Google hopes its AI tools won’t replace workers but help them to work better.

What if you build it and they don’t come? It’s fair to say the shine is coming off the AI boom. Soaring valuations are starting to look unstable next to the sky-high spending required to sustain them. Over the weekend, one report from tech site the Information estimated that OpenAI was on course to spend an astonishing $5bn more than it makes in revenue this year alone:

“If we’re right, OpenAI, most recently valued at $80bn, will need to raise more cash in the next 12 months or so. We’ve based our analysis on our informed estimates of what OpenAI spends to run its ChatGPT chatbot and train future large language models, plus ‘guesstimates’ of what OpenAI’s staffing would cost, based on its prior projections and what we know about its hiring. Our conclusion pinpoints why so many investors worry about the profit prospects of conversational artificial intelligence.”

From a recent paper arguing that chatbot falsehoods are better described as bullshit than as lies or hallucinations:

“In this paper, we argue against the view that when ChatGPT and the like produce false claims, they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting … Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.”

And on Google’s plans for rolling out its AI tools in schools:

“Part of what’s tricky about us talking about it now is that we actually don’t know exactly what’s going to transpire. What we do know is the first step is going to be sitting down [with the partners] and really understanding the use cases. If it’s school administrators versus people in the classroom, what are the particular tasks we actually want to get after for these folks? If you are a school teacher, some of it might be a simple email with ideas about how to use Gemini in lesson planning, some of it might be formal classroom training, some of it one-on-one coaching. Across 1,200 people there will be a lot of different pilots, each group with around 100 people.”
July 30, 2024 / The Guardian | Technology