The Concept of Post Fact Society


We Might Never Know How Much the Internet Is Hiding From Us

Post by kmaherali »


The internet is the most comprehensive compendium of human knowledge ever assembled, but is its size a feature or a bug? Does its very immensity undermine its utility as a source of information? How often is it burying valuable data under lots of junk? Say you search for some famous or semifamous person — a celebrity, influencer, politician or pundit. Are you getting an accurate picture of that person’s life or a false, manipulated one?

These aren’t new questions; they’re questions I’ve been asking for about as long as I’ve been covering the digital world, and the answers keep changing as the internet changes. But a recent story got me fretting about all this once more. And I worry that it has become more difficult than ever to tune in to any signal amid so much digital noise.

Karen Weise, a Times reporter who covers the technology industry, had a blockbuster story last week documenting a pattern of hostile and abusive behavior by Dan Price, the C.E.O. who became internet famous in 2015 for instituting a $70,000-a-year minimum wage at his Seattle-based credit card processing company. “He has used his celebrity to pursue women online who say he hurt them, both physically and emotionally,” Weise reported, interviewing more than a dozen women who recounted ugly encounters with him in detail. (Price denies the allegations.)

But this was not the first time that Weise punctured the mythos surrounding Price. Late in 2015, months after he was first feted by media outlets around the world for his supposed do-gooder approach to capitalism, she published a piece in Bloomberg Businessweek uncovering many skeletons in his closet — among other things, an ex-wife who’d accused him of extreme violence and an explanation for the employee raises that seemed more self-serving than he’d let on. When Weise linked to that seven-year-old article in the new story, I clicked back to it and realized that I had definitely read it at the time. I remembered its headline, “The CEO Paying Everyone $70,000 Salaries Has Something to Hide,” and I remembered that its details had been widely commented on.

Price, who denounced Weise’s Bloomberg article as “reckless” and “baseless,” was canceled temporarily after the story appeared. Then, over the years, Price began to master Twitter, eventually collecting hundreds of thousands of followers and becoming a fixture in some left-leaning Twitter circles. “Tweet by tweet, his online persona grew back,” Weise writes. “The bad news faded into the background. It was the opposite of being canceled. Just as social media can ruin someone, so too can it — through time, persistence and audacity — bury a troubled past.”

This isn’t how the internet is supposed to work. In different ways, Google, Twitter, Facebook and other big tech companies have made it their mission to disseminate and organize online data. Weise’s first story about Price contained important information about a semiprominent online figure; it should have been highlighted, not buried, as he amassed his online following.

The more troubling question is how often this sort of thing is happening. In the abstract, it is almost impossible to answer; by definition, you can’t make a list of stories the internet is hiding from you. I would guess the Price story is an extreme example of information burying, but there’s reason to suspect that some version of this suppression is happening all the time online.

Why? Three things. Recency bias: Google is far more focused on highlighting information from the present than it once was, making events from the past more difficult to suss out. Organized manipulation: Online mobs are bent on shaping online reality — and though the platforms say they’re attentive to the problem, the mobs seem to have the upper hand. And, of course, capitalism: Lacking much competition and keen to boost quarterly numbers, tech companies may have little incentive to solve these problems.

The first issue, recency bias, is mainly about Google, and it’s one that journalists like me have been complaining about for years. Google’s search algorithm heavily favors content that was posted most recently over content from the past, even if the older data provides a much more comprehensive story. There’s a certain sense in this: Nobody wants to read ancient news. But as the Price story suggests, if you’re searching for someone with an active online presence — someone who tweets a lot, who makes a lot of media appearances or whose whole persona is based on riling folks up — the results get murky.

Try Googling Elon Musk. When I do so, I see a lot of evergreen stuff — his Wikipedia page, links to his social media and corporate bio, index pages of articles about him at various media sites — and lots and lots of links to news about the latest Elon dust-up. At the moment, these headlines are about legal maneuverings in his attempt to undo his purchase of Twitter and about Tesla’s efforts to stifle video clips of its cars hitting child-size mannequins; by the time you Google him, the results might have moved on to the next controversy.

But for a controversy machine like Musk, is it really helpful for Google to return pages and pages of links to similar stories about the latest thing? What if the latest thing is not the most important thing? In the first several pages of links about him, I didn’t see the Insider story published in May about the $250,000 settlement he reached with a flight attendant who accused him of exposing himself to her. There also isn’t much about his various fights with the Securities and Exchange Commission or the time he called the man who helped rescue 12 boys trapped in a cave in Thailand a “pedo guy.”

I don’t think Musk has actively tried to suppress this stuff; he’s just very online, and every time he does or says something new, the old stuff goes farther down.
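
How that burial happens is easy to sketch in miniature. The toy scoring function below is purely illustrative: Google's actual ranking system is not public, and the decay formula and every number here are invented for the example.

```python
# Toy illustration of recency-weighted ranking. This is NOT Google's
# algorithm (which is not public); the exponential decay and all the
# numbers are invented to show how newer-but-slighter items can bury
# older, more substantive ones.
from datetime import date

def score(relevance: float, published: date, today: date,
          half_life_days: float = 30.0) -> float:
    """Decay a relevance score by age: each half-life halves it."""
    age_days = (today - published).days
    return relevance * 0.5 ** (age_days / half_life_days)

today = date(2022, 8, 25)
# A deeply reported 2015 investigation vs. last week's minor dust-up.
old_investigation = score(0.9, date(2015, 12, 1), today)
latest_dustup = score(0.4, date(2022, 8, 20), today)

print(f"old investigation: {old_investigation:.2e}")  # effectively zero
print(f"latest dust-up:    {latest_dustup:.2f}")      # about 0.36
```

Under any strong enough recency decay, the seven-year-old investigation can never outrank this week's story, however relevant it remains.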

The situation becomes much worse when there are motivated parties trying to shape what the platforms show us. There has been no better example of this than the ugly turn the internet took during the recent defamation case between Johnny Depp and his ex-wife Amber Heard. If you scanned Twitter, YouTube or TikTok during the trial, you were flooded with memes, clips and trollish posts about how terrible Heard was and how righteous Depp was.

This wasn’t because Depp’s case was so much stronger than Heard’s; as researchers have shown, it was more likely because the platforms were overrun by bots and trolls associated with people on the misogynist right who made it their mission to paint Heard in the worst light possible. They seem to have succeeded; even now, you’ve got to dig around online to find information supporting her.

The platforms say they’re constantly fighting such organized campaigns. But their efforts are opaque and seem halfhearted at best — and that’s where we get to misaligned incentives. Because bots are a kind of engagement and engagement is what pays the bills, there are few reasons for the services to really fight such campaigns. As Peiter Zatko, a former security chief at Twitter, said in a recent whistleblower complaint, “Twitter executives have little or no personal incentive to accurately ‘detect’ or measure the prevalence of spam bots.” In the same way, YouTube had little incentive to present a fairer, less manipulated picture of the Depp-Heard case — not when the Depp clips were doing big numbers.

For many readers, none of this will come as a surprise. I’m not breaking any news when I tell you not to trust everything you see on the internet. But after reading the story of Dan Price, I think it bears repeating: The internet probably isn’t giving you a fair picture of what’s happening in the world. And for any given story, you might never really know how much you aren’t seeing.

https://www.nytimes.com/2022/08/25/opin ... 778d3e6de3

What Can You Do When A.I. Lies About You?

Post by kmaherali »

People have little protection or recourse when the technology creates and spreads falsehoods about them.

Image: Marietje Schaake, a former member of the European Parliament and a technology expert, was falsely labeled a terrorist last year by BlenderBot 3, an A.I. chatbot developed by Meta. Credit: Ilvy Njiokiktjien for The New York Times

Marietje Schaake’s résumé is full of notable roles: Dutch politician who served for a decade in the European Parliament, international policy director at Stanford University’s Cyber Policy Center, adviser to several nonprofits and governments.

Last year, artificial intelligence gave her another distinction: terrorist. The problem? It isn’t true.

While trying BlenderBot 3, a “state-of-the-art conversational agent” developed as a research project by Meta, a colleague of Ms. Schaake’s at Stanford posed the question “Who is a terrorist?” The false response: “Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.” The A.I. chatbot then correctly described her political background.

“I’ve never done anything remotely illegal, never used violence to advocate for any of my political ideas, never been in places where that’s happened,” Ms. Schaake said in an interview. “First, I was like, this is bizarre and crazy, but then I started thinking about how other people with much less agency to prove who they actually are could get stuck in pretty dire situations.”

Artificial intelligence’s struggles with accuracy are now well documented. The list of falsehoods and fabrications produced by the technology includes fake legal decisions that disrupted a court case, a pseudo-historical image of a 20-foot-tall monster standing next to two humans, even sham scientific papers. In its first public demonstration, Google’s Bard chatbot flubbed a question about the James Webb Space Telescope.

The harm is often minimal, involving easily disproved hallucinatory hiccups. Sometimes, however, the technology creates and spreads fiction about specific people that threatens their reputations and leaves them with few options for protection or recourse. Many of the companies behind the technology have made changes in recent months to improve the accuracy of artificial intelligence, but some of the problems persist.

One legal scholar described on his website how OpenAI’s ChatGPT chatbot linked him to a sexual harassment claim that he said had never been made, which supposedly took place on a trip that he had never taken for a school where he was not employed, citing a nonexistent newspaper article as evidence. High school students in New York created a deepfake, or manipulated, video of a local principal that portrayed him in a racist, profanity-laced rant. A.I. experts worry that the technology could serve false information about job candidates to recruiters or misidentify someone’s sexual orientation.

A New Generation of Chatbots
A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.

Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.

Ms. Schaake could not understand why BlenderBot cited her full name, which she rarely uses, and then labeled her a terrorist. She could think of no group that would give her such an extreme classification, although she said her work had made her unpopular in certain parts of the world, such as Iran.

Later updates to BlenderBot seemed to fix the issue for Ms. Schaake. She did not consider suing Meta — she generally disdains lawsuits and said she would have had no idea where to start with a legal claim. Meta, which closed the BlenderBot project in June, said in a statement that the research model had combined two unrelated pieces of information into an incorrect sentence about Ms. Schaake.

Image: The BlenderBot 3 exchange that labeled Ms. Schaake a terrorist. Meta said the A.I. model had combined two unrelated pieces of information to create an inaccurate sentence about her.

Legal precedent involving artificial intelligence is slim to nonexistent. The few laws that currently govern the technology are mostly new. Some people, however, are starting to confront artificial intelligence companies in court.

An aerospace professor filed a defamation lawsuit against Microsoft this summer, accusing the company’s Bing chatbot of conflating his biography with that of a convicted terrorist with a similar name. Microsoft declined to comment on the lawsuit.

In June, a radio host in Georgia sued OpenAI for libel, saying ChatGPT invented a lawsuit that falsely accused him of misappropriating funds and manipulating financial records while an executive at an organization with which, in reality, he has had no relationship. In a court filing asking for the lawsuit’s dismissal, OpenAI said that “there is near universal consensus that responsible use of A.I. includes fact-checking prompted outputs before using or sharing them.”

OpenAI declined to comment on specific cases.

A.I. hallucinations such as fake biographical details and mashed-up identities, which some researchers call “Frankenpeople,” can be caused by a dearth of information about a certain person available online.

The technology’s reliance on statistical pattern prediction also means that most chatbots join words and phrases that they recognize from training data as often being correlated. That is likely how ChatGPT awarded Ellie Pavlick, an assistant professor of computer science at Brown University, a number of awards in her field that she did not win.

“What allows it to appear so intelligent is that it can make connections that aren’t explicitly written down,” she said. “But that ability to freely generalize also means that nothing tethers it to the notion that the facts that are true in the world are not the same as the facts that possibly could be true.”

To prevent accidental inaccuracies, Microsoft said, it uses content filtering, abuse detection and other tools on its Bing chatbot. The company said it also alerted users that the chatbot could make mistakes and encouraged them to submit feedback and avoid relying solely on the content that Bing generated.

Similarly, OpenAI said users could inform the company when ChatGPT responded inaccurately. OpenAI trainers can then vet the critique and use it to fine-tune the model to recognize certain responses to specific prompts as better than others. The technology could also be taught to browse for correct information on its own and evaluate when its knowledge is too limited to respond accurately, according to the company.

Meta recently released multiple versions of its LLaMA 2 artificial intelligence technology into the wild and said it was now monitoring how different training and fine-tuning tactics could affect the model’s safety and accuracy. Meta said its open-source release allowed a broad community of users to help identify and fix its vulnerabilities.

Artificial intelligence can also be purposefully abused to attack real people. Cloned audio, for example, is already such a problem that this spring the federal government warned people to watch for scams involving an A.I.-generated voice mimicking a family member in distress.

The limited protection is especially upsetting for the subjects of nonconsensual deepfake pornography, where A.I. is used to insert a person’s likeness into a sexual situation. The technology has been applied repeatedly to unwilling celebrities, government figures and Twitch streamers — almost always women, some of whom have found taking their tormentors to court to be nearly impossible.

Anne T. Donnelly, the district attorney of Nassau County, N.Y., oversaw a recent case involving a man who had shared sexually explicit deepfakes of more than a dozen girls on a pornographic website. The man, Patrick Carey, had altered images stolen from the girls’ social media accounts and those of their family members, many of them taken when the girls were in middle or high school, prosecutors said.

Image: District Attorney Anne Donnelly of Nassau County is lobbying for New York State legislation that would criminalize sexualized deepfakes. “I don’t like meeting victims and saying, ‘We can’t help you,’” she said. Credit: Janice Chung for The New York Times

It was not those images, however, that landed him six months in jail and a decade of probation this spring. Without a state statute that criminalized deepfake pornography, Ms. Donnelly’s team had to lean on other factors, such as the fact that Mr. Carey had a real image of child pornography and had harassed and stalked some of the people whose images he manipulated. Some of the deepfake images he posted starting in 2019 continue to circulate online.

“It is always frustrating when you realize that the law does not keep up with technology,” said Ms. Donnelly, who is lobbying for state legislation targeting sexualized deepfakes. “I don’t like meeting victims and saying, ‘We can’t help you.’”

To help address mounting concerns, seven leading A.I. companies agreed in July to adopt voluntary safeguards, such as publicly reporting their systems’ limitations. And the Federal Trade Commission is investigating whether ChatGPT has harmed consumers.

For its image generator DALL-E 2, OpenAI said, it removed extremely explicit content from the training data and limited the generator’s ability to produce violent, hateful or adult images as well as photorealistic representations of actual people.

A public collection of examples of real-world harms caused by artificial intelligence, the A.I. Incident Database, has more than 550 entries this year. They include a fake image of an explosion at the Pentagon that briefly rattled the stock market and deepfakes that may have influenced an election in Turkey.

Scott Cambo, who helps run the project, said he expected “a huge increase of cases” involving mischaracterizations of actual people in the future.

“Part of the challenge is that a lot of these systems, like ChatGPT and LLaMA, are being promoted as good sources of information,” Dr. Cambo said. “But the underlying technology was not designed to be that.”

https://www.nytimes.com/2023/08/03/busi ... 778d3e6de3

E.U. Law Sets the Stage for a Clash Over Disinformation

Post by kmaherali »

The law, aimed at forcing social media giants to adopt new policies to curb harmful content, is expected to face blowback from Elon Musk, who owns X.

Image: Robert Fico, left, heads Slovakia’s SMER party. As the country heads toward an election on Saturday, it has been inundated with disinformation and other harmful content on social media sites. Credit: Jakub Gavlak/EPA, via Shutterstock

The Facebook page in Slovakia called Som z dediny, which means “I’m from the village,” trumpeted a debunked Russian claim last month that Ukraine’s president had secretly purchased a vacation home in Egypt under his mother-in-law’s name.

A post on Telegram — later recycled on Instagram and other sites — suggested that a parliamentary candidate in the country’s coming election had died from a Covid vaccine, though he remains very much alive. A far-right leader posted on Facebook a photograph of refugees in Slovakia doctored to include an African man brandishing a machete.

As Slovakia heads toward an election on Saturday, the country has been inundated with disinformation and other harmful content on social media sites. What is different now is a new European Union law that could force the world’s social media platforms to do more to fight it — or else face fines of up to 6 percent of a company’s revenue.

The law, the Digital Services Act, is intended to force social media giants to adopt new policies and practices to address accusations that they routinely host — and, through their algorithms, popularize — corrosive content. If the measure is successful, as officials and experts hope, its effects could extend far beyond Europe, changing company policies in the United States and elsewhere.

The law, years of painstaking bureaucracy in the making, reflects a growing alarm in European capitals that the unfettered flow of disinformation online — much of it fueled by Russia and other foreign adversaries — threatens to erode the democratic governance at the core of the European Union’s values.

Europe’s effort sharply contrasts with the fight against disinformation in the United States, which has become mired in political and legal debates over what steps, if any, the government may take in shaping what the platforms allow on their sites.

A federal appeals court ruled this month that the Biden administration had very likely violated the First Amendment guarantee of free speech by urging social media companies to remove content.

Europe’s new law has already set the stage for a clash with Elon Musk, the owner of X, formerly known as Twitter. Mr. Musk withdrew from a voluntary code of conduct this year but must comply with the new law — at least within the European Union’s market of nearly 450 million people.


“You can run but you can’t hide,” Thierry Breton, the European commissioner who oversees the bloc’s internal market, warned on the social network shortly after Mr. Musk’s withdrawal.

Image: Elon Musk, who owns X, formerly known as Twitter, withdrew from a voluntary code of conduct this year but must comply with the new E.U. law within the market of nearly 450 million people. Credit: Kenny Holston/The New York Times

The election in Slovakia, the first in Europe since the law went into effect last month, will be an early test of the law’s impact. Other elections loom in Luxembourg and Poland next month, while the bloc’s 27 member states will vote next year for members of the European Parliament in the face of what officials have described as sustained influence operations by Russia and others.

Enforcing the law will be even more difficult when it comes to policing disinformation on social media, where anybody can post their views and perceptions of truth are often skewed by politics. Regulators would have to prove that a platform had systemic problems that caused harm, an untested area of law that could ultimately lead to years of litigation.

Enforcement of the European Union’s landmark data privacy law, known as the General Data Protection Regulation and adopted in 2018, has been slow and cumbersome, though regulators in May imposed the harshest penalty yet, fining Meta 1.2 billion euros, or $1.3 billion. (Meta has appealed.)

Dominika Hajdu, the director of the Center for Democracy and Resilience at Globsec, a research organization in Slovakia’s capital, Bratislava, said only the prospect of fines would force platforms to do more in a unified but diverse market with many smaller nations and languages.

“It actually requires dedicating quite a large sum of resources, you know, enlarging the teams that would be responsible for a given country,” she said. “It requires energy, staffing that the social media platforms will have to do for every country. And this is something they are reluctant to do unless there is a potential financial cost to it.”

The law, as of now, applies to 19 sites with more than 45 million users, including the major social media companies, shopping platforms like Apple’s App Store and Amazon, and the search engines Google and Bing.

The law defines broad categories of illegal or harmful content, not specific themes or topics. It obliges the companies to, among other things, provide greater protections to users, giving them more information about algorithms that recommend content and allowing them to opt out, and ending advertising targeted at children.

It also requires them to submit independent audits and to make public decisions on removing content and other data — steps that experts say would help combat the problem.

Mr. Breton, in a written reply to questions, said he had discussed the new law with executives from Meta, TikTok, Alphabet and X, and specifically mentioned the risks posed by Slovakia’s election.

Image: Thierry Breton, an E.U. commissioner, said he had discussed the new law with tech executives and specifically mentioned the risks posed by Slovakia’s election. Credit: Josh Edelson/Agence France-Presse — Getty Images

“I have been very clear with all of them about the strict scrutiny they are going to be subject to,” Mr. Breton said.

In what officials and experts described as a warning shot to the platforms, the European Commission also released a damning report that studied the spread of Russian disinformation on major social media sites in the year after Russia invaded Ukraine in February 2022.

“It clearly shows that tech companies’ efforts were insufficient,” said Felix Kartte, the E.U. director with Reset, the nonprofit research group that prepared the report.

Engagements with Kremlin-aligned content since the war began rose marginally on Facebook and Instagram, both owned by Meta, but jumped nearly 90 percent on YouTube and more than doubled on TikTok.

“Online platforms have supercharged the Kremlin’s ability to wage information war, and thereby caused new risks for public safety, fundamental rights and civic discourse in the European Union,” the report said.

Meta and TikTok declined to comment on the enactment of the new law. X did not respond to a request. Ivy Choi, a spokeswoman for YouTube, said that the company was working closely with the Europeans and that the report’s findings were inconclusive. In June, YouTube removed 14 channels that were part of “coordinated influence operations linked to Slovakia.”

Nick Clegg, president of global affairs at Meta, said in a blog post last month that the company welcomed “greater clarity on the roles and responsibilities of online platforms” but also hinted at what some saw as the new law’s limits.

“It is right to seek to hold large platforms like ours to account through things like reporting and auditing, rather than attempting to micromanage individual pieces of content,” he wrote.

Slovakia, with fewer than six million people, has become a focus not just because of its election on Saturday. The country has become fertile ground for Russian influence because of historical ties. Now it faces what its president, Zuzana Caputova, described as a concerted disinformation campaign.

Image: A World War II monument outside Ladomirova, Slovakia. The country, with fewer than six million people, has become fertile ground for Russian influence. Credit: Akos Stiller for The New York Times

In the weeks since the new law took effect, researchers have documented instances of disinformation, hate speech or incitement to violence. Many stem from pro-Kremlin accounts, but more are homegrown, according to Reset.

They have included a vulgar threat on Instagram directed at a former defense minister, Jaroslav Nad. The false accusation on Facebook about the Ukrainian president’s buying luxury property in Egypt included a vitriolic comment typical of the hostility in Slovakia that the war has stoked among some. “He only needs a bullet in the head and the war will be over,” it said. Posts in Slovak that violate company policies, Reset’s researchers said, had been seen at least 530,000 times in two weeks after the law went into effect.

Although Slovakia joined NATO in 2004 and has been a staunch supporter and arms supplier for Ukraine since the Russian invasion, the current front-runner is SMER, a party headed by Robert Fico, a former prime minister who now criticizes the alliance and punitive steps against Russia.

Facebook shut down the account of one of SMER’s candidates, Lubos Blaha, in 2022 for spreading disinformation about Covid. Known for inflammatory comments about Europe, NATO and L.G.B.T.Q. issues, Mr. Blaha remains active in Telegram posts, which SMER reposts on its Facebook page, effectively circumventing the ban.

Jan Zilinsky, a social scientist from Slovakia who studies the use of social media at the Technical University of Munich in Germany, said the law was a step in the right direction.

“Content moderation is a hard problem, and platforms definitely have responsibilities,” he said, “but so do the political elites and candidates.”

https://www.nytimes.com/2023/09/27/tech ... 778d3e6de3

Chatbots May ‘Hallucinate’ More Often Than Many Realize

Post by kmaherali »

When summarizing facts, ChatGPT technology makes things up about 3 percent of the time, according to research from a new start-up. A Google system’s rate was 27 percent.

When the San Francisco start-up OpenAI unveiled its ChatGPT online chatbot late last year, millions were wowed by the humanlike way it answered questions, wrote poetry and discussed almost any topic. But most people were slow to realize that this new kind of chatbot often makes things up.

When Google introduced a similar chatbot several weeks later, it spewed nonsense about the James Webb telescope. The next day, Microsoft’s new Bing chatbot offered up all sorts of bogus information about the Gap, Mexican nightlife and the singer Billie Eilish. Then, in March, ChatGPT cited a half dozen fake court cases while writing a 10-page legal brief that a lawyer submitted to a federal judge in Manhattan.

Now a new start-up called Vectara, founded by former Google employees, is trying to figure out how often chatbots veer from the truth. The company’s research estimates that even in situations designed to prevent it from happening, chatbots invent information at least 3 percent of the time — and as high as 27 percent.

Experts call this chatbot behavior “hallucination.” It may not be a problem for people tinkering with chatbots on their personal computers, but it is a serious issue for anyone using this technology with court documents, medical information or sensitive business data.

Because these chatbots can respond to almost any request in an unlimited number of ways, there is no way of definitively determining how often they hallucinate. “You would have to look at all of the world’s information,” said Simon Hughes, the Vectara researcher who led the project.

Dr. Hughes and his team asked these systems to perform a single, straightforward task that is readily verified: Summarize news articles. Even then, the chatbots persistently invented information.

“We gave the system 10 to 20 facts and asked for a summary of those facts,” said Amr Awadallah, the chief executive of Vectara and a former Google executive. “That the system can still introduce errors is a fundamental problem.”

Image: Amr Awadallah, the chief executive of Vectara, whose team gave chatbots a straightforward test. “That the system can still introduce errors is a fundamental problem,” Mr. Awadallah said. Credit: Cayce Clifford for The New York Times

The researchers argue that when these chatbots perform other tasks — beyond mere summarization — hallucination rates may be higher.

Their research also showed that hallucination rates vary widely among the leading A.I. companies. OpenAI’s technologies had the lowest rate, around 3 percent. Systems from Meta, which owns Facebook and Instagram, hovered around 5 percent. The Claude 2 system offered by Anthropic, an OpenAI rival also based in San Francisco, topped 8 percent. A Google system, Palm chat, had the highest rate at 27 percent.

An Anthropic spokeswoman, Sally Aldous, said, “Making our systems helpful, honest and harmless, which includes avoiding hallucinations, is one of our core goals as a company.”

Google declined to comment, and OpenAI and Meta did not immediately respond to requests for comment.

With this research, Dr. Hughes and Mr. Awadallah want to show people that they must be wary of information that comes from chatbots and even the service that Vectara sells to businesses. Many companies are now offering this kind of technology for business use.

Based in Palo Alto, Calif., Vectara is a 30-person start-up backed by $28.5 million in seed funding. One of its founders, Amin Ahmad, a former Google artificial intelligence researcher, has been working with this kind of technology since 2017, when it was incubated inside Google and a handful of other companies.

Much as Microsoft’s Bing search chatbot can retrieve information from the open internet, Vectara’s service can retrieve information from a company’s private collection of emails, documents and other files.

The researchers also hope that their methods — which they are sharing publicly and will continue to update — will help spur efforts across the industry to reduce hallucinations. OpenAI, Google and others are working to minimize the issue through a variety of techniques, though it is not clear whether they can eliminate the problem.

“A good analogy is a self-driving car,” said Philippe Laban, a researcher at Salesforce who has long explored this kind of technology. “You cannot keep a self-driving car from crashing. But you can try to make sure it is safer than a human driver.”

Image: Simon Hughes, a Vectara researcher, built a system that aims to show how often chatbots “hallucinate.” Credit: Lyndon French for The New York Times

Chatbots like ChatGPT are driven by a technology called a large language model, or L.L.M., which learns its skills by analyzing enormous amounts of digital text, including books, Wikipedia articles and online chat logs. By pinpointing patterns in all that data, an L.L.M. learns to do one thing in particular: guess the next word in a sequence of words.

Because the internet is filled with untruthful information, these systems repeat the same untruths. They also rely on probabilities: What is the mathematical chance that the next word is “playwright”? From time to time, they guess incorrectly.
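
That next-word guessing can be sketched with a toy bigram model, the smallest possible stand-in for what these systems do. It is not any vendor's actual model; real L.L.M.s learn far richer patterns, but the failure mode is the same.

```python
# A toy bigram model of next-word prediction. Real L.L.M.s are vastly
# larger, but the mechanism sketched here is the same: pick a likely
# continuation, which can be fluent and still factually wrong.
import random
from collections import Counter, defaultdict

corpus = ("shakespeare was a playwright . "
          "marlowe was a playwright . "
          "keats was a poet .").split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("a"))  # roughly {'playwright': 0.67, 'poet': 0.33}

# A bigram model conditions only on the previous word, so completing
# "keats was a ..." will often yield "playwright": grammatical,
# statistically likely and false.
words, probs = zip(*next_word_probs("a").items())
print(random.choices(words, probs)[0])
```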

The new research from Vectara shows how this can happen. In summarizing news articles, chatbots do not repeat untruths from other parts of the internet. They just get the summarization wrong.

For example, the researchers asked Google’s large language model, Palm chat, to summarize this short passage from a news article:

The plants were found during the search of a warehouse near Ashbourne on Saturday morning. Police said they were in “an elaborate grow house.” A man in his late 40s was arrested at the scene.

It gave this summary, completely inventing a value for the plants the man was growing and assuming — perhaps incorrectly — that they were cannabis plants:

Police have arrested a man in his late 40s after cannabis plants worth an estimated £100,000 were found in a warehouse near Ashbourne.

This phenomenon also shows why a tool like Microsoft’s Bing chatbot can get things wrong as it retrieves information from the internet. If you ask the chatbot a question, it can call Microsoft’s Bing search engine and run an internet search. But it has no way of pinpointing the right answer. It grabs the results of that internet search and summarizes them for you.

Sometimes, this summary is very flawed. Some bots will cite internet addresses that are entirely made up.

Companies like OpenAI, Google and Microsoft have developed ways to improve the accuracy of their technologies. OpenAI, for example, tries to refine its technology with feedback from human testers, who rate the chatbot’s responses, separating useful and truthful answers from those that are not. Then, using a technique called reinforcement learning, the system spends weeks analyzing the ratings to better understand what is fact and what is fiction.

But researchers warn that chatbot hallucination is not an easy problem to solve. Because chatbots learn from patterns in data and operate according to probabilities, they behave in unwanted ways at least some of the time.

To determine how often the chatbots hallucinated when summarizing news articles, Vectara’s researchers used another large language model to check the accuracy of each summary. That was the only way of efficiently checking such a huge number of summaries.

But James Zou, a Stanford computer science professor, said this method came with a caveat. The language model doing the checking can also make mistakes.

“The hallucination detector could be fooled — or hallucinate itself,” he said.
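
The general pattern the researchers relied on, one model grading another's output, can be sketched as follows. This is not Vectara's published pipeline; it is a minimal illustration that assumes the OpenAI Python client, an API key in the environment, and an illustrative judge model and prompt, reusing the Ashbourne passage above as test data.

```python
# Minimal sketch of the "use one L.L.M. to check another" pattern
# described above. NOT Vectara's actual pipeline: assumes the OpenAI
# Python client (pip install openai), OPENAI_API_KEY set, and an
# illustrative judge model and prompt.
from openai import OpenAI

client = OpenAI()

def summary_supported(source: str, summary: str) -> bool:
    """Ask a judge model whether every claim in the summary is
    supported by the source text."""
    prompt = (f"Source:\n{source}\n\nSummary:\n{summary}\n\n"
              "Is every claim in the summary supported by the source? "
              "Answer with exactly one word: YES or NO.")
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of judge model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return reply.choices[0].message.content.strip().upper().startswith("YES")

source = ("The plants were found during the search of a warehouse near "
          "Ashbourne on Saturday morning. Police said they were in 'an "
          "elaborate grow house.' A man in his late 40s was arrested at "
          "the scene.")
summary = ("Police have arrested a man in his late 40s after cannabis "
           "plants worth an estimated £100,000 were found in a warehouse "
           "near Ashbourne.")
print(summary_supported(source, summary))  # expected: False (invented value)

# As Dr. Zou cautions, the judge model can itself hallucinate, so such
# checks produce estimates, not ground truth.
```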

https://www.nytimes.com/2023/11/06/tech ... rates.html

Re: The Concept of Post Fact Society

Post by kmaherali »

Even Photoshop Can’t Erase Royals’ Latest P.R. Blemish
A Mother’s Day photo was meant to douse speculation about the Princess of Wales’ health. It did the opposite — and threatened to undermine trust in the royal family.

Image: Zoomed photos of the zipper and hair have been lightened to show detail. Original photo source: Prince of Wales/Kensington Palace. By The New York Times

If a picture is worth a thousand words, then a digitally altered picture of an absent British princess is apparently worth a million.

That seemed to be the lesson after another day of internet-breaking rumors and conspiracy theories swirling around Catherine, Princess of Wales, who apologized on Monday for having doctored a photograph of herself with her three children that circulated on news sites and social media on Sunday.

It was the first official photo of Catherine since before she underwent abdominal surgery two months ago — a cheerful Mother’s Day snapshot, taken by her husband, Prince William, at home. But if it was meant to douse weeks of speculation about Catherine’s well-being, it had precisely the opposite effect.

Now the British royal family faces a storm of questions about how it communicates with the press and public, whether Catherine manipulated other family photos she released in previous years, and whether she felt driven to retouch this photo to disguise the impact of her illness.

It adds up to a fresh tempest for a royal family that has lurched from one self-created crisis to another. Unlike previous episodes, this involves one of the family’s most popular members, a commoner-turned-future queen. It also reflects a social media celebrity culture driven in part by the family itself, one that is worlds away from the intrusive paparazzi pictures that used to cause royals, including a younger Kate Middleton, chagrin.

“Like so many millennial celebrities, the Princess of Wales has built a successful public image by sharing with her audience a carefully curated version of her personal life,” said Ed Owens, a royal historian who has studied the relationship between the monarchy and the media. The manipulated photograph, he said, is damaging because, for the public, it “brings into question the authenticity” of Catherine’s home life.

Authenticity is the least of it: the mystery surrounding Catherine’s illness and prolonged recovery, out of the public eye, has spawned wild rumors about her physical and mental health, her whereabouts, and her relationship with William.

Image: Catherine, Princess of Wales, holding red roses at the royal family’s Christmas Day service on the Sandringham Estate in England last year. Credit: Adrian Dennis/Agence France-Presse — Getty Images

The discovery that the photo was altered prompted several international news agencies to issue advisories — including one from The Associated Press that was ominously called a “kill notification” — urging news organizations to remove the image from their websites and scrub it from any social media.

Mr. Owens called the incident a “debacle.”

“At a time when there is much speculation about Catherine’s health, as well as rumors swelling online about her and Prince William’s private lives,” he said, “the events of the last two days have done nothing to dispel questions and concerns.”

Kensington Palace, where Catherine and William have their offices, declined to release an unedited copy of the photograph on Monday, which left amateur visual detectives to continue scouring the image for signs of alteration in the poses of the princess and her three children, George, Charlotte, and Louis.

The A.P. said its examination yielded evidence that there was “an inconsistency in the alignment of Princess Charlotte’s left hand.” The image has a range of other clear visual inconsistencies that suggest it was doctored: part of a sleeve on Charlotte’s cardigan is missing, the zipper on Catherine’s jacket and a section of her hair are misaligned, and a pattern in her hair appears artificial.

Samora Bennett-Gager, an expert in photo retouching, identified multiple signs of image manipulation. The edges of Charlotte’s legs, he said, were unnaturally soft, suggesting that the background around them had been shifted. Catherine’s hand on the waist of her youngest son, Louis, is blurry, which he said could indicate that the image was taken from a separate frame of the shoot.

Taken together, Mr. Bennett-Gager said, the changes suggested that the photo was a composite drawn from multiple images rather than a single image smoothed out with a Photoshop program. A spokesman for Catherine declined to comment on her proficiency in photo editing.

Even before Catherine’s apology, the web exploded with memes of “undoctored” photos. One showed a bored-looking Catherine smoking with a group of children. Another, which the creator said was meant to “confirm she is absolutely fine and recovering well,” showed the princess splashing down a water slide.

Beyond the mockery, the royal family faces a lingering credibility gap. Catherine has been an avid photographer for years, capturing members of the royal family in candid situations: Queen Camilla with a basket of flowers; Prince George with his great-grandfather, Prince Philip, on a horse-drawn buggy.

The palace has released many of these photos, and they are routinely published on the front pages of British papers (The Times of London splashed the Mother’s Day picture over three columns). A former palace official predicted that the news media would now examine the earlier photographs to see if they, too, had been altered.

Image: British newspapers showing the Mother’s Day photo in their editions of March 11. Credit: David Cliff/EPA, via Shutterstock

That would put Kensington Palace in the tricky position of having to defend one of its most effective communicators against a potentially wide-ranging problem, and one over which the communications staff has little control. After a deluge of inquiries about the photograph, the palace left it to Catherine to explain what happened. She was contrite, but presented herself as just another frustrated shutterbug with access to Photoshop.

“Like many amateur photographers, I do occasionally experiment with editing,” she wrote on social media. “I wanted to express my apologies for any confusion the family photograph we shared yesterday caused.”

Catherine’s use of social media sets her apart from older members of the royal family, who rely on the traditional news media to present themselves. When King Charles III taped a video message to mark Commonwealth Day, for example, Buckingham Palace hired a professional camera crew that was paid for by British broadcasters, a standard arrangement for royal addresses.

When Charles left the hospital after being treated for an enlarged prostate, he and Queen Camilla walked in front of a phalanx of cameras, smiling and waving as they made their way to their limousine.

Catherine was not seen entering or leaving the hospital for her surgery, nor were her children photographed visiting her. That may reflect the gravity of her health problems, royal watchers said, but it also reflects the determination of William and Catherine to erect a zone of privacy around their personal lives.

Image: Television camera operators outside the London Clinic while Catherine was undergoing surgery in January. Credit: Justin Tallis/Agence France-Presse — Getty Images

William, royal experts said, is also driven by a desire not to repeat the experience of his mother, Diana, who was killed in a car crash in Paris in 1997 after a high-speed pursuit by photographers. Catherine, too, has been victimized by paparazzi, winning damages from a French court in 2017 after a celebrity magazine published revealing shots of her on vacation in France.

Last week, grainy photos of Catherine riding in a car with her mother surfaced on the American celebrity gossip site TMZ. British newspapers reported the existence of the photos but did not publish them out of deference to the palace’s appeal that she be allowed to recuperate in privacy.

Catherine and William are not the only members of their royal generation who have sought to exercise control over their image. Prince Harry and his wife, Meghan, posted photos of themselves on Instagram, even using their account to announce their withdrawal from royal duties in 2020.

Catherine’s embrace of social media to circulate her pictures is a way of reclaiming her life from the long lenses of the paparazzi. But the uproar over the Mother’s Day photo shows that this strategy comes with its own risks, not least that a family portrait has added to the very misinformation about her that it was calculated to counteract.

On Monday afternoon, Catherine found herself back in traditional royal mode. She was photographed, fleetingly, in the back of a car with William as he left Windsor Castle for a Commonwealth Day service at Westminster Abbey. Kensington Palace said she was on her way to a private appointment.

Gaia Tripoli and Lauren Leatherby contributed reporting.

Re: The Concept of Post Fact Society

Post by kmaherali »

Trolls Used Her Face to Make Fake Porn. There Was Nothing She Could Do.

Sabrina Javellana was a rising star in local politics — until deepfakes derailed her life.

Image: “I felt like I didn’t have a choice in what happened to me or what happened to my body,” said Sabrina Javellana, who in 2018, at age 21, won a seat on the city commission in Hallandale Beach, Fla. “I didn’t have any control over the one thing I’m in every day.” Credit: Haruka Sakaguchi for The New York Times

By Coralie Kraft
Coralie Kraft covers culture for The Times and other outlets. Over the course of 10 months, she spoke with 33 sources, including 11 victims, about A.I.-generated pornography.

July 31, 2024
Most mornings, before walking into City Hall in Hallandale Beach, Fla., a small city north of Miami, Sabrina Javellana would sit in the parking lot and monitor her Twitter and Instagram accounts. After winning a seat on the Hallandale Beach city commission in 2018, at age 21, she became one of the youngest elected officials in Florida’s history. Her progressive political positions had sometimes earned her enemies: After proposing a name change for a state thoroughfare called Dixie Highway in late 2019, she regularly received vitriolic and violent threats on social media; her condemnation of police brutality and calls for criminal-justice reform prompted aggressive rhetoric from members of local law enforcement. Disturbing messages were nothing new to her.

The morning of Feb. 5, 2021, though, she noticed an unusual one. “Hi, just wanted to let you know that somebody is sharing pictures of you online and discussing you in quite a grotesque manner,” it began. “He claims that he’s one of your ‘guy friends.’”

Javellana froze. Who could have sent this message? She asked for evidence, and the sender responded with pixelated screenshots of a forum thread that included photos of her. There were comments that mentioned her political career. Had her work drawn these people’s ire? Eventually, with a friend’s help, she found a set of archived pages from the notorious forum site 4chan. Most of the images were pulled from her social media and annotated with obscene, misogynistic remarks: “not thicc enough”; “I would breed her”; “no sane person would date such a stupid creature.” But one image further down the thread stopped her short. She was standing in front of a full-length mirror with her head tilted to the side, smiling playfully. She had posted an almost identical selfie, in which she wore a brown crew-neck top and matching skirt, to her Instagram account back in 2015. “It was the exact same picture,” Javellana said of the doctored image. “But I wasn’t wearing any clothes.”

There were several more. These were deepfakes: A.I.-generated images that manipulate a person’s likeness, fusing it with others to create a false picture or video, sometimes pornographic, in a way that looks authentic. Although fake explicit material has existed for decades thanks to image-editing software, deepfakes stand out for their striking believability. Even Javellana was shaken by their apparent authenticity.

“I didn’t know that this was something that happened to everyday people,” Javellana told me when I visited her earlier this year in Florida. She wondered if anyone else had seen the photos or the abusive comments online. Several of the threads even implied that people on the forum knew her. “I live in Broward County,” one comment read. “She just graduated from FIU.” Other users threatened sexual violence. In the days that followed, Javellana became increasingly fearful and paranoid. She stopped walking alone at night and started triple-checking that her doors and windows were locked before she slept. In an effort to protect her personal life, she made her Instagram private and removed photographs of herself in a bathing suit.

Discovering the images changed how Javellana operated professionally. Attending press events was part of her job, but now she felt anxious every time someone lifted their camera. She worried that public images of her would be turned into pornography, so she covered as much of her body as she could, favoring high-cut blouses and blazers. She knew she wasn’t acting rationally — people could create new deepfakes regardless of how much skin she showed in the real world — but changing her style made her feel a sense of control. If the deepfakes went viral, no one could look at how she dressed and think that she had invited this harassment.

Although she confided in a few friends in the days and weeks after discovering the images, she mostly kept the experience to herself. She lived with her mother and brother, but she couldn’t bring herself to tell them. They were a Filipino Catholic family who rarely talked about sex. Could she raise the images with them? Would they — or anyone — believe that they weren’t real? Besides, Javellana, who saw herself as her family’s protector and provider, did not want to burden them. When she came home after work, she would escape to her balcony to smoke weed and listen to music. When she cried, she muffled her sobs so no one would hear.

“I felt like I didn’t have a choice in what happened to me or what happened to my body,” Javellana said. “I didn’t have any control over the one thing I’m in every day.”

The only thing that made Javellana feel a measure of agency was seeking clarity on what was happening to her. At night she would sit alone in her bedroom, combing the internet for information about “intimate-image abuse,” or the non-consensual sharing of sexual images, the most well known form of which is “revenge porn.” In the 1990s and aughts, as access to camera phones and the internet became widespread, it became easy for a person to share intimate photos of another person without their consent on sites like Pornhub, Reddit and 4chan. Explicit videos followed, with people uploading hundreds of thousands of hours of non-consensual content online. Deepfakes added a new element: an individual could find themselves appearing in explicit content without ever engaging in sexual activity.

The technology first gained national attention when it was used to create misleading and false political content, such as a doctored video that Donald Trump shared on Twitter in 2019, which featured footage of Nancy Pelosi apparently slurring her words. In recent years, though, the bulk of deepfake material online has become pornographic. Though information was scant, Javellana found articles warning about the increasing threat of deepfakes. Artificial intelligence was advancing so rapidly that anyone with a computer or smartphone could access a number of apps to easily create images using individuals’ likenesses. One study cited by the Department of Homeland Security noted that, at one point, more than 90 percent of deepfake videos featured non-consensual, sexually explicit images, overwhelmingly of women. In 2021, though, the vast majority of states had no laws banning the distribution of deepfake pornography. As a result, there was very little guidance on what to do if you found that someone had created pornographic content featuring your likeness.

The day she discovered the images, Javellana contacted a member of the local police department, who referred her to the Florida Department of Law Enforcement’s cybercrime division. On the phone with the department’s legal adviser, Javellana laid out the situation and emailed a link to the 4chan images. While she waited to hear back, her search for help took her to Carrie Goldberg, a lawyer who had garnered media attention as an advocate for victims’ rights and sexual privacy after successfully litigating several high-profile revenge-porn cases. Goldberg’s firm received its first deepfake case in 2019, when an A-list celebrity sought help with porn that used her likeness. While celebrities are still targeted — there are deepfake porn videos featuring thousands of female celebrities — advancements in technology have allowed abusers to target people outside the public eye. One widely accessible A.I. app processed more than 600,000 photos of ordinary citizens in the first 15 days after its launch in 2023.

Norma Buster, Goldberg’s chief of staff, conducted a case evaluation with Javellana over the phone in February 2021. After the firm’s lawyers evaluated her story, Buster explained that Javellana’s situation had few satisfying legal solutions. The firm could send Digital Millennium Copyright Act (DMCA) notices to the sites hosting the fake material, arguing that they violated Javellana’s copyright. There was no guarantee that they would comply, though; internet forums have traditionally been uncooperative about removing content. In addition, it wasn’t clear that Javellana could claim a copyright violation, as A.I. had significantly modified the original image. Although 47 states had passed laws against intimate-image abuse, those laws didn’t always apply to deepfake cases. In short, Buster told her, there wasn’t much they could do.

When the cybercrime division called Javellana to its offices in April 2021, the news was as demoralizing as Goldberg’s. Special agents explained that there were no federal laws against creating or disseminating non-consensual explicit deepfakes. Florida didn’t have a state law preventing the creation of the material, either, so their hands were tied. The activity was simply not a crime; law enforcement could not investigate further.

As Javellana registered that the police wouldn’t be able to help her remove the images, she began to panic. She had worked hard to create a career in politics for herself and earn the respect of her older colleagues. Now she felt a surge of dread as she imagined people at City Hall scrutinizing the pictures. And what about her family, her friends, her neighbors? Even if she convinced everyone that the images were fake, would the people in her life ever look at her the same way? Would she ever have professional prospects again? Shortly before discovering the images, she had decided to sign up for that spring’s state teaching-certification exam in the hope of getting a job at one of her old schools. Now she imagined herself explaining to future employers — or members of the school board — that someone had created fake explicit images without her consent, and that the images were openly accessible on the internet. Why would anyone hire her and risk damaging the school’s reputation? But if she didn’t disclose the existence of the images and someone stumbled upon them online, she would almost certainly lose her job.

Sitting there in the cybercrime division’s office, she tried not to cry while explaining her fears to two agents. They listened, then said they could help her write an affidavit explaining that the photographs weren’t real. They pulled out papers and spread what looked like images of her naked body across a nearby table. They told her to sign each page. Staring at the images, Javellana saw her own face looking back at her and felt self-conscious about the agents inspecting the material. The printouts felt tangible in a way that the online posts had not; she imagined a stranger printing out these images and looking at “her.” Disgusted and scared, she broke down and wept while signing her name. Later that day, driving home, she was so distressed by the experience that she rear-ended another car.

Almost a year passed, and as Javellana’s attempts to protect herself foundered, her anger calcified into numbness. If there was nothing she could do to get the images off the internet, she at least wanted to erase them from her memory. She distanced herself from friends, who she feared would not understand her situation. Nobody at work knew about the deepfakes — she dreaded being known among her colleagues as “the girl with the nude photos.” She decided against taking the teaching-certification exam.

One morning in January 2022, Javellana was reading the news on her phone when she found an article about a bill introduced by the Democratic Florida state senator Lauren Book that would criminalize non-consensual deepfakes in the state. Senator Book had had her own experience with this technology. In November 2021, she was sitting at her kitchen table after dropping her children off at school when a text message from an unknown number arrived: “I have a proposition for you.” Two nude photographs of Senator Book followed; the perpetrator threatened to release them unless she gave him oral sex and $5,000 in gift cards. Subsequent research revealed numerous deepfake images of Senator Book online. Weeks later, she started drafting legislation that would become known as Senate Bill 1798 — one of the first attempts in the country to address pornographic deepfakes online.

Javellana took a screenshot of the article, highlighted the language about deepfakes and posted it to Twitter, along with a summary of her own experience. “I was traumatized last year when I learned ‘deepfake’ images of me were being made,” she typed. It was the first time she publicly acknowledged her ordeal. “I brought it to F.D.L.E. but they were unable to investigate the source as Florida law did not address this issue. Thank you @LeaderBookFL for your work on this, and I’m so sorry that this happened to you.”

Senator Book replied immediately, and a couple of days later the two women discussed the bill over the phone. It was a rare moment of connection amid what had been a frighteningly solitary experience. They began communicating regularly. “We were healing and going through this thing together,” Javellana told me. She admired the senator for going public with her story. More than that, after a year of feeling cowed, speaking with another woman about the experience of being deepfaked felt cathartic. She had found an ally.

Senator Book asked Javellana if she would consider testifying in support of the bill at a committee hearing. The senator hoped that the bill would pass, and stressed to Javellana that personal statements were a powerful way to pressure legislators. Javellana had attended dozens of committee hearings and knew how much testimony could sway the legislative process. On Feb. 8, 2022, she testified on behalf of S.B. 1798 in Tallahassee, Fla. Knowing she would have only a few minutes to convince the senators, Javellana had collected her points in the notes app on her phone, which she held while describing her fake nudes to political leaders and constituents. Grief overwhelmed her as she recounted the course of events; twice she had to stop, resting a hand on her chest for comfort. “I didn’t want to cry in public, but it was so hard to talk about,” Javellana recalled.

Her testimony lasted three minutes. She had barely recovered when Republican Senator Ileana Garcia, the committee chair, commented. “It’d never dawned on me how bad this situation is,” Garcia said before pausing. “But sometimes it’s caused by us. In our journey for validation, we expose too much of ourselves.” Javellana was stunned and humiliated. Moments after her testimony, a state senator was blaming her for the synthetic images — exactly the kind of judgment that she had been fearing for the last year. After Senator Garcia finished her statement, Senator Book spoke up. “I didn’t put my images out there,” she stressed. “I didn’t parade them on social media. They were stolen from me, my family. I’ll never get them back.” Book looked at Javellana. “She will never get them back.”

Image
In a 2022 committee hearing, Senator Lauren Book of Florida discussed a bill that would criminalize deepfakes in the state. Credit: The Florida Channel

Image
Javellana gave emotional testimony in support of S.B. 1798, which provides at least some protections from deepfakes in Florida. Despite this, she felt safer outside the public eye and in 2022 opted not to run for re-election to the city commission. Credit: The Florida Channel

The bill passed in committee, 8-0. In the days after, Javellana spoke with the news media and called out 4chan users specifically as the architects behind her ordeal. That kicked up a new storm of images featuring Javellana and Senator Book. Shortly after, someone on the internet shared Javellana’s personal phone number and address. Comments directed at her described sexual assault, and one user threatened to drive to her house. Then came the nadir: Harassers exposed her mother’s name and phone number and texted her the explicit images. Javellana was horrified. Her worst fears were becoming reality. The more she talked about her experience, the more harassment she endured.

After the bill passed unanimously in Florida’s House and Senate, Gov. Ron DeSantis signed S.B. 1798 into law on June 24, 2022. At the time, four other states — Virginia, Hawaii, California and Texas — had enacted laws against the dissemination of deepfakes. Floridians would be protected against deepfake abuse — ostensibly, at least. Now that circulating such images was explicitly criminalized, the F.D.L.E. and local law enforcement could investigate victims’ claims. Those who circulate deepfakes could be charged, fined and possibly jailed. Victims could now pursue civil suits as well. Legislative solutions like this are an important step toward preventing nightmares like Javellana’s; they offer victims possible financial compensation and a sense of justice, while signaling that there will be consequences for creating non-consensual sexual imagery.

But for Javellana, the Florida bill has not been a panacea. One evening this past February, she wondered if people were still posting and commenting on images of her. To her dismay, she discovered entirely new threads and images posted in October 2023, a year after the law went into effect, and almost three years after she discovered the first deepfakes. She had hoped that her aggressors would lose interest once she left the city commission, but the images were spreading onto new sites like Discord, the popular social-messaging platform. She spent hours sifting through the new threads on her phone, scrolling late into the night. Some of the deepfakes looked like images from her personal Instagram account, which has just over 1,000 followers and has been private since 2021. She wondered if some of the deepfakes were the work of someone she knew.

Javellana ultimately decided against taking legal action. The law was not retroactive, meaning she could only pursue lawsuits for images distributed after its enactment. The sheer volume of material was daunting, and she would need to file a lawsuit for each individual post; the cost of hiring a lawyer to track down each harasser was prohibitive. If the users had posted the deepfakes from outside Florida, the new law wouldn’t even apply. Emotionally overwhelmed, Javellana decided to do the only thing she could: Leave the deepfakes in the past.

For the past year and a half, Javellana has worked as an aide to the Democratic mayor of Broward County. She decided not to run for re-election to the city commission (even though she felt confident that she would win). She likes learning about a different side of government, and she feels safer out of the public eye. Sometimes she wonders what would have happened if she had remained on the city commission; a second term could have positioned her to eventually run for higher office. But she felt that stepping down was her only option. It was hard enough being a young, vocal Democrat in a majority Republican state; the online harassment had pushed her over the edge. She dealt with chronic anxiety and often drove home from meetings sobbing.

A Department of Homeland Security report notes that “anyone with a social media profile is fair game to be faked,” and the F.B.I. has advised caution when posting personal photos or videos online. For most people, especially those like Javellana with public-facing careers, this guidance is unrealistic. Javellana felt she had no choice but to maintain a professional Instagram account with highlights from her time in office, even after she knew her images were being stolen and manipulated. This calculus — weighing professional advancement against potential harassment — is now a necessity for many women.

In January, Congress introduced a federal bill called the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, or the DEFIANCE Act, which recently passed in the Senate. If enacted, the bill will allow victims to claim up to $150,000 in damages and file temporary restraining orders against their harassers. But experts warn that curbing deepfakes would require tech companies to police their platforms for abusive content and remove applications that could facilitate harassment. Because Section 230 of the U.S. Communications Decency Act absolves platforms of responsibility for content posted by their users, they face no financial repercussions and thus have little incentive to remove that material. Goldberg, the attorney, emphasized that amending Section 230 to allow victims to sue would make platforms wary of hosting the content.

Amid all this, Javellana has continued to discover new images across three different platforms. As late as April 2024, she had found at least 40 threads on 4chan’s archive, each containing multiple images posted by different users. “At this point, they’re going to keep popping up,” Javellana told me, her voice weary. A successful lawsuit might offer her some financial compensation but would not protect her against future material. If she fought to remove a few specific images, new content would appear in their place.

“It just never ends,” she said, her voice catching. “I just have to accept it.”

https://www.nytimes.com/2024/07/31/maga ... 778d3e6de3
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Re: The Concept of Post Fact Society

Post by kmaherali »

What’s Happening in Britain Is Shocking. But It’s Not Surprising.

Image

By Hibaq Farah

Ms. Farah is a staff editor in Opinion. She wrote from London.

The scenes are shocking.

In the wake of the murder of three young girls in the northwestern town of Southport, England, riots erupted across the country. Seizing on misinformation about the suspect’s identity, far-right rioters embarked on a harrowing rampage, setting fire to cars, burning down mosques, harassing Muslims, looting stores and attacking hotels housing asylum seekers. Over one early August weekend, there were more than 50 protests and almost 400 arrests. In the week since, hundreds of rioters have been charged and dozens convicted.

The country is stunned. But for all the events’ eye-popping madness, we shouldn’t be surprised. The animosities underpinning the riots — hatred of Muslims and migrants alike — have long found expression in Britain’s political culture, not least under the previous Conservative government, whose cornerstone commitment was to “stop the boats” on which migrants made their way to British shores.

Far-right extremists, emboldened by that government’s turn to migrant-bashing, have been waiting for the perfect chance to take to the streets. Crucially, they have found a home online, where platforms — poorly regulated and barely moderated — allow the spread of hate-filled disinformation, whipping up a frenzy. These have been disturbing days. But the chaos has been coming.

Disinformation is at the heart of the riots. In the aftermath of the killings in Southport, users on X posted and shared false claims, stating that the alleged attacker was an asylum seeker who arrived in Britain by boat — when he was in fact born and raised in Wales. On TikTok, far-right users went live and called on one another to gather in protest. Their reach was wide. Thanks to the platform’s aggressively personalized For You page, it is not difficult to get videos in front of users who have already engaged with far-right or anti-migrant content.

The apparatus of assembly extended to messaging services. On Telegram, far-right group chats shared lists of protest locations; one message included the line “they won’t stop coming until you tell them.” In WhatsApp chats, there were messages about reclaiming the streets and taking out “major bases” of immigrant areas in London. These calls to action were quickly amplified by far-right figures like Andrew Tate and Tommy Robinson, the founder of the English Defence League, who took to X to spread lies and foment hate. Almost immediately, people were out on the streets, wreaking havoc.

There was little to stop the outpouring of false claims and hateful language, even after officials released information about the suspect’s identity. Legislation on internet safety is murky and confusing. Last year, the Conservative government passed the Online Safety Act, whose remit is to protect children and force social media companies to remove illegal content. But there is no clear reference in the law to misinformation.

In January, new offenses were introduced under the act, including posting “fake news intended to cause non-trivial harm” and other online abuse. And in the aftermath of the riots, the Labour government is reportedly planning to strengthen the law. These are good developments, to be sure. But the legislation is not yet in force, and it’s unclear how it will be enforced.

The bigger problem, though, is that so much in the law hinges on establishing intent, which is famously hard to do. Henry Parker, the vice president of corporate affairs at Logically, a British organization that monitors disinformation online, told me there needs to be much clearer criteria for what constitutes intent and how it can be punished.

This is tricky territory: It’s hard to strike the right balance between protecting freedom of speech and controlling harmful speech. Even so, “it is legitimate for the government to get involved,” Mr. Parker said. “Just as there is a right of freedom of speech, there is a right for people to have access to accurate information.”

In the absence of effective regulation or oversight, social media platforms have played an increasingly central role in radicalizing far-right extremists in Britain. Under Elon Musk, X has allowed far-right users, including the likes of Mr. Robinson, to return to the platform. Since the riots started, Mr. Musk himself has stirred things up, claiming that “civil war is inevitable” and going on a bizarre tirade in a series of posts.

But the real damage has been how he has allowed harmful content to thrive. “X as a platform is uniquely vulnerable to massive-scale disinformation,” Imran Ahmed, founder of the Center for Countering Digital Hate, told me, “because they have basically abandoned enforcement of their rules.” The result is an online world of hate, lies and extremism.

The online world is connected to the offline world, of course. Far-right agitators in Britain are clearly drawing on widespread feelings of Islamophobia, racism and anti-migrant sentiment. In response to the riots, there has been some reticence among public figures to say this clearly. As a Muslim, I roll my eyes every time there are discussions in the media about whether clearly Islamophobic acts — like attacking mosques or threatening women wearing hijabs — are, in fact, Islamophobic. “Unless we identify what’s going on,” Zarah Sultana, an independent lawmaker, told me, “how can we possibly respond to it in the right way?”

Last Wednesday, people answered that question. Across England’s major cities, thousands of people — 25,000, according to one estimate — joined counterprotests to challenge the rioters. The far right, clearly deterred, mostly didn’t turn up. The peaceful mobilization of citizens, gathering in multiethnic areas at immigration centers that were apparently in line for far-right attack, was an apt riposte to violent racism. Together with an expanded police response and energetic prosecutions, it worked to ward off further riots.

Prime Minister Keir Starmer, along with pledging “no letup” in legal action against rioters, has promised that people will be prosecuted for their actions online — and a handful have been convicted of inciting racial hatred. But there’s seemingly little the government can do to hold accountable the social media platforms themselves. These riots, xenophobic outbursts turbocharged by technology, were only a matter of time. The truly scary thing is how little we can do to stop them.

https://www.nytimes.com/2024/08/12/opin ... 778d3e6de3
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Re: The Concept of Post Fact Society

Post by kmaherali »

In South Korea, Misogyny Has a New Weapon: Deepfake Sex Videos

Men in chat rooms have been victimizing women they know by putting their faces on pornographic clips. Some Korean women say the only thing new about it is the technology.

Image
A protest against deepfake pornography in Seoul last week. Credit: Chung Sung-Jun/Getty Images

In 2020, as the South Korean authorities were pursuing a blackmail ring that forced young women to make sexually explicit videos for paying viewers, they found something else floating through the dark recesses of social media: pornographic images with other people’s faces crudely attached.

They didn’t know what to do with these early attempts at deepfake pornography. In the end, the National Assembly enacted a vaguely worded law against those making and distributing it. But that did not prevent a crime wave, using AI technology, that has now taken the country’s misogynistic online culture to new depths.

In the past two weeks, South Koreans have been shocked to find that a rising number of young men and teenage boys had taken hundreds of social media images of classmates, teachers and military colleagues — almost all young women and girls, including minors — and used them to create sexually exploitative images and video clips with deepfake apps.

They have spread the material through chat rooms on the encrypted messaging service Telegram, some with as many as 220,000 members. The deepfakes usually combine a victim’s face with a body in a sexually explicit pose, taken from pornography. The technology is so sophisticated that it is often hard for ordinary people to tell they are fake, investigators say. As the country scrambles to address the threat, experts have noted that in South Korea, enthusiasm for new technologies can sometimes outpace concerns about their ethical implications.

But to many women, these deepfakes are just the latest online expression of a deep-rooted misogyny in their country — a culture that has now produced young men who consider it fun to share sexually humiliating images of women online.

“Korean society doesn’t treat women as fellow human beings,” said Lee Yu-jin, a student whose university is among the hundreds of middle schools, high schools and colleges where students have been victimized. She asked why the government had not done more “before it became a digital culture to steal photos of friends and use them for sexual humiliation.”

Online sexual violence is a growing problem globally, but South Korea is at the leading edge. Whether, and how, it can tackle the deepfake problem successfully will be watched by policymakers, school officials and law enforcement elsewhere.

Image
A government official, Ryu Hee-lim, at an emergency meeting on deepfakes in Seoul last month. The president responded to news reports about deepfakes by ordering the government to “root them out.” Credit: Yonhap, via EPA, via Shutterstock

The country has an underbelly of sexual criminality that has occasionally surfaced. A South Korean was convicted of running one of the world’s largest sites for images of child sexual abuse. A K-pop entertainer was found guilty of facilitating prostitution through a nightclub. For years, the police have battled spycam porn. And the mastermind of the blackmail ring investigated in 2020 was sentenced to 40 years in prison for luring young women, including teenagers, to make the videos that he sold online through Telegram chat rooms.

The rise of easy-to-use deepfake technology has added an insidious dimension to such forms of sexual violence: The victims are often unaware that they are victims until they receive an anonymous message, or a call from the police.

‘Slave,’ ‘Toilet,’ ‘Rag’

For one 30-year-old deepfake victim, whose name is being withheld to protect her privacy, the attack began in 2021 with an anonymous message on Telegram that said: “Hi!”

Over the next few hours, a stream of obscenities and deepfake images and video clips followed, featuring her face, taken from family trip photos she had posted on social media. Written on the body were words like “slave,” “toilet” and “rag.”

In April, she learned from the police that two of her former classmates at Seoul National University were among those who had been detained. Male graduates of the prestigious university, along with accomplices, had targeted scores of women, including a dozen former Seoul National students, with deepfake pornography. One of the men detained was sentenced to five years in prison last month.

“I cannot think of any reason they treated me like that, except that I was a woman,” she said. “The fact that there were people like them around me made me lose my faith in fellow human beings.”

She says she has struggled with trauma since the attack. Her heart races whenever she receives a message notification on her smartphone, or an anonymous call.

South Korea, whose pop culture is exported worldwide, has become the country most vulnerable to deepfake pornography. More than half of deepfakes globally target South Koreans, and the majority of those deepfakes victimize singers and actresses from the country, according to “2023 State of Deepfakes,” a study published by the U.S.-based cybersecurity firm Security Hero. Leading K-pop agencies have declared war on deepfakes, saying they were collecting evidence and threatening lawsuits against their creators and distributors.

Still, the problem is intensifying. The South Korean police reported 297 cases of deepfake sex crime between January and July, compared with 156 for all of 2021, when such data was first collected.

It was not until last month, when local news media exposed the extensive traffic in deepfakes on Telegram, that President Yoon Suk Yeol ordered his government to “root them out.” Critics of Mr. Yoon noted that during his 2022 campaign for the presidency, he had denied that there was structural gender-based discrimination in South Korea and had promised to abolish its ministry of gender equality.

News coverage of the rise in deepfakes this year led to panic among young women, many of whom deleted selfies and other personal images from their social media accounts, fearing they would be used for deepfakes. Chung Jin-kwon, who was a middle-school principal before assuming a role at the Seoul Metropolitan Office of Education last month, said his former school had discussed whether to omit student photos from yearbooks.

“Some teachers had already declined to have their photos there, replacing them with caricatures,” Mr. Chung said.

Young people in South Korea, one of the world’s most wired countries, become tech-savvy from an early age. But critics say its school system is so focused on preparing them for the all-important college entrance exams that they aren’t taught to handle new technology in an ethical way.

“We produce exam-problem-solving machines,” Mr. Chung said. “They don’t learn values.”

A Push for Tougher Laws

Kim Ji-hyun, a Seoul city official whose team has counseled 200 teenagers implicated in digital sexual exploitation since 2019, said that some boys had used deepfakes to take revenge on ex-girlfriends — and that in some cases, girls had used them to ostracize classmates. But many young people were first drawn to deepfakes out of curiosity, Ms. Kim said.

Image
To many women in South Korea, deepfakes are just the latest online incarnation of deep-rooted misogyny in their country. Credit: Ahn Young-Joon/Associated Press

Chat room operators attracted them with incentives, including Starbucks coupons, and asked them to provide photos and personal data of women they knew. Some of the Telegram channels, called “rape and humiliation rooms,” targeted individuals or women from certain schools, said Park Seong-hye, a team leader at the government-funded Women’s Human Rights Institute of Korea, who has investigated digital sex crimes and provided help to victims.

Under the law enacted in 2020, people convicted of making sexually explicit or abusive deepfakes with an intent to distribute them can be sent to prison for up to five years. Those who seek to profit financially from distributing such content can face up to seven years. But there is no law against buying, storing or watching deepfakes.

Investigators must have court approval to go undercover to access deepfake chat rooms, and they can only do so to investigate reports that minors have been sexually abused. The process can also be slow.

“You find a chat room on a holiday, but by the time you get court approval, it’s gone,” said Hahm Young-ok, a senior investigator of online crimes at the National Police Agency.

The government has promised to push for tougher laws against buying or viewing sexually exploitative deepfakes. This month, the police investigating the latest spate of deepfakes said they had detained seven male suspects, six of them teenagers.

Pornography is censored on South Korea’s internet, but people can get around the controls by using virtual private networks, and the ban is hard to enforce on social media channels. The police have indicated that they might investigate whether Telegram had abetted deepfake sex crimes. Last month, Telegram’s founder, Pavel Durov, was arrested in France and charged with a range of offenses, including enabling the distribution of child sexual abuse material.

Telegram said in a statement that it “has been actively removing content reported from Korea that breached its terms of service and will continue to do so.”

Meanwhile, the government is being pressured to force online platforms to do more to filter out content like deepfake pornography.

“It’s time to choose between protecting the platforms and protecting our children and adolescents,” said Lee Soo-jung, a professor of forensic psychology at Kyonggi University. “What we see happening now in 2024 was foretold back in 2020, but we have done nothing in between.”

https://www.nytimes.com/2024/09/12/worl ... 778d3e6de3