Finance Ministry: Ukraine will receive a $3.9 billion tranche from the U.S.; the money will go toward salaries

The funds will be directed toward salary payments for teachers, State Emergency Service (DSNS) personnel and government employees, as well as social benefits

Online misinformation fuels tensions over deadly Southport stabbing attack

LONDON — Within hours of a stabbing attack in northwest England that killed three young girls and wounded several more children, a false name of a supposed suspect was circulating on social media. Hours after that, violent protesters were clashing with police outside a nearby mosque.

Police say the name was fake, as were rumors that the 17-year-old suspect was an asylum-seeker who had recently arrived in Britain. Detectives say the suspect charged Thursday with murder and attempted murder was born in the U.K., and British media including the BBC have reported that his parents are from Rwanda.

That information did little to slow the lightning spread of the false name or stop right-wing influencers pinning the blame on immigrants and Muslims.

“There’s a parallel universe where what was claimed by these rumors were the actual facts of the case,” said Sunder Katwala, director of British Future, a think tank that looks at issues including integration and national identity. “And that will be a difficult thing to manage.”

Local lawmaker Patrick Hurley said the result was “hundreds of people descending on the town, descending on Southport from outside of the area, intent on causing trouble — either because they believe what they’ve written, or because they are bad faith actors who wrote it in the first place, in the hope of causing community division.”

One of the first outlets to report the false name, Ali Al-Shakati, was Channel 3 Now, an account on the X social media platform that purports to be a news channel. A Facebook page of the same name says it is managed by people in Pakistan and the U.S. A related website on Wednesday showed a mix of possibly AI-generated news and entertainment stories, as well as an apology for “the misleading information” in its article on the Southport stabbings.

By the time the apology was posted, the incorrect identification had been repeated widely on social media.

“Some of the key actors are probably just generating traffic, possibly for monetization,” said Katwala. The misinformation was then spread further by “people committed to the U.K. domestic far right,” he said.

Governments around the world, including Britain’s, are struggling with how to curb toxic material online. U.K. Home Secretary Yvette Cooper said Tuesday that social media companies “need to take some responsibility” for the content on their sites.

Katwala said that social platforms such as Facebook and X worked to “de-amplify” false information in real time after mass shootings at two mosques in Christchurch, New Zealand, in 2019.

Since Elon Musk, a self-styled free-speech champion, bought X, the platform has gutted teams that once fought misinformation and restored the accounts of banned conspiracy theorists and extremists.

Rumors have swirled in the relative silence of police over the attack. Merseyside Police issued a statement saying the reported name for the suspect was incorrect, but have provided little information about him other than his age and birthplace of Cardiff, Wales.

Under U.K. law, suspects are not publicly named until they have been charged and those under 18 are usually not named at all. That has been seized on by some activists to suggest the police are withholding information about the attacker.

Tommy Robinson, founder of the far-right English Defense League, accused police of “gaslighting” the public. Nigel Farage, a veteran anti-immigration politician who was elected to Parliament in this month’s general election, posted a video on X speculating “whether the truth is being withheld from us” about the attack.

Brendan Cox, whose lawmaker wife Jo Cox was murdered by a far-right attacker in 2016, said Farage’s comments showed he was “nothing better than a Tommy Robinson in a suit.”

“It is beyond the pale to use a moment like this to spread your narrative and to spread your hatred, and we saw the results on Southport’s streets last night,” Cox told the BBC.

"Ukraine is carrying out the most complex reconstruction in Europe since World War II" – U.S. Special Representative Pritzker

In the long term, according to Penny Pritzker, Ukraine needs to focus on attracting foreign investment

Finance Ministry says it is ready for a compromise with the Rada on raising taxes

Serhiy Marchenko noted that Ukraine spends 166 billion hryvnias a month on the security and defense sector – an average of 5.6 billion hryvnias a day

Bloomberg: Russian oligarch will pay the British government 750,000 pounds in a sanctions-evasion case

Aven himself has no bank accounts in Britain, but he was suspected of using accounts belonging to his wife and to a property-management firm

Manipulated video shared by Musk mimics Harris’ voice, raising concerns about AI in politics

New York — A manipulated video that mimics the voice of Vice President Kamala Harris saying things she did not say is raising concerns about the power of artificial intelligence to mislead with Election Day about three months away.

The video gained attention after tech billionaire Elon Musk shared it on his social media platform X on Friday evening without explicitly noting it was originally released as parody.

The video uses many of the same visuals as a real ad that Harris, the likely Democratic presidential nominee, released last week launching her campaign. But the video swaps out the voice-over audio with another voice that convincingly impersonates Harris.

“I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” the voice says in the video. It claims Harris is a “diversity hire” because she is a woman and a person of color, and it says she doesn’t know “the first thing about running the country.” The video retains “Harris for President” branding. It also adds in some authentic past clips of Harris.

Mia Ehrenberg, a Harris campaign spokesperson, said in an email to The Associated Press: “We believe the American people want the real freedom, opportunity and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump.”

The widely shared video is an example of how lifelike AI-generated images, videos or audio clips have been utilized both to poke fun and to mislead about politics as the United States draws closer to the presidential election. It exposes how, as high-quality AI tools have become far more accessible, there remains a lack of significant federal action so far to regulate their use, leaving rules guiding AI in politics largely to states and social media platforms.

The video also raises questions about how to best handle content that blurs the lines of what is considered an appropriate use of AI, particularly if it falls into the category of satire.

The original user who posted the video, a YouTuber known as Mr Reagan, has disclosed both on YouTube and on X that the manipulated video is a parody. But Musk’s post, which has been viewed more than 123 million times, according to the platform, only includes the caption “This is amazing” with a laughing emoji.

X users who are familiar with the platform may know to click through Musk’s post to the original user’s post, where the disclosure is visible. Musk’s caption does not direct them to do so.

While some participants in X’s “community note” feature to add context to posts have suggested labeling Musk’s post, no such label had been added to it as of Sunday afternoon. Some users online questioned whether his post might violate X’s policies, which say users “may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

The policy has an exception for memes and satire as long as they do not cause “significant confusion about the authenticity of the media.”

Musk endorsed former President Donald Trump, the Republican nominee, earlier this month. Neither Mr Reagan nor Musk immediately responded to emailed requests for comment Sunday.

Two experts who specialize in AI-generated media reviewed the fake ad’s audio and confirmed that much of it was generated using AI technology.

One of them, University of California, Berkeley, digital forensics expert Hany Farid, said the video shows the power of generative AI and deepfakes.

“The AI-generated voice is very good,” he said in an email. “Even though most people won’t believe it is VP Harris’ voice, the video is that much more powerful when the words are in her voice.”

He said generative AI companies that make voice-cloning tools and other AI tools available to the public should do better to ensure their services are not used in ways that could harm people or democracy.

Rob Weissman, co-president of the advocacy group Public Citizen, disagreed with Farid, saying he thought many people would be fooled by the video.

“I don’t think that’s obviously a joke,” Weissman said in an interview. “I’m certain that most people looking at it don’t assume it’s a joke. The quality isn’t great, but it’s good enough. And precisely because it feeds into preexisting themes that have circulated around her, most people will believe it to be real.”

Weissman, whose organization has advocated for Congress, federal agencies and states to regulate generative AI, said the video is “the kind of thing that we’ve been warning about.”

Other generative AI deepfakes in the U.S. and elsewhere have sought to influence voters with misinformation, humor or both.

In Slovakia in 2023, fake audio clips impersonated a candidate discussing plans to rig an election and raise the price of beer days before the vote. In Louisiana in 2022, a political action committee’s satirical ad superimposed a Louisiana mayoral candidate’s face onto an actor portraying him as an underachieving high school student.

Congress has yet to pass legislation on AI in politics, and federal agencies have only taken limited steps, leaving most existing U.S. regulation to the states. More than one-third of states have created their own laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.

Beyond X, other social media companies also have created policies regarding synthetic and manipulated media shared on their platforms. Users on the video platform YouTube, for example, must reveal whether they have used generative artificial intelligence to create videos or face suspension.

Can tech help solve the Los Angeles homeless crisis? Finding shelter may someday be a click away

LOS ANGELES — Billions of dollars have been spent on efforts to get homeless people off the streets in California, but outdated computer systems with error-filled data are all too often unable to provide even basic information like where a shelter bed is open on any given night, inefficiencies that can lead to dire consequences.

The problem is especially acute in Los Angeles, where more than 45,000 people — many suffering from serious mental illness, substance addictions or both — live in litter-strewn encampments that have spread into virtually every neighborhood, and where rows of rusting RVs line entire blocks.

Even in the state that is home to Silicon Valley, technology has not kept up with the long-running crisis. In an age when anyone can book a hotel room or rent a car with a few strokes on a mobile phone, no system exists that provides a comprehensive listing of available shelter beds in Los Angeles County, home to more than 1 in 5 unhoused people in the U.S.

Mark Goldin, chief technology officer for Better Angels United, a nonprofit group, described L.A.’s technology as “systems that don’t talk to one another, lack of accurate data, nobody on the same page about what’s real and isn’t real.”

The systems can’t answer “exactly how many people are out there at any given time. Where are they?” he said.

For people living on the streets, the ramifications can determine whether someone sleeps another night outside, a distinction that can be life-threatening.

“They are not getting the services to the people at the time that those people either need the service, or are mentally ready to accept the services,” said Adam Miller, a tech entrepreneur and CEO of Better Angels.

The problems were evident at a filthy encampment in the city’s Silver Lake neighborhood, where Sara Reyes, executive director of SELAH Neighborhood Homeless Coalition, led volunteers distributing water, socks and food to homeless people, including one who appeared unconscious.

She gave out postcards with the address of a nearby church where the coalition provides hot food and services. A small dog bolted out of a tent, frantically barking, while a disheveled man wearing a jacket on a blistering hot day shuffled by a stained mattress.

At the end of the visit Reyes began typing notes into her mobile phone, which would later be retyped into a coalition spreadsheet and eventually copied again into a federal database.

“Anytime you move it from one medium to another, you can have data loss. We know we are not always getting the full picture,” Reyes said. The “victims are the people the system is supposed to serve.”

The technology has sputtered while the homeless population has soared. Some ask how a problem can be combated without reliable data on its scope. An annual tally of homeless people in the city recently found a slight decline in the population, but some experts question the accuracy of the data, and tents and encampments can be seen just about everywhere.

Los Angeles Mayor Karen Bass has pinpointed shortcomings with technology as among the obstacles she faces in homelessness programs and has described the city’s efforts to slow the crisis as “building the plane while flying it.”

She said earlier this year that three to five homeless people die every day on the streets of L.A.

On Thursday, Gov. Gavin Newsom ordered state agencies to start removing homeless encampments on state land in his boldest action yet following a Supreme Court ruling allowing cities to enforce bans on sleeping outside in public spaces.

There is currently no uniform practice for caseworkers to collect and enter information into databases on the homeless people they interview, including notes taken on paper. The result: Information can be lost or recorded incorrectly, and it becomes quickly outdated with the lag time between interviews and when it’s entered into a database. 

The main federal data system, known as the Homeless Management Information System, or HMIS, was designed as a desktop application, making it difficult to operate on a mobile phone.

“One of the reasons the data is so bad is because what the case managers do by necessity is they take notes, either on their phones or on scrap pieces of paper or they just try to remember it, and they don’t typically input it until they get back to their desk” hours, days, a week or even longer afterward, Miller said.

Every organization that coordinates services for homeless people uses an HMIS program to comply with data collection and reporting standards mandated by the U.S. Department of Housing and Urban Development. But the systems are not all compatible.

Sam Matonik, associate director of data at L.A.-based People Assisting the Homeless, a major service provider, said his organization is among those that must reenter data because Los Angeles County uses a proprietary data system that does not talk to the HMIS system.  

“Once you’re manually double-entering things, it opens the door for all sorts of errors,” Matonik said. “Small numerical errors are the difference between somebody having shelter and not.”

Bevin Kuhn, acting deputy chief of analytics for the Los Angeles Homeless Services Authority, the agency that coordinates homeless housing and services in Los Angeles County, said work is underway to create a database of 23,000 beds by the end of the year as part of technology upgrades.

For case managers, “just seeing … the general bed availability is challenging,” Kuhn said.

Among other changes is a reboot of the HMIS system to make it more compatible with mobile apps and developing a way to measure if timely data is being entered by case workers, Kuhn said.

It’s not uncommon for a field worker to encounter a homeless person in crisis who needs immediate attention, which can create delays in collecting data. Los Angeles Homeless Services Authority aims for data to be entered in the system within 72 hours, but that benchmark is not always met.

In hopes of filling the void, Better Angels assembled a team experienced in building large-scale software applications. They are constructing a mobile-friendly prototype for outreach workers — to be donated to participating groups in Los Angeles County — that will be followed by systems for shelter operators and a comprehensive shelter bed database.

Since homeless people are transient and difficult to locate for follow-up services, one feature would create a map of places where an individual had been encountered, allowing case managers to narrow the search.

Services are often available, but the problem is linking them with a homeless person in real time. So, a data profile would show services the individual received in the past, medical issues and make it easy to contact health workers, if needed.

As a secondary benefit — if enough agencies and providers agree to participate — the software could produce analytical information and data visualizations, spotlighting where homeless people are moving around the county, or concentrations of where homeless people have gathered.

One key goal for the prototypes: ease of use even for workers with scant digital literacy. Information entered into the app would be immediately uploaded to the database, eliminating the need for redundant reentries while keeping information up to date.

Time is often critical. Once a shelter bed is located, there is a 48-hour window for the spot to be claimed, which Reyes says happens only about half the time. The technology is so inadequate, the coalition sometimes doesn’t learn a spot is open until it has expired.

She has been impressed with the speed of the Better Angels app, which is in testing, and believes it would cut down on the number of people who miss the housing window, as well as create more reliability for people trying to obtain services.

“I’m hoping Better Angels helps us put the human back into this whole situation,” Reyes said.  

Switzerland will not seize proceeds from frozen Russian assets to transfer to Ukraine – media

Earlier, European Commission President Ursula von der Leyen announced the transfer of 1.5 billion euros in proceeds from frozen Russian assets

US claims TikTok collected user views on issues like abortion, gun control

WASHINGTON — In a fresh broadside against one of the world’s most popular technology companies, the Justice Department late Friday accused TikTok of harnessing the capability to gather bulk information on users based on views on divisive social issues like gun control, abortion and religion.

Government lawyers wrote in a brief filed to the federal appeals court in Washington that TikTok and its Beijing-based parent company ByteDance used an internal web-suite system called Lark to enable TikTok employees to speak directly with ByteDance engineers in China.

TikTok employees used Lark to send sensitive data about U.S. users, information that has wound up being stored on Chinese servers and accessible to ByteDance employees in China, federal officials said.

One of Lark’s internal search tools, the filing states, permits ByteDance and TikTok employees in the U.S. and China to gather information on users’ content or expressions, including views on sensitive topics, such as abortion or religion. Last year, The Wall Street Journal reported TikTok had tracked users who watched LGBTQ content through a dashboard the company said it had since deleted.

The new court documents represent the government’s first major defense in a consequential legal battle over the future of the popular social media platform, which is used by more than 170 million Americans. Under a law signed by President Joe Biden in April, the company could face a ban in a few months if it doesn’t break ties with ByteDance.

The measure was passed with bipartisan support after lawmakers and administration officials expressed concerns that Chinese authorities could force ByteDance to hand over U.S. user data or sway public opinion towards Beijing’s interests by manipulating the algorithm that populates users’ feeds.

The Justice Department warned, in stark terms, of the potential for what it called “covert content manipulation” by the Chinese government, saying the algorithm could be designed to shape content that users receive.

“By directing ByteDance or TikTok to covertly manipulate that algorithm, China could, for example, further its existing malign influence operations and amplify its efforts to undermine trust in our democracy and exacerbate social divisions,” the brief states.

The concern, they said, is more than theoretical, alleging that TikTok and ByteDance employees are known to engage in a practice called “heating” in which certain videos are promoted in order to receive a certain number of views. While this capability enables TikTok to curate popular content and disseminate it more widely, U.S. officials posit it can also be used for nefarious purposes.

Justice Department officials are asking the court to allow a classified version of its legal brief, which won’t be accessible to the two companies.

Nothing in the redacted brief “changes the fact that the Constitution is on our side,” TikTok spokesperson Alex Haurek said in a statement.

“The TikTok ban would silence 170 million Americans’ voices, violating the 1st Amendment,” Haurek said. “As we’ve said before, the government has never put forth proof of its claims, including when Congress passed this unconstitutional law. Today, once again, the government is taking this unprecedented step while hiding behind secret information. We remain confident we will prevail in court.”

In the redacted version of the court documents, the Justice Department said another tool triggered the suppression of content based on the use of certain words. Certain policies of the tool applied to ByteDance users in China, where the company operates a similar app called Douyin that follows Beijing’s strict censorship rules.

But Justice Department officials said other policies may have been applied to TikTok users outside of China. TikTok was investigating the existence of these policies and whether they had ever been used in the U.S. in or around 2022, officials said.

The government points to the Lark data transfers to explain why federal officials do not believe that Project Texas, TikTok’s $1.5 billion mitigation plan to store U.S. user data on servers owned and maintained by the tech giant Oracle, is sufficient to guard against national security concerns.

In its legal challenge against the law, TikTok has heavily leaned on arguments that the potential ban violates the First Amendment because it bars the app from continued speech unless it attracts a new owner through a complex divestment process. It has also argued divestment would change the speech on the platform because a new social platform would lack the algorithm that has driven its success.

In its response, the Justice Department argued TikTok has not raised any valid free speech claims, saying the law addresses national security concerns without targeting protected speech, and argues that China and ByteDance, as foreign entities, aren’t shielded by the First Amendment.

TikTok has also argued the U.S. law discriminates on viewpoints, citing statements from some lawmakers critical of what they viewed as an anti-Israel tilt on the platform during its war in Gaza.

Justice Department officials dispute that argument, saying the law at issue reflects their ongoing concern that China could weaponize technology against U.S. national security, a fear they say is made worse by demands that companies under Beijing’s control turn over sensitive data to the government. They say TikTok, under its current operating structure, is required to be responsive to those demands.

Oral arguments in the case are scheduled for September.

We are not waiting for the war to end to plan Ukraine's reconstruction – U.S. Deputy Secretary of Commerce

"Supporting Ukraine is more than just the right thing to do. It is an investment in transatlantic security and the democratic values we all hold dear"
