
Next Boeing CEO should understand past mistakes, airlines boss says 

DUBAI — The next CEO of Boeing should have an understanding of what led to its current crisis and be prepared to look outside for examples of best industrial practices, the head of the International Air Transport Association said on Sunday.

U.S. planemaker Boeing is engulfed in a sprawling safety crisis, exacerbated by a January mid-air panel blowout on a nearly new 737 MAX plane. CEO Dave Calhoun is due to leave the company by the end of the year as part of a broader management shake-up, but Boeing has not yet named a replacement.

“It is not for me to say who should be running Boeing. But I think an understanding of what went wrong in the past, that’s very important,” IATA Director General Willie Walsh told Reuters TV at an airlines conference in Dubai, adding that Boeing was taking the right steps.

IATA represents more than 300 airlines, accounting for around 80% of global traffic.

“Our industry benefits from learning from mistakes, and sharing that learning with everybody,” he said, adding that this process should include “an acknowledgement of what went wrong, looking at best practice, looking at what others do.”

He said it was critical that the industry has a culture “where people feel secure in putting their hands up and saying things aren’t working the way they should do.”

Boeing is facing investigations by U.S. regulators, possible prosecution for past actions and slumping production of its strongest-selling jet, the 737 MAX.

‘Right steps’

Calhoun, a Boeing board member since 2009 and a former GE executive, was brought in as CEO in 2020 to help turn the planemaker around following two fatal crashes involving the MAX.

But the planemaker has lost market share to competitor Airbus, with its stock losing nearly 32% of its value this year as MAX production plummeted this spring.

“The industry is frustrated by the problems as a result of the issues that Boeing have encountered. But personally, I’m pleased to see that they are taking the right steps,” Walsh said.

Delays in the delivery of new jets from both Boeing and Airbus are part of wider problems in the aerospace supply chain and aircraft maintenance industry that are complicating airline growth plans.

Walsh said supply chain problems are not easing as fast as airlines want and could last into 2025 or 2026.

“It’s probably a positive that it’s not getting worse, but I think it’s going to be a feature of the industry for a couple of years to come,” he said.

Earlier this year IATA brought together a number of airlines and manufacturers to discuss ways to ease the situation, Walsh said.

“We’re trying to ensure that there’s an open dialogue and honesty” between them, he said.


‘Open source’ investigators use satellites to identify burned Darfur villages

Investigators using satellite imagery to document the war in western Sudan’s Darfur region say 72 villages were burned down in April, the most they have seen since the conflict began. Henry Wilkins talks with the people who do this research about how so-called open-source investigations could be crucial in holding those responsible for the violence to account.


Robot will try to remove nuclear debris from Japan’s destroyed reactor

TOKYO — The operator of Japan’s destroyed Fukushima Daiichi nuclear power plant demonstrated Tuesday how a remote-controlled robot will retrieve tiny bits of melted fuel debris later this year from one of the plant’s three damaged reactors, the first such removal since the 2011 meltdown.

Tokyo Electric Power Company Holdings plans to deploy a “telesco-style” extendable pipe robot into the Fukushima Daiichi No. 2 reactor to test the removal of debris from its primary containment vessel by October.

That work is more than two years behind schedule. The removal of melted fuel was supposed to begin in late 2021 but has been plagued with delays, underscoring the difficulty of recovering from the magnitude 9.0 quake and tsunami in 2011.

During the demonstration at Mitsubishi Heavy Industries’ shipyard in Kobe, western Japan, where the robot was developed, a device equipped with tongs slowly descended from the telescopic pipe to a heap of gravel and picked up a granule.

TEPCO plans to remove less than 3 grams (0.1 ounce) of debris in the test at the Fukushima plant.

“We believe the upcoming test removal of fuel debris from Unit 2 is an extremely important step to steadily carry out future decommissioning work,” said Yusuke Nakagawa, a TEPCO group manager for the fuel debris retrieval program. “It is important to proceed with the test removal safely and steadily.”

About 880 tons of highly radioactive melted nuclear fuel remain inside the three damaged reactors. Critics say the 30- to 40-year cleanup target set by the government and TEPCO for Fukushima Daiichi is overly optimistic. The damage in each reactor is different, and plans must accommodate their conditions.

A better understanding of the melted fuel debris inside the reactors is key to their decommissioning. TEPCO deployed four mini drones into the No. 1 reactor’s primary containment vessel earlier this year to capture images from areas that robots had not been able to reach.


New cars in California could alert drivers when they break the speed limit

SACRAMENTO, California — California could eventually join the European Union in requiring all new cars to alert drivers when they break the speed limit, a proposal aimed at reducing traffic deaths that would likely impact motorists across the country should it become law.

The federal government sets safety standards for vehicles nationwide, which is why most cars now beep at drivers if their seat belt isn’t fastened. A bill in the California Legislature — which passed its first vote in the state Senate on Tuesday — would go further by requiring all new cars sold in the state by 2032 to beep at drivers when they exceed the speed limit by at least 16 kph.

“Research has shown that this does have an impact in getting people to slow down, particularly since some people don’t realize how fast that their car is going,” said state Sen. Scott Wiener, a Democrat from San Francisco and the bill’s author.

The bill narrowly passed Tuesday, an indication of the tough road it could face. Republican state Sen. Brian Dahle said he voted against it in part because he said sometimes people need to drive faster than the speed limit in an emergency.

“It’s just a nanny state that we’re causing here,” he said.

While the goal is to reduce traffic deaths, the legislation would likely impact all new car sales in the U.S. That’s because California’s auto market is so large that car makers would likely just make all of their vehicles comply with the state’s law.

California often throws its weight around to influence national — and international — policy. California has set its own emission standards for cars for decades, rules that more than a dozen other states have also adopted. And when California announced it would eventually ban the sale of new gas-powered cars, major automakers soon followed with their own announcement to phase out fossil-fuel vehicles.

The technology, known as intelligent speed assistance, uses GPS to compare a vehicle’s speed with a dataset of posted speed limits. Once the car is at least 16 kph over the speed limit, the system would emit “a brief, one-time visual and audio signal to alert the driver.”

It would not require California to maintain a list of posted speed limits. That would be left to manufacturers. It’s likely these maps would not include local roads or recent changes in speed limits, resulting in conflicts.

The bill states that if the system receives conflicting information about the speed limit, it must use the higher limit.
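
The alerting rule described in the last three paragraphs is simple enough to sketch in code. Below is a minimal Python illustration of that logic under stated assumptions; the function names, the one-alert latch and the handling of missing map data are invented for clarity and are not drawn from the bill text or any manufacturer's system.

```python
# Minimal sketch of the intelligent speed assistance rule as described:
# alert once the vehicle is at least 16 kph over the posted limit, use the
# higher limit when map data conflicts, and fire the alert only once.
# All names here are illustrative, not from the bill or any real system.

OVERSPEED_MARGIN_KPH = 16


def effective_limit_kph(candidate_limits: list[float]) -> float | None:
    """Per the bill, conflicting speed-limit data resolves to the higher limit."""
    return max(candidate_limits) if candidate_limits else None


def should_alert(speed_kph: float, candidate_limits: list[float],
                 already_alerted: bool) -> bool:
    """True when a brief, one-time visual and audio alert should fire."""
    limit = effective_limit_kph(candidate_limits)
    if limit is None or already_alerted:
        return False  # no map coverage for this road, or alert already shown
    return speed_kph >= limit + OVERSPEED_MARGIN_KPH


# Example: the map reports conflicting 60 and 80 kph limits; the car is
# doing 97 kph, which is 17 kph over the higher limit, so it alerts.
print(should_alert(97.0, [60.0, 80.0], already_alerted=False))  # True
```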

The technology is not new and has been used in Europe for years. Starting later this year, the European Union will require all new cars sold there to have the technology — although drivers would be able to turn it off.

The National Highway Traffic Safety Administration estimates that 10% of all car crashes reported to police in 2021 were speeding-related, and speeding-related fatalities rose 8% that year. Speeding was a particular problem in California, where 35% of traffic fatalities were speeding-related, the second-highest share in the country, according to a legislative analysis of the proposal.

Last year, the National Transportation Safety Board recommended that federal regulators require all new cars to alert drivers when speeding. Its recommendation came after a January 2022 crash in which a man with a history of speeding violations, traveling at more than 100 miles per hour, ran a red light and hit a minivan, killing himself and eight other people.

The NTSB has no regulatory authority and can only make recommendations.


Attempts to regulate AI’s hidden hand in Americans’ lives flounder

DENVER — The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide.

Only one of seven bills aimed at curbing AI’s penchant for discrimination when it makes consequential decisions, including who gets hired, who gets money for a home and who receives medical care, has passed. Colorado Gov. Jared Polis hesitantly signed the bill on Friday.

Colorado’s bill and those that faltered in Washington, Connecticut and elsewhere faced battles on many fronts: between civil rights groups and the tech industry, among lawmakers wary of wading into a technology few yet understand, and from governors worried about being the odd state out and spooking AI startups.

Polis signed Colorado’s bill “with reservations,” saying in a statement he was wary of regulations dousing AI innovation. The bill has a two-year runway and can be altered before it takes effect.

“I encourage (lawmakers) to significantly improve on this before it takes effect,” Polis wrote.

Colorado’s proposal, along with six sister bills, is complex, but it will broadly require companies to assess the risk of discrimination from their AI and inform customers when AI was used to help make a consequential decision for them.

The bills are separate from the more than 400 AI-related bills that have been debated this year. Most of those are aimed at narrower slices of AI, such as the use of deepfakes in elections or to make pornography.

The seven bills are more ambitious, applying across major industries and targeting discrimination, one of the technology’s most perverse and complex problems.

“We actually have no visibility into the algorithms that are used, whether they work or they don’t, or whether we’re discriminated against,” said Rumman Chowdhury, AI envoy for the U.S. Department of State who previously led Twitter’s AI ethics team.

While anti-discrimination laws are already on the books, those who study AI discrimination say it’s a different beast, which the U.S. is already behind in regulating.

“The computers are making biased decisions at scale,” said Christine Webber, a civil rights attorney who has worked on class action lawsuits over discrimination including against Boeing and Tyson Foods. Now, Webber is nearing final approval on one of the first-in-the-nation settlements in a class action over AI discrimination.

“Not, I should say, that the old systems were perfectly free from bias either,” said Webber. But “any one person could only look at so many resumes in the day. So you could only make so many biased decisions in one day and the computer can do it rapidly across large numbers of people.”

When you apply for a job, an apartment or a home loan, there’s a good chance AI is assessing your application: sending it up the line, assigning it a score or filtering it out. It’s estimated as many as 83% of employers use algorithms to help in hiring, according to the Equal Employment Opportunity Commission.

AI itself doesn’t know what to look for in a job application, so it’s taught based on past resumes. The historical data that is used to train algorithms can smuggle in bias.

Amazon, for example, worked on a hiring algorithm that was trained on old resumes, largely from male applicants. When assessing new applicants, it downgraded resumes containing the word “women’s” or listing women’s colleges, because such resumes were underrepresented in the historical data it had learned from. The project was scuttled.
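
The mechanism is easy to reproduce in miniature. The sketch below is a toy example with invented data, not a reconstruction of Amazon’s system: a plain scikit-learn logistic regression trained on historically skewed outcomes ends up assigning a negative weight to a feature that should be irrelevant.

```python
# Toy demonstration of bias smuggled in through training data. The labels
# below encode invented, historically biased hiring decisions; the model
# faithfully learns that bias. Not a reconstruction of any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
years_experience = rng.uniform(0, 10, n)
mentions_womens = rng.integers(0, 2, n)  # 1 if the resume says "women's"

# Historical outcomes rewarded experience but also, unfairly, penalized
# resumes containing the term; the bias lives in the labels themselves.
hired = years_experience - 2.0 * mentions_womens + rng.normal(0, 1, n) > 5

X = np.column_stack([years_experience, mentions_womens])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # second weight is negative: the term is now a penalty
```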

Webber’s class action lawsuit alleges that an AI system that scores rental applications disproportionately assigned lower scores to Black or Hispanic applicants. A study found that an AI system built to assess medical needs passed over Black patients for special care.

Studies and lawsuits have allowed a glimpse under the hood of AI systems, but most algorithms remain veiled. Americans are largely unaware that these tools are being used, polling from Pew Research shows. Companies generally aren’t required to explicitly disclose that an AI was used.

“Just pulling back the curtain so that we can see who’s really doing the assessing and what tool is being used is a huge, huge first step,” said Webber. “The existing laws don’t work if we can’t get at least some basic information.”

That’s what Colorado’s bill, along with another surviving bill in California, is trying to change. The bills, including a flagship proposal in Connecticut that was killed under opposition from the governor, are largely similar.

Colorado’s bill will require companies using AI to help make consequential decisions for Americans to annually assess their AI for potential bias; implement an oversight program within the company; tell the state attorney general if discrimination was found; and inform customers when an AI was used to help make a decision for them, including an option to appeal.

Labor unions and academics fear that a reliance on companies overseeing themselves means it’ll be hard to proactively address discrimination in an AI system before it’s done damage. Companies are fearful that forced transparency could reveal trade secrets, including in potential litigation, in this hyper-competitive new field.

AI companies also pushed for, and generally received, a provision that only allows the attorney general, not citizens, to file lawsuits under the new law. Enforcement details have been left up to the attorney general.

While larger AI companies have more or less been on board with these proposals, a group of smaller Colorado-based AI companies said the requirements might be manageable for behemoth AI companies but not for budding startups.

“We are in a brand new era of primordial soup,” said Logan Cerkovnik, founder of Thumper.ai, referring to the field of AI. “Having overly restrictive legislation that forces us into definitions and restricts our use of technology while this is forming is just going to be detrimental to innovation.”

All agreed, along with many AI companies, that what’s formally called “algorithmic discrimination” is critical to tackle. But they said the bill as written falls short of that goal. Instead, they proposed beefing up existing anti-discrimination laws.

Chowdhury worries that lawsuits are too costly and time-consuming to be an effective enforcement tool and that laws should go beyond what even Colorado is proposing. She and other academics have instead proposed accredited, independent organizations that can explicitly test AI algorithms for potential bias.

“You can understand and deal with a single person who is discriminatory or biased,” said Chowdhury. “What do we do when it’s embedded into the entire institution?”


China’s Digital Silk Road exports internet technology, controls

WASHINGTON — China promotes its help to Southeast Asian countries in modernizing their digital landscapes through investments in infrastructure as part of its “Digital Silk Road.” But rights groups say Beijing is also exporting its model of authoritarian governance of the internet through censorship, surveillance and controls.

China’s state media this week announced Chinese electrical appliance manufacturer Midea Group jointly built its first overseas 5G factory in Thailand with Thai mobile operator AIS, Chinese telecom service provider China Unicom and tech giant Huawei.

The 208,000-square-meter smart factory will have its own 5G network, Xinhua news agency reported.

Earlier this month, Beijing reached an agreement with Cambodia to establish a Digital Law Library of the Association of Southeast Asian Nations (ASEAN) Inter-Parliamentary Assembly. Cambodia’s Khmer Times said the objective is to “expand all-round cooperation in line with the strategic partnership and building a common destiny community.”

But parallel to China’s state media-promoted technology investments, rights groups say Beijing is also helping countries in the region to build what they call “digital authoritarian governance.”

Article 19, an international human rights organization dedicated to promoting freedom of expression globally and named after Article 19 of the Universal Declaration of Human Rights, said in an April report that the purpose of the Digital Silk Road is not solely to promote China’s technology industry. The report, “China: The rise of digital repression in the Indo-Pacific,” says Beijing is also using its technology to reshape the region’s standards of digital freedom and governance to increasingly match its own.

VOA contacted the Chinese Embassy in the U.S. for a response but did not receive one by the time of publication.

Model of digital governance

Looking at case studies of Cambodia, Malaysia, Nepal and Thailand, the Article 19 report says Beijing is spreading China’s model of digital governance along with Chinese technology and investments from companies such as Huawei, ZTE and Alibaba.

Michael Caster, Asia digital program manager with Article 19, told VOA, “China has been successful at providing a needed service, in the delivery of digital development toward greater connectivity, but also in making digital development synonymous with the adoption of PRC [People’s Republic of China]-style digital governance, which is at odds with international human rights and internet freedom principles, by instead promoting notions of total state control through censorship and surveillance, and digital sovereignty away from universal norms.”

The group says in Thailand, home to the world’s largest overseas Chinese community, agreements with China bolstered internet controls imposed after Thailand’s 2014 coup, and it notes that Bangkok has since been considering a China-style Great Firewall, the censorship mechanism Beijing uses to control online content.

In Nepal, the report notes security and intelligence-sharing agreements with China and concerns that Chinese security camera technology is being used to surveil exiled Tibetans, the largest such group outside India.

The group says Malaysia’s approach to information infrastructure appears to increasingly resemble China’s model, citing Kuala Lumpur’s cybersecurity law passed in April and its partnering with Chinese companies whose technology has been used for repressing minorities inside China.

Most significantly, Article 19 says China is involved at “all levels” of Cambodia’s digital ecosystem. Huawei, which is facing increasing bans in Western nations over cybersecurity concerns, has a monopoly on cloud services in Cambodia.

While Chinese companies say they would not hand over private data to Beijing, experts doubt they would have any choice because of national security laws.

Internet gateway

Phnom Penh announced a decree in 2021 to build a National Internet Gateway similar to China’s Great Firewall, restricting the Cambodian people’s access to Western media and social networking sites.

“That we have seen the normalization of a China-style Great Firewall in some of the countries where China’s influence is most pronounced or its digital development support strongest, such as with Cambodia, is no coincidence,” Caster said.

The Cambodian government says the portal will strengthen national security and help combat tax fraud and cybercrime. But the Internet Society, a U.S.- and Switzerland-based nonprofit internet freedom group, says it would allow the government to monitor individual internet use and transactions, and to trace identities and locations.

Kian Vesteinsson, a senior researcher for technology and democracy with rights group Freedom House, told VOA, “The Chinese Communist Party and companies that are aligned with the Chinese state have led a charge internationally to push for internet fragmentation. And when I say internet fragmentation, I mean these efforts to carve out domestic internets that are isolated from global internet traffic.”

Despite Chinese support and investment, Vesteinsson notes that Cambodia has not yet implemented the plan for a government-controlled internet.

“Building the Chinese model of digital authoritarianism into a country’s internet infrastructure is extraordinarily difficult. It’s expensive. It requires technical capacity. It requires state capacity, and all signs point to the Cambodian government struggling on those fronts.”

Vesteinsson says while civil society and foreign political pressure play a role, business concerns are also relevant as requirements to censor online speech or spy on users create costs for the private sector.

“These governments that are trying to cultivate e-commerce should keep in mind that a legal environment that is free from these obligations to do censorship and surveillance will be more appealing to companies that are evaluating whether to start up domestic operations,” he said.

Article 19’s Caster says countries concerned about China’s authoritarian internet model spreading should do more to support connectivity and internet development worldwide.

“This support should be based on human rights law and internet freedom principles,” he said, “to prevent China from exploiting internet development needs to position its services – and often by extension its authoritarian model – as the most accessible option.”

China will hold its annual internet conference in Beijing July 9-11. China’s Xinhua news agency reports this year’s conference will discuss artificial intelligence, digital government, information technology application innovation, data security and international cooperation.

Adrianna Zhang contributed to this report.


IS turns to artificial intelligence for advanced propaganda amid territorial defeats

WASHINGTON — With major military setbacks in recent years, supporters of the Islamic State terror group are increasingly relying on artificial intelligence (AI) to generate online propaganda, experts said.

A new form of propaganda developed by IS supporters is broadcasting news bulletins with AI-generated anchors in multiple languages.

The Islamic State Khorasan (ISKP) group, an IS affiliate active in Afghanistan and Pakistan, produced a video in which an AI-generated anchorman appeared to read the news following an IS-claimed attack in Afghanistan’s Bamiyan province on May 17 that killed four people, including three Spanish tourists.

The digital image posing as an anchor spoke the Pashto language and had features resembling local residents in Bamiyan, according to The Khorasan Diary, a website dedicated to news and analysis on the region.

Another AI-generated propaganda video by Islamic State appeared on Tuesday with a different digital male news anchor announcing IS’s responsibility for a car bombing in Kandahar, Afghanistan.

“These extremists are very effective in spreading deepfake propaganda,” said Roland Abi Najem, a cybersecurity expert based in Kuwait.

He told VOA that a group like IS was already effective in producing videos with Hollywood-level quality, and the use of AI has made such production more accessible for them.

“AI now has easy tools to use to create fake content whether it’s text, photo, audio or video,” Abi Najem said, adding that with AI, “you only need data, algorithms and computing power, so anyone can create AI-generated content from their houses or garages.”

IS formally adopted the practice of AI-generated news bulletins four days after an attack at a Moscow music hall on March 22 killed some 145 people. IS claimed responsibility for the attack.

In that video, IS used a “fake” AI-generated news anchor talking about the Moscow attack, experts told The Washington Post last week.

Mona Thakkar, a research fellow at the International Center for the Study of Violent Extremism, said pro-IS supporters have been using character-generation techniques and text-to-speech AI tools to produce translated news bulletins of IS’s Amaq news agency.

“These efforts have garnered positive responses from other users, reflecting that, through future collaborative efforts, many supporters could produce high quality and sophisticated AI-powered propaganda videos for IS of longer durations with better graphics and more innovation techniques,” she told VOA.

Thakkar said she recently came across some pro-IS Arabic-speaking supporters on Telegram who were recommending to other supporters “that beginners use AI image generator bots on Telegram to maintain the high quality of images as the bots are very easy and quick to produce such images.”

AI-generated content for recruitment

While IS’s ability to project power has largely diminished since its territorial defeat in Syria and Iraq, experts say supporters of the terror group believe artificial intelligence offers an alternative means of promoting their extremist ideology.

“Their content has mainly focused on showing that they’re still powerful,” said Abi Najem. “With AI-generated content now, they can choose certain celebrities that have influence, especially on teenagers, by creating deepfake videos.”

“So first they manipulate these people by creating believable content, then they begin recruiting them,” he said.

In a recent article published by the Global Network on Extremism and Technology, researcher Daniel Siegel said generative AI has had a profound impact on how extremist organizations conduct influence operations online, including through the use of AI-generated Muslim religious songs, known as nasheeds, for recruitment purposes.

“The strategic deployment of extremist audio deepfake nasheeds, featuring animated characters and internet personalities, marks a sophisticated evolution in the tactics used by extremists to broaden the reach of their content,” he wrote.

Siegel said that other radical groups like al-Qaida and Hamas have also begun using AI to generate content for their supporters.

Cybersecurity expert Abi Najem said he believes the cheap technology will increase the availability of AI-generated content by extremist groups on the internet.

“While currently there are no stringent regulations on the use of AI, it will be very challenging for governments to stop extremist groups from exploiting these platforms for their own gain,” he said.

This story originated in VOA’s Kurdish Service.


Australian researchers unveil device that harvests water from the air

SYDNEY — A device that absorbs water from air to produce drinkable water was officially launched in Australia Wednesday.

Researchers say the so-called Hydro Harvester, capable of producing up to 1,000 liters of drinkable water a day, could be “lifesaving during drought or emergencies.”

The device absorbs water from the atmosphere. Solar energy, or heat harnessed from industrial processes, for example, is used to generate hot, humid air, which is then allowed to cool, producing water for drinking or irrigation.

The Australian team said that unlike other commercially available atmospheric water generators, their invention works by heating air instead of cooling it.

Laureate Professor Behdad Moghtaderi, a chemical engineer and director of the University of Newcastle’s Centre for Innovative Energy Technologies, told VOA how the technology operates.  

“Hydro Harvester uses an absorbing material to absorb and dissolve moisture from air. So essentially, we use renewable energy, let’s say, for instance, solar energy or waste heat. We basically produce super saturated, hot, humid air out of the system,” Moghtaderi said. “When you condense water contained in that air you would have the drinking water at your disposal.”
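
The physics behind the claim is standard: hot air can hold far more water vapor than cool air, so cooling saturated hot air forces the excess out as liquid. A back-of-envelope Python sketch using the Magnus approximation for saturation vapor pressure makes the point; the 50 C and 20 C temperatures are assumed for illustration and are not the Hydro Harvester’s published operating figures.

```python
# Estimate how much water condenses when saturated hot air is cooled,
# using the Magnus approximation for saturation vapor pressure.
# The temperatures are illustrative assumptions, not device specifications.
import math

R_V = 461.5  # specific gas constant of water vapor, J/(kg*K)


def saturation_vapor_pressure_pa(t_c: float) -> float:
    """Magnus approximation, good roughly from -45 C to 60 C."""
    return 611.2 * math.exp(17.62 * t_c / (243.12 + t_c))


def saturated_water_g_per_m3(t_c: float) -> float:
    """Mass of vapor one cubic meter of saturated air holds, in grams."""
    return 1000.0 * saturation_vapor_pressure_pa(t_c) / (R_V * (t_c + 273.15))


hot_c, cool_c = 50.0, 20.0
condensed = saturated_water_g_per_m3(hot_c) - saturated_water_g_per_m3(cool_c)
print(f"~{condensed:.0f} g of liquid water per m3 of air cooled "
      f"from {hot_c:.0f} C to {cool_c:.0f} C")  # roughly 66 g/m3
```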

The researchers say the device can produce enough drinking water each day to sustain a small rural town of up to 400 people. It could also help farmers keep livestock alive during droughts.

Moghtaderi says the technology could be used in parts of the world where water is scarce.

Researchers were motivated by the fact that Australia is an arid country.

“More than 2 billion people around the world, they are in a similar situation where they do not have access to, sort of, high-quality water and they deal with water scarcity,” Moghtaderi said.

Trials of the technology will be conducted in several remote Australian communities this year.

The World Economic Forum, an international research organization, says “water scarcity continues to be a pervasive global challenge.”

It believes that atmospheric water generation technology is a “promising emergency solution that can immediately generate drinkable water using moisture in the air.”

However, it cautions that the technology is generally not cheap, and estimates that one mid-sized commercial unit can cost between $30,000 and $50,000.


Researchers use artificial intelligence to classify brain tumors

SYDNEY — Researchers in Australia and the United States say that a new artificial intelligence tool has allowed them to classify brain tumors more quickly and accurately.  

The current method for identifying different kinds of brain tumors, DNA methylation-based profiling, is accurate but can take several weeks to produce results, and it is not available at many hospitals around the world.

To address these challenges, a research team from the Australian National University, in collaboration with the National Cancer Institute in the United States, has developed a way to predict DNA methylation, which acts like a switch to control gene activity.

This allows them to classify brain tumors into 10 major categories using a deep learning model, a branch of artificial intelligence that teaches computers to process data in a way inspired by the human brain.

The joint U.S.-Australian system, called DEPLOY, uses microscopic pictures of a patient’s tissue, known as histopathology images.
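
The article describes the approach only in outline, but the two-step idea, predicting a methylation profile from a tissue image and then classifying the tumor from that profile, can be sketched. The PyTorch fragment below is a speculative illustration, not the published DEPLOY architecture: the layer sizes and the number of methylation sites are invented placeholders.

```python
# Speculative sketch of the two-step idea: image -> predicted methylation
# profile -> one of 10 tumor classes. Sizes are invented placeholders; this
# is not the published DEPLOY model.
import torch
import torch.nn as nn

NUM_METHYLATION_SITES = 256  # hypothetical count of predicted methylation values
NUM_TUMOR_CLASSES = 10       # the 10 major categories the article mentions


class TumorClassifierSketch(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Small CNN standing in for a histopathology feature extractor.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Head 1: predict per-site methylation levels in [0, 1].
        self.methylation = nn.Sequential(
            nn.Linear(32, NUM_METHYLATION_SITES), nn.Sigmoid())
        # Head 2: classify the tumor from the predicted methylation profile.
        self.classify = nn.Linear(NUM_METHYLATION_SITES, NUM_TUMOR_CLASSES)

    def forward(self, image: torch.Tensor):
        profile = self.methylation(self.features(image))
        return profile, self.classify(profile)


model = TumorClassifierSketch()
profile, logits = model(torch.randn(1, 3, 224, 224))  # one fake image tile
print(profile.shape, logits.shape)  # torch.Size([1, 256]) torch.Size([1, 10])
```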

The researchers see the DEPLOY technology as complementary to an initial diagnosis by a pathologist or physician.

Danh-Tai Hoang, a research fellow at the Australian National University, told VOA that AI will enhance current diagnostic methods that can often be slow.

“The technique is very time consuming,” Hoang said. “It is often around two to three weeks to obtain a result from the test, whereas patients with high-grade brain tumors often require treatment as soon as possible because time is the goal for brain tumor(s), so they need to get treatment as soon as possible.”

The research team said its AI model was validated on large datasets covering approximately 4,000 patients from across the United States and Europe and achieved an accuracy rate of 95 percent.

Their study has been published in the journal Nature Medicine.
