Doctors display ‘PillBot’ that can explore inner human body

VANCOUVER, British Columbia — A new, digestible mini-robotic camera, about the size of a multivitamin pill, was demonstrated at the annual TED Conference in Vancouver. The remote-controlled device could eliminate the need for some invasive medical procedures.

With current technology, exploration of the digestive tract involves the highly invasive procedure of an endoscopy, in which a camera at the end of a cord is inserted down the throat and into a sedated patient’s stomach.

But the robotic pill, developed by Endiatx in Hayward, California, is designed to be the first motorized replacement for the procedure. A patient fasts for a day, then swallows the PillBot with plenty of water. The PillBot, acting like a miniature submarine, is piloted through the body by a wireless remote control. After the exam, it flushes out of the body naturally.

For Dr. Vivek Kumbhari, co-founder of the company and professor of medicine and chairman of gastroenterology and hepatology at the Mayo Clinic, it is the latest step toward his goal of democratizing previously complex medicine.

If procedure-based diagnostics can be moved from a hospital to a home, “then I think we have achieved that goal,” he said. The new setting would require fewer medical staff personnel and no anesthesia, producing “a safer, more comfortable approach.”

Kumbhari said this technology also makes medicine more efficient, allowing people to get care earlier in the course of an illness.

For co-founder Alex Luebke, the micro-robotic pill can be transformative for rural areas around the world where there is limited access to medical facilities.

“Especially in developing countries, there is no access” to complex medical procedures, he said. “So being able to have the technology, gather all that information and provide you the solution, even in remote areas – that’s the way to do it.”

Luebke said if internet access is not immediately available, information from the PillBot can be transmitted later.

The duo are also utilizing artificial intelligence to provide the initial diagnosis, with a medical doctor later developing a treatment plan.

Joel Bervell, a fourth-year medical student at Washington State University known to his million social media followers as the “Medical Mythbuster,” said the strength of this type of technology is how easily it can be used in remote and rural communities.

Many patients “travel hundreds of miles, literally, for their appointment.” Use of a pill that would not require a visit to a physician “would be life-changing for them.”

The micro-robotic pill is undergoing trials and will soon go before the U.S. Food and Drug Administration for approval, which developers expect to receive in 2025. The pill is then expected to be widely available in 2026.

Kumbhari hopes the technology can be expanded to the bowels, vascular system, heart, liver, brain and other parts of the body. Eventually, he hopes, this will free hospitals to focus on more urgent medical care and surgeries.


Apple pulls WhatsApp and Threads from App Store on Beijing’s orders

HONG KONG — Apple said it had removed Meta’s WhatsApp messaging app and its Threads social media app from the App Store in China to comply with orders from Chinese authorities.

The apps were removed from the store Friday after Chinese officials cited unspecified national security concerns.

Their removal comes amid elevated tensions between the U.S. and China over trade, technology and national security.

The U.S. has threatened to ban TikTok over national security concerns. But while TikTok, owned by Chinese technology firm ByteDance, is used by millions in the U.S., apps like WhatsApp and Threads are not commonly used in China.

Instead, the messaging app WeChat, owned by Chinese company Tencent, reigns supreme.

Other Meta apps, including Facebook, Instagram and Messenger, remained available for download, although use of such foreign apps is blocked in China by the country’s “Great Firewall,” a network of filters that restricts access to foreign websites such as Google and Facebook.

“The Cyberspace Administration of China ordered the removal of these apps from the China storefront based on their national security concerns,” Apple said in a statement.

“We are obligated to follow the laws in the countries where we operate, even when we disagree,” Apple said.

A spokesperson for Meta referred requests for comment to Apple.

Apple, previously the world’s top smartphone maker, recently lost the top spot to Korean rival Samsung Electronics. The U.S. firm has run into headwinds in China, one of its top three markets, with sales slumping after Chinese government agencies and employees of state-owned companies were ordered not to bring Apple devices to work.

Apple has been diversifying its manufacturing bases outside China.

Its CEO Tim Cook has been visiting Southeast Asia this week, traveling to Hanoi and Jakarta before wrapping up his travels in Singapore. On Friday he met with Singapore’s deputy prime minister, Lawrence Wong, and the two “discussed the partnership between Singapore and Apple, and Apple’s continued commitment to doing business in Singapore.”

Apple pledged to invest over $250 million to expand its campus in the city-state.

Earlier this week, Cook met with Vietnamese Prime Minister Pham Minh Chinh in Hanoi, pledging to increase spending on Vietnamese suppliers.

He also met with Indonesian President Joko Widodo. Cook later told reporters that they talked about Widodo’s desire to promote manufacturing in Indonesia, and said that this was something that Apple would “look at.”


EU politicians embrace TikTok despite data security concerns

SUNDSVALL, Sweden — German Chancellor Olaf Scholz’s short videos of his three-day trip to China this week proved popular on Chinese-owned social media platform TikTok, which the European Union, Canada, Taiwan and the United States banned on official devices more than a year ago, citing security concerns.

By Friday, one video showing highlights of Scholz’s trip had garnered 1.5 million views while another of him speaking about it on the plane home had 1.4 million views. 

Scholz opened his TikTok account April 8 to attract youth, promising he wouldn’t post videos of himself dancing.  His most popular post so far, about his 40-year-old briefcase, was watched 3.6 million times.  Many commented, “This briefcase is older than me.”

Scholz is one of several Western leaders to use TikTok, despite concerns that its parent company, ByteDance, could provide private user data to the Chinese government and could also be used to push a pro-Beijing agenda.

Greek Prime Minister Kyriakos Mitsotakis has 258,000 followers on TikTok, and Irish Prime Minister Simon Harris has 99,000 followers. 

U.S. President Joe Biden’s reelection campaign team opened a TikTok account in February, even as Biden vowed to sign legislation, expected to be voted on as early as Saturday, that would force ByteDance to divest TikTok’s U.S. operations or face a ban.

Former U.S. President Donald Trump, who unsuccessfully tried to ban TikTok in 2020, in March reversed his position and now appears to oppose a ban. 

ByteDance denies it would provide user data to the Chinese government, despite reports indicating it could be at risk, and China has firmly opposed any forced sale.

Kevin Morgan, TikTok’s director of security and integrity in Europe, the Middle East and Africa, says more than 134 million people in 27 EU countries visit TikTok every month, including a third of EU lawmakers. 

As the European Union’s June elections approach, more European politicians are using the popular platform favored by young people to attract votes. 

Ola Patrik Bertil Moeller, a Swedish legislator with the Social Democratic Party who has 124,000 followers on TikTok, told VOA, “We as politicians participate in the conversation and spread accurate images and answer the questions that people have. If we’re not there, other forces that don’t want good will definitely be there.”

But other European politicians see TikTok as risky.  

Norwegian Prime Minister Jonas Gahr Store on Monday expressed his uneasiness about social media platforms, including TikTok, being “used by various threat actors for several purposes, such as recruitment for espionage, influencing through disinformation and fake news, or mapping regime critics. This is disturbing.”

Konstantin von Notz, vice-chairman of the Green Parliamentary Group in the German legislature, told VOA, “While questions of security and the protection of personal data generally arise when using social networks, the issue is even more relevant for users of TikTok due to the company’s proximity to the Chinese state.” 

Matthias C. Kettemann, an internet researcher at the Leibniz Institute for Media Research in Hamburg, Germany, told VOA, “Keeping data safe is a difficult task; given TikTok’s ties to China doesn’t make it easier.”  But he emphasized, “TikTok is obliged to do these measures through the EU’s GDPR [General Data Protection Regulation] anyway from a legal side.”

But analysts question whether ByteDance will obey European law if pressed by the Chinese state.

Matthias Spielkamp, executive director of AlgorithmWatch, told VOA, “Does TikTok have an incentive to comply with European law? Yes, there’s an enormous amount of money on the line. Is it realistic that TikTok, being owned by a Chinese company, can resist requests for data by its Chinese parent? Hardly. How is this going to play out? No one knows right now.”

Adrianna Zhang contributed to this report.


Meta’s new AI agents confuse Facebook users 

CAMBRIDGE, Massachusetts — Facebook parent Meta Platforms has unveiled a new set of artificial intelligence systems that are powering what CEO Mark Zuckerberg calls “the most intelligent AI assistant that you can freely use.” 

But as Zuckerberg’s crew of amped-up Meta AI agents started venturing into social media in recent days to engage with real people, their bizarre exchanges exposed the ongoing limitations of even the best generative AI technology. 

One joined a Facebook moms group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum. 

Meta, along with leading AI developers Google and OpenAI, and startups such as Anthropic, Cohere and France’s Mistral, have been churning out new AI language models and hoping to convince customers they’ve got the smartest, handiest or most efficient chatbots. 

While Meta is saving the most powerful of its AI models, called Llama 3, for later, on Thursday it publicly released two smaller versions of the same Llama 3 system and said it’s now baked into the Meta AI assistant feature in Facebook, Instagram and WhatsApp. 

AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically smarter and more capable than their predecessors. Meta’s newest models were built with 8 billion and 70 billion parameters, the adjustable weights a model learns during training and a rough measure of its size and capability. A bigger, roughly 400 billion-parameter model is still in training.
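
To make the “predict the most plausible next word” idea concrete, here is a minimal, self-contained Python sketch of a toy bigram predictor. It illustrates only the general principle, not Meta’s Llama system; the tiny corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Toy "predict the most plausible next word" sketch. This is only an
# illustration of the general idea behind language models, not Meta's
# Llama system; the tiny corpus below is invented for the example.
corpus = (
    "the robot pill is swallowed and the robot camera is piloted "
    "and the camera is flushed out naturally"
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # 'robot' -- it followed 'the' twice, 'camera' once
print(predict_next("is"))   # 'swallowed', 'piloted' and 'flushed' are tied; the first one seen wins
```

Production models like Llama 3 replace these raw counts with billions of learned parameters and take far more context into account, but the underlying objective, predicting a plausible continuation of the text so far, is the same in spirit.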

“The vast majority of consumers don’t candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant,” Nick Clegg, Meta’s president of global affairs, said in an interview. 

‘A little stiff’

He added that Meta’s AI agent is loosening up. Some people found the earlier Llama 2 model — released less than a year ago — to be “a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions,” he said. 

But in letting down their guard, Meta’s AI agents have also been spotted posing as humans with made-up life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by group members, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press. 

“Apologies for the mistake! I’m just a large language model, I don’t have experiences or children,” the chatbot told the group. 

One group member who also happens to study AI said it was clear that the agent didn’t know how to differentiate a helpful response from one that would be seen as insensitive, disrespectful or meaningless when generated by AI rather than a human. 

“An AI assistant that is not reliably helpful and can be actively harmful puts a lot of the burden on the individuals using it,” said Aleksandra Korolova, an assistant professor of computer science at Princeton University. 

Clegg said Wednesday that he wasn’t aware of the exchange. Facebook’s online help page says the Meta AI agent will join a group conversation if invited, or if someone “asks a question in a post and no one responds within an hour.” The group’s administrators have the ability to turn it off. 

Need a camera?

In another example shown to the AP on Thursday, the agent caused confusion in a forum for swapping unwanted items near Boston. Exactly one hour after a Facebook user posted about looking for certain items, an AI agent offered a “gently used” Canon camera and an “almost-new portable air conditioning unit that I never ended up using.” 

Meta said in a written statement Thursday that “this is new technology and it may not always return the response we intend, which is the same for all generative AI systems.” The company said it is constantly working to improve the features. 

In the year after ChatGPT sparked a frenzy for AI technology that generates human-like writing, images, code and sound, the tech industry and academia introduced 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey. 

They may eventually hit a limit, at least when it comes to data, said Nestor Maslej, a research manager for Stanford’s Institute for Human-Centered Artificial Intelligence. 

“I think it’s been clear that if you scale the models on more data, they can become increasingly better,” he said. “But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet.” 

More data — acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits — will continue to drive improvements. “Yet they still cannot plan well,” Maslej said. “They still hallucinate. They’re still making mistakes in reasoning.” 

Getting to AI systems that can perform higher-level cognitive tasks and common-sense reasoning — where humans still excel — might require a shift beyond building ever-bigger models. 

Seeing what works

For the flood of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports and financial insights, and summarize long documents. 

“You’re seeing companies kind of looking at fit, testing each of the different models for what they’re trying to do and finding some that are better at some areas rather than others,” said Todd Lohr, a leader in technology consulting at KPMG. 

Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers — those using its advertising-fueled social networks. Joelle Pineau, Meta’s vice president of AI research, said at a recent London event that the company’s goal over time is to make a Llama-powered Meta AI “the most useful assistant in the world.” 

“In many ways, the models that we have today are going to be child’s play compared to the models coming in five years,” she said. 

But she said the “question on the table” is whether researchers have been able to fine-tune its bigger Llama 3 model so that it’s safe to use and doesn’t, for example, hallucinate or engage in hate speech. In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use. 

“It’s not just a technical question,” Pineau said. “It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our model ever more in general and powerful without properly socializing them, we are going to have a big problem on our hands.”


Developers: Enhanced AI could outthink humans in 2 to 5 years

VANCOUVER, British Columbia — Just as the world is getting used to the rapidly expanding use of AI, or artificial intelligence, AGI is looming on the horizon.

Experts say when artificial general intelligence becomes reality, it could perform tasks better than human beings, with the possibility of higher cognitive abilities, emotions, and the ability to teach and develop itself.

Ramin Hasani is a research scientist at the Massachusetts Institute of Technology and the CEO of Liquid AI, which builds specific AI systems for different organizations. He is also a TED Fellow, a program that helps develop what the nonprofit TED conference considers to be “game changers.”

Hasani says the first signs of AGI are realistically two to five years away. He says it will have a direct impact on our everyday lives.

What’s coming, he says, will be “an AI system that can have the collective knowledge of humans. And that can beat us in tasks that we do in our daily life, something you want to do … your finances, you’re solving, you’re helping your daughter to solve their homework. And at the same time, you want to also read a book and do a summary. So an AGI would be able to do all that.”

Hasani says advancing artificial intelligence will allow things to move faster, and that AI systems could even be made to have emotions.

He says proper regulation can be achieved by better understanding how different AI systems are developed.

This thought is shared by Bret Greenstein, a partner at London-based PricewaterhouseCoopers who leads its efforts on artificial intelligence.

“I think one is a personal responsibility for people in leadership positions, policymakers, to be educated on the topic, not in the fact that they’ve read it, but to experience it, live it and try it. And to be with people who are close to it, who understand it,” he says.

Greenstein warns that if AI is over-regulated, innovation will be curtailed and access will be limited for the people who could benefit from it.

For musician, comedian and actor Reggie Watts, who was the bandleader on “The Late Late Show with James Corden” on CBS, AI and the coming of AGI will make it easy to identify mediocre music, because such music will be easily mimicked.

Calling it “artificial consciousness,” he says existing laws to protect intellectual property rights and creative industries, like music, TV and film, will work, provided they are properly adopted.

“I think it’s just about the usage of the tool, how it’s … how it’s used. Is there money being made off of it, so on, so forth. So, I think that we already have … tools that exist that deal with these types of situations, but [the laws and regulations] need to be expanded to include AI because there’ll probably be a lot more nuance to it.”

Watts says that any form of AI is going to be smarter than one person, almost like all human intelligence collected into one point. He feels this will help humanity discover interesting things about the nature of reality itself.

This year’s conference was the 40th year for TED, the nonprofit organization that is an acronym for Technology, Entertainment and Design.


Shmyhal announces “progress” in unblocking U.S. military aid

“We have received assurances of support for the bills from both parties. We expect this large aid package to be voted on in the near future”


Heading toward 40: the hryvnia is falling sharply against the dollar on the interbank market

At noon the National Bank of Ukraine set the reference exchange rate at 39 hryvnias 77.16 kopiykas per dollar, 17 kopiykas higher than the official rate for April 19


Google fires 28 workers protesting contract with Israel

NEW YORK — Google fired 28 employees following a disruptive sit-down protest over the tech giant’s contract with the Israeli government, a Google spokesperson said Thursday.

The Tuesday demonstration was organized by the group “No Tech for Apartheid,” which has long opposed “Project Nimbus,” Google’s joint $1.2 billion contract with Amazon to provide cloud services to the government of Israel.

Video of the demonstration showed police arresting Google workers in Sunnyvale, California, in the office of Google Cloud CEO Thomas Kurian, according to a post by the advocacy group on X, formerly Twitter.

Kurian’s office was occupied for 10 hours, the advocacy group said.

Workers held signs including “Googlers against Genocide,” a reference to accusations surrounding Israel’s attacks on Gaza.

“No Tech for Apartheid,” which also held protests in New York and Seattle, pointed to an April 12 Time magazine article reporting a draft contract of Google billing the Israeli Ministry of Defense more than $1 million for consulting services.

A “small number” of employees “disrupted” a few Google locations, but the protests are “part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” a Google spokesperson said.

“After refusing multiple requests to leave the premises, law enforcement was engaged to remove them to ensure office safety,” the Google spokesperson said. “We have so far concluded individual investigations that resulted in the termination of employment for 28 employees, and will continue to investigate and take action as needed.”

Israel is one of “numerous” governments for which Google provides cloud computing services, the Google spokesperson said.

“This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services,” the Google spokesperson said.


G7 countries “will continue working” on ways to use Russian assets to support Ukraine

“We welcome the EU’s proposals to direct extraordinary revenues from Russia’s frozen sovereign assets to the benefit of Ukraine”


After two days of records, the hryvnia’s slide on the interbank market has halted

At noon the National Bank of Ukraine set the reference exchange rate at 39 hryvnias 57.88 kopiykas per dollar, almost unchanged from the official rate for April 16


The dollar heads for a new record – already above 39 and a half hryvnias

Around 12:30 the National Bank of Ukraine set the reference exchange rate at 39 hryvnias 50.94 kopiykas per dollar – 11 kopiykas higher than the official rate for April 15
