Tuesday 30 May 2023

G-7 Officials Establish "Hiroshima Process" to Govern Generative AI


Leaders of the Group of Seven (G-7) countries have reached an agreement on the establishment of the "Hiroshima Process" to govern generative artificial intelligence (AI). The leaders expressed concerns about the potential disruption posed by rapidly advancing technologies and emphasized the need for governance aligned with G-7 values. Cabinet-level discussions will be held on this issue, and the outcomes will be presented by the end of the year, as stated in a joint statement at the G-7 summit.

Japanese Prime Minister Fumio Kishida emphasized the importance of human-centric and trustworthy AI development. He called for cooperation to ensure secure cross-border data flow and pledged financial support for this endeavor. The push for increased regulation echoes similar calls from industry and government leaders following the rapid development of OpenAI's ChatGPT, which has sparked a competitive race among companies. There is concern that unchecked advancements in generative AI, capable of producing convincing text, images, and videos, could become powerful tools for disinformation and political disruption.

In response to these concerns, OpenAI CEO Sam Altman and IBM's privacy chief have urged US senators to implement stricter AI regulations. Additionally, the World Health Organization has cautioned against the rapid adoption of AI in healthcare due to the risk of medical errors, potentially eroding trust in the technology and delaying its widespread use.

UK Prime Minister Rishi Sunak intends to formulate policies to manage the risks and benefits of AI and has invited Sam Altman and other experts to the UK. The European Union is also taking steps toward regulating AI tools, including requirements for transparency when users interact with AI and restrictions on real-time biometric identification of individuals in public spaces. Altman has expressed support for the establishment of a new regulatory authority in the US to maintain its leadership in the field.

The Japanese government tends to prefer softer guidelines for overseeing AI, as opposed to strict regulatory laws like those of the European Union. However, experts suggest that the government should be prepared to enact stricter laws if significant issues arise. Setting international standards for regulating generative AI is challenging due to differing societal values among G-7 countries. To ensure effective regulation, it is crucial to involve as many countries as possible in the discussion, including lower-income nations, according to experts.


Prominent Experts and Public Figures Rally for Global Action on AI Risks


Urgent Statement Gains Support from Key Signatories

AI experts, journalists, policymakers, and the public are joining forces to address the pressing concerns surrounding advanced artificial intelligence (AI). In a significant development, a concise statement has been released, signed by a group of influential figures, emphasizing the need to prioritize mitigating the risks of AI. The signatories include renowned AI scientists and notable public figures, reflecting the growing awareness of the potential severe risks associated with AI technology.


The statement asserts that safeguarding against the risk of extinction caused by AI should be treated as a global priority, similar to other societal-scale risks such as pandemics and nuclear war. By openly discussing and acknowledging these risks, the signatories aim to foster a broader understanding and encourage active engagement from experts and the general public.

Among the prominent signatories are:

  • Geoffrey Hinton: Emeritus Professor of Computer Science at the University of Toronto, widely recognized for his groundbreaking work on neural networks.
  • Yoshua Bengio: Professor of Computer Science at the University of Montreal and Mila, renowned for his contributions to deep learning and AI research.
  • Demis Hassabis: CEO of Google DeepMind, a leading figure in the field of AI research and development.
  • Sam Altman: CEO of OpenAI, a visionary leader in the AI community, committed to ensuring safe and beneficial AI for all.

These influential figures lend their expertise and support to the urgent call for action in addressing the risks associated with advanced AI. By adding their names to the statement, they emphasize the importance of proactive measures to safeguard humanity's future in the face of AI's potential perils.


The list of signatories also includes numerous other notable AI scientists, policymakers, and public figures who share similar concerns. However, it is worth noting that the focus of this article is on the four aforementioned individuals who represent a diverse range of expertise and influence within the AI landscape.


The statement serves as a catalyst for open dialogue, raising awareness about the risks posed by AI technology and highlighting the need for collective action. It invites experts from various fields, policymakers, and the public to contribute their knowledge and insights in order to address these risks effectively.


To further support the cause and stay informed, interested individuals can sign the statement by providing their full name, work email, title, and affiliation through the provided signup option. Additionally, a subscription to the AI Safety Newsletter is available to receive updates and past newsletters related to AI safety.


Furthermore, the statement urges those concerned about the risks of AI to consider making a donation to support the mission of the Center for AI Safety (CAIS), a non-profit organization dedicated to reducing societal-scale risks associated with artificial intelligence.


As discussions on AI risks gain momentum, the involvement of prominent experts and public figures amplifies the message and paves the way for meaningful action. By collectively addressing the risks posed by AI, the global community can strive for a future where advanced AI technologies are developed and deployed responsibly, ensuring the well-being and safety of humanity.


Monday 29 May 2023

Nvidia Unveils Glimpse of Gaming-AI Convergence at Computex 2023


Nvidia CEO Jensen Huang showcased an immersive glimpse into the future of gaming at Computex 2023 in Taipei. The demonstration exhibited the collision of gaming and AI through a visually stunning rendering of a cyberpunk ramen shop, where players can engage in real-time conversations with video game characters using their own voices.

The concept envisions a departure from traditional dialogue options, allowing players to simply hold a button, speak with their own voice, and receive responses from the in-game character. While the showcased dialogue left room for improvement, Nvidia's innovative approach to natural speech recognition opens up exciting possibilities for immersive gaming experiences.

The conversation revolved around a concerned player interacting with Jin, the shop proprietor, discussing rising crime rates and a notorious crime lord named Kumon Aoki. The AI-generated dialogue responded to the player's inquiries, providing information on where to find the crime lord and urging caution.

While the showcased chatbot dialogue may not have been groundbreaking, the remarkable aspect was the generative AI's ability to process and respond to natural speech. Nvidia's demo left viewers eager for a hands-on experience to explore the potential for diverse outcomes and more engaging interactions.

The demo was created in collaboration between Nvidia and Convai to promote their tools, including the Nvidia ACE (Avatar Cloud Engine) for Games middleware. The ACE suite comprises components such as NeMo, for building and deploying large language models, and Riva, for speech-to-text and text-to-speech capabilities.
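The speech-in, speech-out loop described above can be sketched generically. A minimal sketch follows; all three stage functions are hypothetical placeholders standing in for ASR, language-model, and TTS services (they are not real Riva or NeMo API calls):

```python
# Generic sketch of the speech -> LLM -> speech loop Nvidia describes.
# All three stage functions are hypothetical stand-ins, not actual
# Riva or NeMo API calls.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for an ASR stage (Riva-style in Nvidia's stack)."""
    return "Where can I find Kumon Aoki?"  # placeholder transcript

def generate_reply(character: str, utterance: str) -> str:
    """Stand-in for an LLM stage, conditioned on the character persona."""
    return f"[{character}] Be careful out there."  # placeholder response

def text_to_speech(text: str) -> bytes:
    """Stand-in for a TTS stage that would synthesize character audio."""
    return text.encode()  # placeholder "audio"

def npc_turn(character: str, player_audio: bytes) -> bytes:
    """One conversational turn: player speech in, character speech out."""
    transcript = speech_to_text(player_audio)
    reply = generate_reply(character, transcript)
    return text_to_speech(reply)
```

Staging the pipeline this way is what lets each component be swapped independently, which is presumably why Nvidia packages ACE as separate middleware pieces rather than one monolithic model.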

Beyond the chatbot conversation, the demo showcased the visual prowess of Unreal Engine 5, incorporating ray-tracing technology to deliver stunning graphics. While the chatbot aspect may have felt underwhelming in comparison, the demo highlighted the potential for combining realistic visuals with AI-driven dialogue.

During a Computex pre-briefing, Nvidia's VP of GeForce Platform, Jason Paul, confirmed that the technology could scale to multiple characters and potentially enable NPC interactions. However, such advanced capabilities have yet to be thoroughly tested.

While it remains to be seen if developers will fully embrace Nvidia's ACE toolkit, notable upcoming games like S.T.A.L.K.E.R. 2: Heart of Chernobyl and Fort Solis have already adopted elements such as "Omniverse Audio2Face." This feature aims to synchronize facial animations of 3D characters with the speech of their voice actors, enhancing the overall immersion in gaming narratives.

Nvidia's demonstration at Computex 2023 offered a tantalizing glimpse of the future where gaming and AI seamlessly converge, promising more dynamic and interactive gameplay experiences for players worldwide.

Nvidia Ventures into Israel to Construct Supercomputer, Meeting Soaring AI Demands


According to Reuters, Nvidia is constructing Israel's most powerful artificial intelligence (AI) supercomputer to meet the increasing demand for AI applications. The cloud-based system, known as Israel-1, will be partially operational by the end of 2023 and is expected to cost hundreds of millions of dollars.

Gilad Shainer, a senior vice president at Nvidia, highlighted the company's collaboration with 800 startups in Israel and tens of thousands of software engineers. The Israel-1 supercomputer aims to deliver up to eight exaflops of AI computing power, making it one of the fastest AI supercomputers globally. An exaflop represents the ability to perform 1 quintillion calculations per second.
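For a rough sense of scale, the exaflop figure above can be checked with a few lines of arithmetic (only the numbers quoted in this article are used; the comparison population is an assumption added purely for illustration):

```python
# Rough arithmetic illustrating the scale quoted above:
# 1 exaflop = 1 quintillion (10**18) calculations per second.
EXAFLOP = 10**18

# Israel-1 is described as delivering up to eight exaflops of AI compute.
israel1_ops_per_sec = 8 * EXAFLOP

# Illustrative comparison (assumed figure): if ~8 billion people each
# did one calculation per second, how long would they need to match
# a single second of Israel-1's work?
people = 8 * 10**9
seconds = israel1_ops_per_sec / people          # seconds of "human computing"
years = seconds / (60 * 60 * 24 * 365)

print(f"{israel1_ops_per_sec:.1e} ops/sec")     # 8.0e+18
print(f"about {years:.0f} years of one-op-per-second work by everyone on Earth")
```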

According to Shainer, AI is considered the "most important technology in our lifetime," and the development of AI and generative AI applications requires large graphics processing units (GPUs). The supercomputer will enable companies in Israel to access the computational power needed for training on large datasets, facilitating the creation of solutions for more complex problems.

The supercomputer was developed by the former Mellanox team, which Nvidia acquired in 2019 for approximately $7 billion. Nvidia's initial focus for the supercomputer is to support its Israeli partners, but they may expand its use to collaborations with partners outside of Israel in the future.

In a separate announcement, Nvidia revealed its collaboration with the University of Bristol in the UK to build a new supercomputer using a newly developed Nvidia chip, aiming to compete with Intel and Advanced Micro Devices Inc.

Breaking Barriers: Nvidia Chief Says AI Enables Programming for All

Nvidia CEO Jensen Huang declared that artificial intelligence (AI) has made it possible for anyone to become a computer programmer simply by speaking to the computer. He celebrated this development as the end of the "digital divide." Nvidia, known for supplying chips and computing systems for AI, has become the world's most valuable listed semiconductor company.

During a speech at the Computex forum in Taipei, Huang emphasized that AI is leading a computing revolution. He delighted the crowd by occasionally using Mandarin or Taiwanese words. He highlighted that each computing era opens up new possibilities, and AI is no exception.

Huang emphasized the low barrier to entry in programming, asserting that the digital divide has been closed. He stated, "Everyone is a programmer now - you just have to say something to the computer." He also attributed the rapid growth of AI to its ease of use and predicted that it would impact every industry.

Nvidia's chips have enabled companies like Microsoft to incorporate human-like chat features into search engines. Huang demonstrated the capabilities of AI, including generating a short pop song with minimal instructions to praise Nvidia. He also unveiled a partnership with WPP, the world's largest advertising group, for generative AI-enabled content in digital advertising.

Nvidia has faced challenges in meeting the demand for its AI chips. Tesla CEO Elon Musk recently compared acquiring Nvidia's graphics processing units (GPUs) to obtaining drugs, indicating their scarcity.

Wednesday 24 May 2023

Researchers Develop a Robotic Bee That Can Fly in All Directions

Researchers at Washington State University have successfully developed a robotic bee capable of flying in all directions. The innovative prototype, known as Bee++, features four carbon fiber and mylar wings, each equipped with a lightweight actuator for precise control. This achievement marks the first time a robotic bee has demonstrated stable flight in all directions, including the complex twisting motion known as yaw.


The Bee++ weighs 95 milligrams and has a 33-millimeter wingspan, making it larger than real bees, which weigh around 10 milligrams. The robot can fly autonomously for only about five minutes at a time, however, so it typically remains tethered to a power source by a cable. The researchers are also working on developing other types of insect robots, including crawlers and water striders.


Led by Néstor O. Pérez-Arancibia, an associate professor in WSU's School of Mechanical and Materials Engineering, the team published their findings in the journal IEEE Transactions on Robotics. Pérez-Arancibia will present the results at the upcoming IEEE International Conference on Robotics and Automation.


For over three decades, researchers have been striving to create artificial flying insects. These miniature robots have the potential to revolutionize various fields, including artificial pollination, search and rescue operations in confined spaces, biological research, and environmental monitoring in hostile environments.


However, achieving liftoff and controlled landing for these tiny robots required the development of controllers that mimic the functionality of an insect's brain.


"It's a mixture of robotic design and control," explains Pérez-Arancibia. "Control involves highly mathematical principles, where you design an artificial brain. Some refer to it as hidden technology, but without these simplified brains, nothing would work."


Initially, the researchers developed a two-winged robotic bee, but its mobility was limited. In 2019, Pérez-Arancibia and two of his PhD students successfully constructed a four-winged robot light enough to achieve takeoff. To perform maneuvers like pitching or rolling, the researchers programmed the front and back wings, as well as the right and left wings, to flap differently, creating the necessary torque to rotate the robot along its main horizontal axes.


However, controlling the complex yaw motion proved to be crucial. Without it, the robots would lose control and be unable to focus on a specific target, resulting in crashes.


"If you can't control yaw, you're super limited," Pérez-Arancibia states. "Imagine a bee trying to reach a flower but constantly spinning due to the lack of yaw control."


Full freedom of movement is also essential for evasive maneuvers and tracking objects effectively.


"The system is highly unstable, and the problem is extremely challenging," Pérez-Arancibia explains. "For years, people had theoretical ideas about yaw control, but actuation limitations prevented successful implementation."


To address this issue, the researchers took inspiration from insects and adjusted the wing orientation to allow controlled twisting. They also increased the wing flapping frequency from 100 to 160 times per second.


"The solution involved both the physical design of the robot and the invention of a new controller—the 'brain' that guides the robot's actions," Pérez-Arancibia reveals.


 

Figure, an AI Startup, Secures $70 Million Funding to Develop Humanoid Robots

AI startup Figure announced on Wednesday that it has raised $70 million in its first external funding round, led by Parkway Venture Capital. The company aims to build general-purpose humanoid robots and will utilize the new funding to expedite the development and manufacturing of its first autonomous humanoid, scheduled for launch within the coming months. While the valuation of the one-year-old startup was not disclosed, insiders estimate it to be over $400 million.


Founder and CEO of Figure, Brett Adcock, personally invested $20 million in the funding round. Other investors include Aliya Capital and Bold Ventures. Headquartered in Sunnyvale, California, Figure focuses on creating versatile humanoid robots capable of performing tasks in various environments, from warehouses to retail spaces. The company is currently engaged in discussions with retailers regarding potential commercialization opportunities.


Adcock highlighted that Figure stands out from other robotics companies like Boston Dynamics and Amazon Robotics by emphasizing the development of robots capable of handling general tasks. The long-term objective is to enable the robots to learn and interact with their environment. Adcock expressed his belief in the vast potential of general-purpose humanoid robots, stating that their deployment in the workforce can help address labor shortages and eventually eliminate the need for unsafe and undesirable jobs.


In the race to develop commercially viable humanoid robots, both major tech companies and startups, including Figure, are striving to lead the way. Tesla, for instance, unveiled a prototype of its humanoid robot named 'Optimus' last year. CEO Elon Musk anticipates that the electric vehicle manufacturer will begin taking orders for the robot within three to five years, with a price tag below $20,000.

 

Tuesday 23 May 2023

India's Infosys unveils AI platform Infosys Topaz

Infosys, India's second-largest software services exporter, announced the launch of its platform called Infosys Topaz for generative artificial intelligence (AI). The company also confirmed its focus on generative AI projects and initiatives during its FY23 earnings call. Salil Parekh, CEO of Infosys, stated that they have active projects with clients involving generative AI platforms.


Clients are increasingly looking to leverage generative AI to address specific areas within their businesses. Infosys has trained open-source generative AI platforms using its internal software development libraries, and it expects the technology to offer more opportunities for collaboration with clients and to improve internal productivity metrics. Additionally, Infosys is developing its own generative AI applications using both open-source algorithms and proprietary platforms like ChatGPT.


Parekh emphasized that the company is actively working on client projects centered around large models to cater to various needs within client organizations. The aim is to explore how generative AI can leverage these large models to create more efficient applications for clients.


Prior to the earnings season, another IT services firm, Tech Mahindra, introduced the Generative AI Studio, a suite of solutions within its AI offerings. The platform enables enterprises to produce high-quality content outputs at an accelerated pace by providing structured and customized generative AI features.


With these developments, both Infosys and Tech Mahindra are showcasing their commitment to harnessing the potential of generative AI and delivering innovative solutions to meet the evolving needs of their clients.


 

Monday 22 May 2023

Tackling the Rise of Generative AI: Regulators Look to Established Rules in the Case of ChatGPT


With the rapid development of powerful artificial intelligence services like ChatGPT, regulators are turning to existing rules to control a technology that has the potential to reshape societies and businesses. The European Union is leading the way in drafting new AI rules that could serve as a global benchmark, addressing the privacy and safety concerns that have arisen with the rapid advancement of generative AI.


However, the enforcement of these regulations is expected to take several years.


"In the absence of specific regulations, governments can only apply existing rules," said Massimiliano Cimnaghi, a European data governance expert at consultancy BIP. "Data protection laws are applied to protect personal data, and regulations that have not been specifically defined for AI but are still applicable come into play when there is a threat to people's safety."


In April, Europe's national privacy watchdogs established a task force to address concerns related to ChatGPT following the Italian regulator Garante's temporary shutdown of the service. Garante accused OpenAI of violating the EU's General Data Protection Regulation (GDPR), a comprehensive privacy regime implemented in 2018.


ChatGPT was reinstated after OpenAI agreed to incorporate age verification features and allow European users to block the use of their information for training the AI model.


A source close to Garante revealed that the agency plans to extend its examination to other generative AI tools. Additionally, data protection authorities in France and Spain have initiated probes into OpenAI's compliance with privacy laws.


Bringing in AI experts is a priority for regulators. Generative AI models have gained notoriety for producing errors, or "hallucinations," generating misinformation with surprising confidence. Such errors could have significant consequences: if banks or government departments use AI to expedite decision-making, it could lead to unfair rejections of loan applications or benefit payments. Major tech companies like Alphabet's Google (GOOGL.O) and Microsoft Corp (MSFT.O) have already shelved AI products considered ethically risky, particularly in the financial sector.


Regulators aim to apply existing rules that cover various aspects, including copyright, data privacy, and two key issues: the data used to train AI models and the content they generate. Experts and regulators from the United States and Europe highlight the importance of agencies "interpreting and reinterpreting their mandates." For example, the US Federal Trade Commission (FTC) is investigating algorithms for discriminatory practices under its existing regulatory powers.


In the European Union, proposals for the AI Act will require companies like OpenAI to disclose any copyrighted material used, such as books or photographs, to train their models, potentially exposing them to legal challenges. However, proving copyright infringement may not be straightforward, as lawmakers acknowledge the complexity involved.


French data regulator CNIL is taking a "creative" approach to examine how existing laws can be applied to AI. In France, discrimination claims are typically handled by the Defenseur des Droits (Defender of Rights). However, CNIL has taken the lead on AI bias due to the Defender of Rights' limited expertise in this area. While data protection and privacy remain their main focus, CNIL is exploring the full range of effects and considering using GDPR provisions that protect individuals from automated decision-making. However, reaching a consensus among regulators may prove challenging, with potential differences in views and approaches.


In the UK, the Financial Conduct Authority, among other state regulators, is developing new guidelines for AI. They are collaborating with the Alan Turing Institute in London, as well as legal and academic institutions, to enhance their understanding of the technology.


Italian Watchdog Expands AI Review Following Brief Ban on ChatGPT


Italy's data protection authority, Garante, has unveiled plans to conduct a comprehensive review of various artificial intelligence (AI) platforms and recruit AI experts, signaling an increased focus on scrutinizing this powerful technology in the wake of the temporary ban on ChatGPT in March. Garante stands out as one of the most proactive among the 31 national data protection authorities responsible for overseeing Europe's General Data Protection Regulation (GDPR), the data privacy framework.


This regulatory agency has a history of being at the forefront, having been the first to ban the AI chatbot company Replika, impose fines on facial recognition software maker Clearview AI, and impose restrictions on TikTok in Europe. In March, Garante temporarily banned ChatGPT, developed by OpenAI with backing from Microsoft Corp (MSFT.O), and initiated an investigation into suspected violations of privacy regulations.


Agostino Ghiglia, a member of Garante's board, stated, "We intend to launch a comprehensive review of generative and machine learning AI applications available online to assess whether these new tools comply with data protection and privacy laws. If necessary, we will initiate further investigations."


The soaring popularity of ChatGPT has prompted major tech players from Alphabet (GOOGL.O) to Meta (META.O) to introduce their own versions. Simultaneously, governments and policymakers worldwide are engaged in discussions on new legislation that could take years to implement.


Ghiglia emphasized the need for expertise in the rapidly evolving AI landscape, stating, "We are seeking three AI advisors who possess a strong technical background to assist us in our data protection efforts." This move highlights how regulators are utilizing existing legislation to oversee a technology that has the potential to revolutionize societies and businesses.


Garante's board, consisting of legal experts, acknowledged that the authority currently has 144 staff members, considerably fewer than its European counterparts in France, Spain, and Britain. Ghiglia confirmed that most of the staff have legal backgrounds.


In their crackdown on ChatGPT, Garante invoked provisions of the GDPR, particularly those safeguarding minors and granting individuals the right to request data deletion and object to the use of their personal data.


After Garante's intervention, OpenAI, the developer of ChatGPT, made adjustments to ensure compliance with regulations. Ghiglia remarked, "Garante's board members often become aware of potential privacy breaches by actively exploring digital tools and applications as they become available. In the case of ChatGPT, we discovered it was not in compliance with EU data privacy rules."


It will likely take several years for potential new AI regulations to be enacted.


"That's why we acted swiftly with ChatGPT," Ghiglia explained.

Thursday 18 May 2023

61% of Americans Consider AI a Threat to Humanity: Reuters/Ipsos Poll

 

An overwhelming majority of Americans, as revealed in a Reuters/Ipsos poll published on Wednesday, express concern about the potential risks posed by the rapid growth of artificial intelligence (AI) technology. Over two-thirds of respondents are worried about the negative impacts of AI, with 61% believing it could endanger human civilization.


The widespread integration of AI into daily life, exemplified by the exponential rise of OpenAI's ChatGPT chatbot, has propelled AI to the forefront of public discourse. This has triggered an AI arms race, with industry giants like Microsoft and Google striving to outperform each other in the field of AI.


Lawmakers and AI companies have also voiced apprehension. OpenAI CEO Sam Altman testified before the U.S. Congress, highlighting concerns about potential misuse of AI technology and advocating for regulation. During a Senate panel discussion on AI applications, Senator Cory Booker emphasized the global explosion of AI and the need to address its regulation.


The Reuters/Ipsos poll underscores the prevailing sentiment among Americans, with three times as many individuals foreseeing negative consequences from AI as those who do not. Specifically, 61% of respondents perceived AI as a threat to humanity, while only 22% disagreed and 17% remained uncertain.


Concern levels were particularly high among supporters of former President Donald Trump, with 70% expressing apprehension, compared to 60% among Joe Biden voters. Additionally, Evangelical Christians were more inclined to strongly agree that AI poses risks to humanity, with 32% holding this view compared to 24% of non-Evangelical Christians.


Landon Klein, director of U.S. policy at the Future of Life Institute, which penned an open letter signed by Tesla CEO Elon Musk calling for a pause in AI research, highlighted the broad-based worry regarding AI's negative effects. Drawing parallels to the beginning of the nuclear era, Klein stressed the importance of taking action in response to public concerns.


Despite the concerns expressed, Americans prioritize other issues such as crime and the economy. A significant majority, 77%, supports increasing police funding to combat crime, while 82% are worried about the risk of a recession.


Industry experts argue for a better understanding of AI's benefits. Sebastian Thrun, a computer science professor at Stanford and founder of Google X, emphasizes that AI will enhance people's quality of life, making them more competent and efficient. Ion Stoica, a professor at UC Berkeley and co-founder of AI company Anyscale, points out that the positive applications of AI, such as revolutionizing drug discovery, often go unnoticed compared to the attention garnered by ChatGPT.


The online poll surveyed 4,415 U.S. adults between May 9 and May 15, with a credibility interval (a measure of accuracy) of plus or minus 2 percentage points.
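Ipsos reports a Bayesian credibility interval rather than a classical margin of error, but computing the classical figure for the same sample size gives a similar ballpark. A quick sketch, assuming a simple random sample (Ipsos's actual weighting makes its published interval somewhat wider):

```python
import math

# Classical 95% margin of error for a simple random sample --
# a rough proxy for the Bayesian credibility interval Ipsos reports.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Return the half-width of a 95% confidence interval, as a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Figures from the poll above: 61% of 4,415 respondents.
moe = margin_of_error(0.61, 4415)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 1.4
```

The half-width shrinks with the square root of the sample size, which is why quadrupling the number of respondents only halves the interval.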

Wednesday 17 May 2023

Shell to Apply New AI Technology to Deep-Sea Oil Exploration

 


Shell Plc and big-data analytics firm SparkCognition have announced a collaboration to enhance offshore oil output through the use of AI-based technology. SparkCognition's AI algorithms will play a crucial role in processing and analyzing vast amounts of seismic data for Shell's deep-sea exploration and production operations in the U.S. Gulf of Mexico.


Shell's vice president of innovation and performance, Gabriel Guerra, expressed the company's commitment to reinventing their exploration methods and finding innovative solutions. The integration of AI technology aims to improve operational efficiency, speed, and overall success in exploration. Remarkably, this new approach can significantly reduce the time required for explorations, cutting it down from nine months to less than nine days.


Bruce Porter, the chief science officer at SparkCognition, emphasized the transformative potential of generative AI for seismic imaging, stating that it can disrupt and revolutionize the exploration process, leading to wide-ranging implications.


The AI-powered technology will generate subsurface images using fewer seismic data scans compared to traditional methods, contributing to the preservation of deep-sea environments. By accelerating the exploration workflow and reducing the need for extensive seismic surveys, costs associated with high-performance computing can also be minimized.


Shell's partnership with SparkCognition marks a significant step forward in leveraging advanced AI capabilities to optimize deep-sea oil exploration, ultimately driving operational efficiency, cost savings, and a more sustainable approach to offshore production.



Monday 15 May 2023

OpenAI is Preparing To Release a New Open-Source Language Model


OpenAI is getting ready to launch a new language model that will be available to the public, according to Reuters reports. The company is best known for ChatGPT, its chatbot celebrated for generating prose and poetry on request. ChatGPT has become a topic of great interest in Silicon Valley, as investors see generative AI as a promising growth area for technology companies.

In January, Microsoft made a significant investment in OpenAI, strengthening their partnership and increasing competition with Google's parent company, Alphabet. Now, Meta Platforms Inc is hurrying to join the race by developing its own generative AI products capable of producing human-like writing, art, and other content.


However, the report suggests that OpenAI's upcoming open-source model is unlikely to compete directly with its flagship GPT models. Reuters reached out to OpenAI for comment, but the company has not yet responded.


Collaboration Between SAP and Microsoft Expands Focus on Generative AI in Recruiting

SAP announced on Monday its plans to enhance collaboration with Microsoft in the realm of generative AI for personnel recruiting. The German software company will integrate its SuccessFactors solutions with Microsoft's 365 Copilot and Azure OpenAI Service. This integration will enable access to advanced language models and facilitate the generation of natural language for recruitment purposes.


Christian Klein, CEO of SAP, expressed enthusiasm about the potential that generative AI holds for the industry and SAP's customers. This collaboration marks an exciting step forward in leveraging the power of AI to enhance recruiting processes.


In April, SAP had previously revealed its intention to incorporate OpenAI's ChatGPT, a technology supported by Microsoft, into its products. With this latest announcement, SAP and Microsoft further solidify their commitment to harnessing generative AI for innovative solutions in the field of personnel recruiting.

Brazilian Startup Eve Completes Wind Tunnel Tests for Flying Car Prototype

Brazilian electric plane manufacturer Eve Holding Inc, controlled by Embraer, announced on Monday that it has successfully conducted wind tunnel testing for its flying car prototype, bringing the futuristic vehicle one step closer to becoming a reality. The company aims to begin commercial operations of its fully electric vehicle, known as an electric vertical take-off and landing (eVTOL) vehicle or flying taxi, by 2026.


The completion of wind tunnel tests is a crucial milestone for certification by regulators and future production and sales worldwide. Luiz Valentini, Eve's top technology officer, stated that the information gathered during this phase of development has helped refine the technical solutions of their eVTOL before moving forward with production tooling and conforming prototypes.


Eve plans to finalize the selection of its main equipment suppliers in the first half of this year and to begin building its first full-scale prototype in the second half. Additional testing is scheduled for 2024. The company already has a backlog of nearly 2,800 orders, with investments from United Airlines and Rolls-Royce.


The wind tunnel tests were conducted in Switzerland using a scale model of the eVTOL. This allowed Eve to assess the performance of various components, such as the fuselage and wings, during flight.


Eve, which debuted on the New York Stock Exchange last year, has prioritized certification as a crucial target. Analysts believe the company is on track to achieve its ambitious plans, even if it may not be the first to bring a flying car to market. According to Jefferies analysts, Eve's eVTOL has the potential to capture a significant share of the emerging eVTOL market.


Both Jefferies and JPMorgan recently increased their target prices for Eve, citing the company's strong backlog and support from Embraer. Eve's shares have experienced an approximate 10% increase this year.

 

