Meta's Nick Clegg Plays Down AI Risks, Calls Current Models "Quite Stupid"


Nick Clegg, President of Global Affairs at Meta, the parent company of Facebook, downplayed the risks associated with current artificial intelligence (AI) models, calling them "quite stupid." Clegg argued that the hype around AI has outpaced the technology itself and that current models are far from achieving true autonomy or independent thinking.

During an interview with the BBC, Clegg explained that large language models, such as Meta's Llama 2 and the models behind chatbots like ChatGPT, essentially connect dots in vast datasets of text and predict the next word in a sequence. He noted that the existential threats some AI experts warn about pertain to systems that do not yet exist.
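The "predict the next word" idea Clegg describes can be illustrated with a toy bigram model: count which word most often follows each word in a corpus, then predict accordingly. This is a deliberately minimal sketch for intuition only; real large language models use neural networks trained on vastly larger datasets, and the corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    # For each word, count how often each other word follows it.
    following = defaultdict(Counter)
    tokens = corpus.lower().split()
    for current_word, next_word in zip(tokens, tokens[1:]):
        following[current_word][next_word] += 1
    return following

def predict_next(model, word):
    # Return the most frequent follower of `word`, or None if unseen.
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat saw the cat"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

A model like Llama 2 does the same job in spirit, predicting a likely next token, but with billions of learned parameters instead of a simple frequency table.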

Meta's decision to make Llama 2 available as an open-source tool for commercial businesses and researchers has generated mixed reactions within the tech community. While open-sourcing provides valuable user testing data for identifying bugs and improvements, there are concerns about potential misuse and the effectiveness of guardrails to prevent harmful behavior.

It is worth noting that Meta partnered with Microsoft for this initiative, making Llama 2 accessible through Microsoft platforms like Azure. Microsoft has also made significant investments in OpenAI, the creator of ChatGPT. This collaboration raises questions about the concentration of power in the AI industry and its impact on competition.

OpenAI and Meta have faced legal challenges recently, including a lawsuit filed by comedian Sarah Silverman, alleging copyright infringement in the training of their AI systems.

Dame Wendy Hall, a Computer Science professor at the University of Southampton, expressed concerns about open-sourcing AI, particularly in terms of regulation. She questioned whether the industry can be trusted to self-regulate or if collaboration with governments is necessary.

Clegg acknowledged the need for AI regulation and emphasized that the open-sourcing of large language models is already happening, so the focus should be on ensuring responsible and safe practices. He asserted that Meta's open-sourced LLMs are safer than other openly released AI models.