🌀 Chaos Continuing

OpenAI insiders explain why safety-conscious employees are leaving.

In partnership with

Hello, AI Enthusiasts!

In this edition, we dive into a recent revelation by Jan Leike, a former key player at OpenAI. After his departure, Leike took to X to disclose some eye-opening internal conflicts and pressing AI safety concerns within the company.


TODAY’S AI SECRET:

  • Ex-Superalignment co-leader Jan Leike shared insider info on OpenAI after departing.

  • Daily trending & featured AIs boost your career and business.

  • UK urges firms on AI safety at Seoul summit.

Get AI Secret's latest newsletter every morning before 6 a.m. ET. Can't find it? Check your spam or promotions folder, then drag it to your main inbox.

Doing this a few times should ensure that our emails consistently reach your inbox.

TODAY’S AI WORLD

  • iOS 18 - Apple is set to announce new AI capabilities, including auto-summarization of notifications and enhanced Siri voice interaction, in iOS 18 at the upcoming Worldwide Developers Conference.

  • China’s Leading AI Chatbot - Doubao by ByteDance, launched in August, now exceeds Ernie Bot in downloads and monthly active iOS users in China, according to Sensor Tower.

  • Outpacing Nvidia - Intel confirms its upcoming Falcon Shores hybrid processor, combining x86 and Xe GPU cores, is set to consume 1500W of power, exceeding the power draw of Nvidia's B200.

  • AI-Generated Content Overrun - Tech experts warn of the damaging impact of "slop," AI-generated webpages and images cluttering the internet.

OPENAI

Ex-Superalignment co-leader Jan Leike shared insider info on OpenAI after departing.

Image Credit: AI Secret

After leaving OpenAI, Jan Leike shared insider details on X, revealing internal conflicts and AI safety concerns. His posts have sparked questions about the company's commitment to AI safety amid its rapid growth.

The Details:

  • Lack of Computational Resources: Jan Leike, co-lead of the Superalignment team, revealed that the team struggled to get the 20% of computational resources it had been promised, hampering its efforts.

  • Focus on Shiny Products Over Safety: Leike criticized OpenAI for prioritizing the launch of "shiny products" over addressing AGI safety issues, making it increasingly difficult for the safety team to function effectively.

  • Non-Disparagement Agreements: Departing employees are required to sign agreements that prevent them from speaking negatively about OpenAI, or they forfeit their vested equity. Some, like Daniel Kokotajlo, refused to sign and have publicly criticized the company's safety practices.

  • Historical Conflicts: A long-running division within OpenAI between safety advocates and proponents of faster growth has led to the recent high-profile departures.

  • Technical vs. Market Priorities: The conflict between technical safety advocates and market-oriented executives has reached a breaking point. The technical team wants to ensure AGI is developed responsibly, while the market team prioritizes expanding ChatGPT and other commercial products.

  • Recent Resignations: Key figures like Ilya Sutskever and several safety team members have left, citing a loss of confidence in OpenAI's commitment to safe AGI development. This exodus has left the Superalignment team without clear leadership.

Why It Matters:

  • Neglecting Safety for Speed: The resignations of top safety advocates highlight the risks of accelerating AI development without sufficient safety measures, potentially leading to harmful outcomes.

  • Public Trust and Accountability: OpenAI's internal conflicts and the non-disparagement agreements raise concerns about transparency and accountability in one of the world's leading AI companies.

  • Erosion of Internal Trust: The internal strife and leadership disputes have eroded trust within OpenAI, affecting morale and the company's ability to retain top talent.

TOGETHER WITH STACKED MARKETER

60,000 world-class marketers read this newsletter daily

Marketers at Google, Tesla, Meta, Shopify, Amazon, Pinterest, and other successful companies read Stacked Marketer to increase their IQ.

Stacked Marketer curates the most useful and important digital marketing news, trends, strategies, and case studies and boils them down to a 7-minute read.

AI TOOLS

Trending AIs.

🚀 Vizard.ai creates viral clips for social media within minutes.

🦸‍♂ Publer helps you manage your social media with AI.

💪 Fit Senpai offers personalized fitness plans in seconds.

🌈 Predis.ai generates shareable videos, carousels, and single-image posts in your brand's language.

🎨 Sologo is the next-gen AI logo design generator.


UK

UK urges firms on AI safety at Seoul summit.

Image Credit: AI Secret

The UK government announced that it would use a major summit in South Korea this week to push for reducing the risks of artificial intelligence, urging companies to develop AI responsibly.

The Details:

  • UK's Push for AI Safety: The UK government is using a major summit in South Korea to advocate for reducing AI risks, emphasizing the need for firms to integrate safety into their AI models.

  • Global Leadership in AI Risk Management: Prime Minister Rishi Sunak aims to establish the UK as a global leader in managing AI risks and opportunities, continuing efforts from a previous AI safety summit hosted in the UK.

  • Diverging International Approaches: Different countries are taking varied approaches to AI regulation, with the EU implementing strict laws and the US seeing state-specific regulations, while the UK favors a more cautious approach to avoid stifling innovation.

Why It Matters:

  • Promoting Safe AI Development: Encouraging firms to build safety into AI models is crucial to prevent potential harms and ensure technology benefits society.

  • International Cooperation: Collaborative efforts among countries like China, the US, India, and Canada are essential to create a unified approach to AI safety, enhancing global standards and practices.

  • Balancing Innovation and Regulation: The UK's strategy to avoid premature regulation while fostering innovation highlights the challenge of maintaining a balance between technological progress and ensuring public safety.

1 Million+ AI enthusiasts are eager to learn about your product.

AI Secret is the world’s #1 AI newsletter, boasting over 1 million readers from leading companies such as OpenAI, Google, Meta, and Microsoft. We've assisted in promoting over 500 AI-related products. Will yours be next?

What We Can Offer:

  • Launch an Advertising Campaign

  • Conduct a Survey

  • Introduce a New Product or Feature

  • Other Business Cooperation

Email our co-founder Mark directly at [email protected] if the button fails.