
AI Furnace Newsroom: OpenAI Brings OpenClaw’s Creator In-House, Senior Anthropic Safety Researcher Quits, ChatGPT Ads Arrive, AI Funding Latest

The AI Furnace weekly newsletter. Your destination for the latest news, innovations, opportunities, and product launches in AI.


Welcome to this week’s AI Furnace Newsroom

In today’s insights we cover:

  1. OpenAI Hires OpenClaw Founder, Peter Steinberger, to Build Multi-agent Systems

  2. A Senior Anthropic Safety Researcher Quits with a Public Warning

  3. ChatGPT Ads Arrive in the U.S.

  4. OpenAI Launches Frontier for Enterprise Agents

  5. Anthropic Raises $30B & AI Funding Stays White-Hot Across Voice, Video and Frontier Labs

  6. Security Lags as AI Adoption Increases

Read time: 5 mins

💡 Furnace Insights

OpenAI

Peter Steinberger, whose open-source project OpenClaw took the world by storm over the past few weeks as a social network specifically for AI agents, is joining OpenAI as the company leans more into multi-agent systems. OpenClaw will continue as an open-source project backed by a dedicated foundation, rather than getting folded into OpenAI as a closed product. Steinberger says he is a builder at heart and sees OpenAI as the fastest path to bring AI agents to everyone.

OpenClaw’s rapid growth as a social network surfaced the “malicious skills” problem, so it’s no wonder OpenAI is interested: once agents have access to apps, files, and accounts, the skills marketplace becomes the new attack surface. Researchers flagged over 400 malicious skills in its Moltbook ecosystem.

OpenAI bringing Steinberger in while keeping OpenClaw open suggests OpenAI wants to make multi-agent systems a core product layer and to shape standards for permissions, auditing, and safe skill distribution before the agent world fragments into a thousand incompatible (and insecure) ecosystems.

Anthropic

Anthropic’s safeguards lead, Mrinank Sharma, resigned and published a cryptic poetry-like letter warning that “the world is in peril,” pointing to growing tension between stated safety values and day-to-day pressures. The exit landed amid a wider industry moment where labs are shipping workplace agents and new integrations at a pace that makes safety governance feel perpetually behind, and safety researchers are starting to quit across the board.  

When safety leaders leave loudly, it’s rarely about one feature but usually about how decisions get made under commercial pressure. And for companies adopting Claude in real workflows, it’s a reminder that “trust” isn’t just model quality; it’s internal incentives, escalation paths, and whether safeguards have the power to slow launches when needed. 

OpenAI

OpenAI has started testing ads in ChatGPT for U.S. users on the Free plan and on the $8 Go plan. Ads appear below responses, clearly labeled, with controls to manage personalization. Higher subscription tiers remain ad-free. OpenAI says ads don’t influence answers and that advertisers don’t get access to user conversations.  

The key shift is behavioral: ChatGPT is becoming a place people browse and decide what to buy, not just ask for information. Ads underneath an answer turn “help me choose” into a monetizable moment, which puts OpenAI on a path closer to Google Search, except the intent signal is richer because it’s conversational. The risk is equally clear: if users start doubting whether advice is clean or commercial, they won’t complain loudly; they’ll simply ask fewer high-intent questions.

OpenAI

OpenAI announced an enterprise platform called Frontier to help large organizations build, deploy, and govern AI agents, with emphasis on shared context, permissions, and operating boundaries. A notable piece is OpenAI embedding “Forward Deployed Engineers” with customer teams to push prototypes into production.  

This is OpenAI working out its competitive positioning against Anthropic: Anthropic is becoming the default AI lab for enterprises, while OpenAI has traditionally leaned toward a consumer-layer AI lab for the masses. Frontier is OpenAI saying that enterprise is still on its radar, and acknowledging the real enterprise bottleneck: companies don’t fail at AI because models aren’t smart but because agents don’t fit into systems of record safely. Frontier is the missing middle layer: connectors, execution environments, and governance that make “AI coworkers” manageable at scale.

The recurring theme, together with OpenAI’s hire of the OpenClaw founder, is the move to multi-agent systems: Frontier is multi-agent systems for the enterprise, while OpenClaw is the same for consumers.

AI market

This month’s funding tape is a reminder that AI “category leaders” are forming fast beyond just the frontier labs, and investor demand is high here. Anthropic raised a $30B round at a $380B valuation, ElevenLabs (voice AI company) raised a $500M round at an $11B valuation led by Sequoia Capital, and Runway AI (video AI company) raised $315M at a $5.3B valuation. Forbes reported that legal AI startup Harvey may raise at an $11B valuation, and its competitor Legora is also rumoured to be raising at a $6B valuation.

Investors are paying for distribution, data, and infrastructure leverage, not just model demos. Voice (ElevenLabs) and video/world-model tooling (Runway) are emerging as durable layers in media and enterprise workflows, while frontier labs (Anthropic) are being valued like future platforms. The tension is that these valuations quietly assume operational maturity (i.e., safety, compliance, uptime, enterprise controls) and not just research momentum.  

Capital is also flowing at the earlier stages, with AI lab Fundamental emerging from stealth as a unicorn last week with $255 million in funding at a $1.4 billion post-money valuation. You can see more earlier stage rounds in the AI Venture Deals of the Week section in this newsletter below.

Security

A new Darktrace report highlights growing unease among security teams as AI agents are granted access to critical data and processes: a large majority of security professionals (76%) are justifiably worried about the security implications of these tools, and nearly half (47%) say they are either “very” or “extremely” concerned that agents could operate with direct access to sensitive data or critical business processes.

The practical takeaway is that agents change security from “protect the perimeter” to “control the permissions.” When a system can read, write, send, and execute, the biggest risks become mis-scoped access, weak audit trails, and workflow-level prompt injection. And these problems aren’t solved by traditional endpoint security alone. The companies that adopt agents safely will be the ones that treat them like employees: least-privilege access, logging, approvals for sensitive actions, and continuous monitoring. 
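The “treat agents like employees” idea above can be made concrete. Here is a minimal, hypothetical sketch (the names `AgentPolicy`, `authorize`, etc. are illustrative, not any vendor’s real API) of least-privilege allowlisting, human approval for sensitive actions, and an audit trail for every decision:

```python
# Hypothetical sketch: an AI agent governed like an employee.
# Least-privilege allowlist + approvals for sensitive actions + audit log.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    allowed_actions: set            # explicit allowlist (least privilege)
    needs_approval: set             # sensitive actions gated on a human
    audit_log: list = field(default_factory=list)  # continuous record

    def authorize(self, action: str, approved: bool = False) -> bool:
        """Decide whether the agent may perform `action`, logging the outcome."""
        if action not in self.allowed_actions:
            self.audit_log.append(("denied", action))
            return False
        if action in self.needs_approval and not approved:
            self.audit_log.append(("pending_approval", action))
            return False
        self.audit_log.append(("allowed", action))
        return True

# Example scoping: the agent may read the CRM and draft emails freely,
# but sending email requires explicit human sign-off.
policy = AgentPolicy(
    allowed_actions={"read_crm", "draft_email", "send_email"},
    needs_approval={"send_email"},
)
```

The point of the sketch is the shape, not the code: actions the agent was never granted are denied outright, sensitive actions stall until approved, and every decision lands in a log an auditor can replay.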

📈 AI Venture Deals of the Week

  • Reflow, a workforce and workflow intelligence platform for enterprise operations, raised a $15M seed round.

  • Dono, an AI-powered property records platform that turns fragmented county records into usable ownership data, raised a $6.5M seed round.

  • Entire, an AI agent transparency and developer hub startup founded by ex-GitHub CEO Thomas Dohmke, raised a $60M seed round. 

  • OPAQUE, a confidential AI / secure data infrastructure startup, raised its $24M Series B. 

  • VulnCheck, an AI-accelerated vulnerability management startup, raised its $25M Series B. 

  • Bedrock Robotics, an autonomous construction systems company (heavy equipment autonomy), raised its $270M Series B.

  • Olix, a photonic AI chip startup focused on inference, raised its $220M Series A round. 

  • Runway, a generative video / “world model” AI company, raised its $315M Series E.

  • Anthropic, a foundation-model company (Claude), raised its $30B Series G round. 

⚒️ New AI Product Launches You Don’t Want to Miss 

  • ByteDance released Seedance 2.0, a next-generation multimodal audio+video generation model (text/image/audio/video inputs) designed for “cinematic” creation and editing workflows. 

  • Alibaba released Qwen3.5, a new flagship model positioned for the agentic AI era with upgraded language capability and lower cost claims. 

  • Zhipu’s GLM-5 launched as an open-source model line emphasizing lower hallucination and enterprise deployability. 

  • OpenAI released Frontier, an enterprise agent platform for building/deploying/managing “AI coworkers” with governance, permissions, and org-level context. 

  • Sarvam AI released Sarvam Edge, an on-device (offline) model suite for speech recognition, translation, and text-to-speech (TTS), targeted at Indian languages and low-connectivity environments.

Upcoming Events 📅

Interested in meeting the Who’s Who of AI?

The AI Hot 100 Summit is back for its third edition on May 6-7, 2026 in New York City. The only AI Summit built for real connections, where AI Visionaries meet Industry Leaders. Join over 500 AI Executive Leaders, Founders and Investors for two jam-packed days of enterprise AI lightning talks, innovation showcases, and curated networking.

Get ready to meet and greet speakers and attendees from OpenAI, NVIDIA, Haleon plc, Legora, Anthropic, Cursor, ElevenLabs, Adobe, Walmart, Insight Partners & more.

🗓️ May 6-7, 2026

📍New York City

📢 Want to partner? Reach out to get your brand in front of 25k+ AI executives, entrepreneurs, researchers, investors, and AI leaders.