Introduction
Artificial Intelligence is no longer experimental. It’s embedded in our digital products—powering recommendations, automating decisions, and enabling entirely new user experiences. As AI systems scale, so does their impact. That’s why the European Union introduced the EU AI Act—the first legal framework specifically designed to govern the risks of artificial intelligence.
For product and innovation teams, this isn’t just a compliance issue—it’s strategic. The decisions you make today around AI design, deployment, and monitoring will determine not just your legal exposure, but your ability to build scalable, ethical, and trusted digital products.
1. OpenAI Launches “Instant Checkout” — Agentic Shopping via ChatGPT
OpenAI has enabled users (initially in the U.S.) to complete purchases directly in ChatGPT using a new feature called Instant Checkout or “agentic shopping.”
It relies on an Agentic Commerce Protocol co-developed with Stripe, enabling AI agents, merchants, and users to coordinate the transaction within chat.
To start, support is limited to single-item purchases from U.S. Etsy sellers; multi-item carts and Shopify integration are slated for future rollout.
Importantly, product discovery remains neutral (i.e. not ad-based) — ChatGPT surfaces relevant items based on user requests, then provides a “Buy” button to complete purchase and handle shipping/payment.
Merchants retain control over fulfillment, returns, and post-purchase experience.
This marks a shift: ChatGPT is now making a direct play into e-commerce, beyond just recommendations.
What to watch: merchant adoption, user trust (privacy, security), geographic expansion, and overall conversion friction in real usage.
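To make the agentic flow above concrete, here is a minimal sketch of the kind of order object an AI agent might assemble and hand off to a merchant. This is an illustration of the idea, not the actual Agentic Commerce Protocol schema: every field name here is an assumption.

```python
from dataclasses import dataclass

# Hypothetical sketch of an agentic-checkout order, loosely modeled on the
# flow described above. Field names are illustrative assumptions, NOT the
# real OpenAI/Stripe Agentic Commerce Protocol schema.

@dataclass
class CheckoutRequest:
    merchant_id: str        # e.g. an Etsy seller's identifier
    item_id: str            # single-item purchases only at launch
    quantity: int
    shipping_address: dict  # shipping handled at the "Buy" step
    payment_token: str      # tokenized payment, handled by the payment provider

def build_checkout(merchant_id: str, item_id: str,
                   address: dict, token: str) -> CheckoutRequest:
    """Assemble the order an agent would hand off to the merchant."""
    return CheckoutRequest(
        merchant_id=merchant_id,
        item_id=item_id,
        quantity=1,  # multi-item carts are slated for a later rollout
        shipping_address=address,
        payment_token=token,
    )

order = build_checkout("etsy-seller-123", "sku-42",
                       {"city": "Austin", "country": "US"}, "tok_abc")
print(order.quantity)  # 1
```

The key design point the protocol addresses is the three-way handshake: the agent assembles intent, the payment provider tokenizes payment, and the merchant keeps ownership of fulfillment.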
2. Sora 2 + Sora App: Text-to-Video Meets Social AI
On September 30, OpenAI launched Sora 2, its next-generation text-to-video engine, together with a social app called Sora.
Highlights:
Synchronized audio & realism: Sora 2 supports dialogue, sound effects, ambient audio, and better physics (e.g. realistic motion, object interaction) than earlier systems.
Identity / “cameos”: Users can record a short video + audio sample to embed their own likeness (or friends’) into generated scenes — a “cameo” system.
Social feed model: The Sora app mimics TikTok-style video browsing: vertical feed, remixing, discovery of AI-generated clips.
Rollout & constraints: Initially available by invite only on iOS in the U.S. and Canada. OpenAI has applied content safeguards, moderation, and watermarking to help signal synthetic origin.
Copyright opt-out: The system may draw from copyrighted materials unless rights holders opt out of inclusion in generated feeds.
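The safeguards listed above (watermarking, cameo consent) imply some per-clip provenance record. Here is a hypothetical sketch of what such a record and a feed-eligibility check might look like; the fields and rule are assumptions for illustration, not Sora's actual moderation logic.

```python
from dataclasses import dataclass

# Hypothetical provenance record for an AI-generated clip. These fields are
# illustrative assumptions, not Sora's real schema or policy.

@dataclass
class SyntheticClip:
    clip_id: str
    generator: str        # e.g. "sora-2"
    watermarked: bool     # synthetic-origin watermark applied at render time
    cameo_consent: bool   # likeness used only with the subject's recorded consent

def is_feed_eligible(clip: SyntheticClip) -> bool:
    """A feed might only surface clips that carry both safeguards."""
    return clip.watermarked and clip.cameo_consent

clip = SyntheticClip("c1", "sora-2", watermarked=True, cameo_consent=False)
print(is_feed_eligible(clip))  # False
```

Even a toy check like this shows why the deepfake debate centres on enforcement: the safeguard only works if consent and watermark bits are set truthfully at generation time.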
Risks & debates:
Within hours of release, invite codes began reselling (on eBay and other marketplaces) at steep markups — prompting concerns about speculation, unfair access, and violation of terms.
Deepfake risk & ethical concerns: Videos surfaced using recognisable people (e.g. public figures) in absurd or misleading contexts, raising backlash around impersonation, misinformation, and copyright infringement.
Some observers describe Sora’s open video feed as a “deepfake factory,” warning that the blending of AI video + social dynamics could worsen disinformation.
Implications: Sora moves OpenAI deeper into media ecosystems. The success of its social model, safety architecture, and public trust will determine whether it becomes a mainstream platform or a controversial experiment.
3. California Enacts SB 53: AI Transparency Mandated
On September 29, California Governor Gavin Newsom signed SB 53, also dubbed the Transparency in Frontier AI Act, which requires major AI developers to publicly disclose safety protocols and mitigation plans for advanced systems.
Key provisions:
Companies with $500M+ in revenue must report on how they manage catastrophic / runaway AI risks.
Violations can carry fines up to $1 million.
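The two numbers in the provisions above can be restated as a toy check. This is a simplification for illustration only (the statute's actual applicability tests are more involved), and the at-or-above threshold reading is an assumption.

```python
# Toy restatement of the SB 53 reporting trigger described above: a revenue
# threshold determines who must disclose safety protocols, with violations
# carrying fines up to $1M. Illustrative only, not legal logic.

REVENUE_THRESHOLD = 500_000_000  # $500M+ annual revenue (threshold reading assumed)
MAX_FINE = 1_000_000             # up to $1M per violation

def must_report(annual_revenue: int) -> bool:
    """Does a developer fall under the disclosure requirement?"""
    return annual_revenue >= REVENUE_THRESHOLD

print(must_report(750_000_000))  # True
print(must_report(10_000_000))   # False
```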
What this means:
AI firms must now treat transparency and auditability as compliance obligations, not just ethical goals. California's move may pressure other states, or the federal government, to adopt similar frameworks, potentially fragmenting the regulatory landscape.
4. Trump Officials Push Back on CHAI (Industry-Led AI Governance)
CHAI (the Coalition for Health AI), backed by major players like OpenAI and Microsoft, had proposed voluntary frameworks for health AI systems.
Trump officials questioned whether CHAI would function as a de facto regulatory gatekeeper, favouring incumbents.
The debate underscores the tension between voluntary governance by the industry vs public regulation — especially in sensitive domains like healthcare.
This struggle is emblematic of larger battles over who will control AI oversight: the market, the state, or hybrid efforts — and how much influence stakeholders have in shaping safety norms.
5. Perplexity Acquires Visual Electric to Compete in AI Video
In a move that signals rising competition in AI video, Perplexity AI announced it has acquired Visual Electric, a specialist video generation and media platform, to challenge offerings like Sora.
The acquisition is strategic: it bolsters Perplexity’s capacity to produce higher-fidelity AI video content and integrates media generation into its broader AI stack.
This suggests that firms beyond OpenAI see video generation and media verticals as battlegrounds for dominance in AI.
Already, Google and other platforms are exploring video-driven AI (e.g. Google's Veo), so Perplexity's move could fuel an arms race.
Watch closely for how integrations unfold (video + chat + search) and how data and moderation systems scale as more players enter the space.
6. Global AI Governance & UN Dialogue Moves Forward
While less headline-grabbing, a significant development this week was the formal launch of the Global Dialogue on AI Governance, endorsed by UN member states.
This initiative is meant to develop multi-stakeholder norms, principles, and reference frameworks for AI governance globally.
It grows from the Global Digital Compact and other UN efforts to build consensus around digital public goods, AI safety norms, and cross-border alignment.
In parallel, UN officials have warned that relying too much on algorithmic systems without oversight invites systemic risk.
Though slower, these governance efforts may become the scaffolding upon which regulation, standards, and international AI treaties are built.
Conclusion
This week’s developments signal that we are entering a new phase in AI:
Action over suggestion: With agentic shopping, OpenAI is pushing ChatGPT to act on behalf of users, not just suggest.
Media & identity at the frontier: Sora 2 + the Sora app test the fusion of AI generation, social media, and identity embedding — a potent but delicate cocktail.
Governance urgency: California’s SB 53 and backlash against CHAI show that regulation and oversight are no longer peripheral debates — they’re front and centre.
Competition intensifies: Perplexity’s push into AI video, along with UN governance moves, shows that the ecosystem is diversifying rapidly.
If you found this roundup insightful, subscribe for next week’s edition, share with peers tracking AI trends, and leave a comment with a story you’d like me to cover more deeply. See you in the next AI Weekly News Highlights.