OpenAI is acquiring Promptfoo to make security testing and evaluation a default part of building “AI coworkers” in OpenAI Frontier. The signal here is operational: prompt-injection, jailbreaks, data leaks, and tool misuse are moving from “edge cases” to baseline controls enterprises will expect before agents touch real systems.
McKinsey’s thesis is that real estate is crossing from “AI pilots” (summaries, drafting, lookup) into agentic automation that can run end-to-end workflows with humans supervising. The useful operating-model takeaway: the ROI shows up only when leaders redesign the whole workflow (ownership, controls, handoffs), not when they bolt AI onto one task.
Brookings lays out the platform conflict as model providers move “up the stack” into applications and workflows, putting them in direct competition with the developers and companies that depend on their APIs. The practical risk for builders is dependency: pricing, access, and product “feature parity” can change when your supplier also wants to own the customer relationship.
DeepMind uses AlphaGo’s 10-year mark to explain how search + reinforcement learning evolved into a set of reusable “problem-solving primitives” that now show up in science work (e.g., protein folding) and formal reasoning systems. The point isn’t nostalgia—it’s a clean explanation of how one breakthrough architecture can seed multiple capability lines over time.
Brookings argues most national AI strategies over-invest in generic “capacity building” (compute, models) and under-invest in the harder part: embedding AI into real sectors where value is created. Their fix is “cognitive infrastructure”—data, institutions, talent, and domain expertise—aligned to what a country already does well so AI adoption becomes compounding rather than symbolic.
#policy #sovereignty #infrastructure
Going Deeper
Optional reads for those who want more. (Some may be behind a paywall.)