In the last eighteen months, the largest companies in the world quietly accepted a fact that had been visible for some time: in an age of foundation models, proprietary data is the only durable moat left. Compute can be rented. Talent can be hired. Algorithms diffuse within a quarter of any frontier paper. The one thing that cannot be recreated is a long‑running, well‑structured, real‑world dataset that nobody else has, and the legal right to train on it.
That single realisation has reshaped how the AI industry buys. Reddit's deal with Google, reported at $60 million a year for access to its content firehose, was the first to make the new economics public. OpenAI followed with its own Reddit arrangement, then struck a multi‑year licence with News Corp, and after that with Axel Springer, the FT, and a string of other publishers. Bloomberg, the most expensively guarded dataset in finance, has spent two years negotiating with the model labs over what its terminal data is worth in a world where training buyers value it very differently from its subscribers. The same conversations are happening, privately, with hospital networks, banks, telcos, law firms, manufacturing groups, and sovereigns.
These are unfamiliar deals for the people doing them. They look a little like licensing, a little like M&A, a little like a joint venture, and a little like something that does not yet have a name. They sit at the intersection of three different professional communities — finance, technology, and law — and they fall between all three.
The deal nobody is set up to do
Consider what a chief executive of a large hospital network in India, or a consumer bank in Brazil, or a logistics group in the Gulf, is actually facing when an AI buyer approaches them.
Their banker, who has handled their last three financings and knows their balance sheet better than they do, has never priced a data deal in their life. They will pull a comparable‑transactions deck and discover there are no comparables yet, or that the few that exist, the headline announcements above, reveal almost nothing about how the deals were structured.
Their technology advisor, often a Big Four partner, can write a beautiful deck about generative AI but has never actually built a model, has never trained one on a noisy real‑world dataset, and cannot tell them whether their data is in fact valuable for training, valuable for fine‑tuning, valuable for evaluation, or valuable for nothing at all. The three are very different markets at very different prices.
Their general counsel knows the privacy regime of their home jurisdiction, but the buyer is in another, the data may need to leave a third, and the regulatory frame is moving in real time. India's DPDP Act came into force a year ago. The EU AI Act is still being interpreted in practice. The UK has chosen a sector‑specific path. The FDA has its own AI framework for medical applications. The cross‑border legality of a single dataset can flip from yes to no on the publication of a new statutory instrument.
Each of the three professionals understands one face of the deal. None of them understands the deal. The CEO is left to do the translation themselves, and most of the time they do not, and the deal does not happen.
Three asymmetries
The deals that do close, and the ones that close on extraordinary terms, share a common shape. They are written at points where the market has not yet caught up to a structural reality. There are three asymmetries that recur, and a practitioner who understands all three is doing very different work from one who understands any one of them in isolation.
The first is the asymmetry of data. The buyer often does not know what they need. The seller often does not know what they have. A hospital network may believe its imaging archive is the prize, when in fact it is the longitudinal coupling between admissions, prescriptions and outcomes that is genuinely scarce. A bank may believe its transaction data is the asset, when in fact what is unique is its decade of credit decisions, including the rejected ones. The right adviser quietly inverts what the seller is selling, because what is rare is rarely what is offered.
The second is the asymmetry of AI. There is a wide gap, today, between what the model labs say they can do and what their products actually deliver in production. Inside that gap sits the question of whether a particular dataset is worth $5 million, $50 million, or nothing. The answer requires someone who has run a model in production against a real customer pipeline and knows where the failure modes live. It is not knowable from a distance.
The third is the asymmetry of regulation. Every new regulatory frame creates, for a period, a window in which a deal is legal in one jurisdiction but not another, or for one purpose but not another, or before a date but not after. These windows are not loopholes; they are legitimate consequences of regulators moving at different speeds and writing different rules. They are also, for a small number of well‑structured transactions, a source of value that disappears when the rules harmonise. The adviser who reads the gazette as carefully as the term sheet is worth a different fee than one who does not.
Each asymmetry is small. Their product is large. A deal written at the intersection of all three — a data asset whose true value the seller does not know, deployed against an AI capability whose limits the buyer does not understand, in a regulatory moment that will not last — is the deal that creates a decade of value for both sides, and is the deal that almost nobody is positioned to write.
What the right practice looks like
A new category of advisory has to exist for these deals. It is not banking, because the structures are not securities and the timelines are not quarters. It is not consulting, because the product is a transaction, not a slide deck. It is not law, because the judgement that matters is commercial and only secondarily legal. It needs its own seat.
The practice that does this work has, in our view, four characteristics. It is led by an operator who has actually built in AI, because the difference between a $5 million dataset and a $50 million one is invisible to anyone who has not. It is global by reach, because the asymmetries above are international by definition. It is anchored in deep institutional relationships, not transactional networks, because the people on both sides of these deals are already at the top of their organisations and only meet through introductions they trust. And it is paid in a way that survives long iteration cycles — modest retainers, success‑contingent economics, and where appropriate equity or co‑investment alongside — because these deals do not close in ninety days.
The economics of the practice have to match the rhythm of the work, or the work does not get done well.
Why now
Three forces have converged in the last eighteen months that did not converge before.
AI has crossed the threshold where proprietary data, rather than model architecture, is the binding constraint on capability. The buyers know this. The sellers are starting to. The price discovery is happening this year and next.
Capital pools globally have decided that AI is the dominant thematic for the decade. Sovereign wealth funds, family offices and corporate venture arms are looking for ways into the category that are not "buy more Nvidia." Strategic data deals, AI joint ventures, and operating partnerships with model companies are the second wave of that capital flow, and they require an intermediary who can speak the language of both sides.
Regulation is finally moving from principle to instrument. The EU AI Act's implementing rules, the FDA's emerging framework for AI in medical devices, India's DPDP, the UK's sector approach, and the patchwork emerging in the United States are creating, for the first time, a real surface of cross‑jurisdictional difference that can be analysed, priced, and structured around. That surface will not last more than a few years. The practitioners who read it correctly today will write the most valuable deals of the period.
Markets create new categories of adviser when they create new kinds of deal. This is one of those moments.
Louisa
Louisa AI was founded to be the relationship engine for senior professionals operating in exactly this kind of market — people whose work is bound, more than they care to admit, by which conversations they can have, and how quickly. The product sits inside firms today helping them find the right relationships at the right time.
Louisa AI Advisory is the same engine, used on ourselves. We operate at the intersection described above, with the global relationship network of senior practitioners that has accumulated over twenty years of dealmaking, and we structure long‑term partnerships with a small number of clients to find and close the deals that sit at these asymmetries. We are not a bank. We are not a consultancy. We are not a law firm. We are something else, made necessary by the moment.
If the picture above describes the deal you are thinking about, or the seat you wish someone occupied for you, we should talk.
