LLMs as Infrastructure: The C-Suite Playbook for LLM Implementation

George Salapa

--

The wait calculation is 0.

There are times in history when it pays to wait. Apple invested enormous resources and talent in Siri only to see it fully eclipsed by ChatGPT, leaving Apple no choice but to let OpenAI’s flagship product take the wheel of its most treasured product: the iPhone. Their wait calculation was probably pretty steep. This kind of hindsight wait calculation is easy (and fun) to do; looking into the future, it is almost impossible.

While previous technological revolutions gave us years to adapt, it is not clear that AI will do the same. It seems to be rewriting the rules of progress in real time. That frontier Large Language Models (LLMs) with a good enough Retrieval-Augmented Generation (RAG*) layer can replace junior lawyers is now old news. We are moving very fast towards agentic systems, in which LLMs perform complete workflows and could replace large parts of the service economy, and towards reasoning models (OpenAI’s o1) that outperform PhD candidates in a fraction of the time.

When a technology proves it can match years of human expertise in months, the wait calculation isn’t just zero — it’s potentially negative.

Amidst this storm of innovation, surrounded (how could it be otherwise) by immense hype, business models are shaping up. It is clear that wrappers (applications using LLMs as infrastructure, e.g. Perplexity and Cursor, among many others) have significant value and growth potential: product design matters, and user switching costs matter.

Meanwhile, frontier LLM providers are releasing cookbooks and demo applications, demonstrating their eagerness to offer their models as APIs, their most profitable business line, for developers and startups to connect to. Model as utility is here to stay.

But among all this rapid evolution, where does a non-AI company stand?

Build, experiment! The wait calculation is 0. Companies should build their own architecture on top of LLMs (their own applications), transforming static internal knowledge into fluid, instantly accessible intelligence. Such applications empower employees and lift their cognitive load through AI assistants that first master company knowledge, then evolve, as functions and connections are added, to perform tasks autonomously and accelerate routines.

At a minimum, and I would argue undeniably, custom LLM applications have the potential to: a) transform stale knowledge written in documents and stored in databases into live, fluid, active intelligence ‘talking’ to its users (employees); b) transform routine but cognitively demanding tasks (synthesizing operational reports from dozens of regional offices, identifying performance anomalies, and preparing executive summaries; tasks that middle management repeats quarterly but must approach fresh each time with substantial effort) into fully automated ‘self-thinking’ workflows; c) remove cognitive friction in data processing wherever information is exchanged between employees internally and between the enterprise and the outside world.
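Point b) is the kind of thing a thin orchestration layer can do. A minimal sketch, where `llm` is a placeholder for whatever model call you wire in (not any vendor’s actual SDK), and the prompts are illustrative:

```python
# Minimal sketch of a 'self-thinking' reporting workflow: each step feeds
# a prompt to an LLM call. `llm` is any callable taking a prompt string
# and returning the model's text; swap in your provider's client here.

def quarterly_report(llm, regional_reports: list[str]) -> str:
    """Synthesize regional reports, flag anomalies, draft a summary."""
    # Step 1: merge dozens of regional reports into one synthesis.
    synthesis = llm("Synthesize these reports:\n" + "\n---\n".join(regional_reports))
    # Step 2: ask the model to identify performance anomalies.
    anomalies = llm("List performance anomalies in:\n" + synthesis)
    # Step 3: produce the executive summary from both intermediate results.
    return llm(f"Write an executive summary.\nFindings:\n{synthesis}\nAnomalies:\n{anomalies}")
```

The design choice worth noting is that each step is an ordinary function call, so the workflow can be tested with a stub in place of the model and extended with new steps without touching the rest.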

When companies build their own solutions, they retain control of their code, their custom prompts and proprietary knowledge bases — essentially creating unique competitive advantages that can’t be replicated by competitors using off-the-shelf solutions.

Yes, there is likely a startup offering a solution for every use case today. But consider this: building your own API-first architecture gives you full control over your AI infrastructure and intellectual property. It means your people will actually understand AI, not just use retail product X. It means deep integration with your existing systems, customization that precisely matches your business needs and, importantly, independence from third-party roadmaps. You control your data security, optimize costs without middleman markups, and keep the flexibility to switch providers. As frontier models improve at a rapid pace and their usage costs continue to fall, the value of your architecture grows: you can literally change one line of code, the connection to the API endpoint, and swap one model for another (the new generation, perhaps).
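The one-line-swap point can be made concrete. In a minimal sketch (the model names and system prompt below are illustrative placeholders, not real vendor values), the model identifier lives in a single place, so upgrading to a new generation touches exactly one line:

```python
# Minimal sketch of an API-first request builder. Only the payload is
# assembled here; sending it to a provider's endpoint is left out.
# "frontier-model-v1" and the prompts are placeholders, not real values.

MODEL = "frontier-model-v1"  # the single line you change to swap models

def build_request(prompt: str, model: str = MODEL) -> dict:
    """Assemble the JSON payload for a chat-style completion endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are our internal assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# Swapping models changes one field of the request; prompts, retrieval,
# and integrations (the architecture you own) are untouched.
old = build_request("Summarise Q3 results.")
new = build_request("Summarise Q3 results.", model="frontier-model-v2")
```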

It is easy. Start doing things.

Hire LLM developers (**) who can help you accelerate, and start building custom solutions inside your enterprise, deeply integrated with your systems. Your architecture should be as sophisticated as your brand. Unleash internal innovation by encouraging people to experiment with AI. Bottom-up discovery matters, and it matters a lot. The AI transformation needs both technical rigor and human intuition.

This is where LLMs reveal their most fascinating quality. Remember, LLMs are weird. Unlike previous technologies that followed predictable implementation paths, LLMs thrive or falter based on the unique intuition of their users. Some of your most unexpected wins will come from generalists who discover ways to amplify their natural abilities, those who seem to have an almost supernatural capacity to find novel applications that make them truly superhuman at their jobs. (Some of them won’t tell; they will feel as if it’s their secret superpower. Let them feel that; don’t spoil it!) Meanwhile, your specialists will find ways to automate what slows them down, finally free to focus on the highest-value aspects of their expertise. Let your people explore.

*a method of finding the relevant bits in vast text extracted from documents and injecting them into the LLM together with the question before it starts writing

**😇
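The retrieval step the first footnote describes can be sketched in a few lines. A minimal, assumption-laden version: here chunks are scored by plain word overlap with the question (real systems use vector embeddings and a vector database), and the sample documents are invented for illustration:

```python
# Minimal sketch of RAG: find the document chunks most relevant to the
# question and inject them into the prompt ahead of the question itself.
# Word-overlap scoring stands in for the embedding search used in practice.

def retrieve(chunks: list[str], question: str, k: int = 2) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(chunks: list[str], question: str) -> str:
    """Prepend the retrieved context so the LLM answers from your documents."""
    context = "\n".join(retrieve(chunks, question))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "Employees accrue 25 days of paid leave per year.",
    "The cafeteria is open from 8am to 3pm.",
    "Unused leave days expire at the end of March.",
]
prompt = build_prompt(docs, "How many paid leave days do employees get?")
```

Only the leave-policy chunks end up in the prompt; the irrelevant cafeteria chunk is filtered out before the model ever sees the question.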

--

Written by George Salapa

Thoughts on technology, coding, money & culture. Wrote for Forbes and Venturebeat before.