I have spent twenty years building and running businesses. Property development. Tourism operations. Advisory boards. Consulting. In every one of those businesses, the same pattern repeated: the software we bought did 70% of what we needed, and the remaining 30% -- the part that actually differentiated us -- we either built custom or lived without.
When serverless computing arrived, it felt like the answer. Write a function. Deploy it. Scale automatically. No servers to manage. No infrastructure to maintain. The promise was compelling: focus on logic, not plumbing.
Fifteen years later, I can tell you what the promise got wrong. Not the compute model. Serverless compute works brilliantly. What it got wrong was the assumption that compute is the hard part.
It is not. Context is the hard part.
The Promise and the Reality
Serverless architecture promised to simplify. And it did simplify one thing: infrastructure management. You no longer provision servers, manage capacity, or worry about scaling. That is real and valuable.
But serverless also decomposed your application into isolated functions. Each function runs independently. Each function starts from zero. And each function needs to answer the same questions before it can do anything useful:
- Who is calling? Authentication. Validate the token, decode the claims, confirm the identity.
- What can they access? Authorisation. Check their role, team membership, and permission level.
- What organisation are they in? Multi-tenancy. Ensure they only see their own data.
- What data do they need? Data access. Connect to the database, query the right tables, handle connection pooling.
- What business rules apply? Domain logic. Apply pricing tiers, workflow states, approval chains, compliance rules.
Every function answers these questions. Every function rebuilds this context. And across an application with 50, 100, or 500 serverless functions, teams write the same authentication check, the same permission lookup, the same organisation scoping logic hundreds of times.
This is the business context gap: the space between what serverless provides (compute) and what business applications need (identity, permissions, organisational structure, and data access).
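The shape of the gap is easiest to see in code. Below is a minimal sketch of the plumbing every function repeats before its business logic runs; the names (RequestContext, buildContext) and the in-memory session store are illustrative stand-ins, not a real auth provider.

```typescript
// The context every serverless function must assemble before doing real work.
type RequestContext = {
  userId: string; // who is calling (identity)
  role: string;   // what they can do (permissions)
  orgId: string;  // which organisation they belong to (multi-tenancy)
};

// Fake session store standing in for an auth provider and user database.
const sessions: Record<string, RequestContext> = {
  "token-abc": { userId: "user_1", role: "member", orgId: "org_42" },
};

// Every function repeats some version of this before its business logic.
function buildContext(authHeader: string | undefined): RequestContext {
  if (!authHeader?.startsWith("Bearer ")) throw new Error("missing token");
  const ctx = sessions[authHeader.slice("Bearer ".length)];
  if (!ctx) throw new Error("invalid token");
  return ctx;
}

function handler(authHeader: string | undefined): string {
  const ctx = buildContext(authHeader); // ~20 lines in a real codebase
  return `report for ${ctx.orgId}`;     // the few lines that actually matter
}
```

In a real function the buildContext step expands to token validation, permission lookups, and connection management; the handler body stays roughly this size.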
How the Gap Grows
It starts small. Your first serverless function validates a JWT, queries a database, and returns a response. Thirty lines of code. Ten of them are business logic. Twenty are context plumbing.
Your tenth function does the same. But now you have a shared auth utility. A database connection helper. A permission checker. You have started building an internal framework to provide the context that your platform does not.
Your fiftieth function reveals the pattern in full. The auth utility has grown to handle refresh tokens, organisation switching, and role hierarchies. The database helper manages connection pooling, retry logic, and timeout handling. The permission checker handles team-based access, guest permissions, and resource-level authorisation.
You have built a platform inside your platform. Except it is not a platform. It is a collection of utilities maintained by your team, tested by your team, and debugged by your team at 2am when the permission checker fails on an edge case nobody anticipated.
Werner Vogels, CTO of Amazon, said that serverless lets teams focus on business logic rather than infrastructure. He was right about the aspiration. But the context gap means teams replaced infrastructure management with context management. The complexity did not disappear. It moved.
The Anatomy of the Context Gap
The gap has four layers. Each one adds code, latency, and failure surface to every function in your application.
Layer 1: Identity
Every function needs to know who is calling. In a serverless architecture, this means:
- Extract the authorization header from the request
- Validate the JWT signature against the auth provider's public keys
- Decode the token claims (user ID, email, organisation, role)
- Handle expired tokens, malformed tokens, and missing tokens
- Optionally fetch additional user metadata from the auth provider's API
This is typically 20-40 lines of code per function, abstracted into a shared utility. But the utility must handle every auth provider's quirks. Clerk tokens look different from Auth0 tokens. Firebase tokens have different claim structures. Session tokens behave differently from API keys.
When teams adopt serverless, they pick an auth provider and write the integration once. Then they maintain it forever. Every auth provider API change, every token format update, every new edge case flows back to this utility.
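A stripped-down sketch of the claim-decoding step looks like this, assuming a Node runtime. Signature verification against the provider's public keys is deliberately omitted here; in production that step is mandatory and provider-specific, which is exactly where the per-provider quirks live.

```typescript
// Minimal JWT claim decoding: split, base64url-decode the payload, check expiry.
type Claims = { sub: string; org?: string; exp: number };

function decodeClaims(jwt: string, nowSeconds: number): Claims {
  const parts = jwt.split(".");
  if (parts.length !== 3) throw new Error("malformed token");
  const payload = Buffer.from(parts[1], "base64url").toString("utf8");
  const claims = JSON.parse(payload) as Claims;
  if (claims.exp <= nowSeconds) throw new Error("expired token");
  return claims;
}

// Helper to fabricate an unsigned test token (illustration only -- never
// accept unsigned tokens in production).
function fakeJwt(claims: Claims): string {
  const enc = (o: object) => Buffer.from(JSON.stringify(o)).toString("base64url");
  return `${enc({ alg: "none" })}.${enc(claims)}.sig`;
}
```

Even this toy version must handle malformed and expired tokens; the real utility also handles signature rotation, clock skew, and each provider's claim layout.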
Layer 2: Permissions
Knowing who is calling is not enough. You need to know what they can do. In most organisations, permissions are not simple "admin or not" checks. They involve:
- Role-based access: What is the user's role? Administrator, member, viewer, guest?
- Team-based scoping: Which team do they belong to? Can they see this team's data?
- Resource-level permissions: Do they own this document? Were they assigned to this task?
- Organisation hierarchy: Is this a parent organisation viewing child organisation data?
- Feature flags: Does their subscription tier include this capability?
Each function must evaluate the relevant permission checks before executing any business logic. The permission model that starts as a simple role check in month one becomes a multi-dimensional access control system by month twelve.
And here is the insidious part: permission bugs are security vulnerabilities. A function that skips a permission check does not just produce a bad user experience. It exposes data. According to OWASP's Top 10, broken access control has been the number one web application security risk since 2021. The context gap is a security gap.
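A sketch of the multi-dimensional check described above, combining role, team scoping, and resource ownership. The model and names are illustrative; a real system would also fold in organisation hierarchy and feature flags.

```typescript
type User = { id: string; role: "admin" | "member" | "viewer"; teamIds: string[] };
type Task = { id: string; teamId: string; ownerId: string };

function canUpdateTask(user: User, task: Task): boolean {
  if (user.role === "admin") return true;                // role-based access
  if (user.role === "viewer") return false;              // read-only role
  if (!user.teamIds.includes(task.teamId)) return false; // team-based scoping
  return task.ownerId === user.id;                       // resource-level ownership
}
```

Note that the safe default is deny: every dimension must pass before the function may act, and any function that forgets to call this check silently grants access.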
Layer 3: Organisation Context
Business applications are multi-tenant. Every query, every mutation, every response must be scoped to the correct organisation. In a serverless function, this means:
- Determine the current organisation from the token or request context
- Ensure every database query includes an organisation filter
- Prevent cross-organisation data leakage through parameter manipulation
- Handle users who belong to multiple organisations
- Support organisation hierarchies (parent/child, franchises, partner networks)
Forget the organisation filter on a single database query, and one customer sees another customer's data. This is not a bug. It is a breach. And in a serverless architecture with hundreds of functions, each one independently implementing organisation scoping, the surface area for this error is vast.
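One mitigation is to make the scoping structural rather than remembered: route every query through a wrapper that injects the organisation filter, so no call site can omit it. A toy sketch, with an in-memory table standing in for the database:

```typescript
type Row = { id: string; orgId: string; title: string };

const rows: Row[] = [
  { id: "1", orgId: "org_a", title: "Q3 goals" },
  { id: "2", orgId: "org_b", title: "Payroll" },
];

// Call sites can add any filter they like, but can never escape the tenant
// boundary: the orgId predicate is applied before theirs.
function scopedQuery(orgId: string, where: (r: Row) => boolean = () => true): Row[] {
  return rows.filter((r) => r.orgId === orgId && where(r));
}
```

Even a caller deliberately probing for another tenant's data gets an empty result, because the scope is enforced inside the wrapper rather than at each of the hundreds of call sites.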
Layer 4: Data Access
Serverless functions need data. But connecting to a database from a serverless function is not the same as connecting from a traditional server. The challenges:
- Connection exhaustion: Each function invocation may open a new database connection. At scale, hundreds of simultaneous invocations exhaust the database's connection limit. Teams deploy connection poolers (PgBouncer, RDS Proxy) to manage this, adding another piece of infrastructure that serverless was supposed to eliminate.
- Cold start + connection latency: A cold function must establish a new TLS connection to the database. This adds 50-200ms to the first request, on top of the cold start itself.
- Connection reuse: Functions that share a runtime may reuse connections, but this behaviour is platform-dependent and unreliable. Teams cannot count on it.
The data access layer that "just worked" in a monolithic application becomes a distributed systems problem in a serverless architecture. Every function is an independent client of your database, and your database was designed for tens of connections, not thousands.
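The standard mitigation for the reuse problem is to cache the connection in module scope so warm invocations of the same runtime share it and only cold starts pay the setup cost. A sketch, with a stub standing in for a real driver -- and remember the caveat above: whether the runtime survives between invocations is platform-dependent.

```typescript
type Connection = { id: number };

let cachedConn: Connection | null = null; // survives across warm invocations
let connectionsOpened = 0;

function connect(): Connection {
  connectionsOpened += 1; // in reality: DNS, TLS handshake, auth (50-200ms)
  return { id: connectionsOpened };
}

function getConnection(): Connection {
  if (cachedConn === null) cachedConn = connect(); // cold start only
  return cachedConn;
}
```

This pattern reduces connection churn but does not solve exhaustion: a thousand concurrent cold starts still open a thousand connections, which is why poolers like PgBouncer or RDS Proxy end up in the architecture anyway.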
The Multiplication Problem
Each layer of the context gap is manageable in isolation. The problem is multiplication.
A single serverless function with 20 lines of context code and 10 lines of business logic is fine. The ratio is not ideal, but the function is readable and maintainable.
One hundred serverless functions, each with its own flavour of context code, is a maintenance burden. Shared utilities help but introduce coupling. A bug in the auth utility affects every function. An update to the permission model requires testing every function. A database schema change ripples through every data access pattern.
Three hundred serverless functions -- which is not unusual for a mature product -- means the context layer is itself a large codebase. It has its own bugs, its own performance characteristics, and its own scaling concerns. Teams assign engineers to maintain the "platform layer" that exists only because the hosting platform does not provide context.
This is the multiplication problem: the context gap does not scale linearly with the number of functions. It scales with the complexity of the permission model, the number of organisation structures, and the frequency of changes to the data layer. As the business grows, the context layer grows faster than the business logic it serves.
What Platform-Provided Context Looks Like
The alternative to building context in every function is having the platform provide it.
Imagine a serverless function that starts not from zero, but from full context. When the function executes, it already knows:
- Who is calling: The user's identity, role, and organisation membership, validated and decoded before your code runs
- What they can access: Permissions evaluated against the platform's access control model, not your custom implementation
- What organisation they are in: Every data query automatically scoped to the correct tenant
- What data is available: A typed API for accessing organisational data -- tasks, documents, goals, team structure -- without writing database queries or managing connections
This is the Context API model. Instead of every function rebuilding the world, the platform provides the world and the function provides the logic.
The difference in code is stark. A function that checks whether a user can update a task goes from thirty-plus lines of auth validation, permission checking, organisation scoping, database querying, and error handling to five lines: receive context, apply business logic, return result.
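A sketch of what that five-line version might look like. The Context shape and the `can` call are hypothetical illustrations of the pattern, not a real platform API: the point is that identity, permissions, and tenant scope arrive already resolved.

```typescript
// Hypothetical platform-provided context, delivered before user code runs.
type Context = {
  user: { id: string; role: string };
  org: { id: string };
  can: (action: string, resourceId: string) => boolean; // evaluated by the platform
};

function updateTask(ctx: Context, taskId: string): { ok: boolean } {
  if (!ctx.can("task.update", taskId)) return { ok: false }; // platform-evaluated
  // ...apply the update via the platform's org-scoped data API...
  return { ok: true };
}
```

Everything that was plumbing in the earlier sketches is now an input: the function cannot forget the tenant filter or skip the permission check, because it never constructs either.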
The context gap disappears because the platform fills it. The function does what serverless always promised: it runs business logic. Nothing else.
Why This Matters Now More Than in 2020
Three trends are accelerating the context gap from annoyance to architectural constraint.
AI Agents Need Context
AI is moving from chatbots to agents that take action. An AI agent that schedules a meeting needs to know the user's calendar, team members, and preferences. An agent that updates a project status needs to know the project, the permission model, and the notification rules. An agent that generates a report needs access to goals, tasks, and data.
On a platform without provided context, every AI agent integration requires the same context plumbing as every serverless function, multiplied by the breadth of data the agent needs to access. Context engineering is not just about prompts. It is about giving AI systems access to the organisational knowledge they need to act usefully.
Low-Code and Non-Developer Builders
The line between developer and builder is disappearing. Finance managers build dashboards. Operations leads create automations. Marketing teams deploy landing pages. These builders cannot be expected to implement authentication middleware and permission checks. They need platforms where context is provided, not constructed.
This is the insight behind the shift from deployment platforms to business platforms. A deployment platform serves developers who can build context infrastructure. A business platform serves anyone who needs to build software that understands the organisation.
Function Count Is Exploding
Microservice and serverless adoption continues to grow. Datadog's State of Serverless report shows year-over-year increases in function count per organisation. More functions mean more context code, more shared utilities to maintain, and more surface area for permission bugs.
The context gap was tolerable at 10 functions. It was manageable at 50. At 300, it is the primary maintenance cost of the application. At 1,000, it is the reason teams consider replatforming.
The Strategic Implication
Here is what two decades of building and running businesses taught me about technology decisions: the cost you pay today is not the cost that matters. The cost that matters is the cost at 10x scale.
A team with 5 serverless functions can absorb the context gap. The shared auth utility takes a day to build. The permission checks are simple. The database connections are manageable.
A team with 500 serverless functions cannot absorb it. The context layer is now a platform-within-a-platform, maintained by expensive engineers who could be building features instead. Every new function requires integration with the context layer. Every permission change requires a sweep across hundreds of functions. Every security audit requires reviewing context implementations individually.
The strategic question is not "can we build context infrastructure?" Every engineering team can. The question is "should we?"
If your platform provides compute and nothing else, you will build context infrastructure. You have no choice. If your platform provides compute and context, your engineers build features. That is the difference between building on a deployment platform and building on a business platform.
Evaluating Your Own Context Gap
Audit your serverless functions. Count the lines of code in each function that handle:
- Authentication and token validation
- Permission and role checking
- Organisation scoping and multi-tenancy
- Database connection management
- User and team data retrieval
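The count above can be roughed out mechanically. A crude keyword-based line classifier like the sketch below will not be precise -- the patterns are illustrative and need tuning to your codebase -- but it is enough to surface the ratio.

```typescript
// Naive heuristic: flag a line as "context plumbing" if it mentions common
// auth, permission, tenancy, or connection keywords. Illustrative only.
const contextPatterns = [
  /token|jwt|auth/i,      // authentication and token validation
  /permission|role/i,     // permission and role checking
  /orgid|tenant/i,        // organisation scoping
  /pool|connection/i,     // database connection management
];

function countContextLines(source: string): { context: number; total: number } {
  const lines = source.split("\n").filter((l) => l.trim().length > 0);
  const context = lines.filter((l) =>
    contextPatterns.some((p) => p.test(l))
  ).length;
  return { context, total: lines.length };
}
```

Run it over a handful of representative functions; if the context count routinely exceeds half the total, the audit's conclusion below applies to you.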
If those lines exceed your business logic lines, the context gap is your dominant cost. If your shared utilities for auth, permissions, and data access are maintained by a dedicated engineer or team, the context gap is your dominant staffing cost.
Then ask: does this improve with scale? For most teams, it does not. Every new function adds to the context layer. Every new permission model increases the complexity. Every new organisation structure requires updates across the codebase.
The gap widens as you grow. Platform-provided context is how you close it.
The Path Forward
Serverless is not the problem. Serverless compute is one of the most significant advances in application infrastructure in the last two decades. The problem is that compute alone is not enough.
Business applications need context. They need to know who, what, where, and why before they can do anything useful. When the platform provides that context, functions are small, focused, and secure. When the platform does not, functions are bloated with plumbing, fragile from duplicated logic, and vulnerable to permission gaps.
The next generation of platforms recognises this. They provide compute and context as a unified layer. Your function receives an authenticated, authorised, organisation-scoped context object before your first line of code executes. The context gap closes. And your engineering team goes back to doing what serverless always promised: building business logic.
That is the architectural pattern worth investing in. Not more compute. More context.
Ready to close the context gap? WaymakerOS provides platform-level context to every app and Ambassador you build. Identity, permissions, organisation data, and AI -- available before your first line of code runs. Explore Host or learn how context engineering changes what platforms can do.
Related reading: See why your serverless functions have no business context, understand the difference between deployment platforms and business platforms, or explore how the year of custom apps is reshaping what teams build.
About the Author

Stuart Leo
Stuart Leo founded Waymaker to solve a problem he kept seeing: businesses losing critical knowledge as they grow. He wrote Resolute to help leaders navigate change, lead with purpose, and build indestructible organizations. When he's not building software, he's enjoying the sand, surf, and open spaces of Australia.