
Why Your Serverless Functions Still Have No Business Context

Every serverless function starts from zero. It knows nothing about your users or your organisation.

Insights · 9 min read

You deploy a serverless function. It receives a request. And it knows absolutely nothing.

Not who sent it. Not what team they belong to. Not what permissions they have. Not what data they should be allowed to see. Not what organisation they work for. Nothing.

Every serverless function on every major deployment platform starts from zero. No identity. No context. No awareness of the business it serves.

This is the gap nobody talks about at launch. And it is the gap that consumes weeks of engineering time the moment you try to build anything real.

The zero-context problem

Here is what actually happens when you deploy a serverless function on a typical hosting platform.

You write your business logic. Maybe it processes an order, generates a report, sends a notification, or transforms some data. The logic itself might take a few hours to build. Straightforward.

Then reality arrives.

Who is calling this function? You need authentication. So you integrate an auth provider — Clerk, Auth0, Firebase Auth, or a custom JWT solution. You write middleware to validate tokens per OWASP authentication guidelines, extract user IDs, and handle expired sessions. That is a day of work, minimum. More if you want it done properly.
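A minimal sketch of what that middleware involves, using only Node's built-in crypto to check an HS256-signed JWT. The function name and claim handling are illustrative, not any provider's API; real middleware also needs issuer and audience checks, clock-skew tolerance, and key rotation.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of the token-validation step only. Splits the JWT, recomputes
// the HS256 signature, compares in constant time, and rejects expired
// sessions. A production version layers issuer/audience checks on top.
function verifyToken(token: string, secret: string): { sub: string } | null {
  const [header, payload, signature] = token.split(".");
  if (!header || !payload || !signature) return null;

  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;

  const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (claims.exp !== undefined && claims.exp * 1000 < Date.now()) {
    return null; // expired session
  }
  return { sub: claims.sub };
}
```

Even this stripped-down version is twenty lines before a single line of business logic runs, and every function needs it.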

What are they allowed to do? Authentication tells you who someone is. Authorisation tells you what they can do. Now you need a permissions layer. Is this user an admin? A team member? A guest? Can they see this data or only their own? You build role checks, permission gates, and access control logic. Another day. Maybe two.
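The permission layer might start as small as this sketch. The role names and rules are assumptions for illustration, not a real library's API, but some version of this gate ends up copied into every function.

```typescript
// Illustrative role-based access gate. Roles and rules are assumed
// for the example; the point is that this logic must exist somewhere,
// and hand-writing it per function is where inconsistencies creep in.
type Role = "admin" | "member" | "guest";

interface User {
  id: string;
  role: Role;
  teamId: string;
}

// Can this user read a record owned by ownerTeamId?
function canRead(user: User, ownerTeamId: string): boolean {
  if (user.role === "admin") return true; // admins see everything
  if (user.role === "member") return user.teamId === ownerTeamId; // own team only
  return false; // guests see nothing
}
```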

What organisation do they belong to? Multi-tenancy. The function needs to scope every database query to the correct organisation. You add tenant isolation to every query, every response, every cache key. Miss one, and customer A sees customer B's data. This is not a feature — it is a liability.
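One common defence is to force every read and every cache key through a single scoping helper, so no ad-hoc query can forget the tenant filter. The shapes below are illustrative:

```typescript
// Tenant isolation funnelled through one chokepoint. If any query
// bypasses this helper, customer A can see customer B's data.
interface Row {
  orgId: string;
  name: string;
}

function scopedFilter(rows: Row[], orgId: string): Row[] {
  return rows.filter((r) => r.orgId === orgId);
}

// Cache keys must carry the tenant too, or cached responses leak
// across organisations.
function scopedCacheKey(orgId: string, key: string): string {
  return `${orgId}:${key}`;
}
```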

What data can they access? Now you connect to a database. You configure connection strings, manage connection pooling, handle retries, and write queries that respect the permission boundaries you just built. Another integration. Another set of environment variables. Another thing that can break at 2am.

You started with business logic that took a few hours. You ended with an infrastructure project that took two weeks.

And you have to do it again for the next function. And the next one. And every function after that.

The real cost nobody calculates

Teams rarely add up the total cost of the zero-context problem. They treat each integration as a one-time task and move on. But the cost compounds in ways that do not show up on any dashboard.

Integration time per function. Every new function needs the same auth middleware, the same permission checks, the same database connections. Even with shared libraries, wiring it together takes hours. Multiply that across 10, 20, 50 functions, and the integration work dwarfs the business logic.

Maintenance surface area. Each integration is a dependency. Auth providers update their SDKs. Database drivers release breaking changes. Token formats evolve. A single auth provider update can touch every function in your fleet. That is not a deploy — it is a coordinated migration.

Security risk per function. Every time a developer manually implements auth and permissions, they create an opportunity to get it wrong. One function that forgets to check tenant isolation. One endpoint that validates the token but not the role. One query that does not scope to the organisation. Security bugs do not come from malice. They come from repetition without automation.

Context blindness. Even after you solve auth, permissions, and data access, your functions still do not understand the business. They do not know what goals the organisation is pursuing, what projects are active, what tasks are assigned to which teams, or how the data they process relates to the wider operation. They execute logic in a vacuum.

This is what developers mean when they say serverless is "easy to start, hard to scale." The compute scales automatically. The context does not scale at all — because it was never there to begin with.

Why deployment platforms cannot solve this

The major deployment platforms — Vercel, AWS Lambda, Netlify, Cloudflare Workers — are excellent at what they do. They deploy code, scale compute, and handle infrastructure. They are deployment platforms.

But deployment is not the problem.

The problem is that a deployment platform has no concept of your organisation. It does not know your users, your teams, your permissions, your goals, or your data structures. It cannot, because that is not what it was built to do. It was built to run code, not to understand the business the code serves.

This creates a structural gap. The platform handles the easy part (compute) and leaves the hard part (context) entirely to you.

Consider what a function needs to answer a simple question like "show this user their team's active projects":

  1. Validate the auth token
  2. Extract the user identity
  3. Look up the user's organisation
  4. Look up the user's team membership
  5. Check the user's permissions for project data
  6. Query the database for projects scoped to that team
  7. Filter by active status
  8. Return the result

Steps 1 through 6 are context. Step 7 is business logic. Step 8 is a response. The ratio is roughly 80% context infrastructure to 20% actual work.
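The eight steps above can be sketched as a single handler. Every lookup here is a stand-in (in-memory maps play the role of the auth provider and database), but each one corresponds to an integration you build and maintain yourself on a plain deployment platform:

```typescript
interface Project {
  id: string;
  teamId: string;
  orgId: string;
  active: boolean;
}

// In-memory stand-ins for the auth provider and database.
const sessions: Record<string, string> = { "token-abc": "user-1" };
const userOrg: Record<string, string> = { "user-1": "org-1" };
const userTeam: Record<string, string> = { "user-1": "team-1" };
const canReadProjects: Record<string, boolean> = { "user-1": true };
const projects: Project[] = [
  { id: "p1", teamId: "team-1", orgId: "org-1", active: true },
  { id: "p2", teamId: "team-1", orgId: "org-1", active: false },
  { id: "p3", teamId: "team-2", orgId: "org-1", active: true },
];

function activeTeamProjects(token: string): Project[] {
  const userId = sessions[token]; // 1. validate token, 2. extract identity
  if (!userId) throw new Error("unauthenticated");
  const orgId = userOrg[userId]; // 3. organisation lookup
  const teamId = userTeam[userId]; // 4. team membership lookup
  if (!canReadProjects[userId]) throw new Error("forbidden"); // 5. permissions
  const scoped = projects.filter(
    (p) => p.orgId === orgId && p.teamId === teamId // 6. scoped query
  );
  return scoped.filter((p) => p.active); // 7. business logic; 8. the response
}
```

Only the final filter is the feature. Everything above it is context plumbing.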

On a deployment platform, you build all eight steps yourself. Every time. For every function.

The platform gives you a container to run code in. You supply everything that makes the code meaningful.

The context gap in AI-powered functions

The zero-context problem gets worse — significantly worse — when you add AI to the picture.

AI functions are increasingly common. Generate a summary. Draft a response. Analyse a dataset. Recommend an action. The business logic is a prompt and an API call. Simple enough.

But useful AI needs context. An AI that generates a project summary needs to know which project, who is asking, what they care about, and what other work is related. An AI that drafts a customer response needs to know the customer's history, the organisation's tone, and the relevant policies.

Without context, AI functions produce generic output. With context, they produce organisational intelligence.

This is the same principle behind context engineering — the quality of AI output is determined by the quality of the context it receives, not the cleverness of the prompt. A serverless function that starts from zero gives AI zero context to work with.

Teams solve this by building elaborate context-gathering pipelines. They query multiple databases, assemble user profiles, fetch related records, and construct a context object before the AI ever sees the prompt. This pipeline — not the AI call itself — is where the engineering time goes. And it is duplicated across every AI-powered function.
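A sketch of such a pipeline, with stub fetchers standing in for the multiple datastores a real one queries. The helper names and data are assumptions for illustration; the point is that assembling this object, not the model call, is where the engineering time goes:

```typescript
interface PromptContext {
  user: string;
  project: string;
  relatedTasks: string[];
}

// Hypothetical lookups; in practice each hits a different datastore.
function fetchUserName(userId: string): string {
  return userId === "u1" ? "Ada" : "unknown";
}
function fetchProjectName(projectId: string): string {
  return projectId === "p1" ? "Launch" : "unknown";
}
function fetchRelatedTasks(projectId: string): string[] {
  return projectId === "p1" ? ["Draft plan", "Review copy"] : [];
}

// Assemble everything the model needs before the prompt is built.
function buildContext(userId: string, projectId: string): PromptContext {
  return {
    user: fetchUserName(userId),
    project: fetchProjectName(projectId),
    relatedTasks: fetchRelatedTasks(projectId),
  };
}

function summaryPrompt(ctx: PromptContext): string {
  return `Summarise project "${ctx.project}" for ${ctx.user}. ` +
    `Open tasks: ${ctx.relatedTasks.join(", ")}.`;
}
```

Strip the context out and the same prompt template produces a summary that could belong to any organisation.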

The irony is hard to miss. You deploy AI to save time. Then you spend weeks building the infrastructure to give AI the context it needs to be useful.

What business-aware functions look like

There is a different architecture. Instead of functions that start from zero and build context manually, imagine functions that start with full organisational awareness.

When a request arrives, the function already knows:

  • Who is calling. User identity, verified and available. No auth middleware to write.
  • What team they belong to. Team membership resolved automatically. No lookup queries.
  • What they are allowed to see. Permissions evaluated before your code runs. No manual role checks.
  • What organisation they work for. Tenant isolation handled at the platform level. No scoping logic.
  • What data exists. Structured data — tasks, projects, goals, documents, tables — queryable directly. No separate database integration.

This is what a Context API provides. Instead of request.body containing raw JSON that you must interpret and verify, you get ctx.user, ctx.team, ctx.permissions, and ctx.tables — the full organisational picture, available from line one.

The function you write is pure business logic. Filter the projects. Calculate the metric. Generate the report. Transform the data. The context infrastructure is not your problem because it is not infrastructure at all — it is a platform capability.
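As a sketch, assuming a ctx object shaped like the one described above (the exact Context API shape is platform-specific; this in-memory stand-in mirrors the ctx.user, ctx.team, and ctx.tables names from the text):

```typescript
// Illustrative Context API shape, not a real platform SDK.
interface Ctx {
  user: { id: string };
  team: { id: string };
  permissions: { can(action: string): boolean };
  tables: {
    projects: { id: string; teamId: string; active: boolean }[];
  };
}

// The whole function is business logic. Identity, tenancy, and
// permissions were resolved before this ran; ctx.tables is already
// scoped to the caller's organisation.
function activeProjects(ctx: Ctx) {
  return ctx.tables.projects.filter(
    (p) => p.teamId === ctx.team.id && p.active
  );
}
```

Compare this to the eight-step version: the six context steps have moved into the platform, and the function body is the two steps that were ever actually your feature.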

This changes the economics of serverless development. A function that took two weeks (two days of logic, eight days of integration) now takes two days. The integration work disappears because the platform already did it.

The multiplier effect

The real impact is not on a single function. It is on velocity across the entire fleet.

When every function requires manual integration, teams naturally build fewer functions. The overhead discourages experimentation. Building a quick internal tool or a one-off automation is not quick — it carries the same integration tax as a production endpoint. So teams do not build it. They work around the gap with spreadsheets and manual processes instead.

When functions start with context, the calculus inverts. Building a new function is fast because you only write the logic. Teams build more. They automate more. They experiment more. The internal tools that never got built — the ones that were "not worth the integration effort" — suddenly take an afternoon instead of a sprint.

This is how custom apps become practical for teams that are not software companies. Not because the code is easier to write, but because the 80% of work that had nothing to do with code is eliminated.

Over 10, 20, 50 functions, the difference is not incremental. It is structural. One architecture scales linearly with effort. The other scales linearly with ideas.

Five signs your functions lack business context

1. Every function has its own auth middleware. If your team has copied an authMiddleware.ts file into five different function directories, context is missing at the platform level. Shared libraries reduce duplication but do not eliminate the problem — you are still maintaining integration code.

2. Permission checks are inconsistent. One function checks roles. Another checks a custom flag. A third trusts the frontend to enforce access. When permissions are not centralised, they are not reliable.

3. Database connections are configured per function. If each function manages its own connection string, pooling, and retry logic, you are paying an infrastructure tax on every deployment. Edge-deployed functions make this worse — connection management at the edge is a different challenge than in a data centre.

4. Your AI features produce generic output. If your AI-powered functions generate content that could apply to any organisation, they are missing context. Useful AI output reflects the specific goals, data, and language of the business it serves.

5. Building a new function takes days, not hours. When the majority of development time goes to wiring up integrations rather than writing logic, the platform is not carrying its weight. The purpose of a platform is to handle the parts that are the same across every function so your team focuses on the parts that are different.

The architecture decision

This is not a tooling preference. It is an architecture decision that affects every function you build from this point forward.

On one side: deployment platforms. They run your code at scale. You build the context layer yourself — auth, permissions, data access, tenant isolation, AI context pipelines. The compute is serverless. The integration work is very much server-ful.

On the other side: business platforms that include deployment. They run your code at scale AND provide organisational context to every function natively. The compute and the context are both handled. You write logic.

The difference between a deployment platform and a business platform is not features on a comparison table. It is whether your functions understand the business they serve or operate in isolation.

For a single function with simple requirements, the difference is negligible. Configure auth once, connect to a database, move on.

For an organisation building 10, 20, 50 functions — internal tools, automations, AI-powered workflows, customer-facing endpoints — the difference is weeks of engineering time per quarter. It is the difference between functions that execute in a vacuum and functions that operate with full organisational awareness.

Where this is heading

The serverless market is maturing. The early promise — "deploy functions without managing servers" — has been delivered. Every major cloud provider and hosting platform offers serverless compute. The infrastructure problem is solved.

The next problem is context. Functions that understand the business. Functions that know who is calling, what they need, and what data they can access — without the developer rebuilding that understanding from scratch every time.

This is where the industry splits. Deployment platforms will continue to excel at compute. Business platforms will provide compute AND context. Teams will choose based on whether they want to build the context layer themselves or inherit it from the platform.

The question is not whether your serverless functions can scale. They can. Every platform handles that.

The question is whether your serverless functions know anything about your business. Today, on most platforms, the answer is no. Every function starts from zero. Every function rebuilds the same context. Every function operates alone.

That is the gap. And it is the gap that determines whether serverless is a deployment convenience or a genuine productivity multiplier for your organisation.


Build functions that understand your business. WaymakerOS Ambassadors start with full organisational context — user identity, team membership, permissions, and data access — from line one. No auth middleware. No permission wiring. No database plumbing. Just logic. Learn how Host works or explore what you can build.


Related reading: Understand why context engineering matters more than prompt engineering, see how operations at the edge changes the serverless equation, or explore the difference between a deployment platform and a business platform.

About the Author

Stuart Leo

Stuart Leo founded Waymaker to solve a problem he kept seeing: businesses losing critical knowledge as they grow. He wrote Resolute to help leaders navigate change, lead with purpose, and build indestructible organizations. When he's not building software, he's enjoying the sand, surf, and open spaces of Australia.