Before Going Serverless, Be Aware of Limitations

Last updated: 3.1.2023

Serverless functions (often referred to as Function as a Service, or FaaS) will no doubt continue to grow in popularity and remain a cornerstone of IT services for many years to come. However, they are simply another way of building, maintaining, and delivering IT systems, and as such they come with disadvantages and scenarios in which they may not be the best choice. These limitations stem both from the nature of serverless itself and from how cloud service providers currently implement it.


Limitations Due to the Nature of Serverless

Serverless architectures have some inherent limitations. The most apparent is that serverless functions are, by their very nature, stateless (at least in the majority of cases). Because of this, if you want to retain any data, you must architect how functions interact with the stateful components of your infrastructure stack, which adds complexity.
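As a rough illustration, here is a minimal sketch of that pattern in Python. The handler name, the event shape, and the `KVClient` stand-in are all hypothetical; in practice the client would be a real SDK for whatever external store your stack uses.

```python
import json
from typing import Optional


class KVClient:
    """Stand-in for a real external store SDK (Redis, DynamoDB, Cosmos DB, etc.).
    In a real deployment this object would talk to a managed service over the network."""

    def __init__(self):
        self._data = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)


store = KVClient()  # built once per instance, not once per request


def handle_order(event: dict) -> dict:
    """Stateless FaaS-style handler: local variables vanish when the invocation
    ends, so anything worth keeping is pushed to the external store."""
    order = json.loads(event["body"])
    store.put("order:" + order["id"], json.dumps(order))
    return {"statusCode": 202, "body": json.dumps({"accepted": order["id"]})}


if __name__ == "__main__":
    print(handle_order({"body": json.dumps({"id": "42", "sku": "widget"})}))
```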

It can also lead to more latency. With applications on traditional servers or virtual hosts, you can minimize latency by placing all components on the same instance or placing physical servers within the same rack. With serverless, the infrastructure stack is abstracted away and you do not have as much control over network optimization. Instead, each stateless element communicates via APIs, and as you add more and more of these services, the latency between them stacks up. This shouldn't be a major problem unless you are running a latency-sensitive app.
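To make that concrete, the sketch below simulates a function calling three downstream services in sequence. The service names and delays are invented; the takeaway is simply that sequential hops add their latencies together.

```python
import time


def call_service(name: str, simulated_delay_s: float) -> float:
    """Stand-in for an API call to another serverless component; in production
    this would be an HTTP or SDK call across the provider's network."""
    start = time.perf_counter()
    time.sleep(simulated_delay_s)  # pretend network round trip + service time
    return time.perf_counter() - start


# Three sequential hops, e.g. auth -> inventory -> pricing (hypothetical services)
hops = [("auth", 0.04), ("inventory", 0.07), ("pricing", 0.05)]

total = 0.0
for name, delay in hops:
    elapsed = call_service(name, delay)
    total += elapsed
    print(f"{name}: {elapsed * 1000:.1f} ms")

print(f"end-to-end: {total * 1000:.1f} ms")  # every extra hop adds to the total
```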

Finally, serverless functions run only within their provider's cloud. You can define functions and their supporting resources in templates (JSON or YAML, for example) to gain some portability between cloud providers, but you must still test your serverless functions within a given cloud if you wish to truly replicate production behavior, including error logging and scalability.
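A local unit test like the hedged sketch below can validate your own logic, but it says nothing about scaling behavior, permissions, cold starts, or provider-side logging, which is why in-cloud testing still matters. The handler and event shape are assumptions, not any particular provider's contract.

```python
import json
import unittest


def handler(event: dict) -> dict:
    """Hypothetical function under test; the event shape is an assumption."""
    name = json.loads(event.get("body", "{}")).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"greeting": f"hello {name}"})}


class LocalHandlerTest(unittest.TestCase):
    def test_happy_path(self):
        # This exercises the code you wrote, but not the pieces the cloud owns:
        # scaling behavior, IAM, cold starts, or the provider's error logging.
        resp = handler({"body": json.dumps({"name": "lunavi"})})
        self.assertEqual(resp["statusCode"], 200)
        self.assertIn("lunavi", resp["body"])


if __name__ == "__main__":
    unittest.main()
```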

What many of these inherent limitations boil down to is a loss of control, because the underlying infrastructure is abstracted away. You can't control anything outside of the code you write to execute your function. The infrastructure's performance, issue resolution, system and network configuration, even security protocols such as transport-based access controls are all out of your hands, and you must rely on the vendor for assistance.


Limitations Due to How Serverless is Implemented

There are some additional limitations that will likely evolve as FaaS matures. One example is deployment: because each function serves only a small piece of the overall application, it can be difficult to deploy large-scale apps entirely as serverless functions simply due to the granularity involved.

Another example is the temporal nature of serverless functions. While you can run Azure Functions on Azure VMs as part of an App Service plan, running them as true serverless functions on the Consumption plan means they have a maximum runtime of 10 minutes. Without a Premium plan, they also run into problems with cold starts: if your function hasn't been called in some time, it has to spin up before it can respond, increasing latency.

Premium plans hold perpetually warm instances to avoid this and also add unlimited execution duration, but they are still in preview mode. This is what I mean when I say we expect the current implementation limitations of serverless to evolve: already Microsoft has started exploring ways around some problems like cold starts and execution time limits.
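One way to soften (though not eliminate) cold starts in your own code, regardless of plan, is to do expensive setup once at module scope so warm invocations on the same instance reuse it. The sketch below shows the general pattern; it is not Azure-specific code, and the client class and handler are placeholders.

```python
import time


class ExpensiveClient:
    """Placeholder for something costly to build: an SDK client, a database
    connection pool, a loaded ML model, parsed configuration, etc."""

    def __init__(self):
        time.sleep(0.5)  # simulate slow initialization during a cold start
        self.ready_at = time.time()


# Module scope runs once per cold start; warm invocations on the same
# instance reuse this object instead of paying the setup cost again.
client = ExpensiveClient()


def handler(event: dict) -> dict:
    return {"statusCode": 200, "body": f"client warmed at {client.ready_at}"}


if __name__ == "__main__":
    # The cold-start cost was paid above; these calls behave like warm invocations.
    for _ in range(3):
        start = time.perf_counter()
        handler({})
        print(f"warm invocation: {(time.perf_counter() - start) * 1000:.2f} ms")
```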

Finally, the old nemesis of vendor lock-in returns for serverless functions, as each vendor has its own technologies at play and uses its own APIs. It can be tough to translate or migrate a function from one service to another because each has its own terminology, and components may or may not be analogous. If your app depends on one vendor's database, for example, you may not be able to replicate the exact functionality elsewhere.
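One mitigation worth weighing is to keep vendor-specific calls behind a thin interface of your own, so a later migration only touches an adapter rather than your business logic. The sketch below uses a hypothetical queue abstraction; the class names and the in-memory fallback are illustrative, not a real provider SDK.

```python
from abc import ABC, abstractmethod
from collections import deque
from typing import Optional


class MessageQueue(ABC):
    """Your own seam between application code and a vendor's queue service."""

    @abstractmethod
    def publish(self, message: str) -> None: ...

    @abstractmethod
    def receive(self) -> Optional[str]: ...


class InMemoryQueue(MessageQueue):
    """Illustrative stand-in; a real adapter would wrap a provider SDK."""

    def __init__(self):
        self._q = deque()

    def publish(self, message: str) -> None:
        self._q.append(message)

    def receive(self) -> Optional[str]:
        return self._q.popleft() if self._q else None


def process_signups(queue: MessageQueue) -> None:
    # Application code depends only on MessageQueue, so swapping vendors
    # means writing a new adapter rather than rewriting business logic.
    while (msg := queue.receive()) is not None:
        print(f"provisioning account for {msg}")


if __name__ == "__main__":
    q = InMemoryQueue()
    q.publish("ada@example.com")
    q.publish("grace@example.com")
    process_signups(q)
```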


Serverless is Still Great Tech

Despite some limitations, including increased complexity, limits on execution time and resource consumption, API dependencies, and a pay-per-call model that can get very expensive, serverless FaaS can be a great way to automate, scale, and deploy your cloud-native infrastructure.

In particular, serverless functions can deliver faster deployment times, automatic scaling, and, despite the pay-per-call model, lower costs thanks to granular billing that charges only when a function runs. They are not essential for every app, but you should weigh the pros and cons before architecting new cloud services.
