
Ultimate Guide to Kubernetes NaaS

· 16 min read
Patryk Kobielak
Cloud Native Enthusiast & Founder @ Sharedkube


I assume readers of this guide understand all basic Kubernetes object types and the basics of Kubernetes architecture. There are a few fundamental concepts to understand before diving into the details of managed Kubernetes' evolution, its benefits, and the impact of this technology.

What is NaaS

Namespace-as-a-Service (NaaS), at its core, is a business model that delivers a Kubernetes namespace as a product to a customer. It might, but does not necessarily, involve multi-tenancy, and it can be implemented internally (dev) or publicly (production). Multi-tenant implementations require ensuring data, network, and process isolation between tenants; tenants' actions also have to be filtered through policy enforcement tools. Public implementations require ensuring the platform is prepared to run production workloads.
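To make the network-isolation part of multi-tenancy concrete, here is a minimal sketch of a Kubernetes NetworkPolicy that restricts a tenant namespace to intra-namespace traffic, built as a plain Python dict. This is an illustrative manifest, not any particular platform's actual policy; the `tenant-isolation` name is hypothetical, and a real NaaS platform would combine policies like this with RBAC and policy-enforcement tooling.

```python
# Illustrative sketch: a NetworkPolicy (as a plain dict) that allows pods
# in a tenant namespace to receive traffic only from pods in the same
# namespace. The policy name is hypothetical.

def tenant_network_policy(namespace: str) -> dict:
    """Build a NetworkPolicy manifest restricting `namespace` to
    intra-namespace ingress traffic."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "tenant-isolation", "namespace": namespace},
        "spec": {
            "podSelector": {},  # empty selector = applies to all pods
            "policyTypes": ["Ingress"],
            "ingress": [
                # An empty podSelector in `from` matches all pods in the
                # policy's own namespace, and nothing outside it.
                {"from": [{"podSelector": {}}]}
            ],
        },
    }

policy = tenant_network_policy("tenant-a")
print(policy["metadata"])  # {'name': 'tenant-isolation', 'namespace': 'tenant-a'}
```

Serialized to YAML and applied per tenant namespace, a policy like this is one building block of the data and network isolation described above.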

NaaS is not a revolution in cloud computing. It is an evolution of Managed Kubernetes offerings such as AKS (Azure provider), EKS (AWS provider) or GKE (GCP provider), needed to solve pain points that still exist in using these technologies.

What are Managed Services Interfaces


Managed Services, in the context of NaaS, are all services deployed on the Kubernetes cluster that make it fully operational for workloads. Examples include fundamental ones like ingress controllers, external-dns, monitoring software, or storage providers, but also additional ones that enhance the Kubernetes environment, like service meshes, progressive delivery orchestrators, or custom operators.

Managed Services Interfaces are the interfaces that cluster operators have configured and documented so that cluster users can consume Managed Services for their applications. These can be as simple as a document stating, e.g., “To expose an ingress with a custom domain, first add the custom-domain: annotation to your namespace object.” or as complex as creating custom Kafka clusters using CRDs with a provided schema (which is still not that complex, right?).
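The annotation-based interface from the quoted example could be sketched as follows. Note that `custom-domain` is the article's own hypothetical annotation key, not a standard Kubernetes one; a real platform would document its own key. The manifest is modeled as a plain dict for illustration.

```python
# Sketch of an annotation-based Managed Services Interface, assuming a
# hypothetical `custom-domain` annotation key taken from the example above.

def annotate_namespace(manifest: dict, domain: str) -> dict:
    """Return a copy of a Namespace manifest with the platform's
    custom-domain annotation set, as a cluster user might do."""
    annotated = {**manifest}
    metadata = {**annotated.get("metadata", {})}
    annotations = {**metadata.get("annotations", {})}
    annotations["custom-domain"] = domain
    metadata["annotations"] = annotations
    annotated["metadata"] = metadata
    return annotated

ns = {"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": "team-a"}}
print(annotate_namespace(ns, "app.example.com")["metadata"]["annotations"])
# {'custom-domain': 'app.example.com'}
```

On a real platform, a controller watching namespaces would react to the annotation, e.g. by wiring up DNS and an ingress for the tenant.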

Separation of responsibilities between NaaS users and operators

A NaaS solution without Managed Services Interfaces for developers makes little to no sense to implement, because the separation of expertise between teams would not match the operations each team is meant to perform. Application developers should focus on innovation and cluster usage - not administration, while cluster operators should focus on cluster administration and on providing easy-to-use, configurable, and healthy Managed Services Interfaces for developers - not on using them.

NaaS implementation types

Given the explanations above, although NaaS implementations share some common features, NaaS can take multiple forms:

  • Internal Single-Tenant - a cluster delivered to an internal team with interfaces for managed services; developers' use of the cluster is scoped to one or multiple namespaces
  • Internal Multi-Tenant (can be an implementation of the concept known as an IDP) - a cluster delivered to internal teams with interfaces for managed services; each team's use of the cluster is scoped to one or multiple namespaces, and isolation is provided between tenants
  • Public Single-Tenant - inherits from a Managed Kubernetes Service (e.g. AKS, EKS, GKE) but additionally provides interfaces for managed services; use of the cluster is scoped to one or multiple namespaces, and the cluster is prepared to run production workloads
  • Public Multi-Tenant - inherits from a Managed Kubernetes Service (e.g. AKS, EKS, GKE) but additionally provides interfaces for managed services; use of the cluster is scoped to one namespace per tenant, isolation is provided between tenants, and the cluster is prepared to run production workloads
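The "scoped to one or multiple namespaces" part of these implementation types maps, in plain Kubernetes, to a Role/RoleBinding per namespace. The sketch below builds such bindings as plain dicts; the user and namespace names are hypothetical, while `edit` is a real built-in ClusterRole.

```python
# Illustrative only: in plain Kubernetes, scoping a user to a namespace is
# usually done with a RoleBinding per namespace. `edit` is a built-in
# ClusterRole; user and namespace names below are hypothetical.

def edit_binding(user: str, namespace: str) -> dict:
    """RoleBinding granting `user` the built-in `edit` ClusterRole,
    but only inside `namespace`."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": f"{user}-edit", "namespace": namespace},
        "subjects": [{"kind": "User", "name": user,
                      "apiGroup": "rbac.authorization.k8s.io"}],
        "roleRef": {"kind": "ClusterRole", "name": "edit",
                    "apiGroup": "rbac.authorization.k8s.io"},
    }

# One binding per namespace the tenant is allowed to use.
bindings = [edit_binding("dev-1", ns) for ns in ("team-a", "team-a-staging")]
print([b["metadata"]["namespace"] for b in bindings])  # ['team-a', 'team-a-staging']
```

A NaaS platform would generate and reconcile objects like these for every tenant, so users never touch cluster-wide RBAC themselves.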

Problems to solve

Although Kubernetes has impressive features, achieving widespread adoption requires solving significant challenges that prevent many companies from using it as their main hosting solution. Others struggle with the complexity of maintenance and the technology itself. Let’s check out these problems and their impact.

Kubernetes Favors Enterprises Over Startups


Kubernetes adoption often requires a large commitment from the company incorporating it, making it more suitable for larger enterprises than startups. The platform's complexity and the extensive resources needed for its operation and maintenance demand significant technical expertise and financial investment.

Startups, with their limited budgets and smaller technical teams, may find it difficult to leverage Kubernetes effectively without compromising agility or overextending their resources. Larger organizations, on the other hand, can afford the dedicated IT staff and infrastructure costs associated with Kubernetes, enabling them to fully utilize its capabilities for scalable and efficient application deployment. This disparity in resource availability and technical capacity makes Kubernetes a challenging option for startups aiming to stay lean and move quickly.

Overdependence Risk


Startups risk becoming overly dependent on a few team members with specific knowledge, which can lead to challenges in knowledge transfer and continuity. The reliance on a limited number of key team members for critical knowledge and operations is more common in startups, where teams are smaller.

It is often argued that simpler solutions could be more effective, especially for a Minimum Viable Product (MVP), and could help avoid missing crucial investment opportunities due to overcomplicated setups. However, this approach leads to the possibility that startups will transition to Kubernetes as their company matures, potentially facing challenges integrating with the cloud-native ecosystem if their product was not originally designed to leverage the latest features of cloud-native technologies. These challenges could be mitigated by planning and building the application with a cloud-native environment in mind from day one.

Resource Efficiency Concern


Startups need to optimize their financial and computational resources, making the cost of setting up and maintaining complex infrastructures like Kubernetes a significant concern.

Oftentimes, startups rely on a limited budget, which is the primary reason for carefully selecting investments that ensure fast growth for the company. The budget refers not only to money but also to the available workforce. Deployment, development, and maintenance of a Kubernetes cluster can be very time-consuming, sometimes reaching a low cost-effectiveness ratio without a highly specialized cloud engineer on the team. In that case, forgoing all the shiny features of Kubernetes might be the right choice to keep the startup alive.

Flexibility and Lock-in Issue


Startups value the ability to pivot and adapt quickly, making them particularly sensitive to the risks of vendor lock-in that can restrict their future technology choices. When opting for Kubernetes, many are drawn to public cloud managed solutions from big cloud providers (like AWS, GCP or Azure) for their convenience and scalability. However, this choice often leads to vendor lock-in, limiting startups' flexibility and ability to pivot. This dependency complicates migrating services, adapting to new technologies, or optimizing costs effectively, posing significant challenges for startups that thrive on agility and the need to innovate rapidly in response to market demands.

High Cost of DevOps Talent


The high cost of DevOps talent poses a significant challenge for companies aiming to utilize Kubernetes, a complex container orchestration platform requiring specialized knowledge for effective deployment and management. As demand for skilled DevOps professionals outpaces supply, salaries have surged, making it financially burdensome for companies, especially startups and SMEs, to hire and retain the necessary expertise. This financial strain can divert resources away from other critical areas such as product development, marketing, and customer support.

Moreover, the steep learning curve associated with Kubernetes adds to the challenge, as training existing staff can be time-consuming and costly, potentially slowing down innovation and deployment cycles. Consequently, companies may find themselves in a difficult position, unable to fully leverage Kubernetes' benefits for lack of affordable expertise, which in turn could impede their ability to scale, innovate, and compete effectively in the market.

Knowledge Barrier Problem


The Knowledge Barrier Problem significantly impacts companies aiming to utilize Kubernetes due to its inherent complexity and the specialized expertise it demands. The steep learning curve associated with Kubernetes can slow adoption, leading to potential misconfigurations and inefficient utilization. These challenges can compromise the stability, security, and performance of deployments, delaying product development and extending time to market. As a result, companies might struggle to harness Kubernetes' full capabilities, hindering their ability to scale, innovate, and secure a competitive edge in their respective markets.

Complexity vs Benefit Dilemma


Kubernetes offers significant advantages such as scalability, portability, high availability, and improved resource efficiency. These features enable businesses to manage workload fluctuations efficiently, ensure continuous application availability, and reduce operational costs by optimizing infrastructure usage. Furthermore, Kubernetes supports DevOps practices and CI/CD integration, which accelerates development cycles and enhances product time-to-market.

However, the adoption of Kubernetes is not without its challenges. The platform is known for its operational complexity, requiring substantial investment in training and skills development. Additionally, the initial setup and ongoing maintenance pose considerable operational overhead, and the overall cost of implementation can be very high. Kubernetes-based platforms need constant attention: as the leading cloud-native technology, Kubernetes evolves very fast, its core components regularly introduce breaking changes to watch out for, and much of the accompanying system software on the cluster requires caution and precision when upgrading. Security concerns and the management of interdependent components add to the complexity. Businesses must, therefore, weigh these complexities against the potential benefits, considering their strategic goals, resource availability, and the competitive landscape to make an informed decision on whether Kubernetes aligns with their long-term success strategy.

Scalability Planning Challenge


The necessity to anticipate future growth and ensure that the infrastructure can scale accordingly is a significant challenge for companies. Predicting the scale and pace of growth is inherently difficult. When a company scales too fast for the DevOps team to keep up with the infrastructure, the platform can suffer in quality, becoming more error-prone in the future. This scenario increases the likelihood of incidents, potentially compromising platform stability and security.

Furthermore, rapid scaling beyond the DevOps team's capacity to manage the infrastructure can slow down future development. The platform may become more susceptible to errors, causing more incidents to happen. This not only diverts resources from innovation to crisis management but also affects the overall user experience negatively. Ensuring that the infrastructure can smoothly scale with the company's growth is crucial to maintaining a high-quality platform, reducing the risk of incidents, and supporting continuous development.

Environmental Impact and CO2 Footprint Concern


Datacenters' impact on the climate is now estimated to rival, and by some estimates surpass, that of aviation. There was also a memorable keynote at KubeCon 2020, “How to Love K8s and Not Wreck the Planet” by Holly Cummins, which raised awareness of what will happen in the near future if we don't act on climate responsibility in how we manage today's workloads.

Kubernetes operators and developers often spin up workloads that later become so-called “zombie workloads”: they just stay running, doing nothing and consuming energy to stay up. Another common pattern is intermediary environments (k8s clusters) like “staging” or “pre-prod”, which get created for some particular deployment reason and then left running because “they might be needed in the future”, instead of using advanced deployment strategies like shadow deployments, A/B testing, or canaries. This leads to inefficient resource allocation and higher costs, but also drives a higher environmental impact that needs to be addressed.
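One simple heuristic for spotting zombie-workload candidates is to flag workloads that are still running but have served no traffic for a long time. The sketch below assumes per-workload metrics (field names like `replicas` and `last_request_at` are illustrative) that you would normally pull from a monitoring stack; it is a minimal example of the idea, not any platform's actual cleanup logic.

```python
from datetime import datetime, timedelta

# Hedged sketch: flag workloads that are still running but have served no
# requests for longer than `idle_after`. All field names are illustrative;
# real data would come from a metrics backend.

def zombie_candidates(workloads: list,
                      idle_after: timedelta = timedelta(days=14)) -> list:
    """Return names of workloads with running replicas but no recent traffic."""
    now = datetime.utcnow()
    return [
        w["name"] for w in workloads
        if w["replicas"] > 0
        and now - w["last_request_at"] > idle_after
    ]

workloads = [
    {"name": "api", "replicas": 3,
     "last_request_at": datetime.utcnow() - timedelta(hours=1)},
    {"name": "old-demo", "replicas": 1,
     "last_request_at": datetime.utcnow() - timedelta(days=90)},
]
print(zombie_candidates(workloads))  # ['old-demo']
```

Running a report like this periodically, and scaling flagged workloads to zero after review, directly reduces both cost and energy waste.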

Sharedkube NaaS solutions

Sharedkube addresses all of the challenges highlighted in the previous section through its innovative Namespace-as-a-Service (NaaS) solutions, designed specifically for startups and businesses looking to harness the power of Kubernetes without the associated complexities and high costs, while maintaining flexibility and avoiding vendor lock-in. Here's how Sharedkube's NaaS solutions tackle each problem.

Kubernetes Makes Startups Stronger


Sharedkube's unique NaaS model democratizes access to Kubernetes, making it feasible for startups. By abstracting the underlying complexity of Kubernetes and providing a simplified, scalable infrastructure, Sharedkube enables startups to deploy and manage their applications with ease, without requiring a large IT staff or extensive Kubernetes expertise. This lowers the barrier of entry to the cloud-native ecosystem, allowing startups to enjoy the benefits of Kubernetes right from the start.

Mitigating Overdependence Risk


Sharedkube reduces the risk of overdependence on a few key team members by offering a managed service that is easy to use and well-documented. This facilitates knowledge transfer and ensures continuity, even in smaller teams. By providing Managed Services Interfaces and support, Sharedkube ensures that startups can maintain agility and reduce the risk of knowledge bottlenecks. And if the documentation is not enough, the Sharedkube team is your friendly DevOps team always there to help you succeed.

Enhancing Resource Efficiency


Sharedkube's NaaS solution is based on cutting-edge technologies and concepts that allow it to optimize total financial and computational resources by up to 90%. By offering a production-grade infrastructure that is scalable and robust, startups can minimize their initial investment and operational costs. This efficiency allows startups to allocate their limited resources more effectively, focusing on growth and innovation rather than infrastructure management.

Ensuring Flexibility and Removing Lock-in


Sharedkube's platform is designed with flexibility in mind, allowing startups to avoid vendor lock-in and maintain their ability to pivot and adapt quickly. By providing a layer of abstraction over Kubernetes, Sharedkube ensures that startups can easily migrate their services if needed, preserving their agility and ability to innovate. From our customers' perspective, using Sharedkube is just like using yet another Kubernetes cluster, but one governed by a DevOps team of experts in the field.

Reducing Dependency on DevOps Experts


By simplifying the deployment and management of Kubernetes environments, Sharedkube reduces the need for specialized DevOps talent. This allows startups and SMEs to leverage Kubernetes without the high cost of hiring and retaining a dedicated DevOps team, freeing up financial resources for other critical business needs.

Lowering the Knowledge Barrier


Developers should not need to know how to upgrade Kubernetes clusters and set up Thanos for cluster monitoring. Sharedkube lowers the knowledge barrier by taking all Kubernetes management and maintenance tasks off customers’ shoulders, but also by offering a user-friendly platform with extensive documentation and support. This enables companies to quickly onboard their teams and start leveraging Kubernetes without a steep learning curve, accelerating adoption and reducing the risk of misconfiguration.

Balancing Complexity vs Benefit


Sharedkube's NaaS solution is designed to offer the benefits of Kubernetes—such as scalability, portability, and high availability—while minimizing complexity and operational overhead. By providing managed services and interfaces, Sharedkube enables businesses to focus on their core competencies and innovation, rather than getting bogged down in the intricacies of Kubernetes management.

Facilitating Scalability Planning


Sharedkube's scalable infrastructure ensures that businesses can grow their applications as cloud-native citizens seamlessly from day one, without worrying about the underlying infrastructure. This is achieved by providing a modular platform that can handle increased workloads efficiently through adjustable plans and hosting options, ensuring that the quality and stability of the platform are maintained even as the company grows.

Reducing Environmental Impact


Sharedkube is committed to sustainability and minimizing the environmental impact of its operations. By optimizing resource utilization and encouraging efficient deployment strategies, Sharedkube helps reduce the carbon footprint associated with running Kubernetes clusters. This approach aligns with the growing need for climate responsibility in the tech industry, offering a more sustainable solution for cloud-native development.

Comparisons to other solutions

Sharedkube NaaS is often misunderstood in how it works and how it compares to other solutions on the market that seem to achieve a similar outcome. Let us have a look at brief comparisons to other solutions currently available.

Sharedkube vs GKE Autopilot


Sharedkube differentiates itself from GKE Autopilot by offering a more flexible, cost-efficient, vendor-agnostic approach to Kubernetes management. Autopilot provides an automated management layer for the Kubernetes control plane and data plane with limited additional services integration, whereas Sharedkube provides the same plus Managed Services Interfaces on top, serving as plug-and-play extensions that easily fit into the user's preferred tech stack. GKE Autopilot is also far more expensive for the same features: Sharedkube reduces DevOps costs by up to 90% compared to a conventional setup, while Autopilot has been benchmarked at 191% above GKE's standard offering. Finally, Sharedkube does not lock you into any cloud provider.

Sharedkube vs Fargate


Compared to AWS Fargate, which is a serverless runtime for containerized applications, Sharedkube is a Kubernetes Namespace provider that offers a deeper level of customization and control over Kubernetes workloads. From an operational point of view, both products free the user from control plane and data plane management, but Sharedkube additionally implements Managed Services Interfaces that extend workloads' functionality in a plug-and-play fashion, with simple yet extendable configuration where needed. It is hard to compare cost-effectiveness, as serverless is often aimed at different use cases than server-based workloads.

Sharedkube vs Project Capsule


The first thing to clarify is that Project Capsule is a framework that can be applied on top of Kubernetes to achieve partial multi-tenancy; Sharedkube, on the other hand, is a Kubernetes platform provider where multi-tenancy is just part of how it works. To put it simply, Project Capsule could have been used to build a small part of Sharedkube, implementing part of the multi-tenancy concept (but it was not). Project Capsule can, however, achieve an acceptable level of multi-tenancy in Internal Developer Platforms. That said, Sharedkube offers much more: it not only extends what Project Capsule does for multi-tenancy by providing isolation on more levels, but also provides optimized low-cost compute, Managed Services Interfaces, and active support.

Sharedkube vs vCluster


vCluster is a project that offers virtual Kubernetes clusters on top of a host Kubernetes cluster as a way to achieve Kubernetes multi-tenancy with isolated environments. It would be very hard to compare it to Sharedkube - a Kubernetes platform in which multi-tenancy is a part of how its core works. However, vCluster could be used as an extension for Sharedkube users which they could deploy in their namespaces to interact with a full cluster for playing with operations and administration around Kubernetes.

The Discussion Never Stops

The cloud native landscape is expanding and growing extremely fast thanks to its amazing community. We highly encourage discussion of all the subjects covered in this article. If you have come across more problems that the current Kubernetes platform market is struggling with and would like us to update this article, or have any other questions, please let us know by writing to us at [email protected] or by joining our Slack.