19 May 2022
eBPF is a powerful Linux kernel technology that is used in several CNCF projects to provide faster networking, new security applications, and deeper observability. In this talk, we explore how eBPF, using the Cilium project, allows you to build a service mesh entirely without sidecars while still relying on proven Envoy proxy technology. We will look at how moving service mesh functionality into the kernel using eBPF leads to massive performance gains and simplification of the overall model while remaining compatible with existing control planes. Service mesh will become invisible at the kernel level, similar to how namespaces, the foundation of containers, are invisible today. The sidecar-free model unlocks a simpler architecture, performance gains, scalability advantages, and even more transparency to applications. Together, we will look at the new architecture, compare performance numbers, and run through a demo.
- 2 participants
- 11 minutes
19 May 2022
Keynote: An update on the extremely boring and uninteresting world of Linkerd - William Morgan, Buoyant
In this keynote, William Morgan, CEO of Buoyant and one of the creators of Linkerd, will deliver a project update on the extremely boring world of the Linkerd service mesh, the CNCF’s only graduated service mesh. William will cover all the uninteresting things happening in this boring project and discuss some of its profoundly non-exciting approaches to some perfectly ordinary challenges.
- 1 participant
- 5 minutes
19 May 2022
Keynote: Expanding the 80/20 Rule for Creating Service Mesh Value - Idit Levine, Solo.io
While service mesh usage continues to grow, far too many companies are only seeing value from mTLS and observability: 80% of their value comes from 20% of the capabilities. As we connect and secure the world’s modern applications, we must strive to bring value to a broader set of use cases. This means not only improving performance and simplifying operations, but also exposing more of the value of service mesh to application teams, security teams, and ultimately to the forefront of the business.
In her talk, Idit Levine, CEO of Solo.io, will discuss innovative use cases that can be enabled by extending a service mesh. She will explore how the flexibility of mesh architectures can be used to enable more flexible, more secure, and more powerful usage patterns for companies.
- 1 participant
- 9 minutes
19 May 2022
Lightning Talk: Clearing the confusion about eBPF and service mesh - Yuval Kohavi, Solo.io
eBPF is an exciting technology that allows developers to extend the capabilities of the Linux kernel without modifying the kernel itself. Access to these kernel capabilities can be a huge advantage, especially in networking, but what is the responsibility of this layer when it comes to service mesh? In this talk, we discuss the importance of separation of layers, where eBPF fits for service mesh (and where it doesn't), and how best to optimize the service mesh architecture and experience for the real problems users have: security, observability, flexible policy enforcement, and overall traffic management.
- 1 participant
- 11 minutes
19 May 2022
Lightning Talk: GitOps and Controllers: It’s Not That Simple for Multi-cluster - Alex Ly, Solo.io
GitOps has become a valuable approach to managing configuration for applications and infrastructure. Having a source of truth that can be automated, is auditable, and is easy to understand is increasingly important when expanding to many deployments. However, enabling multi-cluster capabilities typically presents new challenges: not every cluster is the same, context is important, and managing every lower-level configuration across multiple environments can get cumbersome (and dangerous) quickly. This talk will focus on a specific example where multi-cluster GitOps is difficult: application networking and security with service mesh. The goal is for platform teams to provide the right point of demarcation with abstractions that focus on intent, while abstracting away the translation and orchestration of lower-level config (mesh-specific API resources in this case). We share our experiences building these abstractions with some of the largest deployments of service mesh in the world.
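The pattern described here, a small intent-focused input expanded into lower-level, mesh-specific resources per cluster, can be sketched as follows. Everything in this snippet (the `expand_intent` function, its field names, and the rendered Istio `VirtualService` shape) is a hypothetical illustration, not the actual Solo.io API:

```python
# Illustrative sketch: a platform team exposes a high-level "intent", and a
# controller translates it into per-cluster, mesh-specific configuration.
# All names and fields here are hypothetical.

def expand_intent(intent: dict, clusters: list[dict]) -> dict:
    """Translate one high-level traffic intent into per-cluster config dicts."""
    rendered = {}
    for cluster in clusters:
        rendered[cluster["name"]] = {
            "apiVersion": "networking.istio.io/v1beta1",
            "kind": "VirtualService",
            "metadata": {"name": intent["service"], "namespace": intent["namespace"]},
            "spec": {
                "hosts": [f'{intent["service"]}.{intent["namespace"]}.svc.cluster.local'],
                "http": [{
                    "route": [{"destination": {
                        "host": intent["service"],
                        # Cluster-specific context (e.g. a canary subset) is
                        # applied here, hidden from the application team.
                        "subset": cluster.get("subset", "stable"),
                    }}],
                }],
            },
        }
    return rendered

intent = {"service": "checkout", "namespace": "shop"}
clusters = [{"name": "us-east"}, {"name": "eu-west", "subset": "canary"}]
configs = expand_intent(intent, clusters)
```

The application team only ever edits `intent` in Git; the controller owns the translation, which is the point of demarcation the talk argues for.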
- 1 participant
- 8 minutes
19 May 2022
Lightning Talk: MeshMark: Service Mesh Value Measurement - Lee Calcote, Layer5 & Mrittika Ganguli, Intel
Still trying to understand how best to gauge the performance of your cloud native infrastructure? Confused as to whether self-published performance benchmarks are trustworthy or simply biased marketing in disguise? Measurement data may not provide a clear and simple picture of how well those applications are performing from a business point of view, a characteristic desired in metrics that are used as key performance indicators. Behold MeshMark: a performance index that gives you the ability to weigh the value vs. the overhead of your cloud native environment. Convert performance measurements into insights about the value of individual cloud native application networking functions. Join us as we distill a variety of microarchitecture performance signals and application key performance indicators into a simple scale. Explore the other side of the performance measurement coin: value measurement.
- 2 participants
- 10 minutes
19 May 2022
Lightning Talk: Move Over API Gateway... into Your Service Mesh - Marino Wijay, Solo.io
They say API gateways are for your "north-south" traffic into your clusters and service mesh is for your "east-west" traffic. Is this really the case? As you deploy a service mesh for high availability, failover, and tenancy, you will find north-south and east-west start to converge. Instead of thinking of API gateways and service mesh as separate and different, we should be thinking of them as the same thing. In this talk, we explore the role of the modern API gateway and how we can make it part of the service mesh.
- 1 participant
- 8 minutes
19 May 2022
Lightning Talk: Multi-cluster Istio Mesh – Complex or Piece of Cake? - Laszlo Bence Nagy, Cisco
Setting up and then preserving a multi-cluster Istio mesh is cumbersome today, as it involves several manual steps. Those steps are not automatic because there is no continuous synchronization mechanism between the participating clusters. With the open-source Cisco (formerly Banzai Cloud) Istio operator, forming and then sustaining a multi-cluster Istio mesh is almost fully automated. This is made possible by a cluster registry controller component, which provides continuous synchronization of the necessary resources between the clusters. In this session, you will learn:
- How to form a multi-cluster mesh with ease
- How the necessary resources are synced between the clusters
- How the system recovers even when network endpoints change
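The continuous-synchronization idea behind such a cluster registry controller can be sketched as a reconcile loop: copy each peer cluster's relevant resources locally, re-apply anything that drifts, and prune anything whose source disappeared. The code below is a hypothetical illustration of that loop, not the operator's actual implementation:

```python
# Illustrative reconcile loop for cross-cluster resource synchronization.
# local_store maps (cluster, resource_name) -> spec; peers maps each peer
# cluster name to its current resources. All names are hypothetical.

def reconcile(local_store: dict, peers: dict) -> dict:
    """Bring local copies of peer resources in line with the peers' state."""
    desired = {}
    for cluster, resources in peers.items():
        for name, spec in resources.items():
            desired[(cluster, name)] = spec
    # Apply adds and updates (e.g. a remote endpoint whose address changed).
    for key, spec in desired.items():
        if local_store.get(key) != spec:
            local_store[key] = spec
    # Prune local copies whose source resource no longer exists.
    for key in list(local_store):
        if key not in desired:
            del local_store[key]
    return local_store

# Example: the locally cached endpoint for svc-a in cluster c2 is stale, and
# a resource from a departed cluster c3 is still lying around.
store = {("c2", "svc-a"): {"ip": "10.0.0.1"}, ("c3", "stale"): {}}
peers = {"c2": {"svc-a": {"ip": "10.0.0.5"}}}
store = reconcile(store, peers)
```

Running this loop continuously is what lets the mesh recover when network endpoints change: the next reconciliation simply re-applies the new addresses.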
- 1 participant
- 11 minutes
19 May 2022
Multi-region Service Mesh: How Koyeb Built and Operates One Using Kuma and Envoy - Yann Léger, Koyeb
Service mesh is everywhere, but how do you build and operate one across regions and at scale? This talk will cover how Koyeb, a serverless cloud provider, built a globally distributed mesh to provide easy-to-use inter-service connectivity across multiple cloud providers and continents. To achieve this, they built a custom stack with a multi-region service mesh using Kuma, an open-source service mesh control plane that orchestrates Envoy proxies. Yann will walk you through how they built distributed connectivity inside the platform, the key decisions they had to make, and what their architecture looks like. They now have a purpose-built stack based on Nomad, Firecracker, and Kuma.
- 1 participant
- 19 minutes
19 May 2022
Opening + Welcome - Craig Box, Google [Program Committee Member]
- 4 participants
- 11 minutes
19 May 2022
Organize Your Mesh - How to Run a Multi-Tenant Service Mesh in Production - Christian Posta, Solo.io
Service meshes offer a breadth of benefits, from securing traffic to adding reliability to gaining visibility into your applications. However, as you scale your environment and start onboarding different teams or applications into the mesh, you run into challenges of tenant isolation in terms of configuration management, resource consumption, and security. In this session, Christian will present how to securely operate and run a multi-tenant mesh in production using the primitives available from a service mesh like Istio. You will also learn how to take these concepts from a single cluster to a multi-cluster environment and successfully run applications across different clusters in a multi-tenant, unified service mesh.
- 2 participants
- 30 minutes
19 May 2022
Panel Discussion: The Future of Service Mesh: Is eBPF a Silver Lining or a Silver Bullet? - Moderated by Craig Box, Google; Thomas Graf, Isovalent; Idit Levine, Solo.io; Viktor Gamov, Kong & William Morgan, Buoyant
Service mesh implementations normally take one of two forms: a proxy per node, or a proxy per workload (the so-called "sidecar"). Linkerd went from A to B. Cilium is suggesting we can go from B to A. Is eBPF a savior, or are we hyper-optimizing a tiny piece of the datapath? And what else might the future of service mesh hold?
- 11 participants
- 41 minutes
19 May 2022
Protocol Detection: A Deep Dive into How Linkerd Achieves Zero-Config - Kevin Leimkuhler, Buoyant
Zero-config is one of Linkerd's claims to fame: for (most) Kubernetes apps, adding Linkerd doesn't require user config, even if the app uses arbitrary TCP protocols which Linkerd must proxy in a fully transparent manner. Protocol detection automatically determines the protocol based on the data on the connection. Linkerd maintainer Kevin Leimkuhler will describe the mechanics of how Linkerd's protocol detection works, covering the strengths and weaknesses of the current implementation, including so-called server-speaks-first protocols and why they need to be handled differently. He'll also cover how the implementation has evolved over the years as Linkerd adoption has grown to encompass even more types of applications and protocols, including the introduction of "skip ports" and "opaque ports". Finally, attendees will learn how opaque ports are implemented in the proxy using ALPN, and how Linkerd is still able to provide mTLS and golden metrics for this type of traffic.
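The core idea, peeking at the first client bytes to classify the stream and falling back to opaque TCP when a server-speaks-first protocol sends nothing to inspect, can be sketched like this. The byte signatures, timeout, and return values below are illustrative assumptions, not Linkerd's actual (Rust) implementation:

```python
import socket

# Hypothetical sketch of client-first protocol detection: peek at the first
# bytes without consuming them; if the client stays silent (a server-speaks-
# first protocol such as MySQL or SMTP), treat the stream as opaque TCP.

HTTP_METHODS = {b"GET ", b"POST", b"PUT ", b"HEAD", b"DELE", b"PATC",
                b"OPTI", b"TRAC", b"CONN"}  # first 4 bytes of HTTP/1.x methods

def detect_protocol(conn: socket.socket, timeout: float = 0.1) -> str:
    conn.settimeout(timeout)
    try:
        first = conn.recv(4, socket.MSG_PEEK)  # peek: bytes stay buffered
    except socket.timeout:
        return "opaque"  # nothing to inspect: proxy transparently as raw TCP
    if first == b"PRI ":  # HTTP/2 connection preface starts "PRI * HTTP/2.0"
        return "http/2"
    if first in HTTP_METHODS:
        return "http/1"
    return "opaque"
```

A real implementation must also handle partial reads and TLS, and still needs the configuration escape hatches the talk covers ("skip ports" and "opaque ports") for connections where no client bytes will ever arrive.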
- 1 participant
- 30 minutes
19 May 2022
Shh, It is a Secret: Manage Your Workload Certs in Service Mesh without Persisting Any Secrets - Lin Sun, Solo.io
Most service mesh projects provide a self-signed CA, but that is a non-starter for a production environment: most organizations already have a PKI system in place before they adopt any service mesh. While many service mesh projects have added support for plugging in your intermediate CA or external PKI system, they require persisting the intermediate or root CA’s private key as a Kubernetes secret, which is a security concern. This talk discusses a few innovative approaches in the service mesh community to tackling this challenge and the tradeoffs among them.
- 1 participant
- 26 minutes
19 May 2022
Tidy Up Microservices Connectivity with Apache Kafka® and Kuma - Danica Fine, Confluent & Viktor Gamov, Kong
Given the rising popularity of microservice-based architectures, Kubernetes has solidified itself as one of the most dominant container management systems available on the market. That said, deploying a host of RESTful/HTTP/gRPC services on Kubernetes has not been historically easy or efficient. Enter: service mesh. The service mesh easily facilitates the communication of synchronous microservices over a network. Leveraging an asynchronous communication method – such as that provided by Apache Kafka® messaging – may complicate things. Over the course of this presentation, Danica and Viktor will explore how to deploy Kafka-based microservices – including vanilla Kafka and Kafka Connect components – together with Kuma. We’ll also discuss the benefits and relevance of this particular approach.
- 3 participants
- 31 minutes
19 May 2022
Tune Your Service Mesh - Mohammad Reza Saleh Sedghpour, Umeå University
Service meshes improve developer productivity by factoring out cloud native patterns such as retries and circuit breaking into a unified network control plane. Adopters of modern cloud platforms have realized how tough tuning these patterns can be in a highly interdependent and dynamic microservice architecture, and how improper configuration can reduce throughput and/or increase response time. This talk will discuss the pitfalls of configuring circuit breaking and retry mechanisms in a multi-tier application using Istio, and will help you configure your environment systematically to enhance the end-to-end performance of your application.
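One concrete pitfall in this space, retry amplification across tiers, is easy to quantify. The sketch below (with illustrative numbers, not figures from the talk) shows why independently chosen per-tier retry settings compound:

```python
# If every tier in an N-tier call chain independently retries a failing call
# R times, the deepest tier can see up to (R + 1) ** N requests for a single
# client request, since each attempt at one tier fans out into attempts below.

def worst_case_requests(tiers: int, retries_per_tier: int) -> int:
    """Upper bound on requests hitting the deepest tier for one client call."""
    return (retries_per_tier + 1) ** tiers

# Three tiers, each configured with 3 retries: up to 64 requests can reach
# the deepest service for one client call during an outage, which is why
# retries are usually paired with circuit breaking or retry budgets.
print(worst_case_requests(3, 3))  # → 64
```

This is the end-to-end perspective the talk argues for: a retry setting that looks harmless on one service can be ruinous in combination.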
- 3 participants
- 20 minutes
19 May 2022
Unleash Declarative Data Access with GraphQL - Kevin Dorosh & Sai Ekbote, Solo.io
GraphQL is redefining the way that developers interact with APIs, putting application clients in control of the data they consume and placing new requirements on the platforms hosting these APIs. Knowing when to write code and when to let the platform do the work is a critical tradeoff as you scale GraphQL adoption. In this talk, Kevin and Sai will share their experience building GraphQL support directly into Envoy to support edge gateway and service mesh use cases. They will cover common deployment patterns, GraphQL-specific implications for security and policy controls, instrumenting existing mesh services (REST, gRPC, SOAP, Lambda) with GraphQL, and the benefits and tradeoffs between declarative and programmatic approaches to GraphQL composition. This will be a hands-on session with live demos and real talk, focused on patterns of adoption to easily implement GraphQL at scale. If you are a developer or platform engineer deploying GraphQL in your service mesh, this talk is for you!
- 4 participants
- 31 minutes