Cloud Native Computing Foundation / KnativeCon EU 2022

These are all the meetings we have in "KnativeCon EU 2022" (part of the organization "Cloud Native Computing Foundation"). Click into individual meeting pages to watch the recording and search or read the transcript.

2 Jun 2022

Keynote: Knative and the Open Cloud: Why move Knative to the CNCF? - Aizmahal Nurmamat kyzy, Google

Google’s open cloud relies on open source, and we have long believed in the vision of Knative making it easy to run containers for serving. In this talk, we will cover why we believe that a long-term home in the CNCF is the right thing for developers and for Google’s open cloud.
  • 1 participant
  • 10 minutes
kubernetes
kubecon
native
google
collaboration
knative
people
special
milestones
microservices

2 Jun 2022

Opening + Welcome - Aizmahal Nurmamat kyzy, Google + Carlos Santana, IBM [KnativeCon Program Committee Member]
  • 2 participants
  • 8 minutes
gratitude
thankful
native
thanks
community
people
hi
bienvenidos
invitees
conversation

31 May 2022

Data Processing at Scale with Knative and Benthos - Mihai Todor & Murugappan Sevugan Chetty, Box

Knative Serving provides push-based autoscaling (scaling on requests per second or concurrency), which leaves a requirement on some component to push those requests. This works well for real-time HTTP/gRPC traffic, but what about event processing and batch processing? For event processing we can leverage webhooks or Knative Eventing (its various sources, brokers, etc.). The challenge lies in processing batch data from databases, CSV files, and the like, which are common enterprise use cases. Attaining autoscaling for each batch use case normally means developing a bespoke component, and this is where Benthos shines. Benthos is a stateless data streaming engine that implements transaction-based resiliency with backpressure. When connected to at-least-once sources and sinks, it can guarantee at-least-once delivery without needing to persist messages in transit. Data transformations are expressed in a DSL, a safe, fast, and powerful way to perform document mapping within Benthos. In this session, Mihai and Murugappan will demo how to leverage the best of Knative and Benthos to process data at scale.
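
To make the pattern concrete (this sketch is not taken from the talk), a Benthos pipeline that reads rows from a hypothetical CSV file, maps each record with Bloblang, and pushes the results over HTTP to a Knative Service might look roughly like this; the file path and service URL are placeholders:

  # Minimal Benthos config sketch: batch input -> Bloblang mapping -> HTTP push to a Knative Service.
  input:
    csv:
      paths: [ "./orders.csv" ]          # hypothetical batch source
  pipeline:
    processors:
      - bloblang: |
          # Pass each row through and stamp it; real mappings would reshape fields here.
          root = this
          root.processed_at = now()
  output:
    http_client:
      url: http://order-processor.default.svc.cluster.local    # hypothetical Knative Service URL
      verb: POST
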
  • 3 participants
  • 17 minutes
canadian
canada
scaling
workflow
native
configuration
streaming
kubernetes
autoscaling
ktv

19 May 2022

Accelerating Knative, Like Never Before - Yafang Wu, HUAWEI

Knative is the most popular serverless project in the cloud-native world today, thanks in part to terrific features such as portability compared with other serverless platforms. At HUAWEI CLOUD, we built our serverless platform on Knative, and there are tens of thousands of workloads running on it now. While building this platform, we found that improving performance and minimizing operational overhead are the key challenges. In this talk, we will go over: 1) minimizing memory overhead when using Knative, and 2) improving the performance of the Knative ingress data plane.
  • 1 participant
  • 14 minutes
kubernetes
container
service
cloud
server
computing
connectivity
workloads
utilization
kinetic

19 May 2022

Closing - Evan Anderson, VMware [KnativeCon Program Committee Member]
  • 3 participants
  • 8 minutes
exciting
great
native
conference
thank
special
people
think
going
vmware

19 May 2022

Connecting the World to Knative with Kamelets - Roland Huß, Red Hat

In Knative Eventing, sources are responsible for importing events from the outside world, converting them to CloudEvents, and sending them along to a Knative sink. But creating source support for a backend requires a considerable amount of effort and additional installation steps on your cluster. Kamelets are a new technology that provides a solution for this problem: they are general-purpose connectors from the Apache Camel ecosystem that are ready to use within Knative. Apache Camel is the prevalent open-source enterprise application integration framework that provides connectors to more than 300 systems. This presentation explains how to use Kamelets as event sources in Knative and connect them to your applications. You will learn how to install Kamelet support on your cluster, discover Kamelets from a catalog, and deploy and manage them with a Knative client plugin or directly with YAML resources. In a live demo, we will see the combination of Kamelets and Knative in action. Get ready and join the camel caravan to connect the world to your event-driven applications!
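
As a rough illustration (not from the talk), binding a ready-made Kamelet to a Knative sink is a single declarative resource. The sketch below assumes the Camel Kamelets catalog is installed and uses the stock timer-source Kamelet to feed the default Knative Broker; the names and values are illustrative:

  # KameletBinding sketch: a catalog Kamelet acting as an event source for Knative Eventing.
  apiVersion: camel.apache.org/v1alpha1
  kind: KameletBinding
  metadata:
    name: timer-to-broker
  spec:
    source:
      ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: timer-source              # stock Kamelet from the catalog
      properties:
        period: 5000                    # emit an event every 5 seconds
        message: "hello from a Kamelet"
    sink:
      ref:
        kind: Broker
        apiVersion: eventing.knative.dev/v1
        name: default                   # events land in the default Knative Broker
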
  • 3 participants
  • 30 minutes
canadian
sources
connection
camelet
providing
cluster
issue
kubernetes
nativecon
workflow

19 May 2022

Consuming and Replying to CloudEvents - Pablo Mercado, TriggerMesh

The presenter will share their experience managing CloudEvents with Knative components, walking through common scenarios and focusing on possible CloudEvent consumption patterns. CloudEvent consumers range from very simple one-way receivers to components that compose non-trivial orchestrations, and each scenario might require different reply and retry strategies, some of them beyond Knative's reach. This presentation will describe some of those scenarios and the choices available to manage them.
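
For context on the retry side of this discussion (a sketch, not from the talk): in Knative Eventing a Trigger can carry a delivery spec that controls retries, backoff, and a dead-letter destination, and a subscriber that replies with a new CloudEvent has that reply fed back into the Broker for further routing. The names and event type below are hypothetical:

  # Trigger sketch: route events to a consumer with retry, backoff, and dead-lettering.
  apiVersion: eventing.knative.dev/v1
  kind: Trigger
  metadata:
    name: order-processor
  spec:
    broker: default
    filter:
      attributes:
        type: com.example.order.created          # hypothetical CloudEvent type
    subscriber:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: order-processor                    # replies it emits go back to the Broker
    delivery:
      retry: 3                                   # attempts after the first failure
      backoffPolicy: exponential
      backoffDelay: PT1S
      deadLetterSink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: failed-orders                    # hypothetical dead-letter destination
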
  • 2 participants
  • 30 minutes
cloud
streaming
flows
kubernetes
event
responses
processing
topics
cluster
ack

19 May 2022

How Fast is FaaS? Reducing Cold Start Times in Knative - Paul Schweigert & Carlos Santana, IBM

Knative is used to build serverless-style systems. One of the key features of these systems is the ability to scale a service up and down on demand, only running pods when they are needed to handle requests. When scaling up, however, users are likely to encounter the "cold start" problem, whereby the latency before a new pod is ready to handle requests is non-negligible (2-5 seconds or more). Scheduling a Pod and making it available in Kubernetes fast enough to provide a FaaS (Function-as-a-Service) experience is a problem we face today, as it involves many components and a lot of orchestration. In this talk, Paul and Carlos will discuss how autoscaling works in Knative, the cold start problem space, and the steps taken by Knative to reduce container startup latency. One innovative solution is to pause the container CPU while maintaining state; this allows for a fast response by having warm containers available and orchestrated, a practice typically used in FaaS systems.
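
As background for the autoscaling discussion (a sketch, not from the talk), the knobs that determine how often cold starts happen live on the revision template as annotations; the service name and image below are placeholders:

  # Knative Service sketch: scale bounds and concurrency target per revision.
  apiVersion: serving.knative.dev/v1
  kind: Service
  metadata:
    name: hello
  spec:
    template:
      metadata:
        annotations:
          autoscaling.knative.dev/minScale: "0"     # "0" allows scale-to-zero, so cold starts are possible
          autoscaling.knative.dev/maxScale: "10"
          autoscaling.knative.dev/target: "10"      # desired in-flight requests per pod
      spec:
        containers:
          - image: ghcr.io/example/hello:latest     # hypothetical image
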
  • 6 participants
  • 31 minutes
kubernetes
service
knf
daemon
capacity
node
cluster
interface
https
rollouts

19 May 2022

How We Built an ML Inference Platform with Knative - Dan Sun, Bloomberg LP & Animesh Singh, IBM

Deploying and scaling machine learning (ML)-driven applications in production is rarely a simple task. However, serverless inference has been simplified and accelerated through the use of Knative. Knative runs serverless containers on Kubernetes with ease and handles all the details related to networking, request-volume-based autoscaling (including scale-to-zero), and revision tracking. It also enables event-driven applications by integrating seamlessly with various event sources. In this session, the speakers will discuss why their organizations initially chose Knative when building their ML inference platforms, and how these efforts evolved into the KServe (github.com/kserve) project. We will also discuss how we leverage Knative to implement blue/green/canary rollout strategies for safe production updates to our ML models, improve GPU utilization with scale-to-zero functionality, and build Apache Kafka event-based inference pipelines. At the end of the talk, we will share some testing benchmarks (compared with the Kubernetes HPA), as well as performance optimization tips that have enabled us to run hundreds to thousands of Knative services in a single cluster.
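
To give a flavour of what the resulting API looks like (a sketch, not from the talk), a KServe InferenceService can declare scale-to-zero and a canary traffic split declaratively; the model name and storage location below are placeholders:

  # KServe InferenceService sketch: scale-to-zero plus a canary rollout.
  apiVersion: serving.kserve.io/v1beta1
  kind: InferenceService
  metadata:
    name: sklearn-iris
  spec:
    predictor:
      minReplicas: 0                   # scale the predictor to zero when idle
      canaryTrafficPercent: 10         # send 10% of traffic to the newest revision
      sklearn:
        storageUri: gs://example-bucket/models/iris    # hypothetical model location
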
  • 3 participants
  • 25 minutes
ml
monitoring
analytics
ai
advanced
influence
model
gpu
microservices
scaler

19 May 2022

Keynote: Knative, the future looks bright! - Naina Singh, Red Hat

Find out why the OpenShift Serverless team at Red Hat stands behind its vision of how Knative will continue to positively impact businesses, and how it helps drive Knative forward as the premier container-based serverless solution. The future looks bright!
  • 1 participant
  • 6 minutes
servers
serverless
kubernetes
microservice
container
native
services
api
applications
docker

19 May 2022

Kn, The One-Stop Shop for Knative - Navid Shaikh, VMware (WG lead) & David Simansky, Red Hat (WG lead)

Knative simplifies serverless application deployments of cloud-native workloads. Knative brings you flexible consumption-based autoscaling and provides the primitives for creating production-grade event-driven applications. In addition, the command-line tool "kn" greatly simplifies the developer workflow with Knative. This presentation takes you through the steps of a typical developer workflow with kn: from the initial, imperative, and iterative creation of services, through GitOps integration in a CI pipeline, up to the final production rollout, kn supports all of these stages. Kn also supports kubectl-like plugins for features such as connecting to backends like Kafka, or a rich function experience for building Knative services from scratch. After introducing Knative itself, we will show how easy it is to run applications with Knative from the command line with a set of live demos. By the end of this session, we are sure that kn will have become your new Knative best friend.
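
As a small illustration of the rollout stage (a sketch, not from the talk), a command such as "kn service update hello --traffic hello-00001=90 --traffic @latest=10" manipulates the traffic block of the underlying Knative Service; the names below are placeholders:

  # Knative Service traffic sketch: 90/10 canary split between revisions.
  apiVersion: serving.knative.dev/v1
  kind: Service
  metadata:
    name: hello
  spec:
    template:
      spec:
        containers:
          - image: ghcr.io/example/hello:v2        # hypothetical new image
    traffic:
      - revisionName: hello-00001                  # previous, stable revision
        percent: 90
      - latestRevision: true                       # newest revision takes the canary share
        percent: 10
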
  • 4 participants
  • 30 minutes
ykkn
knc
native
readiness
cli
canadian
presentation
regard
ux
client

19 May 2022

Knative Functions: An Introduction, Demonstration and Roadmap - Lance Ball, Red Hat (WG lead) & Mauricio Salatino, VMware

Knative Functions fall somewhere between a CaaS (Containers-as-a-Service) and a FaaS (Function-as-a-Service) and provide an experience similar to Google Functions or Azure Container Apps. Those platforms allow you to run your applications without needing to know about containers or Kubernetes. They take source code, often just a function, and convert it into a runnable artifact deployed on a cluster, while hiding all of the Kubernetes and container details from you. In this talk, you will learn about Knative Functions: what they are, how the project was created and evolved, and, most importantly, how you can use them to quickly and easily deploy event-driven, Knative serverless applications. Lance and Mauricio will be live coding to create, build, and deploy functions that consume and produce CloudEvents in multiple programming languages, illustrating the polyglot nature of Kubernetes and the serverless capabilities of Knative.
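
For orientation (a sketch, not from the talk), a function project created with the func CLI (for example "func create --language go greeter") is described by a small func.yaml project file; the exact contents vary by version, and the values below are placeholders:

  # func.yaml sketch: minimal project metadata the func CLI uses to build and deploy.
  name: greeter                           # function (and resulting Knative Service) name
  runtime: go                             # language pack used to build the function
  image: ghcr.io/example/greeter:latest   # image produced by "func build" / "func deploy"
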
  • 6 participants
  • 28 minutes
game
functions
users
demo
execute
session
advanced
microservices
tweet
soon

19 May 2022

Lightning Talk: Docker-free Functions for Knative - Zbynek Roubalik, Red Hat

Knative Functions allows developers to build and run their applications on Kubernetes without knowing anything about containers. Instead, the source code is built transparently using a local Docker or Podman installation and deployed as a Knative Service in a few simple steps. Sometimes, however, a local build in Docker or Podman is not possible or simply not the preferred option for a developer. Luckily there is now an alternative. In this lightning talk, Zbynek will present the latest feature of Knative Functions: On-cluster builds, which frees users from creating container images locally. The presentation will describe the different options for creating container images within Kubernetes clusters, the current status, and the plans for on-cluster function builds. A live demo will show the on-cluster builds in action.
  • 1 participant
  • 6 minutes
kubernetes
docker
deploying
container
developer
git
package
implement
tasks
robotic

19 May 2022

Lightning Talk: Integrating Debezium and Knative or How to Stream Changes the Knative Way - Christopher Baumbauer, Atelier Solutions

This talk will highlight some of the work Chris did to stream database change events from Debezium into Knative, ensuring an in-cluster data cache is kept up to date. While highlighting one useful use case, the talk will go into more detail on what it took to add support for streaming events using Knative instead of Apache Kafka, as well as some of the caveats and pitfalls to be aware of if you are also looking at how to convert your microservices into Knative-enabled services.
  • 1 participant
  • 6 minutes
kubernetes
modernizing
native
process
oracle
cluster
client
deployments
dbzm
kafka

19 May 2022

Lightning Talk: Modernizing Your IBM-MQ Applications with Knative and Kong - Sebastien Goasguen, TriggerMesh

IBM MQ has long been at the core of many enterprise applications, especially those using mainframes. Enterprises are trying to extend the lifetime and relevance of such legacy systems in a cloud-native era by linking them with containerized workloads. Legacy applications still rely heavily on these systems, and they cannot easily be replaced by cloud solutions. In this talk we will show how you can bridge the two worlds of legacy systems and modern cloud architecture. We will make use of a Kong API gateway, Knative, and some TriggerMesh components for event transformation and connection with IBM MQ. We will take the example of a Mulesoft application that exposes a REST API in front of a mainframe system of record, and we will decompose it using an event-driven system built on containers with Knative and TriggerMesh. This will show how you can keep using your IBM MQ system while ushering in a more open-source, cloud-native world that speeds up your application workflow and reduces cost.
  • 1 participant
  • 12 minutes
modernizing
modernize
modern
software
proprietary
adapt
applications
devops
native
mqtarget

19 May 2022

Lightning Talk: What is the Knative Asynchronous Component? - Angelo Danducci II & Michael Maximilien, IBM

Currently, all Knative services are called in a synchronous fashion. However, in many use cases a blocking request/response primitive is not sufficient. In particular, for data processing and AI use cases, a blocking invocation approach is sub-optimal: the execution of these services is often long-running and surpasses response timeouts, or results in the client having to manage a multitude of pending blocking requests. A more natural invocation pattern is to allow for "fire and forget", or asynchronous, invocations, where services are called in an async manner. Doing so allows the client not to block while the service execution unfolds. The Knative async-component aims to achieve exactly this invocation pattern. Best of all, it does so in a natural and progressive manner that makes any service asynchronous with a simple label and lets the service's caller decide when to invoke the service synchronously or asynchronously. The project is still in incubation, but once it reaches beta level we encourage Knative users with similar async use cases to download it and try it in their own Knative clusters.
  • 1 participant
  • 6 minutes
asynchronous
serverless
compute
ai
services
ibm
requests
problem
application
synchronous

19 May 2022

The Past, Present and Future of the Knative Community - Moderated by Michael Maximilien, IBM; Evan Anderson, VMware; Roland Huß, Red Hat; Whitney Lee, VMware; Sebastien Goasguen, TriggerMesh

With the 1.0 release and becoming a CNCF incubating project, the Knative community has reached two significant milestones recently. Join us for a fireside chat about the beginnings when Knative was still called Elafros, how the community evolved over the last four years, and what big features are on the horizon. Also, learn about how you can become part of the Knative community and join the journey.
  • 16 participants
  • 54 minutes
canadian
canadians
canada
native
hi
transitioning
client
discussion
kubernetes
vmware