Description
Is it time for you and your organization to develop Serverless applications? Our expert panel will discuss why enterprises adopt serverless tech, what the tradeoffs are, and how to make a decision. Bring your questions!
A
Now, good morning, good afternoon, good evening, wherever you're hailing from. Welcome to a special KubeCon EU office hours session. Today we will be talking about serverless and serverless functions with our Knative team, and I'm very happy to have the Knative team on. I don't feel like we've had enough Knative on the channel lately, so it's very good to have you all on. Josh Berkus is the one that actually organized most of this. I know he was just about to take a sip of his drink, so: Josh.
C
Hey everybody, I'm Josh Berkus. I am in charge of Red Hat community for all things cloud native, and that includes Knative and serverless stuff. Hopefully some of you just watched the Serverless Workflow spec session that just ended. We are going to get into Knative and functions here, and we have a good chunk of the Red Hat Knative serverless team.
F
Hi folks, my name is Luke Kingland. I'm based in Tokyo, and I'm also working on the functions aspect of OpenShift, along with Roland and Zbynek.
G
I see, actually, that I just introduced myself as well. I'm the random extra person in the room: I'm the marketing person. I'm Simon Seagrave, the product marketing manager for OpenShift Serverless, and I'm here to support the team as the questions come through. So please engage with us, ask some questions as Roland, Zbynek, and Luke go through today's session. We'd love to hear from you.
D
Yeah, sure. We have some slides prepared for you, so there's some introduction to Knative, I think, just to warm up, and then we will go into more details about the different components of OpenShift Serverless and Knative, and especially functions, since this is kind of a new thing on top of that. I think this is also super exciting. So let me share my screen.
D
Yeah, so let's talk about serverless, and, by the way, if you have any questions, please ask immediately. We will try to make this session as interactive as possible, so just ask, and we will stop and answer your questions. But let's start with serverless on Knative and Kubernetes. First of all: what is serverless? You probably know that guy, and you probably also know what CGI bins are if you're a little bit older, like me, and of course this guy here has a point, actually.
D
Is it really more than just a CGI bin that you throw over the fence, and then somebody operates your CGI bin? In some sense this is true, but actually I think we can show you today that serverless is much, much more than CGI bins, and that's the gist of this talk here.
D
There's a lot of buzz around serverless, and if you ask 10 people about serverless you probably get 12 different opinions about what serverless is, so we came up with this kind of definition. I don't mean this is the whole truth; it's something that we use as our definition, and we also make a distinct difference between serverless and FaaS.
D
So it's really about a deployment model where the servers are not visible to you, which means you also don't have to manage your servers yourself, and so you can really have a fine-grained billing model. It means that you only have to pay for what you are really using, so pay-as-you-go, so to say, and I think this is one of the main benefits if you want to use serverless. And it also uses a deployment packaging.
D
In our case this is a container image, but there are other serverless architectures out there, like AWS Lambda or Microsoft's Functions, that...
D
...sorry, use a different deployment format. On the other hand, there is Function as a Service, which is a programming model that builds on top of serverless. Functions also get deployed with a serverless model, but it allows you to have a higher-level abstraction for how you create your application. So there's kind of a function signature that you need to fulfill, and then you just hand over the function itself without any other runtime, and so it also fits in the context of Knative.
E
I just want to highlight what you have mentioned: basically, how do we look at serverless in Knative, and at functions? If you have your container with your application and you just want to make it serverless, we call this serverless containers. So basically you can just package your container as a serverless container with Knative.
E
But on the other hand, functions provide you with some glue, with some logic, that basically manages the handling of the incoming events and incoming requests for you. In the end, though, the function itself is packaged and deployed as a container. So basically both approaches, deployed on the cluster, behave the same way; it's really a matter of the packaging, the setup, and the glue of the functions, for example.
D
Yeah, so this was kind of a general remark about serverless functions. Now, to set the context right, let's have a look at what Knative actually is. Knative is a serverless platform where you can run and operate your applications in a serverless way, and, as we mentioned, serverless is really the deployment model; Knative is really the platform underneath that model which allows you to deploy and manage modern serverless workloads.
D
This is kind of the safe definition of Knative; you find it on the web page for Knative itself. And Knative actually consists of two components.
D
There are two major components: one is called Serving, the other one is called Eventing. Serving is all about a request-driven model that allows you to run your containers in a very simplified way, and it allows you to scale your applications automatically based on the load that comes to your service. This includes scaling down to zero, which means that if your application is not used at all, you don't have to pay anything for your application to be deployed.
D
I think this is one of the big things with the Serving component. On the other hand, there's also the Eventing part, which provides you a common infrastructure for consuming events from external sources, routing them through the system, and maybe eventually a Knative service which reacts to the events from the outside. So it's perfectly suited for EDA-type applications, applications based on event-driven architectures. So this is the high-level view on that, but I haven't really talked about what Knative actually is yet.
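The event routing described here, an external source delivering events and a Knative service reacting to them, is typically wired up with a Broker and a Trigger. This is a minimal illustrative manifest, assuming a broker named `default` already exists; the event type and subscriber name are hypothetical:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger                # hypothetical name
spec:
  broker: default                 # assumes a broker called "default" exists
  filter:
    attributes:
      type: dev.example.event     # hypothetical CloudEvent type to filter on
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service            # hypothetical Knative Service that reacts to the events
```

Events arriving at the broker whose `type` attribute matches the filter are forwarded to the subscriber service; everything else is ignored.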
D
So: Knative is an open source project which was started in 2018 by Google, and it's a community-driven project with a lot of vendor backing. There are big companies behind Knative, supporting and sponsoring Knative development, but it's totally open for the community.
D
You find the coordinates for the project on the slides. The code is on GitHub, there's a dedicated website for it, and, as I mentioned, there are big companies like Google, Red Hat, IBM, VMware and so on that are supporting Knative. And recently, in the last year, some changes happened to the governance.
D
Actually, we now have a very open governance, which is a little bit similar to Kubernetes in that it has similar kinds of committees: a steering committee, and a technical oversight committee, which is for the general direction, more for the technical direction.
D
It's important to mention that Knative itself is not part of a foundation, so it's still a project which has a trademark and a brand, and this trademark is still kind of owned by Google. But there's also now a trademark committee which deals with all the trademark- and brand-related issues, and its seats are given to the biggest contributors to the project.
D
Otherwise, there are multiple working groups which meet regularly and work on certain aspects like Serving and Eventing, but also the client and other things. We have a six-week release cadence, and the current version is 0.22, but we are working hard to get 1.0 out this year, and actually all of the work that we currently do is really focused on getting to this 1.0 version.
D
The next question might be how you can try out Knative, and there are several options, of course. You can run Knative on any Kubernetes cluster. I forgot to mention that Knative itself is based on Kubernetes; that is, of course, a very important thing, sorry for that. But yeah, you can run it on any vanilla Kubernetes cluster, kind of a minikube or kind one, and you can run it in the cloud on any Kubernetes offering.
D
How to do this is described on the Knative website, but there are also commercial offerings where you really get support, where these systems run the Knative workload for you. One of them is IBM Cloud Code Engine, which allows you to run Knative services directly in the cloud, like you can do with Google Cloud Run, which is running on GKE. These are really public cloud offerings, so you have kind of a virtual-cluster Knative installation and you don't have to worry about the management. And then we have Red Hat OpenShift Serverless.
D
This is the project that we are working on, and it supports all Knative features. You can run it in different ways: you can run it on-prem, and you can run it in a managed way in the cloud, so you really get all these hybrid cloud features there, and you can get full support. So you have really plenty of choices to choose from to use Knative or try it out.
C
And, of course, some people came here already being Knative users, with technical questions. One of them actually wants to know, and we might want to follow up in chat, how to enable auto TLS in Knative, because they've had a lot of trouble doing that, they say.
D
Yeah, this is a good question. Knative itself does not have any TLS-related features intrinsically; it works together with some external services, like Istio, so you can enable TLS-like features, like mTLS, for example, by leveraging a service mesh. But otherwise I think auto TLS should be supported directly; there is documentation around that.
D
So maybe, if there is an issue with that, I really would like to ask you to come to our community channels, which are on Slack, or open a GitHub issue. We are there to help, because it's hard to analyze it here, but in principle we have these kinds of features as well. Yeah.
D
Oh, sorry, I got the Slack workspace wrong there. We can place the right link in the chat as well. Yeah.
C
So the next question is: do serverless containers in Knative have built-in autoscaling, or do you also need to add an external load balancer?
D
Good question. I don't have to answer all the questions — Zbynek or Luke, jump in if you would like to answer — but I can also answer it. So yes, of course we have autoscaling intrinsically in Knative; it's one of the core features, actually, and the good thing is that this autoscaling is based on consumption.
D
It's not so much like the horizontal pod autoscaler you know from Kubernetes, which is by default based on CPU and memory consumption; Knative has a very sophisticated autoscaler. We will see it in action in a minute when I get to the demos. By default it really counts concurrent traffic, and scales based on concurrent traffic, which means it really counts how many requests are in flight. Then, depending on the number and on the configuration that you have chosen, it either scales up your pods, if the concurrency is higher than a certain threshold, or, if after a certain amount of time no traffic comes in, it scales the number of replicas for your application down to zero. So, long answer to the question; the short answer: yes, of course, we have autoscaling.
G
Roland, just a really quick question, one we get asked all the time. Obviously one of the real key value propositions of this is the scale to zero, which, from an infrastructure person's perspective, is great because it frees up resources, so you can oversubscribe your hardware, sweat your hardware a little bit more than normal. In the real world, with the engagement with customers and what have you, that autoscale functionality — how many people are you seeing out there actually leveraging that scale to zero?
D
Yes, actually, I think this is one of the key features why people are really trying to use it, especially when they have a very inhomogeneous traffic shape, like bursty traffic. Let's imagine you have a Christmas card shop which only sells Christmas cards at Christmas and not in the rest of the year. Then, of course, it's super awesome if your application sleeps for, let's say, 11 months and then fires up in the last month.
D
That's the extreme example, of course, but you get it: if you don't have a constant traffic shape, with traffic constantly flowing in, then it's super convenient to scale down to zero. This is, for example, especially useful for startups which want to minimize the risk on cost, so that you can try out things, and if it doesn't work out, it just doesn't cost you anything.
E
Yeah, and I would like to highlight that, basically, if the deployment is scaled to zero, you don't have to be afraid that an incoming request to this scaled-down service is lost, because Knative basically holds the request, scales the deployment to one, and then forwards the request to your application. So no requests are lost during the scale-down phase. Awesome.
C
Well, we're getting some other questions, but I think some of them are stuff that you're going to have in these slides. One thing I will interrupt and say, because somebody has asked this a couple of times: it's a question we're not going to answer. Somebody is asking questions specifically about Google Cloud Run, and this is the Red Hat serverless team, so we can't actually answer questions about Google Cloud Run.
C
We just don't know; that's their thing, and I'd encourage you to attend a Google developer relations session where they can answer those questions. So, if you want to, go ahead and continue.
D
Okay, let's talk about Serving. As I mentioned, Serving is one of the core components of Knative, and it's really about routing your traffic to your application, about scale to zero, which we already talked about a bit, and also about revisions, which means that you can have snapshots of applications. That allows you to roll back and also to split your traffic among different versions of your application. We will see this in a second in the demo.
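The revision-based traffic splitting mentioned here is expressed in the Service's `traffic` block. A sketch, with a hypothetical service, image, and revision name:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service                       # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: example.com/my-app:v2   # hypothetical new image
  traffic:
    - revisionName: my-service-00001     # the earlier snapshot keeps most traffic
      percent: 90
    - latestRevision: true               # the new revision gets a 10% canary share
      percent: 10
```

Shifting `percent` back to the old revision name is what makes the rollback Roland mentions a one-line change.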
D
What are the concepts? As mentioned, it's really demand-based autoscaling, which means it's really based on the real traffic that comes in, so you get a really direct relationship between the usage of your service and the resources that it consumes —
D
— that is, the number of pods that are running. It separates the code and the configuration: the code, of course, is in the container image, and the configuration is what you define here. The configuration is really snapshotted, and it's kind of opinionated. Actually, sorry, the slide here is a little bit older, so some of the restrictions have already been relaxed.
D
You can have all of this with Knative. This is how it looks in code: Knative itself is implemented with CRDs on top of Kubernetes, so it uses the Kubernetes extension model.
D
You have the high-level Service object, the custom resource that is exposed to your user, and with this you have kind of implicit objects that are created on behalf of the Service: the Route, Configuration, and Revision are typically not created by the user and not exposed to the user.
D
Although the Route, for example, could still be available to create on your own, typically the only interaction with Knative is through the Service. As you see, the Revision is kind of a revision history, and the Configuration itself is really the head of the revision set. So any time you change the Configuration, either via the Service or directly...
D
...then a new Revision gets created for you, on your behalf, by the backend controller. Okay, but I think one of my favorite things here is that, in addition to all these benefits, it's really also a simplification of the deployment model itself. Here you see a classical Kubernetes deployment for a very simple application.
D
You see you have tons of YAML, which is quite huge. Beside this Deployment, you also need a Kubernetes Service in front of your Deployment, and then, if you want to access it from the outside, you need an Ingress. Now compare this to the actual version with Knative. Again, sorry, it's an older slide — the API version is, of course, already v1; this is my bad — but you also have a Service here. Unfortunately, it's also called Service.
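For comparison, a Knative Service manifest of the kind shown on the slide looks roughly like this; the name and image reference are placeholders. This one object replaces the Deployment, the Kubernetes Service, and the Ingress of the classical setup:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service                            # Knative's Service, not the core Kubernetes Service
metadata:
  name: hello                            # placeholder name
spec:
  template:                              # revision template; editing it creates a new Revision
    spec:
      containers:
        - image: example.com/hello:latest  # placeholder image reference
```

Route, Configuration, and Revision objects are created behind the scenes from this single resource.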
D
So I think it's now time to go to the demo, actually.
C
Okay, wait, before we get into the demo: because of some of the questions — we're getting a lot of questions from people who are new to the whole concept of serverless.
C
So does somebody want to give the two-sentence elevator pitch of why you would want serverless, like Knative, period?
D
Okay, so the thing is that it allows you to simplify deployment for applications, you get flexible autoscaling based on traffic, and, as we will see with Eventing, it's also a perfect platform for creating applications based on event-driven architectures. I think that's it: it's really about not having to worry about, for example, the number of replicas that you want to have for your application.
E
Yeah, I just want to mention the difference from the HPA, which is in Kubernetes: basically, the HPA by default scales based on CPU and resources. So, for example, your application is consuming some amount of memory and some number of CPUs, but some deployments need to be scaled based on the incoming traffic. This is where the Knative autoscaler is different: it scales based on the number of concurrent requests. This is beneficial for a lot of deployments, I would say. Yeah.
G
There's a clue in the name there, actually: the autoscaling. You don't need to worry about, you know, take into consideration, how you're going to scale your application up and down; it's all handled for you, which is pretty massive. That's great from a developer's perspective, and from an IT ops person's perspective there are, again, real savings on resources and underlying infrastructure. If you've got a set of applications, serverless applications, that are scaling back to zero...
G
...obviously they're not running concurrently all the time, so there are massive infrastructure savings, resource savings, to be had when you're scaling these apps, you know, back to zero and up as needed as well.
D
Yeah, actually, I hope the demo makes this clear. It also shows you that Knative comes with a quite decent developer experience; it's also made for developers. So Knative is not only for operations, so that you can save costs; one of the main focuses is really to make it easy to get your payload, your applications, onto the cloud, and I hope I can make this clear with the demo now.
D
What you see here: I'm using a terminal, and at the top of the screen you see a watch on the number of pods that are running on the cluster. I'm running against an OpenShift cluster in the cloud which has OpenShift Serverless installed, one of the products based on Knative, but actually this is just pure Knative. Everything I'm showing here works perfectly fine with a pure Knative installation on top of Kubernetes.
D
What you see is that I'm leveraging here a CLI which is called kn.
D
This is the Knative CLI, but please keep in mind that everything I'm doing now in the demo you can also do with YAML files and kubectl, for example; kn just gives you a really nice user experience. What I'm creating now is a service, and the only things that you need to provide as parameters are the name of the service and a reference to a container image. In this example, this is a very simple REST service which just generates random numbers.
D
So, when you curl it — we will see it in a second — it just returns a random number. The application itself is based on Quarkus.
D
Quarkus is a Java runtime framework which gives you extremely fast startup times and extremely low memory overhead. It's also done by Red Hat, and Quarkus is really super amazing, but there are probably tons of other talks about that, so sorry. So let me create that. You see that with this CLI you wait synchronously until the service is up, and you see at the top that it gets started. You see that this goes very quickly, and now you have it running on your cluster.
D
You automatically get an ingress created for you, and you get the URL for the service, and of course you can just use curl. I'm just using this URL and piping the output into jq for formatting, and you get back the random number. So, a super sophisticated service here. Okay, so now we have the service running.
D
Update: what I'm doing here now is updating the current service and setting some autoscaling parameters. You have tons of autoscaling parameters that you can use to influence the way the autoscaler acts. In this case, I say that I want 10 concurrent requests at maximum, which means that after one pod serves 10 requests...
D
...another pod gets scaled up to serve the other requests. Roughly — this is really roughly, because there are some other parameters, like a burst capacity, but I don't want to go into much detail — but roughly, if you have this concurrency limit of 10 and you have 100 requests coming in, then you get 10 pods which serve these requests.
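The kn update shown here sets an autoscaling annotation on the Service's revision template; expressed in YAML, a soft concurrency target of 10 looks roughly like this (a sketch of the relevant fragment, not the full manifest):

```yaml
spec:
  template:
    metadata:
      annotations:
        # Soft target: the autoscaler aims for ~10 in-flight requests per pod,
        # so ~100 concurrent requests yield roughly 10 pods.
        autoscaling.knative.dev/target: "10"
```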
D
The default scale-down window is 60 seconds, and you see already at the top that the pod is going to be scaled down, because I'm talking so much here. Okay, and then I'm adding an environment variable here: a delay of a thousand seconds — a thousand milliseconds, sorry. This has a meaning for the application itself: it means the request sleeps for a thousand milliseconds while being processed.
D
This is important to show you the upscaling behavior, because otherwise, if the request is too short, you don't get to a concurrency of 10. Okay, let me do the update, and you see that for every update you again get a new deployment. Just to verify that the update really works, you can look at all the revisions that are there, and here you see the original revision.
D
That was revision one; then a new one is created, which is revision two, and this revision gets all the traffic at the moment. If I make a curl, it's just the new version, but actually I'm now trying to run a mini load test on that. hey is a CLI command which, in that case, makes 50 concurrent requests. Let's see what's happening now: if I run that, you'll see that immediately tons of containers are scaled up.
D
And you have seen something else: it's not one pod after the other — multiple pods have been spun up immediately, together. This is because there's a very sophisticated way to detect this, a so-called panic mode, which means that if the increase in requests is very quick, you just create more pods in advance, in anticipation that even more requests will come in. So yeah, but actually I think I have already shown you the key thing.
D
And now, when this is done here, all your pods go down after 60 seconds, when no traffic comes in.
G
That's great, Roland — your Christmas card shop is meeting demand, you're not losing any customers, you've scaled out. So yeah, brilliant.
D
Yeah, there's of course one caveat, because this introduces a latency: if you start from zero to one, you always have to wait until the first container starts up. This is the so-called cold start latency, which is always kind of a problem, and we try to minimize it. This cold start latency has different components. One is, of course, the application itself: it needs to start fast. This is the reason why we use Quarkus, because Quarkus is super fast.
D
Okay, sorry, so this was the demo. Josh, are there any questions around the demo? Actually, you're on mute, I think.
C
How did that happen? Okay, we have quite a few questions queued up, and I don't think they're necessarily related to Eventing, but a bunch of them are related to scale-up and scale-down. One first thing is a couple of general conceptual questions, and I'm going to start with the biggest one, which is: people are not understanding the principle of scaling up and scaling down a serverless thing.
D
The assumption with the HPA is that if traffic comes in, your application uses more CPU when it has 10 requests, compared to one request, when it processes them simultaneously; the same goes for memory. So the scaling trigger, the scaling event, for the HPA really goes on this kind of indirect metric, which of course gives you less deterministic behavior than if you really scale based on the actual number of requests, or the duration of requests. In this case it's really concurrency, which means parallel requests in flight, and so I think the autoscaler is better suited when it maps directly to the traffic, to the load that you have. And, of course, if you think a little bit further, it also maps to the revenue you get, because of the more users you have in parallel.
C
Yeah, so a follow-up on that from Slack: we've got somebody in Slack who works in the telco sector, and their concern about using serverless is the delay to basically start up a new container when you're autoscaling.
D
Yeah, sure. Zbynek or Luke, jump in if you have any ideas as well, but actually I think the one thing that you can always do is set the min scale to one, which means that you just prevent scaling to zero.
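Preventing scale-to-zero, as suggested here, is again a per-revision annotation; a sketch of the relevant fragment:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Keep at least one pod warm at all times: avoids cold-start latency
        # at the price of paying for an idle replica.
        autoscaling.knative.dev/minScale: "1"
```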
D
Of course you then lose some of the benefit of saving resources, but if you know that you will always have traffic, and it's super important that you have a very low latency for every request, then you set the min scale to one, and then you always have at least one pod running for the application, so you can avoid the cold start issue altogether. But then we also try to improve the cold start latency on several layers; we're looking into Kubernetes itself, for example.
D
So this is one of the things we look into, but we also look into more things, how we can improve Knative. In the end, though, we must be clear: it's a container-based system, so you will always have an overhead which is probably still higher than what you would get from proprietary systems, like Lambda or any other stuff which does not use containers.
D
But on the other hand, you have the whole benefit of containers: you can run everything in a container with serverless on Knative, not only specific software or APIs. For Lambda, for example, there's only a handful of programming languages that are supported, but with serverless containers you can provide everything. So, if you're aware of that, there are some answers.
G
The other thing to point out is that it really comes down to the underlying application, how it's been developed. I mean, an application that takes 10 seconds to start, for example, is always going to take 10 seconds to start. That's why you really need to look at the development of your application, and Roland touched on it there before: at Red Hat we've got the Quarkus Java runtime framework, which was built with this in mind.
G
A lot of it is low memory footprints and also very fast start times. So, you know, if you've got that older, legacy-type application that does take maybe 10 or 20 seconds to start, even in the serverless environment it's still going to take 10 to 20 seconds to scale up and start. That's why you've got to start looking at the development of these applications with speed and performance in mind as well. Sure.
C
Yeah, here's actually sort of a bigger concept question: we have somebody coming in who's actually familiar with Amazon Beanstalk, and they wanted to ask, can somebody compare work-dispatch-and-load systems like Amazon Beanstalk to Knative and Kubernetes serverless?
D
Okay, okay, so maybe we continue, but actually one question to Zbynek now, because I still have Eventing and I think we only have that many minutes of time. So maybe we can just start with functions first.
D
Let me just — let's actually come back to the event sources and Eventing when we have time; I don't want to talk a lot about Eventing now, because we also have a quite nice demo around functions.
E
Yes. So we described the two main parts of Knative, which are Serving and Eventing, and we are building on top of that, because we would like to provide an extended, let's say, developer experience. With plain Knative you still need to create the container on your own: you need to package it and do the deployment. But with functions...
E
...we would like to get rid of all of this stuff. We would like developers to just write the business logic and very simply deploy this function. The function gets deployed as a Knative service, so it can benefit from everything a Knative service gets: the autoscaling, the configuration, and the Eventing part. Our functions stack is based on CNCF Buildpacks. If you are not familiar with Buildpacks, it's a cool technology, so I recommend you check it out. Basically, very shortly...
E
it allows you to package your application and create an OCI image from your application. There are builders and stacks where you define the type of your application, and the buildpack then does an inspection of your code and, based on the actual application and the language,
E
it builds the application and produces the image. For our functions we currently support the Quarkus runtime, Node.js, Spring Boot, and Go, and we are releasing Python support very soon. The functions are open-ended: basically we provide you with a function template where you just need to implement your business logic, and then we call the buildpacks to package your application and deploy it as a Knative service.
E
There is one more important aspect, which is that each function can respond to HTTP, meaning plain HTTP requests, as we saw with the serving demo, or it can respond to CloudEvents. I suppose Roland will be talking about CloudEvents later in the eventing section, but basically CloudEvents are a specification where
E
you have components that understand CloudEvents, and they don't really care what's inside the data. You can, for example, have a Kafka connector which listens to your Kafka topic and converts the Kafka message from the topic into a CloudEvent, and then it can be sent through the whole system in a unified way. The same goes for other services, say Redis, or whatever.
E
Whatever else can come to your mind. It's a very simple specification, so our function can respond to such a CloudEvent as well. This way we can enable a real event-driven architecture and development. Roland, if you can go to the next slide.
E
I'm not sure off the top of my head what the currently supported version of Kafka in OpenShift is, but basically OpenShift Serverless, by default, you can install through the OperatorHub in your OpenShift instance. So whatever ships there should be supported, if I'm not mistaken. So, to really enable the developer experience,
E
we have created a plug-in for the kn CLI, which I showed once before, and with this kn CLI and the func plugin you can manage your functions: you can build them, deploy them, and so on. So maybe I'll just show the demo now.
G
And actually, before you do that, Zbyněk, just to mention to the folks out there: this is currently in dev preview.
E
Yeah. So this project is not part of Knative, but we are building on top of Knative, and this is obviously an open source project as well. As I mentioned, we developed a func plugin for the kn CLI, and as you can see, we can do several things with it. So actually, let me just quickly create the same application that Roland did before, to print random numbers.
E
All I need to do is run kn func create, and I need to name my function, so I'll name it "random". Okay, let me just show you the help for the create command. As I was saying before, each function can be triggered either by HTTP or by events, so we need to specify which one we are creating; the default trigger is HTTP. Then we need to specify the runtime, and the Node runtime is the default one.
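The create step he walks through can be sketched as follows; this is a hedged sketch of the dev-preview kn func plugin, and flag names may differ between releases.

```shell
# Scaffold a new function project named "random" (dev-preview sketch;
# flag names may vary between kn func releases).
kn func create random

# The defaults made explicit: Node.js runtime, HTTP trigger.
kn func create random --runtime node --trigger http

# For a CloudEvents-triggered function instead:
kn func create myhandler --runtime node --trigger events
```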
E
As you can see over here, my project was generated. It's a template, and as you can see, it's a pretty standard Node project: I have a package.json with dependencies, and then I have an index.js file. This is the important part; basically, this is the place where you need to put your business logic. The only thing the function needs to do is export the actual function that is to be invoked.
E
In this case, when the invoke function is called, it handles a POST request and a GET request differently. So let me just delete this one, and I will quickly paste in the function implementation to save time. As you can see, all I need to do is return some random number and export this function.
E
To deploy this function, all I need to do is run kn func deploy, and I specify that I want to deploy my random function. What's happening under the hood right now: we are calling the buildpacks to analyze this application. It finds out that it is a Node application, so the Node builder runs, produces a container with your Node application, and it gets deployed on OpenShift as a Knative service.
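The deploy step and a test invocation can be sketched like this (a hedged sketch; the route URL is hypothetical and flag behavior may vary between releases):

```shell
# Build (via buildpacks) and deploy the function as a Knative service
# (dev-preview sketch; run from the function's project directory).
kn func deploy

# Invoke it through the generated route (URL is hypothetical):
curl https://random-default.apps.example.com
```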
E
So, as you can see, the image has been built, it has been pushed to the registry, and now it has been deployed on OpenShift. Before we look at the application, there is one important aspect, which is this func.yaml file, which basically holds the configuration of our function. Over here we can see the image that was produced by this function, the image digest, and the trigger type.
E
This is the builder, so the Node builder was used for the buildpacks, and over here we can specify environment variables, or later secrets and so on, that should be used by the service.
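The func.yaml he describes looks roughly like this (an illustrative sketch of the dev-preview format; field names and values are assumptions, not the exact file from the demo):

```yaml
# func.yaml: per-function configuration (illustrative sketch of the
# dev-preview format; exact field names may differ between releases)
name: random
runtime: node
trigger: http
image: image-registry.openshift-image-registry.svc:5000/default/random:latest
builder: default          # the Node builder used by the buildpacks
env:
  EXAMPLE_VAR: example    # environment variables passed to the service
```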
As we can see, the function was deployed as a Knative service, so I can curl it, and we can see that it returns random numbers. So this was a very simple function, triggered by HTTP. And, Roland?
D
So, actually, let me jump over. Thanks a lot for showing functions; this is really the new thing. It's worth mentioning that this is not Knative itself; this is on top of Knative. This is done as part of OpenShift Serverless, and we are working together with the upstream community to find a solution for this functions part, which allows for building stuff. But I think it's worth keeping in mind that it's just a dev preview for now.
D
Okay, so we jumped over the eventing part, because I think it was important to see this functions thing in action; it's really amazing to see how you can build stuff and not only deploy it but also run it and create things directly. But let's talk about eventing. What is Knative eventing, actually? It is really a universal subscription and delivery mechanism for events, and, as Zbyněk mentioned, it's based on CloudEvents.
D
So this is the data standard for events, and CloudEvents itself is a CNCF standard; this is important. It describes more or less the format of events, the envelope of these events and how they are transported, but the payload itself can be arbitrary data. So it's really about having a common set of headers.
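As a concrete sketch, here is the common envelope he describes: the required context attributes of a CloudEvent per the v1.0 spec, wrapped around an arbitrary payload. The type and source values are made-up examples.

```javascript
// A minimal CloudEvent in its JSON representation (attribute names from
// the CloudEvents v1.0 spec; type/source values here are made up).
const event = {
  specversion: "1.0",                      // spec version of the envelope
  type: "com.example.message.received",    // hypothetical event type
  source: "/chat/telegram",                // hypothetical source URI
  id: "1234-5678",                         // unique per source
  datacontenttype: "application/json",
  data: { text: "hello" }                  // payload is arbitrary data
};

// The four REQUIRED context attributes per the v1.0 spec:
const required = ["specversion", "type", "source", "id"];
console.log(required.every(k => k in event)); // prints: true
```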
D
There are different transport protocols that you can use for CloudEvents, but it gives you a kind of open standard that you can use to connect to your eventing infrastructure. And this eventing infrastructure in Knative is centered around the concept of channels. There are different backends for these channels: in the simplest case you have an in-memory channel, which allows you to transport your events through an in-memory system, but of course that does not give you any guarantees for your event delivery.
D
You can also use Apache Kafka for the backend, and other systems as well. Then there are two other concepts: the channel is one of them, and the other two are a source and a sink. The source is where the event comes from; this is how you integrate your event sources into the system, and in fact a source is more like an adapter.
D
It's not really the origin of the event; it's more like the part that translates a custom event format into the CloudEvent format. This is the way Knative event sources typically work. The events that come out of the source are then routed to a sink in some way. Before we talk about the routing,
D
let's briefly talk about the sources. As I mentioned, they are kind of adapters, but of course they can also create events on their own, and they are typically declared by a custom resource. So every source has its own custom resource type, or custom resource definition, and this is evaluated by a source-specific operator in the backend. Then you connect this event source to the sink, and every time an event occurs for this source, the event is put into the sink.
D
There is a ping source, and you can guess what it does: it just emits CloudEvents periodically after a timetable or schedule, just a cron expression. There is an API server source, which converts events that come from the Kubernetes API server into CloudEvents and then moves them on to the sink. You also have two general-purpose kinds of sources: the sink binding is kind of a source, so you can use it as a source, as it connects an arbitrary pod to a sink, and the same goes for a container source. But besides these out-of-the-box sources,
D
you also have custom sources, like the GitHub source or the Kafka source, or more general-purpose sources like Kamelets. Camel K is another technology, based on Apache Camel, and it allows you to reuse all the Camel components, not all but a considerable set of them, as sources. If you don't know Camel, don't worry: Camel is an enterprise application integration framework that allows you to connect to different external systems. There are around 300 of these components, or connectors,
D
if you like to call them that. You can connect to S3 on AWS, you can connect to FTP, all of these kinds of external systems, and the Kamelet binding allows you to transform these Camel-component-generated events into CloudEvents. So let's quickly go over the ways you can connect such a source to the serverless service you already know from Knative serving. There are three main ways to connect a source to a service.
D
The first is to connect the source directly to the service. Of course, this has some drawbacks, because you don't get any of the advantages of a messaging system; you just wire it in directly. There's no queueing support, you don't have support for back pressure, and when the source goes away you might miss events.
D
When the sink goes away, you might miss events, and so on and so forth. So this is really just for testing, I would say. A more advanced way to do these things is to leverage a channel: a source can push events into a channel, and then your sink connects to the channel via a subscription, which is also a CRD. Everything you see here on the diagram is a CRD, a custom resource, and so you can build up arbitrarily complex event topologies with that. Of course, this can get confusing.
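The channel-plus-subscription wiring can be sketched as two custom resources (a sketch; API versions may vary by release, and the subscriber service name is hypothetical):

```yaml
# A channel plus a subscription wiring a sink to it (sketch; API versions
# may differ between Knative releases)
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  name: demo-channel
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: demo-subscription
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: demo-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: random            # hypothetical sink service
```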
D
So in this picture here, the orange events are going to the upper sink, and on the lower-hand side the green ones are filtered out. This is also an easy way to create an eventing topology. And of course a sink can also return a CloudEvent: if the return value of the sink, which is typically also reached via HTTP, is a CloudEvent, then it's fed back into the broker and gets rerouted again. So in that way you can really build very complex topologies.
D
An example with triggers, for which we unfortunately haven't built a demo, not enough time, would be: let's assume you have an API server source which emits all the events from the Kubernetes API server, but then you have a sink that only wants to react to events about deleting, adding, or modifying something. This can be done by specifying a filter expression on a trigger that says:
D
"Okay, I'm interested only in these delete events," and then it only gets these delete events and can do, for example, some cleanup work. So you could imagine a sink sitting there looking for delete events because you need to make some custom cleanups. Of course, there are other mechanisms in Kubernetes that can do that, but in this example this could be one way.
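Such a trigger filter can be sketched like this (a sketch; the event type string and the service name are illustrative assumptions, not taken from the session):

```yaml
# A broker trigger that forwards only "delete" events to a cleanup service
# (sketch; event type string and service name are illustrative)
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: on-delete
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.apiserver.resource.delete   # CloudEvent type to match
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cleanup            # hypothetical cleanup function
```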
E
Roland, maybe I can show the Telegram demo, which very quickly shows all these benefits; it would only take a couple of seconds. Let me share my screen.
D
Go ahead, go ahead!
E
Okay, so, since you were asking about a real-case scenario: this is not quite a real case, but basically imagine that I'm in a Telegram conversation with some Telegram bot, and every time I send a message to this bot, the message gets analyzed. If there is a photo in the message, some analysis is done on that photo, and if there are faces in the photo,
E
I will get the number of faces and the people's emotions, and it will try to guess the age of the persons. We can easily implement this with Knative serving and functions. If we look at the developer console of OpenShift, we can see that on the left side we have a Telegram source; this is the Camel K Telegram source.
E
It basically listens to the Telegram conversation, and it sends every message to the Knative eventing broker, which is the component in the center. Then we have three functions, Knative services, connected to this broker. The first function receives the Telegram message, and if there is a photo in the message, it will respond with another CloudEvent back to the broker.
E
The second function will take this CloudEvent with the photo, do the analysis, and reply back to the broker. Then, finally, a responder function will take the final result of the analysis and respond back to the Telegram chat. So let me just quickly show it to you. As you can see, right now the functions are scaled to zero.
E
So I will just type "hello" to the Telegram bot, and we can see that the Telegram source sent the CloudEvent to the broker, and the receiver function received the message, but it didn't find any photo in it. So it responded that there is no photo, and our responder function responded back to the Telegram chat with this nice message. So let me send some photo. As you can see,
E
the photo was picked up by the first function, and the second function, the processor function, basically does the analysis; in this case it is calling the Microsoft Face API in the cloud. It responds back to the broker with a CloudEvent holding this data, and the last function basically takes that CloudEvent and responds back to the chat.
E
So this way you can see the event flow through the system; it is all happening through the broker. And the last aspect: these lines are the connections to the broker, and with each trigger you can specify a filter. As you can see here, we are filtering on the type of the CloudEvent; each CloudEvent has some specific type. So this way we ensure that each function receives just the specific CloudEvent with a specific type. I hope that this gave you guys an idea.
D
Something to add: as you have seen, the benefit really is that only those components are running that are really needed. I think this is one of the big things. If you, for example, have millions of text messages without any image, the image-processing function would never spin up, because no image was detected. And I think this is one of the benefits of this eventing setup. Okay, yeah.
C
Are you all ready for more questions? Yeah? Sure? Cool. So another question, which we actually got from two different viewers, is logs: what if you need to examine logs from, say, triggered functions?
D
Yeah, so these logs are just logs coming from a pod, so they actually land in the pod logs, and of course you have techniques for collecting all these pod logs into a central system where you can examine them. So actually this is the same question as how you collect logs from pods for normal applications; there's no difference between serverless logs and regular logs.
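A sketch of pulling those pod logs directly with a label selector (the label key follows Knative serving's convention; verify it against your release, and the service name is the one from the earlier demo):

```shell
# Tail the user-container logs of all pods backing the "random" service
# (sketch; label keys can vary between Knative releases)
kubectl logs -l serving.knative.dev/service=random -c user-container -f

# On OpenShift, the same with oc:
oc logs -l serving.knative.dev/service=random -c user-container -f
```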
C
Okay, anybody feel like comparing Knative with OpenWhisk? Which is not a name I've heard for about a year.
D
Yeah, sure, I can try. So OpenWhisk is really about functions. OpenWhisk is not really container-based; you can deploy functions and you have a runtime behind that. The benefit, what OpenWhisk really can do as far as I know, is that they have a kind of pool of workers that are hot all the time, so they're really optimized for cold-start time, and you can deploy directly there.
D
But of course you cannot run arbitrary workloads, and you cannot freely choose your programming language, say. With Knative serverless, what is possible, theoretically, is that you can migrate your own applications to a serverless model by just putting them into a container image. Of course there are some constraints; for example, your applications need to be stateless. But theoretically it works.
C
Okay, we actually got another one. One of our viewers, while you've been doing this presentation, has an OpenShift cluster, and they have tried to install Serverless on it. I think it's actually OKD, because they said they tried to install Knative and it's not showing as ready. Do they need to do something else?
D
Not really, no; then this is a bug, I would say, because this is actually super easy to install. We could have shown it in the demo within two minutes. You go to the Operator Lifecycle Manager, to the catalog, you select OpenShift Serverless, and then, okay, sorry, then you have to follow the instructions there. Of course you also need to create a KnativeServing resource.
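The resource he refers to is roughly the following (a sketch; the API version depends on the operator release):

```yaml
# The custom resource the operator waits for before Serving becomes ready
# (sketch; API version may differ between operator releases)
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec: {}
```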
C
One of our community members here wanted to know when the Kafka broker is coming to OpenShift Serverless. I thought it was already there?
D
So there are two things. What you already have is a broker that is based on Kafka for the backend, actually on a Kafka channel, so the broker itself already gets the benefits of Kafka there. But there is also, not only planned, there's actually active work ongoing, a dedicated Kafka broker, which really does not need a channel in between.
D
So you save one translation step in between, so to say. But I cannot give any ETA for that, except that it's really on the horizon.
G
Yeah, and two weeks ago, a week and a half ago, we released Serverless 1.14, which now has the Kafka plug-in included as well.
D
That's in OpenShift Serverless for the channel, and there is still work going on. So we can already provide Kafka as a backend system for channels, and also as a source: we can use a Kafka cluster as a source for events, so that you just pick up messages from topics, convert them to CloudEvents, and move them on into Knative. Okay.
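A Kafka source can be declared roughly like this (a sketch; the bootstrap address, topic name, and API version are illustrative assumptions):

```yaml
# A KafkaSource that picks messages off a topic and forwards them as
# CloudEvents to a broker (sketch; addresses and versions are illustrative)
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092   # hypothetical broker address
  topics:
    - demo-topic
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```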
C
And one last question, and this is the hard one: the Knative roadmap. Like, when are we getting to 1.0?
D
Ooh, that's a hard one, that's true. So actually there's no real ETA for that, but I can describe the process for how we're trying to reach 1.0 at the moment. I think yesterday there was a TOC meeting, and we agreed on a set of criteria that need to be fulfilled to get to GA. So we set the GA bar at things like, for example, having 80% code coverage on every component.
D
That is one of them, among others. And we want to release eventing, serving, the client, everything from Knative, at the same time, so that we have one coherent Knative release for 1.0. We will check everything against that checklist. I think we are confident that many of the requirements are already met, but we really have to go through them, and once that's done, 1.0 is not so far away.
D
We want to have a super solid 1.0 release that you really can rely on. But actually, you can see that Knative itself is already super stable, because it's part of the foundation of many products out there already, like OpenShift Serverless. So you get all the GA benefits already if you decide for OpenShift Serverless, even if it's based on a non-1.0 version; the API versions are already v1. So for us, it's just about jumping over this last hurdle. Okay.
A
Yeah, so thank you, everybody. Remember, we are in the CNCF Slack, in the #6-kubecon-red-hat channel. Feel free to go in there and ask your questions about Knative; I'm sure someone in there can help get you to the right answers.