From YouTube: OpenShift Commons Briefing: Serverless, Kubernetes & Knative, OpenShift Cloud Functions, Natale Vinto
Description
Serverless in Kubernetes: Knative and OpenShift Cloud Functions
Speaker: Natale Vinto (Red Hat)
The serverless paradigm is becoming a new methodology for developing and deploying applications without worrying about configuring any server to run them on. Hear the latest updates on Red Hat's serverless open source solution based on OpenShift and Knative, covering the use cases that match the serverless context for container-based workloads.
Natale Vinto gave an update on the state of serverless in Kubernetes: Knative and OpenShift.
Tags: Serverless, Kubernetes, Knative, OpenShift Cloud Functions
A
Welcome everybody to another OpenShift Commons briefing. This time it's one of those things that I want to learn more about myself: serverless and Knative, and how it all works with OpenShift, and what the implications are for people using OpenShift and Kubernetes. I think a lot of folks have heard the terms. Natale Vinto, who's with Red Hat out of our Italian front, is going to give us a talk about that. We have a couple of other folks on the line as well.
B
I'm working at Red Hat as an OpenShift specialist solution architect, so my role is really to bring our customers through their container journey with OpenShift. I've recently been focusing on the serverless computing model, and on showing that OpenShift can also be the platform for serverless, for microservices, or for anything that modern IT needs today.
B
So this will be an update on how to do serverless on OpenShift and Kubernetes. Lots of things are changing fast in this world, and it's good that we can also have an open source solution in this new paradigm of serverless, which is really thrilling the IT world. I would just like to share with you the news that we have from Red Hat and from the community. So if it's OK, I will proceed with the slides.
B
You know, of course, what OpenShift is, but I just wanted to focus again on the fact that OpenShift is a community. It's a group of open source projects, of which the major one is Kubernetes, of course, along with OKD, which is the upstream version of OpenShift Container Platform, and around this there is an ecosystem of many open source projects that build on and contribute to the OpenShift ecosystem.
B
And yes, it's open source, and the most important thing is that we are talking about standards, because OpenShift follows the Cloud Native Computing Foundation and the Open Container Initiative standards. This is very important in technology, and we gladly joined the Linux Foundation in building those standards. This is the actual ecosystem of Kubernetes and OpenShift. And yes, this is a "serverless" data center.
B
That is just a joke from a comic, but it gives a sense of what people think about when they think about serverless. They may think that serverless is something that doesn't exist, but actually it's not like that. If we follow the Wikipedia definition, it is a computing execution model. So really, if you look at the diagram on the right, it's a pipeline: some event triggers some action and gives some result. With the serverless computing model...
B
You always have this pipeline, this process, and actually there are lots of definitions. The most famous is from martinfowler.com: serverless is something that lets you run your server-side logic in stateless containers, event-triggered. If you look again at the diagram, it's always event, action, result, with some trigger, and this is also called Function as a Service. And most importantly, the Cloud Native Computing Foundation wrote a whitepaper around serverless.
B
There is also a standard definition there, and you'll find that serverless is a computing concept that lets you build and run your applications without requiring any server management: you can write your function, deploy your function, and it gets executed and scaled somewhere. So what is Function as a Service and what is serverless? Function as a Service is a programming model, while serverless looks more like an architectural model. So what do we need? We need both together, because we need something to run functions, but also something to manage these functions. This diagram explains well...
B
...the relationship between Function as a Service and serverless. Serverless is really the engine that lets you run and scale your functions in this paradigm, which is Function as a Service. There is an entire architectural evolution around this; you may have seen these talks also at the Red Hat Summit in San Francisco.
B
There is this evolution: from services we went to microservices, and now we are at functions. Actually, the microservices conversion is still ongoing, and we still have a lot of monoliths and legacy applications, but here we have some much finer-grained definitions, which are functions: single-action, ephemeral, and related to some context. So if you look at this side of the line, with functions you gain productivity, but you lose control, because you don't care who is going to run your function.
B
This is the IT guy before, then came microservices, and now with serverless it looks like we're teleporting, as in Star Trek, because it's a new world. How does it work? We always have that pipeline, if you remember: some event fires some function, with some string or some event as input, and this function gives as output something that needs to be consumed somewhere; it could be a microservice, it could be another function, and this is the flow.
B
If you look here, this is the function lifecycle defined in the standard. In this function lifecycle you have the definition of your function, your code, which needs to be created from some source and then versioned before being published. So you have many versions of your function, and you may decide to update or just to go through the revisions of this function. At this point the question is: is serverless open source? Of course, because we are talking, always, about standards, standards, standards.
B
We are talking about the Cloud Native Computing Foundation and CloudEvents, which is the specification around serverless events made under the Cloud Native Computing Foundation, and of course Red Hat and the Linux Foundation are following this very closely. So in this slide we have a landscape of projects around serverless. The serverless landscape is really rich, like the Kubernetes one that we've seen before: there are lots of open source projects, but also a lot of public cloud implementations of serverless, like AWS Lambda, Azure Functions, or Google Cloud Functions.
B
We may know serverless mostly from those public clouds, but now it's possible to have it also in our private and hybrid cloud solutions, with open source implementations. If you look at this serverless timeline, you see the evolution of serverless starting from AWS; today we have Knative, the Kubernetes implementation that is the baseline for serverless. What does that mean?
B
Yeah, I just wanted to show you some data around serverless: which organizations are interested in serverless, and which technologies are used for serverless. We see that there is interest for backends, web applications, and process automation, and there are expectations around serverless, because the most important thing is that people are attracted by the fact that serverless can reduce the scaling cost, the complexity, and the operational cost.
B
So, if you remember the previous slide, we lose control, but we scale up very fast. And what about the risks around serverless? If we lose control, we may be concerned about security issues and about testing, and those are the main concerns in this report that Red Hat made around serverless technology adoption and what people expect from serverless: security, and application integration, are examples. For technology integration it's a really common landscape, HTTP webhooks and API integration, and it's also nice to have it with Kubernetes events.
B
So if we want to collect events from Kubernetes and OpenShift, that's a good use case that we can have in place. For languages, there is a lot of interest around Java and Node.js. And as for use cases for serverless: for instance, monitoring, if you want to monitor something in order to trigger some action. If you remember the pipeline, it was event, action, and then the result.
B
So if you want to monitor something, for example watching some threshold that can trigger some healing action, server monitoring is a good use case for serverless. But also storage: for instance, if you want to get some event from the storage and you want to do some manipulation, or trigger some timed action from your storage. Another good use case, of course, is APIs, because of the fact that they can scale a lot. Remember...
B
...the functions are always stateless, so when you run your functions they don't have that much logic inside, but they can scale fast and run heavily in parallel, so web APIs are also a good use case. And of course IoT and sensors, because you could have multiple sources, multiple devices, lots of devices, that need to trigger some action, and serverless is a good fit for these very common use cases. As we said: webhooks, and tasks like cron jobs, PDF generation, or image manipulation.
B
Everything in this area that is asynchronous, concurrent, and easy to parallelize is a very good fit for serverless. Well, what about what not to put in serverless? Because it looks like the panacea for everything, but actually it's not, because if you have some real-time application that requires ultra-low latency, for example for networking, it may not be a good fit; nor is a long-running task that cannot be split into steps.
B
If you look at the implementations of serverless, there's always a timeout that you need to set up, so a function is not allowed to run longer than this timeout. If you go to something like AWS Lambda, it's about five minutes; if you go to an implementation like Knative on OpenShift, it's also five minutes. So you shouldn't violate the paradigm by adding some function that runs for a long time; you should have small functions that do small tasks, and then maybe you call another function to continue the work.
B
So if you have an advanced or complex topology, it's not a good fit, and likewise if you require lots of specific memory. That's because serverless is fail-fast: you run those functions, you run multiple functions and you collect the output from them, but you don't really care if any function fails; it may happen. So you have to be aware that failure may happen in this world. And now let's go to the most interesting part: how do I do serverless on my open source serverless platform?
B
We have automated operations, with Kubernetes Operators; we have Istio for the service mesh; and on top we have Knative. Knative is the base for delivering serverless in Kubernetes, and it's supposed to be the baseline that gets used by some higher implementation of serverless: think about Kubeless, or Apache OpenWhisk, which is another serverless implementation (but it's more complete and doesn't fit very well on top of Knative), or Camel K, which is built for Knative.
B
So the concept here is really to provide the base, so that some higher-level serverless implementation can invoke Knative to run your function inside Kubernetes. This is really important, because it's deeply integrated into Kubernetes, and you have the whole stack to build your Function as a Service on top of.
B
The service mesh, of course, is used by Knative in order to manage security, routes, and policies, and also to split the traffic going to your pods; and the Operator Framework can also be used to automate the maintenance of the components around Knative. But in the end, what is Knative? As we said, it is an extension to Kubernetes and a set of building blocks for container-based applications that can run anywhere.
B
If we follow the definition from the Knative project, just to recap: it is really the base foundation for serverless workloads in Kubernetes and OpenShift, providing those building blocks to run functions. And how does it do that? With three major components. One is Build, a component in Knative that can build containers from your source code. So you deploy your function...
B
...and it starts from your source code, where your function is. So you have to tell it where your function is (in this example it's on GitHub or somewhere), and you also have to define the steps to compile your function: which container to start to compile your function, and which image to use to build your function. And then there is Knative Serving. Knative Serving is really the core of this world. You have another term here, which is the Service, and this Service is not the Kubernetes Service.
B
A Knative Service is a top-level controller that manages the Route and the Configuration. So you have the Service, which is a high-level representation of your function. The Service manages the Route, and the Route is an address that resolves to the pod running your function. And then the Configuration: the Configuration is the layer that represents the revisions of your function.
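In the serving.knative.dev/v1alpha1 API of that period, the top-level Service object can be sketched roughly like this (the name and image are placeholders, not from the demo); the controller derives the Configuration and the Route from it:

```yaml
# Hypothetical Knative Service: the controller creates the underlying
# Configuration (revisions) and Route (addressable endpoint) for you.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
spec:
  runLatest:                    # always route traffic to the newest Revision
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/example/hello-nodejs   # image is an assumption
```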
B
So, if you remember the first diagram in the specification about the function lifecycle: here the function lifecycle is defined in terms of Configuration and Revision. You define some Configuration, and then each version of your function is a Revision. You can decide to split the traffic across the revisions, keeping the functions alive, or just let Knative spawn a new function when it's needed. This is the very powerful mechanism that makes this serverless implementation responsive and, of course, on-demand; you don't need to keep the function running.
B
The function will be scaled up or down automatically in response to some trigger. And then there is the Eventing component of Knative, which is the one that connects events, internal and external, into Knative. It's a rather complex implementation that may also require some streaming or messaging system like Kafka; in recent implementations we have Kafka as a reference. This work is really ongoing, it's really improving, and it is following the CloudEvents specification.
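For reference, a CloudEvents envelope in the 0.1 version of the specification that Eventing was tracking at the time looks roughly like this; the event type, source, and payload are made up for the sketch:

```json
{
  "cloudEventsVersion": "0.1",
  "eventType": "com.example.storage.object.created",
  "source": "/storage/bucket-1",
  "eventID": "a234-1234-1234",
  "eventTime": "2018-11-01T12:00:00Z",
  "contentType": "application/json",
  "data": { "objectName": "report.pdf" }
}
```

The point of the standard envelope is that any source (storage, Kafka, Kubernetes events) can deliver events in the same shape to any function.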
B
So everything that you see here is some specification, some standard, that was first declared and then implemented as open source. The question is: is Knative enough for serverless? As we said before, Knative is a building block to run Function as a Service on our platform. So the idea here is: yes, it can run serverless, but some other higher-level implementation should invoke Knative and take care of making life easier for developers, to simplify the creation of programming language environments, for instance.
B
To come back to our concept here: we have a one-to-one match between the twelve-factor app definition and the Knative Service definition. In terms of configuration, one point of the twelve-factor app is to keep code and configuration separate, and Configurations and Revisions are the way to implement that: each configuration change triggers the creation of a new Revision. And with Routes and Services we have this possibility. This is important...
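The "config change creates a Revision" mapping can be sketched with a v1alpha1 Configuration. The names and the MESSAGE variable below are assumptions for illustration, though the demo later changes an environment value in exactly this way:

```yaml
# Hypothetical Configuration: each edit of revisionTemplate (for example
# changing MESSAGE) makes the controller stamp out a new immutable Revision.
apiVersion: serving.knative.dev/v1alpha1
kind: Configuration
metadata:
  name: hello
spec:
  revisionTemplate:
    spec:
      container:
        image: docker.io/example/hello-nodejs   # image is an assumption
        env:
          - name: MESSAGE
            value: Ciao          # change this value -> new Revision
```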
B
Once again we see that we have a standard, and we have an implementation that follows this standard in the open source way. Now, a little demo to show the Knative capabilities around serverless. This is the bitly URL that I prepared if you want to follow along, but I think we can go directly.
B
I've already used this example to build this demo. Once you configure Minishift with some options, like memory, CPU, and disk, and also the admission controller webhooks that are mandatory to install Knative and Istio, you will have to configure some security context constraints, because Istio still needs them, and then you're going to deploy Knative. As we've seen before, you have to deploy Istio first in this case, and then you will deploy Knative Serving, which will also deploy the other components, Knative Build and so on.
B
So if you go to this section, for instance, and you follow the prerequisites, then you will be able to run your first serverless function on OpenShift, on a Knative basis, with Knative Build. I will just go to my Minishift here. You see, I have my Minishift here; it's a Minishift with OpenShift 3.11, and it's empty for the moment.
B
So you just have to clone this repository, follow the prerequisites, and follow these steps. Here (I don't know if you can see it well) I'm going to create a build template, to tell the Knative builder how to build my function, and then I'm going to create my Configuration. I will show you those files once I've created them. Once I create my Configuration, I also need to define the Route; in the diagram before, we had a Configuration...
B
Some Configuration has been created and some Route has been created, so you can follow what is going on under the hood. And meanwhile our pod is finishing the compilation, because what is happening now is that my Node.js function is being put inside a container. We use Buildah, which is our tool to build and create containers without a Docker daemon, and which is, moreover, compliant with the OCI standards, and once it's built, you have a new pod in your deployment.
B
You have a new deployment representing your function, and this deployment will create the pod running your function. So here you see this pod has been created, and once it's ready we will have our pod running and answering with our function. It's ready now; in the demo we have to invoke the function by command line, but I would also like to show the source code of this function, of course.
B
So there is your function, and then you have the build template that defines it. In this case, I'm going to call this script, which is ready here for us, just to invoke our function. In this case with Knative we need to invoke our function using the internal IP address of the platform, because of how it's set up for now. So let's just execute the script here; I'm going to show you.
B
If you just run it, you will have the function answering. So this is the function running in a pod automatically created for us by the build, built with Buildah. This is really powerful, and I want to show you the definition, because in the example we used a template. So we have this Node.js build template, which builds from some builder image for Node.js, in this case with yarn and make.
B
So it prepares the environment, in terms of steps, to put our function inside. And one thing that you also have to look at is the Configuration. Our first revision came from this Configuration here, and this Configuration is based on this repository on GitHub: it is using just this repository, and it finds the Node.js code because I'm giving the context here. So it's not really guessing, it's just being given the context, and being told that it needs to use this template to build my function.
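Wiring the repository and the build template together, the Configuration being described would look roughly like this in the v1alpha1 API; the repository URL, template name, and image are placeholders, not the demo's actual values:

```yaml
# Hypothetical Configuration: build the function from a Git repo using a
# BuildTemplate, then run the resulting image as a Revision.
apiVersion: serving.knative.dev/v1alpha1
kind: Configuration
metadata:
  name: hello
spec:
  build:
    source:
      git:
        url: https://github.com/example/nodejs-function   # repo is an assumption
        revision: master
    template:
      name: nodejs-build                    # BuildTemplate name is an assumption
      arguments:
        - name: IMAGE
          value: registry.example.com/hello # target image is an assumption
  revisionTemplate:
    spec:
      container:
        image: registry.example.com/hello   # the image built above
```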
B
So it is aware that this is Node.js, because I'm telling it which template to use in this case, and I'm telling it where the function is, which, in my case, is this function here. If we come back to the demo, I just want to show you that we can go through another revision. So this is the second revision, and the diff of this second revision, what I changed, is just the environment here: I'm writing "Ciao", which is...
B
Another cool thing to see here is how to manage revisions. How can I move between those revisions? In this example, I just gave this Route the first revision, so all the traffic is sent to that revision. In fact, if you look here, what I've done after deploying the first revision is to say that 100% of all the traffic will go to this revision.
B
But what if we want to give them 50/50 traffic? If you see here, the Route is referencing the revision name; the revision name is something you can also see in the deployments. So here I have two deployments, and if I want to split the traffic across the two deployments, which are two revisions in the Knative world, I can create a Route and express this traffic splitting in terms of percentages in the YAML file.
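The 50/50 split being described maps onto a v1alpha1 Route like the following; the route and revision names are placeholders standing in for the two revisions from the demo:

```yaml
# Hypothetical Route: send half the traffic to each of two Revisions.
apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: hello
spec:
  traffic:
    - revisionName: hello-00001   # first revision (name is an assumption)
      percent: 50
    - revisionName: hello-00002   # second revision, e.g. the "Ciao" one
      percent: 50
```

A 90/10 split, as mentioned next, only changes the percent values; they must sum to 100.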
B
So if we just follow the example in the README again, if we want to split the traffic and we do it like this, we can try to invoke our function again and see that the traffic is really split, something like 50% to each revision answering. And if you follow the example here, you can also define a different traffic split, 90% and 10%; the mechanics are the same, the logic is the same: you define a Route...
B
...you mention some revisions and you give a percentage to each revision in order to have this traffic splitting. Another interesting thing to know is that those functions exist only up to a certain amount of idle time: by default, they exist for five minutes. If you don't fire them, after five minutes they go down, and the Knative autoscaler will then create a new pod containing the latest good revision that you deployed. And, if needed, this is changeable.
B
So if you want to change those parameters, in terms of how much time you want to allow before your function scales down, whether you want a lower or a higher value for that timing, you change the ConfigMap here, and the change is applied. Of course, you shouldn't put one hour there, because you would be violating the paradigm; then it's not serverless anymore, it's something else. And another cool thing to see is Istio.
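In the Knative Serving releases of that period, the knob being described lived in the config-autoscaler ConfigMap in the knative-serving namespace. The exact key names have varied across versions, so treat this as a sketch rather than a fixed recipe:

```yaml
# Hypothetical edit of Knative's autoscaler ConfigMap: lengthen the idle
# window before a Revision's pods are scaled down to zero.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-autoscaler
  namespace: knative-serving
data:
  scale-to-zero-threshold: 10m   # default was 5m; key name varies by version
```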
B
So when you start Knative, you also have Istio inside, and the Istio ingress gateway and egress gateway are the ones you deal with when you want to invoke your function. Of course, in this example we are invoking it via the internal IP address, but since this is under development, and development is going really fast and is really in progress, you will also be able to define your route at a higher level and invoke your function that way.
B
Functions: OpenShift Cloud Functions is really an effort to bring open source serverless to your hybrid cloud. So if you want to run your Function as a Service in your private cloud, not only on the public cloud but in your private cloud, and if you want an open source implementation that you are able to extend, contribute to, and work on...
B
...you can do it with Red Hat's Cloud Functions, which will be our product around this topic. So if you want to work with the community project, there are always the OKD and Knative implementations; if you want a product, a Red Hat product, there will be Red Hat Cloud Functions, giving support on writing functions and also on executing them, in the usual Red Hat way: community first and open source first.
A
A little bit of chatter: Michael has been answering a few questions, mostly for me, I think, in the chat, so I'm just going to unmute him quickly, and we'll see; there, Michael, you should be there. But I think I'd like a little more clarification on the relationship, or the dependency, of Knative on Istio. Maybe there was an earlier slide or something that just went by too fast, but that is still a little bit confusing for me. Maybe you could tell us why you have to install Istio in order to use Knative, or is that just me missing a point here?
B
Istio is very complex and also still under development, so some things get redefined quite often. So it's complex and really new, but it is used by Knative because of the service mesh: when you want to move across those revisions, those traffic splits, you need something that lets you attach to your services dynamically.
B
In any container representing your function there is an Istio sidecar, because Istio injects those sidecars in order to let your revision be traffic-shifted in production dynamically, as a service inside OpenShift. So the relationship really comes down to the fact that Istio enables the fast creation of new revisions and the splitting of traffic across them. That's why Istio is used here, and it is mandatory; I mean, it's a requirement of Knative.
A
That makes sense for me. I'm looking to see if there are other questions in here. It seems to me that Knative is very early days, so I haven't seen or heard of anybody using it in production yet, in any way. Are you seeing people thinking about using it in production now, or is it really still very early days?
B
It's really, really early; development is really ongoing. For instance, the Knative Eventing part is really a draft: they are just following the CNCF CloudEvents specification and implementing it right now, so it is not complete. I wouldn't suggest it for production, for sure, but it is the future for running the serverless paradigm and computing model. If you think about one year from now, it will be mature enough, and there will also be many implementations planned on top of Knative...
B
...in order to let you run functions. I can say it's like Istio one year ago: Knative right now is like Istio was one year ago. It is something really new and cool, really wanted by everyone, but it's not ready for production for sure, and even for testing it needs some time, because some components are not ready, or they are just being defined right now.
A
This session has really helped me personally understand a bit better how it all works together, so thank you very much for that. I will upload this to YouTube; people have been asking for that as well. It will probably take me until tomorrow morning to upload it and get it formatted for YouTube, along with Natale's slides. And if you are coming to KubeCon, please consider joining us for the OpenShift Commons Gathering the day before KubeCon, on December 10th.
A
As you said, Natale, this is all very much new, early days, but it is definitely something we at Red Hat see as a necessity, a necessary piece of the cloud native ecosystem. So we're looking forward to working with the community on it, and to making sure that in a year's time, maybe sooner, it's ready for production workloads.
B
I think we are on a good path. I think, by the way, that Knative is now the de facto standard on Kubernetes for running serverless workloads. So in the future we really hope it will be mature and stable enough to run those workloads in production, and we are working, as Red Hat and as a community, to make that happen. I would like to thank you for giving me this slot to talk about Knative, I appreciate it, and to thank all the people attending. I am, of course, available...
B
...if you want to hear more about this, or just want more information. At KubeCon in Seattle there will be a live demo from our team; they will also do some demos about Knative Eventing, which they are working on, so it's very interesting to attend those sessions. And of course, I'm looking forward to joining you guys at the Commons gathering.
A
Please do join us for the OpenShift Commons Gathering the day before. If you are already registered for KubeCon, reach out to me and I can send you a promo code to get you in for free if you need it. We've got lots of good people on the agenda, and this is probably going to be one of the many topics that we cover off on that day as well.
A
So, looking forward to it all. Thanks again, Natale. I know it's late in your day over there in Italy, and I really appreciate you taking the time to do this. We started with this conversation in Helsinki, trying to get you there, but I'm really glad that we finally got you on the line and able to give this presentation. So thanks a lot.