Description
KubeCon NA 2020 Office Hours with Knative contributors Matthias Wessendorf and Roland Huss
B
Hello, welcome everybody to the KubeCon-adjacent office hours. We are starting out with a topic that a lot of people are interested in these days, which is event-driven applications and eventing, otherwise often called serverless. To kick that off, we have members of the Knative development team here to do a little presentation, but most of all to answer your questions.
G
Hey everybody, I'm Paul Morie. I am the lead of the serverless engineering team, and I am also representing Red Hat on the Knative steering committee. Why don't you go next, Savida.
B
Cool, well, thank you very much. So, as you can see, we have the Knative power team here, so ask any questions that you have about Knative or serverless or event-driven applications; someone will be able to answer them. In the meantime, while you think of your questions — Roland, I think you're going first with the demos.
B
Presentations, yeah, okay. So Roland's going to go first with a short presentation on serverless, and feel free to ask your questions in chat or on Slack while he is presenting.
C
Yeah, okay, let me share my screen. So actually what I'm going to show you in this very short demo is how you can access Knative and OpenShift Serverless from the command line. Just for a little bit of background information here: Knative is an add-on on top of Kubernetes which allows you to run serverless applications or serverless payloads, which means that you are able to scale your applications to zero or up to many replicas based on the incoming traffic.
C
So the autoscaling part is very important here. And we have the other parts of Serving, which we will see more or less in the demo that I'm going to show you, but I just want to start with the demo, and then I think questions will already arrive. First of all, if you want Knative Serving, you have to install it. You can do that very easily on OpenShift Container Platform: there's support for a more or less one-click install, where you go to the operator catalog, to the OLM, and then install the OpenShift Serverless platform itself. This has already been done here. But now let's assume that I'm the developer and I want to bring one of my applications onto this platform. For that I have created a container image somehow, somewhere, so it's available somewhere, and I'm using the command line in this case. So actually, let me just show you the full line here.
C
You can get the binary from the developer console directly — there's a menu item here where you can download it, and it's available for several platforms like macOS, Windows, and Linux. And then you want to create a service here, and the only thing you need is a name for your service — in this case it's called random — and a reference to your container image. In this case the image is also called random; it's more or less a random generator, a REST service which returns a random number if you request it, and it's based on Quarkus as the underlying platform. This is very good for serverless, because serverless is all about dynamic scaling, and Quarkus is very well suited for exactly these kinds of things: scaling up quickly and also scaling down quickly. So let's do it.
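The create step described above might be sketched on the command line like this; the registry path is a placeholder, since the demo's actual image location was not shown:

```shell
# Create a Knative service named "random" from a container image.
# The image reference below is illustrative only.
kn service create random \
  --image quay.io/example/random:1.0
```

kn then waits until the revision is ready and prints the route URL for the new service.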
C
So, as you can see on the right side, I've also opened the console a little bit, so that you see there's a visualization directly in the OpenShift console — there's this developer perspective that you can choose — and here you see that the service gets created. It really waits until it's ready to serve, and then you have your service up and running. It also shows you the URL that you can use to access your service; you get a route for that, and you can just use it here.
C
With curl -s, let's use the URL that is available here, and we pipe the output through jq to visualize it. As you can see, you get back a kind of unique ID and a random number, and if I call it again, I get a different random number. Okay, this is all fine, but you see, it's super easy.
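The call shown could look like this; the hostname is a placeholder for whatever route the cluster generated:

```shell
# -s silences curl's progress meter; jq pretty-prints the JSON body,
# which contains a unique ID and a random number.
curl -s https://random-demo.apps.example.com | jq .
```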
C
So actually, what I'm doing here now is setting the concurrency limit to 10, which means that a pod is able to serve 10 requests concurrently, and if there are more requests coming in at a higher frequency, then your number of pods scales up.
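A hedged sketch of setting that limit with kn — flag names have shifted between kn releases, so check `kn service update --help` on your version:

```shell
# Allow each pod to handle at most 10 requests at the same time;
# beyond that, the autoscaler adds pods.
kn service update random --concurrency-limit 10
```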
C
Also, in order to demonstrate these scaling events, I'm using a so-called autoscale window, which allows you to determine the window over which the autoscaling calculation happens.
C
So in this case, for example, if there are no requests coming in within six seconds, then your application scales down — at the moment you see that my application already scales down. The default is 60 seconds, but for the sake of the demo I'm using six seconds. And then I'm also adding an environment variable which has some meaning for my application.
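Those two tweaks together might look like this; the annotation key is the upstream Knative autoscaling one, while the variable name is made up for illustration:

```shell
# Shrink the autoscaler's decision window to 6s (default 60s)
# and add an environment variable the app can read.
kn service update random \
  --annotation autoscaling.knative.dev/window=6s \
  --env DEMO_MODE=true
```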
C
And now, if I look, I have a new revision. A revision is another concept of Knative: every change that you make to your service configuration results in a new revision, which means that you can really go back in time by just setting the traffic target to an older revision. So we have a new revision here, and every change I'm doing on the configuration will result in a new revision.
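Listing revisions and pinning traffic back onto an older one could be sketched as follows; the revision name is a placeholder:

```shell
# Show all revisions of the service.
kn revision list
# Send all traffic back to a hypothetical earlier revision:
kn service update random --traffic random-00001=100
```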
C
But at the moment, one hundred percent of the traffic is going to my latest revision here. Now I want to show you the scaling thing, so I have a command-line tool here which is called hey — it's a kind of mini load-testing tool — and I'm now running 50 concurrent requests directly against the service. And now let's see what's happening: you see here on the right side that it scales up immediately, and also here on the pods.
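The load run described above, assuming hey is installed and using a placeholder URL:

```shell
# 50 concurrent workers hammering the service route for 30 seconds.
hey -c 50 -z 30s https://random-demo.apps.example.com
```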
C
You see also that — actually, I would expect five pods to come up, but you will see that there are more. So maybe if you have a question why it's more, I think Markus can easily answer that; sorry, kind of a cliffhanger. Okay, now it's running, so you see, if I'm looking here, that there are a lot of new deployments coming up, and if I stop the load testing, you see that it's very, very elastic — it really goes down within seconds.
C
So, of course, typically you would have a longer window — not six seconds, but 60 seconds or longer — so that it doesn't scale down so immediately. So this is the way it can scale up and down based on traffic. And the final thing I wanted to show you is how you can make a rollout of a new version of your application, and how Knative and OpenShift Serverless can help you with that. For that, I'm just looking again at the revision list.
C
So, as you have seen, we have these two revisions. I'm now using service update: I'm taking one of these revisions here by name — I'm using the latest one — and I give it the name v1. So this is our v1 version. Now let me see... come on... yeah, here we go. So actually, tagging doesn't result in a new revision, because it's just meta information which you add. I think I just have done that, so I'm doing it again.
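Tagging a revision as described might look like this; the revision name is a placeholder:

```shell
# Attach the tag "v1" to an existing revision; this is metadata only
# and does not create a new revision.
kn service update random --tag random-00002=v1
```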
C
Hopefully I haven't killed it yet — not sure what's going on here — but I wanted to show you that you now have the tag v1, and then comes the final step.
C
What we are doing now is another service update, but this time I really want to go to the new version of my application, so in this case I'm using image version 20, which you see here. But at the same time I want to split the traffic: 50 percent to the old version, to v1 — so I'm using the tag here as the name.
C
So I have to copy-paste here, and I want to put 50 percent on latest, which is the new version I'm just about to deploy. Of course, a 50/50 split typically isn't realistic; what you typically do is give a much lower percentage to your latest version, to get a kind of canary release where you just want to try things out, and only if everything is fine do you scale up to 100 percent.
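The rollout with a split could be sketched like this; the image tag matches the demo's version 20, while the registry path is illustrative:

```shell
# Deploy the new image and split traffic 50/50 between the tagged
# old revision (v1) and whatever revision is latest.
kn service update random \
  --image quay.io/example/random:20 \
  --traffic v1=50 \
  --traffic @latest=50
```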
C
So let me do that, and you see here now on the right side — it takes a little while, because a new revision needs to be created, and probably the image also needs to be pulled from the registry.
C
This can take a bit, but now you see here on the right side — let me scroll down a little and move it over — the traffic split: 50 percent going to the old revision and 50 percent going to the new revision. And if I now do my curl on the same URL, you see that a new pod spins up, and we will see which version we hit.
C
We hit version one here, and I'm doing the same call again — it's again version one. It's not really a hundred percent deterministic, but in the long run you get a 50/50 distribution.
C
Okay, I think this is just to kick off the discussion. You see that with kn you can do a lot of things very easily, just with some commands. You can of course do everything on the OpenShift console as well, but you can also do this with regular declaration files, because Knative itself — and OpenShift Serverless — is based on custom resource definitions and custom resources, so you can also just create these files manually. Yep, that's it.
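The same service expressed declaratively might look roughly like this; the image path is a placeholder, and the autoscale-window annotation mirrors the demo's setting:

```shell
# Knative Services are plain custom resources, so they can be
# applied like any other Kubernetes manifest.
cat <<'EOF' | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: random
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/window: "6s"
    spec:
      containers:
        - image: quay.io/example/random:1.0
EOF
```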
B
Yeah, we don't have any questions queued up yet, so who's doing the next one?
B
Awesome, yes, please go ahead — but don't let that stop you from asking questions.
D
Let me go in here. Serving and Eventing have been installed, as well as something new that's now on the Serverless operator: Knative Kafka. When we go here, you can see that earlier today, as preparation, we installed Knative Kafka. The KnativeKafka CR basically distributes the two components that the Knative upstream community maintains for dealing with Kafka-specific workloads: the Kafka channel as well as the Kafka source.
D
And if I now go to my KnativeKafka installation here — I want to quickly show you the YAML overview of what I have installed using the operator — you can see that in the specification there is the option for the channel. I have not installed that here, but I did install the source.
D
So with the Kafka source, my demo will basically show you how we take messages that are sitting in an Apache Kafka topic, get them out of the topic, convert them into CloudEvents, and distribute them to various Knative Serving services.
D
Let me make this a little bit bigger. Okay, so I have a demo namespace here, and as preparation I installed the Knative Serving services already before this. As Roland was explaining before, when there is no workload, they are not sitting there idle and wasting CPU resources, etc. — they are basically on hold here.
D
Okay, you can see here I have that already there, but it's not running, as you can see in my watch window. Okay, let me go to my source folder and apply the source. I have prepared a little application on the side that was writing a few messages into a topic, and now, when I apply this YAML file, what will happen is that an instance of the Kafka source is created; it goes to the topic, receives the messages, and distributes them to the referenced Knative Serving service application here.
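The applied source might look roughly like this; the bootstrap server address and the sink service name are assumptions, not taken from the demo, and the apiVersion varied across Knative releases:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topics:
    - my-topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
EOF
```

The source reads records from the topic, wraps them as CloudEvents, and POSTs them to the sink's URL.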
D
Okay, so in the background — sorry for that, I was receiving messages already before — what I've done in the background is there is an application that is writing messages to the Kafka topic, randomly, and then, as you see now, the Kafka source was actually able to receive messages there, and one instance of the service came up. Now let's take a look.
D
I think I made a mistake before with the name — okay, I need to specify the container as well, the user-container. Okay. So what we see here — let me actually make that a little bit bigger as well.
D
What we see here is my Knative Serving service; it's basically an HTTP endpoint that is printing out the payload of the received CloudEvent. So we see a little bit of validation — we have a valid CloudEvent — and we have some of its context attributes, like the spec version: we are using CloudEvents version 1.0. And then, in the case of Kafka, the messages created by the Kafka source have this particular type.
D
You can see the source attribute, which identifies what is producing the messages. What the Kafka source does for that is basically give you the name of the namespace, the name of the component that is in there, and the actual ID of the topic. So we are receiving messages from our Kafka source, called kafka-source, with the topic my-topic. And you also see the CloudEvent attribute called subject.
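Put together, the context attributes described so far might look like this for one event; the values and the display service's name are illustrative:

```shell
# Illustrative CloudEvent attributes as emitted by a KafkaSource:
#   specversion: 1.0
#   type:        dev.knative.kafka.event
#   source:      /apis/v1/namespaces/demo/kafkasources/kafka-source#my-topic
#   subject:     partition:0#17      (Kafka partition and offset)
# Inspect them in the sink's logs:
kubectl logs -l serving.knative.dev/service=event-display -c user-container
```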
D
In the subject you also get a little bit of information on the partition of the Kafka topic, as well as the offset of the actual message, and then we have some tracing on there as an extension attribute within CloudEvents. And then we have the data, and the data here is basically some random JSON from an application that is producing meter events — as in IoT meter events, like temperatures and whether windows are open and whatnot in a building. That data is encapsulated as JSON and shown there in the data content, which basically holds the entire data of the CloudEvent object. Okay, so that's basically a demo of combining Apache Kafka — reading messages from a Kafka topic — and then distributing them to a Knative Serving service.
B
Well, thank you. So is Kafka our primary way to scale out an event-driven application if we need to handle a lot of traffic?
D
Oh sorry, I included the wrong — sorry, yeah. The benefit with Kafka is that a lot of people have it already, and the nice thing in the integration with Knative is that, with the Kafka source, you can basically access any existing topic that is there and read the native binary Kafka content, and then you can distribute it to your serverless application. And when you already have the Kafka infrastructure there, it also makes sense to bring in Apache Kafka as the backbone for your channel concepts.
D
So if you build on top of the simple artifacts, like a source: in Knative Eventing we have a few more concepts, where a source is not only able to sink directly to a Serving service, as we had now — it can also emit its events to other HTTP-addressable endpoints.
D
That could be, for instance, the broker or a channel directly, and when you already have Kafka there, the Kafka channel will basically receive the messages as an HTTP endpoint and persist them in Kafka, so you will not lose messages — they are stored in the Kafka topic. So that's the big benefit there.
B
Yeah, I mean, there's a lot of benefits to Kafka — I'm a fan of it as the database, in fact. Is anything else supported, if somebody doesn't want to use Kafka for some reason?
D
Knative Eventing does have various integration points, so the community has different sources for a lot of services. There is a GitHub source — you can get notified if you monitor a certain project, like when a PR has been opened, etc. There is the equivalent for GitLab, and also sources from the Apache Camel folks.
D
There is the source where you can basically run all of the Camel integrations as a source, so you can do things like dialing into their support for Telegram — you can interact through a Telegram agent and distribute messages to Knative Serving services, or your serverless applications.
D
There is also a source that is emitting events for Prometheus, so every payload that you have in your monitoring you can also distribute to your application using that source. And then there are also interesting interactions going on when you build on top: with the notion of the broker, for instance, you can have multiple sources sending events to a broker, and then you can define a so-called trigger, and based on a trigger you can filter on the metadata attributes of the CloudEvent. So, like I was showing you before: based on the type of the CloudEvent, you can actually filter — and there's also an API server source for Kubernetes.
C
Maybe it's also worth mentioning here that you can easily create your own sources as well. It's super easy, for example, to take your deployment — you can take your regular Kubernetes application and then create a so-called sink binding, which allows your deployment to act as a regular source. Your application gets deployed, and an environment variable called K_SINK gets injected into it; this is just a URL, and it tells the deployment that it should send CloudEvents to this URL if some event is happening.
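A minimal sketch of such a sink binding, assuming a Deployment named heartbeat and a default broker as the sink (both names are made up, and the apiVersion varied across Knative releases):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: sources.knative.dev/v1alpha2
kind: SinkBinding
metadata:
  name: heartbeat-binding
spec:
  subject:
    apiVersion: apps/v1
    kind: Deployment
    name: heartbeat
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
EOF
# The controller injects K_SINK into the Deployment's pods;
# the app then POSTs CloudEvents to $K_SINK.
```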
C
Yeah, sure, maybe I can take that. So, first of all, Knative is kind of a basic platform on top of Kubernetes; you could call it a serverless framework which is container-based. And of course you have heard about Functions-as-a-Service, which is a paradigm that is used either together with serverless or as an alternative to it. But in some sense, serverless is a deployment model, which allows you to do this very flexible scaling up and down, whereas Functions-as-a-Service is more like a programming model, where you really have a model for how you can create very small snippets of code and deploy them — and these do not have to be containers. For example, AWS Lambda...
C
...doesn't use containers for that — you just throw your function code, your snippet, over to the platform, and then it runs it. But you can of course take Knative as a basis, and on top of that you can create a Functions-as-a-Service platform. So Knative itself is not a Functions-as-a-Service platform, but it enables that, so that anybody else could build one — and actually this is funny, because with OpenShift Serverless 1.11 we did something similar.
C
I think this is more or less the one approach for how you can distinguish between serverless and Functions-as-a-Service. And if you like, I can show later something about this functions stuff — this new one, which is really brand new with 1.11 and might be of interest, because it's one way you can leverage the Knative platform as the basis for a Functions-as-a-Service platform. It's also baked into the kn client, and if you like, I can show you how this works later.
B
Oh, that would be awesome, actually, because one of my next questions was going to be: OpenShift Serverless 1.11 was just announced today, and so one of my questions was going to be what's in 1.11 — oh wait, although we have a question coming in from the stream before we get into that. Somebody has a question about the Functions-as-a-Service, and they said: does it need Knative to run, or can it run without Knative?
C
Yeah, as mentioned, Knative is kind of the basis for that, so you require Knative as the underlying platform for the Functions-as-a-Service. It's really a kind of stack: on top you have the Functions-as-a-Service functionality, then there is Knative, and then at the bottom you have Kubernetes.
B
Okay, so what is in 1.11?
D
One thing is the KnativeKafka CRD type that I was already showing you in the console, which you can use to actually install the Kafka source I was demonstrating, or the Kafka channel. So that's new. We had a separate Kafka operator before, which is deprecated right now — it was installing the Knative Kafka channel and the Knative Kafka source, but it's deprecated now and that functionality is built into the product instead. And then, as Roland already mentioned, the developer preview for the functions.
B
Yeah, sure, go ahead — while you're setting that up, one question: the functions are in developer preview, so what still kind of needs to be done there before we're ready to tell people to try this out in production?
C
Yeah, I can take that. So actually, developer preview for us means it's really the first thing that we are sending out to the public, and it's really also a kind of ask for feedback. So what we are doing here now — let me maximize that — what we are really working hard on is the user experience.
C
This is something which you have here if you use kn help. So this is again the Knative client, and we have baked this into the client itself: the client has a kind of plug-in architecture, so you can easily use external plugins — like with kubectl, you can use external commands as plugins for kn — but there's also the possibility to inline plugins, so that they are part of the single binary, which makes it easy to distribute.
C
So if you download kn directly from the OpenShift console here, then you already get the functions functionality included, and this comes with this func subcommand. So what I can do now here is kn func help — there are several options, and now we are entering the area that might change a little bit, but you have two phases: first of all, you have a phase where you can scaffold your application.
C
What I'm doing here now is kn func create, and I can show here that, for example, with the runtime flag you can select the runtime, which means when you scaffold, you decide which kind of language or framework you want to use.
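Scaffolding as described might be sketched like this; the flag spelling changed across early func releases, so treat the exact form as an assumption:

```shell
# Create a Node.js function project in ./myfunc
kn func create --runtime node myfunc
```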
C
If you look here, it just looks like any other Node.js project out there, and then you have a kind of index.js, which is the entry point for your function. Okay, this one is a little bit longer because it shows you some examples — and again, the content of this example is still to be improved — but in the end you are just exposing one of the functions, and this is the one which gets called. You can have different signatures.
C
You can have signatures which include CloudEvents, so that you can process CloudEvents and return CloudEvents — which then get worked on, for example, by the broker — but you can also use just regular REST calls. So, okay, then you write your function code in here, and then the other step is just kn func build, which will create the image.
C
Okay — you also need to provide a registry where this image is pushed to, and then I'm using --verbose to be a little bit more verbose, so that we see something here. This is actually using Buildpacks under the hood, so that it creates the container images for you.
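The build step might look like this; the registry value is a placeholder:

```shell
# Build a container image for the function using Buildpacks,
# targeting a (hypothetical) registry namespace.
kn func build --registry docker.io/example --verbose
```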
C
At the moment, it's really calling against the local Docker daemon running on my machine, but for the future — and also for the tech preview of this functionality — we are planning to move this onto the cluster, so for on-cluster builds we leverage, for example, either S2I or Buildpacks running on the cluster, etcetera.
C
Okay, the image has been built, and what we can do now is just deploy it. At the moment the deploy also includes the build phase — this can be tuned, whether you want to build again or not; you see Buildpacks has very good caching, so that shouldn't take too long. After the image has been built and pushed to the registry, a Knative service is created for you, and this created service will then reference the image.
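The final step described above:

```shell
# Build (if needed), push the image, and create or update the
# backing Knative service in the current namespace.
kn func deploy
```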
C
That's the one that has just been created, so your code is in there. It's pushed now, and let's see — now you see the steps are coming: the service gets deployed; here on the top we still have our watch on the pods running; the container gets started with the code that I just pushed to the registry. It takes a bit, because in this example the image is really going over docker.io.
C
There we go — and you see this is the standard response; it's not very meaningful. Actually, it returns exactly what you have put in your function. So the idea is really: how can you get from your local code to a running application, a Knative service? There's not much more to it. The really important thing is that we want to improve the user experience here, and the way that you can get quickly onto your cluster — and that you can also run and test locally.
B
Awesome. Okay, so people should try that out with the new Serverless. So we have a question here for the whole panel, because it's going to be more of an ideas thing: Nemo wants to know how you can integrate event-driven applications, or Functions-as-a-Service — maybe both — with a GitOps-based workflow.
F
So you could use what we have in OpenShift as OpenShift Pipelines, which is based on the Tekton project, to build an image from source code, output that image, and then output a service definition out of your pipeline, to continuously update the service that you're running — basically in the same scheme that Roland has shown in the first demo, where, if you update the image, then you update the service, and then there's a rollout from the old to the new version, and so on.
F
Every time you push a commit, that happens again and again. As far as I'm aware, it's on our roadmap — or at least on our ideation sheet — to look at a very tight integration of exactly what was mentioned in the question, because that's where a lot of the value in the end will be: tightly integrating all of these nice systems with nice glue, to make it very usable and seamlessly integrated with one another. Yeah.
C
So the idea is really that you pick up your stuff from GitHub or somewhere, process it with the Tekton pipeline, as Markus mentioned, and then, as one of the final steps, you just deploy it to your cluster. And one has to say that kn — or kubectl, of course — can also work with the resource descriptors, the artifacts, which you typically check in to a Git repository. So this is the way we approach GitOps, in the sense that it's really a combined approach.
F
Well, we measure concurrent requests, so we do look at how many requests are in the system at a given time. Now, there's quite a bit of magic in the system to actually correctly measure this, because you could imagine requests that take nanoseconds or microseconds or whatever, and we totally want to support these use cases as well.
F
If you were thinking about concurrency, it's exactly what you would expect the system to do, I guess.
B
Oh yeah, well, what I actually was wondering is — because we're usually talking about REST requests, right — whether we're saying, okay, X number of requests within a particular window of time, treating requests as a momentary thing, or whether we're actually checking whether or not a response has come back to the client.
F
That's a good point, yeah. The entire system is based on the assumption that all of your workload is done in the context of a request. So a request coming in is a signal for us that concurrency is going up.
F
A response being sent back is a signal for us that concurrency is going down by one. You're not supposed to do anything besides that — you're not supposed to do any background tasks or stuff like that in your service which would impact the CPU cycles and everything in a meaningful way. You can obviously have asynchronous cache-keeping and stuff like that.
F
No, we have written custom semaphores to deal exactly with that issue — I personally have written one of them, and it was a pretty nice experience — but yeah, we are squeezing every nanosecond out of exactly these bits. We do want to be fast.
F
In terms of that: people using the system don't have to care about it at all. It's a sidecar that gets injected into your pod that's handling all of that accounting and surfacing the metrics in a way that we can consume them.
F
For instance, we also optimized that: we were using Prometheus-based endpoints and changed that to protobuf-based endpoints, to be able to scrape those out quicker and waste fewer CPU cycles on gzip-encoding them, stuff like that.
B
Cool. Okay, we are almost at the end of our time here, so I wanted to ask the panel: if you have any final thoughts about what's going on, about the future of serverless, about 1.11, go ahead and share those.
C
Yes, sure. So actually, personally I'm very excited about the new functions functionality, so to say. It's really a step forward, I think. There's a lot of things which need to be explored in this space, because getting out the best developer experience is really one of my holy grails, and I'm really hoping that we are going in the right direction there. But of course we need your feedback — we need everything, even if you say, okay, this is not the right way to go.
D
Yeah, this is true for the Eventing parts too. It also has good APIs, in the sense that Roland mentioned the sink binding already: you can basically bring in your existing workload, like your deployment, with a little bit of a minimal tweak, where you basically need the HTTP endpoint for the sink injected, and then you combine your deployment with the sink binding type, which basically allows you to produce events for the eventing system. So that's also a good way to integrate third-party systems without the need of writing a full-fledged controller for your own custom sources. That's definitely a nice aspect of it as well.
G
Well, one thing that I am glad to see being explored is functions and how they relate to Knative. If you're watching this and you're interested in serverless, or interested in functions, and you're maybe on the fence about whether you should try making the leap to functions, or should try wrapping things that you already have in a Knative service, we'd love to speak to you — you can reach out to us on Twitter.
H
Yeah, so the thing is — first of all, there is recently a new feature called multi-container that has been introduced as part of Knative. I think it's been integrated in OpenShift Serverless also, so please try to use it and let us know the feedback, so that we can improve it if needed. It's in alpha state; to move it to beta, we need to add it in the kn CLI and from the docs perspective.
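A rough sketch of what the multi-container shape might look like — this feature was alpha at the time and may need to be enabled via a feature flag, and the names and images below are placeholders:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: multi-demo
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/frontend:1.0
          ports:
            - containerPort: 8080   # only one container may expose a port
        - image: quay.io/example/sidecar-helper:1.0
EOF
```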
B
Awesome. Well, thank you so much — this was super informative. If people have further questions about OpenShift Serverless or about Knative, they can find us on the Red Hat channel on CNCF Slack; they can also go to the Knative Slack, the official community Slack, or the Knative mailing lists. And please, give it a try — try building your applications around it, Nemo and Rahm.
B
Oh wait — we'll take one minute for another quick question, if anybody knows the answer to this: Kristoff wants to know whether or not we have integration with something like Koku, that could let you predict what your costs are going to be for serverless.
B
Okay, so, Kristoff, that sounds like a good thing for you to raise in a community forum, because obviously this is a question a lot of people are going to have. And in the meantime, that link that I sent for Rahm and Nemo — you can use that as well.