►
From YouTube: OpenShift Commons Briefing on Knative Project - Paul Morie, Roland Huss , Matt Moore, Scott Nichols
Description
OpenShift Commons AMA on Knative
Paul Morie and Roland Huss (Red Hat)
Matt Moore, Scott Nichols (VMWare)
2021-01-18
hosted by Diane Mueller (Red Hat)
OpenShift Commons Briefing https://commons.openshift.org/events.html
Knative https://knative.dev/
A
All
right,
everybody
welcome
to
our
openshift
commons
ama.
Ask
them
anything,
not
me
for
k
native
and
today
we
have
paul
murray,
matt,
moore
scott
nichols
and
roland
huss.
All
participants
in
this
wonderful
project
who
are
going
to
give
us
an
introduction,
tell
us
a
little
bit
about
the
road
map
and
hopefully
leave
a
little
bit
of
time
at
the
end
for
all
of
your
questions.
So
we
are
live
streaming
this.
A
So
if
you
are
in
any
one
of
the
multiple
live
streams
like
facebook,
youtube
or
twitch,
or
I
think
periscope
even
now,
throw
your
questions
in
that,
and
we
will
aggregate
them
here
and
force
these
people
to
answer
them.
So
without
any
further
ado,
paul
introduce
yourself
and
your
cohorts
and
let's
find
out,
what's
going
on
in
k
native
land.
B
All
right:
well,
I'm
paul
maury
everybody
and
I
work
on
k
native
at
red
hat,
and
why
don't
we
just
have
everybody
introduce
themselves,
so
you
can
be
sure
that
you're
satisfied
with
your
introduction
and
if
you're
not
satisfied,
you
have
only
yourself
to
blame
matt.
Why
don't
you
go
ahead
next.
C
Sure
hi,
I'm
I'm
matt
moore.
I
work
at
vmware,
I'm
one
of
the
folks
who
started
k
native
now,
a
little
over
three
years
ago
at
google
and
I've
been
doing
container
tooling
stuff
for
what
feels
like
forever
nice
to
meet
a
lot
of
you.
I'm
gonna
toss
the
hot
potato
to
roland.
D
E
Hey
howdy,
I'm
scott
nichols.
I
work
on
k
native
at
red
hat,
formerly
google.
I
work
on
the
eventing
side,
mostly
focused
around
source
creation.
So
that's
that's
how
events
get
into
the
cluster
and
make
interesting
things
that
you
can
do
with
events,
and
I
also
contribute
to
the
cloud
events
cncf
project.
E
B
A
B
Hey
there,
it's
me
all
right,
yeah.
I
guess
I'll
just
hold
this
if
you
want
to
see
my
face,
so
what
is
it?
One
of
the
one
of
the
things
that
that
is
sort
of
in
in
the
air
in
our
industry
is
the
confusion
between
two
related
but
different
things
and
those
two
things
are
serverless
and
faz,
or
functions
as
a
service
and
I've
I've
seen
k-native
referred
to
as
a
faz
a
few
times.
B
It's
really
not,
and
here's
how
I
would
put
the
difference
between
the
two
and
any
of
my
any
of
my
co-presenters
feel
free
to
riff
on
me
or
correct
me
if
I'm
wrong
or
whatever,
but
I
would
articulate
serverless
as
being
essentially
request,
driven
and
automatic
scale
to
zero
as
being
two
identifying
and
key
properties.
B
So
when
we
think
about
what
faz
is
it's
usually
a
lot
more
than
serverless?
I
think
to
me
personally,
my
own
opinion.
Faz
implies
serverless,
but
serverless
doesn't
necessarily
imply
faz.
So
when
we
talk
about
k-native,
we're
going
to
be
focused
on
serverless
elements
and
we're
not
going
to
be
focused
as
much
on
those
experiential
elements
that
are
the
difference
to
me
personally,
between
serverless
and
faz,
where
faz
is
something
more
than
serverless,
it's
got
a
lot
of
connotations
of
developer.
B
Experience
builds
and
sdlc
bound
up
into
it
that
are
more
than
just
serverless.
So
let's
talk
a
little
bit
more
as
I'm
double
fisting
these
these
devices.
Since
I've
got
my
phone
in
my
laptop
I'll,
see
if
I
can
operate
them
both
correctly,
what
is
k-native
really
now
that
we've
kind
of
maybe
close
to
hit
bedrock
with?
Maybe
if
we
were
thinking
it
was
a
faz
we're
sort
of
recognizing
that
it's
not
exactly
faz
and
it's
more
serverless
k-native
is
really
a
kubernetes
extension
that
is
focused
on
developer
productivity.
B
So
when
we
talk
about
extending
kubernetes,
I'm
sure
this
will
be
familiar
to
a
lot
of
people
in
this
audience.
That
kubernetes
extension
looks
like
a
kubernetes-like
api,
so
declarative
api
surface
usually
implemented
with
crds,
and
that
is
what
we
use
in
k.
B
Native
project
is
crds
and
the
accompanying
features
of
cube
like
web
hooks
for
conversion
for
validation
for
mutation
that
provide
a
kubernetes-like
api
and
api
experience
without
without
adding
code
into
cube,
so
we're
extending
kubernetes
to
solve
these
boring,
but
hard
problems
like
scaling
to
and
from
zero
and
scaling
on
demand
based
on
requests
and
having
a
history
of
revisions
for
our
application
and
routing
events
and
stuff
like
that.
B
So
things
that
boring
is
really
not
is,
is
not
being
overly
generous,
we're
maybe
being
facetious
when
we
say
boring
but
hard,
but
maybe
things
that
you
would
repeat
over
and
over
again,
I've
certainly
implemented
some
of
these
things
myself
in
previous
lives
before
kubernetes,
so
things
that
you
might
find
yourself
implementing
over
and
over
again
they're
tough
they're
hard
to
get
right.
They
take
they
take
a
lot
of
engineering,
know
how
to
get
exactly
right
and,
in
my
own
personal
impression,
that's
sort
of
the
value
proposition
of
k-native.
B
Is
that
we're
doing
a
lot
of
these
things
that
you
would
have
to
worry
about
yourself,
because
kubernetes
doesn't
already
do
them
and
giving
you
the
tools
to
kind
of
really
just
focus
on
your
business
logic
and
what
you
want,
your
application
or
the
system
that
you're
building
to
do
so.
There's
two
key
pieces
here
and
actually
this
this
is
an
outdated
slide
and
roland.
B
I'm
just
going
to
apologize
to
you
and
the
folks
that
work
on
the
client,
because
I've
left
off
the
client,
but
the
two
key
functional
pieces
that
we're
going
to
talk
about
today.
B
We're
also
going
to
talk
about
the
client,
our
serving,
which
is
about
the
like
scale,
to
and
from
zero
scale
up
to
n
on
demand
history
of
immutable
application
revisions
that
we
can
split
traffic
between
for
any
number
of
different
reasons
that
we
want
to
do
that
and
eventing,
which
is
about
connecting
in
a
loosely
coupled
and
late,
binding
way,
event,
producers
and
consumers.
B
I
don't
want
to
say
too
much
more
about
either
of
these
to
so
that
I
don't
steal
the
very
impressive
bolts
of
thunder
that
my
co-presenters
have
for
us
today,
but
that's
those
will
be
two
of
the
main
focuses
and
then
we're
also
going
to
hear
about
the
client
and
that's
that
is
those
are
the
high
notes.
So
co-presenters
do
you
want
to
just
add
anything
here
before
we
move
on,
or
is
that
sufficient
for
you.
C
B
2021
off
to
a
good
start
nailed
one
thing
already:
let's
move
on
to
a
little
history
lesson
so
matt,
why
don't
you?
Why
don't
you
do
the
first
couple
bullets
here
because
matt
is:
is
one
of
the
founders
of
k-native
project.
C
Sure
so
this
is
this,
is
you
know,
there's
always
a
lot
going
on,
but
this
is
sort
of
a
highlight
reel
of
some
of
the
sort
of
major
events
throughout
the
course
of
the
project.
So
back
in
sort
of
fall
of
2017,
we
started
some
of
the
really
early
prototyping
of
you
know
what
paul
described,
trying
to
sort
of
look
at
what
higher
le
level
abstraction
on
top
of
kubernetes.
For
you
know,
developer
productivity,
serverless,
fast
sale
thing
would
look
like
it
launched
publicly.
C
You
know
we,
you
know
tons
of
folks
joined
in
red
hat
joined
in
pivotal
joined
in
you
know,
lots
and
lots
of
folks
were
we're
discussing
it
and
it
launched
publicly
in
july
of
2018
and
a
lot's
happened
since
then.
C
So
at
the
time
we
also
had
another
area
called
canada
build,
which
was
intended
to
help
solve
this
sort
of
source,
oriented
nature
that
people
sort
of
traditionally
think
of
as
being
a
key
part
of
sort
of
fast
workloads
and
in
march
of
2019
that
that
spun
out
as
its
own
project,
which
you
may
know
now
as
tekton.
C
So
you
know
other
major
milestones.
The
serving
api
had
their
v1
revision
in
september
of
2019
and
one
of
the
one
of
the
big
things
that's
been
sort
of
a
recurring
theme
laced
through
some
of
the
more
technically
oriented
things
in
here
has
been
the
topic
of
sort
of
governance
and
paul's,
been.
C
You
know,
one
of
the
one
of
the
big
advocates
of
this
on
steering
and
in
may
of
last
year
we
had
our
first
toc
elections
and
marcus
nia
and
grant
joined
the
toc,
and
we
now
have
a
sort
of
vendor-neutral
representation.
No
one
company
has
more
than
two
of
the
five
seats
on
the
technical
oversight
committee,
which
was
a
really
interesting
milestone
in
the
sort
of
open
aspects
of
k-native
as
a
project.
C
Shortly
thereafter,
in
in
summer
of
2020,
the
eventing
api
went
v1
and
in
november
of
gonna
say
this
year.
This
past
year
we
had
our
first
steering
elections
where
paul
won,
one
of
the
elected
seats
and-
and
I
guess
who
is
one
of
the
other
folks
who
started
the
project
a
few
years
ago-
won
one
of
won
the
other
elected
seat.
C
So
we
now
have
both
a
steering
committee
and
a
technical
oversight
committee,
where
no
one,
no
single
vendor
you
know
has
all
the
say
right.
So
it's
very
exciting
times
for
the
project.
B
Yeah
one
thing
that
I'll
just
add
is
that,
when
matt
talks
about
the
technical
oversight
committee
and
steering
committee
is
that,
in
addition
to
the
vendor
neutrality
element
that
matt
described
the
folks
on
those
committees
are
serving
as
individuals
not
as
employees
of
their
vendors.
So
in
this,
in
the
sense
that
it's
vendor
neutral
in
the
sense
that,
like
you,
we
can't
have
more
than
two
people
employed
by
the
same
vendor.
B
B
Yes,
anybody
else
any
of
my
co-presenters
want
to
add
anything
to
this
slide.
B
Excellent,
okay,
all
right,
so
we
got
the
first
meme
of
this
presentation
cloud
events.
I
had
a
couple
different
variations
of
this
meme.
B
One
of
them
is
the
one
that
you
see
the
other
one
said
cloud
events
I
wanted
to
put
scotty's
face
on
it,
but
I
didn't
have
enough
time
and
I
think
dan
pop
generously
offered
to
do
it,
and
then
I
forgot
about
it.
So
this
is
what
you
get
we'll,
maybe
make
one
that's
even
more
funny,
but
wanted
to
say
a
few
words
about
cloud
events.
B
B
Scott
in
particular,
I
think
from
the
group
of
co-presenters
that
I
had
today
is
very
active
in
the
cloud
events
space.
So
scott,
you
might
want
to
add,
add
stuff
to
to
what
I
say
after
I'm
done,
but
the
reason
that
I
mention
cloud
events
is
that
when
we
talk
about
things
being
event
activated
and
event
driven
driven
in
k
native-
and
you
know
in
particular,
eventing
there's-
probably
a
fairly
obvious
connection
there
with
eventing.
B
The
message
format,
that
is
the
lingua
franca
of
k
native
project,
is
the
cloud
event
format
and
it
is
supposed
to
facilitate
interoperability
between
different
producers
and
consumers
and
be
a
vendor
neutral
format
that
can
be
adopted
and
it's
what
we.
What
we're
using
is
an
event
format
inc
in
in
k
native
scott.
Do
you
want
to
add
anything
to
that
anything
important
that
we
should
know
before
we?
We
continue
our
journey
through
this
introduction
to
k-native.
E
Yeah,
I
think
the
the
only
missing
important
bit
about
cloud
events
is
that
the
specification
describes
how
to
turn
the
that
the
core
nugget
of
your
event
between
protocols
and
back
to
this
protocol-less
version
of
that
event,
and
so
the
reason
this
is
important
is
I
can
go
and
write
my
faz
have
it
based
on
cloud
events
and
then
all
of
a
sudden,
I'm
not
locked
into
the
protocol.
I
choose
when
the
project
started
so
k
native
serving
is
really
about.
E
E
So
you
could
be
running
in
kafka
in
production,
but
maybe
nats
on
your
desktop,
because
it's
a
lighter
weight
to
run
or
something
like
that
or
even
pure
http.
E
So
we
get
a
we
get
away
with
this
because
we
depend
on
cloud
events
to
be
this
kind
of
like
neutral
converter,
between
these
protocol
specific
eventing
formats,
and
this
conical
form.
B
All
right:
well,
let's
talk
about:
let's
do
a
little
bit
more
in
depth
on
serving
so
we
can
learn
a
little
bit
more
about
how
that
scaling
works.
Matt.
I
think
this
is
your
slide.
What
does
k
native
serving
get
me
sure.
C
So
so
I
think
you
frame
this
well
at
the
beginning
right
talking
about
a
lot
of
the
stuff
you
have
to
do
in
terms
of
sort
of
you
know,
there's
a
lot
involved
to
launch
a
production
service
on
top
of
kubernetes
and
a
lot
of
you
know
I
I'd
use
the
word
boilerplate
in
terms
of
the
the
kinds
of
things
you
need
to
set
up
to
sort
of
operate,
a
service
right.
C
You
you
have
deployments,
you
have
services,
you
have
ingresses,
you
have
hpas,
you
have
all
of
these
things
right
that
you
know
you
need
to
do
when
you're,
adding
new
services
and,
as
folks
shift
from
sort
of
you
know,
big
monolithic
applications
to
you,
know
the
new
hotness
micro
services
or
even
functions
right.
You
end
up.
You
end
up
needing
to
do
that.
C
A
lot
more
right,
and
so
the
way
I
like
to
think
about
serving
is
sort
of
reducing
the
incremental
complexity
of
launching
new
services,
and
you
know
having
this
goal
of
enabling
developers
to
effectively
focus
on.
C
You
know
the
business
value
they
want
to
provide
in
those
services
right
so
really
with
k-native
serving
what
you
bring
is
just
a
container
image
that
has
your
http
based
application
in
it
and
what
you
get
is
you
get
a
dns
endpoint
for
your
application,
possibly
exposed
externally.
C
You
know
we
if
you've
configured
automatic
tls,
these
will
be
tls
terminated
endpoints
without
you
developers
having
to
do
anything
as
paul
mentioned
earlier,
as
you,
you
know,
create
changes
in
your
application.
Each
each
version
of
your
application
is
stamped
out
as
what
we
call
a
revision
over
time.
C
The
the
next
slide
sort
of
illustrates
a
little
bit.
What
makes
this
a
powerful
concept
is,
it
enables
you
to
reason
about
sort
of
versions
of
your
application
over
time,
and
this
is
most
useful
when
you
want
to
say
canary
sending
some
traffic
to
a
new
version
of
your
application
or
all
of
the
traffic
to
a
new
version
and
roll
forwards
and
backwards
in
time,
depending
on
your
production
needs.
C
You
go
back
for
just
one.
Second,
I
just
want
to
make
sure
I
yeah
okay
and
I
think
the
two
other
really
interesting
things
are
request
based,
auto
scaling.
So
as
your
application
gets
more
or
less
traffic,
and
this
may
be
because
you
were
rolling
out
a
new
version
or
not
the.
We
will
basically
right
size,
your
application
and
you
know,
have
10
replicas,
20,
replicas
or
even
0
replicas,
depending
on
sort
of
the
volume
of
traffic.
C
Your
application
is
serving
the
last
thing
that
we
do
that
I
think
is
really
interesting
to
call
out,
and
this
is
to
enable
those
sort
of
fast
style
use
cases
or
what
folks
think
of
when
they,
they
think
of
fast
right
with
with
your
lambdas
or
your
google
cloud
functions
or
your.
Why
not?
C
A
lot
of
these
fast
models
have
this
ability
to
have
the
the
sort
of
runtime
layer
take
care
of
concurrency
control,
and
so,
if
I
want
to
say
only
let
one
request
through
to
each
instance
of
my
application
at
a
time,
you
can
do
that
through
this
idea
of
container
concurrency
and
it's
one
of
those
things
that
we
have
built.
So
your
application
doesn't
need
to
deal
with.
C
You
know
concurrency
control,
which
can
be
tough
to
get
right,
especially
when
you
start
to
blend
it
with
things
like
load,
balancing
and
auto
scaling
and
getting
really
good
performance
out
of
some
of
those
things
so
yeah
so
yeah.
So
the
next
slide
really
illustrates
the
resource
model
here.
So
I
mentioned
the
sort
of
one
resource
that
you
need
to
deal
with
for
launching
new
services.
This
is
the
service
resource
in
its
simplest
form,
you'll
pretty
much
just
give
it
a
container.
C
This
exposes
an
http
endpoint
under
the
hood.
It
creates
a
what
we
call
a
configuration
resource
which
tracks
the
sort
of
latest
state
of
configuration.
For
you
know
what
what
is
running
and
those
revisions
are
that
history
of
changes
to
that
configuration
resource.
C
I
I
like
to
make
the
analogy
of
revisions-
are
sort
of
like
git
commits
and
configurations
are
sort
of
like
the
floating
head
of
your
git
branch,
and
so
you
know,
as
you
make
changes
to
your
branch
new
commits
happen,
but
the
old
commits
are
always
there,
and
so
you,
you
can
always
sort
of
reference.
C
Those
older
commits,
if
you
need
to,
and
so
the
route
is
what
controls
where
traffic
is
sent
over,
that
history
of
configuration
of
revisions,
and
so
you
can
either
have
us
automatically
track
the
latest
or
you
can
you
know
if
you
want
to
sort
of
take
complete
control,
you
can
you
know
control
percentage
based
rollouts,
you
know
to
distribute
traffic
across
some
number
of
revisions,
and
you
know
do
one
percent,
two
percent.
C
You
can
even
do
zero
percent
splits
and
do
what
we
call
tagging
if
you
wanted
to
sort
of
pin
but
qualify
a
new
revision
prior
to
sending
it
any
of
your
sort
of
main
traffic
load.
So
you
can
do
some
very
powerful
and
sophisticated.
You
know
canarying
and
qualification
prior
to
rolling
things
out,
but
the
configuration
to
do
these
things
is.
It
ends
up
being
typically,
quite
quite
small,
and
so
we
take
a
lot
of
the
complexity
out
of
some
of
these
things
that
you
know
can
get
very,
very
complicated.
C
So
next
slide.
I
think
it's
our
next
meme,
if
I'm
not
mistaken.
Okay,
yes,
so
this
is
this.
Is
my
favorite
bit
of
innovative
flood
right?
So
when
we
first
launched
this
was
actually
true,
we
did
actually
need
istio,
but
one
of
the
pieces
of
feedback
we
got
pretty
quickly
was
that
you
know
there
are
other
networking
layers
out
there.
You
know
some
folks
don't
want
sort
of
the
full
mesh
style,
networking
layer-
some
you
know
just
want
to
deal
with.
C
You
know
ingress
style,
networking,
and
so
one
of
the
things
that
we
did
was
we
built
an
abstraction
between
sort
of
that
sort
of
describing
what
we
need
from
the
networking
layer.
Where
you
know,
kubernetes
ingress
wasn't
quite
cutting
it,
and
I
think
everyone
has
sort
of
accepted
that
two
kubernetes
ingress
v.
You
know
now
v1.
It
was
v1
beta
1
at
the
well
at
the
time
and
for
what
seems
like
forever.
But
so
we
built
an
abstraction
that
you
know.
C
C
So
so,
yes,
we
do
not
need
istio.
Istio
is
just
one
way
of
running
k
native.
Do
you
want
to
add
anything
to
that
paul.
B
You
know
the
the
thing
that
that
I
wanted
to
add
before
we
move
on
is
just
that,
and
I
I
don't
think
we
touched
on
this,
but
it's
important.
The
the
service
api
and
configuration
apis
are
subsets
of
the
pod
spec,
and
so,
when
we
see
the
demo
later
in
our
talk,
let's,
let's
just
make
it
make
sure
that
we
highlight
that
so
that
people
can
see
it
in
action.
B
The
reason
I
mention
it
is
because
I
I
suspect
that
a
lot
of
a
lot
of
deployments
that
people
have
today
would
translate
directly
into
services
if
folks
wanted
to
try
that
out
and
see
how
their
existing
deployments
work
in
an
event
driven
mode
where
they're
scaled
down
and
back
up
depending
on
load.
But
otherwise
I
think
you
covered
it
very
thoroughly
and
I
will
just
go
ahead
and
advance
the
slide
now
to
the
roadmap.
C
Okay,
so
this
is
this
is
a
sort
of
taste
of
some
of
the
things
we're
working
on.
C
Excuse
me,
sorry,
I
had
a
tickle
in
my
throat
so
see
these
are
some
of
the
things
that
we
have
sort
of
cooking
in
various
stages
of
development.
One
of
them
is
domain
mapping,
the
idea
of
being
able
to
assign
sort
of
a
vanity
url
to
your
k
native
services.
So
you
don't
have
that
food.bar.example.com!
C
You
can
have
like
you
know,
myawesomeblog.mattmore.io
and
you
know,
assign
you
know
proper
dns
names
with
now
tls
termination
in
front
of
your
k,
native
services.
C
So
this
is
an
alpha,
I
believe
it's
available
in
our
dot
19
and
our
dot
20
releases,
which
1.20
is
hot
off
the
presses,
dot,
20
added,
auto
tls
to
it.
This
is
an
alpha
api.
So,
if
folks
want
to
give
feedback
on
this,
we
would
really
appreciate.
E
It
is
magic
by
the
way
it's
it's
one
of
the
coolest
features
that
k
needed
shipped
in
a
long
time.
I
am
so
excited
about
it.
It
basically
what
it
results
in
is:
tls
terminated
pet
project
domains
across
your
cluster.
It's
amazing.
C
Yes,
finally,
something
to
do
with
all
those
domains
you've
been
buying,
so
one
of
the
other
really
cool
things
that
I
like
that's
been.
This
has
been
in
the
works
for
a
few
releases.
Now
this
idea
of
gradual
rollout.
So
we
we
support
very
fine
grain
traffic
control
where
you
can,
you
know,
take
revisions
directly
and
split
across
them
with
pretty
fine
grain,
but
one
of
the
most
common
modes.
C
We
see
folks
doing-
and
this
is
you
know,
when
you're
getting
started-
you
sort
of
just
want
to
roll
out
to
the
latest
all
of
the
time,
but
depending
on
how
much
traffic
you're
getting
if
we
just
shift
things
over
in
one
big
swoop,
you
know
our
ability
to
scale
from
zero.
To
you
know.
C
Huge
number
might
be,
you
know,
limited
by
factors
at
the
kubernetes
layer,
and
so
what
the
gradual
rollout
project
is
doing
is
basically
making
it
smarter
about
sort
of
being
able
to
shift
traffic
to
that
over
some
amount
of
time
that
you
have
specified,
and
that
way
you
know,
as
you
start
to
scale
up
to
bigger,
you
know
deployments
you
can
you
know
not
drop
traffic
as
you're
rolling
out
new
versions
without
needing
to
you
know,
do
your
your
own
whole
complex
orchestration,
one
of
the
one
of
the
last
things
I
wanted
to
touch
on.
C
C
When
that
lands,
it
does
meet
our
needs
and
we
can
retire
that
abstraction
and
leverage
you
know
just
raw
kubernetes
to
do
a
lot
of
you
know
what
we
want
and
then
two
aspects
that
we
will
be
sort
of
pushing
on
forever
are
scaling
in
every
dimension.
You
can
imagine
as
well
as
request
latency,
so
I
will
hand
things
I
think
back
over
to
paul.
I
think
that's
mine.
B
Yeah
we've
got
a.
We
got
a
couple
questions
about
serving
in
the
chat
here.
The
first
one
is
from
dan.
Can
this
integrate
with
helm
for
deploying
helm,
charts,
rolling
bad
charts,
or
does
it
replace
it.
C
F
So
yeah
I
mean
the
the
that
that
is
an
alternative
reading
and
that's
that's
fine
I'd
like
an
answer
to
that
as
well,
but
it
was
more.
F
I
saw
you
talking
about
the
way
that
the
you
can
you've
got
this
idea
of
a
canary
a
roll
out
and
a
roll
back,
and
I
kind
of
we've
been
expecting
to
use
helm
to
do
that,
and
I'm
just
wondering
is
this
an
alternative
or
because
we've
committed
quite
a
bit
to
to
model
our
applications
to
be
deployed
as
helm
charts,
and
I
just
wanted:
does
this
integrate
with
it?
Or
does
this
replace
it
as
a
mechanism
for
rolling
and
upgrading
between
versions
of
services.
C
So
that's
that's
a
good
question,
so
I
don't
think
we
do
anything
specifically
to
integrate
helm
sort
of
more
deeply
than
what
you
can
do
with
helm
and
our
yaml,
and
you
should
be
able
to
use
k-native
sorry
helm
to
roll
out
k-native
services
and
much
the
way
you
can
roll
out.
You
know
other
resources,
but
I
think
we
haven't
done
anything
to
sort
of
integrate
with
home.
C
More
deeply
with
respect
to
you
know,
awareness
of
its
revision
model
and
but
there's,
I
think
in
principle
nothing
stopping
you
from
leveraging
helm
to
manipulate
a
native.
You
know
canadian
natives
concept
of
traffic
control,
so
one
of
the
things
we
did
introduce
is
we
have
this
more
sort
of
sophisticated
way
of
sort
of
doing
fine
grain
control,
where
you
can,
rather
than
just
having
us,
generate
names
for
each
new
revision.
C
F
C
So
today,
today,
all
revisions
need
to
live
in
the
same
name,
space
and
most
of
the
resource
model
within
serving
expects
things
to
live
within.
You
know
a
single
name,
space
and,
to
some
extent
it's
designed
so
that,
if
you,
if,
if
you're,
leveraging
namespace
as
your
as
your
tendency
model,
you
should
be
able
to
end
users
credentials
to
manipulate
k
native
serving
resources
within
that
name
space,
and
they
should
be
able
to
operate
productively.
If
that
makes
sense,.
B
There's
one
other
question
that
I'll
call
out
for
now:
it's
not
the
only
question
in
there,
but
it's
the
one
that's
closest
to
serving,
and
the
question
is:
will
let's
see
when
is
there
an
eta
on
functions?
Coming
coming
to
k,
and
I
would
say
at
this
point
there
really
is
not
an
eta
that
we
can
give
we
had
like
in
our
community.
We
had
maybe
a
couple
times.
B
The
subject
has
come
up,
but
we
so
far
haven't
and
I
think,
there's
a
very
great
interest
in
having
a
concept
of
functions.
That
is
that
that
is
on
top
of
k-native
that's
community-based,
but
we
so
far
haven't
been
able
to
agree.
I
don't
think
on
an
approach,
so
I
can't
really
give
an
eta
now.
It's
definitely
on
our
radar,
and
I
appreciate
you
know
the
equip
the
question
being
asked.
B
I
think
I
think
what
would
what
would
be
great
from
the
if
the
the
person
who
asked
the
question
is
very
interested
in
it.
I'd
love
to
get
a
note
about
that,
to
the
k
native
dev
or
k
native
users,
mailing
list,
they're,
they're,
k,
native
dash,
dev
and
k,
nate
dash
users
and
it's
homed
in
google
groups,
I'd
love
to
get
some
surfacing
of
that
in
there
I've.
B
Definitely
I
will
definitely
surface
that
it
came
up,
but
so
far
we
we
can't
really
give
an
eta
on
it
and
I
think,
in
the
interest
of
time
it's
probably
best
to
advance
the
slide
now
and
I
think
that
will
be
eventing
so
scott.
Why
don't
you?
Why
don't
you
take
this
section
as
the
the
eventing
rep
on
our
little
call.
E
Thanks
paul,
okay,
so
what
does
a
canadian
eventing
get
me
a
good
question?
E
I
can
read
the
slides
here
so
we're
enabling
async
app
development
through
event,
driven
from
anywhere
loosely
coupled
and
late
bind
producers
and
consumers.
E
Producers
generate
events
before
consumers
exist
things.
So
basically,
eventing
has
a
hard
problem
because,
there's
you
know,
20
30
years
of
eventing
history
in
compute,
right,
like
serverless,
is
fairly
new
and
there's
no
real,
like
cookbook
patterns
of
how
you
cook
up
a
serverless,
containerized
thingy,
but
eventing
patterns
and
messaging
patterns
have
been
around
forever.
E
E
My
consumer
moves
to
a
new
clustered
url
or
a
new
cluster
or
resolves
to
a
new
address,
or
it
gets
deleted
and
recreated
somewhere
else
and
that
being
able
to
heal
the
clusters
of
venting
mesh
is
something
that
we
really
focused
on
around
inventing
one
thing,
the
slides,
don't
really
say,
is
eventing's
really
broken
up
into
a
few
different
big
major
chunks.
E
We
started
out
with
messaging,
we
have
a
messaging
api
group
and
it
kind
of
it.
It
puts
a
thin
abstraction,
on
top
of,
like
pub
sub
components,
turns
out,
that's
really
hard
to
build
with,
because
it's
very
imperative
on
how
you
would
assemble
your
your
cluster.
So
we
came
up
with
a
second
model
that
sits
on
top
of
that.
That
can
leverage
it,
but
it
doesn't
have
to.
We
call
eventing
and
eventing
is
more
like
actually
can
we
go
to
the
next
slide?
E
E
You
could
consider
it
like
a
query
once
that
query
matches
that
that
event
gets
copied
out
of
the
broker
and
delivered
to
a
subscriber,
and
we
have
a
bunch
of
magic
here
to
let
this
be
discoverable
and
late
bound
and
self-healing,
and
things
like
that.
B
E
Yeah,
okay,
so
the
as
we
were
developing
of
eventing
components.
We
we
kind
of
had
this
idea.
I
think
villa
and
I
were
talking
to
matt
and
we
hit
upon
this
idea
that
well
potentially
k
native
serving
we
don't.
We
don't
want
those
components
to
be
coupled
right.
So
eventing
knows
nothing
about
serving
it's
independent,
but
they
do
share
some
common
interfaces
that
we
call
duct
types,
but
basically
the
the
trigger
can
point
to
this
duct
type
that
we
call
addressable,
which
basically
says
in
your
status.
E
E
E
E
So
eventing's
roadmap
we're
we're
working
on
stabilization,
there's
a
lot
of
features
that
they
work,
but
they
could
use
more
tests
and
those
tests
could
be
a
little
more
stable.
So
we're
really
focusing
on
that
right
now.
E
E
E
So,
where
serving
brings
you
really
easy,
kubernetes
scale
to
zero
containers
eventing
enables
this
really
easy
shim
on
top
of
other
protocols,
to
help
you
decouple
your
choices,
so
that
you
could
make
different
choices
later
without
having
to
re,
recreate
your
entire
application
right,
but
that
that
that
thin
shim
needs
some
more
features,
like
maybe
some
smarter
filtering
in
the
triggers
or
improving
the
the
reply
contract
so
like.
E
How
do
I
know
in
the
data
plane
that,
if
I'm
going
to
invoke
some
subscriber,
how
does
that
subscriber
understand
that
it
can
reply
to
the
broker
to
re-ingress
and
invent
back
in?
So
why
would
you
want
to
do
that?
Well,
we
had
this
interesting
thought.
What
if
the
broker
allowed
you
to
reply
to
events
and
then
those
new
events
that
you're
replied
with
gets
ingress
back
into
the
broker,
so
you
never
have
to
know
which
broker
invoked
you
right,
so
a
smaller
footprint,
smaller,
more
reuse
of
your
deployed
components.
E
So
then,
in
the
next
six
months,
we're
still
catching
up
on
the
auto
scaling
of
the
eventing
components,
still
working
on
that
we're
partnering
with
the
projects
like
keda
to
to
look
at
well
pole
based
scale
models.
E
Maybe
kate
is
the
way
and
so
like
we'll
we're
we're,
adding
hooks
and
plug
points
and
some
standards
on
how
you
get
your
eventing
components
to
scale
with
external
things
like
cada.
E
Is
it's
it's
kind
of
an
implementation
of
the
cloud
event
specification
with
a
bunch
of
other
opinions.
One
of
the
things
that
we're
working
on
in
cloud
events
is
the
discovery
and
subscription
apis,
and
I
think
you're
going
to
see
that
trickle
down
into
k
native
in
the
next
six
months
or
so.
B
D
D
Actually,
of
course,
you
can
everything
you
can
do
everything
with
resource
files
as
well,
but
actually
I
see
a
dedicated
cli
for
canada
has
some
advantages,
so
you
can
distinguish
between
two
mode
over
run
d,
so
one
is
the
imperative
mode
so
that,
as
you
know,
from
cube
control
as
well
and
you
can
actually
manage
nearly-
I
think,
all
of
the
native
core
entities
which
are
user
facing
directly
with
crowd
operations.
So
you
can
create
them.
D
You
can
make
updates
and,
of
course,
list
them
in
very
details
in
a
human
consumable
format
and
of
course
you
can
delete
them
as
well,
so
that
you
can
group
them
also
into
different
areas
like
we
have
for
creative
serving
we.
We
know
how
to
manage
services,
we
can
create
manage
kinetic
canadian
services
and
also
revisions,
and
also
for
eventing.
We
have
different
yeah
for
every
entity.
You
have
a
kind
of
a
noun,
so
it's
always
the
same
schema,
so
you
have
kn,
then
you
have
the
noun
and
then
the
verb.
D
D
Also
with
kn,
but
there's
also,
this
called
so-called
declarative
handling
of
creative
services,
which
allows
you
really
to
describe
your
target
state
that
you
want
to
have,
and
this
has
the
same
semantics
like
cube
control
apply,
which
means
you
have
get
a
three-way
merge
with
the
stuff
which
happens
in
the
meantime
between
two
runs
of
apply,
for
example.
So
it
includes
the
same
way
and
actually
it
even
reuses
the
way
how
cube
control?
Does
this
merging
and
also
borrowed
from
the
cubecontrol
architecture,
is
the
plugin
are
the
plugins
that
are
similar
to
control?
D
There's
one
thing
which
is,
I
think,
which
is
in
addition
to
the
way,
how
you
clip
control,
handles
plugins,
so
plugins
and
cube
controller,
just
external
programs
which
are
executed
by
like
actually
by
executing
it.
From
from,
like,
like
a
direct
like
a
comment,
so
it's
a
separated
process
for
that,
but
you
can
also
create
an
inline
plugin,
so
which
means
if
your
plugin
is
written
in
golang,
then
you
could
also
make
a
separate
on
own
build
of
your
of
kn
and
then
inline
that.
So
this
is
quite
quite
nice.
D
If
you
want
to
have
kind
of
a
single
binary
which
includes
a
certain
amount
of
plugins-
and
we
are
currently
working
all
on
our
cube
on
a
k,
n
builder
project
which
allows
you
to
declare
the
plugins
that
you
want
to
include
and
then
just
run
that-
and
you
get
just
one
blob
of
binary-
that
you
can
execute
with
all
the
plugins
included.
This,
of
course
only
works
for
golang,
but
the
the
regular
black
and
white
structure
works
for
any
language.
D
Of
course,
then
we
also
added
recently
github
support,
as
we
call
it,
which
means
we
have
dedicated
text
on
task
that
you
can
reuse
in
the
tectum
pipelines
for
deploying
your
creative
services
and
brand
new
fresh
from
the
press
is
an
offline
generation
of
resource
files.
So
you
can
actually
operate
kn
against
your
local
file
system.
So
you
don't
do
not
need
to
have
a
direct
connection
to
your
cluster,
but
you
just
add
an
option:
minus
minus
target.
D
You provide a directory or a file name, and it creates the resource files directly from the arguments that you pass to kn. This is a very easy way, even if you do not have a cluster at hand and do not remember the schema of the Knative resources: you can just use kn's help messages, pass some arguments, and it will build up the YAML files for you.
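The offline mode looks roughly like the invocation sketched in the comment below (the exact flag spelling may differ by kn version); what lands on disk is an ordinary Knative Service manifest, written here by hand so you can see the shape of the generated file:

```shell
# Hypothetical cluster-less invocation; writes YAML instead of talking
# to the API server:
#   kn service create hello --image gcr.io/knative-samples/helloworld-go \
#       --target ./manifests/
# The result is a plain Knative Service manifest, roughly:
mkdir -p manifests
cat > manifests/hello.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
EOF
# Commit it to git, or apply it later:
#   kubectl apply -f manifests/hello.yaml
grep 'kind: Service' manifests/hello.yaml
```

Because the output is just YAML, it slots straight into the GitOps flow mentioned above: commit the directory and let a pipeline apply it.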
D
This is very convenient, and then of course you can take that file, commit it into your source control management system, and go ahead. For now we only have this support for kn service create, but we are continuing the theme by adding it to update, to list, and so on. It is really a very nice feature which I'm pretty excited about. So, in a nutshell, that's what kn can do for you.
D
D
Okay, for kn it's a little bit different, because it supports both eventing and serving. For example, there was a point in time where serving was stable but eventing was not, and in that period we had support for both of them, but of course we marked the eventing support as experimental. We also have other features which are marked experimental, but otherwise kn is totally stable. It's also included in products like OpenShift; it's already shipped with that, and you can completely rely on it.
D
B
D
Okay, cool. What is on the roadmap for the near future? We of course want to continue on all the topics that we have started. We want to support more sources, and we also want to support arbitrary sources, i.e. sources that were not known when kn was compiled or built. For that we want to leverage metadata that is offered to us, like CRDs, but also the Knative discovery API, which is easier for us to consume, because CRDs are a little bit hidden and typically only meant for administrators.
D
So a regular user is not necessarily able to read CRDs. Based on this meta information, we want to support different command line arguments; we really want to offer dynamic command arguments that are based on the type that you are managing. This is quite challenging, but it will give you a nice user experience, I'm pretty sure. Then another thing which I'm very excited about is something called Kamelets.
D
Sorry, if you don't know Kamelets, no problem, because it is really brand new technology.
D
It's based on Camel, which is an enterprise application integration platform, and the good thing about Apache Camel is that it comes with around 300-plus components that you can reuse as Knative sources, which means they can connect to external systems. Even if you know nothing about Apache Camel, using Kamelets you can leverage all this existing stuff directly and just use a Knative source, for example, to connect to systems like Telegram, Salesforce, or ServiceNow, whatever you want, and Kamelets will convert all these events to CloudEvents.
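As a sketch of how such a Kamelet might be wired up as an event source, a Camel K `KameletBinding` ties a source Kamelet to a Knative broker. The kind, API version, Kamelet name, and property names below are illustrative guesses from the Camel K project of that era; verify them against your installed version:

```shell
# Illustrative only: bind a telegram-source Kamelet to a Knative Broker
# so its events arrive as CloudEvents.
cat > telegram-binding.yaml <<'EOF'
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: telegram-to-broker
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: telegram-source
    properties:
      authorizationToken: "<bot-token>"   # placeholder
  sink:
    ref:
      kind: Broker
      apiVersion: eventing.knative.dev/v1
      name: default
EOF
# Requires Camel K installed in the cluster:
#   kubectl apply -f telegram-binding.yaml
grep 'kind: KameletBinding' telegram-binding.yaml
```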
D
Support for that will be implemented as plugins as well. And then, of course, we are always looking for new plugins in the Knative sandbox. The sandbox is kind of a melting pot for extensions which are not really part of the Knative core directly.
D
It's really a place where we put various Knative plugins. Two of them will be a log plugin, which allows you to print out service logs directly, like you know from stern, for example, and another plugin for creating events locally on the command line and injecting them into the eventing infrastructure, which is very convenient for testing and debugging.
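Even without such a plugin, you can inject a test event by hand with curl using the CloudEvents HTTP binary-content mode, where the event attributes travel as `Ce-*` headers. The broker URL is a placeholder for your cluster's broker ingress; the helper is written to a file here so the shape of the request is easy to inspect:

```shell
# Write a small helper that POSTs a CloudEvent to a broker ingress.
# The URL argument is a placeholder; in-cluster it looks something like
#   http://broker-ingress.knative-eventing.svc.cluster.local/default/default
cat > send-event.sh <<'EOF'
#!/bin/sh
BROKER_URL="${1:?usage: send-event.sh <broker-url>}"
curl -s -X POST "$BROKER_URL" \
  -H "Ce-Id: test-$(date +%s)" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: dev.example.test" \
  -H "Ce-Source: /manual/cli" \
  -H "Content-Type: application/json" \
  -d '{"msg": "hello eventing"}'
EOF
chmod +x send-event.sh
```

`Ce-Id`, `Ce-Specversion`, `Ce-Type`, and `Ce-Source` are the four required CloudEvents attributes, so any broker that speaks the HTTP binding will accept this request.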
D
And finally, we are always trying to improve the user experience, but we also rely on your feedback for that. We know some weak points in the user experience, for example the way you specify traffic splits, which you can definitely do with kn, but we feel this can be better, so we are going to improve on this story. So this is, roughly, the roadmap that we will work on in the next six months, I would say.
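For context, a traffic split is already expressible today; a typical invocation and the spec block it manipulates look roughly like this (the service and revision names are illustrative):

```shell
# With a real service this would be something like:
#   kn service update hello --traffic hello-00001=90 --traffic hello-00002=10
# Under the hood that edits the Service's traffic block, roughly:
cat > traffic.yaml <<'EOF'
traffic:
  - revisionName: hello-00001
    percent: 90
  - revisionName: hello-00002
    percent: 10
EOF
# Sanity-check that the percentages add up to 100.
total=$(awk '/percent:/ {sum += $2} END {print sum}' traffic.yaml)
echo "total percent: $total"
```

The UX pain Roland alludes to is that juggling revision names and percentages by hand is fiddly, which is why the flag syntax is a candidate for rework.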
B
All right, thank you very much. You know, listening to the stuff that we're all talking about in this presentation, I'm like: this sounds pretty cool, but will it blend? So, Scott, I believe you've got a demo you're going to show us, to establish whether or not it will blend.
B
E
Okay, here we go. You need to turn it off... there we go. All right, I've only got a few minutes, so you know, I'll just use this brick and extra effort.
E
Cool, okay, here we go. So I have been talking to my friends over at Falco. If you don't know what Falco is, it's a thing that watches for events that are interesting and turns them into webhooks. There's this other project called Falco Sidekick, which turns those events into some other thing, and I was like, well, you guys don't really have CloudEvents there, so I helped them add it. Here's a demo of using Falco and Sidekick to do some stuff. So, kubectl...
E
So, let's see. First off, let's take a look at the graph here in my cluster. Right now I have a SinkBinding that links the Falco Sidekick into the ingress of the broker. But remember, the broker only consumes CloudEvents, so CloudEvents are going to bounce around in there, and then I have this trigger to send anything that's from falco.org with the type falco rule output to this sockeye service. So what the heck is sockeye? It's another fish program.
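The trigger described here filters broker events on the CloudEvents `source` and `type` attributes and forwards matches to the sockeye service. A sketch of its YAML, with the attribute values approximated from what's said in the demo:

```shell
cat > sockeye-trigger.yaml <<'EOF'
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: falco-to-sockeye
spec:
  broker: default
  filter:
    attributes:
      source: falco.org          # approximate, as spoken in the demo
      type: falco.rule.output    # approximate, as spoken in the demo
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: sockeye
EOF
# kubectl apply -f sockeye-trigger.yaml
grep 'kind: Trigger' sockeye-trigger.yaml
```

Triggers do exact-match filtering on CloudEvents attributes, which is why everything entering the broker must already be a CloudEvent.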
E
E
E
Cool, I've got one, and I also got this event here that says terminal shell. That seems like an interesting event. I don't really want people to have interactive shells on my cluster here, so I made a very, very simple application. All it does is listen for a CloudEvent and do a kubectl delete on the pod that comes in. Very simple. And to implement that, I added some RBAC to allow me to get and delete pods.
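The RBAC Scott mentions boils down to a Role allowing get and delete on pods, bound to the service account the drop service runs as. The names here are made up for illustration:

```shell
cat > drop-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-dropper
rules:
  - apiGroups: [""]          # core API group, where pods live
    resources: ["pods"]
    verbs: ["get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-dropper-binding
subjects:
  - kind: ServiceAccount
    name: drop               # hypothetical service account name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-dropper
EOF
# kubectl apply -f drop-rbac.yaml
```

Keeping the Role namespaced (rather than a ClusterRole) limits the pod-killing power to one namespace.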
E
I wrote a Knative trigger that says: for things from Falco with that same rule, with the rule text terminal shell, send it to this drop service, which is a Knative Serving service, and the YAML for that is here. A couple of things to note: I'm asking for it to be cluster-internal only, because I don't really want to expose the pod-killing device to the internet. That'd be real bad. And I'm going to show you ko in action. So here we go, we're going to deploy that.
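Making a Knative service cluster-internal is done with a visibility label on the Service, which keeps its route off the external ingress. A minimal sketch, with a placeholder image:

```shell
cat > drop-service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: drop
  labels:
    # Keeps the route off the external ingress; reachable in-cluster only.
    networking.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
        - image: example.com/demo/drop   # placeholder image
EOF
# kubectl apply -f drop-service.yaml
grep 'cluster-local' drop-service.yaml
```

The broker can still deliver events to it, since trigger subscribers are resolved to in-cluster addresses.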
B
One thing I'll just add as Scott is doing that: if you look under the template part of the service's spec, in the top pane there on Scott's display, you can see that this looks pretty close to a pod, right, Scott?
E
Wait, no, it doesn't look like a pod, it looks like a deployment. Pods don't have the template.
B
E
If I rearranged this, if I changed it to a deployment and corrected it, and then added a bunch of other stuff like a Kubernetes service, then I would have the same setup; it just wouldn't scale to zero. It wouldn't have auto-TLS.
E
See, so I can see that I've got my drop thing: it's cluster-local, it's ready to go. So now let's go and exec back into that fun SQL pod.
E
And... oh, so what happened here? Falco detected that somebody created the terminal shell. We got another event here: terminal shell.
E
If we refresh the graph, we can see what the new graph looks like. We still have the Falco Sidekick ingressing to the broker, and we have another trigger for drop, this Knative service.
E
So now we can see the event stream that's coming through the broker, but we can also cherry-pick this terminal shell event and send it to the drop service, which goes and invokes death on my interactive shell, where maybe I'm trying to do some malicious things. I whipped this up super quick; it's not a ton of code. I've got to show you code, which is cool.
B
B
Okay, will it blend? It blended! So that was good. Let's talk about 2021 goals real quick, as we're running over, and I thank everybody that's still watching. So, number one: my own personal opinion here is that we have the Knative APIs at a v1 level, with v1 meaning, you know, a good expectation of backward compatibility.
B
B
We also, I think, are really interested, inside the community of folks that develops Knative, in having more integrations, and the one that Scott has just showed us is a really great example of the type of integration that I'd like to see. The more things there are that spit out CloudEvents and can consume CloudEvents, the more utility Knative is going to have for everybody.
B
So if we think about how we make something that is most useful to everybody: the more integrations, the better. And Matt, I think you put the improved UX on here, so I'll let you speak to that.
C
Yeah, sure. One of the things we've started to do is a bunch of user interviews, where we've been talking to folks looking to get started with Knative, and there are rumblings that we might start a user experience working group to look at, you know, getting started, which I think is one of the really important journeys that a lot of users take. We want to look at a bunch of these and make sure they're as streamlined as possible.
C
B
Yeah, no problem. Of course we want more adoption, and I see there is a question in the chat from somebody on YouTube: any update on how serverless is being adopted in the community these days? I can speak from the numbers that we track for OpenShift Serverless, which is the Red Hat product derived from Knative: we saw pretty good adoption growth in 2020. But I have my own personal opinion, and this is why more adoption is on our 2021 goals.
B
I think that in general, serverless is still a fairly advanced topic, and if you think about the growth of the Kubernetes community, it kind of looks like a hockey stick, right? What that tells me is that if we think about the appetite for advanced topics, there are probably still a lot of folks that are beginning their Kubernetes journeys right now. I expect the demand and adoption opportunities will grow.
B
B
I think there's a lot more adoption out there for us to get, so that's definitely something we'll work a lot on in the community, and we also really want to grow the pool of contributors. So I'll just take this opportunity to say: if you're watching and you think this project sounds interesting, it might be something you would be interested in spending your time on.
B
However much time you have to give, I would just say that I think we have a really great, friendly community in Knative, and we're really interested in growing the pool of contributors. So if you have any thoughts, like you want to contribute but you're not sure what you could do, or maybe you're not as focused on code...
B
I will just say that I think there is something for everybody to contribute in open source, and I'm really interested. If you have the urge, the interest, the desire to contribute but you're not sure how you could do it, I would love for you to ping me and talk to me about it. You can hit me on Twitter at cheddarmint, and you can also reach me at pmorie at redhat.com, and I'd love to talk to you about how you could contribute.
B
We definitely could use your help, and it's a lot more than code. I think that probably in this audience there's maybe an unconscious bias toward thinking of open source contribution as being all code, and that's just simply not the case. If you can read documentation and tell us what did or didn't work for you, that's a contribution that would be very valuable for us; so is writing docs and participating in things.
B
Sure, it's a very easy URL to remember: it's knative.dev, and that's a good jumping-off point to find any number of things, some of which are on this next slide here. Our GitHub organization is called the knative organization.
B
E
B
Again, you'll find that from knative.dev too. And I think someone else is trying to talk, go ahead.
E
Yeah, join the Slack. It's slack.knative.dev; that'll get you an invite code to come hang out with us.
C
I believe that is the Slack workspace; the other URL, slack.knative.dev, will get you an invite link. You can also ping me on the Kubernetes Slack, matt moore, no e, and I'm happy to share invite codes if folks need them.
A
And I'll annotate and correct that slide, and I'll upload the video to the OpenShift Commons playlist shortly and tweet that out with the slides. So with that, I think we need to wrap up and respect everybody's time. Really, thank you all for coming and for all the work that you do in the Knative community. It's wonderful to see your faces. I think each one of you probably could do an AMA on your individual topics, so we will probably have you all back in the coming months.
A
So, thanks again, and for everyone who asked questions, thank you for participating. We will be back again with another AMA next week, on a topic still to be decided. So thanks Paul, Matt, Scott, and Roland. Be safe, everybody, and take care.