From YouTube: CNCF TOC Meeting - 2019-09-17
B: Right, cool — can you hear me? Okay, I can hear you loud and clear. Okay, cool. Okay, for those of you who may not know, or who missed the TOC presentation, coming back: we did a review of the CloudEvents specification. It's not a typical open source project — code; it's a spec about how to modify your current events to add well-defined metadata, to help manage routing, filtering, and other very common middleware types of functionality, without requiring the middleware to actually understand the business logic.
So, as I said, it defines common metadata across events, and a common location for that metadata to appear, so middleware can do basic processing without having to see the business logic we're delivering. Obviously, the specification defines serialization rules for common transports like HTTP, stuff like that, serialization for JSON, as well as a primer, some SDKs, and extensions that didn't meet the criteria for actually being part of the spec itself. We have had some demos at previous KubeCons and such, with some links there you guys can look at.
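[Editor's note: for reference, a minimal CloudEvent in the JSON serialization mentioned here looks roughly like the sketch below. The attribute values are illustrative, and the attribute set shown is the 1.0-style form the 0.9 draft was converging on.]

```json
{
  "specversion": "1.0",
  "type": "com.example.order.created",
  "source": "/orders/service-a",
  "id": "A234-1234-1234",
  "time": "2019-09-17T12:00:00Z",
  "datacontenttype": "application/json",
  "data": { "orderId": 42 }
}
```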
B: We're actually at 0.9 right now — actually, we're hoping to approve 0.9 this week, which will technically be a release candidate for 1.0, and then approve that, hopefully before KubeCon, and have some wonderful PR around it. For those who have never seen it before: on the right-hand side you can see what a CloudEvent looks like. Basically, take the HTTP message, with the stuff in bold — which is the CloudEvents stuff — and you basically turn any HTTP message into a CloudEvent.
So all we did there is add four new attributes, and that turns it into a CloudEvent. And that's that common metadata that helps people do routing, for things like what type of event it is and who sent it, stuff like that.
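[Editor's note: a sketch of the slide being described — an ordinary HTTP message becomes a CloudEvent once the core attributes ride along as ce- headers (the "binary mode" of the HTTP binding). Header values here are illustrative.]

```
POST /receiver HTTP/1.1
Content-Type: application/json
ce-specversion: 1.0
ce-type: com.bigco.order.created
ce-source: /orders/service-a
ce-id: A234-1234-1234

{ "orderId": 42 }
```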
B: Okay, so with that quickly behind us, let's jump into why we're here: as of right now, CloudEvents is a sandbox project, and we're going for incubation status.
So let's go ahead and jump to the next slide. Just a quick reminder, for those of you who don't know: we have to meet three criteria. We have to document that it's being used by at least three independent end users, have a healthy number of committers, and demonstrate an ongoing, good flow of commits and merged contributions. Okay.
So let's go to the next slide — the first criteria. I guess, a little bit of a preface before we actually get to the first criteria: because this is a specification, it's a little bit difficult to come across sort of end users. It's a different situation, basically, than a normal open-source project. However, we did want to highlight all the different companies that are actually implementing CloudEvents, so you'll see.
This is a very distinguished list of companies here on this list, and you can basically assume that people who are using those particular products, who are going through the right code path to use CloudEvents, are obviously using CloudEvents. So, for example, let me pick on Knative, because that's the one I'm involved in: anybody using Knative eventing is going to be using CloudEvents under the covers — it's just built into that system. I don't know if I can make that same statement about all these other ones.
They may have a separate code path for events, but we do know that it is being used by some people through these products. The challenge you run into is that a lot of people don't feel comfortable stating in public that, hey, yes, we're using this particular technology, to the degree that we need for the TOC review here. But I did want to mention that we know for sure it is being used, at least in these products, so users of those products are very likely actually using it under the covers. Okay.
So let's go to the next slide. So here are the three that we did manage to convince for our approval. To mention: Roberto from Adobe has two different end users, Whitbread and Pandora, and then I came across Accenture, who were publicly willing to admit that their product, the Reactive Interaction Gateway, is actually using CloudEvents under the covers as well — you can look at their documentation to see it. For the Adobe stuff, they're passing around GDPR events as CloudEvents under the covers, so they are actually using it through their product.
Alright, let's go to the next slide. Obviously, stop me if you guys have any questions; otherwise I'm going to try to go fast, to get the other guys in on the call. So criteria two, number of committers: again, I have to sort of preface this a little, because we are a spec project, not a code project.
The rate of change in our spec is much, much lower than you would see in a code open source project, and our goal here, when you submit a pull request to change the spec, isn't to try to get the one or two maintainers to approve it, right? So it's not about how quickly you can get your code in there. The point here is to get community consensus, because this isn't about changing code in one open-source project that you're hoping will be used by lots of people but is still just one code base.
This is about convincing the community that this is worthwhile enough to be implemented by a lot of people — as many as possible. So consensus and community building are incredibly important here; meeting that minimum bar of one or two maintainers, like many open-source projects have, isn't going to cut it. We really, really need that consensus building here. So take that into account with the other factors, like the fact that many of the PRs we have aren't authored by a single user. Alright.
About consensus building — the other thing is, most of these people have other main jobs. That's true of everybody on every project, but unlike many open source projects, where we have a group of people who basically seem like they live there, coding 24/7, because it's a specification the rate of change is lower. This is, as I said, a side gig. Okay.
So again, the number of PRs is much, much lower than an open source code project — I'm just giving you guys fair warning — and we also need to make sure that those PRs are very, very carefully reviewed. Okay, we just don't want something to slip in under the covers that people would be surprised about; consensus is the most important thing to us. Okay, and as I said, PR count isn't an accurate representation of contribution — there are a lot of other things going on.
However, having said all that, we do have SDKs, which operate like, quote, normal open source coding projects, and you can look at those in terms of participation. Those follow the normal rules of, you know, you do more PRs, you get nominated to be a maintainer, and you work your way up the chain, that kind of stuff. So those are, quote, normal. Okay, so let's go to the next slide. So with that in mind, we do not have committers in the normal sense, right?
Technically, the only people that have write access to the repo are the maintainers, and I think that's like two of us, all right. What ends up happening is: issues are opened, we discuss them on the weekly calls or offline through the issues themselves on GitHub, and then PRs are opened. We only approve PRs during the weekly phone calls, and technically any significant changes to PRs have to be in at least two days in advance, to give people a chance to actually review them.
That way no one feels like anything was slipped in at the last minute, and everyone has a chance to review it properly. Okay — on the PRs themselves, anybody can technically veto; 'veto' may not be quite the right word, but I can't think of a better phrase. Technically anybody can sort of block a PR, and I mean that: anybody at all, not just the people who regularly come to the phone calls.
Anybody who just happens to show up on an issue can make a comment on there, and if it sounds like it's not completely insane — if it sounds like a valid concern we want to address — that basically puts the PR in a blocked state, and we have to resolve all open comments on PRs before we accept the PR. Now, obviously, that means things can technically go a little slower, and they do at times for submitters to the spec.
It's all about consensus, right? But in the end, what ends up happening is that forces people to work offline and come back with a solution that has broader support in it. Okay, now, obviously not everything can actually be kumbaya, with everybody agreeing on everything. So ultimately, if something does happen and we can't get to a unanimous agreement on things, we eventually do take a vote.
So then the question is, well, if you don't have maintainers, who gets to vote? What we ended up coming up with is a rule that says people who show up to the weekly phone call on a regular basis get voting rights. What that means is: if you were there for the last three out of four meetings — and by "there" I mean you, or your alternate from your company, were there for the last three out of four meetings — then you have voting rights.
All that really means is that you care enough to actually participate in the weekly phone calls. Okay, now you may look at that and say: okay, that sounds fine for the people who make the phone call, but what about the people who can't make the phone call? Well then, that kind of goes back to, you know, anybody can block a PR through a comment on the issue. And you might still say: well, that doesn't seem quite right, because they don't get a vote.
If you look at the technical votes, we've only really had five, and if you look at all of those, they've all been landslide votes, right? And that tells me that we're not trying to squeeze an issue through with a one-vote margin kind of thing. These generally have consensus built into them, and it's just one lone holdout that just couldn't manage to convince the community, while everybody else basically says: no,
this is the right way to go. And I think the fact that they are overwhelming landslide votes tells us that the process we have in place to ensure community consensus is actually taking hold, and that the fact that we don't have traditional maintainers isn't really a problem. In fact, we even had people ask on the phone call whether the community wanted to change our rules, and most people's reaction — I should say everybody's reaction, the last time we asked this — was: let's not fix
— let's not change — what's not broken, basically. So everybody is basically okay with it, and I feel pretty good about the fact that we're a little bit different from the normal process. Okay, so hopefully you'll see that we are community based. It's not a traditional PR-count type thing, but we do have something in there to make sure that everything is as equitable as we can make it, all right. Next slide — criteria three, demonstrate an ongoing flow of commits: as I said, we do have an ongoing flow of commits.
Is that a good number? Well, when you only have maybe three maintainers in the group, that's not really fair or representative of the level of contribution from everybody in the community. However, if you look at the graph, you can see we do have a constant flow of PRs. Again, it's not a high count — this is a spec, not code — but you can see: most weeks we have at least one; some weeks we have a whole bunch, right?
So there is a fair amount of activity going on in the group, and we are making fairly good progress. On average, we have around twenty-seven people attending the phone call every week. I think that's pretty good for a spec; most people would rather get shot in the head than actually work on a spec instead of code.
So 27 people on a weekly call is pretty darn good, and that's spanning seven or eight different organizations, with a few people coming to us from no company — they're just self-affiliated. So I do think it shows that we actually have a fairly good rate of participation in the spec itself going forward. All right, let's go to the next slide — I think that actually might be most of it. Okay, the next two slides technically talk about the SDKs in terms of their activity.
You can see — I think that's more of a PR-count kind of thing — you can see what's going on there in terms of activity. But the SDK work isn't technically part of the review process for going to incubation, I don't think; it's more the spec itself. I just wanted to show you that there is other activity going on that's code-related and part of the community. I know I went kind of fast, but I think that hits the main points.
B: Oh, excellent question. So the short answer is no, but I will throw one thing out there that we have to make clear to people: this is not what I would call yet another common event format. Many times in the past, people would try to create a, you know, event structure that all events are supposed to adhere to — one event to rule them all kind of a thing. That's not what this is about.
B: Yeah, go back one more — yeah, this one. Oh yeah! So if you look at this, the grey box, right — that's just an HTTP message coming across. Now, if you look at the four HTTP headers that are in bold, those do actually tell you very, very key bits of information. The spec version — that's the CloudEvents spec version itself; that's not all that key. But the next one, the type, right — that tells you the type of event that this is.
So someone receiving this message — if they are doing some sort of generic filtering, and the person who specified the filter says, "I want everything from bigco.com," right — they can say to your CloudEvents middleware: give me all CloudEvents whose ce-type attribute is com.bigco.*, right. So this piece of middleware actually doesn't have to understand the event.
So you can do basic filtering on these fields, like source — meaning where this event came from — and the middleware doesn't have to understand that this is an event from, you know, some AWS or IBM or Google service whatsoever, right. As long as they add this little bit of headers to it, the middleware is able to get its job done. Does that answer your question, Clinton? (Yes.)
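[Editor's note: a minimal sketch of the generic-middleware idea described above — filtering on the ce-type header without ever parsing the payload. This is illustrative Go, not code from the project; the handler names and port are made up.]

```go
package main

import (
	"net/http"
	"strings"
)

// filterByType forwards only events whose ce-type starts with prefix,
// e.g. "com.bigco." for a "com.bigco.*"-style filter.
func filterByType(prefix string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Binary-mode CloudEvents carry their metadata as ce-* HTTP headers,
		// so the middleware never needs to understand the business payload.
		if !strings.HasPrefix(r.Header.Get("ce-type"), prefix) {
			w.WriteHeader(http.StatusAccepted) // acknowledge and drop
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	deliver := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // a real router would forward the event here
	})
	http.ListenAndServe(":8080", filterByType("com.bigco.", deliver))
}
```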
F: Something I was wondering — who has adopted CloudEvents? I don't know if there's any... I mean, have you done, like, an open-source kind of survey? Companies might not go on the record saying they're using it, but have you looked for other open-source projects that might be adopting the standard, or the spec?
B: Yes — well, okay. So obviously, on one of the first slides I showed some places where it is being used. I think Knative might have been the only open source one there; I think everything else was proprietary. From an open source perspective, I'm pretty sure there are a couple of places out there that are using it; I just don't know what they are offhand. I don't think we've actually done an official survey, to answer your question.
A: Awesome — go for it, take it away.

G: All right. So this is the graduation review for TUF, and TUF, as I think many of you know — because you've probably heard talks or discussions about TUF over time — the purpose of it is to let you do things like update or install software, and to make this process secure, even when an attacker goes and does things like break into your repository, steal a key, or man-in-the-middle your network, and so on.
It's also one of these things that's surprisingly easy to go and deploy and adopt. It's something that you sort of drop into your system and it works, and people don't even know it's there. In fact, sometimes we don't know it — in fact, I'll even say most of the time we don't know when people have actually adopted it. The way that we find out is people putting up a blog post to talk about it, or, in one case, there was someone who had forgotten to give their key a long enough lifetime.
So they started to get error messages that mentioned TUF in them; it was sort of bittersweet to learn about their adoption via error messages from them not having managed their keys correctly. But TUF itself is a specification project. We have a very strong security focus, as you might imagine with a project like this, and our intention is to have a minimal design with low churn.
It was created in 2010 to address issues that I found when I worked with some of the folks at Tor to try to do a new updater for the sort of nation-state-actor threat model that they deal with on a daily basis, and we were admitted to the CNCF in 2017, along with Notary.
Notary is the most widely used implementation of TUF, at least in the cloud, although there are other large companies, like Datadog, that use our reference implementation — our Python reference implementation of TUF — in production. And there's an automotive variant that's very, very popular, called Uptane, where on the server side it's basically just vanilla TUF with a few very, very minor tweaks, and on the client side
it deals with the fact that cars are very difficult, challenging, weird environments: you know, you have a bunch of devices that don't have their own connections out, don't have a notion of time, and don't have a lot of other things that we sort of take for granted in cloud environments. There are about a dozen different implementations of TUF, or the TUF variant Uptane, by different organizations; I'll talk a little bit about some of these in a moment, and about TUF itself being a standards process.
First, I want to just say thanks to Doug for really hammering home a lot of points about how specs are different from projects — that was the absolute perfect opening act, thank you so much; you did such a great job of covering that. We have a formal process for changing the TUF standard that we're very, very conservative about, and we really try to build complete consensus within our community.
And for all of the changes we've made, TUF has actually had a hundred percent consensus for them, after a lot of lobbying and a lot of discussion with different adopters. Next slide, please. All right, so the production use of TUF: as I said, in the cloud native space we're used by a lot of large companies, as you can see there. On our adoptions page, which is linked at the bottom,
you can find the links to blog posts and other things that talk about these. I estimate — and I might be a little off on this — that somewhere over 80% of the cloud users use Notary, and, you know, somewhere around 20 percent use the TUF reference implementation. We're also used a lot in automotive, through Uptane. Anyone who's dealt with automotive knows it's a very secretive industry.
It's very strange to me, because we have people that are selling products based on it but won't let us list their name on, like, a website, or say things about it — like, when I talk to the press about it: yeah, you can buy it from lots of places, but I can't say which. And once again, I don't really understand why that industry is so secretive. But there are quite a few implementations.
We have at least one — actually, we have multiple OEMs in the US, in Asia, and in Europe that have adopted Uptane, and so in the next three-to-four-ish years the projection that we've seen is that over a third of new cars sold in the United States will include Uptane as the way they do updates. Uptane itself was adopted under the Joint Development Foundation and the Linux Foundation.
So that's where the spec sort of lives now, and we are also an IEEE-ISTO standard for Uptane. And we have a lot of use outside of, like, cloud and automotive too: Facebook has gone in and given a bunch of money for Python to go and integrate TUF into Warehouse; Google's using us in Fuchsia; we have a bunch of other programming languages, and we have Linux distributions going in and adopting TUF.
Next slide, please. Okay, thanks. All right, so in terms of committers: once again, it's a funny thing to sort of look at this, but if you look more broadly at the reference implementation, then at our committers, we have different folks from different groups. Notary — I'm just putting it up there; they're a separate CNCF project, they're not part of this
graduation review for TUF; there'll be a separate discussion about Notary at a future point. But Notary and TUF both have their respective committers and organizations. Uptane has over a hundred people participating in the forum, and has something like sixty people that are standards participants, and we regularly have, you know, a couple dozen people on weekly standards calls — which, as Doug said so nicely, it's very hard to get people to really care and dig in and look at this — and we've had
well over a hundred people from about fifty different organizations. We have vendors, regulators, folks from agencies like NIST, or, you know, others like DHS, and things like that, that come to our Uptane meetings — come to meetings specifically for Uptane, fly in — in order to talk about it and help move the industry along. And we've had a ton of support from OEMs.
One number I do have approval to say publicly is that in our very first meeting, seventy-eight percent of cars on US roads had a representative in that meeting, from, like, their security team, and our attendance has only increased over time. So we're really something that, you know, the security folks in the automotive industry are very active on. The spec itself is low churn, and so this also makes a lot of our implementations quite low churn.
All right, so looking at the flow of commits: any significant change to TUF — like any significant addition or modification — requires a process called the TAP process. TAPs are TUF Augmentation Proposals, and what this process does is basically get all the important stakeholders together; it's written in sort of an RFC-style format,
for anything that would represent more than something like just changing code — and we have the stats for those there. Notary and TUF also both have, you know, a history of committers from different groups that are integrating or doing other things with them, and so you can get some commit information there as well. Next slide. All right, and I think this is my last slide here, so I just wanted to mention: we have checked all the boxes. We've adopted the CNCF Code of Conduct; we have our governance and contributor process; you can find
the adopters list for TUF there. The adopters list for Uptane is once again a little harder, because we can't make a lot of things public, but you can find a lot of information about that on the Uptane site. We have a CII Best Practices badge: we are at silver, and we are two things away from gold, by the way —
there are a couple of things that could be done to make that process a little better, but I feel very proud of where we're at. We're by far the highest CNCF project in this regard, I think, and if you want to look at the stats about this, you can see that on there. And with that, I will answer a few questions that I saw fly by, I think. Okay, so does someone want to jump in and ask, or should I answer?
G: Basically, you can view it almost like a superset — but it's a superset with some tweaks that make more sense in automotive. So if you take the server-side part of TUF, like TUF server implementations, you have about ninety percent of what you need for Uptane. The things you're missing are, for example, that in Uptane the vehicles report back information about the versions of the different ECUs — the different little computers in the car — and so on. So Uptane is sort of a superset of TUF, and on the individual components in the vehicle,
beefy components do something that is basically TUF plus a little bit of extra functionality that makes sense for cars. If they're very weak components, they do something that is like a weak subset of TUF, because the little microcontroller that decides, when you're pulling your seatbelt, whether it should tighten or not, is a really weak little microcontroller — a little, weak, tiny computer in there — and you can't do all of the more expensive things you would need to do to decide how to update that, or your dome light in your car,
or other very weak computers like that. So that's a stripped-down, streamlined version of TUF that has weaker security guarantees, and it acknowledges this, like, repeatedly, and explains the differences and what you lose and so on. So Uptane — you can view it as mostly, like, a superset of TUF.
G: When we make changes to TUF, we work with the automotive community, because a lot of the time — it's effectively almost always the case — you want the flow to go between the two. If there's something good in Uptane that TUF would benefit from, you want that to come down, and you want the opposite to be true. So we've had some flow between them, but they're not strictly in lockstep.
Uptane and TUF — the process that you go through to approve changes to each is different, and they are different communities, but they have enough overlap. And so, once again, I think viewing TUF as mostly a subset of Uptane — but also being the part of it that's more focused and more applicable to non-automotive — because it's not that TUF is like Uptane;
it's that Uptane is all the weird stuff that has to happen to make it work in a car. And if we went and did a medical-device version, there would be all the weird stuff you have to do for medical devices — but TUF is the core of both of those projects, and TUF would be the core of, you know, anything else in those regards.
H: KUDO is a little bit different from other approaches to building operators, in that it actually ships with a controller already, and so folks that build with KUDO don't have to implement their own controller. They instead just write a YAML spec that they can use to define the operations for their particular workload, and so, you know, the KUDO controller can, that way, manage multiple different types of workloads.
The main abstraction in KUDO is actually inspired by the DC/OS Commons, which is a similar sort of toolkit or SDK for building data service orchestration on top of Apache Mesos, and it's been used for a couple of years to run these data services — you know, things like Kafka, Cassandra, and others — in production. And still, folks that have used these services came to us and said: hey, you know, can you give us a similar experience on top of Kubernetes? And that's how KUDO was born. Next slide.
So when we talk to folks that are building operators, here are some challenges we found that people run into. Obviously, you know, a controller or an operator is not a simple piece of software, and we found a lot of folks just don't have the skills on staff, on the team, to write a lot of distributed-systems code in Go — and operators, typically, you know, at least the ones that are production grade, more advanced, can run to 10,000 lines of code.
And if you look at a lot of data services, a lot of the big data stuff is really written in JVM languages, and still those teams simply don't have the people on staff — and they find it challenging to hire people — to write these things in Go. Also, we found a lot of code duplication between operators: for folks that have to build multiple ones, it's just a lot of code they have to write and, more importantly, also maintain.
So when client APIs change and new versions come out, they need to make sure that stuff still works; it's a pretty significant burden. And another challenge we found is that it's not that easy to integrate with other CNCF ecosystem tools, and so KUDO has some abstractions, some ideas, for how to do that, which we'll get to a little later. Next slide. So, talking to users that want to deploy operators on their clusters,
we also found a couple of challenges. You know, Kelsey sent this tweet a few months ago that basically says people are really struggling with this still, and what makes it complicated for folks is that different operators really have different workflows and different APIs. A lot of times when people deploy — specifically distributed data services — they have to run multiple, right? They might use, for instance, you know, Kafka to ingest events from IoT sensors or other sources, and then run
a Spark streaming job behind that, and put some data into Elastic or Cassandra. So they run multiple of these things together, and so the DevOps people that manage these clusters have to know all these different operators; they have to know how to debug with them when things go wrong; they have to know these different APIs. And so what they found themselves with is controller sprawl, right?
I'll go into a little bit of detail about the other ones in a few slides, but think about those things as runbooks. You know, this all started when — building Apache Mesos frameworks was also incredibly hard, and we found that the people building these things often have more of an operations background and, you know, weren't too familiar with distributed-systems engineering. But we found abstractions that feel natural to them, that they're used to from writing runbooks. So it's a language:
these abstractions allow them to sequence those operations in a way that feels natural to DevOps people. KUDO reduces the amount of code duplication and boilerplate between different operators — you know, that's a good thing for many reasons: obviously, less work, less maintenance burden, less chance for bugs and security issues. It also reduces the number of controllers in a cluster that people have to maintain, control access to, and upgrade. So another thing that it introduces is an extension mechanism.
Let's say — you know, it's very common at a bank, for example, or a pharma company, who are regulated and have specific security requirements, or some other type of policy from their IT team that they have to follow to deploy an operator — often the only chance to do that is to actually take an operator and fork it. So, you know, that's not ideal, and what KUDO does — this is under development — is we have a process to create a "flavor", where
essentially an organization can take a base operator that's available in the open source and customize it to meet their regulatory requirements or other types of policies that they have — a very common issue that we ran into. And then, essentially, it's a tool that gives these software vendors a way to ship the best practices for day-two operations alongside their software.
How does KUDO help the end users — the folks that just want to run these operators? It ships with a plugin for kubectl — kubectl kudo — that's sort of your main interface to deploy, upgrade, and manage KUDO-based workloads, and it provides a standardized interface. So, you know, if I want to run Kafka and Cassandra and Elastic together, I use the same command-line tool.
It provides a similar interface, and that really helps with the issue I mentioned earlier, where folks have to use different APIs or different debugging tools with operators that are not built on the same foundation. So the example you see on this slide, on the right here, is kudo plan status, which prints, you know, how much progress KUDO made deploying a particular plan.
So I can easily follow along and see which steps are completed here, which phases are completed, where it's stuck, and then, you know, I can dig in further and investigate if something went wrong. So it really simplifies deploying multiple operators in a cluster that way, because I only have to run one controller for that, and, you know, use kubectl kudo to deploy these packages — the YAML manifests — to allow this one controller to manage multiple different types of workloads. So, in a nutshell: a simplified API and CLI experience. Next slide.
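[Editor's note: a hedged sketch of the CLI flow just described. The command shapes follow KUDO's documentation from around this period; exact package names and flags may differ.]

```sh
kubectl kudo install kafka                   # deploy an operator package
kubectl kudo install cassandra               # same CLI for a different workload
kubectl kudo plan status --instance=kafka    # how far along is the deploy plan?
```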
H: So here's an example of what defining an operator with KUDO actually looks like. It's, you know, a part of what you would build, but it's sort of the main part: this is what a plan looks like. So in this case we're defining a deployment plan for this workload — that's the top-level item here.
A plan breaks down into phases, and phases are essentially a grouping mechanism for different tasks that need to be executed. Phases get executed using a strategy — there's a serial or a parallel strategy, because different workloads need either serial or parallel; there's also the option to plug in custom strategies, which is under development. Within each phase you have steps, and steps again have a strategy, parallel or serial. And while these are pretty simple abstractions, right — plans and phases, then steps, and a strategy to go along with them —
we've orchestrated some pretty complex workloads with that. HDFS, for instance, has a pretty complicated lifecycle — some things get deployed in parallel, some things get deployed in serial — but, you know, these pretty simple abstractions allowed us to actually do this, and so we found it to be both simple and really powerful, even for advanced workloads. Within each step you have tasks, and tasks are, very simply, templated Kubernetes manifests, which you ship alongside the KUDO definition.
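[Editor's note: an illustrative sketch of the plan → phase → step → task shape just described — not an exact KUDO schema; the names are made up.]

```yaml
plans:
  deploy:
    strategy: serial          # phases run one after the other
    phases:
      - name: zookeeper
        strategy: parallel    # steps within this phase may run together
        steps:
          - name: deploy-zk
            tasks: [zk-statefulset]     # a task is a templated Kubernetes manifest
      - name: brokers
        strategy: serial
        steps:
          - name: deploy-kafka
            tasks: [kafka-statefulset]
```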
C: So what we're trying to do a little bit is thread the needle with KUDO. Just to give some comparisons of the different things we're trying to do: first of all, compared to Operator SDK and Kubebuilder — KUDO was really built on top of Kubebuilder initially; we've since dropped down more to controller-runtime. But we're looking at ourselves as a polymorphic controller, and what we wanted to do is have a single controller, or a small set of controllers.
People have CLIs already built around that, and teams working on these, so we're arranging KUDO around existing clients and tooling rather than rebuilding all the abstractions in Go using Go APIs. We love Kubebuilder — Kubebuilder is great, controller-runtime is great, Operator SDK is great — and we want to be compatible with those things, but we want to be a little more opinionated: take you 60% of the way, and, if you follow those opinions, take it to 80, 90 percent.
And we want a lot of people to build operators using a set of Kubernetes primitives — we have a built-in testing harness — rather than having to do a bunch of software development with these SDKs for a certain use case. Again, we're not trying to replace these toolings; we're trying to be the right choice for the right situation.
If you're looking for, you know, a high-level framework for operators — that's what KUDO is intended to be. Next slide. So that actually brings up a comparison to Metacontroller, and for those who aren't aware, Metacontroller is very much in the same space: it's a polymorphic controller for multiple types of applications that can effectively call out to webhooks to define various
manifests, and it runs those in a set of orders. So Metacontroller ships with a custom set of — sorry, wrong cable — Metacontroller ships with a certain set of controllers; we avoid that, and we just try to use Kubernetes primitives directly. And KUDO is intended to also be an operator for CRDs: KUDO will do reference counting on those CRDs and make sure they're registered, whereas if you were to look at the old Vitess operator for Metacontroller, you'd have to run the etcd operator independently.
So KUDO supports this idea of depending on sets of CRDs that come from elsewhere, and that gets to the next point: with KUDO we want to support dependencies, so that you can build a lot of modular operators. So, for example, one of our reference implementations is KUDO — I'm sorry, Kafka — which depends on ZooKeeper.
We look at application awareness in upgrades and scale-up and scale-down: if you're looking at, for example, etcd — etcd requires API action to be able to add a member or remove a member — we want to enable developers to build that application awareness, in a high-level way, into their applications. Next slide, please. So, looking at how we relate to Helm: we're actually about to support Helm as a manifest format, and really the difference
we see between Helm and us is more that, you know, we're a framework for operators, whereas Helm is a framework for — a templating system for — applications. So we're looking at what happens post-deploy: really, drift detection and repair, alerting and monitoring, which we're working on, and again those sequencing steps. We're also looking at higher-level features for supportability, as well as doing some work around sandboxing.
Even that isn't solving the root Tiller problem, which is a hard problem to solve — I know, you know, Helm 3 does not have Tiller — so we're working on ways to address that. So coming in the next version is this idea of being able to rely on Helm charts as a base and then progressively enhance a Helm chart into an operator, where you can start to add other plans — around upgrading, around backup/restore, other things you might want to do — but use a well-tested Helm chart as your base for that. Next slide, please.
So, looking at the KUDO ecosystem: like I said, we built on top of Kubebuilder and controller-runtime; we're well involved with the API Machinery SIG inside of Kubernetes and chat with them a lot. As I said before, we want to extend upon existing Helm charts; we're also looking at CNAB bundles. We're not trying to solve the application-definition problem, other than getting an initial idea of how to do the sequencing; we want to provide the "I have an application —
what do I do now?" solution there, and solve that problem. And I think that's all I have to say on that slide, so next slide. So, roadmap: we're looking at getting better at registering and managing CRDs, so that you get an operator-like experience of custom CRD plus controller. We will rely on some bug fixes that landed in Kubernetes 1.15 to do that, and that development work has begun.
We're looking at incorporating other things from the community, like the application CRD. Dependencies are in progress, so that you can have a web of dependencies between operators and build up larger abstractions based on individual operators. As Toby mentioned, we're working on extensions right now; package distribution is now in, and that point has actually been fixed
as of our last release. And then we plan on testing these in a large, mixed-workload way, out in the open, so that if you were to install a Kafka operator, we have vetted it, you know, to a certain scale, for a certain number of Kafka operators. So when we say an operator is stable, what we mean is: Kafka running on ZooKeeper, with all those components stable, working together as a dependent set of tooling. Next slide.
So, why the CNCF? We've built this project out from the beginning hoping to drive a larger day-two awareness inside the CNCF; we want to grow that sentiment and continue that work. We followed open contribution from the beginning; we followed everything required — you know, it was a large topic in the last TOC meeting, about the certain things you need to do.
We've tried to do that from the beginning: to be a neutral home, and to promote the mission of this space rather than a certain project. And what we really wanted to do is have people building more operators for stateful services on top of Kubernetes, and provide a platform in which to do so.
Here is a vision for that, and hopefully we can bring in more and more people to help really sharpen that knife, and bring a really great day-two and stateful-service experience to Kubernetes. And again, it's really about growing the community around this project and around what we're trying to do — for the "I have deployed my application, I've deployed my stateful service; now what do I do?" problem. Next slide, please.