From YouTube: CNCF TOC Meeting - 2019-04-09
A
Yes, it's very quiet. Alexis, it's about five minutes past now, so we'll get started. I see Alexis, Joe, and Michelle from the TOC; if there's someone else I missed, let me know via text, but we have a full schedule today, so we might as well get started. We have three project presentations today. This is one of our TOC meetings that's dedicated to community presentations and hearing from projects out there that have been in the backlog; we have three. Today we have NSM, Keycloak, and Strimzi presenting, so we'll give them each about 15 minutes to present, with some time for questions, and then we'll go from there. So, without further wasting time, let's have the NSM community present. I'll hand it off to Ed, I believe. Sure, yeah, Taylor's going for it, so.
B
Service mesh. So, Kubernetes networking actually does a fairly brilliant job of optimizing for the average case, right? You push the things developers care about front and center in the API: things like L3 reachability, network policies for isolation, and services to give you some very basic load-balancing functionality. And it completely hides most of the implementation details that developers don't care at all about, like interfaces, subnets, etc., and this is super well done. Next slide. But it doesn't actually handle every case that people are wanting to do around networking in Kubernetes.
B
So there are a bunch of cases, and the more we dig into it, the more these come up. Cloud-native NFV is one; various L2/L3 enterprise domain use cases come up a lot; and there's a whole lot of other kinds of things, innovation-wise, that people are wanting to try that don't fit very well into the standard Kubernetes networking model, and that's fine, actually. Next slide. So people look at the history of a lot of these things. Istio is super good if the problems you're looking to solve live at L7, right, if you're dealing with HTTP messages; it's probably the solution for you. That's not what Network Service Mesh does at all. But it's less good for things where your payloads are Ethernet frames, IP packets, or other kinds of things that live below L7.
B
So we started with the presumption that Kubernetes was not going to change for these use cases, which are a small minority of what people want to do, and so we had to figure out what to do with the existing infrastructure to meet those needs. And we have done that loosely coupled, meaning: how do we make it so you don't get one giant glom of networking?
B
Instead, it handles payloads that are IP packets or Ethernet frames, or possibly more exotic things. Next slide. So, again, back to Sarah's fundamental problem: she loves Kubernetes networking, but she also wants to be able to get secure connectivity to her corporate intranet. Next slide. In Network Service Mesh, the way you do this is you define a network service. A network service is very analogous to a normal service in Kubernetes. It has a name. It also has a spec, which has a payload: the payload you want delivered to your pod. Next slide.
So, conceptually, Network Service Mesh has three basic concepts. The first is the network service we've already introduced. This is fundamentally the intersection of connectivity, security, and quality of service: whatever it is that you want, in the abstract, to happen in terms of the service you want from the network. Next slide. An example here would be secure internet connectivity. Next slide. The second concept we have is a network service endpoint. This is very analogous to Endpoints in Kubernetes.
B
It is the concrete implementation that does the thing that you want. Next slide. Again, like Endpoints in Kubernetes. So, in this example, a VPN gateway pod, something that does whatever it has to do to talk to your corporate VPN concentrator, would be an example of this. And then the third concept is the L2/L3 connection. This will often pop up in Sarah's pod as an interface, but conceptually it's just: Sarah sends traffic to her corporate intranet and it gets there. Next slide.
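To make the three concepts concrete, here is a hypothetical manifest for the network service described above. The API group/version, field names, and the client-side annotation are illustrative assumptions modeled on the talk, not the exact schema; check the Network Service Mesh repository for the real custom resource definitions.

```yaml
# Hypothetical NetworkService custom resource (illustrative only)
apiVersion: networkservicemesh.io/v1alpha1
kind: NetworkService
metadata:
  name: secure-intranet-connectivity
spec:
  payload: IP            # this service carries IP packets (could be ETHERNET, etc.)
---
# A consuming pod (Sarah's pod) would request the service by name, for
# example via a pod annotation, without ever seeing subnets or interfaces:
#   metadata:
#     annotations:
#       ns.networkservicemesh.io: secure-intranet-connectivity
```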
B
But we all know that things don't stay simple. If you start out in a world with a VPN gateway pod, eventually your InfoSec people are going to want you to interpose other things between Sarah's pod and the VPN gateway that do, for example, firewall rules based on policy. It will inevitably become more complicated: you will have IDS boxes and IPS boxes and a whole host of other things that people want, functionally, along this chain, and so Network Service Mesh has the concept of composing these things together.
B
So every pod in the system that is exposing a network service, say the firewall pod and the VPN gateway pod, they're both exposing the network service "secure internet connectivity", but they can expose it with what we call destination labels, in this case app=firewall and app=vpn-gateway. Next slide. When the firewall pod offers secure internet connectivity, it realizes it doesn't really know how to do all of that; it knows how to do a piece of it. So it also requests the service, and it can put a label on its request.
B
What
comes
next.
All
it
knows
is
it
provides
part
of
a
service,
and
then
it
consumes
the
rest
of
it
and
you
can
simply
add
an
IDs
between
the
firewall
and
VPN
gateway
pod,
just
by
changing
the
network
service
definition
in
order
to
element
your
policy
excellent.
So
this
is
super
important
because
it
means
you
get
a
minimal
blast
radius
when
IT
inevitably
decides
that
something
else
has
to
happen
here.
You
have
to
have
additional
pieces
of
the
chain.
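A sketch of how the composed chain just described might be expressed, again with hypothetical field names modeled on the talk's app=firewall and app=vpn-gateway destination labels rather than the project's exact schema:

```yaml
# Hypothetical composed NetworkService: traffic enters the chain at the
# firewall, which in turn requests the VPN gateway (illustrative schema only)
apiVersion: networkservicemesh.io/v1alpha1
kind: NetworkService
metadata:
  name: secure-intranet-connectivity
spec:
  payload: IP
  matches:
    - match:
        sourceSelector:
          app: firewall          # requests coming from the firewall pod...
      routes:
        - destinationSelector:
            app: vpn-gateway     # ...are routed on to the VPN gateway
    - routes:
        - destinationSelector:
            app: firewall        # everything else enters at the firewall
```

Inserting an IDS between the two would then be a matter of editing this definition, not touching Sarah's pod.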
B
That's fine. So you'll note here that Sarah, the developer, never sees IP subnets, routes, or interfaces. For folks who are interested, I could talk about the technical details of how we handle that, but they're never anything that actually impinges on her. Next slide. Also, this does not require any Kubernetes upgrades.
B
We don't need any changes upstream at all to make this work, and you can continue using whatever CNI plugin you wish in order to get your traditional Kubernetes networking, because we're completely orthogonal to CNI. And then, getting to the end here, with community: we have about 132 stars and about 46 forks, and we've got code contributions from about 22 folks. Typically, our weekly community meetings tend to draw a little over 20 folks, so there's a fair bit of interest.
B
We are currently running over 1,600 views on our KubeCon talk, and we're getting a bizarrely large number of views on our weekly meetings; I don't quite understand why. And then we have a number of different companies who are involved as existing sponsors who are working on aspects of this problem.
D
I've got one if nobody else has. So I was curious about the approach: the decoupled approach, and the fact that the developer and launcher of the application doesn't really need to know much about what's happening on the back end. But it seems that the current API kind of determines the implementation, being a string of pods down this service chain, we can call it that, which typically has big performance implications. Have you given any thought to changing the implementation in ways which would seem not to impact the application end of the API?
B
Absolutely. One of the things that I have not gone into in this presentation, because of the limitations of time, is the Network Service Mesh architecture itself. While it works beautifully with Kubernetes, and we've done a lot of work to make it work well in Kubernetes, it is not actually welded to Kubernetes. And so, within the architecture, you could be getting your network service from the physical network; you could be getting it from a VM running in some VIM somewhere. It's very agnostic on the point of where it's actually running.
B
In
addition,
the
the
architecture
is
completely
agnostic
as
to
how
how
you,
essentially
granular
you
break
the
work
up
into
or
how
monolithic
you
make
it.
So
one
of
the
things
and
discussions
with
people
who
are
looking
at
building
with
building
network
services
that
we've
had
as
the
discussion
is
you
sort
of
have
two
ways
if
you're
building
a
network
service
in
point
that
you
can
use
network
network
service
mesh,
the
first
is
just
getting
you
that
first
hop
plug
somebody
has
a
workload
you
want
to
get
granularity.
B
The
work
look
plugging
into
your
network
service.
At
that
point,
you
can
leave
network
service
mesh
and
do
whatever
thing
it
is
you're
going
to
do
for
anything
that
happens
past
that
point
right,
the
secondly
you
can
use.
It
is,
if
you
do
have
this
composition,
chaining
effect
that
you
would
like
you
can
use
network
service
mesh
to
achieve
that.
But
the
architecture
itself
is
very
agnostic
as
to
when
you
get
off
the
bus,
because
there
going
to
be
a
lot
of
different
answers
as
to
what's
the
right
way
to
approach
that
problem.
D
Okay, yeah, that actually was my question. My question was actually more about: if a service mesh, sorry, a network service, required, you know, half a dozen of these sub-services, have you given any thought to actually composing those into a single implementation rather than six pods, which I think was what was in the presentation?
B
Oh, that's entirely up to whoever is building the network service endpoint. Network Service Mesh is giving you the virtual wires to connect them. You, as the implementor of the network service endpoint, can decide the degree to which you would like to centralize all of that in a single blob or disaggregate it into multiple blobs. And, you know, quite honestly, I am not smart enough to know; I don't think there's a universal right answer for that. I think you will see a lot of experimentation as people figure out, again, the normal trade-off between efficiency by centralizing versus flexibility by decentralizing, and one of the really exciting things in this space is that we have no idea what the right answer is. Really, we have notions, but we don't really know, and people are going to experiment to figure it out.
B
No, I see your point about the disaggregation, and that will vary. The reason I was asking for clarification is, from the point of view of the consuming workload, I wasn't sure how we could make it simpler than it is. But I think your underlying point is: if I'm someone who is building a network service that I want to offer, then, you know, there are options there. And this aggregation doesn't reduce complexity; there's no question about that. You always have to trade the value of that against the cost.
A
Yep, cool. I think that about does us for time right now, so thank you, Ed. I posted the proposal from the NSM team on GitHub, so feel free to take a look at it. It already has kind of cleared the bar for the sandbox, with two TOC sponsors in Joe and Matt, but other than that, I think you're at time, and if you have questions, reach out to their community. Thanks.
G
Yeah, so, okay. So, first of all, what is Keycloak? Well, it's a centralized authentication and authorization service for modern applications and services. We built it focusing on usability for application developers, trying to make it as easy as possible to install Keycloak, as well as easy to secure applications and services with Keycloak.
G
So, with regards to centralized authentication, we have support for OpenID Connect and SAML. This allows applications to easily delegate all authentication to Keycloak, and applications are also able to obtain tokens that allow them to invoke API services with end-to-end user authentication. And, of course, this works for microservices deployments, whether with Istio or other deployments like that. So, next slide. We also have support for centralized authorization.
G
So
if
you
want
to
delegate
the
permissions
and
associated
policies,
you
can
do
that
to
Kiko
canned
and
then
fully
centrally
managed
access
to
all
services
from
from
Kiko
next
slide.
As
I
said,
we
have
all
the
login
pages
that
you
put
you
require,
as
well
as
allowing
users
to
self
register,
allowing
users
to
recover
their
password
or
configuring
OTP
and
all
of
the
things
so
that
you
don't
have
to
develop
those
for
your
own
applications
like
site.
G
We
have
an
extensive
admin
console
that
allows
the
admin
to
manage
most
of
the
aspects
of
the
Kiko
server
and
we
also
have
an
account
console
that
allows
your
end-users
to
manage
their
own
accounts
and,
of
course,
we
have
accompanying
REST
API.
So
you
can
bake
this
kind
of
capability
into
your
own
applications.
If
you
want
to
ok
next
slide,
so
Kiko
Confederate
identities
from
a
new
locations.
So,
as
is
there,
we
can
use
the
internal
key
called
database
for
your
users,
but
we
can
also
load
identities
from
aldub
Active
Directory,
any
custom
user
store.
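As an illustration of the token-based delegation described above, the sketch below builds a direct-grant (password) token request against Keycloak's per-realm OpenID Connect token endpoint. The server URL, realm, client, and credentials are hypothetical; at the time of this talk the endpoint lived under Keycloak's `/auth` context path.

```python
def token_request(base_url: str, realm: str, client_id: str,
                  username: str, password: str):
    """Build the URL and form body for an OIDC direct-grant token request."""
    # Keycloak exposes the OIDC token endpoint per realm at this path.
    url = f"{base_url}/realms/{realm}/protocol/openid-connect/token"
    form = {
        "grant_type": "password",   # the "direct access grant" flow
        "client_id": client_id,
        "username": username,
        "password": password,
    }
    return url, form

# Hypothetical server and realm: POSTing `form` to `url` returns a JSON body
# whose access_token can be sent as a Bearer token to your own API services.
url, form = token_request("https://sso.example.com/auth", "demo",
                          "my-app", "sarah", "s3cr3t")
```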
G
Not only do we do scaling and high availability within a single site; we also have support for replication across multiple sites. Next slide. So it's easy to secure your applications and services with Keycloak. You have a few options, depending on what type of application you have. We have a number of Keycloak-specific client adapters. We also have the Keycloak Gatekeeper project, which is a reverse-proxy solution that can be deployed as a sidecar.
G
We
also,
of
course,
support
both
open
ID
connection
in
sam'l
two,
so
you
can
use
any
compatible
libraries
here
or
if
your
applications
already
have
support
for
one
of
these,
you
should
be
saw
it.
The
next
site,
Kiko,
is
pretty
opinionated
and
we
try
as
much
as
we
can
to
avoid
feature
creep,
but
at
the
same
time
key
cookies,
highly
extensible,
and
you
can
customize
it
a
lot
when
you
need
to
some
examples
here
is
that
you
can
completely
replace
the
whole
authentication
flow
and
provide
complete
alternative
ways
of
authenticating
users.
G
For
instance,
forecasts
and
WS
Federation
next
slide
a
few
things
our
roadmap,
so
the
highlight
is,
we
are
planning
to
migrate
to
a
communities
native
Java
stack
called
caucus.
This
will
allow
us
to
significantly
reduce
startup
time
as
well
as
memory
footprint
and
we'll
get
numbers
comparable
to
go
based
projects.
G
We're
also
going
to
work
a
lot
on
improvements
around
the
storage,
focusing
on
support
for
zero
downtime
upgrades,
as
well
as
improving
and
simplifying
the
multi-site
setup.
We'll
also
be
looking
at
potentially
using
etcd
for
persistence
as
an
alternative
to
the
current
relational
database
persistence
layer
that
we
have
will
soon
be
adding
support
for
operators
as
well
as
observability.
F
So, the project started in 2014. It has seen quite rapid adoption, mainly because the product behind it solved a real problem, and it made it very easy and quick for developers to secure their applications. As you can see on the graph from GitHub, over time we've had a steady and pretty decent amount of external contributions. Next slide. There are a few community statistics worth highlighting, especially the decent traffic on the mailing list. Worth mentioning is also the Docker image pull count.
F
Actually,
when
we
were
originally
scheduled
to
present
this
dag
to
TOC
in
October
back,
then
this
number
was
around
five
million,
so
it
more
than
doubled
in
half
a
year
and
this
kind
of
speaks
for
itself
excited
was
Sanford
website
traffic
we've
seen
it
tripled
during
past
18
months,
and
we
also
observed
that
we
had
pretty
much
linear
growth
like
this
since
the
beginning
of
the
project.
Next
slide,
we
tried
to
gather
some
data
from
github
profiles
of
people
contributing
to
key
cog.
Those
were
the
company
names
which
came
up
with
those
profiles.
F
We have also run a community survey, asking a number of questions about how Keycloak is being used. One of the questions was: are you willing to be a public reference? Those were the company names which came up agreeing, and again, you can see quite a few recognizable brands here. We also know from side conversations of a number of brands using Keycloak internally. Next slide.
F
One
of
the
questions
we
asked
was
how
we
deploy
ACOG
and,
if
you
add
up
your
keyboard,
a
decided,
lowest
and
open
script,
you
can
see
that
pretty
much
two-thirds
of
Key
Club
deployments
reported
by
our
community
are
in
clouds
related
environments.
So
this
also
speaks
a
lot
next
slide,
and
this
is
among
the
many
reasons
we
think
that
we
fit
in
SEF
nicely.
F
So
we
are,
we
have
a
very
healthy
community
with
number
of
contributions,
as
young
highlight
that
we
try
to
be
focused,
avoid
feature
crepe
and
really
focus
on
embracing
future
facing
standards
like
operatic
connect,
in
order
to
both
being
already
adult
widely
adopted
in
cognitive
ecosystem.
We
primarily
aim
application
developers,
your
lightweight
portable
and
highly
customizable
next
slide.
C
Hey Chris, I know when we originally were going to propose this, you know, the project has grown a lot in the last six months. Do we feel sandbox is appropriate? I kind of feel like there's a pretty wide base, and, I mean, of course we want more adoption, but I think it's been adopted more than what we had in our original proposal. Do you feel like it could maybe skip sandbox, I mean?
H
I think, you know, my biggest concern here is that, and I don't know much about Keycloak, but hearing about the plans for improving the way that it gets deployed makes me think that it's a little bit of a traditional Java application. And so I think one of the things that I would want to look at carefully, when we look at moving towards an incubation level, would be something like: is deployment something that feels very natural on something like Kubernetes or other dynamic environments? Are there extensibility hooks that are usable without, sort of, building JARs? It sounds like those things are on the roadmap, and I think that seems like things are moving in the right direction there. It also seems like those are very nascent.
I
And my overall preference would be to get them into the sandbox quickly and then start that conversation, because I think, if we try to skip over sandbox, then we're going to spend more time kind of circling on it and trying to figure it out.
D
If we have time: the sort of elephant in the room when it comes to centralized authentication and authorization is typically high availability and scalability, because if the centralized thing is either unavailable or slow, so are all the applications that rely on it. And there wasn't much talk about that in the presentation. It sounded, and I may be putting words in your mouth here, like there is some centralized server here that has to be working; if it's unavailable, then the services are unavailable. Is that true?
F
Certainly. So, all the names we mentioned in the presentation are not product-related; all of this is purely community data. I don't have any permission to, kind of, lift the customer data. And also, we know from the mailing list that there are pretty serious deployments, actually, with people approaching multi-tenancy to a certain extent.
A
I think we'll continue the discussion on the mailing list and we'll kind of go from there, and kind of seek input from the TOC, and maybe an additional TOC sponsor. So thank you for your time, and given that we have 20 minutes left, we should let the last project go. So next up, I think, is Strimzi.
J
Hello, everyone. My name is David Ingham; I'm an engineering director at Red Hat, and I've got some of my engineering colleagues with me today who are hands-on with this project. So, in a nutshell, Strimzi is a fully open source project that's focused on the deployment and management of Apache Kafka on Kubernetes. Essentially, it contains container images for Kafka and ZooKeeper, which is a prerequisite component of a Kafka cluster, and it provides operators for configuring the cluster and also the topics and users that are running on that cluster. And yeah, like the slide says, it leverages the power of Kube for scaling and high availability, etc. I'll talk about that a little bit. Next slide, please.
So why are we bringing this to the CNCF? This chart is interesting: it came from the recent Apache Kafka survey, and it illustrates how people are deploying Apache Kafka. You can see that there's 34% here that are focused on Kubernetes, and this is an increasing trend.
J
So,
in
my
experience,
working
with
customers,
increasingly
organizations
are
looking
to
deploy
Apache
cough,
get
close
to
the
workloads
that
are
consuming
and
producing
messages
into
the
into
the
event
streams.
Next
slide,
please!
So
there's
a
lot!
That's
great
about
running
Kefka
on
kubernetes,
so
category
itself
can
be
a
pretty
daunting
application
to
to
deploy
and
provide
ops,
for
you
can
think
of
it
as
a
cluster
database
management
system.
As
an
analogy,
so
there
are,
there
are
many
components.
Typically,
you'll
run
with
a
cluster
of
several
Broker
instances.
You'll
have
several
zookeeper
instances.
J
You'll have other components, for something called Kafka Connect, which is for interacting with third-party systems, or MirrorMaker, which is about replicating topics between sites. So you'll have a large number of processes that are used to support Apache Kafka, and dealing with that, you know, providing the ops support for that, dealing with the physical servers, the scalability, etc., is a real task. And Kafka on Kubernetes really simplifies what it takes to be able to deploy and manage a cluster like this, with a much lower operational overhead.
J
Ok
next
slide,
please.
So,
as
we
know,
kubernetes
provides
a
number
of
underlying
facilities
for
dealing
with
stateful
components,
but
it's
still
a
pretty
complicated
test.
So
specifically,
Kafka
needs
a
list
of
the
things
here,
like
stable,
Broker
identities,
a
way
to
do
discovery,
provisioning
the
management
of
durable
state
being
able
to
reattach
the
durable
state
in
the
event
of
a
failure,
and
you
can
earn
a
reward,
is
increasingly
kubernetes
is
moving
to
address.
J
Stateful
applications
like
this,
so
there
are
these
raw
primitives
that
are
provided,
but
it's
still
quite
a
complex
task,
and
rather
than
have
every
everyone,
that's
looking
to
deploy.
Kefka
on
kubernetes
have
to
deal
with
these
low-level
primitives.
It
makes
a
great
deal
of
sense
to
provide
a
common
way
of
doing
this,
which
sort
of
lowers
the
barrier
of
entry
for
deploying
and
managing
Kefka
clusters.
J
That's the premise. Next slide, please. So, I mentioned operators; I'm sure most folks on this call are familiar with operators. Essentially, we use custom resource definitions to define the different entities that we're working with, so the clusters, topics, users, etc., and then we use the operators to monitor those custom resources and make changes to the system, to make forward progress so that the real state matches the desired state that's provided in the custom resources. So we're observing, we're comparing against the current state, and then we're acting on that.
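The observe/compare/act loop just described can be sketched in a few lines. This is a toy model of the reconciliation idea, not Strimzi's actual (Java) implementation, and the dicts are stand-ins for real Kubernetes resources:

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired state (from custom resources) with actual cluster
    state, and return the actions an operator would take to converge them."""
    actions = []
    for key, want in desired.items():
        have = actual.get(key)
        if have is None:
            actions.append(("create", key, want))   # resource missing: create it
        elif have != want:
            actions.append(("update", key, want))   # resource drifted: update it
    for key in actual:
        if key not in desired:
            actions.append(("delete", key))         # no longer desired: delete it
    return actions

# e.g. the user bumps the broker count in the Kafka custom resource:
actions = reconcile({"kafka-replicas": 5}, {"kafka-replicas": 3})
```

A real operator runs this loop continuously, re-observing after every change rather than computing a one-shot plan.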
J
So we have three operators. There's the cluster operator; its primary role is managing the cluster. Here you provide a definition, in the form of a custom resource, of the number of broker nodes that you would like, the configuration of those broker nodes, and the configuration of the storage, and then the operator monitors those custom resources and will deploy, update, or delete the cluster based on your wishes.
J
The cluster operator manages the ZooKeeper stuff for you, so you don't have to deal with spinning up and managing the ZooKeeper pods yourself; that's implicitly managed by the cluster operator, and I'll show you a picture of that in a moment. Okay, yeah. And then, finally, the user operator is responsible for managing users. The cluster operator also looks after other components, the Kafka Connect and MirrorMaker components, as well as the Kafka brokers themselves. Next slide, please.
J
So this is just emphasizing the point: it's a Kubernetes-native definition of the different components of the system. The first snippet here is the Kafka custom resource; in the middle we have the topic; and on the right-hand side we have the user. This is what gets picked up by the operators.
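The three snippets on the slide are roughly along these lines; the names, sizes, and `apiVersion` below are hypothetical placeholders, and the exact schema should be taken from the Strimzi documentation:

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka                       # the cluster: watched by the cluster operator
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    storage: { type: persistent-claim, size: 100Gi }
  zookeeper:
    replicas: 3
    storage: { type: persistent-claim, size: 10Gi }
  entityOperator:                 # asks the cluster operator to also deploy
    topicOperator: {}             # the topic and user operators
    userOperator: {}
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic                  # watched by the topic operator
metadata:
  name: my-topic
  labels: { strimzi.io/cluster: my-cluster }
spec:
  partitions: 12
  replicas: 3
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser                   # watched by the user operator
metadata:
  name: my-user
  labels: { strimzi.io/cluster: my-cluster }
spec:
  authentication: { type: tls }
```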
J
Thanks. I think if you push the next slide, we might see some animation. Yeah. So here, what's happening is the operator is picking up the custom resource, and it's spinning up the ZooKeeper nodes and then will spin up the Kafka broker nodes. It will also instantiate the topic and user operators on your behalf.
J
So
the
only
thing
that
you
need
to
deploy
is
the
cluster
operator
next
slide,
please
and
one
more
time,
I
think
yeah,
and
then,
when
we
make
a
change
to
the
cluster
and
an
update
is
required,
the
operator
is
embodied
with
intelligence
to
do
the
appropriate,
rolling
deployment
of
the
various
components.
So
this
is
something
that
requires
a
little
bit
of
skill.
J
It's
a
essentially
what
we're!
What
we're
doing
here
is
taking
knowledge
that
that
typically,
the
caf-co
operator,
the
human
caveat
operator,
would
have
and
we're
trying
to
automate
that
and
embody
that
the
software
operator
to
do
these
tasks
on
the
user's
behalf.
Therefore,
minimizing
the
amount
of
work
required
and
lowering
the
barrier
of
entry
next
slide.
Please.
J
So
then,
following
on
from
that
point,
the
ambition
is
to
really
take
that
to
to
the
limit
so
to
improve
the
system
with
smarts
for
dealing
with
automatic
configure
a
reconfiguration,
so
a
concept
called
cluster
balancing,
which
is
something
that
one
typically
needs
to
do
at
a
Kefka
cluster,
to
provide
automatic
scaling
to
look
for
anomalies
in
configuration,
etc.
So
the
goal
is
to
make
it
possible
for
anyone
to
be
able
to
operate
a
cackler
cluster
for
a
commercial
production
deployment
with
with
a
lot
less
specialist
knowledge.
Next
slide,
please
so
in
terms
of
community.
J
So
I've
just
got
a
couple
of
slides
here,
similar
to
what
we've
seen
in
the
other
presentations.
So
the
work
began
in
October
2017
in
earnest
and
you
can
see
the
data
there.
So
it's
a
pretty
popular
project
and
we've
had
consistent
contributions
to
throughout
that
period
and
next
slide.
Please
and
similarly
in
terms
of
website
growth,
I
don't
have
analytics
before
September
2018,
so
I
can't
show
you
what
happened
before
that.
But
in
the
time
between
then
and
now,
we've
seen
about
a
250
percent
growth
in
monthly
visitors.
J
We've
got
a
pretty
active
slack
channel
where
folks,
that
are
using
streamlet
streams
in
earnest,
come
and
ask
questions
and
and
get
feedback
on
their
configurations
next
slide.
Please
I,
won't
read
through
these
I
took
these
from
the
TOC
issue,
so
I
think
Chris,
as
Chris
will
paste
in
the
TOC
issue,
perhaps
so
that
everyone
can
see.
But
you
know
you've,
we've
got
four
red
hat
people
on
the
call
here.
I
just
wanted
to
give
some
quotes
here
from
people
outside
of
Red
Hat.
J
So we would like, I guess, to formally declare that we'd like to become a sandbox project at the CNCF, and the main focus here is about increasing awareness. Whenever we present about Strimzi, and you talk to people about Strimzi and they play with it, the reaction is uniformly positive; I just wish we could have more people become aware of the work that's happened here. So that's the primary reason, I think: to sort of increase awareness. And as part of that, you know, we would like to get greater community involvement.
J
So
we've
we've
got
a
reasonable
community.
Do
you
know
the
people
that
the
quotes
that
I
mentioned
on
the
previous
slides
that
people
from
other
firms
that
have
made
contributions
here?
And
so
it's
not
purely
a
red
hat
thing,
but
I'd
really
like
this
to
become
a
true
community
project
and
and
just
become
a
neutral
home.
You
know
so.
J
Well,
I
think,
there's
there's
some
general
general
lessons
learned
about.
You
know
you
can
look
through
string,
Z
and
and
see
the
you
know,
just
the
challenges
of
dealing
with
a
complex
stateful
system
and
how
to
automate
how
to
automate
that
I
think
what
the
idea
you
know
it
used
to
be
the
case.
If
we
rewind
a
couple
of
years,
it
used
to
be
the
case
that
just
the
idea
of
bringing
something
like
Kafka
to
kubernetes,
you
know
you'd,
be
you'd,
be
scared
off
of
that.
J
You
know
when,
when
stateful
sets
first
arrived,
that
was
the
thing
that
sort
of
made
it
potentially
possible.
But
it's
still
a
you
know,
pretty
complex
task,
I
think
what
string
Z
is
shown
is
that
you
know
it
is
possible
to
to
build
systems
like
this,
and-
and
you
know
you
get
a
lot
of
advantages
by
automating-
a
lot
of
what
would
typically
be
done
by
operational
expert.
It's
that
that
you've
employed
to
manage
your
your
cluster.
You
know
in
the
same
way
that
you
hire
your
database
management
system.
Experts
I.
E
Yeah, so, coming at operators other than yours for Kafka: I'm coming at this from the question that's popped up on the chat about other operators. You should have a look at it; as Chris pointed out, there are several other operators in the operator-framework/awesome-operators list, and there's also a Confluent operator. People want to know, you know, what is your view? If you want to have the whole community, and for it not to be just the Red Hat thing, then I think there needs to be a rationale, which could be, well, actually...
E
As
specifically-
and
this
is
a
really
important
question
for
us
as
an
organization
because
there's
gonna-
if
you,
if
you
wonder
about
what
projects,
could
we
have
that
our
kubernetes
4x
there's
potentially
a
lot
of
x's
out
there,
we
can
end
up
with
billions
of
these
things,
so
you
know,
there's
we've
got
to
draw
the
line
somewhere.
We
don't
know
where
to
draw
it.
So
we've
got
to
start
asking
tough
questions
about
specifics.
D
Yeah, I was just curious: you guys seem to have done, you know, some work running ZooKeeper as well as Kafka, and the ZooKeeper stuff alone could be quite independently useful. I was just wondering if you've ever considered splitting that part out, so that someone wanting to run ZooKeeper but not Kafka would be able to use that part of the project?
E
I think that could be the key to getting this thing into a good state. It looks like some of the other operators may have kind of been experiments, and Confluent's being closed source obviously answers that question. I do think that, I mean, I would like to put up my hand as a potential sponsor, because I'm very excited about this area, but I would like to know: what is your step-by-step plan for this becoming a community-neutral project, rather than, you know, just the Red Hat thing?
J
Well, I think, to be fair, it's a community-neutral project right now, in the sense that, you know, we accept contributions from non-Red Hat people, and there have been a number, and those comments that I plucked from the TOC issue, they weren't from Red Hat people; they were from the likes of Lightbend and other firms that have been contributing to the operator. But I know what you mean: we have a significant team that's working on this, so it is, you know...
J
You could form the perception that it's a Red Hat thing, and that's a fair call-out here. That's one of the reasons that we want to bring it to the CNCF: to build a real community around it. You know, at Red Hat we know the value of rich, multi-vendor open source projects, and we'd like to turn this into that, yeah.
E
We went through a similar evolution with Weave Cortex, which is now a CNCF project, and we actually work on it with companies that we compete with, and that actually works really quite well, in fact. But a key stage was moving to open governance and a community roadmap, and encouraging that way of working. Yeah.
A
In the interest of time, we're a minute over and want to be respectful of everyone's time here. I kicked off discussions for both Keycloak and Strimzi on the TOC mailing list, so please move the discussion there, and hopefully these projects will find TOC sponsors and we'll go from there. Other than that, I want to thank everyone for their time today, and thank you for fitting in three projects today; it's great. Take care, all. Thank you.