From YouTube: CNCF TOC Meeting - 2019-05-14
A
Cool. I think, Taylor, feel free to get started. We could just go to the agenda slide right now. I have Brendan, Brian, and Joe Beda not attending, but overall the TOC members are here. So we have six out of nine attending and quorum is met, so feel free to take it over, Liz, if you want to. Here's the agenda.
C
Just by way of background, the storage working group had produced some content, with Pat being involved in drawing in some parts of the community, and there are people who can take this further. We have some other initiatives as well; we're currently running a survey with the end user forums.
D
Sorry, yeah, I'm happy for us to vote on this. I think that in the past, when we've done this, people have often taken it a bit more seriously and brought out some last-minute questions, which we can account for during the process, and also we've been pretty good about fixing problems with a 1.1 release as well. So let's just vote.
E
Yes, it's me, I'm Doug. Because we have some new TOC members, I decided to give a little bit of background on why the CloudEvents project got started. If you guys are already familiar with this, don't hesitate to stop me and I'll skip it, but I figured, just in case, I'd try to cover it for you guys.
E
If the middleware, for example, wanted to do some sort of advanced filtering, more often than not it would have to understand, "oh, this was an S3 event, as opposed to some other kind of event," to be able to do the filtering. People were looking for some sort of interoperability around this space, and so that's kind of why CloudEvents got started. I just have a quick little graphic here.
E
And so what CloudEvents was able to do was to try to harmonize that and make life a little bit easier for people. So let's go to the next slide. How did we do this? What we looked at doing is basically defining a set of common metadata across events, and the way I tend to look at this is that we were doing for events what HTTP did for simple message exchanges, meaning HTTP at a very high level is very basic.
E
It's just a set of headers, a blank line, followed by arbitrary data in the body, right? But the commonality of those HTTP headers allows for some very powerful yet simple things, like basic routing, and a shared, well-known location where metadata can be found, if you do get agreement on what particular metadata you're looking for. And so we kind of follow a similar pattern here, by defining some common metadata that appears on all events.
E
You know, general stuff like that. But one of the outcomes of the white paper was some recommendations for the TOC around possible areas of interop and harmonization, and one of the low-hanging fruits was events. And so, as a result of that, the TOC voted and said, okay, go look at what you can do around eventing, and that's how CloudEvents kind of got started. All right, and obviously, if you have any questions, please jump in and ask them anytime.
E
Otherwise, I'll just keep on rambling at laser speed. All right, so an overview of what the spec itself is. It's actually a very simple spec; as I said, it's just what kind of metadata we want to see in all events for them to be able to be called a CloudEvent. If you look at this list, the first four that are in bold are really the only ones that are required. Everything else is optional, and they're not exactly rocket science here, right?
E
Every event gets a unique ID; a source, which is basically a URL saying where this event came from; the CloudEvents spec version; and then the type, and you can think of type as the type of event, right, so is it a create versus a delete kind of thing. Very basic stuff, things that almost every single event should have anyway, so it's in there. Now, the other, optional attributes are things that are useful but may honestly not be required for all events. Things like the subject, right: what is this event about, as opposed to
E
where did it come from? The time it was generated; information about the payload, right, the content encoding, the schema of the payload, things like that. They're useful, but not necessarily always going to be there. So at the most basic level, you could think that any of the messages flowing around the system could be turned into a CloudEvent by just adding those four top-level properties, and we'll get into a little bit of what that might look like in a second. But once we defined those properties,
E
what we then did is said: okay, what do those things look like in different transports and serialization formats? So, for example, we define what a CloudEvent looks like for a JSON payload, or for when you want to serialize the CloudEvent in JSON, or what it looks like when it's being transported over HTTP versus MQTT, that kind of stuff. Okay, but again, remember, the point of this isn't to transform people's data or business logic. This is about the metadata, to help move the message from one place to another.
E
So it's about the metadata, not the data. Okay, moving forward, let's take a look at this example. Go ahead, hit the button. Okay, so first let's start off with a very simple HTTP request. Nothing too exciting here: a simple POST, and we created a new item in some database or something. So what would it take to turn this into a CloudEvent? Boom:
E
Those
four
attributes-
and
this
is
in
the
binary
format
of
HTTP.
So
basically,
what
we
did
is,
we
said,
add
these
four
HTTP
headers,
which
are
completely
safe
to
add
to
any
message
anyway,
and
when
this
now
becomes
a
cloud
event
and
any
receiver
that
understands
cloud
events
can
now
use
this
logic.
To
do
some
processing,
maybe
filtering
or
routing
or
is
appropriate
without
ever
haven't,
actually
know
that
this
was
a
new
item,
type
of
event
from
big
cocom
or
big
yeah,
bigger
comm
right.
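A minimal sketch of what that binary-mode request might look like, written in Python with the requests library; the endpoint URL, event type, and source URI below are made-up placeholders for illustration, not values from the slides:

```python
import requests

# Hypothetical payload -- exactly the body the application was already sending.
payload = {"name": "widget", "quantity": 3}

# Binary mode: the CloudEvents metadata rides in HTTP headers,
# and the body stays exactly what it was before.
headers = {
    "Content-Type": "application/json",
    "ce-id": "abc-123",                           # unique per event
    "ce-source": "https://bigco.example/orders",  # where the event came from
    "ce-specversion": "0.2",                      # the spec version at the time of this talk
    "ce-type": "com.bigco.item.created",          # the kind of event
}

requests.post("https://receiver.example/events", json=payload, headers=headers)
```

Any CloudEvents-aware receiver can route or filter on those four headers without ever parsing the body.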
E
So we have enabled some basic, high-level middleware routing and filtering type stuff without ever actually changing the business logic or the format of the event itself. Just a couple of extra headers, very simple stuff. So hit the enter key, let's go to the next one. We also allow for you to serialize the CloudEvent as a top-level thing. I mean, let's say you actually do want to use a standardized wrapper, in essence, for your eventing: we do define the CloudEvents
E
Format
itself
knows
the
content-type
changed
from
just
application,
JSON
to
application
cloud
events
plus
Jason,
and
really
the
only
difference
here
as
we
move,
the
data
from
the
headers
into
the
body
and
the
old
body
became
under
the
property
called
data,
but
it's
still
the
exact
same
data
just
serialize
slightly
differently.
If
you
want
to
actually
have
a
common
format
for
your
event
or
the
even
thing
metadata
as
well,
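For a rough illustration of that structured mode, here is the same hypothetical event from the earlier sketch, wrapped in a single JSON envelope; again, the names are placeholders rather than anything shown on the slides:

```python
import json

# Structured mode: metadata and payload travel together in one JSON document,
# sent with Content-Type: application/cloudevents+json.
event = {
    "id": "abc-123",
    "source": "https://bigco.example/orders",
    "specversion": "0.2",
    "type": "com.bigco.item.created",
    "data": {"name": "widget", "quantity": 3},  # the old HTTP body, unchanged
}

print(json.dumps(event, indent=2))
```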
All right, let's keep moving forward. So in terms of the deliverables, we obviously have the specification itself that defines the metadata.
E
We have the serialization rules for three popular formats, JSON, AMQP, and protobuf, and some popular transports, HTTP, NATS, and MQTT; you can read the list there. We did run into one interesting problem where, because we're trying to promote interoperability, there were some people that wanted to add a serialization for their particular format or their preferred transport, but we didn't feel like it was really a community or an interoperability
E
Transport,
for
example,
rocket
MQ,
was
one
of
the
ones
that
popped
up
and
the
reason
we
didn't
feel
it
felt
we
didn't
feel
it
felt
like
a
generic
transport
was
because
it
was
a
transport
for
one
particular
purpose
of
mine
with
only
one
implementation,
and
so
we
felt
like
it
was
a
little
bit
proprietary,
even
though
it
is
open-source,
and
so
we
went
back
and
forth
misaligned.
We
realized
that
there
probably
going
to
be
quite
a
few
transports
or
even
C
realizations-
that
might
fall
into
this
category
of
useful
and
used
by
people.
E
But by creating a specification for them, we're not really helping interoperability, because there's really only one implementation for each; and yet we didn't want those guys to feel excluded. So what we ended up doing was having basically a web page with a list of pointers to these, quote, proprietary transport bindings, which are hosted by the people that own the transports themselves. That way they still feel part of the community, but we're not necessarily promoting these at the same level as the ones we felt were real community transports and serializations.
E
So I wanted to call that one out, because that was a big discussion point for us. In terms of other deliverables, we have a primer, which is basically just our ramblings about design decisions and guidance for people in terms of how to implement this stuff; stuff that's very, very useful, but not really meant for a spec per se, so that's why we pulled it out of the spec. We have, what is it, six different SDKs, you can see the list there, and then we have a list of extensions.
E
It's very hard to track people from a traditional PR-type perspective, so we based it upon participation on the weekly phone calls instead. So far it's worked out fairly well; I think the people that end up voting are the ones who are most active, and so that works out really nicely, even though it is slightly different than other types of open source projects. We do have interop events, typically at KubeCons themselves, but people have used our demos at meetups and stuff like that. You can actually see, in the top right-hand corner,
E
this is actually going to be the demo that we're going to show off at KubeCon next week. It's basically simulating events going through a common bus at an airport, and it's actually based upon the ACRIS semantic model, which is produced by the Airports Council International group. Basically, what it's about is there's a common bus where, as people walk up to the vendors that are in the bottom part of the picture and order coffee, they'll get delivered the coffee.
E
But if the coffee retailer ran out of cups, the vendors and the retailers at the top would get the event, sending a notification or another event saying there's a delivery that needs to be made. So you'd see the trucks on the right-hand side start flying around the screen, picking up packages and delivering them to the retailers. And we're actually going to have the audience
E
members for this demo actually participate, by ordering coffee from their phones, so it should be a lot of fun, because we actually get the customers, or the participants in the audience, involved in the demo itself. But the point of this is to actually see events flying around, and you can see the list of events on the right-hand side there; they're kind of small, but you can actually click on those to see the actual CloudEvent that flew through the system.
E
We could talk a little bit about that, but anyway, the point here is to show interoperability and how CloudEvents has actually made this a little bit easier, and to show that it actually is being considered for use by some real-world people, like the ACRIS guys. As of right now we are still at 0.2, with 0.3 just around the corner.
E
We were hoping to make it in time for KubeCon, but realistically I don't think we're going to make it, because obviously it's next week; but I do expect it to happen relatively soon, within a matter of weeks. What's interesting is, if you actually look at the version numbers going forward, you can see that in terms of actual hard-core work, in terms of new properties or attributes that are going to get added to the spec, by the time we reach 0.3 we're basically going to be done.
E
0.4 is all about possibly other transports, and we haven't had a new one in a while there, so that's almost done as well. 0.5 is just resolving outstanding issues that are in our backlog. So really, even though 0.3 sounds like a low number, in my opinion we're actually closer to like a 0.8, verging on 0.9, type of thing, because I think once we get to 0.5, we're at that stage where it's more about testing and verification that this actually is useful, to go forward to get to 1.0.
E
Now, we haven't verified or discussed how long we're going to sit in this testing stage yet; that's going to come up when we get to the 0.5 stage, but I actually don't anticipate it being too long. So I'm hoping that maybe within a couple of months we may feel like, you know what, we're getting close to 1.0, and maybe we start thinking about possibly doing a vote at some point. We'll have to see how it plays out; that's just my personal take on it. All right, moving forward, in terms of adoption:
E
we do have quite a few people using it. These are the three that I got confirmation from that it's okay to use their names here, so Microsoft, Serverless.com, and SAP are using it. Probably the bigger open source project that's using it is Knative; they're actually using it as the basis for their eventing infrastructure.
E
So as events come into Knative and get shuffled around between the various components, or even off to the functions themselves, they actually get transformed into a CloudEvent to provide that level of consistency, so that when they started adding things like filtering mechanisms to Knative, they could do filtering based upon CloudEvent metadata, right? So they don't have to actually understand whether the event came from S3 versus Azure versus something else.
E
They
can
use
the
cloud
event
metadata
to
do
filtering
and
people
can
specify
these
filters
without
having
to
care
about
what
kind
of
business
logical
are
they
that
came
from
or
even
the
transport
that
was
used
right.
So
again,
that's
exactly
the
cloud
event
was
designed
for
Steaua
Lau.
You
do
that
basic
level,
middleware
routing
filtering
where
they
haven't
understand
the
events
themselves.
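As a toy illustration of that kind of metadata-only filtering, here is a small sketch; the events and the filter function are invented for this example and are not Knative's actual API:

```python
# A filter that only ever looks at CloudEvents metadata, never the payload.
def matches(event, required_type, required_source_prefix):
    return (event.get("type") == required_type
            and event.get("source", "").startswith(required_source_prefix))

events = [
    {"id": "1", "source": "https://aws.example/s3", "specversion": "0.2",
     "type": "com.example.object.created", "data": {"key": "a.txt"}},
    {"id": "2", "source": "https://azure.example/blob", "specversion": "0.2",
     "type": "com.example.object.deleted", "data": {"blob": "b.txt"}},
]

# Route only "object.created" events from the S3-ish source; the middleware
# never needs to parse or understand the provider-specific "data" payloads.
for e in events:
    if matches(e, "com.example.object.created", "https://aws.example"):
        print("routing event", e["id"])
```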
E
When that's all said and done, whether we actually reach 1.0 or we just feel like we're entering that testing stage where CloudEvents has kind of died down in terms of weekly churn, we're then going to shift our focus back to the serverless working group, because it's pretty much the same members in both organizations, and we'll then need to figure out what we're going to work on next, if anything.
E
We did start work on a workflow specification for orchestration of functions, but that kind of got stalled because most people wanted to finish up CloudEvents first. We'll return back to that and see if that really is what we want to work on going forward. Okay, so before we jump to the next slide, that kind of ends the CloudEvents-specific bits: do people have any questions?
E
Yeah, so for the list I have up there, I believe those folks have it as part of their product in some way, and so I assume they must have end users that are using it. But if you just go to the next slide (and thank you, Sarah, for being a straight man and teeing up the next topic I want to talk about), it is: when we go to incubation, or we consider things for incubation, in terms of the TOC requirements,
E
It's
very
easy
to
know
the
one
end
user,
as
I
am
Oh
most
cases,
I
think,
but
when
you're
a
spec,
what
is
an
end
user
and
there's
so
down
the
bottom
here
I
say
you
know,
is
an
end
user,
a
product
using
your
spec
or
is
it
end
user,
a
user
of
the
product
using
your
spec
and
so
I
kind
of
wanted
to
tea?
This
up?
B
It has been lingering, for sure, the question of what the different graduation criteria mean, or what our graduation and incubation criteria mean, for specs. It's come up for TUF as well, so it's definitely something we need to talk about; hoping that we can cover that off, maybe at the normal TOC meeting. I think we should add it to the agenda, yeah.
G
Plus one to that, and there's also this ongoing issue; I know you commented on that, Doug, and Justin as well, and I commented on that too. So maybe you can ping the TOC members to add their opinions. On the end-user point: I think before we answer the end-user question, the TOC needs to decide, is that really the right question to be asking of the spec projects anyway? What are we actually looking for for incubation status? I listed a few points in that issue.
H
So I want to give a shout-out to everybody who's working on this project. This presentation is for our incubating review, and for those of you who aren't familiar with TiKV, I want to give a quick project overview and history of what we've done. TiKV is an open source, distributed, transactional key-value storage layer. The "Ti" stands for titanium, the chemical element; "KV", of course, stands for key-value. We got our inspiration originally from the designs of the Google Spanner project as well as HBase.
H
Some of the core features: TiKV is a strongly consistent database, so not an eventually consistent NoSQL database like some of the other ones that are out there in the market. It supports distributed transactions. It has a coprocessor framework that supports distributed computing, which is actually a very powerful mechanism that's being used quite a bit to essentially leverage parallel processing for very complex data queries. It is natively horizontally scalable, and also replicated across geographies using the Raft consensus protocol.
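To make "distributed transactions over a key-value store" concrete, here is a deliberately simplified sketch of the access pattern; "KVClient" and its methods are invented for illustration and are not TiKV's actual client API:

```python
# A hypothetical, single-process stand-in for a transactional KV client.
class KVClient:
    def __init__(self):
        self.store = {}

    def begin(self):
        return Txn(self.store)

class Txn:
    def __init__(self, store):
        self.store, self.writes = store, {}

    def get(self, key):
        # Reads see this transaction's own uncommitted writes first.
        return self.writes.get(key, self.store.get(key))

    def put(self, key, value):
        self.writes[key] = value

    def commit(self):
        # All-or-nothing apply; a real system would run a two-phase
        # commit across the regions (and machines) the keys live on.
        self.store.update(self.writes)

client = KVClient()
txn = client.begin()
txn.put(b"order:1", b"pending")
txn.put(b"order:2", b"shipped")
txn.commit()
print(client.begin().get(b"order:1"))  # b'pending'
```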
And a quick historical tidbit:
H
we actually leveraged quite a bit of the etcd Raft implementation when we first started building TiKV around three years ago, and now etcd is of course part of CNCF as well. We became a sandbox project last September; our sponsors were Bryan Cantrill and Ben Hindman at the time. And the reason, the goal, and the vision for submitting TiKV to become part of CNCF was really to provide this persistent, cloud native database layer.
H
What you see here is a brief architectural overview of TiKV itself. The boxes that are labeled TiKV, you can imagine them as either physical machines, or disks, or VMs, and of course containers. These are individual TiKV instances that can be scaled horizontally. The little region blocks in the middle of them:
H
Those
are
essentially
data
chunks
that
are
key
value
pairs
that
are
broken
up
into
smaller
chunks,
that
we
call
a
region
internally
in
our
documentation
and
each
region
is
replicated
using
the
wrapped
consensus
protocol
right
now,
the
just
so
you
know
the
default
size
of
these
region
is
96
megabytes,
but
you
can
configure
that.
However,
you
want,
depending
on
your
use
case,
underneath
each
of
the
boxes
is
a
little
leopard
which
also
signals
a
rocks
TV
instance.
So
what
we
leveraged
rocks
TV
quite
a
bit.
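A rough back-of-the-envelope sketch of the region idea described above; the key ranges, placement policy, and names here are simplified stand-ins for what TiKV's placement machinery actually does:

```python
# Toy model: a keyspace split into ~96 MB "regions", each replicated
# via Raft onto three different stores (machines/VMs/containers).
REGION_SPLIT_BYTES = 96 * 1024 * 1024  # TiKV's default region size
REPLICAS = 3

stores = ["store-1", "store-2", "store-3", "store-4"]

# Regions are contiguous key ranges: [start_key, end_key); "" means open-ended.
regions = [
    {"id": 1, "range": ("", "k")},
    {"id": 2, "range": ("k", "")},
]

def placement(region_id):
    # Naive round-robin placement; the real system balances by load and size.
    return [stores[(region_id + i) % len(stores)] for i in range(REPLICAS)]

def region_for(key):
    # Find the region whose key range covers this key.
    for r in regions:
        start, end = r["range"]
        if key >= start and (end == "" or key < end):
            return r
    raise KeyError(key)

r = region_for("kubernetes")
print(f"key 'kubernetes' lives in region {r['id']}, replicas on {placement(r['id'])}")
```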
H
Each TiKV instance has a RocksDB instance underneath it, and RocksDB is a storage engine that was first created by Facebook (it actually leveraged LevelDB from Google originally) and is still maintained by Facebook. It's a very battle-tested storage engine that both TiKV and a lot of other database products use as their storage engine. There are a couple of cloud native projects that we use by default as well. We use gRPC as the communication layer between different components, and we use Prometheus as, you know,
H
Basically,
the
metrics
gather
of
everything
that's
going
on
in
ty
kV
and
we
built
ty
k
be
using
rust,
which
is
a
relatively
new
system,
language
that
is
gaining
a
lot
of
popularity,
a
lot
of
enthusiasm-
and
you
know
we
actually
started
using
rust
before
it
was
even
1.0,
which
was
an
interesting
bet
at
the
time,
but
we
definitely
benefit
a
lot
from
that
choice
and
have
gained
a
lot
both
working
with
rust,
the
language
and
also
with
rust
the
community.
So
that
was
a
very
unfortunate
choice
on
our
part.
H
The next slide here is some basic community stats that we've gathered since joining CNCF. As you look through them, a lot of the details are in the DevStats dashboard as well, but one thing I just want to mention and emphasize is the contributor count. This is the number that I got just this morning as I was preparing and polishing up this slide: we have about a hundred and fifty contributors, and just so everyone knows,
H
We
only
have
about
20
people
working
on
ty
kV
at
pink
cap,
which
is
the
company
that
originally
started
building
type
a
B.
So
the
contributor
I
think
community
is
very,
very
healthy,
very
diverse
and
there's
a
lot
of
other
people
that
are
really
invested
in
building
and
maintaining
and
improving
ty
kV
beyond
just
the
folks
that
are
affiliated
with
pink
half
the
company,
and
here
are
some
of
the
other
kind
of
more
community
building
progress
that
we've
made
since
joining
CN
CF.
We
have
formalized
a
governor's
document.
H
We
have
a
best
practice
batch
from
CII.
There
are
actually
a
lot
of
new
features
that
we
are
working
on
in
tkv
as
the
approaches
3.0,
which
is
the
big
next
big
version
that
we
are
working
towards,
but
two
of
them
that
I
will
mention
that
I
think
is
interesting
to
this
group.
One
is
batch
messaging
in
G
RPC.
H
We use a lot of gRPC in TiKV, and batch messaging is a new improvement that we're working on to really alleviate some of the performance bottlenecks as your system becomes more complicated and as there are more components being deployed in your cluster. When the receiver side is too busy, we would actually not stream all the messages individually, but batch them, depending on the busyness of the receiver. So that's one improvement.
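A small sketch of that kind of adaptive batching idea; this illustrates the general technique only, not TiKV's actual Rust/gRPC implementation, and the busyness signal is invented:

```python
import queue

class AdaptiveBatcher:
    """Accumulate messages and flush them in batches whose size grows
    with how busy the receiver currently is (fewer, bigger sends)."""

    def __init__(self, send, max_batch=64):
        self.send = send            # callable that ships one batch
        self.max_batch = max_batch
        self.pending = queue.SimpleQueue()

    def submit(self, msg):
        self.pending.put(msg)

    def flush(self, receiver_busyness):
        # busyness in [0, 1]: idle receiver -> tiny batches (low latency),
        # busy receiver -> big batches (less per-message overhead).
        batch_size = max(1, int(self.max_batch * receiver_busyness))
        batch = []
        while not self.pending.empty() and len(batch) < batch_size:
            batch.append(self.pending.get())
        if batch:
            self.send(batch)

batcher = AdaptiveBatcher(send=lambda b: print(f"sending {len(b)} messages"))
for i in range(10):
    batcher.submit(i)
batcher.flush(receiver_busyness=0.9)  # busy receiver: one large send
```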
Another one is multi-threading the Raftstore.
H
So as your system gets bigger, as you have a giant TiKV cluster, which is how a lot of our users are deploying TiKV, the Raft communication, you know, to ensure consistency and high availability, becomes a bottleneck as well. So we are providing a new improvement where we multi-thread a lot of the Raft communication to alleviate that performance bottleneck, and that has already seen quite a bit of performance improvement in our testing. We also have a new maintainer now; his name is Wang Xuan.
H
So he is now a new maintainer that we have exercised our new governance document to formally elect into the TiKV project. The last set of bullet points that I want to spend a little time on is this ecosystem that is already forming around TiKV, going back to the original vision we've had, to have this be a building block. Since joining CNCF, we have discovered so far three separate Redis-on-TiKV open source projects,
H
Folks
who
wanted
to
build
that
they're
called
Qaeda's
tighten
and
tighten,
is
my
best
interpretation
of
that
name,
and
there
is
another
prometheus
metrics
into
Thai
KB
project
as
well.
That's
been
fostered
in
the
open
source
community
called
type
Prometheus.
In
fact,
this
project
is
mentioned
in
the
I
think
the
official
Prometheus
documentation
as
one
of
the
options
for
remote,
endpoints
and
storage
for
both
read
and
write,
so
in
a
pretty
visible
way,
and
there's
more
that
we
can
do
here.
H
So this vision of becoming part of the cloud native landscape, to be a building block, is really becoming a reality in some ways. Next slide. In terms of adoption, we have a full list of adopters that is available for everyone to check out on a GitHub repo, but here are just some of the large ones that I will mention for the purposes of this presentation. One is UnionPay, which is, I think, probably one of the largest digital payments and credit card gateways out there.
H
For those of you who travel a lot, which I think is very much this group, it's a very well-traveled group, you've probably seen this logo in a lot of the fancy shopping stores in international airports as you do your transfer. Ping An, which is one of the largest internet, oh sorry, insurance and banking institutions in Asia, is also an adopter of TiKV; I believe Ping An Technology is a CNCF member as well. BookMyShow is probably in the top three most-transacted websites in India.
H
It's a very successful internet company over there, and the core use of it is to book movie tickets, Bollywood shows, and other entertainment in India; that's also a TiKV user. And Shopee is one of the most popular e-commerce platforms; they're based in Singapore, they're part of the Sea group, I think, and they have business throughout Southeast Asia, in pretty much all the countries in ASEAN as well.
H
So these are just some of the more notable adoptions I'd like to highlight, and the core use case here is still a combined use between TiKV and TiDB, which is a SQL processing layer that's also open source and speaks MySQL compatibility. With those two things together, you have this relational, distributed database that is horizontally scalable. Next slide.
H
But what I also want to note is that there are a lot of TiKV-only end users adopting this project as well, without really anything else that goes around with it. Some of them are JD Cloud, which is JD.com's public cloud platform; Ele.me, which is the Uber Eats of China, if you will, both in terms of its size and in terms of its business; and Zhuanzhuan, the little pic in the middle, which is a very popular
H
They
call
it
a
recom
erse
platform,
so
it's
kinda,
like
secondhand
commerce
platform.
You
know
people-to-people
similar
to
let
go,
which
is
an
elegance
company
that
I
think
is
based
in
europe,
but
relatively
popular
in
north
america
as
well.
So
tundra
is
a
version
of
that
and
may
to
which
is
on
the
bottom
right
corner,
that
is,
a
public
company
I
think
listed
in
Hong
Kong.
H
So we have done a lot of work to contextualize the labeling of a lot of the issues that we have in TiKV: things like "help wanted"; or "mentor", which means if you work on this issue, you actually have one of the core TiKV members mentor you throughout the process as you get through it; or "easy", if you want to pick up something relatively easy to chew on to get your mojo going, if you will. The screenshot I took here is an open issue
H
Still
that
has
some
of
these
labels
just
and
as
an
example.
So
it's
gonna
and
off
right
after
this
presentation,
I
will
not
be
against
it
and
the
reason
we're
doing
a
lot
of
this
is
that
because
we
use
rust,
you
know,
as
many
of
you
know,
is
still
a
relatively
new
language.
They're
learning
kind
of
curve
for
rust
is
relatively
high
and
being
one
of
the
largest
rust
production
projects
out
there.
H
We
want
to
do
a
weekend
to
lower
the
barrier
of
entry
to
that
as
well,
but
being
good
stewards
of
the
Thai
Cavey
community
and
to
bring
more
people
into
our
community
and
other
things
that
we
can
do
is
to
further
diversify
our
maintainer
ship.
Even
though
we
have
shell
Wong
from
Drew
comm
already,
they
are
more
that
we
can
do
to
bring
in
other
diverse
maintained,
errs
from
other
institutions,
as
well
as
having
a
much
clearer
roadmap
with
targeted
release,
dates
and
more
areas
that
we
can
do
to
improve
next
slide.
H
So this is just a list of resources that you can check out to further evaluate TiKV for the incubating status review and beyond, whether it's the website, Twitter, Slack, or Reddit. I've also linked the incubating PR that we filed, which, I'm pretty sure, now has a link to the technical due diligence document that the TOC has drafted with the help of the storage working group. So I really appreciate everyone who has already done so much work on due diligence for TiKV as we move forward with our incubating review.
H
The TiDB operator is open source, and that, of course, deploys both TiKV and other parts of TiDB. But, like I mentioned in this slide, we are working to either work with the community or get other people's help to develop an operator or Helm-chart-like deployment for TiKV itself. But yes, the current operator implementation is all open source, and you can find that fairly easily. Cool, thanks, yep.
I
But this approach of turning the orchestrator, if you will, into a storage layer itself ended up being really quite broadly adopted, and today it's actually a pretty big piece, as I understand it, of VMware's business. So at the highest level, you can think of what OpenEBS is doing as leveraging Kubernetes itself to deliver storage to Kubernetes.
I
You can learn more about us at these different resources. We have been running our own DevStats, as you can see; now that will, I think, be over on the CNCF side. There are quite a number of non-MayaData contributors, about 60 of those, to give you an idea, and we have about sixty engineers in the organization. And a quick comment, I would say: thank you to Alexis (as Alexis just popped up behind me), thank you to Alex from the CNCF, to SIG Storage, the SIG community, and a whole lot of others who have really helped us understand things a little bit better; Chris would be a notable one, who spent time with us to coach us, direct us, and sometimes correct us when we were over-enthusiastic early on. We've gotten a lot of mentorship from this community, so thank you. Next slide, please.
I
So what is it? As I kind of alluded to, and as the name suggests, it is block storage, and it does run within Kubernetes, and it does use what's there and available. What could that be? That could be an existing SAN, that could be cloud volumes, that could be disks or SSDs, etc., that are bare-metal connected, and it presents those up as block storage. There is, within the community, a slimmed-down NFS engine that is sometimes used on top, but we're best known, as it says, for replicated block storage, yeah.
I
We tend to call the category "container attached storage". I will say, at times I've had flashbacks to the early days of software-defined storage, where we started talking about software-defined storage and every hardware vendor in the world said, "that's us too." So, you know, cloud native is one of those terms that has been embraced by, I think, almost all vendors.
I
So we tried to define it a little more specifically as container attached storage. I, with some input from the community, have written a couple of blogs about that, and I would argue that Alex's company and some other companies fit into that space as well; it's not simply OpenEBS, and there are a number that take, at a high level, a very similar architectural approach. With that, I'd like to hand it over to Kiran to walk through the how and the architecture here.
K
I'll quickly skip these two slides that we have; this just goes on to repeat what has been said. We make use of Kubernetes as much as possible for orchestration. The whole functionality of the storage controllers is what's part of OpenEBS: what we call the storage controllers, or targets, are packaged and delivered as containers and orchestrated by Kubernetes itself. The functionality that's provided by these storage containers is typically what you get from enterprise storage, in terms of storage management, expansion, high availability, data protection, etc.
K
This blog that Evan submitted about container attached storage, I have provided the link here. So let's go to the next slide, please. Some of the examples for this kind of storage: it kind of falls into the hyper-converged storage topology from the white paper that the storage working group submitted. OpenEBS is one of them, and some of the other products are along the same lines.
K
The way this differentiates from Rook is that Rook is mostly an orchestration control brain that could actually operate any engine, whether it be Ceph, and OpenEBS also could be plugged into that one. Next slide, please. This one actually has a little bit of an animation; it gives an overview of how OpenEBS works. Yeah, please keep clicking through.
K
Each of those nodes can come with additional disks; this is what we call the storage attached to the nodes itself. As part of the OpenEBS installation, we have a component called Node Disk Manager that discovers all these disks and makes them available for operators or administrators to create what we call storage pools.
K
There are multiple pool types that we support; in this example I have taken a cStor pool. Yeah, just click to go ahead. So that really picks up some of these disks and creates a storage pool cluster, which basically runs a deployment with cStor pods on each of those nodes. Next. Then the administrator defines a storage class that allows the users to use these OpenEBS volumes, so the developer doesn't have to do anything specific to OpenEBS; they just have to use the storage class defined by the administrators.
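For a rough idea of what such an admin-defined storage class might look like, here is a sketch built with the official Kubernetes Python client, assuming it is installed; the OpenEBS annotation keys and values shown are an illustrative approximation rather than a verified manifest, so check the OpenEBS docs for the authoritative ones:

```python
from kubernetes import client, config

config.load_kube_config()

# A storage class that hands PVC provisioning to the OpenEBS provisioner.
# The annotation payload (cas-type, ReplicaCount) approximates OpenEBS's
# cStor configuration for illustration purposes.
sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(
        name="cstor-3-replicas",
        annotations={
            "openebs.io/cas-type": "cstor",
            "cas.openebs.io/config": '- name: ReplicaCount\n  value: "3"',
        },
    ),
    provisioner="openebs.io/provisioner-iscsi",
)
client.StorageV1Api().create_storage_class(sc)
```

A developer then only references the class name in their PVC; and, for the MongoDB-style case mentioned a little later, the admin would publish a similar class with a replica count of one.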
K
In this case, let's say that the application requires high availability. The storage class defines how many replicas you want, and they get provisioned with the anti-affinity feature of Kubernetes itself. Then, in this case, it exposes iSCSI, so it connects through to the application node via the Kubernetes iSCSI PV. Click, please, yep.
K
One of the cool things about this is that the cStor target itself is completely stateless, and that's the one that controls the replicas. So if node one goes down and the cStor target dies, the target pod just gets rescheduled onto another node, connects to the available replicas, and starts serving the data; and the cStor target does synchronous replication. Next one, please.
K
We have a chart available for OpenEBS under the stable charts; it's a single-click install, and after installing, the applications can just make use of the default storage classes that ship with OpenEBS, or administrators can define new storage classes and provide them to different types of applications. So one of the cool things about OpenEBS is also that it provides different configuration parameters.
K
For example, if you are running a MongoDB, which can do its own replication, the storage class can be defined to have a single replica, and the StatefulSet will make sure that each of the MongoDB replicas is on a different node. At the same time, OpenEBS will make sure the data that's coming to each of these MongoDB instances is coming from different nodes. What I show here on the right side is a diagram that's actually generated from the Weave Scope application.
K
Getting into a little bit of the high-level architecture here: in the PR I linked to a detailed architecture document, which we presented to the storage working group. At a high level, OpenEBS consists of two types of components. One is the cluster-level components, which are basically the operator, the provisioner or CSI controller, and the node device operator that makes sure that devices are not used by multiple components; it basically maintains the storage device inventory management. And similarly, there are node components as well.
K
For example, the cStor storage pool that we saw is one of the node components. In the next slide, I just have one example of how the cStor volume workflow works. An application PVC kind of triggers the creation of the cStor volume target; the multiple replicas connect to the cStor volume target via a service, which is a cluster IP, and even the application nodes connect to this target via the service IP.
K
So even if the cStor volume target moves to a different node, you don't have to change anything at the application layer to connect to this data. This gives you some fairly different, Kubernetes-native patterns: the volume target consists of the data processing piece, and there are sidecars attached to it that help with managing these volumes, or even exporting the Prometheus metrics, etc.
K
On the immediate roadmap, it's mostly day-2 operations that we want to focus on, and some performance improvements as well; we are getting a lot of requests in terms of supporting those use cases, support for that kind of stuff. The details are in a lot of backlog items, and we maintain the milestones in terms of grooming them and all that. We typically use Travis itself as the CI platform today; it does the testing, and MayaData has set up openebs.ci, similar to the CNCF CI, which helps with testing across multiple platforms as well.
K
All right, how are we aligned with the cloud native community? It's tightly integrated with Kubernetes and follows the principles in terms of being open source and open architecture. It's scalable, horizontally scalable with the nodes: the storage can be expanded to all the nodes that are in the cluster, or it can be dedicated to some of the nodes. It does well in terms of integrating with other CNCF projects, like Prometheus and etcd that I mentioned here, but also the other projects that are in the sandbox right now.
K
Why do we want to submit to CNCF? There's been a lot of interest from both collaborators, who want to help with some of the component projects in OpenEBS, for example the NDM itself, as well as people who build solutions using Kubernetes and all the other components and want to provide a complete stack using the cloud native projects, and they would feel more comfortable innovating on OpenEBS if it's in a vendor-neutral organization. That's the primary motivation.
K
So far, I would say, just from having proposed here, we've been getting a lot of good feedback that has helped us to fix the governance part, for example, and enforcing license checks, etc. We're looking forward to more improvements in terms of governance. Any questions that I can take at this point?
K
So I think the concerns that have been there were mostly in terms of performance; people would ask how we do day-2 operations, how we do performance, that kind of stuff. And early on, when we were using the early versions of Kubernetes, since we depended on Kubernetes to reschedule the pods, we had to tune the tolerations, etc. Without that, it would take more than 30 seconds, or often even longer, to reschedule, which would mark OpenEBS volumes read-only.
K
We actually have architected OpenEBS to work with multiple engines, so it currently works with Jiva, cStor, as well as Local PVs. For example, NDM has the capabilities to manage the disks that are attached, so it kind of augments the capabilities provided by the Kubernetes Local PV itself, in terms of data operations or management of scheduling applications that are running on Local PVs. So cStor is one of the components, one of the data engines, that we support.
D
You know, it's sandbox, and I think they've done enough for sandbox. I mean, I'll go on the record: I'm really worried about any storage project. You know, though Bryan has left us and moved on to non-TOC projects, he's left us a long shadow of doubt on all things storage, and we just need to be super careful not to mislead users; there are just so many things that can go wrong.
D
I
think
we
just
need
to
have
a
slightly
higher
standards
for
anyone
anything
to
do
with
storage,
and
this
came
up
again
rook
and
we
just
need
to
be
very
careful,
so
I
really
really
like
it.
If
a
few
more
eyeballs
came
on
to
this
I
feel
much
more
comfortable,
I.
Think
from
from
all
of
that,
all
of
the
other
stuff
that
I've
seen
it's
it's
done
well
and
I
got
involved
because
we'd
collaborated
with
the
team
successfully
I'm
very
happy
with
how
that
went,
but
it
would
be
great
to
get
a
few
more
people.
L
It's primarily different than Rook in that it's not necessarily just a provisioner; Rook was a facilitator, whereas this is more of a storage technology. So I would agree: I'd like to have a little more diversity in terms of which ones are actually providing storage, rather than Rook, which is just a high-level provisioner. Because to me it feels very much just like any other block storage; maybe I'm missing something there, but there are several in the Kubernetes landscape already.