From YouTube: CNCF TOC Meeting - 2018-07-03
A: Again, we have proposals, we have sponsor requests, and we have a backlog. Today we'll be hearing from TiKV, but as somebody associated with [inaudible] I feel no shame in requesting a second sponsor. I believe that Ken has offered to be a sponsor for the project, having spoken to the team. I don't know if Brian was able to speak to them, but I think you may be able to act on their behalf; otherwise, tell them to go find someone else. Hey.
D: So I guess I'll share my screen, launch the slides for everyone, and go for it. Yeah, make sure everyone can see my shared screen. All right, hopefully everyone can see. This slide looks awesome. Okay, great! So, once again, my name is Kevin. I am from the company PingCAP; I'm their general manager here in North America, and I, along with our co-founder and CTO Ed Huang, will be presenting TiKV to everyone. Thank you again for the opportunity.
If we have time, I will also do a quick demo on my laptop to give you a little bit of a feel for how to spin up a TiKV cluster on your laptop, and when we have time I'm happy to take any questions from everyone on the call. So, a quick history about PingCAP: it was founded in April of 2015 by three infrastructure engineers who were working at some of the largest Internet companies in China, like NetEase and JD.com, and Ed was, of course, one of them. We set out to build the TiDB platform; "Ti", for your curiosity, just stands for titanium.
There are several components to the TiDB platform. One is TiDB itself, which is actually a stateless SQL layer that is MySQL compatible. The focus of today's presentation is TiKV, which is a distributed, transactional key-value storage layer. We also built something called TiSpark recently.
TiSpark is a Spark plugin that also talks directly to TiKV, to help a lot of our users process more complex analytical queries. Last but not least, we also have a project called Placement Driver, which is a cluster that serves as the metadata storage layer and communicates with TiKV to do scheduling, auto-balancing, and also timestamp allocation for TiKV. The project was open sourced a little over two years ago.
That was in April 2016. Its current version is 2.0, its license is Apache 2.0, and here is the link for you to check out the repo. In terms of community progress: TiDB as a whole is actually probably one of the most popular, active open-source database projects out there. It has more than 13,500 stars; TiKV itself has more than 3,300 stars, with 70-plus contributors and roughly around 3,000 commits right now.
A lot of what drew people's interest was, of course, the Google Spanner project. That is also where we got our original inspiration for TiKV, but unfortunately Spanner isn't open source and isn't so accessible, and our vision for TiKV is to be a building block for other cool, amazing, powerful systems to be built on top of it. So far we have built TiDB and TiSpark ourselves.
Here is a layout of the TiKV architecture. TiKV, the component, uses gRPC to communicate with the Placement Driver, as well as with any clients that can be built on top of it. It exposes two kinds of APIs: one is a raw key-value API; the other is a coprocessor API that facilitates pushdown computation. It uses the Raft consensus protocol to provide data replication and high availability, and you can essentially imagine each TiKV instance as one single machine.
Underneath each TiKV instance we also have a RocksDB instance, where we leverage that community's work as our storage engine for TiKV. Here are some of the technical highlights of TiKV. As I mentioned, it does scheduling and auto-balancing.
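To make the two API styles concrete, here is a minimal, self-contained sketch. This is not the real TiKV client API; all names are invented. A raw key-value call returns a single value, while a coprocessor-style call ships the computation to the node so only a small result crosses the network.

```python
# Toy model of the two TiKV API styles (not the real client API).
# Raw KV: simple get/put. Coprocessor: the client pushes a computation
# to the node so only the (small) result is returned.

class ToyNode:
    def __init__(self):
        self.store = {}  # the node's local key-value data

    # --- raw key-value API ---
    def raw_put(self, key, value):
        self.store[key] = value

    def raw_get(self, key):
        return self.store.get(key)

    # --- coprocessor-style API: push the computation to the data ---
    def coprocess(self, predicate, reducer):
        matching = [v for v in self.store.values() if predicate(v)]
        return reducer(matching)

node = ToyNode()
for i in range(10):
    node.raw_put(f"k{i}", i)

assert node.raw_get("k3") == 3
# Sum of even values is computed inside the node; only the total comes back.
total = node.coprocess(lambda v: v % 2 == 0, sum)
assert total == 20
```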
We also have a multi-Raft implementation, because each TiKV node hosts several, actually oftentimes many, different Raft groups that are replicated across different TiKV nodes. So each TiKV node has multiple Raft groups, and it has to facilitate the communication between the different TiKV nodes.
We also have a dynamic, range-based partitioning feature that allows these Raft groups to be split or merged, or their leaders to be automatically transferred, in order to remove and resolve hotspots.
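The range-based splitting just described can be sketched in a few lines. The sizes and thresholds here are invented for illustration (TiKV itself splits regions by byte size, not key count): a region owns a contiguous key range, and once it grows past a limit it is split so load can be spread across nodes.

```python
# Illustrative sketch of range-based region splitting (limits invented).
SPLIT_LIMIT = 4  # max keys per region in this toy model

def split_regions(keys, limit=SPLIT_LIMIT):
    """Partition a sorted list of keys into contiguous regions of <= limit keys."""
    keys = sorted(keys)
    return [keys[i:i + limit] for i in range(0, len(keys), limit)]

regions = split_regions([f"user{i:03d}" for i in range(10)])
assert len(regions) == 3
assert regions[0][0] == "user000" and regions[0][-1] == "user003"
# Region boundaries stay contiguous: each region starts where the last ended.
assert regions[1][0] == "user004"
```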
The way we implement transactions is through two-phase commit with optimistic locking, and TiKV is written entirely in Rust, which is a relatively new systems-level language that is getting a lot of traction and adoption. The nice thing about Rust, as many of you may know, is that it has no GC pause time and not much runtime cost. In fact, I think TiKV is one of the largest Rust-in-production projects out there, aside from, of course, Firefox.
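The two-phase commit with optimistic locking mentioned above can be sketched as follows. This is a toy model with invented names and an invented version scheme, not TiKV's actual protocol, which is considerably more involved.

```python
# Toy two-phase commit with optimistic locking (names invented).
class Store:
    def __init__(self):
        self.data = {}     # key -> (value, version)
        self.locks = {}    # key -> transaction id

    def read(self, key):
        return self.data.get(key, (None, 0))

class Txn:
    def __init__(self, store, txn_id):
        self.store, self.txn_id = store, txn_id
        self.snapshot = {}   # key -> version observed at read time
        self.writes = {}     # buffered writes

    def get(self, key):
        value, version = self.store.read(key)
        self.snapshot[key] = version
        return value

    def put(self, key, value):
        self.writes[key] = value

    def commit(self):
        locked = []
        # Phase 1: prewrite -- lock each key and run the optimistic check.
        for key in self.writes:
            if key in self.store.locks or \
               self.store.read(key)[1] != self.snapshot.get(key, 0):
                for k in locked:              # roll back our own locks
                    del self.store.locks[k]
                return False
            self.store.locks[key] = self.txn_id
            locked.append(key)
        # Phase 2: commit -- write new versions and release the locks.
        for key, value in self.writes.items():
            _, version = self.store.read(key)
            self.store.data[key] = (value, version + 1)
            del self.store.locks[key]
        return True

store = Store()
t1, t2 = Txn(store, 1), Txn(store, 2)
t1.get("balance"); t1.put("balance", 100)
t2.get("balance"); t2.put("balance", 200)
assert t1.commit() is True      # first writer wins
assert t2.commit() is False     # second writer fails the version check
assert store.read("balance") == (100, 1)
```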
Here is one example of how SQL can be realized on top of TiKV, using TiDB, which is what we built internally, along with the community. The way it works is that TiDB actually has several layers that we built ourselves: a MySQL-compatible layer, a parser, a cost-based optimizer, and a coprocessor-based distributed execution layer.
Here is a visual representation of how the coprocessor works. What essentially happens is that when TiDB receives a SQL query, the query goes through the parser and is broken down into different physical plans, and the partial aspects of the plan are pushed down into multiple TiKV nodes simultaneously. All of that computation is actually done inside the TiKV nodes, at the same time, to compute partial results for a particular query. These partial results are then returned back to TiDB.
TiDB then does the final reassembly of all the partial results, which can be sent back to the client. This is an implementation that we have worked on a lot, to be able to take advantage of the distributed nature of TiKV and all the computing power it has access to, in order to speed up more and more queries. In one of our future roadmap plans, we actually plan to support more built-in functions that we can push down into TiKV nodes as well.
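That pushdown flow can be sketched roughly as follows, with invented names rather than TiDB's real planner: each node computes a partial result over its own shard, and the SQL layer reassembles the partials into the final answer.

```python
# Sketch of pushdown computation: partial results per node, final
# reassembly at the SQL layer (all names invented for illustration).

# Three "TiKV nodes", each holding a shard of a table's rows.
nodes = [
    [("apple", 3), ("pear", 5)],
    [("apple", 7)],
    [("pear", 1), ("apple", 2)],
]

def partial_sum(rows, key):
    """Runs *inside* a node: returns (sum, count) for rows matching key."""
    vals = [v for k, v in rows if k == key]
    return (sum(vals), len(vals))

# The SQL layer pushes the partial plan to every node...
partials = [partial_sum(shard, "apple") for shard in nodes]
# ...then reassembles the partial results into the final answer.
total = sum(s for s, _ in partials)
count = sum(c for _, c in partials)
assert (total, count) == (12, 3)
assert total / count == 4.0   # e.g. a pushed-down AVG over "apple"
```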
Here's another example of how TiKV is being used, which I alluded to a little earlier: how [inaudible].com uses TiKV. They have their own S3 implementation; they have a bunch of S3 buckets with a lot of blob storage, and they are also using TiKV as their metadata storage right now in their production environment.
And here is the latest benchmark, the YCSB benchmark, that we did just last month. Here is the environment and the hardware that we used for this benchmark, and you can see the insert OPS results as well as the read QPS results. One thing to note is that this is a standard, default, three-TiKV-node deployment, and of course in an actual production environment most of our users deploy way more than three TiKV nodes, to store more data and increase their capacity.
So this result would be much better, and I think the throughput would be much higher, in a production environment, but this is the benchmark that we did last month for TiKV. Here is a quick overview comparison between TiKV and some of the other popular NoSQL databases out there. Of course, every single database tries to solve different problems in different ways using different technology.
D
So
not
everything
can
be
compared
in
a
completely
Apple
to
Apple
sort
of
a
way,
but
tie
kV
is
original
and
still
the
current
goal
is
to
first
and
foremost,
support
distributed
transaction.
That
has
strong
consistency,
and
that
is
the
sort
of
that.
The
first
goal
and
the
first
level
priority
that
high
kV
looks
to
support
which
is
different
from
some
of
the
other
know.
Sequel
databases
out
there
here
is
a
visual
overview
of
one
of
the
features
that
I
mentioned
before,
which
is
dynamic.
Splitting
and
merging.
Suppose one Raft leader is serving most of the traffic while the followers are not doing a whole lot, and this is starting to form a hotspot. Then the system will facilitate an automatic Raft leader transfer, where we do a logical leader transfer, here in region B, to move the leader from the first machine to the second machine. There's no actual data movement here; it's just a transfer of leadership within the Raft consensus protocol.
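A tiny model of that point, with invented names: transferring the Raft leader of a region changes only metadata, and the replicated data stays exactly where it is.

```python
# Toy illustration of leader transfer: a metadata-only change.
class Region:
    def __init__(self, name, replicas, leader):
        self.name = name
        self.replicas = replicas   # node -> copy of the region's data
        self.leader = leader       # which node currently leads

    def transfer_leader(self, new_leader):
        assert new_leader in self.replicas
        self.leader = new_leader   # metadata change only; no data moves

data = {"k1": "v1", "k2": "v2"}
region_b = Region("B", {"node1": dict(data), "node2": dict(data)}, leader="node1")

before = {n: dict(d) for n, d in region_b.replicas.items()}
region_b.transfer_leader("node2")

assert region_b.leader == "node2"
assert region_b.replicas == before   # no data moved between machines
```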
Currently, TiKV is integrated with Tencent Cloud and UCloud, and most recently we also got onto JD.com's cloud solution, and of course in the future we look to integrate with all the major cloud vendors all over the world. As far as cloud-native synergy with other components is concerned, currently we have a Docker Compose deployment for testing and development on a local machine, which will be part of my demo.
Our team is actually one of the largest maintainers of the Rust implementations of both Prometheus and gRPC, and we also use a lot of etcd and are active contributors to etcd, because we have really been leveraging etcd since day one, when we started building TiKV: it had a very mature Raft implementation and also a very rigorous testing regimen that we really leveraged. We didn't fork it outright, because we wrote TiKV in Rust, so we kind of have our own Rust implementation of etcd's Raft.
A lot of our users are using TiKV in combination with other components like TiDB and TiSpark, but quite a few companies are using TiKV by itself, and one of those companies I want to talk about is Ele.me, which, like I mentioned, is a food delivery platform with 260 million users. So it's bigger than a lot of the perhaps more well-known food delivery platforms that we hear about here in North America and Europe, all combined. It was recently acquired by Alibaba for 9.5 billion U.S. dollars, and the problem, the pain point, that they were facing involved their data centers.
What's really interesting is that Ele.me built their own Redis layer on top of TiKV, because they wanted to continue using Redis; a lot of their application developers love using Redis. So that's what they did to make TiKV work for them, and if you're interested in digging deeper into how they used TiKV, we recently published a use-case story, written by Ele.me's engineers, that you can look at via this link.
You can see the TiKV nodes right here, and what I will do now is spin up a MySQL instance as well as a Spark cluster, so that you can see how TiKV can be the underlying storage layer that facilitates both components talking to each other and reading from the same data source. But before I do that, I want to show you, real quick, the monitoring mechanisms. Each of these deployments has a Grafana implementation, defaulting to port 3000, and you can log in using just admin/admin; again, this is just for testing and development purposes.
D
You
can
monitor
your
entire
clusters.
You
know
metrics
and
current
status.
If
you
go
into
tidy,
be
clustered
ikb,
you
can
look
at
the
store
size,
the
available
side,
and
things
like
that.
So
there's
a
bunch
of
stuff
that
you
can
play
with
inside
the
Garifuna
implementation
and
one
more
tool
which
we
build
in-house
is
something
called
pidb
vision,
and
this
is
defaulted
to
port
80
10
and
here
you
have
a
cool
little
data,
visualization
tool
that,
as
is
ring,
and
each
of
the
partial
ring,
is
basically
one
Tyco
AV
node.
D
If
you
look
a
little
bit
deeper
in
there,
you
see
a
bunch
of
empty
blocks.
These
are
just
empty
storage
spaces,
the
dark,
green,
our
wrath
leaders
and
the
dark
gray
are
wrapped
followers,
and
you
can
essentially
visualize
a
raft
as
it
goes
through
the
entire
tidy
beat
or
tie
KB
deployment.
So
this
is
how
that
works
now
back
to
terminal
the
demo.
What
I
would
do
is
launch
a
my
sequel
instance.
D
So,
in
the
interest
of
time,
I
will
just
do
a
lot
of
copy
and
pasting
of
commands
so
launch
my
sequel
and,
as
you
can
see,
this
is
kaity
be
compatible
with
my
sequel
and
I
will
also
launch
a
spark
instance.
This
will
take
a
little
while,
so
it
will
let
this
run.
Let's
go
back
to
my
sequel
and
I'll.
Show
you
what
is
in
here
so
this.
D
So
we
have
a
few
databases
and
we'll
actually
use
this
one
called
TPC
h20
one
for
the
demo.
It
just
has
a
bunch
of
sample
data
in
there,
so
TVs
h-01
and
let's
see
what's
in
this
database,
so
it
just
has
a
bunch
of
different
tables
in
here.
One
of
them
is
carnation
orders
things
like
that.
So,
let's
see
what
is
in
nation
all
right,
just
what's
up
countries
with
some
random
information
in
here
and
right
now
we
have
our
spark
plug
and
ready.
So
I
am
going
to
input.
D
A
couple
of
commands
to
launch
high
spark
which,
like
I
mentioned,
is
a
spark
plug
in
that
works
directly
on
top
of
pi
kb
as
well.
So
these
are
the
two
standard.
Ty
spark
commands,
and
last
one
we
will
hide
this
instance
to
the
same
database
called
capezios
or
one,
so
they
should
be
talking
to
the
same
data
source
and,
let's
just
see
if
that
is
the
case,
we'll
use
sequel
against
Park
sequel,
select
from
nation.
So let's insert Belgium into this table, and you see that we have Belgium at the bottom right here, a new member of this country list. If we run the same command, you immediately see the change being made and visible on the TiSpark side as well. So you can easily imagine multiple updates and changes being made on the MySQL side while the TiSpark side can immediately run queries and analytical processing on the Spark side, all being supported and stored inside TiKV.
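The demo's takeaway can be modeled in a few lines (all names invented): two different frontends share one storage layer, so a write through one is immediately visible through the other, with no copying or syncing between them.

```python
# Toy model of two frontends sharing one storage layer.
storage = []   # stands in for the shared, TiKV-backed table

class SqlFrontend:
    """Plays the role of the MySQL-compatible writer in the demo."""
    def insert(self, row):
        storage.append(row)
    def select_all(self):
        return list(storage)

class SparkFrontend:
    """Plays the role of the Spark reader; it reads the same data source."""
    def select_all(self):
        return list(storage)

sql, spark = SqlFrontend(), SparkFrontend()
sql.insert({"name": "FRANCE"})
sql.insert({"name": "BELGIUM"})

# The change made on the SQL side is visible on the Spark side immediately.
assert spark.select_all() == sql.select_all()
assert spark.select_all()[-1]["name"] == "BELGIUM"
```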
Some things really are, you know, beyond the strength of the current community right now. We would love to see more language support; right now we only have a Go client for TiDB and a Java client for TiSpark. One of our community members has already started building an open-source Redis proxy, which he called Tidis, so you can check out his repo here, but of course it's still very much a work in progress.
D
We
wanted
to
support
column,
family
structure
as
well,
so
there's
a
lot
of
things
that
we
will
love
tidy
tight,
K
beads
you
have,
and
with
cnc
of
support,
I'm
sure
we'll
be
able
to
accomplish
that.
So
again,
thank
you
for
your
time,
we'll
love
for
you
to
be
our
TOC,
sponsor
and,
of
course,
reach
out
to
me
and
EDD
anytime.
If
you
have
any
questions
and
we
will
actually
be
praying
for
the
technical
proposals
right
now
and
we
will
share
that
with
everyone,
hopefully
within
the
next
week.
So
that's
about
my
presentation,
I.
E
D
I mean, I think I'm sort of leaving it up to the TOC to tell us what you think would be the best, I guess, level of entry for the current status of the project. You know, given the amount of adoption that we've seen so far for TiKV, I think it probably would work for incubation. But then again, you know, I'm not too familiar with the different kinds of criteria and what goes into these considerations, so we are being open and receptive to your opinions about which level is most appropriate.
D: That's definitely something that we're looking to do, and I guess we just haven't quite gotten around to it. As Stephanie says, it requires some resources from our side as well, but we would definitely love to, you know, go through that process. Our own process probably gets us somewhere along that way, but I think having him do it with us will definitely help, yeah.
B: I mean, I would really encourage you to do that. I know it's time-consuming and it's expensive, both in terms of resources and potentially monetarily, but I do think it's really worth doing, because, as you know, Jepsen has become the gold standard for actually allowing people to understand what the true consistency guarantees of these distributed projects are.
B: This is really interesting. I mean, we've got all the same problems that you're seeing, and the GC pauses are just a deal-breaker for me. Obviously maybe not in this forum, but, as an aside, I'm very curious about your experiences with Rust; we're having a lot of really good experiences with Rust, and it's looking like a very interesting trajectory, so I'd love to get your take on that as well. But I would be happy to help facilitate you however I can. So, okay.
D: So, right now, of course, given that TiKV hasn't been used so much on its own, the current usage is very much connected to TiDB and, you know, TiSpark, depending on what the user is after, but it can easily be adapted, like what the Tidis folks are doing with the Redis proxy, to be used as a building block for really whatever you see fit right on top of it.
D: In terms of that: the data that's coming in is encoded from the relational side into the KV store and then broken up into different regions, mostly by their size. So, kind of like in this situation, you can see different tables going into key-value pairs and then broken down into different regions, and then these regions, I guess if they get too big, could be split up, just for, you know, performance purposes or hotspot-removal reasons. Okay.
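A hedged sketch of that encoding (the key format below is invented and is not TiDB's real row codec): rows become ordered key-value pairs whose keys keep each table's rows contiguous, and the sorted pairs are then chunked into regions by size.

```python
# Sketch of relational-to-KV encoding plus size-based region chunking.
def encode_row(table_id, row_id, row):
    # Keys sort by (table, row id), keeping a table's rows contiguous.
    return (f"t{table_id:04d}_r{row_id:010d}", repr(row))

def to_regions(kv_pairs, max_per_region):
    kv_pairs = sorted(kv_pairs)
    return [kv_pairs[i:i + max_per_region]
            for i in range(0, len(kv_pairs), max_per_region)]

rows = [encode_row(1, i, {"id": i, "name": f"country{i}"}) for i in range(5)]
rows += [encode_row(2, i, {"id": i, "qty": i * 10}) for i in range(3)]

regions = to_regions(rows, max_per_region=3)
assert len(regions) == 3                      # 8 pairs -> regions of 3, 3, 2
assert regions[0][0][0] == "t0001_r0000000000"
# Table 1's keys all sort before table 2's, so tables stay contiguous.
assert all(k.startswith("t0001") for k, _ in regions[0])
```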
D: Gotcha. Yeah, I mean, in terms of region splitting, I think the main consideration is the size of the region, but also the amount of traffic that the region gets. Assuming that a region holds the leader replica for that particular Raft group, it could get split up onto a different machine to remove a hotspot. That would be, I think, one scenario where that would get automatically facilitated.
D: First question first, so, for etcd: like I mentioned, we have been leveraging etcd since day one because of its Raft implementation, and also, you know, we really leveraged its testing rigor, using it to test our own TiKV system. We actually use etcd embedded in our Placement Driver implementation; the Placement Driver cluster uses it directly. But for TiKV itself, because we use Raft and we used Rust to code it,
D
We
didn't
form
a
PCB
completely
but
kind
of
made
our
own
rust
implementation
of
etcd
in
ty
kv,
so
that
is
kind
of
the
past
and
the
current
and
for
the
future.
We
are
very
involved
in
the
different
kind
of
EEP
PCD
kind
of
roadmap.
That's
going
forward.
One
of
the
things
that
I
mentioned
is
the
raffle
earner
feature
that
we
are
really
looking
to
implement
for
the
next
evolution
of
Thai
KB,
because
one
of
the
I
guess
drawbacks,
and
this
actually
goes
into.
D
Your
second
question
is
because
Thai
KB
is,
you
know
by
nature
a
key
value
store.
It
doesn't
quite
support,
complex
analytical
queries
and
the
speed
and
the
performance
that
say,
HBase
potentially
would
or
any
other
column
family
database
would,
and
that
is
sort
of
the
inherent
structural
limitation
that
high
KB
has
in
that
implementation.
So
there
is
a
limit,
for
example,
to
how
fast
our
tight
SPARC
implementation
could
really
go.
TiSpark sits on top of what TiKV is now, and you can see that as being one of the, not so much complaints per se, but at least considerations or limitations that people weigh when they use TiKV for analytical processing. But with the Raft learner feature becoming more mature and being implemented, we can see that as a really good solution for supporting faster analytical queries processed on top of TiKV.
A: Just one request: if you go ahead and do the incubation route, probably start sounding out potential interviewees in your production user base, because I think the more we hear from them, the better it is for streamlining the due-diligence process. Okay, I have got to jump. Chris, could you shepherd the rest of the call, please? I'm really sorry. Yeah.
C: All right, thank you. Cool! Thank you, everyone. So, not too many updates, you know. Just go to slide 37; it's a pointer to the working groups. 38 is the project review; 39 is just a reminder that we have a few events upcoming for this year. At least we have Shanghai and Seattle; if you are interested in submitting a talk to Shanghai, the CFP closes at the end of this week. I think it's on the 7th.
G: Just the last big shout-out: please consider submitting a talk, or multiple talks, for KubeCon Shanghai, which is November 14th and 15th. This week is your last chance to do it; you can write it out as you're looking at the fireworks tomorrow, if you're in the US. And please encourage the folks in your organization to submit as well. Thanks.