Description
This session will feature the etcd maintainers, who will be available to answer questions about etcd.
Panel Members:
Sam Batschelet, Red Hat
Joe Betz, Google
Xiang Li, Alibaba
Brandon Philips, Red Hat
Gyuho Lee, Amazon
A
I think we're all set, so hello. I'm Chris Short, I work for Red Hat, I'm a CNCF ambassador, and I'm just facilitating. So if you need me to run mics, take notes, whatever you need me to do, I'm here for that. Any questions or concerns before we get started? I will stop everyone when there's, like, ten minutes left, and say, hey, there's two minutes left, has everyone that needed to say something said something? So if you want to wait till the end, that's fine, but please feel free to speak up.
B
C
I have recently reported a bug in the etcd client on GitHub, and I was surprised to find that there is actually a lack of unit testing for the client in the etcd code base. I think it's probably covered in the Kubernetes main repo, but I was just surprised that I haven't found many tests for the watch functionality in the client, for example. So in general, I guess, the question is: how do you balance between testing in Kubernetes versus covering it on the etcd side?
D
In terms of testing, we have a bunch of integration tests inside etcd; it's called the clientv3 integration test. For that integration test we just spin up a temporary etcd cluster locally, and then we run the correctness tests and the network partition tests. So we simulate a total partition inside a local cluster. Other than that, everything is inside, between etcd components; we don't do anything between etcd and the kube API server.
E
Just to add one more thing: from the etcd client side it's actually pretty simple, because it's using gRPC as the communication layer and the protocol layer. So we don't really try to test gRPC; we just try to test the layer above gRPC. We just try to make sure that the etcd functionality works, and we do test the rebalancing part, which Gyuho mentioned, like when a node goes down, and the part where you add a node back.
C
With gRPC there are actually lots of goroutines running in that part of the etcd client, and so the specific issue was that if you do something like a certificate change, like remove the base certificate, for example, the watch connection still keeps running, but you stop getting any events on the client side, I think. And yeah, I think it was probably something to do with, I don't know, needing to restart the goroutine rather than keeping the old one running, or something like that. Yeah.
E
B
It's worth pointing out there's a couple different sections of tests. There are always the unit tests, which are easy to find, but then there's the clientv3 integration tests, plus the full system integration, plus the functional testing. So, you know, everyone's like, well, I have to go look around a couple places before I'm sure that I've actually determined if a test is missing or not.
F
I was talking to Stefan this morning, and he actually asked me something about this. I think it used to be that for compaction, etcd in Kubernetes doesn't do compaction on its own; it's scheduled by the API server, I'm assuming. He said from etcd 3.2 to 3.3 there were some changes; I didn't follow it for quite a while. Is compaction moving to the etcd side, or are you aware of it?
B
So there were a couple additions in 3.2 and 3.3. The automatic compaction that you can do on the etcd side was improved and made more flexible, but right now the default is still that you run etcd without that turned on, which is the default for etcd, and then the kube API server requests a compaction every 5 minutes.
B
Can you hear me now? Okay, I'll talk louder. Yeah, do I need to repeat that? In etcd 3.2 and 3.3 the compaction was made more flexible, so you can configure how it's automatically run, but by default with Kubernetes, the kube API server still requests it once every five minutes.
E
We hope that the application itself can drive the compaction, because the application knows, okay, I don't need the previous versions anymore, right? Etcd itself doesn't really have this information, so it's better if you can always drive the compaction from the application side. For example, in Kubernetes, it's not until, okay, all my API servers have caught up with etcd that I can compact all the old versions, right? If there is, say, an API server still catching up with etcd at an older position...
E
You don't want a compaction at that revision, because the old versions are really there just for catching up, or for configuration rollout and rollback. But for a Kubernetes pod you don't have rollout and rollback, so you just want to catch up with the history. So, I guess, the API server knows the right point at which you should do a compaction.
B
D
C
The question is: are there any new features planned in etcd that would be very useful to integrate with Kubernetes? Previously, I think with version 2 of etcd, there was some abstraction around storage and watching and other things, and at some point it was like: there are so many features in etcd that we want to use that we just stick to etcd without too much abstraction. So is there any new functionality, you'd say, that would be useful in the future?
B
E
I don't think we are fully utilizing the features that we already have in etcd 3, but on the operational side, I think we are adding some more features, like learner, which lets you take a point-in-time snapshot much more easily and makes an etcd cluster more reliable. And we are adding a promote feature: if you have a slow or network-partitioned node, the partitioned node will not come back and disrupt the cluster; that's what the promote feature helps with. So there are a bunch of reliability and usability features.
E
C
Follow-up question: there is a limitation on the size of a Kubernetes resource; I think it's like 1 megabyte that is guaranteed to be processed well by etcd. But if you go above that, there is no guarantee that etcd will be able to handle that kind of size, because of the way it's implemented. It's not really meant for that: if you need to store a huge blob of data, you should use volumes or something rather than etcd.
B
So recently we looked at the total storage limit, which has been defaulted to four gigs for a while, I think, and was it 3.2 or 3.3 that went up to 8 gigs as the default? We've been experimenting with going up to 16, 32, 64, larger sizes, and that seems fine, but the snapshots take a lot longer. So that's kind of one of the limiting factors. But then maybe we can talk about backing stores as well.
B
The per-resource limit: as far as I can tell, the main gating factor there is in Kubernetes itself; it will refuse to decode a watch object larger than a megabyte, I think. On the etcd side, right now we have a 1.5 megabyte default. That's configurable if you want it larger. I don't know if there's much benchmarking; do you guys know if there's any benchmarking around that?
E
Etcd is a serializable datastore, which means that it will process all the requests in sequence, one by one, right? So if you have a very large blob, say 4 megabytes, then the next request needs to wait there for the 4 megabytes to be written down to the disk, and 4 megabytes may take up to 200 milliseconds, or maybe even some seconds if you have a slow hard disk, right? That's one of the limiting factors. The other limiting factor is that etcd is a single cluster, without sharding, right?
E
Once you have a large datastore, let's say, okay, I have one-megabyte values and I have thousands of them, your datastore's total size will grow to maybe tens of gigabytes, or maybe even hundreds of gigabytes. So if you lose one etcd member and then you bring up another one, transmitting the snapshot will take a much longer time, right? And we want to limit the recovery time, the mean time to recover, to something like three minutes. That's why we don't really want to set the default data size much larger. But at Alibaba...
E
Actually, we set it much larger, just because we have enough bandwidth and we have fast enough disks. That's the limiting factor. For upstream, I don't think we are going to change the default; it's just for protecting yourself in a disaster scenario. We don't want you to shoot yourself in the foot, right? Yeah.
I
B
I'm not aware of that. It's an interesting idea; I think we could entertain it. You would have to know that your system can tolerate it. We know Kubernetes can, so that might be a good option for people there; it's something that could be looked into as a way of sizing down snapshots, making them faster for Kubernetes.
I
The latest would be lovely, but I would argue that anything recent is gonna be good enough. I mean, the difference between the revision when I start writing the snapshot and when I finish: by the time I'm finished writing, it's not the latest revision anymore, I assume. So I would say I wouldn't even be too paranoid about making it the very latest, as long as it's a recent revision.
E
I think for now the real cost for larger clusters is not really taking the snapshot, because we are already doing incremental snapshots. In etcd there's a B+ tree on disk, and when you make some changes, we only snapshot those changes; writing that delta to disk costs very little.
E
Actually, the real cost is when you want to transmit the snapshot to another, newly joined node, right? For that case we cannot just transmit the latest revision, because if you do that, the nodes in the system will have different histories, and if a client reaches different nodes in the cluster, it will see different things, and the fundamental thing etcd provides is consistency. I don't think we can tolerate that, right?
D
E
So to solve this problem, as I just mentioned, we are working on a new feature called learner. Basically, you attach a learner to the cluster; it keeps receiving the new entries and keeps writing them down to disk, which means that you won't even need to transmit a snapshot anymore, right?
E
So right now the real cost for a larger cluster is actually around compaction. When you do the compaction, you need to issue a bunch of deletes to etcd, and at least from what I see in our use case, this is the most expensive thing in our etcd right now, because the data accumulates incrementally: over time you keep adding more data to etcd, and when you do the deletion, you have many more things to delete. That's the real cost.
B
You asked about new features. I should also point out we're working on something to do downgrades, so you can do a full downgrade without losing any availability; that's something being worked on over here. And many of the features you're gonna hear about are not so much new user-facing features at the endpoints, but features for operability, because we think it's really important that it's easy for people to operate etcd underneath their Kubernetes clusters.
J
I just came in, so I don't know what format we are following, but one of the new things that has been done recently in the Kubernetes project is that they published skew support, which kind of clarifies what is supported by the Kubernetes project today for upgrades and downgrades. For instance, Jordan Liggitt has proposed a pull request, and everyone is commenting on it.
J
B
The good news is it's much simpler: etcd has a fairly small API surface area, whereas Kubernetes has a massive one. So our rules are really simple, which is that all the 3.x series APIs are backwards compatible, meaning if you've developed against 3.0, you can use the 3.1 client against a 3.2 or 3.3 server and so on, and we will only add new backwards-compatible features. That's the general rule; I don't think we've ever broken that.
K
So I've been running Kubernetes clusters since, like, pre-1.0 territory, and I think in practice what has happened is that single versions of etcd are what's actually tested, deployed, and troubleshot among the community with single versions of Kubernetes. So I feel like what you're saying is sort of more aspirational than... yeah.
B
Well, I would say it's true for the API surface area: the gRPC APIs, we check them, they're backwards compatible; I mean, we haven't broken the schemas and things like that. I think you're talking about an interesting point, which is that there are operational behaviors which have changed throughout the versions of etcd, everything from how often we do a snapshot to how much data you can store.
K
So I've been a co-chair of SIG Scalability for a long time, and I'm just about to hand it off to Shaun here, and I think we've seen performance regressions, which means that there's a lot more to the API contract than the data flowing back and forth in a specific way. There are performance characteristics and other things that are really, really important, especially with the deep use of watches and other things. And so...
J
H
B
We changed the default snapshot setting; the Kubernetes one just froze at the old setting, but it's not necessarily a better setting. I think you could make that case. This is the take-a-snapshot-every-N-entries number, and it does use more memory. So that's a change in the way that the server behaves, but you...
L
B
That's certainly true; I mean, I've definitely been involved in a series of those. So yeah, when you actually test etcd against a particular workload and you're changing its internals and its behavior, you can run into certain unexpected things. So, for example, if we replaced the backing store with something else, then we would want to vet that with Kubernetes. That might be a good thing to talk about: downstream testing versus our upstream testing.
D
One thing we want to do for the next year: we want to improve the test coverage, like conformance test coverage, between etcd and the API server. So one thing we can do is run Kubemark periodically with the latest etcd, and then we can also implement some kind of conformance test between the API server and etcd.
E
I think for large-scale clusters you have to test it yourself, to verify the API server, the controllers, etcd, and basically everything, right? We are running a large cloud service at Alibaba, and we actually modified Kubemark to be suitable for our workload, because for different workloads etcd may behave very differently and the API server may behave pretty differently. And, like what Joe said, the API is compatible, but the internal behavior of etcd may change.
E
We may think that, okay, this change is beneficial for a particular workload, but maybe in the end it's hurting another workload, right? So one thing that we could do is improve the upstream performance test scenarios, including more tests in the upstream scalability testing, so that we can run more workloads against more variants of the API server and more versions of etcd. But from my experience, if you want to run large-scale Kubernetes, you have to verify all the software and the whole stack to make sure it works well.
J
So, circling back to the question, my point was not that it's not obvious. When I searched for the first time, asking, okay, how does an etcd release work, how long is it supported, I found a pull request, after a lot of searching, which explained that from now on etcd will follow the Kubernetes project's releases, and it explained it very clearly there. I was suggesting having that on the GitHub etcd repo as a markdown file that says...
J
Okay, this is how we do releases, and with every minor release of Kubernetes... I can see that being useful, where every release is compatible for up to this many months. Even though it may be obvious from the release if it says it, there are people who find it with just a Google search, and for them it is just very easy. And I'm actually writing a design proposal for Kubernetes to automate that process, so that with every release of Kubernetes the GitHub README also gets changed. It is very obvious for Kubernetes as well.
B
I think we have a searchability problem there. There is a doc hidden somewhere in there about patch management of the etcd releases, and the official policy that we're adhering to is that we're maintaining three versions of etcd, so 3.1, 3.2, and 3.3 are supported. I'm the patch manager for 3.1 and 3.2. So the...
J
B
C
B
Yeah, there's plenty to do. We've got a lot of work on non-voting members, which allow us to reconfigure clusters on the fly without impacting availability. All this downstream and upstream testing, I think, needs work. Any work on clients is a really approachable thing that somebody could do; I mean, we need the language clients in as many languages as we think our ecosystem supports.
E
There are a few more performance- and scalability-related issues, like supporting fully concurrent reads. For now, say the Kubernetes API server tries to issue a large range query to etcd: it may block other read queries, or even write queries, which causes huge latency spikes. And from a scalability point of view, maybe we want to support more values, more keys, in etcd.
F
D
B
It will then, you know, send those through etcd all the way to the API server, which will deserialize all of them and filter them down to just the couple that are actually needed, and this can happen over and over. So there are clearly flows in there where you could take one as an opportunity, look at the access patterns, and find opportunities to improve the system.
E
So on the Kubernetes side, as I said in the questions, there is much more work we can do, if you are interested, right. For example, when we update a pod, we actually update the full object a few times in etcd, which causes a lot of trouble, because you keep storing the same data over and over again, and that blows up the dataset of etcd. A simple thing that we could do is just store the delta between the two versions. There are a bunch of things like that.
E
L
Hi
there
sorry
I'm
a
little
late
and
you
might
have
covered
some
of
these
things,
but
I
was
really
looking
forward
to
catching
up
with
you
guys,
who's
hex
fusion,
in
slack
danby
I,
don't
know.
If
you
remember
me,
we
like
emailed
a
bunch
back
a
few
months
ago,
thanks
all
for
help.
Is
there
one
metric?
That's
like
the
most
important
metric
to
monitor
for
at
CD,
like
we
just
had
one
metric
that
we
could
mock
that
we
could
monitor
what
would
that
be
for.
F
I think there is a health endpoint, the /health HTTP endpoint. If you hit that, first, I mean, the node has to be running, and if it's a cluster, it has to have a leader and have quorum, and I think it tries a small quorum read or write. So I think that's a pretty good condition if you just want one metric.
E
L
E
There's a bunch of general metrics, and we have a dashboard, and it sets up the alerts for you already, so I think that's pretty good. It really depends on what your use case is. Say you are running etcd for pretty small clusters; probably hitting the health endpoint is good enough. And...
L
E
Make sure that your data is not endlessly increasing; if the dataset keeps on growing, you probably want an alert around that. But if you want to make sure that etcd is scaling well, then you want to watch the disk I/O and network I/O latency between the nodes, right, yeah.
F
I've got two sort of follow-up things. I think Xiang mentioned something about the large reads. So I guess the etcd backend is a multi-version concurrency control store, and it sounds like currently, if we have a large read, it sort of blocks other operations, whereas theoretically you should be able to support, concurrently, one write plus multiple reads, right? Yeah.
E
All right, so, going a little bit deeper there: so far etcd serializes reads with writes. When you issue a read to etcd, etcd guarantees that you will see the most recent value that was written before your read came into etcd, right? So the problem is, that requires the read to wait there a little bit longer for the write to go through, and then your read starts. Before, we just wanted to make it simpler.
E
So we just batch everything into one transaction and do everything in one trip, and that just simplifies the development. And right now the way it works is the read waits there a little longer for the write to be able to finish, and then it starts; that's how it works, but then you have a little bit of latency. So probably what you really want to do is, okay, for this single transaction, put in some reads and maybe one write, batch them together, and let the other reads go concurrently.
F
Let's read up some more, and we can catch up offline. Okay, another follow-up with what Joe said: I think you had an example where the API server requests lots of nodes or pods, and etcd sends all of that information. It's basically a large read, all the way to the API server; then the API server filters it down, so not all of it is needed by the API server, but it adds a lot of workload. But in order to pre-filter that in etcd, that would make etcd Kubernetes-aware.
F
E
I talked with some Kubernetes API developers, and one thing that we could do is make etcd understand the protobuf value encoding, so that we can do simple protobuf filtering or JSON filtering; it's a pretty standard API already. So you issue a range to etcd, plus your JSON or protobuf query, and then etcd can filter it; it doesn't have to be Kubernetes-aware, right? And one more thing, which is what we did: just add more indexing into the Kubernetes API server and query that instead.
K
So, just to echo your comment earlier: there's probably more mileage to be had overall by working on the Kubernetes part rather than on etcd, so I think, good job to the etcd team. But I would say the question really is: where do you think we stand? I think, in days gone by, SIG Scalability's official position to the community was: if you're gonna run large clusters, run in split-etcd mode, which is an attempt to try to deal with certain performance characteristics, most of which were, I'll say...
K
E
So in terms of scalability, we are trying to run a 10k-node Kubernetes cluster at Alibaba, and we're targeting the end of, or maybe the middle of, next year. On our side we already did some preliminary testing, and from our perspective, if you have like 10k nodes, or some tens of thousands of nodes, probably one etcd cluster is good enough for you.
E
Maybe what you need to do is change the backend, maybe use some other etcd backend, but you still keep the API compatible and use one etcd cluster for the core objects. The problem is, we still want to reason about the ordering of the requests coming into etcd, and then at your API layer, the Kubernetes API layer, you still have this ordering, the resource version, right? If you split them into many etcd clusters, you lose the ordering between watches. That's...
B
That's the classic one: people will take just the Events and split them off into a separate etcd, and we know that Kubernetes operates well that way right now. That mechanism allows you to split just about anywhere, and I would say, based on testing, that the outcome of doing that split anywhere else in the software is probably pretty much undefined. But that's an opportunity, right? Like, maybe there are reasonable other split points that we could define.
B
We could also benefit a lot from the Kubernetes community if we could, say, make resource versions opaque, so people can't accidentally count on them being the same across things where they're not supposed to be guaranteed to be the same. There's been some pushback on that, but I think that might be an area where we could make some improvements, because if we can get really strong guarantees that the resource version is only the same for some subset of things, then we know where our split points are.
E
The problem is, when you're writing your controller, you need to be aware of that, right? Some objects don't have ordering guarantees between each other; that's software development overhead. For today's controllers, maybe we could look at them and say, okay, those two objects can be divided, because nobody reasons about the ordering between these two objects, right? Events, for example, absolutely can be divided, because nobody reasons about the Events' ordering against the ordering of the Kubernetes core objects, yeah.
H
A
J
E
We try to make one etcd more performant, because even if you have a 10k-node Kubernetes cluster, you will probably store maybe a few million keys in etcd. I don't think that's a very big problem, even for today's etcd architecture, though you need to tune etcd a little bit to support, say, a 10k-node Kubernetes cluster. If your cluster will go over 10k nodes, say a hundred k, probably at that point you have to split etcd.
J
E
B
I should also mention, on the scalability topic, this is something I'm getting increasingly interested in, so maybe I'll be seeing you guys in SIG Scalability more. I get the feeling that sometimes the bottlenecks shift around between the two systems, and it depends on, you know, what hardware you throw at it and what your workload looks like. So I think one of the things that we could potentially get more involved with is identifying where to spend our energy to get the most bang for our buck as we move forward.
J
So I'm also a co-chair of the LTS working group right now, and I feel that if the Kubernetes project thinks about, you know, its release cadences differently, it might impact etcd's release cadences, because when I read that pull request on etcd, it says we are gonna follow Kubernetes; you know, whatever the Kubernetes project does, we're gonna do the same way. So it would be good if you could be represented...
J
...also in that working group, saying how it impacts you. It's on Slack, on the Kubernetes Slack, as a working group; there's a mailing list also, and there's a session at 3 o'clock here. So yeah, I was thinking from the perspective of whether the release cadences of Kubernetes and etcd match somehow. Just thinking from that perspective.
B
Yeah, I think typically etcd runs a little slower, which is probably good for a data system, but it kind of depends on the feature set and what we need, right? I mean, and where the community is at. I would love to see the whole community get off etcd 2 and onto supported 3.1+ versions, and then, you know, make sure that we're actually running against, to the earlier point, paired, tested versions that we know work, yeah.
B
Right, yeah, and we're trying to keep up, but I think once we're done with the latest Kubernetes release, we will be at the latest etcd release, which will be a nice place to be. And that means that, you know, we're not dragging behind in versions; we're testing against a modern one, we've caught up, and we've deprecated the old ones. So I think that story is starting to make more sense, but it's something we always have to work on; there are always new releases coming out.
L
I
Just quickly, sorry, just quickly from SIG API Machinery: we've had this discussion, and I mean the official Kubernetes position on this is that etcd is our reference implementation, and if you, the cloud provider, wish to do something other than etcd, you are welcome to. But it's not our problem if we upgrade to a newer etcd feature, and we're using the etcd client, and you don't implement that in your storage; that is your problem. And that is the official Kubernetes position.
M
Kubernetes works best with etcd right now, and that's what the upstream people, you know, and the tests and everything use. Is there anything going on to make Kubernetes more of a pluggable mechanism, where, you know, not just etcd but other database systems could be used? You know, I think there is some effort going on; I heard some discussion actually this morning about having it more pluggable, versus etcd-only right now.
E
Etcd is composed of the API layer and the storage layer. Etcd uses boltdb, which is a local embedded database, as the storage layer, and there are some efforts, at least in my company, trying to replace the storage layer of etcd. We are trying, say, a MySQL database, or RDS, or our Redis cluster, but just for the storage layer. You can replace the storage layer, but from the front, from the API point of view, we still serve the same API and nothing is changed, right.