From YouTube: Kubernetes SIG API Machinery - 20220420
Description
April 20th, 2022
[35 minutes] [wojtekt] API pagination in watchcache
https://github.com/kubernetes/enhancements/pull/3274
[20 minutes] [liggitt] etcd client test gaps
context:
https://groups.google.com/a/kubernetes.io/g/steering/c/e-O-tVSCJOk
https://github.com/kubernetes/kubernetes/pull/106591
https://github.com/etcd-io/etcd/pull/13737
A
There we go, recording. Thank you for joining. Welcome, everybody, to the SIG API Machinery biweekly meeting. Today is April 20, 2022, and we have a lot of folks on the call. We have a couple of very interesting topics today, and I think we should get into them to use the time appropriately. The first one is from Wojciech. Thank you for joining, Wojtek; I will leave it with you.
B
I would like to finally graduate this to GA. I mean, the original KEP isn't coming from me; it's coming from Clayton, from 1.8 or 1.7 or something like that. So it's like five years old or so, and we still haven't graduated it to GA.
B
So I think it's high time to actually do that. One of the things, or probably the only really controversial thing, is that I believe one of the things we should do for GA is to add pagination support also to the watch cache. This is the core of the proposal, or the core of the change in the proposal. I'm not sure: do you want me to briefly go over what I'm proposing? I think most people here have read it, because there were a bunch of comments.
B
Okay, so where we are currently is: basically, we have support for pagination when listing from etcd, and we don't have any support for pagination when listing from the watch cache. So we have this strange semantic, in my opinion at least, where, if you pass a resource version equal to zero, the limit parameter is actually ignored and the watch cache returns all the items requested by the list. If you don't specify a resource version, I mean it's an empty string, the request is basically always redirected to etcd.
B
So
we
are.
We
are
just
honoring,
the
the
delimiter,
the
limit
parameter
and
and
do
the
pagination.
If
you
set
the
resource
version
to
some
actual
number
not
equal
to
zero,
then
if
the
limit
is
said,
we
send
the
request
to
hcd
and
use
pagination.
If
the
request
is,
if
the
limit
is
not
limit
is
not
set,
then
we
serve
it
from
cash.
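The dispatch rules just described can be summarized as a small decision function. This is an illustrative sketch of the semantics as stated above, not the actual apiserver storage code:

```go
package main

import "fmt"

// listSource models, in simplified form, the dispatch described above:
// where does the apiserver serve a list from, and is the limit honored?
// This is a hypothetical simplification, not the real watch cache logic.
func listSource(resourceVersion string, limit int64) (source string, limitHonored bool) {
	switch {
	case resourceVersion == "":
		// No resource version: always delegated to etcd, limit honored.
		return "etcd", true
	case resourceVersion == "0":
		// RV=0: served from the watch cache, limit silently ignored.
		return "watchcache", false
	case limit > 0:
		// Exact RV with a limit set: sent to etcd so pagination works.
		return "etcd", true
	default:
		// Exact RV, no limit: served from the watch cache.
		return "watchcache", true
	}
}

func main() {
	src, honored := listSource("0", 500)
	fmt.Println(src, honored) // watchcache false
}
```

The proposal below amounts to making the `resourceVersion == "0"` branch return `true` for `limitHonored` as well.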
B
So this is roughly where we are. What I'm proposing is basically to start honoring the limit even when the list is served from cache, which is pretty helpful for performance reasons, and it will unify the semantics of the list: we will always honor the limit parameter. In terms of implementation, I think the biggest concern is that it's a potentially complex change.
B
What I'm proposing is to basically switch the data structure in which we are storing the current state from the current, more or less, hash map to a B-tree, and actually reuse the same B-tree that etcd is using underneath, so hopefully from etcd we get a lot of soak time.
B
So hopefully it is a fairly reliable structure. Then we wrap this B-tree to implement the same interface that the watch cache is currently using, and simply replace the existing cache with the B-tree-based cache.
B
Like
a
couple
like
smaller
changes,
that
needs
to
happen
there,
but
I
think
we
can
start
from
from
here.
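A minimal sketch of the shape of that change, under stated assumptions: the names here are hypothetical, a sorted slice stands in for the B-tree, and the real watch cache interface is richer. The point is only that an ordered structure, unlike a hash map, makes "list keys from X, up to limit" cheap:

```go
package main

import (
	"fmt"
	"sort"
)

// orderedStore is a stand-in for the proposed B-tree-backed store.
// Keeping keys sorted is what makes paginated listing cheap; a plain
// hash map cannot offer ordered range scans.
type orderedStore struct {
	keys []string
	vals map[string]string
}

func newOrderedStore() *orderedStore {
	return &orderedStore{vals: map[string]string{}}
}

// Add inserts or updates an entry, keeping keys sorted.
func (s *orderedStore) Add(key, val string) {
	if _, ok := s.vals[key]; !ok {
		i := sort.SearchStrings(s.keys, key)
		s.keys = append(s.keys, "")
		copy(s.keys[i+1:], s.keys[i:])
		s.keys[i] = key
	}
	s.vals[key] = val
}

// ListFrom returns up to limit values with key >= start, plus a
// continue key if more remain: the operation pagination needs.
func (s *orderedStore) ListFrom(start string, limit int) (out []string, cont string) {
	i := sort.SearchStrings(s.keys, start)
	for ; i < len(s.keys) && len(out) < limit; i++ {
		out = append(out, s.vals[s.keys[i]])
	}
	if i < len(s.keys) {
		cont = s.keys[i]
	}
	return out, cont
}

func main() {
	s := newOrderedStore()
	s.Add("pod-b", "B")
	s.Add("pod-a", "A")
	s.Add("pod-c", "C")
	page, cont := s.ListFrom("", 2)
	fmt.Println(page, cont) // [A B] pod-c
}
```

The proposal is analogous: swap the backing structure while keeping the store interface the callers see unchanged.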
E
I was going to say some other stuff, but before I dive in: David actually raised a good point, which is, what's wrong with just keeping it the way it is? I mean, sure, the semantic is inconsistent, but is that really that bad?
B
I
think
this
misleading,
I
think
it
would
be
useful
also
to
support
the
like
the
problems
that
we
have
is
that
we
can't
basically
redirect
many
of
the
existing,
or
there
are
many
requests
that
we
can't
redirect
to
at
cd,
in
particular
like
listing
pots
that
cubelets
are
doing
when
on
on
start,
because.
C
Okay, so what if we said we want to GA this as is, but before we do, we want to have, at GA level for three releases, the behavior that initial lists from resource version zero are handled by using a watch instead, and we get all of our known and well-behaved clients fixed. Then we say: okay, now, when you set a limit, it's going to behave consistently and we will just always send it to etcd. And if you crush your etcd, well, that's probably because you aren't using the recommended way to list-watch.
E
So there are two threads here. One is how to make this GA and tie up loose ends, and the other is, I think, if you read the subtext here, you're saying that we need to do this watch cache thing for performance reasons. Performance isn't really a GA-or-not reason; this is just a convenient time at which to do it. Does that sound like what you're saying, Wojtek, or did I mischaracterize you?
E
Okay, all right. Then, yeah: so why not the thing that David said?
B
Yeah, so I was thinking about it, and I'm still seeing a bunch of components, especially third-party components, that are using client-go from 1.7 or 1.9 or whatever, or 1.11, and it's hard to convince all those people. We don't have good leverage to convince them to actually migrate to newer versions.
B
And
with
that,
like
any
changes
that
we
do
currently
will
take
years
to
propagate,
to
all
those
components
and
from
operators
perspective.
We
it
I'm
a
little
bit
afraid
that
we
will.
This
will
har.
E
I
mean
we've:
we've
got
some
tools
right
like
we
could.
We
could
tweak
api
priority
and
fairness
to
like
serialize
all
non-ideal
list
requests
or
something
right
like
it
would
make
those
clients
slow
and
that
would
give
people
an
incentive
to
update
them.
B
Yes and no. I mean, we can prevent them from overloading the API server, and I 100% agree with that one, but we will potentially break the functionality that those components provide. In some cases, where cluster admins actually also own those components, they will...
B
They
will
probably
be
able
to
fix
that,
but
in
some
cases
those
are
like
some
third-party
components
that
they
just
use
and
all
they
can
do
is
ask
those
third-party
providers
to
change
that
which
is
sometimes
happening.
E
I
understand
I
at
the
same
time,
like
of
all
the
strategies
I've
seen
to
like
get
people
to
upgrade
things
like
just
making
it
slower
is
like
the
least
disruptive
and
and
like
the
disruptiveness,
to
like
probability
that
you'll
actually
go
up
upgrade
it
ratio
is.
It
seems
pretty
good
with
that.
One.
B
Yes. On the other hand, the biggest escalations that I've seen from customers are also coming from: why did you break my scalability? Something worked, and I didn't change anything.
E
Okay, so here's the thought I have about this, and it's actually not just in relation to this proposal, but to some other discussions that have happened in a few different places.
E
I feel like the API server has a kind of choice to make. Historically, we have decided, or we have at least pretended, that we're not a database, and we're not offering particular features of a database such as indexing, fast queries, complex queries, transactions. The API server is not in the business of doing those things. I'll call that the status quo option; that's kind of, historically, what we've done.
E
In
contrast,
I
see
two
potential
roads
that
we
could
go
down
the
so
the
second
option
is
the
road
where
we
admit
that
we're
a
database-
and
we
start
implementing
some
database
features,
and
I
would
consider
this
b
tree
and
mvcc
thing
a
database
feature.
So
I
think
I
see
that
as
moving
us
down
that
road
other
other
other
similar
features
again
are
like
implementing
selection
over
particular
fields
on
crds
right.
If
we
made
that
efficient.
That
would
that's
definitely
a
database
type
feature.
E
So
I
see
that
that
is
like
option.
Two
is
like
okay,
I
guess
we're
a
database
and
we
better
act
like
it
and
implement
all
this
stuff,
and
then
I
see
option
three
as
like.
Okay,
the
contract
users
want
is
a
database,
but
our
business
value
is
not
adding
database
features.
That's
that's
not
necessarily
the
thing
that
we're
good
at
or
maybe
we
are.
I
don't
know
if
we
haven't
really
tried
that
much
but
to
implement
the
the
user
requests
for
these
database
type
features.
E
Maybe
it's
time
to
get
a
real
database
or
you
know
somehow
right,
and
that
would
be
super
disruptive.
It
would
involve
like,
like,
like
our
storage
format,
would
have
to
change
right.
You
can't
store
opaque,
proto,
double
double
proto
blobs
in
a
database
and
expect
the
database
search
features
to
work
right.
So
that
would
be.
That
would
be
super
disruptive,
but
it
would
maybe
be
the
most.
The
more
correct
way
of
getting
database
type
features
that
users
the
users
want.
B
Yeah, I think I understand the concern. The reason why I don't fully agree that this is now the decision point is that we already added support for pagination; it's already in the API. So it's a matter of whether we implement it consistently or we implement it in the current partial way.
E
Partly
right,
like
the
imagination
that
we
added
currently
passes
through
to
to
etcd,
so
it's
it's
that's
sort
of
in
category
three
right
where
we're
using
the
database
as
it's
intended.
F
I think I wanted to agree with Wojtek. I mean, if you just stop and think about the API we have, forget about the implementation, think about the API: we've already told clients they can paginate, we've already told clients they can do field selectors. So I don't see it as signing up for a bunch of new features; I see it as finishing the implementation of the features we've already got.
C
One option for finishing that implementation is to say we have the current state that we have, and we consider that finished. Or we could say: we have the state that we have, we have given you a way now to be efficient, and we are going to make your limit request always honored; the side effect of that is that it can be slow. I see that as completing the feature we started.
G
Yeah, so maybe that's repeating what you guys said before, but I think you can see that as an option which is not revisiting things and not building a database, but just enhancing what we have. It's not building new features, but more like improving the implementation.
E
I get where you all are coming from, but I would say that the database-y features that we have are incomplete at best. Field selectors are hand-curated and extremely unperformant. Pagination works only in particular scenarios, which is the complaint here.
E
There's
no
transaction
support
at
all
right,
like
our
queries
in
general,
even
of
label
selectors
are
rather
are
extremely
inefficient,
except
for
a
couple
that
we
hand
optimized
right.
So
the
the
question
isn't
like.
Like
yeah,
we
gave
people
some
half-baked
semantics,
so
the
the
the
question
is:
do
we
want
to
like
turn
those
fully
baked
or
live
with
it?
The
way?
D
I would ask, when we're considering finishing out things like label selectors or field selectors or whatnot, from David's framework: if you're not delegating that to a database that has that done, and you're implementing B-trees in the API server, how is that not building the database?
C
Yeah, I'm not proposing that we build these trees. I'm proposing that we explicitly don't, and say: we have the features we have, we have the scale that we have; if you are looking for something different or something more, you're looking for a system on top, not for something that we provide here.
B
Yeah, so I wanted to better understand the concerns that you have. Do I understand correctly that you are mostly afraid of introducing additional complexity due to this feature, or is it something else?
E
Complexity is definitely a large part of the concern, and not just the complexity of getting it working at all, but also the ongoing maintenance. I don't know if that's completely the concern, but it is definitely a part of it, yeah.
E
Yeah, I feel like adding a B-tree and MVCC is definitely... maybe it's arguable that we're not a database before that, but if you add something like that in, it's time to stop pretending.
B
I'm wondering if there is something that we, or I, can do to address the concern about complexity. Would better testing, or better abstraction of things, help here, or does that not address anything, in your opinion?
C
Right, and etcd is suffering from a lack of people either willing or able, or both, to maintain it.
H
I think, you know, databases usually maintain a sparse index, and then a memory buffer that is capped at a certain size, like one gigabyte of data or something, and then you bring data from disk into the buffer and then index it. This is how at least some databases work, I think.
B
So I didn't have time to respond to that comment since I saw it, but I think it's doable to make it work within the existing limit. It won't decrease the existing memory usage, but it won't visibly increase it either. I'm completely fine with saying that it's a requirement that we can't increase it.
E
Not that we'll store more history... well, yeah, we will store more history for specific objects, especially if a single object changes a lot.
B
Yeah,
we
are
storing
the
objects
themselves
anyway,
because
we
are
storing
the
transaction
locks
anything.
So
it's
it's
not
that
we
will
be
storing
more
objects.
It's
only
the.
B
So
that
one
we
can
probably
fix
and
say
we
we
are
still
going
to
store
like
75
seconds
and
if
we
are,
if,
if
we
exceeded
that
and
like
that,
the
request
is
basically
older
than
that
we
just
forward
it
to
its
cd.
It's
it's
fairly
rare
that,
like
we
are
like
doing
continuations
from
that.
Also,
we
can
probably
like
afford
redirecting
those
to
fcd.
E
I mean, while we're talking about tactical alternatives, I think we should consider some other things. It occurs to me that a lot of times people just want to list the names of objects; they don't want the contents of all the objects. If we implemented a list API that just listed names, could we maybe return all the names in one response and not need to rely on pagination so heavily?
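Something close to this already exists for the "don't send me full objects" case: a client can ask the apiserver, via content negotiation, to strip list items down to metadata (a `PartialObjectMetadataList`). The request sketch below is illustrative; the server URL and path are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
)

// metadataOnlyList builds a list request that asks the apiserver to
// return only object metadata via the Accept header. The server then
// responds with a PartialObjectMetadataList instead of full objects.
func metadataOnlyList(server, path string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, server+path, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept",
		"application/json;as=PartialObjectMetadataList;g=meta.k8s.io;v=v1")
	return req, nil
}

func main() {
	req, _ := metadataOnlyList("https://cluster.example", "/api/v1/pods")
	fmt.Println(req.Header.Get("Accept"))
}
```

As the next reply notes, this only helps clients that truly need no more than metadata.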
B
Yeah, I should probably have added all the cases that I'm aware of, which is a number of different components from different areas. None of them need full objects, but pretty much all of them need more than just object metadata.
F
Yeah,
so
one
of
the
things
we
find
in
databases
is,
you
can
do
a
read
with
a
projection.
I
say
return
this
fragment
or
these
pieces
of
the
object.
I
don't
know
we're
taking
cases
you're
talking
about
is
something
like
could
something
like
that.
Yeah.
J
Well, but CPU spent on serialization and deserialization is still, I think, if not the overwhelming majority of the time, a big chunk of the time. We optimized out almost all the other components, but serialization, deserialization and re-serialization are still fundamentally expensive, and the problem is there's a lot of overhead in that approach and it's diminishing returns the more CRDs exist in the system.
J
I only bring that up because, between avoiding deserialization and avoiding serialization, there's a lot there. What is the thing you're optimizing for as a project? Is it partial lists of pods on nodes?
E
I
yeah
I
I
mean
that
that
goes
back
to
the.
I
don't
know
if
you
were
here
to
to
hear
my
trilemma
clayton,
but
that
that
kind.
E
Here,
for
that,
that's,
okay,
that
goes
back
to
the
the
choice
of
like
you
know:
status
quo,
which
is
like
don't
implement
any
more
database
features
or
admit
that
we're
a
database
and
go
ahead
and
implement
database
features
ourself
or
admit
that
we're
database
features
but
we're
a
database.
But
we
want
to
outsource
all
the
features
to
an
external
database
right.
J
Or delay it deliberately while we invest in a better alternative. One of the things I struggle with is that the use cases in the ecosystem are diverging, but I'm not sure that the high-scale people really still care about anything other than base pod or node performance. It's really two different audiences.
E
Yeah, I think I actually agree with Mike that we need to consider non-built-in resources.
B
Yeah, but I wanted to say that it's somewhat orthogonal to this KEP, because the KEP is not specific to built-in resources; it works the same way for both.
E
Okay, okay, yeah. Then you're probably right, if you tried it. Maybe that's a bug, though.
E
Me
I
do
think
like
improving
crd
scale,
is
something
that
maybe
also
fits
into
this
trilemma
right.
J
Yeah, and again, I don't know that we've fully hit all of the performance aspects of deserialization. I'm not saying, and in fact I'm not trying to imply, that this isn't the right thing to spend our resources on. It's just that there are a lot of things here: we've got the undigestable protobuf lump sitting in our stomachs, of proto2 and gogo protobuf.
E
If you're switching to something like Mongo, which uses BSON, which is basically JSON but binary, even then we wouldn't be able to search, because we have this double-encoded proto thing. We would have to adjust the storage format to make it work with Mongo's indexing.
E
Well, yeah, and I think it's also not just how much we're investing, but what we're investing towards. If we're going to switch storage engines, then that's a very different storage optimization than if we assume we're not going to switch storage engines, in which case we just want to make deserialization faster.
G
Not for pagination, but if we're talking about the possibility of admitting that we should behave like a database, to me at least that sounds a little bit like it may be overkill. And if that's the use case, what are our use cases for that?
D
For instance, people asking: hey, I'd like to create a user-provided index on my CRDs. That sort of thing.
C
Yeah, field queries, sorry, field selectors and label selectors: to do those efficiently requires an index to be built, and cleverly using the etcd transactions, for instance, could give you a look-aside index.
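To make concrete what a look-aside index means here, a hypothetical in-memory sketch (not anything the apiserver or etcd ships): maintain a map from a field value to the keys of matching objects, kept up to date as objects change, so a selector query becomes a lookup instead of a full scan.

```go
package main

import "fmt"

// fieldIndex is a hypothetical look-aside index: field value -> object keys.
type fieldIndex struct {
	byValue map[string]map[string]bool // value -> set of object keys
	current map[string]string          // key -> its currently indexed value
}

func newFieldIndex() *fieldIndex {
	return &fieldIndex{
		byValue: map[string]map[string]bool{},
		current: map[string]string{},
	}
}

// Update (re)indexes one object. It must also undo the old entry,
// which is the bookkeeping a real index has to get right.
func (ix *fieldIndex) Update(key, value string) {
	if old, ok := ix.current[key]; ok {
		delete(ix.byValue[old], key)
	}
	ix.current[key] = value
	if ix.byValue[value] == nil {
		ix.byValue[value] = map[string]bool{}
	}
	ix.byValue[value][key] = true
}

// Lookup answers a selector-style query without scanning all objects.
func (ix *fieldIndex) Lookup(value string) []string {
	var keys []string
	for k := range ix.byValue[value] {
		keys = append(keys, k)
	}
	return keys
}

func main() {
	ix := newFieldIndex()
	ix.Update("pod-a", "node-1")
	ix.Update("pod-b", "node-2")
	ix.Update("pod-a", "node-2") // pod moved nodes
	fmt.Println(len(ix.Lookup("node-2"))) // 2
}
```

The hard parts the sketch glosses over, and that the discussion is about, are keeping such an index consistent with writes (where etcd transactions could help) and doing it at scale.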
G
It's probably worth writing something down, right? If you're doing a small-capacity field search, that's not really hard enough to justify implementing all of this kind of overhead inside of the API server, but...
F
Let's see: the referenced object is around as long as the referencing object is using it. So, for example, while a pod is using a PVC, we want to keep the PVC around, and similarly, while a PVC is attached or bound to a PV, we want to keep the PV around. Now, we do have transactions in the kube API, but they're single-object transactions, single reads and writes, and so you can't maintain referential integrity; that's what this is called in database land. You can't maintain referential integrity with single-object transactions.
F
Okay,
so
we
just
can't
do
it
in
directly
in
the
api
machinery,
but.
F
I'm getting there. Okay, so the point is that the API machinery itself can't do it.
F
What
you
do,
what
you're
talking
about
is
enlisting
the
services
of
a
controller
and
so
right
we
have
existing
demonstrations
of
how
to
do
it
in
the
existing
stuff
that
keeps
the
pvc
around
while
the
pod
is
used
and
keeps
the
pv
around
while
while
a
pvc
is
using
it
and
so
the
key,
the
key
way
of
you
know-
and
I
wrote
up
a
blog
entry
about
you-
know
the
abstract
idea
here
right
and
the
idea
is
that
you
have
you
use
a
finalizer
and
you
keep
an
object
around
while
it's
in
use
by
another
object
and
you
enlist
the
services
of
a
controller
to
you
know,
remove
that
finalizer
when
it's
okay
to
to
have
the
object
deleted
and
by
careful
ordering
of
the
actions
of
the
controller
that
you
can
do.
F
But
in
order
to
do
this
pattern,
you
basically
have
to
query
for
things
that
are
using
a
particular
thing.
Okay,
so
now
we're
getting
into
query
by
a
field
selector.
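The core decision in that pattern reduces to a small check, sketched here in hypothetical form (the finalizer name matches the real PVC protection finalizer, but the types and helper names are illustrative; a real controller makes `usedBy` an API query, which is exactly the query need just mentioned):

```go
package main

import "fmt"

// pvc is a minimal stand-in for a referenced object with finalizers.
type pvc struct {
	name       string
	deleted    bool // deletion has been requested
	finalizers []string
}

const protectionFinalizer = "kubernetes.io/pvc-protection"

// canRemoveFinalizer is the controller's core decision: the protection
// finalizer may only be dropped once deletion was requested AND no pod
// still references the claim. usedBy stands in for the "list all pods
// using this PVC" query a real controller performs against the API.
func canRemoveFinalizer(claim pvc, usedBy []string) bool {
	return claim.deleted && len(usedBy) == 0
}

// removeFinalizer returns the finalizer list without the target entry.
func removeFinalizer(fs []string, target string) []string {
	out := fs[:0:0]
	for _, f := range fs {
		if f != target {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	claim := pvc{name: "data", deleted: true,
		finalizers: []string{protectionFinalizer}}
	fmt.Println(canRemoveFinalizer(claim, []string{"pod-a"})) // false: still in use
	fmt.Println(canRemoveFinalizer(claim, nil))               // true: safe to release
}
```

The careful ordering the speaker mentions is about when the controller runs this check relative to pod creation and deletion, so the "in use" query never races with a new user appearing.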
E
Yeah, I think to take what Mike is saying and what David is saying and combine them: what we're saying is, we don't want to write this hard code that we don't know how to maintain, so we're going to make users do something similar to work around it?
E
Just from observing people try to do this dance correctly: if you want to do a manipulation that requires multi-stage locking to do correctly, it's just really challenging for most people. I agree, in ways that I wouldn't have appreciated before I saw people struggle.
F
Yeah,
so
you
know
again,
I
can
I
just
want
to
emphasize
you
know
I
I
think
you
know
without
yeah,
I'm
not
so
much
interested
in
I
kind
of
am
so.
I
have
a
short
list
of
features
that
I
would
like
to
add
to
the
queries,
but
you
know
mainly,
I
think,
we're
talking
about
you
know
we
started
here
talking
about
completing
the
existing
features,
but
anyway
you
know.
F
Overall,
you
know
I
kind
of
sympathize
with
the
idea
of
you
know.
Why
are
we
repeating
the
work
of
implementing
a
database
when
there's
already
other
people
that
are
implementing
maintaining
databases?
Maybe
we
should
just
sign
up
for
storing
in
a
more
serious
database,
and
you
know
steve
tells
me
he's
going
to
present
something
about
that.
Sometime
soon,.
E
All
right
yeah,
I
I
steve,
had
an
item
here
and
I
booted
it
to
next
time,
because
I
thought
that
this
item
would
take
a
while,
and
on
that
note
I
think
we
should
move
on
to
the
last
topic
and,
unfortunately,
jordan,
chatted
at
me
and
said
that
he's
stuck
and
can't
join
us.
So
hopefully
we
can
figure
out
what
he
wanted
to
tell
us
from
these
contact
links.
F
So
can
I
one
other
remark
that
you
know
it's
kind
of
bugging
me
about
this
previous
discussion
about
people.
You
know
not
having
people
available
to
maintain
stuff
sure
you
know
this
is
something
that
it
was
marked.
It
came
up
in
ncd,
it
comes
up
again
and
again.
Right
I
mean
we've
had
cases
of
like
absolutely
foundational
stuff
right,
like
the
log
for
jbug
right,
there's,
absolutely
foundational
stuff,
which
is
just
pitifully
under-maintained
yeah.
It
seems
to
me:
we've
got
an
industry
problem
here.
Right
we've
got
everybody
trying
to
free
ride
off
of
almost
nothing.
C
That's my point: I'm interested in finding a way to solve the needs for my little spot. I'd be very happy with something generic, but I don't know that I see a practical way for me to influence log4j, for instance.
F
Of
course,
we're
not
going
to
solve
it
even
for
ourselves
now.
This
is
this.
Is
a
problem
that's
bigger
than
us
all
we
can
do.
Is
you
know
kick
it
upstream
or
up
up?
You
know
up
to
high
level
people
right.
Just
add
our
voices
to
the
people
saying
there's
an
industry
problem
here
right,
tell
our
execs
there's
an
industry
problem
here.
C
I'm
hoping
for
something,
maybe
a
little
bit
more
concrete
and
lower
level
for
etcd
in
particular,
but
I
am
sensitive
to
not.
I
am
aware
enough
that
I
don't
want
to
dig
that
stack
deeper.
I
have
one
basket
I
need
to
watch.
I
don't
want
to
make
a
second
one.
F
I
I
think
we,
you
know,
look
the
big
problems
aren't
going
to
still
get
solved
quickly.
Obviously
we
have
to
take.
You
know
tactical
steps
and
do
what
we
can
coping
with
industry
as
it
is
today,
but
I
think
we
you
know
should
also
you
know,
add
our
voices
to
saying
you
know
there
is
a
big
problem.
It
needs
a
bigger,
different
way
of
being
solved.
J
The
easy
answer
is
find
the
use
cases
in
kubernetes
that
have
people
who
need
to
go
solve
those
problems
and
find
a
way
that
sustainably
overlaps
that
investment.
That's
the
only
way
that
open
source
works.
That
is
how
open
source
works.
We
find
problems
that
need
to
be
solved,
and
then
we
align
the
investment
in
that
new
thing
and
we
get
the
win
on
fixing
the
things
that
we
come
out
like
that's
the
only
way
that
anything
ever
gets
done
anywhere,
especially
in
open
source,
and
I
think
we
can
do
it.
J
That's
that's
partially
why
I
was
asking
about
watch
cache
performance
like
is
this
a
scale
problem?
That's
why
steve
is
working
on
the
database
stuff
because,
like
we
think,
there's
to
use
cases
around
cubans
crds
that
are
can
be
improved.
I
know
there's
people
who
are
looking
for
it,
like
I'm,
trying
to
find
the
reasons
that
we
need
to
improve
some
of
these
specific
things
as
dan.
You
were
saying
like.
Why
are
we
doing
this?
What
is
the
use
case
broader
than
just
get
to.
E
Yeah,
I
think,
yeah,
I
I'm
worried
that
we
didn't
give
wojtek
any
guidance
whatsoever,
any
concrete
guidance
about
this
cap.
I
I
I
I
do
think
we
should
take
a
minute
and
and
talk
about
the
ftd
client
stuff
that
jordan
has
put
here
from
what
he
told
me
in
in
over
chat.
There
are,
let's
see.
K
I
I
actually
am
here
I'm
on
mobile,
so
I'm
not
sure
my
signal
but
yeah
I'm
waiting
for
a
tow
truck.
So
who
knows
what
this
will
be
like,
but
just
to
summarize
briefly,
the
context
is
in
the
the
thread
about
ncd.
K
I
linked
to
a
particular
pull
requests:
updating
the
sap
client,
where
there
were
some
questions
about
like
test
coverage
and
there's
at
least
one
area
where
it
doesn't
seem
like
there's
sufficient
coverage
upstream
and
there's
no
coverage,
I'm
aware
of
in
our
tests
and
so
like
in
terms
of
like
concrete
impact.
It
means
when
there's
a
update
to
an
std,
client
library
like
as
a
reviewer.
I
don't
know
if
it's
safe
or
not.
K
If
it's
not
well
tested
upstream
and
we
don't
have
tests,
I
don't
know
if
it's
safe
to
accept,
and
so
I
just
want
to
raise
that
particular
area
and
see
like
do.
We
want
to
try
to
help
add
test
coverage
upstream
or
get
coverage
in
our
kubernetes
suites,
and
what
do
we
do
in
the
meantime
until
that
coverage?
Is
there.
E
Yeah,
I
I
think
to
go
back
to
the
trial
trial
that
I'm
that
I'm
positing.
E
If
we're
not
going
to
switch
databases,
then
we
had
better
make
sure
that
fcd
is
functioning
the
way
we
think
it
is
functioning
and
anyone
who's
been
following,
probably
knows
that
atd
has
had
some
bugs
and
it
was
suggested
that
you
not
use
the
first
couple
versions
of
3.5
which
is
concerning.
So
this
is
yeah.
E
But
yeah,
I'm
concerned
in
general,
and
this
this
is,
I
mean
it's:
it's
both
an
argument
for
finding
a
real
database
that
nct
is
having
issues
at
the
moment
and
it's
also
an
argument
against
attempting
to
implement
our
own
database,
because
why
are
we
going
to
do
better?
So
that's
that's
my
that's
my
current
take
on.
I
I
to
be
clear
like
like.
E
If
you'd
asked
me
like
a
month
ago,
what
I
thought
of
rolling
over
to
a
different
database,
I
would
have
been
like
100
against
it
and
today
I'm
only
like
80
against
it.
K
Yeah, so the particular gaps that I was noticing were around things like failover scenarios and multi-server scenarios and TLS handling. Those are the three areas I know have been problematic in the past, and I'm just trying to chase issues and test stuff upstream in etcd.
K
It
seems
like
there's
been
some
efforts
made
to
to
close
those
gaps,
but
there
are
still
gaps
flexible,
and
I
like
to
it's
not
clear
to
me
that
the
tests
that
are
there
actually
match
what
production
environments
are
encountering
and
so
that,
like
trying
to
highlight
the
gap
and
then
it
seems
like
closing
the
gap
upstream,
is
the
right
place
to
do
it.
But
I
I
don't
know
enough
about
their
test
infrastructure
to
know
how
reasonable
that
is.
J
I mean, it is worrying that we don't have failover tests. Certainly, I would say, when OpenShift was doing our rolling upgrade tests and all that, we built in things to make sure that etcd stayed available, and we definitely hit client hang bugs, and Sam put some things upstream. But we didn't quite get all the way, because we don't run a ton of HA etcd rollover upgrade scenarios that exercise rolling updates of etcd; we're not hitting that scenario.
J
So
I
would
lean
a
lot
of
the
failover
connectivity
stuff
to
the
upstream
because
it's
going
to
have
fewer
dependencies,
but
it
also
points
that
like
do
should
we
have.
Should
we
be
thinking
about
spending
a
little
bit
more
time
on
rolling
update
of
cube
api
server
and
net
cd
upstream.
E
That
failover
from
api
perspective,
api
servers,
perspective,
works
but
testing
it
to
verify.
If
we
think
that
most
of
the
problems
are
in
the
nct
client
like,
I
think
we
should.
We
should
test
that
in
the
etsy.
So.
K
One
one
nuance
of
that
is
that
the
failover
behaviors,
like
a
lot
of
the
weird
edge
cases,
tend
to
live
at
the
intersection
of
the
scd
logic
and
the
grpc
libraries
and
the
versions
of
the
grpc
libraries.
That
etcd
is
testing
with
may
or
may
not
be.
The
versions
that
we
are
building
into
the
cube
api
server
like
we
might
have
picked
up
a
newer
version
of
grpc
david.
J
Definitely
hit
some
of
those
like
early
like
we
had
a
lot
of
grpc
hangs.
I
haven't
seen
those
in
a
year
or
two.
I
don't
know
david
if
you've
seen
them
recently,
but
I
I
I
would
worry
about
version
skew
there,
because
a
regression
could
creep
in
and
it
would
hit
us
at
scale
almost
immediately
like
us.
K
...with a given version snapshot. So if the tests lived in etcd and could be invoked as normal Go tests, we could run those with the versions that we have; but the more esoteric or involved the setup is, like their integration tests or e2e tests on the etcd side, we don't have good ways to run their complex CI stuff with our gRPC versions, right?
E
We
we
could.
We
could
like
like
do
that
in
a
fork
of
fcd
like
right,
like
fork,
cd
put
in
the
grpc
version,
we're
going
to
use,
run
the
tests
right
like
something
like
that.
C
And as an example of what we do: we can say, okay, I'm going to vendor kube, regardless of whether you're supposed to or not. We vendor kube, kube has tests associated with it, and we pull those testing libraries and run that test inside of whatever we've vendored it into. You could do something similar with etcd, where you have a test, and sure, they run it, but it's structured in a way that you can essentially vendor that test and run it locally for yourself. And just from an etcd perspective, it makes sense to me that we would be doing, especially with gRPC, more aggressive testing against recent versions of it, and maybe against multiple versions of it, because, just the way go.mod works, you're going to have anybody using the client potentially picking up not exactly the version that you tested with, if you want to pick one version.
C
Well, I guess I also listed it in the KEP itself: I would really like to see us make most clients stop using the case that doesn't work right, which I believe is resource version 0 with a limit set, and then increase the cost, you know, degrade the performance of clients who are making that request, and say: you should update to this new thing. And then eventually make it just point directly to etcd, and so we gain unification that way: all limit requests are treated the same.
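The "new thing" clients would move to is the standard paginated list: pass a limit and follow the continue token until it comes back empty. A generic sketch of that loop, paging an in-memory slice with hypothetical helper names (with client-go the same shape uses `ListOptions.Limit` and `ListOptions.Continue`):

```go
package main

import "fmt"

// listPage simulates one paginated server response: items from offset,
// at most limit of them, plus the next offset as a continue token.
func listPage(items []string, offset, limit int) (page []string, next int) {
	end := offset + limit
	if end > len(items) {
		end = len(items)
	}
	page = items[offset:end]
	if end < len(items) {
		return page, end
	}
	return page, -1 // -1 models an empty continue token: done
}

// listAll is the client loop: keep requesting pages until the
// continue token is exhausted, accumulating the results.
func listAll(items []string, limit int) []string {
	var all []string
	for offset := 0; offset != -1; {
		var page []string
		page, offset = listPage(items, offset, limit)
		all = append(all, page...)
	}
	return all
}

func main() {
	pods := []string{"a", "b", "c", "d", "e"}
	fmt.Println(listAll(pods, 2)) // [a b c d e]
}
```

Each page is bounded, which is the whole point: the server never has to materialize the full result at once for any single request.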
J
They
don't
note
that
note
that
resource
version
0
is
wrong
for
components
that
are
restarting,
because
they
can
time
travel
back
in
time.
So
we
already
have
that
thing
that
we
still
haven't
fixed
that
bug.
So
there's
at
least
a
part
of
me,
that's
like
I
don't
want
people
using
resource
version
0
unless
they
absolutely
have
to
anyway.
B
There's also a counter-argument, which is that we will be building the support for this, I mean, the API for streaming the lists will be there, but a bunch of that will be built into our client library, which is fine for components that are written in Go. But there are also other components written in other languages that will not be able to directly benefit from that, and they won't migrate to it easily.
E
So
my
my
first
idea
about
what
we
should
do
is
separate
the
ga
discussion
from
the
optimization
discussion
and
take
all
the
commits
to
this
cap,
except
for
the
last
one
and
merge
those
and
call
the
feature.
Ga
and
then
talk
about
this
optimization
separately,
because
you've
got
three
commits
here
and
the
first.
The
first
two
are
obviously
great
and
all
the
stuff
that
we
need
to
discuss
is
in
the
last
one.
So,
but
that
that,
being
my
my
impression
for
how
to
make
progress,
I
don't
know:
does
that
sound.
B
Sure, yeah, I'm happy to discuss further, like splitting the discussion. I'm not sure I fully agree that we should GA without it, but I agree that the things that David just mentioned make sense to me, independently of what we do with the rest.
A
Well, thank you for the discussion. It looks like we will have room to continue this discussion, either at the next meeting or at a separate meeting. Thank you, everybody, for joining. I hope you have a good rest of your day, and we'll see you next time.