From YouTube: Kubernetes SIG API Machinery 20190327
D
Yes, so, what about storage version migration? The goal is to reach beta in [unclear], and this PR is mostly just updating our KEP with the beta requirements. Mostly it's just adding more tests and tooling to ensure that the migration system works under stress. It also adds observability requirements and metrics. So check it out if you are interested. We do want to have some early adopters after it reaches beta, and it will be a crucial step before you can simply upgrade your cluster.
B
It's proposing that we begin serializing objects, or start caching the serialization of objects, as part of some sort of smart object that gets passed around and stored. I'm still not totally sure what I think about it. But if you have thoughts: I put this on here as an FYI, not because I expect us to take action today.
E
I would say that people who are familiar with the watch cache, your feedback would be really useful. If you're familiar with the serialization cache, feedback will be useful. You remember the smart objects that have been discussed off and on here and in SIG CLI, the ones that are aware of their own serialization. This is the KEP to come in and comment on.
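For illustration only, a minimal Go sketch of the "smart object" idea under discussion: a wrapper that is aware of its own serialization and caches it, so repeated encodes of the same object are free. All names here are hypothetical, not the actual proposal:

    package main

    import (
        "encoding/json"
        "fmt"
        "sync"
    )

    // cachedObject wraps a value and memoizes its JSON encoding, so an
    // object passed around the watch cache and the serving path is
    // serialized at most once.
    type cachedObject struct {
        value   interface{}
        once    sync.Once
        encoded []byte
        err     error
    }

    // MarshalJSON serializes the wrapped value on the first call and
    // returns the cached bytes on every subsequent call.
    func (c *cachedObject) MarshalJSON() ([]byte, error) {
        c.once.Do(func() {
            c.encoded, c.err = json.Marshal(c.value)
        })
        return c.encoded, c.err
    }

    func main() {
        obj := &cachedObject{value: map[string]string{"kind": "Pod", "name": "example"}}
        a, _ := json.Marshal(obj) // encodes the value
        b, _ := json.Marshal(obj) // reuses the cached bytes
        fmt.Println(string(a) == string(b))
    }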
F
Not only do we need to do this just to interoperate well with Go; it actually works way better than what we're currently using for dependency management, which is godep. So there's a PR that actually switches our vendoring tools to use go modules, and comparing how fast CI runs: it takes 30 minutes to verify vendor with godep, and it takes 3 minutes with go modules.
There are some other interesting parts of go modules that we could maybe take advantage of in the future, like semantic import versioning, but that requires all of our dependencies to also support semantic import versioning, which almost none of them do yet. So that's more of a forward-looking possible benefit. This KEP is describing what we plan to do. The first aspect is switching our vendoring tools to use go modules. That's pretty self-contained; it's limited to just the update-vendor, verify-vendor, and update-vendor-licenses scripts. Question?
F
Right now we have two different formats of Godeps.json files. We have the ones that are checked into kubernetes/kubernetes, which have dummy revisions for all of the staging components, like revision xxxxxx. Obviously those can't be used directly by godep or glide or anything. At publish time we take those subfolders inside the kubernetes/kubernetes repo and push them out to their own repositories.
At that point, the publishing bots rewrite those to insert the actual SHAs that the staging components can use to find each other. So the plan is to continue publishing that at publish time, but just to generate it from the dependencies in the go modules. When we publish out to the individual repos, we would generate Godeps.json files for use by things like glide or godep. And it will still be Godeps.json; yeah, it'll still be named the same as right now.
This describes some of the options we have around versioning. The proposal is actually to stick with what we're currently doing. It is no worse with go modules and it leaves our options open. Some of the implications of switching to major versions or semantic import versioning are irreversible, and so for the first phase the goal is really just to get our vendoring tools working well and make sure the things we publish out can be consumed well by module-aware consumers.
So right now, in the Go 1.12 time frame, modules are able to be used, but they are not on by default. The goal is to structure the kubernetes repository so that it works for people who are using modules and for people who are not using modules, which means the vendor directory and the staging symlinks exist for at least a couple more releases. There are other considerations around vendoring aside from the staging symlink thing, like being able to check out kubernetes/kubernetes and have a reproducible build, so before we get rid of that...
F
We would like to convert all of our build scripts to use go modules, and at the point where you have them using go modules, you have options. You can either build pointing at a local vendor dir, even with modules, or you can build pointing at a local module cache. Again, there are questions around hermetic builds.
F
Then it lists a bunch of dependencies, and these are in the form of their root import path and the version that you depend on. Now, any time you see require, you actually have to switch that in your mind to "require greater than or equal to", because go modules use minimal version selection. That means all of the modules involved in a build get to require versions, it takes the greatest of the required versions for a particular module, and that's what gets used. So when we say we want to use version N of a module, if one of our dependencies requires a newer version, then that newer version will actually get used.
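As a sketch, a require block in a go.mod reads like this (module paths and versions here are illustrative), and each line is effectively a "greater than or equal to" constraint:

    module k8s.io/kubernetes

    require (
        github.com/spf13/cobra v0.0.3   // effectively: >= v0.0.3
        github.com/google/gofuzz v1.0.0 // effectively: >= v1.0.0
    )

If another module in the build requires github.com/spf13/cobra at v0.0.5, minimal version selection picks v0.0.5, the greatest version anyone asked for, rather than solving a general constraint graph.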
This dependency here is what's called a pseudo-version. You'll notice almost all of our dependencies actually don't use semantic version tags, or we are not on commits that are tagged with semantic versions, and so Go generates a pseudo-version, which consists of the timestamp of that commit and then the SHA of that commit.
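An illustrative pseudo-version (the dependency and commit here are made up) looks like:

    require github.com/example/dep v0.0.0-20190327152510-abcdef123456

The v0.0.0 base means no semver tag precedes the commit, 20190327152510 is the commit's UTC timestamp, and abcdef123456 is the twelve-character prefix of the commit SHA.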
F
So we have a bunch of require statements. When you actually start up a module for the first time, if you say go mod init and you have a Godeps.json file, it will actually populate all these require directives with the existing revisions that your Godeps.json said you depended on, which is kind of nice.
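A minimal sketch of that bootstrap, assuming the module-aware go tool of that era, which could seed go.mod from existing dependency manifests such as Godeps/Godeps.json:

    # in the repository root
    go mod init k8s.io/kubernetes   # seeds require directives from Godeps.json when present
    go mod tidy                     # then reconciles them against the actual imports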
F
The +incompatible marker means that this dependency has git tags that are using semver, but it is not using go module versioning. The way go modules want you to do major versions is by actually changing the import path of your module. So what that would mean is: if we wanted to bump kustomize to v2 and do it the go-module way, with semantic import versioning, we would actually have sigs.k8s.io/kustomize/v2, and then we would depend on v2.0.3 of that. That is what lets you actually have two different major versions of a module coexist in the same build; from Go's perspective it's a completely different package. You could accomplish the same thing by having one branch and just having a v2 package in it. And so this +incompatible indicates that this is a semantic version tag, but it is not using semantic import versioning, so it throws the +incompatible on there to let you know: hey, if one of your dependencies depends on v3 of this kustomize component, we're not going to be able to let you use both of those simultaneously. All of this normalization and generation of pseudo-versions and stuff is done by the Go tooling. If you run go mod tidy and it formats your module file, it will actually go resolve these, figure out the timestamps of the SHAs, sort it, and put it in this format. So this is all done by the Go tooling. All right.
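Side by side, the two forms discussed above look like this in a go.mod (versions illustrative, and a real project would use only one of them):

    // with semantic import versioning: the major version is in the import path
    require sigs.k8s.io/kustomize/v2 v2.0.3

    // without it: a plain semver tag at v2 or above, which the tooling flags
    require sigs.k8s.io/kustomize v2.0.3+incompatible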
So that's the require section. That lets all the modules involved in a build say: here is the minimum version I would like. Now, the top-level module in a build is special. It gets to exert control, and so, even though there are lots of modules, lots of dependencies and transitive dependencies involved in a build, the top-level module gets to be the decider, and it can do a couple different things. First of all, it can pin versions. You express this by saying: whenever we're deciding what to use for this module, I am going to replace it with this module at this version. This is what lets kubernetes/kubernetes pin versions, no matter what transitive dependencies are trying to do.
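A sketch of such a top-level pin (module and version are illustrative):

    // force this version for the whole build, regardless of what
    // any transitive dependency's require statements ask for
    replace github.com/example/dep => github.com/example/dep v1.2.3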
Now there are a couple things you can do here. One, you can pin to a particular version, so that's neat. Another thing you can do is rewriting: you can say, whenever the system stat module is requested, I actually want you to pull that content from some other module entirely. This could be useful in cases where kubernetes has a dependency that is unmaintained and there's a security vulnerability found in it. We could use replace to point that module at a fork that is well maintained.
F
So that's handy. It's also super handy for working locally: if you've got a set of repos and you want to point somewhere else temporarily, do builds, and let things actually resolve and actually build, that's really nice. And then the last thing you can do, if I come down: you can actually point to locations on disk instead of remote locations. This lets us indicate that whenever one of our staging modules is being looked for, we can locate it relative to the kubernetes/kubernetes root module.
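That is the local-directory form; a sketch of what it looks like for a couple of the staging modules, located relative to the kubernetes/kubernetes root:

    replace (
        k8s.io/api => ./staging/src/k8s.io/api
        k8s.io/apimachinery => ./staging/src/k8s.io/apimachinery
    )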
F
Just names; I'm not actually sure. I think some of the original go get semantics come into play, about understanding different kinds of version control providers and things like that: it knows how to find modules from GitHub and Google Code and gopkg.in. But okay, this lets you specify arbitrary module paths, so I could have a module that had three segments or four segments or something, and it's pointed at the location there.
F
This actually is something that's nicer than glide: the ability to point to a local directory. This is really, really handy. So that's what the structure of the kubernetes/kubernetes go.mod file looks like. Only the top-level module gets to specify replace directives. Other dependency modules can specify them, but they don't get honored in a build; only the top level's are.
B
A question, which is: if your replacement uses a version which doesn't meet the requirements, is that a compile error? Yeah, like, I guess v0.0.0 is going to be easy to satisfy, but if I replace one of the others with a version that is lower, like the disk version is lower, does that fail the compile, or does it assume that your replacement...
F
...is okay? Replacement wins. These are unconditional replacements. You can replace specific versions as well: on the left side, you could say I want to replace cloud.google.com/go v1 with a particular thing. You could conditionalize the replacements if you wanted to, but these are unconditional. Now, obviously, if you point at a version and it pulls down the content for that version and it actually won't compile, well, yeah, at that point you've made a compile error. But replace wins, unconditionally.
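For completeness, a sketch of both forms (versions illustrative):

    // unconditional: every requested version of this module is redirected
    replace cloud.google.com/go => cloud.google.com/go v0.34.0

    // conditional: only requests for exactly v0.26.0 are redirected
    replace cloud.google.com/go v0.26.0 => cloud.google.com/go v0.34.0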
F
There are commands to manipulate the go.mod file. If you look at the update-vendor script in the branch that I have open for the go module work, there's a lot of this, where I do a whole sweep through the existing stuff. One of the things I do is look at all the require statements and then generate replace statements, to make sure that we're pinning things at the top level, so we have equivalent control to the current Godeps.json file.
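A minimal sketch of that kind of sweep, assuming only standard go tooling plus jq; the real update-vendor script is more involved:

    # read the effective require list, then pin each entry with an explicit replace
    go mod edit -json | jq -r '.Require[] | .Path + "@" + .Version' |
    while IFS=@ read -r path version; do
        go mod edit -replace "${path}=${path}@${version}"
    done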
F
And the Go tooling does a few things that are kind of wacky. It aggressively updates the require statements in your go.mod file to reflect reality. So if we say we want v1.0.0 of something, and one of our dependencies says it wants v1.1.0, the Go tooling will actually update your top-level go.mod file to say: no, actually you want v1.1.0, because you're getting v1.1.0 anyway, since this dependency says it wants it.
So it will bubble up the actual versions that would have been selected into your require statements at the top. Now, your replace statements still rule, but that at least gives you visibility: hey, require says 1.1 and we're pinning to 1.0, so we have a mismatch here. So, I mean, half the time that was nice, because I could see what the effective minimum version was.
F
Really, that's kind of an overview: the bits that we're using, and some of the forward-looking things that we might be able to use in the future if all of our dependencies behave better. And yeah, the short version is: you can run update-vendor locally in a minute and a half, and it works every time, which is awesome.
I think it's a combination of godep being terrible, and go modules making some simplifying assumptions about how they determine versioning. The way go modules determine versioning is to scan dependencies and take the biggest version requested, instead of trying to do tons and tons of graph resolution and letting things bound min and max. You either pin at the top level, or it just takes the max of all the requested ones. So yeah, it's fast.
E
Okay. I'd say I'm not arguing against that, but I think, really, several of us remember that the GC tried it, and the GC isn't using it.
F
Just about to get to that. So I went and looked at what we would need to do. There are three controllers today that could benefit from it. The two that are easier are quota accounting and namespace deletion. Namespace deletion actually wins, looking at this, because namespace deletion calls JSON list on things, and in e2e tests, something like thirty percent of the CPU time on the masters during e2e runs is actually doing namespace JSON lists. So there are some nice wins to be had in our e2e infrastructure, and that's what got me down this path. The GET paths are mostly fine.
The protobuf serialization has to be updated, because we accidentally left out ListMeta, and so we're going to have to decide whether we're okay with PartialObjectMetadataList going to GA with a different protobuf ID numbering than almost every other list we have. We've made this mistake on a couple of beta fields, where we did items and then it got ID one, and then we realized we could fix it when it went to v1, since it's a different object.
We fixed it in v1. That is a good point; so I will go take a dig through that. And then the gap is the garbage collection controller. Unlike all the others, the garbage collection controller needs to call PUT, and so we would have to do server-side support for putting PartialObjectMetadata. It's not really that hard; I was looking at it. It basically would require the REST layer reading it and merging it with the storage version, which we already have to do.
B
Can we not do a patch?
F
Actually, no: in 1.15 it's still calling PUT. And so I was going to also create a PartialObjectMetadata client that looks a little bit like the dynamic client, but it's a lot cleaner, because it can use the existing serialization infra. A second note: I have an outstanding change, that Jordan and I have been talking about, to clean up the serialization stack in client-go. It sort of takes conversion out. It takes some of the hacks that David had to put in to get dynamic... and they're not hacks.
Sorry: clever workarounds that David put in to get the dynamic client out the door. It cleans that up, just because we don't need conversion in client-go. So I want to bring that in, get some discussion going on removing conversion from client-go, which would allow us to simplify both the dynamic client and the partial object client, so that you would basically have a fairly straightforward, very clean code path.
B
That's great. Okay, I thought I'd give like a 30-second update on the rate-limiting discussion. Those of us working on it had another sync-up meeting yesterday, and we have another one next Tuesday. Basically we're close to converging. We're still talking about the mechanism for taking things out of the queues, but we're pretty close on that. So look for a sort of combined PR with our collective design. I don't know, maybe we can have that out by the next SIG API Machinery meeting as well.
B
Yes. So I saw that you asked this question in Slack and nobody answered, so shame on us; it seems legit to ask in this SIG meeting. I have just one thought on it, which is that it seems like the cause and the effect are separate. Whether we broke JSON merge patch is pretty interesting.
I think, and I hope this is documented, that webhooks really need to be upgraded before the rest of the control plane when you're doing an upgrade, and that needs to be tested, unfortunately. But I'd love it if you could describe the JSON merge patch thing that you have observed.
J
So the older behavior was that the value was this sherpa-injector.service.net/status set to "injected". Am I showing my screen or not? No; I basically pasted it in the chat. And then the fix that is working is basically that the path now includes the whole status field in the labels. Does that make sense?
F
So that's basically like: you were using it incorrectly, yeah. There were two bugs that I'm aware of. One was in the JSON merge library, but I don't think that was this. The other was that we weren't actually clearing; we weren't decoding into a new object, and so whatever the current state of the object was, the result of your patch would actually get overlaid on top of it. That meant you couldn't actually remove fields using a mutating webhook, and 1.9 sounds like about the right time frame for that, but...
B
Yeah, unfortunately we didn't build in a... there's not necessarily a mechanism for telling if a webhook is incompatible with the new control plane, so it is kind of left as an exercise to the reader at the moment. I think if I were running a cluster with some custom webhooks, it would definitely be good to have a test environment: do the upgrade in a test environment and see how the behavior is. Okay, so...
B
The answer to that is: with a webhook, you don't really have a choice. It's always JSON Patch. ("Oh, I didn't know that.") Okay, yeah, you don't have a choice. And in general, if you're not using webhooks, if you're just using the API, then I would recommend planning on using server-side apply as soon as that is in general usage, which hopefully will be 1.15; I hope it goes to beta. Okay.
B
So part of the reason strategic merge patch exists is that, in the case of our lists, you need to be able to refer to an element by key in order to be useful, or by, like, the name field or something like that. In order to make a patch that remains useful in a concurrent environment, where somebody else could insert a list item, you want to be able to refer to the list item based on one of its fields, not based on its index, because that could change.
So to do that, we made strategic merge patch. Now, a webhook doesn't operate under that condition. It's totally fine to refer to an item by index in a webhook, because there's no chance that somebody else is going to modify it before the result of the webhook is applied. So a strategic merge patch is not necessary for webhooks. Cool.
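To make the contrast concrete, two illustrative patches against a hypothetical Pod. A webhook response uses an RFC 6902 JSON Patch, which addresses the container positionally:

    [{"op": "replace", "path": "/spec/containers/0/image", "value": "nginx:1.17"}]

The equivalent strategic merge patch addresses it by its merge key (name), so it stays correct even if another writer reorders or inserts list items:

    spec:
      containers:
      - name: app
        image: nginx:1.17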
F
Yeah, so real quick, on the client-go bit: it's about cleaning up client-go and simplifying the interfaces, getting rid of some of the legacy cruft that's just accumulated over the years. I'm trying to slim it down so that it matches where we are: APIs are versioned, internal clientsets are gone, conversion really has no place in client-go, etc. It should simplify a bunch of use cases.
F
This
out
is
getting
rid
of
go
restful
in
the
serving
path
we
don't
actually
use
go
restful
for
any
of
the
decision-making.
We
don't
use
it
for
media
types.
We
don't
use
it
for
negotiation.
We
basically
just
pass
right
through
into
the
HTTP
stack
and
go
restful,
is
actually
fairly
wasteful
of
allocations.
I
put
a
couple
fixes
upstream
and
then
just
realized.
It
wasn't
worth
it
so.
At the current point, I'm looking at swapping out the mux logic that we use with something that's better and faster and doesn't allocate as much, which will give us a nice performance win on everything. I'm going to play around a little bit more, then put something up for folks to look at, maybe in the next couple days. go-restful really isn't involved in our serving path today, so it's mostly a mechanical change, but there are some good wins in there. So we should get some performance just from general cleanup across the board.
F
Yeah, so as of 1.14 we're no longer using it for those, and fortunately it's still very easy to initialize it, verify all the stuff's there, and keep all the docs if we actually need it. Basically we'll put a shim underneath it, since it's not actually used in the serving path, and then if we find we don't need it, we can pull the whole thing out.
F
Almost every measurable one I've ever seen, we've caught there, or it showed. But what I did notice, and I think to your point, Mike, is that we were growing slowly, and it was like 10 or 15 small things, so those are kind of hard to catch in a performance test. I don't know what we could do to get there.
There's your get-and-watch benchmark, Clayton. I wouldn't mind running that in CI and saying the number of allocations in a GET path shouldn't grow without consideration. Oh yeah, what I would suggest is: we get a couple of these fixes in, then pick a number, and say if you go over, you just fail that unit test and we'll see it. That requires some Bazel changes, and I hate Bazel, so I may need some help to deal with that aspect of it. But okay.
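A minimal sketch of that kind of allocation gate using the standard testing package; the handler and the budget here are stand-ins, not the real GET path:

    package handler

    import (
        "encoding/json"
        "testing"
    )

    // handleGet is a stand-in for the real GET serving path.
    func handleGet() {
        _, _ = json.Marshal(map[string]string{"kind": "Pod"})
    }

    // TestGetPathAllocations fails once per-request allocations grow past an
    // agreed budget, so ten or fifteen small regressions get caught in CI
    // instead of surfacing later in scalability runs.
    func TestGetPathAllocations(t *testing.T) {
        const budget = 10 // hypothetical budget, tuned after a few fixes land
        if allocs := testing.AllocsPerRun(100, handleGet); allocs > budget {
            t.Fatalf("GET path allocates %.0f objects per request; budget is %d", allocs, budget)
        }
    }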
B
Yeah, I think I saw somewhere that maybe we can start running our benchmarks continuously, because I don't think we do. ("We don't, yeah.") I'll take a look at the Bazel part. I think it would be super useful to just run our own benchmarks continuously.
F
And honestly, that has caught almost every actual performance problem we have, or it makes it easy to profile and find where the cause is for most of the things. I do think we're missing something at a meta level. We have a large number of independent components that make up a request: profiling, audit, and so on, in full runs. Part of why I've been doing this is because of all the OpenAPI performance issues; they're still there.
You know, with the work that's defined there, and the folks we're helping with it: in the meta runs, like an overall run, I get very different results than I would on some of the benchmarks, because of things like audit and authorization. That's where I think the e2e scalability runs help, because they capture profiles, and you can download that profile and look at it in a flame graph in ten seconds, and it'll tell you exactly where some of those heavy hitters are in the core paths.
N
Okay, yeah, I'm not familiar with that, so I would be interested; I guess I'll see if I can find it and figure out how to do that. I think, you know, I'm interested in promoting the kube API machinery for more general use, and as part of that I think we should have some documented and maintained performance results that people can refer to, without being mixed up with all the rest of kubernetes.
B
...promises that the project makes, or, like, SLOs, stuff like that. I think it would be great if we could make a scalability claim about the number of objects tracked by the control plane, because this becomes more important as people shuffle more work into CRDs. It becomes important to know how many CRDs I can put in the control plane.
N
Right. I would also say that we're thinking about how systems work at a higher level. I mean, even in kubernetes, what we've got is, in some sense, this ecosystem of controllers interacting through objects, and sort of the fundamental step is from a client submitting an update to an object to a controller processing that update. So having some statement about the latency and throughput of that kind of step is something that higher-level designers would need and want to work with.