From YouTube: Kubernetes SIG API Machinery 2019 02 13
B: But it seems to me that's not necessarily the right fit for the Kubernetes control plane, because it's not built out of RPCs, all right. So I tried to think of the right way to do this. I sent a proposal to the Instrumentation SIG and got a reply from someone who had opened a KEP for how to do tracing on the kubelet; he's mainly thinking about the kubelet and a little bit about the scheduler.
B: And I replied; I put a comment in his PR about why I think spans are not the right conceptual basis. So I thought that should be interesting to discuss with this group as well, obviously, since it's kind of a generic, principled issue for a control plane, and that's API machinery. The way it seems to me is that the Kubernetes control plane is built out of state writes. We strive to write our controllers in a level-based way, but this does not deny that there are edges.
B: In fact, of course there are edges, and they're very important to performance, and we care about the edges. Each edge is a write to some state, and generally, when a controller does that, it is based on what it read, that is to say, earlier state writes whose effects it read. So it seems to me the right conceptual basis for performance observability (you might call it tracing, but that might mislead people) is basically capturing these writes and the relationships between them.
B: Of course, we already have the latency metrics that come out of the work queue, but those only capture what happens in one iteration of the work queue, right. The higher-level latency metrics are the interesting ones, and in fact, in my own performance work, I produce such metrics in an ad hoc way: the latency from one state write to a consequent state write.
B
So,
if
we're
capturing
those
relationships,
this
could
produce
automatically
these
higher
level
latency
metrics,
as
well
as
the
observability
data,
and,
of
course,
it's
been
observed
that
actually
capturing
and
saving
all
of
this
data.
You
know
just
as
with
traditional
tracing,
that's
a
lot
of
data,
and
you
don't
necessarily
want
to
do
it
all.
It
seems
to
me
there
are
just
in
some
sense
of
three
levels
in
which
this
might
be
done.
One
is
maybe
doing
nothing.
B
One
is
the
sort
of
a
full
instrumentation
keeping
and
propagating
all
the
information,
and
an
intermediate
level
would
be
to
record
in
the
objects
the
times
of
the
state
rights,
but
not
what
they
depend
on.
That
would
be
enough
to
enable
the
production
of
these
higher
level
latency
metrics.
It
would
be
an
intermediate
cost.
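As a rough illustration of the two non-trivial levels just described, here is a sketch of what such a record could look like; the package, type, and field names are hypothetical, not an existing Kubernetes API.

```go
// Hypothetical sketch only: one record per state write, optionally pointing
// at the earlier writes whose effects the writing controller had read.
package writetrace

import "time"

// WriteRef identifies an earlier state write by object and resource version.
type WriteRef struct {
	Object          string // e.g. "default/pods/my-pod" (illustrative key format)
	ResourceVersion string
}

// WriteRecord is the "full instrumentation" level: the write, who made it,
// when, and which earlier writes it was based on. The intermediate level
// discussed above would keep only Object, Writer and Time, dropping BasedOn.
type WriteRecord struct {
	Object  WriteRef
	Writer  string // controller identity, e.g. "scheduler"
	Time    time.Time
	BasedOn []WriteRef // omitted at the intermediate level
}
```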
B: To take an example, let's just look at the gross outline of the lifetime of a pod. Somebody creates it, and that records the creation timestamp. Then the scheduler gives it a placement, which is not timestamped anywhere, and then the kubelet goes ahead and implements the thing, and the result of that is not timestamped anywhere. But if, for example, we just captured the timestamp of the state write from the scheduler, then we could automatically produce latency metrics from that.
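Just to make the arithmetic concrete: assuming, hypothetically, that the scheduler's placement write were timestamped on the pod (today it is not), the higher-level metric is a simple difference. The metric and function names below are made up for illustration.

```go
package podlatency

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// podSchedulingLatency is the kind of higher-level metric discussed above:
// time from the pod's creation write to the scheduler's placement write.
var podSchedulingLatency = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name: "pod_scheduling_latency_seconds", // hypothetical metric name
	Help: "Latency from pod creation to the scheduler's placement write.",
})

// observeScheduling records one sample, given the two write timestamps.
func observeScheduling(createdAt, scheduledAt time.Time) {
	podSchedulingLatency.Observe(scheduledAt.Sub(createdAt).Seconds())
}
```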
A: We're going to record, we are recording, the field managers and which fields they're changing, and to handle the case where somebody is changing fields repeatedly, we're actually recording a timestamp along with which fields they changed. So you'll be getting that data, and a little bit extra on top of it, now.
A: I definitely agree with the first point you made, which is that tracking RPCs is not particularly useful given the way the Kubernetes control plane is constructed, because what's interesting is what triggered things. I worry that the set of triggers will just grow unboundedly over time until it encompasses every object in the cluster, yeah.
B: So I'm not proposing that; what I would propose is that controllers explicitly record individual relationships, not the transitive closure. Of course the transitive closure grows over time, but what this data records is individual relationships. And also, as the example of the scheduler shows, I think it'll be critical to be able to summarize, right: the scheduler makes a decision based on reading all the nodes and all the pods, and we don't want to record all of those relationships every time it makes one.
B: One of the things I did not discuss at all is how, for most instrumentation, this factors into a few phases, right. You have to instrument the code to inject the primary data into some kind of instrumentation framework in the local process. Then there has to be some kind of collection that collects it from all the processes, and then...
B: ...you know, it gets stored somewhere. So on those latter points, one could, for example, imagine using Events as the way to get that data out. I would think the primary instrumentation should be with an API that's appropriate for the content, rather than just ad hoc stuffing things into Events. I think we should have a purpose-built API for it. I don't know.
B: Yeah, Context. No, look, I completely agree that Context will play an important part in conveying the information, but Context is only going to convey information. There need to be points at which a controller says: okay, I'm making this state write based on those things I read. And that's a new kind of data that we should have a purpose-built API for expressing. Yes, it may pull some things out of Context in the course of doing that, and it may put some things into Context, but the primary new data is...
A: I don't know if we should go too in-depth on whether we need an API or not in this group; I think that's a good thing to work out in the design or the KEP. I am very interested, actually, in that first layer of interactions, because I'd like to use it to compute a number for our system, which I'll call the R-value: given a particular write, on average, how many other writes does that imply throughout the stack? That's a property our system has.
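Stated as a formula (the notation here is mine, not from the discussion), the quantity being asked for is an average write fan-out:

$$ R \;=\; \frac{\text{total consequent writes observed across the stack}}{\text{total triggering writes}} $$

If each consequent write in turn implies about $R$ more, a single original write amplifies to roughly $1 + R + R^2 + \dots = 1/(1-R)$ total writes when $R < 1$, which is one reason an average like this could be a useful number to watch.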
B: I agree there are other things one could use as well. One could also look at these latencies; they are, in some sense, in part a measure of the performance of the control plane just in shuffling information around between controllers, and I would like to see metrics on that coming out of our regular testing, yeah.
A
Okay,
cool,
let's
I,
guess
any
last
comments
on
this
all
right.
You
can
add
a
to
the
notes,
so
people
can
know
where
to
go
to
follow
up
on
this
conversation.
E: You want to give us a two-minute summary? I will try to be fast. So the motivation is that we would like to avoid unnecessarily processing the same watch events that we already processed. The context is that when the watcher is observing only a small percentage of objects, we don't really have a way to let the watcher know that we already processed some stream of events but there wasn't anything in it that it was interested in. Not sure that's clear.
E: It doesn't really have to be that old; even if it's within the history window and it's still sending that resource version, we still need to process a bunch of watch events that happened in the meantime, which we may already have processed when the watch was previously established. Okay.
E: That's right, so that's the motivation. What I'm proposing is to introduce what we're calling a bookmark. This is a new event type that will give us the ability to say that we have already processed all the watch events up to this resource version.
E: It's backward compatible. We will be encoding it in the object, because the watch event contains a runtime object inside, so we will just be encoding the resource version in the object. But that's more of an implementation detail; the important part is that clients must opt in to observe this.
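A rough sketch of what that opt-in could look like from a client's point of view, assuming the event type and the list-option field land roughly as proposed here; the exact names are an assumption, not settled API.

```go
package bookmarkdemo

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchWithBookmarks opts in to bookmark events and uses them to keep the
// caller's resourceVersion fresh even when no interesting events arrive,
// so a restarted watch does not have to replay already-processed history.
func watchWithBookmarks(ctx context.Context, c kubernetes.Interface, ns, rv string) (string, error) {
	w, err := c.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		ResourceVersion:     rv,
		AllowWatchBookmarks: true, // the client must explicitly opt in
	})
	if err != nil {
		return rv, err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if o, ok := ev.Object.(metav1.Object); ok {
			rv = o.GetResourceVersion() // bookmarks carry only a fresh resourceVersion
		}
		if ev.Type != watch.Bookmark {
			// handle Added / Modified / Deleted events here
		}
	}
	return rv, nil
}
```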
G: I was just reading this in the background. I think we should definitely think about super-high cardinality, or super-high numbers of watchers, and the impact on the system that's already there. Like, I know systems that have ten thousand active watches at any time, and so we'll probably want to switch to bookmarks in those cases, but just...
G: Let me just say that the only thing I was really worried about in this proposal was a super-large number of watches and unanticipated behavior: oh, we sent a bookmark event to 10,000 watches all at the same time, and the API server used up all its memory and died, which caused a thundering herd. That kind of stuff, not the value of the proposal.
J: I'm here, Tappan. Just a reminder about the KEP out there about safe flag sets; it's in heavy review, and we want to go forward with it. It's not that critical, because it goes into experimental inside of component-base, but we want to move that forward and iterate from there. It wasn't clear why it wasn't just going to be a different repo.
D: It looks like a new flag-parsing wrapper; it's not a hugely significant choice one way or the other. We would end up dealing with some opinionated hierarchy chains, I think. As I understand it, the proposal is to try to map what the kubelet did during its transition from flags to structured config. It's...
B: But I've been working with him, so I can relay a lot of what he says beyond what's in here. Yeah, would you like to give the overview? Okay, so yeah, this is responding to something we've discussed before. The proposal is to take the existing setup: in the chain of handlers in an API server, one of them applies concurrency limits. There are two concurrency limits, one for mutating requests and one for non-mutating requests, and the proposal here is to extend that.
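For context, the existing behavior being extended is plain accept-or-reject concurrency limiting per request class (mutating versus read-only). A minimal sketch of that policing behavior, not the actual apiserver filter, might look like this:

```go
package maxinflight

import "net/http"

// withMaxInFlight is a toy version of the two-limit policing described above:
// a request either grabs a slot immediately or is rejected outright.
func withMaxInFlight(h http.Handler, readOnlyLimit, mutatingLimit int) http.Handler {
	readOnly := make(chan struct{}, readOnlyLimit)
	mutating := make(chan struct{}, mutatingLimit)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		slots := readOnly
		switch r.Method {
		case http.MethodPost, http.MethodPut, http.MethodPatch, http.MethodDelete:
			slots = mutating
		}
		select {
		case slots <- struct{}{}:
			defer func() { <-slots }()
			h.ServeHTTP(w, r)
		default:
			// Traffic policing: no queueing or shaping, just reject.
			http.Error(w, "too many requests", http.StatusTooManyRequests)
		}
	})
}
```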
B: Currently they do simple traffic policing, which is to say a request is either accepted and immediately started, or it's rejected. The proposal is to add some traffic shaping to achieve some goals. Really, we have two sorts of goals that we've been talking about. One is system-protection goals: we want the system to protect itself from the workload and from out-of-control components within the system. The other is that, at least in a multi-tenant system...
B: ...you want to have some concept of tenant and fairness between tenants. And, as you probably know, I'm interested not only in Kubernetes but in other systems built like Kubernetes with the API machinery, and I definitely have some multi-tenant scenarios, so that's one of my concerns. People in this group tended to start more with the system protection, but I think the intent is that we should be able to accomplish both; it's essentially the same kind of thing, protection of one form or another.
B: So the proposal that this contributor originally brought phrased this in a way that I recognized as equivalent to the way VMware expresses scheduling constraints. The idea is that requests are classified: at this enforcement point, requests are classified according to some predicates that identify which class a request belongs to, and it's classified based on things you can tell by looking at the request when it arrives. And this does not apply to long-running requests, like watch and connect.
D: Yeah, I spoke with Yui when he started this, and we modeled it on TC, right. So when you model it on TC, it actually tries to match your request with a class, and we have the additional piece of choosing that based on your subject. In addition to the rest, the subject matching will be the primary classification, at least that was the way it was initially sketched, and then from there you can have the additional limits, right.
B: All right, so actually I think one of the most productive things to discuss is to be careful about what we need and what we want. The proposals here really talk in fairly simple, blanket terms about system versus workload, but as I recall from our earlier discussions, there was a lot of concern with: well, what if one of the controllers just goes crazy for some reason? So I'm talking about protecting the system from other parts of the system as well, yeah.
A: I interpreted the proposal as saying those were just two examples of the sorts of queues that you could have. I think within the system bucket we definitely need more than one queue, right. Somebody brought up the example of leader election; that's higher priority than other things, yeah.
B: So yeah, I think the best way to do it is probably with a priority sort of scheme. We let people write predicates, and we can have overlapping predicates, but each predicate, for each class, has a matching priority. You look at all the classes whose predicate matches the request, and the one with the highest matching priority is the class that you actually want to work with. And yeah, we want to be able to use the authenticated information that comes out of the authentication handler, which is indeed earlier in the chain.
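A toy sketch of the classification step just described, predicates with matching priorities and highest match wins; every type and field name here is made up purely for illustration.

```go
package classify

import "net/http"

// RequestClass pairs a predicate with a matching priority (illustrative only).
type RequestClass struct {
	Name             string
	MatchingPriority int // among matching classes, the highest priority wins
	Matches          func(r *http.Request, user string) bool
}

// Classify returns the matching class with the highest matching priority,
// or nil if no class matches the request.
func Classify(classes []RequestClass, r *http.Request, user string) *RequestClass {
	var best *RequestClass
	for i := range classes {
		c := &classes[i]
		if !c.Matches(r, user) {
			continue
		}
		if best == nil || c.MatchingPriority > best.MatchingPriority {
			best = c
		}
	}
	return best
}
```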
B: Still, the number of things that are matched against in RBAC is much smaller than the number of tenants in a multi-tenant system. You go to a multi-tenant system, you may have thousands of tenants; we don't deal with thousands of RBAC rules. Are you expecting to give each, like, namespace...
B: Yeah, I don't think we have to have a blocking problem here at all. Twenty years ago, people working on message brokers figured out technology for fast matching of a message against a lot of subscriptions, you know, content-based matching. So there's technology we can pick more or less off the shelf to do fast matching against a lot of conditions, yeah.
B: That's why we're looking at two KEPs now, actually, yeah. The original doesn't really have an abstract of the behavior or the interface; it just dives directly into the implementation. I tried to write a version that focuses on the behavioral description and the interface, and I did not write down an implementation. I...
D
To
do
on
this
as
well
I'm
gonna
say
that
if
there's
an
alternate
proposal,
I
would
want
to
see
it
in
a
level
of
detail
similar.
What
Yui
is
described
like
I
I,
don't
want
to
hold
this
just
gives
maybe
there's
something
else
out
there.
I
would
want
to
see
that
something
else.
Concrete
Yui
has
a
clear,
actual
plan
that
solves
a
legitimate
need
in
our
control
plane
and
it
looks
like
it
is
viable
and
implementable.
D: I'm going to say: if there's an alternate design, I'd be looking for this level of detail, and I think I'd like to see it opened, or a doc for it. Otherwise, I would encourage you to each write it up and open it against the enhancements repo, and we would continue the conversation there. And yeah, we can move the implementation out into a separate repo that gets linked. Is that okay? Sorry, who had asked about that?
L: The KEP itself is simple, but I don't know how much background to give here, so I assume everyone is familiar with what a storage version is. Basically, when your API server persists your data into etcd, it will encode it in a certain version. For Deployments, for example, on 1.14 the API server will encode it into apps/v1 and then persist it in etcd, and we have a problem when you upgrade or downgrade your API server.
L: If your existing data in etcd is encoded in a different version than the one the API server expects to operate on after the upgrade, the API server might not be able to interpret what's in etcd. So we need to do a migration whenever there's a version skew between the etcd data and the default storage version. So far, we have already implemented the migration mechanism; what we lack is an automated migration trigger. For that, we had proposed that you expose the default storage version as a hash...
L: ...in, you know, the discovery API. I already have an open pull request to implement that, and this KEP is specifically about how to use the exposed storage version hash to automate the migration. The basic idea is that we will have a custom controller that's periodically calling the discovery API. It will keep a record of what the storage versions are, so it will know what's stored in etcd and what's the current default storage version, and if there's a skew it will trigger the migration accordingly.
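A rough sketch of that controller loop, assuming the storage version hash is exposed per resource in discovery as described; the bookkeeping and the trigger helper are hypothetical.

```go
package migratortrigger

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// lastMigratedHash maps "groupVersion/resource" to the storage version hash
// recorded when that resource was last migrated (hypothetical bookkeeping).
type lastMigratedHash map[string]string

// reconcile compares the hash currently advertised in discovery with the
// hash recorded at the last completed migration and triggers a new
// migration whenever they differ.
func (m lastMigratedHash) reconcile(list *metav1.APIResourceList, trigger func(resource string)) {
	for _, r := range list.APIResources {
		key := list.GroupVersion + "/" + r.Name
		hash := r.StorageVersionHash // the hash proposed to be exposed in discovery
		if hash == "" {
			continue // resource does not expose a hash
		}
		if m[key] != hash {
			trigger(key) // hypothetical: create a migration request for this resource
			m[key] = hash
		}
	}
}
```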
N: I don't see... how do I zoom? I can zoom, okay. I've set up 'k' to be an alias for a kubectl which I just built, and I'm running against my own version of Kubernetes right now, and it's using my own kubectl version. So let's see, we're going to try to apply a deployment; maybe I'm going to show the deployment first. It's a simple nginx deployment, not much to say about it.
N: Let's try and apply it. I'm going to mention that this goes directly to the PATCH endpoint, not to the create endpoint, and so this is creating the deployment. We can now look at the deployment that we just applied, and what's new is this massive managedFields section at the top of the object, which has the list of the managers who modified the object, when they did so, and the API version they used.
N
So
we
can
see,
for
example,
that
what
we
applied
is
this
set
of
fields
and
so
currently
it's
being
owned
by
the
apply
manager,
and
we
can
see
also
that
the
hypercube,
because
I'm
running
the
local
cluster
is
earning
some
of
the
status
fields.
For
example-
and
it
has
done
multiple
changes,
so
we
can
track
them
and
dice
the
day
for
each
of
these
changes.
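For reference, what the demo is scrolling through is the managedFields list in object metadata. A small sketch of reading it programmatically, using the metav1 entry fields (manager, operation, apiVersion, time, and the owned field set) as they appear in the current types, which is an assumption on my part about the final shape.

```go
package applydemo

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// printManagers lists who manages which fields of a Deployment and when
// they last changed them, mirroring what the demo reads from the YAML output.
func printManagers(d *appsv1.Deployment) {
	for _, mf := range d.GetManagedFields() {
		owned := ""
		if mf.FieldsV1 != nil {
			owned = string(mf.FieldsV1.Raw) // JSON tree of the fields this manager owns
		}
		fmt.Printf("manager=%s operation=%s apiVersion=%s time=%v fields=%s\n",
			mf.Manager, mf.Operation, mf.APIVersion, mf.Time, owned)
	}
}
```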
N: One thing I can do to mitigate that is to apply with force conflicts, which is going to say: hey, I want to re-grab these fields, give them back to me. And if we look at this again, we'll notice that the other manager has completely disappeared from this list of managed fields; it doesn't own anything. So that managed field set is completely removed, it's back to "I own it now", and it's been set back to three here.
H: It's always JSON, yes. Yeah, it is always returned, but you can flip it to say you're not interested in it, or that you are interested in it. Yeah, I was concerned about backward compatibility, and we get this with a gate, right? This is going in with a gate. There's...
A: So there is... yes, it is part of our schema. The thing that is not part of the schema is the particular format of the names. It is a tree structure, so I'm not sure that you'll be able to learn too much from looking at the schema, and you wouldn't... the schema doesn't tell you how the string names are constructed, other than that, yes.
N: Good question. We only keep track of when the fields were acquired, and if you lose ownership of these fields, the timestamp is going to disappear. So an HPA, or however it's called, a scaler, that would constantly update the number is only going to have one entry in the managed fields, which would be the last one, yeah.
G: Knowing the history of how we've screwed up template writes so many times, it would be good to have a couple of things that specifically call out and identify writes. How many times have we broken that, six or seven? Actually, yesterday I had to explain to someone, because it was correct; we were terribly confused that it was correct, which I thought was a success on our part.
A: Yeah, okay, I'm pretty excited about this. We have an umbrella issue with a list of things that we need to get to beta, and a few other things that we're going to do before the current release is cut, yeah. This is pretty huge. It's in alpha, and we're out of the feature branch, yeah, so play with it and let us know. All right, we're done with five minutes to spare. So thank you all for coming, and we'll see you in two weeks.