From YouTube: Kubernetes SIG API Machinery 20200115
B: I figured I'd just get this out of the way and then let you do your meeting. I'm Bob, or mrbobbytables online; I'm one of the release shadows, and we're trying to hit every SIG and just, you know, make sure they know about the various dates. The big one that's coming up, just a little under two weeks away, is enhancements freeze. Honestly, I'm not too worried about API Machinery; you all have been pretty good about keeping track of these dates, but I still wanna make sure folks know it.
A: Thank you so much. Okay, next item. This is kind of old already; it was put here before we cancelled the last meeting. I'm glad to have connected with many of you at KubeCon in San Diego. It was, at least for me, very impactful, being such a large KubeCon, the largest I have ever been to; I think it was around 12,000 people. For me the contrast was that on contributor day on Monday you more or less know the faces, and then the next day you couldn't even get into the restaurant because of so many people around. So it's impressive. I put a couple of links there to related talks. There was an intro to API Machinery; I heard it's not very good (I was the one giving it), and it's fine. And then there was a deep dive. I think the deep dive, especially for the people in this meeting, is interesting.
A: Yeah, good. So, following on the same topic, there are three KubeCons scheduled for this year. Amsterdam is the first one; I think the call for papers is closed, but as I see it, we still have the opportunity to sign up for an intro and a deep dive. I was talking to David Eads; it looks like David is going to be doing a deep dive. It's still not clear whether anybody wants to do the intro to SIG API Machinery, so if somebody on this call is interested, let me know; we can pair you up with somebody else, so you don't have to do all the work. I think it's good to always have an introductory session. Yeah, the deep dive and...
E: I spent a decent amount of time on that email, trying to digest what the author actually meant. It wasn't clearly obvious whether they were talking about the Kubernetes SIGs or the CNCF SIGs, because you can have both. I even went up to the point that I opened the form, and when I went to pick the SIG, there was no distinction between the Kubernetes and CNCF SIGs, and it only allowed me to sign up for one. Yeah, okay.
H: We realized, when we started discussing features being in beta for too long, that dry run and diff have been in beta for quite some time now, so we've decided to make them GA in 1.18. Diff heavily depends on dry run. It's mostly a SIG CLI feature, but I thought I would mention it. Diff and dry run are mostly done; we don't have a lot of changes left to make, and it's mostly in kubectl.
A: The apply working group gets together every two weeks, yeah, Tuesday mornings, 9:30 PST. Yes, there is quite a community of people already working on all these features. There is something for everybody: documentation, coding, performance testing. So there's enough. Cool, next one. David Ashpole is here, so let me open this. Do you mind if I share my screen, actually? Oh please, I need to stop sharing.
I: I'll try and be quick, but I think this should help. So hi everyone, my name is David Ashpole, I hail from SIG Instrumentation, and I've been playing around with tracing for a year or so now. I gave a talk at KubeCon, if you'd like to see a more fully fleshed-out version of this, but I just want to walk through what I'm doing and what changes would be required for this from SIG API Machinery. So actually, first I'll just start out with a super quick demo.
I: I've got the important part, which is specifying that I want it to be traced, and this CLI is just something I put together for the talk, so that might change. But the basic idea is that I should be able to send something to the API server and then see the amount of time it spent in the API server and the amount of time it was waiting on etcd, and possibly also, if etcd were to implement tracing, we would be able to see the spans from etcd as well.
I: So the API server pushes it to the OpenTelemetry agent, which basically can be configured to point to one of a number of different backends. The OpenTelemetry project is basically trying to make it so that you can write tracing in your code once, send it to a generic collector, and then from there send it to one of a variety of backends.
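The "write once, fan out to any backend" idea described here can be sketched with a toy collector. The types below are illustrative stand-ins, not the real OpenTelemetry API: a single piece of span data is handed to the collector once, and the collector forwards it to whichever backends it was configured with.

```go
package main

import "fmt"

// SpanData is a toy stand-in for the wire format a tracing library
// would hand to a collector (hypothetical names, not real OpenTelemetry types).
type SpanData struct {
	Name       string
	DurationMs int
}

// Backend is anything the collector can forward spans to.
type Backend interface {
	Export(s SpanData) string
}

type jaegerLike struct{}
type zipkinLike struct{}

func (jaegerLike) Export(s SpanData) string { return "jaeger:" + s.Name }
func (zipkinLike) Export(s SpanData) string { return "zipkin:" + s.Name }

// Collector receives a span once and fans it out to every configured
// backend: the instrumented code never needs to know which ones exist.
type Collector struct{ backends []Backend }

func (c Collector) Receive(s SpanData) []string {
	var out []string
	for _, b := range c.backends {
		out = append(out, b.Export(s))
	}
	return out
}

func main() {
	c := Collector{backends: []Backend{jaegerLike{}, zipkinLike{}}}
	fmt.Println(c.Receive(SpanData{Name: "create-deployment", DurationMs: 12}))
}
```

Swapping backends then becomes a configuration change on the collector, not a code change in the API server.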
I: So if I were to create a deployment with five pods — let's see if this works; I haven't tested this in a few months, last run at KubeCon. Okay, so if we create a deployment, we can see the request coming to the API server, the etcd transaction for the deployment creation. We can see the deployment controller creating the replica set and the requests to do that, and then we can even see the scheduler scheduling the pod, the kubelet doing its thing and even sending requests to the container runtime.
I: Okay, so, cool, fun. This is one change that involves API machinery. And then the second change, which may be more difficult, we'll see, is that to make this convenient to do, it would be nice to add context to the client-go interfaces. That way we can still use them, and we can pass the context that's being used for the tracing.
C: Okay, so my thoughts on it: obviously that's super cool to have, and it's hard to argue against it, but I'm gonna try anyway. Actually, I think the client-go context changes are obviously correct and would help us with lots of things, so I don't have any arguments there. I'm a little more nervous about making the API server push things, and my other concern is, yeah, I'd like to know more about...
C: Does it live with the API server, or can it be in the cluster, or is it an external service — or is it agnostic to that right now? Okay, that's something I want to see, either way. The other thing, from a theoretical standpoint: what do you do if multiple controllers are acting on an object, like out-of-band?
I: No, we don't. So the only mitigation we really have is to link the first one to the second, so that you can see the stuff that was definitely associated with your change, and then be able to see everything that happened afterward as a result of the next change. But yeah, once an object is updated, the controllers don't, I think, know which fields were changed by whom, and we wouldn't want to track it in that way, I don't think.
F: Well, but this is a different person, right? I'm assuming a different person, right. I'm assuming I would not open up open tracing to my average user; instead, either the user gets a limited view of it, where they can see only their own, or, more likely, someone says something like "it's not working right again" and an admin tries to figure that out. Yeah.
J: Especially for persisting it on objects: at that point it's visible and mutable and copyable, yeah. Like, I look at this and think: oh, this is a great demo, and as soon as you have something this powerful, that is so immediately useful, people will start building important things on top of it. So I'd ask how easily it can be moved or manipulated before we build important things on top. I agree with Daniel.
J: The context change I think is non-controversial, except for how we actually roll it out — okay, yeah. So I think that would unblock a lot of people and be the first step towards this, and also help things like priority and fairness, and making sure we clean up back-end stuff when the front end times out. So I'd really like to see that happen. I know Mike Danese is on the call as well; he was looking at that recently. Yeah.
I: Right, that's not what you'd end up doing. So, I think I've accidentally done this many times: if you export multiple trees under the same trace ID, most backends will just pick one and display it. So if you copied a span context from one object to another, and then controllers did things on both of them and exported spans, you would end up seeing just one tree or the other; it would pick one of them, hopefully.
I: Finding a value for it — so I can give a little bit of context on how this should be used. It obviously can be abused if you don't build things well, but generally people trace a small percentage of requests, and try to trace ones that are problematic or interesting in some way. So you wouldn't, for example, have this tracing everything all the time; you would want to trace a percent, a quarter of a percent, or even, just as I've implemented here, explicitly specified requests where you want a very detailed view of what happened.
C: In tracing terminology, what people are used to is request tracing; this is object-lifecycle tracing. — Yes, it is. — The first thing you showed was just the request: the API server goes to etcd, and whatever happens in between shows up in there; that's a request. But the other thing, where there's a value propagated through annotations — that is not request tracing. — It's not request tracing, yeah.
C: Boundaries make sense. We'd also probably want — once priority and fairness is in, which I think is the next agenda item — to separately count the time you spend waiting for service from the time it takes to service the request, because we're going to keep requests in queue.
J: So adding methods to an interface is not actually backwards compatible. If we're looking to be compatible, there's actually no way to do it without introducing, like, a parallel package. Yeah, you'd have to, again, make the existing package that people are currently using be the old legacy bad way forever. Yeah.
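The compatibility problem stated here is easy to demonstrate: in Go, any third-party type that implements an exported interface stops satisfying it the moment the interface grows a method. The names below are illustrative, not real client-go types.

```go
package main

import "fmt"

// v1: the published interface.
type Client interface {
	Get(name string) string
}

// A third-party implementation written against v1.
type thirdParty struct{}

func (thirdParty) Get(name string) string { return "got " + name }

// v2: what we wish v1 could evolve into. If Client itself grew this
// method, thirdParty (and every implementation like it across the
// ecosystem) would no longer satisfy it and would fail to compile —
// which is why the discussion ends at "parallel package", not
// "just add a method".
type ClientWithList interface {
	Client
	List() []string
}

func main() {
	var c Client = thirdParty{}
	// The type assertion fails: thirdParty implements v1 but not v2.
	_, ok := c.(ClientWithList)
	fmt.Println("thirdParty satisfies ClientWithList:", ok)
}
```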
C: First, I'd do a global rename of the existing methods to, like, a temporary name. That's easy to review, because you can just count the changes and make sure they're aligned, even though it's a global change, right. Okay, then I'd add the new methods with the context, and then I would update the callers everywhere in a series of changes.
J: David proposed actually snapshotting the existing package tree, making a copy of it and calling it, you know, deprecated-clientset or legacy-clientset, and then doing a single rename of all uses to the legacy clientset. Then doing what I proposed — or figuring out what you propose — but actually shipping it for a release, so that everyone who wants to upgrade makes the same package rename when they bump to 1.18 dependencies, and then heals their codebase, transitioning it to the new package.
F: I wasn't even gonna do anything that fancy. I mean, I would just do a straight copy. I think we still have, somewhere, the code that freezes entire directory trees. So I would do a straight copy, freeze the code for the entire directory tree, do the rename, and not even wire code generation up to it, right, because it's just there to die. It's there because it makes it easier for us to do a very fast, non-conflicting merge across the whole code base for changing the imports, and then take bunches of callers and do a mechanical update.
J: So it's important to note that there are two issues that have been raised. One is a consumer that is depending on something like controller-runtime and also depending on Kubernetes libraries; this would not help that consumer, because they don't control the code for controller-runtime, right. So until controller-runtime updates to use the context-specific methods, consumers of controller-runtime are stuck on whatever version controller-runtime wants. What it does help is someone who controls their entire code base and just needs to stage out the updates. Yeah. So.
J: Yeah, alright. I'm in favor of the rename — or the copy of the package, the snapshot of the current package to a legacy package — and then healing: add context and heal the kube codebase, maybe with a mix of generated changes. If we can get that all in at once, that'd be great; if it turns out to be complex, we might need to stage it.
J: I think the community pull request that was opened a long time ago proposing this talked about threading context through client-go — the create, delete, list, watch, update, patch, whatever methods — and also through informers. I am much less certain about the meaning of a context being passed to an informer, a ListerWatcher, or an informer method, because that spans multiple requests. So I think we should start with just the client interfaces, and handle the rest separately.
J: If you look back at the notes from the last time we talked about it, I had the action item to copy the community proposal into a KEP. I can still do that; it will not be this week. If someone wants to jump on it, summarize the notes, and get that into KEP form, I would not object. Otherwise, I will get to it, probably next week.
I: Yeah, we could definitely do that. That essentially amounts to the same set of changes I'm proposing here, which is: trace stuff in the API server and in client-go. Whether or not we do the insertion into an annotation, and then use that to allow controllers to propagate context between objects — if we feel, for example, that it doesn't handle concurrent updates well enough, we can certainly, at least for the alpha, start with just tracing requests to and from the API server. But yeah, that's definitely an option.
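The annotation-based propagation under discussion can be sketched as a round-trip: serialize the span context (here as a W3C-style traceparent string) into the object's annotations, and have a controller extract it later. The annotation key is hypothetical; the meeting had not settled on a name.

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical annotation key, chosen here for illustration only.
const traceAnnotation = "example.io/traceparent"

// Inject stores a traceparent value on an object's annotations so a
// controller acting on the object later can resume the span context.
func Inject(annotations map[string]string, traceparent string) {
	annotations[traceAnnotation] = traceparent
}

// Extract pulls the trace ID (second dash-separated field of a
// traceparent value) back out; ok is false if the annotation is
// absent or malformed.
func Extract(annotations map[string]string) (traceID string, ok bool) {
	tp, found := annotations[traceAnnotation]
	if !found {
		return "", false
	}
	parts := strings.Split(tp, "-")
	if len(parts) != 4 {
		return "", false
	}
	return parts[1], true
}

func main() {
	ann := map[string]string{}
	Inject(ann, "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
	id, ok := Extract(ann)
	fmt.Println(id, ok)
}
```

This is also where the concerns raised earlier live: anything stored this way is visible, mutable, and copyable by whoever can write the object.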
C: ...you know, whether they've configured this right. Or, you know: as a developer, I sent a request and it took longer than expected — why is that? Is it because I matched the wrong priority level, or is it because somebody else is hogging the cluster, or something like that? So I have some ideas on how we can make that more visible, but we're still working to get the basic functionality merged. So, thanks.
N: Alright — yes, okay. So again, happy new year, and I'm just here to bump you again about this KEP for the kubernetes master service to be of type ExternalName. I really hope to get the KEP at least implementable for 1.18, because the related code changes, I mean, won't be big. The most concerning part will be the API server controller, but everything on the Kubernetes side can already be done. I mean, I even have a PR open from October, from months ago, so it only needs updates there.
N: I'm just going to keep bugging you about it — I didn't want to be too pushy, but yeah.
N: I mean, because currently none of the clients are actually watching for this change, so every single controller or pod which is talking to the API server probably needs to be restarted. I mean, it is pretty much the same as when you roll out new certificates.
J: Just sweeping through it, it looks like this would mean that the address that's projected into pods to speak to the API server would become a DNS name instead of an IP address. — Yes, exactly. — Okay, so that's a pretty visible change. I know we've talked about this in a few different contexts, and whether or not that is...
N: I mean, so far I've looked into all of the dynamic clients and all the clients, like the Python client and client-go, and in all of those cases they are pretty much parsing the environment variable for the kubernetes hostname, simply putting it as the server address with https in front of it, and then just saying it's on port 443. All of the clients are doing the same thing.
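The client behavior described here — glue the projected host and port into an https URL — is indifferent to whether the host is an IP or a DNS name, which is the crux of the KEP's compatibility argument. A minimal sketch (the function name is mine; real clients read KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT from the environment):

```go
package main

import (
	"fmt"
	"net"
)

// apiServerURL mirrors what in-cluster clients do with the projected
// environment variables. net.JoinHostPort also brackets IPv6 literals,
// which naive string concatenation would get wrong.
func apiServerURL(host, port string) string {
	return "https://" + net.JoinHostPort(host, port)
}

func main() {
	// Today the projected host is a ClusterIP; under the KEP it would
	// be a DNS name. The client-side construction is identical.
	fmt.Println(apiServerURL("10.96.0.1", "443"))
	fmt.Println(apiServerURL("kubernetes.default.svc.cluster.local", "443"))
}
```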
N: Unless you have something specific — like, I tested it in a cluster that had been created or modified with this change, and I don't think there's any problem with it. I did it manually, I guess, with a lot of different controllers and everything on a patched Kubernetes cluster, and everything was running without any problems so far.
J: I'm trying to find a conversation that Tim Hockin was part of; it was around this exact issue — what would need to happen if we were going to start injecting DNS names in place of this, or in addition to this. I'll see if I can track that down and add a link to that conversation. And I think probably SIG Network could be copied on this; they might have an idea of interesting network topologies that this would or wouldn't work well with. Okay, I think...
A: Okay, thank you Martin. And the last topic is just a reminder for anybody that is new or forgot: we run a triage twice a week. They are public meetings, and a lot of the people that are here today go there. We go through everything that was submitted as a pull request or opened as an issue since the last meeting. Sometimes we get a lot, sometimes not, but it's an important way of keeping track of things.