From YouTube: Kubernetes SIG-Scheduling Weekly Meeting for 20200507
A
Aldo, multiple profiles graduated to beta. So basically, multiple profiles were introduced in the last release, the 1.18 release. It actually wasn't controlled by a feature gate or anything, so it's just in place, and you can use multiple profiles to craft different scheduling strategies within one scheduler binary. It is a very, very useful feature: you don't need to run multiple schedulers in separate processes to manage different kinds of workloads. Aldo, do you want to add more on this point?
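For context, each profile in the KubeSchedulerConfiguration is registered under its own schedulerName, and a workload opts in through the pod's spec.schedulerName. A minimal Go sketch follows; the profile name "high-throughput" is a made-up example, not something from the meeting.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newBatchPod builds a pod that opts into a hypothetical "high-throughput"
    // profile; all profiles are served by the same kube-scheduler binary.
    func newBatchPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "batch-job"},
            Spec: corev1.PodSpec{
                // Matches the schedulerName of one profile in the
                // KubeSchedulerConfiguration.
                SchedulerName: "high-throughput",
                Containers:    []corev1.Container{{Name: "main", Image: "busybox"}},
            },
        }
    }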
C
Yes, so the question is: what does it mean to graduate to beta? I can only think of the configuration API moving to beta, which is what we're already planning. So yes, basically we need to get consensus from everyone that these are basically the same thing, and then we can proceed with both.
A
Yeah, in terms of the profiles feature itself, it wasn't specifically controlled by a feature gate, and I think there are a couple of items that depend on the component config graduating to v1beta1. The profiles one is one of them, and I recall some other issues that also need changes to elements of the component config.
A
And also there is one that depends on the built-in plugins: the PostFilter definition, the extension point definition, so the PostFilter interface that is being used. Go ahead.
D
Scheduler profiles is a component config feature, so it's not something to put behind a feature gate. It makes sense just to graduate it to beta as you graduate component config to beta; I don't think you have any other option than to graduate it to beta if you're going with component config to beta.
D
The other thing is merging those plugins. Yeah, this has been discussed before, and I think it makes sense. We should move forward with that, similar to what we did with Score.
A
And NormalizeScore: just fuse them into one plugin. I mean, the argument could be that a different scheduler is not using them right now, so it's easier to convince people that it's not going to have a big impact. I understand that out-of-tree plugins might be using them, but the framework is alpha in general.
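The Score and NormalizeScore fusion mentioned here refers to the framework letting one plugin implement both callbacks, with NormalizeScore reached through ScoreExtensions. A rough Go sketch, using the framework package path from the 1.18 tree; the plugin itself and its scores are invented placeholders.

    package example

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
    )

    // fusedScorePlugin sketches one plugin providing both Score and
    // NormalizeScore, the shape the Score/NormalizeScore merge produced.
    type fusedScorePlugin struct{}

    func (pl *fusedScorePlugin) Name() string { return "ExampleScore" }

    // Score returns a raw score for one node (placeholder value).
    func (pl *fusedScorePlugin) Score(ctx context.Context, state *framework.CycleState, p *v1.Pod, nodeName string) (int64, *framework.Status) {
        return 50, nil
    }

    // ScoreExtensions exposes NormalizeScore on the same plugin.
    func (pl *fusedScorePlugin) ScoreExtensions() framework.ScoreExtensions { return pl }

    // NormalizeScore rescales all node scores after Score has run everywhere.
    func (pl *fusedScorePlugin) NormalizeScore(ctx context.Context, state *framework.CycleState, p *v1.Pod, scores framework.NodeScoreList) *framework.Status {
        return nil
    }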
B
I think David is here; I asked him to come join us. I was hoping we could start off with you talking a little bit about what OpenTelemetry is and what the goals are around the KEP that's linked upstream, and then, if you don't mind, David, I can show a quick demo of how we can implement it specifically for the scheduler. So David, do you think you could cover a little bit of what OpenTelemetry is? Yeah, sure.
E
I'll introduce myself first: David Ashpole. I've been working with SIG Node for a couple of years now, and I'm the tech lead for SIG Instrumentation. I've been working on trying to use tracing in Kubernetes for almost two years now. I presented on that at KubeCon last November, and there's a video of it as well; it will probably be a bit more in depth than what I do here today.
E
Let's see. So first I'll talk about OpenTelemetry. OpenTelemetry does tracing and it does metrics, it can correlate between the two, and from what I understand they're also working on logging. The whole idea is that they want to define a component-level telemetry API, where you instrument your component with OpenTelemetry client libraries and then choose a backend either at export time in your component, or even have it transmit the telemetry to some agent running separately that decides what backend to send your information to.

So, for example, I could instrument my component with the OpenTelemetry client library and then send traces to a collector, which could then send to Stackdriver or Zipkin or Jaeger or any of the tracing backends. So OpenTelemetry is really cool, and it's actually quite a nice fit for Kubernetes, because as developers and maintainers of Kubernetes we don't want to tie ourselves to a single backend for tracing or for metrics.
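A minimal sketch of that instrument-then-export flow in Go. The OpenTelemetry Go client was pre-1.0 at the time of this meeting; this sketch uses the later stable API, with a Zipkin exporter and a local collector URL as assumptions.

    package main

    import (
        "context"
        "log"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/exporters/zipkin"
        sdktrace "go.opentelemetry.io/otel/sdk/trace"
    )

    func main() {
        // Exporter: where finished spans are sent (a Zipkin backend here).
        exp, err := zipkin.New("http://localhost:9411/api/v2/spans")
        if err != nil {
            log.Fatal(err)
        }
        tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
        defer tp.Shutdown(context.Background())
        otel.SetTracerProvider(tp)

        // Instrumentation: component code only talks to the API, not the backend.
        _, span := otel.Tracer("demo").Start(context.Background(), "do-work")
        defer span.End()
    }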
E
Everyone seems quite happy with it, everyone knows how to use the client libraries, and we have a lot of tooling built around it. So at least for right now, the current proposal is just to use OpenTelemetry for tracing. Clayton's stuff is completely separate from that: it's about adding new Prometheus metrics, not about OpenTelemetry. OpenTelemetry isn't a specific set of metrics; you could compare OpenTelemetry more to Prometheus, at least to the Prometheus text format, than to what Clayton has been talking about.
E
So there are a couple of pieces to trying to get tracing to work in Kubernetes. One is tracing API requests. Normally the way tracing works is that you send a request to some front end, which then propagates that request throughout your system. Tracing is context-aware telemetry, meaning there is some context sent along with the request through your system, and in the end you get the whole picture. So if I send a request to the API server, which sends to its backing storage and comes back, that actually looks like the traditional tracing model, where you're serving an RPC and you get a nice tree. For us that's a fairly simple tree, but you get a tree of the requests. So that's one part.
E
So I start out by creating a simple ConfigMap, and I've added a trace argument to this. I've got this set up so that it's running some OpenTelemetry collectors, and those are sending to a Zipkin backend. So if I go over here, this is Zipkin, and try to find my traces: lo and behold, I have this trace from creating the ConfigMap.
E
I can see the API server received the request here, and then it sent a transaction to etcd here, and so you can sort of see how that work broke down. Pretty simple, but quite useful for complicated API requests, like ones that have to go through a series of admission controllers or aggregated API servers or anything like that.
E
For our simple case it's not super useful, but I can do something more fun: I can create a pod, and that is probably what everyone here is most interested in. So let's see, it's been a while since I've navigated this. If I create a pod, that, of course, is more interesting.
E
We have our initial request to the API server and the request to etcd, but then, for example, I have a trace from the scheduler, where it's calculating what node to schedule the pod on and creating the binding object to perform the scheduling. Then I even have some traces from the kubelet, including updating status; I have traces from container runtime calls to the CRI, and then the final status update that brings the pod to running. So that's pretty cool, and it works.
D
How does the trace get to the scheduler? I imagine you're tracing the whole code path from kubectl all the way to the API server, but then how did it get to the scheduler? The scheduler is just watching for the pod; it didn't make any request itself. How did the context propagate to the scheduler?
E
Yep.

D
But then there's no sequence between those two things, right, because each one is watching for the pods. Each controller, the scheduler being one and another controller watching as well, did something when the pod appeared, and then they each added their information. How does this go back? They will have two different sub-items in the context; how do they get merged? I guess there's no order here.
E
So this is a tree, and there can be many children of a single parent in a tree. The initial context that was generated by kubectl is used for the API server call, so that's one child of the tree. It's also used by the scheduler when it picks up that context from the annotation, and then that same one is used by the kubelet when it performs its work. And that's perfectly fine; it would be the same.
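A sketch in Go of the annotation-based propagation being described: the producer injects the current span context into the pod's annotations, and each watching component extracts it and parents its own span on it. The helper names and the use of the W3C trace-context format are assumptions for illustration, not the KEP's final design.

    package example

    import (
        "context"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/propagation"
        "go.opentelemetry.io/otel/trace"
        corev1 "k8s.io/api/core/v1"
    )

    var prop = propagation.TraceContext{} // W3C traceparent/tracestate keys

    // injectContext (hypothetical) stores the current trace context in the
    // pod's annotations before the object is written.
    func injectContext(ctx context.Context, pod *corev1.Pod) {
        if pod.Annotations == nil {
            pod.Annotations = map[string]string{}
        }
        prop.Inject(ctx, propagation.MapCarrier(pod.Annotations))
    }

    // spanFromPod (hypothetical) resumes that context in another component,
    // e.g. the scheduler, so its span becomes a child of the original request.
    func spanFromPod(ctx context.Context, pod *corev1.Pod) (context.Context, trace.Span) {
        ctx = prop.Extract(ctx, propagation.MapCarrier(pod.Annotations))
        return otel.Tracer("kube-scheduler").Start(ctx, "schedule-pod")
    }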
E
The kube-scheduler one is quite small. I think the other important thing is that hopefully you're not trying to synchronize across two components with nanosecond precision. Even though the trace here is very small, the thing that actually ends up being important is the time at which it happened, which is separated from other traces by some amount of time. But you're right that there can be issues with that.
E
So technically you attach labels to spans, and you propagate some information along with the context. But yeah, you can think of it like this: I could attach, for example, the container ID to my container start calls, and then it's very easy for me to search. Actually, I used to have a demo where I did that, but I forgot how to do it. You can search for specific tags, like this container ID, and it will find the span for you and show you the overall trace in which that action occurred.
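Attaching such a searchable tag is a one-liner with span attributes. A small Go sketch; the "container.id" key is illustrative, not a key the speaker named.

    package example

    import (
        "go.opentelemetry.io/otel/attribute"
        "go.opentelemetry.io/otel/trace"
    )

    // tagContainer adds a searchable attribute to a span, e.g. around a
    // container start call, so a backend can find the span by container ID.
    func tagContainer(span trace.Span, containerID string) {
        span.SetAttributes(attribute.String("container.id", containerID))
    }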
E
Okay, I just wanted to show super quick the deployment trace, which is interesting because it involves multiple objects. Not only are we dealing with a single pod here; we also have deployments and replica sets, and it turns out you can use this same model there too. For example, read in a deployment object in one controller, and when it creates a replica set, propagate the context from one object to the other: whenever you're acting on an object and creating another object as a result, carry the context along.
E
So it is also possible to use this to propagate context between objects, and that ends up being even more useful. For example, if there were a single pod that was slow to start, that would become super obvious here, and I can also very easily see the structure of all the work that's done to start my deployment.
E
None of this is implemented yet. There was a proposal that was fairly broad, and we were finding it hard to come to agreement because it involved seven or eight SIGs, so we're doing kind of a narrowed-down version with just the API server changes for now; that way we can go one step at a time.
E
That lets us solve some of those problems, but I think there's still some discussion about the model of propagation and how well it works, and that's been hard to come to an agreement on. So using an annotation, where we can actually do a lot of this out of tree first, will allow us to experiment and try this much more easily than if we had started with an object field.
B
Can everyone see my screen right now? I'm hopefully sharing, right? Yeah? We can, great. So I've already gone and created a couple of pods. The first thing I want to point out is that I'm using Jaeger; David was using Zipkin. That's one of the other cool things provided by OpenTelemetry: they have a sort of standard middleman collector that can receive traces in different formats and then translate those traces into other formats for different backends.
B
So if you have clusters where some are running Zipkin and you're running Jaeger and you want to switch between those, you can just configure this collector to point at the new type of backend without having to recompile or restart anything, which is pretty neat. Right here I'm just running Jaeger. So I scheduled a couple of pods, and I want to point out that I've added spans to just the default filter and score plugins, and it's over a hundred spans for one pod's scheduling cycle.
B
So if I look at one of these pods here, each one of these spans is literally two lines of code: it starts a span and then defers an end to the span. If you look at my PR, they're pretty simple code changes, but I'm able to narrow down the entire scheduling cycle for this pod to just this 1.27 milliseconds, and I added a couple of logs at different spots to see, you know, what was the suggested host from the scheduling cycle and what nodes were evaluated.
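The two-line pattern being described, sketched in Go against an approximate scheduler-framework Filter signature; the framework was alpha and its types have shifted since, and doFilter stands in for the plugin's existing logic.

    package example

    import (
        "context"

        "go.opentelemetry.io/otel"
        v1 "k8s.io/api/core/v1"
        framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
    )

    type NodeName struct{}

    func (pl *NodeName) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
        ctx, span := otel.Tracer("kube-scheduler").Start(ctx, "filter/NodeName") // line one
        defer span.End()                                                         // line two

        // The existing filtering logic now runs attributed to this span.
        return pl.doFilter(ctx, state, pod, nodeInfo)
    }

    // doFilter is a placeholder for the plugin's real filtering logic.
    func (pl *NodeName) doFilter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
        return nil
    }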
B
I can go into these plugins and see specifics. So we're filtering out one of these nodes here: why was it filtered out? It was unschedulable because of this taint. These are things we can get from logs, but they're really tough to parse out and put together into a whole context, whereas here I've got all the filter plugins run for each node. These are collapsible; that's just part of the backend.
B
This was the one with no problems that went through: on these nodes, no problems, and filtering runs through all of the plugins. This pod has a node affinity on it, and I can see that it just cuts out as soon as it gets to node affinity, and if I look at the reason why, it's because it doesn't match the node selector.
B
What you get for free basically looks like this, like "filter node". This was just two lines of code in the node name filter plugin: that's this start span here that says "filter node name". I also got the process name, which is the scheduler that I'm running, just because, you know, I had three pods that were all different schedulers and they were interfering with each other.
B
We were more interested in the outcome of running, say, the filter plugins on a node. But another cool thing about this library that I found out is that, even though you include the code for tracing, if you don't have an actual exporter configured you'll just get a no-op trace exporter. So if you don't want to be running this in your scheduler, because it's just too much information, you don't have anywhere to store it, or you don't want to be choking up your network, you can just disable that and you don't have any of this. And I'm sure we could probably add flags by levels: if we want really detailed traces, enable that just with some field in component config. But yeah, this is basically what it looks like. I've been using this custom scheduler image for the past two weeks or so to debug all the bugs I've been getting in my work, and I've found it incredibly helpful to be able to see at a glance what plugins failed and what the scores for different nodes were.
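The no-op fallback mentioned here, sketched in Go: with no exporter configured, a no-op tracer provider makes every span start and end a cheap no-op, so the instrumentation can stay in the code path. The configureTracing helper and its endpoint argument are assumptions for illustration.

    package example

    import (
        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/trace"
    )

    // configureTracing (hypothetical) wires up a real exporter only when an
    // endpoint is given; otherwise spans are created but never recorded.
    func configureTracing(exporterEndpoint string) {
        if exporterEndpoint == "" {
            otel.SetTracerProvider(trace.NewNoopTracerProvider())
            return
        }
        // ...build a real exporter and TracerProvider, as sketched earlier...
    }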
A
So if we are adopting this, what do you call it, OpenTelemetry, it adds calls into our scheduler code, right? My question is: is there any guidance to follow for a default setting of which layers we want to add the OpenTelemetry tracing to at present?
E
This is something that hopefully will be an outcome of the KEP, especially as we think about beta or beyond. One of the issues that is probably going to come up eventually is that most of the tracing backends have a limit on the number of spans that can be in a single trace, and a thousand seems to be the convention. That becomes very interesting if certain components want lots and lots of spans, like, say, the scheduler, but other components, like the kubelet or something, also want a large number of spans.
E
So we'll definitely have to keep an eye on how much information we include where. I don't think OpenTelemetry defines a standard for the behavior you should have when dropping spans; I think it's more concerned with just collecting and sending them. But yeah, a lot of backends will drop spans from traces if the trace gets too many.
E
The approach that I had proposed was that, by default, none of the components trace anything, and then you can specify in kubectl with --trace if you want stuff to be traced. That way we don't overwhelm people's trace backends. But there's also some cool stuff that OpenTelemetry has, like tail-based sampling, where the components sample everything and then a component later down the line keeps, for example, only the slowest traces, or traces at the 95th percentile and above, or something like that.
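For contrast, tail-based sampling of that kind lives in the collector, while the SDK itself offers head sampling decided when a trace starts. A Go sketch of the SDK side; the 5% ratio is an arbitrary example.

    package example

    import (
        sdktrace "go.opentelemetry.io/otel/sdk/trace"
    )

    // newSampledProvider keeps roughly 5% of new traces and follows the
    // parent's decision for propagated ones. Keeping only the slowest traces
    // (tail-based sampling) would instead be configured in the collector.
    func newSampledProvider() *sdktrace.TracerProvider {
        return sdktrace.NewTracerProvider(
            sdktrace.WithSampler(sdktrace.ParentBased(sdktrace.TraceIDRatioBased(0.05))),
        )
    }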
B
You know, just any kind of field that we can set. But anyway, my hope with showing this was that we could be one of the components that gets involved in this KEP and tries to get it started, and, with the pull requests I've sent, show how easy it is to start implementing this. Ultimately, when it does become more available through other components and those contexts are being attached to pods, we can easily plug into that.
E
I'd been discussing this with some of the API machinery folks for a while, and we had a KEP that I would say was fairly simple for components to implement: it involved a single line of code, and we would just generate traces for HTTP and RPC calls. So it wouldn't get you anything like this; you just had to essentially identify what parent object you were operating under. But we still weren't able to reach consensus on that, so it won't be 1.19.
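That single-line shape survives in today's contrib instrumentation; a Go sketch of wrapping an HTTP handler so every inbound request starts or continues a trace. The package and API are from current OpenTelemetry-Go contrib, not necessarily what that KEP specified, and the service name and port are placeholders.

    package main

    import (
        "net/http"

        "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
    )

    func main() {
        mux := http.NewServeMux()
        // The single added line: wrap the handler to trace every request.
        http.ListenAndServe(":8443", otelhttp.NewHandler(mux, "kube-apiserver"))
    }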
E
At least, this isn't anything that is going to happen until 1.20. Hopefully we can resurrect it then and talk about the model with a different set of people. I don't know; we'll have to see where it goes. But yeah, I definitely think the scheduler is one of the places where this will be most useful, and of course we're happy to keep everyone in the loop, and I appreciate everyone's involvement and feedback as well.
B
Yeah, we're a little over time, so I'm sorry, but this was just something that came up in one of the PRs. I think, Abdullah, you mentioned that we have a bunch of these cleanup PRs that are trying to add to the scheduler util package, but we're ultimately trying to deprecate that, I believe you mean, in favor of the external staging packages. Was that right?
D
Yeah, I guess the point is that there's no point in duplicating the code and moving it into the scheduler just to cut the dependencies. What we should be doing instead is try to have those utility functions themselves moved into staging and used by all the other components. Definitely, this is a much slower process, but it's the right process, yeah.
B
I definitely agree with you on the duplicating-code part: we don't want to be doing that, and I don't want to do that. My question is just that we do have a couple of other PRs that are really just taking these helpers and moving them into the scheduler. So, for the benefit of at least the authors of those PRs, would it be acceptable, in a case where it's not duplicating but actually moving, to collect these helpers in a scheduler util package, so that on our own time we can eventually break those out, and still get the contributions from these people who have been trying to help? Just to kind of move this cleanup effort along.
A
Thanks. So let's quickly run through the remaining agenda. I have some updates on the PostFilter, or what we call the preemption extension point, design and implementation. I think we'll go with the PostFilter suggestion that was given, and I have split the plan into two phases. The first phase is refactoring of the existing code, so no matter whether we go with PostFilter in this release or not, we can do it safely; it is just refactoring. For example, there are a lot of things using the generic scheduler.
A
Some of those things can be refactored into stateless functions, so I have three PRs in place to factor all of that out and reduce the dependencies, for example from profile to the generic scheduler, to make them as stateless as possible. Those three PRs are in place. Then phase two, where the real serious PRs come in, will be defining the PostFilter extension point and moving the hard-coded preemption logic into a default preemption plugin, which adopts the PostFilter interface.
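For reference, a sketch of the extension point shape being described: a plugin that runs after all nodes fail filtering, with the default preemption logic as one implementation. The names approximate the interface that later landed in the scheduler framework, not necessarily this meeting's draft.

    package example

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
    )

    // PostFilterPlugin runs when no node survived the filter phase; a default
    // preemption plugin can implement it to try to make the pod schedulable,
    // e.g. by evicting lower-priority pods and nominating a node.
    type PostFilterPlugin interface {
        framework.Plugin
        PostFilter(ctx context.Context, state *framework.CycleState, pod *v1.Pod,
            filteredNodeStatusMap framework.NodeToStatusMap) (*framework.PostFilterResult, *framework.Status)
    }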