From YouTube: Kubernetes SIG Node 20200428
A
Yeah, so I know in the last SIG Node meeting we talked about this. For myself personally, I've been trying to prepare for the Red Hat Summit digital event this week, so I've had less time to review enhancements. I don't know if anyone else has had an opportunity and wanted to raise some concerns, or both pros and cons, I guess.
A
I don't know if Dawn had a chance to review it — unfortunately, I know she has an urgent issue today — but thank you for the reminder. And in my case, once the Red Hat Summit is done this week, I will have more cycles to review your proposal too.
D
We first started discussing this in the Data Protection Working Group, because we want to take application snapshots. We want to have a way to quiesce the application before taking a snapshot and unquiesce it after that. So for that, we need to send a command to the pod to trigger something to do the quiesce — and then Tim, when he was reviewing it, mentioned that there are many other use cases.
D
Right, so he listed the use cases. So: to be able to run a command in a pod to quiesce an application before taking a snapshot, and unquiesce it afterwards — is it using snapshots like CRI, like checkpoint/restore? — so for the snapshot, we are talking about application snapshots here, so it could be that you have, like, MongoDB or something, and a command to quiesce the database. Okay, yeah. And then there are a few other use cases, like sending a signal to, say, reload configurations.
D
Or to start an upgrade, or you want to start some database migration. So those are other use cases where you want to be able to send a signal to the pod. Because of that — there are multiple use cases — we want to have a common way to do this type of thing, and we also want to make it secure, so have the kubelet execute those commands.
D
So that's why we have this proposal here. There are two parts. The first is to have an in-line pod definition for a container notifier, which defines whatever command you want to run inside the container, and the second part is to have a Notification API object to request this particular container notifier. So the in-line definition would look something like this: inside a container, you have those notifiers — and there could be multiple of them.
D
That's why this is an array. And then in the definition of this ContainerNotifier you have a unique name, and then the handler, which basically specifies how you run a command — that's the exec section — and then a timeout. So this is the definition inside the container itself, and then we also need to have this Notification API object. This will be an API object because we need to be able to tell the kubelet when to run this.
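A minimal Go sketch of what the in-line definition described above might look like as API types. The field names (name, an exec handler, a timeout) are assumptions taken from the spoken description, not the actual proposal text:

```go
package notifier

import "time"

// ContainerNotifier is a hypothetical in-line entry in a container spec:
// a named command that the kubelet can be asked to run inside the container.
type ContainerNotifier struct {
	// Name uniquely identifies this notifier within the container.
	Name string
	// Handler describes how to invoke the command (only exec here).
	Handler NotifierHandler
	// Timeout bounds how long a single invocation may run.
	Timeout time.Duration
}

// NotifierHandler mirrors the lifecycle-handler shape: only exec for now.
type NotifierHandler struct {
	Exec *ExecAction
}

// ExecAction is the command run inside the container, e.g. a quiesce script.
type ExecAction struct {
	Command []string
}
```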
D
So an external controller will create this API object, and then the spec will have the container notifier name, and it will also specify how you select the pods you run this command on. And then there is the status: basically, after this command has run, the kubelet needs to change the status, so the external controller knows whether this is complete or not. That's what this one is.
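And a companion sketch of the Notification object as described — created by the external controller, acted on by the kubelet, which writes the status back. Again, the field names are guesses from the discussion rather than the KEP text:

```go
package notifier

// Notification is a hypothetical cluster API object, created by an external
// controller, that asks the kubelet to run a named ContainerNotifier.
type Notification struct {
	Spec   NotificationSpec
	Status NotificationStatus
}

type NotificationSpec struct {
	// ContainerNotifierName names the notifier defined in-line in the pod spec.
	ContainerNotifierName string
	// PodSelector picks the pods whose containers the command should run in.
	PodSelector map[string]string
}

type NotificationStatus struct {
	// Phase is updated by the kubelet so the external controller can tell
	// whether the command is pending, running, succeeded, or failed.
	Phase string
	// Message carries failure detail when Phase is "Failed".
	Message string
}
```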
D
To clarify what they run — I just want to use the example of taking a snapshot, right. So if you want to snapshot MySQL, you need to run commands like lock tables, right, and you need to run that command inside the container. So that would be the type of command you run, right.
D
Yeah, that is if you're running from outside, right, but right now we are talking about — yeah, it's a similar thing: if you're running kubectl exec, you're running from outside, but this is so that you will be able to run this command inside the container. Actually, no —
D
The kubelet will be running this, because we will have those commands defined, right, so the kubelet will be the one running those commands. So the way you run it is the same — it's only the timing: from outside, you need to be triggered from outside, you know when to run this, but the way to run the command should be the same as all those other commands, yeah, right.
D
So note that we have to be able to communicate between this external controller and the kubelet. So right now the proposal is to say: okay, an external controller will create this Notification object to request "I want you to run this command," and then the kubelet will be watching this Notification object; once it sees it, it knows, okay, now I need to find this notifier and run it. And then there is also the status field that tells the external controller whether this has been triggered or is still pending.
D
So yeah, those are the things that we need to discuss, but we would want to have some retries: if the first try did not succeed, then there should be some retries. But then, of course, we don't want this to be running forever — this is time-sensitive, right, and we also don't want the application to be frozen forever. So there are some open questions in what we do right now.
D
It can be handled by, let's say, the external controller: the external controller can send out this request and wait to see whether there is any change — whether the status is changing or not. So I think those types of details we still need to iron out, because we'd like to get some initial feedback first, but we can.
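A sketch of the controller-side pattern being described — create the request, then poll the status with a bounded deadline so a failed quiesce cannot freeze the application forever. The client here is a local stand-in, not a real client-go API:

```go
package notifier

import (
	"errors"
	"time"
)

// StatusGetter is a stand-in for whatever client the external controller
// uses to re-read the Notification it created.
type StatusGetter func(name string) (NotificationStatus, error)

// WaitForNotification polls until the kubelet reports a terminal phase or
// the deadline expires. The deadline matters because a quiesced application
// must not stay frozen if the command never completes.
func WaitForNotification(get StatusGetter, name string, interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		st, err := get(name)
		if err == nil {
			switch st.Phase {
			case "Succeeded":
				return nil
			case "Failed":
				return errors.New("notifier failed: " + st.Message)
			}
		}
		time.Sleep(interval) // transient errors and non-terminal phases: retry
	}
	return errors.New("timed out waiting for notification " + name)
}
```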
A
But you're getting kind of top-of-mind feedback here. So: understanding how many times you'd expect the kubelet to deliver a signal, and what it should do when it is unable to deliver that signal, is probably the first thought that comes to mind. The second thought that comes to mind is: today the kubelet basically updates two — well, really three — objects: the node, the node lease, and the pod status. That's kind of what I typically think of as the primary —
A
— what is done on that object, and if it would differ from pod status; maybe talking through expectations on that would be helpful, because one of the things that concerns me is: if we try to increase pod density, what is the factor between that and the kubelet's ability to write back notification status, just as an example?
A
The relationship of this resource with the node authorizer is also helpful to understand. So today, one of the things that we are careful about is how many resources the kubelet chooses to list and watch: today we list-watch pods and the node, and then things like secrets and config maps we have kind of moved to a caching mechanism. So understanding how you would expect this notifier to be watched by the kubelet is important.
D
So there is somebody — Kevin Fox — who added some comments here about the pod security policy. I think Tim replied to that; basically, you can read it. Basically the fellow was saying, oh, maybe we should have something like this, and Tim was saying that it does not look like we have done something at this granularity. What do you think he was saying? I think there's some —
A
Security policy is kind of a different layer. On the node authorizer — if you want to look up the node authorizer afterwards — it's basically an admission plugin that restricts the set of resources that a kubelet can see, beyond traditional RBAC: so, for example, the way that the kubelet is only able to see secrets that are associated with the pods that are bound to it.
D
Like Tim said — what we have done previously: we actually have another KEP that was already merged, more than a year ago, in which we have everything outside. Basically, the hook itself is a CRD, and then we have a controller, which is also external, and it uses the pod exec subresource to do this type of thing.
D
So that was something that we had already done; the KEP was merged and we were doing the implementation, and then the API reviewers — including Tim and, I think, somebody else, Daniel — were saying that this would be better defined inside the container itself. So that's why we are now doing a completely different design.
D
We actually had an implementation of that, and while doing the code review we were asked to think about this approach. So I think that's why Tim was saying that he wants us to, you know, come to SIG Node and see what you guys are thinking, and get some feedback before we get too deep into the design. So if I —
D
That approach would work; it's flexible. I think the concern for an API reviewer is the security aspect — I think that's the main concern. So let me find our original KEP — we actually went through several iterations; this is another one. When I find it I can go to it — this is the original KEP. We actually even opened a repo just to implement this one, implementing the API and the controller.
D
And then we got comments to do this completely differently. So this one — let me actually go to the proposal, let's see. Yeah, so this is the CRD approach, right. So here in the CRD you select pods and then you have the action; the action defines how you want to run the command inside a container. So that would work, yeah.
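For contrast, a sketch of the shape of that earlier CRD approach as it is described here — a standalone hook object with a pod selection and an action, run by an external controller via the pod exec subresource. The type and field names are illustrative guesses, not necessarily the merged KEP's:

```go
package notifier

// ExecutionHook sketches the earlier out-of-tree approach: the hook is its
// own CRD rather than an in-line entry in the container spec.
type ExecutionHook struct {
	Spec ExecutionHookSpec
}

type ExecutionHookSpec struct {
	// PodSelection identifies the pods (and optionally containers) to act on.
	PodSelection PodSelection
	// Action defines the command an external controller runs inside the
	// selected containers via the pod exec subresource.
	Action HookAction
}

type PodSelection struct {
	PodName        string
	ContainerNames []string
}

type HookAction struct {
	Command []string
}
```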
D
You see that we had — we basically got comments from both Tim and Daniel, and then somewhere — yeah, Daniel was saying that, because in our original KEP itself we also listed those other approaches as alternatives: so basically we have a main approach, which is the CRD, and then we have those alternative approaches, which is to do this —
D
— you know, adding those commands inside the container. I mean, it's slightly different from what we are showing in the container notifier now; this was still trying to add it inside the lifecycle section — similar, not completely the same. But when we were implementing it, we got feedback again: they were saying that we should really seriously consider how to do this inside the container itself.
H
I think what Derek mentioned makes perfect sense. We need to be able to answer those questions anyway. So if there's anything from SIG Node with regard to this KEP and this idea, let us know, so that we can do our research. And Derek, if you do have more resources in SIG Node — if you have some samples that we can look into, and some standard which we should be following — that would be highly appreciated.
A
The other general thought I have here is: is there some convention you're expecting for container notifier terminology — like, in your example, locking or unlocking tables or something? Is there some spec or standard you guys are looking to apply that gives these arbitrarily named notifiers meaning?
D
You mean like the specific ones — like if we want to say "quiesce" or something? Yeah. So if we are doing quiesce for snapshotting, usually we call it quiesce or freeze — those types of names, yeah, we use those names. There are some common names we are pretty sure to use, but for other types of use cases I'm not so sure; there are many other use cases, and I don't know what the better names for those would be.
D
Yes — so, for example, quiesce we can definitely define: we call it quiesce or freeze, and it'd be easy to write that down, so for those we do have some common names, right. And for the other use cases — we listed a few here — well, first of all, we can just say either quiesce or freeze, something like that, right; that's the easy part. But for the others I'm not quite sure what would be a better name — though for those we can, because those are the things that we need.
A
If I could — maybe one use case that I had in mind that we could explore, that's maybe more basic: right now we don't have a way to tell a pod to restart, or a container to restart, and that has been a request by others in the community for probably the last four years or so. It feels like maybe the ground-zero use case that we could build from, and I'm wondering if that's something that you had explored or felt was in scope of this proposal.
I
It just execs into the container and there's no way to get a callback; it just sits there. So exec is really not an awesome mechanism for these kinds of things. And given the lock-tables example that you're talking about, I mean, it would be much easier to do that at a higher level and just make, you know, a network-level call into the pod — the pod has some API; the container has some API inside of it, right, so that when you hit, you know, a /lockTables endpoint, there's something in it that executes the lock-tables command, right — or it could even be at the application protocol level. It just doesn't seem like the right level to do this at, to me. And I do worry about the kubelet doing a bunch of execs, because exec has always been a problem in the kubelet.
I
To my mind, the solution that I go to is: you've got a CRD that represents that deployment, right, and there's an operator that's managing that deployment. Well, you could just set something on the CRD spec — say, quiesce equals true — the operator sees that, and the operator connects to each of the pods and does whatever needs to be done. Yeah.
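A sketch of the alternative being described here: the operator watches its own CRD, and when the spec asks for a quiesce it calls an application-level endpoint on each pod — no kubelet exec involved. The CRD shape, the /quiesce endpoint, and the port are all illustrative assumptions:

```go
package notifier

import (
	"fmt"
	"net/http"
)

// AppSpec is a stand-in for the operator's CRD spec.
type AppSpec struct {
	Quiesce bool     // the field a backup controller (or admin) sets to true
	PodIPs  []string // pod endpoints the operator manages
}

// Reconcile is the operator-side reaction: on quiesce=true, call a
// hypothetical application-level endpoint served inside each pod.
func Reconcile(spec AppSpec) error {
	if !spec.Quiesce {
		return nil
	}
	for _, ip := range spec.PodIPs {
		resp, err := http.Post(fmt.Sprintf("http://%s:8080/quiesce", ip), "", nil)
		if err != nil {
			return err
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("pod %s refused to quiesce: %s", ip, resp.Status)
		}
	}
	return nil
}
```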
A
...to capture the challenges with exec, as well as the other alternatives that have been explored, like an application-level operator, something like that — and maybe we can come back and revisit the topic once those updates have been made.
A
Okay, we've spent a good bit of time on this, and I just want to be considerate of others who prepared an agenda. So maybe we could move on to the next topic — and a big thank-you for joining us today to share your ideas. Thanks.
J
Okay, I will go over the most important key points, and if anything needs to be better understood, or if you need more explanation, just let me know and I will go back. So, I'm Krzysztof — Chris — from Samsung R&D, and we prepared a presentation to discuss topology manager changes and NUMA-aware scheduling. The presentation was designed to have two parts; we will see if we can fit both in.
J
So we came up with the new idea to use the annotation field for this, and we think that this may be a way to go. Another way is approach C: to use a new field, topology policy, in the RuntimeClass object — it would look like this, the runtime class with a topology policy. These are the new ways we can go, along with the pod spec extension, and there is also a new approach for the general topology manager policies in our KEP proposal.
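A sketch of what approach C might look like as a type: a hypothetical topologyPolicy field added to RuntimeClass, which a pod then opts into via its existing runtimeClassName field. This field does not exist in the real RuntimeClass API; it is only an illustration of the proposal:

```go
package topology

// RuntimeClassSpec sketches approach C: RuntimeClass grows a hypothetical
// TopologyPolicy field, so a pod selects a policy indirectly by naming a
// runtime class.
type RuntimeClassSpec struct {
	Handler string
	// TopologyPolicy would override the node-level topology manager policy
	// for pods using this class: "none", "best-effort", "restricted",
	// or "single-numa-node".
	TopologyPolicy string
}

// PodSpecFragment shows the pod side: the existing RuntimeClassName field
// is the only thing the pod has to set.
type PodSpecFragment struct {
	RuntimeClassName *string
}
```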
J
This may confuse someone. We can also go strictly and choose approach E: not go from the annotations to the runtime class, but directly go to the runtime class, and this would also be easier than extending the pod spec. The disadvantage is that it may be hard to determine which policy should be used when a different dynamic policy is configured on a node, and also that the runtime class specifies the topology policy per pod.
J
Okay, let me go forward. We have several use cases for the topology policy per pod and for our changes — practical telecom use cases. These are the user stories: UPF from 5G requires NUMA alignment, as was mentioned earlier in the presentations; vRAN and ORAN, also from 5G, require CPU, memory, and device pinning for proper deployment; and also MEC, which is the mobile edge cloud.
J
MEC blueprints require CPU and memory pinning, and a MEC deployment has one or two servers that host the architecture software and will also contain the clients' applications. In this scenario, the topology policy per pod is required to utilize the resources of those two servers — or this one server — in the most efficient way, because different applications can have different requirements.
J
The issue with the topology manager right now is that if you would like to guarantee the allocation of all the resources for all containers in a pod from the same NUMA node, we can't do this right now, and this can cause a decrease in performance. So there are several potential ways to resolve this.
J
If we didn't introduce the topology policy per pod, we would end up with the second case here: if we deploy a node with that kind of policy and the requirements of such pods are not high, the total utilization of the server won't be high — or, even otherwise, some pods can be rejected if they are scheduled to this node. So not providing the topology policy per pod can cause some issues.
J
The second benefit is that the operator can apply a particular topology policy only to the pods which really need coordination of resource allocation for performance on the worker node — that is the issue from the previous slide — and this will reduce the total admission time on a node. The third benefit was actually featured by Mr. Cherrick from Ericsson.
J
...scheduled to the same worker node — and in this case we can have a common worker node for the DPDK workload, and we don't have to separate the workloads onto different nodes because they have different requirements. That's part one. What do you think about the approaches? What is the desired approach, in your opinion, and maybe what are the next steps? Those are my questions — let me know what you think.
A
Yeah, so I think that's a lot for a number of us to absorb, so I appreciate you going through that. I had reviewed, I think, an earlier form of this deck — I think maybe Victor had shared it — and I think, at the end of the day, what I'm struggling to understand — or not understand, I just want to be clear on — is this: it feels that within the telco space there's a strong feeling that the control plane needs to be node-local-topology aware, and —
J
So you're concerned whether the pod spec or the runtime class is the best way here? I'm not sure if I understand what you're struggling with.
F
Let me just add in — Derek, you're right: some of us met as a subgroup last week, and during that meeting what we were mainly focusing on was how to make the scheduler NUMA-aware, and I think that's part two of Chris's deck. So I myself didn't particularly review part one of this either, and I thought today what we were going to talk about in this —
F
But the thing is, you know, I had pointed Derek to that, and so Derek, I think, spent two cycles looking at it and provided a little feedback this morning. And so I think, you know, for this part one, Chris, I didn't quite prepare for this meeting to look at it and provide feedback either; for this particular part, I was thinking we were going to talk about the scheduler and how to potentially add the NUMA awareness there. So I think there was probably a little mix-up in the signals.
A
...to inform, at the pod level, how to make a node-local scheduling decision, so that we can extend or enhance what's there in the kubelet today. I think the key thing to understand, assuming that is an agreed-upon principle, is: how is that information made available on the control plane — via a place to rally on in the API server — without needing to do point-to-point communication between the various components?
A
I think, in general, that approach is contradictory to the generality of the architectural goals, which were that we should have a set of components that rally around config and state and reconciliation by just reading state from the API server, and not have components communicate directly. So the idea of a scheduler talking to each kubelet —
A
The first thing — I think in the feedback I tried to put on the deck beforehand, or at least one of the decks I'd seen — was saying: okay, I appreciate that there's a desire to make the kubelet's node-local topology choices influenced externally, and so the kind of thought I was wondering if you guys had explored was: let's imagine there's a new topology management component run on the control plane. It could be a pod co-located with the scheduler — I don't know.
A
Okay, the kubelet consults that component, like a remote topology manager, to figure out how to do something, rather than making that choice locally. In that regard, you have the kubelet phoning home to the control plane to figure out what should be done, rather than the control plane floating out to each kubelet to figure out what should be done. I don't know if that was explored in great depth, but in general I would not recommend pursuing solutions that have schedulers making RPC calls out to each kubelet.
F
That's exactly what we discussed. It's like — you know, the data is down there, and the thinking was: why would we want to almost replicate the data up into, you know, where the API server could get it, so that we can make that decision at scheduling time? And I think the overall feeling was, we were thinking, hey, why can't we just call out — I mean, the data —
F
— is there; let them make the decision one time. But if that sort of breaks the architecture — which I didn't know; I don't think any of us were thinking, hey, yep, that was an architecture no-no — then, you know, there were the other options that we talked about, which were essentially exporting the data from the kubelet so that, for example, the scheduler plugin could make that decision instead of making a call-out. You know, that option was there, and there were a couple of options discussed which, you know, are probably still viable.
F
Now that the direction is saying, hey, don't make a call-out from scheduling context to the kubelet, that sort of tells me: well, let's get rid of that option and go back to look at, I think, options A and A1, which really come down to what is the best way to export the data — either, in your options, I think, extending the node object or coming up with another custom object — and then, what decision do we need to make?
F
So I think what I got out of it is: we need to go back and say, hey, don't make the call-out, and let's talk more about the options to figure out the best way to get the information there — and then go back and look at this sort of part one: okay, how do we represent that? Do we do an API change in the pod spec to do it?
F
...for a, you know, policy per pod — because I think there's some good stuff in there that you guys have worked on, and I know, for myself, I would certainly like to spend a few more cycles on it, and I'm pretty sure that, you know, Kevin and Connor and Francisco — all of us that were there looking at this — would appreciate a few more cycles to look at that. And yeah, I think that's sort of the next step.
J
I wanted to present the second part and discuss it today, but my question is what we should do right now, since I see that we are short on time. So, I don't know — go to the second approach and just present it quickly, or maybe reschedule it for another meeting? What do you think would be the best way here to move forward with this? Yeah.
A
I'm going to keep the meeting open just so that you can capture your thoughts, and then we can upload it to YouTube for those who can't stay to review. So if you want to spend another ten minutes to walk through your proposal, that's fine — just understand that those who can't stay will have to catch up and provide their questions afterwards. Okay.
J
Okay, so both approaches — the first approach and the second approach — are based on the scheduler framework. For those who don't know, the scheduler is a pluggable architecture, which makes customization of the scheduler very easy; it adds multiple extension points where plugins can be invoked, and, as you can see in the picture, we have the scheduling cycle and the binding cycle, with different extension points, and at those extension points a plugin can be called.
J
This is the broader scope of how scheduling works right now — the scheduling process — and in this process the plugins are called. One such plugin is the node-resources plugin, which is called in the filter and the score phases; it uses the allocatable field of the v1 Node object and the requests field of the v1 Pod object, but it is not NUMA-aware at all right now.
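A simplified sketch of the fit check being described — comparing a pod's requests against the node's allocatable, which is all the current plugin can see. The types are local stand-ins rather than the real scheduler framework interfaces (whose import paths and signatures have shifted between releases); the point is that nothing here knows how allocatable is split across NUMA nodes:

```go
package topology

// Resources is a simplified stand-in for v1.ResourceList
// (e.g. cpu in millicores, memory in bytes).
type Resources map[string]int64

// FitsNode mirrors what the node-resources filter does today: check the
// pod's requests against node allocatable minus what is already requested.
// It is NUMA-blind — allocatable is one flat number per resource for the
// whole node.
func FitsNode(podRequests, nodeAllocatable, nodeRequested Resources) bool {
	for name, req := range podRequests {
		if nodeRequested[name]+req > nodeAllocatable[name] {
			return false
		}
	}
	return true
}
```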
J
The issue with NUMA-aware scheduling is that right now the operator needs to provide some node selector to specify the particular node that can handle the topology requirements of the pod. If you don't specify that and we create the pod, the scheduler will try to schedule it, it can be bound to a node that can't satisfy the topology requirement, and the pod will fail admission.
J
This additional field — or fields — could be a subpart of the allocatable field or the capacity field of the node's resources, so this may not even require new fields in the Node object, and a scheduler plugin would be used to select or score the nodes that could satisfy the requirement. The challenges with that approach: the first part is that the Node object right now doesn't carry node resources at the NUMA level — it doesn't have that information — so the challenge is how to collect them and expose them in the v1 Node object. The second challenge is how the plugin should coordinate.
J
The first way is that it can filter out the nodes that can't handle the requirement, and this can be done like "first fit": the first node the pod can fit on is selected right away. The second way is that the plugin can score the nodes, and here there is also the challenge of how to do that; there's a concept of scoring — this is by Alexey Perevalov, and you can find it right here — which could also be applicable to the other approaches.
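A sketch of the per-NUMA variant being proposed: if the node exposed free resources per NUMA zone, a scheduler plugin could filter out nodes where no single zone can hold the pod (the single-numa-node case), and score the rest. The types are illustrative, reusing the Resources map from the sketch above:

```go
package topology

// NUMAZone is the per-NUMA-node free-resource view the kubelet would need
// to export (today this view exists only inside the kubelet).
type NUMAZone struct {
	ID   int
	Free Resources
}

// fitsZone reports whether one NUMA zone can hold the whole request.
func fitsZone(z NUMAZone, req Resources) bool {
	for name, r := range req {
		if z.Free[name] < r {
			return false
		}
	}
	return true
}

// FilterSingleNUMANode keeps a node only if at least one zone fits the pod —
// a rough analogue of the single-numa-node policy at scheduling time.
func FilterSingleNUMANode(zones []NUMAZone, req Resources) bool {
	for _, z := range zones {
		if fitsZone(z, req) {
			return true
		}
	}
	return false
}
```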
J
I think there is also the challenge that a scheduler plugin which scores the nodes doesn't guarantee that the pod will actually be deployed on the node, because there can still be a rejection on the bound node. This is the issue I mentioned: the scheduler makes its decision that, okay, the pod can fit on this node, it binds the pod to the node, and then the node calculates the hints on its own and produces the admission result on its own.
J
This is how approach A looks in a broader scope. The red parts are the parts that are changed by this approach, and the important disadvantage — or maybe challenge — is how the scheduler can calculate whether a pod's requirement can be satisfied on the node, because if we ran the same hint-calculation algorithm, it might cause a huge decrease in scheduling performance. Scheduler approach A1 is nearly the same as scheduler approach A, but instead of providing new fields in the v1 Node object —
J
Okay, there are not a lot of slides at the end. This is the comparison of approaches — and okay, B is not worth mentioning. The advantage of this approach is that it respects the architecture of Kubernetes, and actually there is a SIG Scheduling maintainer who left a comment, and probably he is behind this approach — maybe we could get more opinions about that — but I think that comment implied that introducing a dedicated object, without trying to wire it into the existing plugin, would be a good plan. — Chris? Yes, hi.
J
If we make a new approach, moving the topology manager to the control plane or something like that, this can still be done the same way. So this is handled by the CPU manager — the pinned CPUs — and the memory manager for the memory cells, and the information that would be exposed is the amount of NUMA resources used and the amount that is free, right.
A
There's no sense in making the information available in A or A1 if the actual pinning decision isn't being made on the control plane itself for the kubelet to respect, and so we need to show how that would be. And if that meant that you'd have a future topology manager plugin option that said "remote," and it knew how to fetch that resource to find the pinning decision — maybe that's what you could do next to supplement either A or A1.
J
The second concern is the split-brain scenario: there is the calculation on the scheduler side and there is the calculation on the node side, and they may not be the same — and we don't have any solution for this right now. And the third disadvantage is that the pod can still be rejected on the node, I would say.
J
Yes, this is exactly the case, but it was mostly put here as a disadvantage in comparison to the B approach, which basically eliminates this kind of problem. Yeah — that's why it's here, but, as you said, it is a very rare problem.
A
But yeah, I guess, in general, to me the thing that needs to be understood is: with either A or A1, you're basically saying, I want the control plane to decide both the node and the node-local-awareness scheduling decisions — and so the next step needs to be: if that is to be agreed upon, then where do we capture the node-local scheduling decision that the kubelet acknowledges?
K
And we end up with a problem where the control plane actually needs to understand what a NUMA node actually means, because if you have a heterogeneous cluster where you mix, say, Intel and AMD machines, or even ARM, the NUMA meaning is different — and likewise when you have, like, CPUs, NUMA nodes, and so on. So if you're interested, I can just share, for example, ways this can fail, yeah.
A
...decisions that could be made to constrain it — even just going all the way back to months ago, saying: this node can support X number of pods of a fixed shape; then you don't run into this arbitrary scheduling problem, because you have a uniform, homogeneous pod unit you're scheduling with fixed sizes. I have yet to hear — and I don't think I'll ever hear — a way to constrain the design space so that we don't have to support everything, but I keep hoping one day I'll hear that. And so I, I, I —
C
Can I ask one question? So you mentioned that the calculation is sort of expensive. Is that a one-time calculation? So, for example, once a pod is assigned to a specific NUMA index on a specific node, the calculation happens down there, on that node specifically, and that calculation is one-time, right? So, I mean, the metric — or the kind of spec — has to be reported by the hint-calculation algorithm based on the pods already assigned, right? My point is that I want to understand whether the calculation is just one-time, based on the pods assigned to some specific NUMA index, or the resources there.
C
So is it possible to move the calculation to the control plane? Maybe I'm wrong — I just want to understand. Suppose we are at the very first moment, so no pod has been assigned: right now, the NUMA layout of each node is static, right? So we have that static information, just like what we get for the allocatable resources, so that's available on the scheduler side, and then, based on each pod scheduled and assigned to a node —
J
So it has to run something similar to the hint-calculation algorithm, and the question is how this should look: is it one-by-one with best fit, or do you then run it for all of the nodes? And this is the problem here — that would decrease the performance. I'm not sure if this is what you asked, but this is the problem.
C
I'm just trying to understand why it's mandatory to run the calculation and the algorithms only on the node, instead of on the control plane — because it seems to me that the control plane can get the initial information of all the NUMA resources, and based on that, upon each incoming pod, it can calculate — do addition and deduction, sort of — and, based on the calculated NUMA resources, do the iterative calculations. So —
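A sketch of the incremental accounting the questioner is suggesting: seed a control-plane-side view with each node's static per-NUMA capacity, then deduct on assignment and add back on deletion, so no expensive recalculation is needed per pod. The types are illustrative, reusing the Resources map from the earlier sketches:

```go
package topology

// Account keeps a control-plane-side running view of free resources per
// NUMA zone for one node.
type Account struct {
	zones map[int]Resources
}

// NewAccount seeds the view with the static per-zone capacity.
func NewAccount(capacity map[int]Resources) *Account {
	free := make(map[int]Resources, len(capacity))
	for id, caps := range capacity {
		f := Resources{}
		for name, v := range caps {
			f[name] = v
		}
		free[id] = f
	}
	return &Account{zones: free}
}

// Assign deducts a pod's request from one zone on scheduling.
func (a *Account) Assign(zone int, req Resources) {
	for name, v := range req {
		a.zones[zone][name] -= v
	}
}

// Release adds the request back when the pod goes away.
func (a *Account) Release(zone int, req Resources) {
	for name, v := range req {
		a.zones[zone][name] += v
	}
}
```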
F
So one part is the scheduling part, to say: hey, will this pod, with the requested resources, fit on that node? And when the answer is yes, then the next part is to say: okay, the pod is scheduled on that node; now we need to talk to each one of the device plugins — for example, the CPU manager, the device manager (that may be, you know, SR-IOV or a GPU) — to say: okay, now the pod is here, go allocate these resources and assign them to this pod. For example, you know, if it wants five exclusive CPUs.
M
...in the kubelet there is the topology manager, and the CPU manager and device manager provide topology hints to the topology manager, and the topology manager just parses those hints and figures out what the best permutation for a container is. So the problem is, if we try to move the topology manager to the control plane, there would have to be a way to —
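A simplified sketch of the hint merging just described: each provider (CPU manager, device manager, ...) offers hints as bitmasks of acceptable NUMA nodes, and the topology manager intersects them. The real kubelet code walks every permutation of provider hints and picks the narrowest preferred result; this shows a single combination with local types, not the actual kubelet API:

```go
package topology

// Hint is a simplified topology hint: a bitmask of NUMA nodes that would
// satisfy one resource provider, and whether that mask is preferred.
type Hint struct {
	NUMAMask  uint64
	Preferred bool
}

// MergeHints intersects one hint from each provider and reports whether
// the combination is still satisfiable (a non-empty mask).
func MergeHints(hints []Hint) (Hint, bool) {
	merged := Hint{NUMAMask: ^uint64(0), Preferred: true}
	for _, h := range hints {
		merged.NUMAMask &= h.NUMAMask
		merged.Preferred = merged.Preferred && h.Preferred
	}
	return merged, merged.NUMAMask != 0
}
```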
F
Yeah, I guess, when I think about it, I think that the information that would be exposed about the resources and the NUMA is really just a subset of what is in the topology manager now, and that we do just enough calculation to see that the resources are available on a node that can meet the alignment specified in the pod spec — and the actual reservation and allocation still occurs in the kubelet at the node level, inside the topology manager, just like BG explained, and things like that.
A
And maybe that's a limitation, but I do think we could find a way to export a minimal set of information to mitigate the risk of a scheduling decision ultimately being rejected. I don't know if we'll ever get to a hundred percent of that, but I think we can probably explore ways to export a subset of it that reduces the incidence of that risk to something acceptable.
J
Okay, if that's all for this, I have the last slides for this presentation, and it is the connection between the scheduling and the topology manager changes from part one — because the problem here is that if we have a dynamic policy specified on one set of nodes, single-numa-node specified on another set of nodes, and so on, how can we decide where we want to deploy the pods accordingly?
J
But there is an issue here: if we have the dynamic policy specified and, for example, a single-numa-node policy specified at the pod level, and there are also nodes with a single-numa-node policy, how should these mix, and how should the node be chosen in this kind of scenario? This is a concern for this kind of approach, and this is, I think, the issue with interconnecting these.