From YouTube: Kubernetes SIG Node 20200414
A
Well, in case you don't have time to read it, I can briefly summarize. In the existing test, which is a log rotation test, we have an embedded container watching the log output for an expected log snippet. However, for the new e2e test the requirement is different: the test container is a BusyBox container, and we need at least one other container, because this BusyBox container is supposed to stay alive during the e2e test. I'm not familiar with the test framework, so I don't know.
A
So, as I mentioned, a sidecar: if we had a sidecar container that could be injected into, say, some other pod — because this log rotation and removal can happen in other tests — but we know a sidecar mechanism is not there. So even within the same pod, I'm still looking for some API, whether through kubectl or some other API, which can give me the status information. But in summary — give me a second, I'll paste it in the chat.
A
Yeah, so maybe somebody more familiar with e2e tests can give me some pointers, but overall I think this e2e test is good to have; maybe the test itself can be decoupled from the actual fix. During my day job I looked at some internal QA tickets where, if the log mentions a certain phrase, the ticket is marked as passed.
B
So, any other suggestions or questions? I suggest we think about it: we need to have the test, but we can carry on the discussion offline and work out how to do the proper test there. I think there's a way to capture that information, yeah?
Okay, so let's move to the next topic. For the next one, do you want to take over and present, or otherwise talk about the allocatable direction for the PIDs? Yes.
C
Just in terms of the bigger picture: each time we try to do eviction on a new resource, we are adding stats for that resource to the summary API. And if we want to be able to do eviction on file descriptors and tasks and PIDs and all of these various resources, we may not want to actually add all of those to the summary API. So I was actually hoping Derek would be here; maybe I'll ping him online, but I'm also just curious.
C
If anyone has opinions: I was thinking it might be appropriate to have an internal kubelet API for passing these stats to the eviction manager, instead of using the summary API. But at the same time, I can also see that if the kubelet is performing eviction on, say, file descriptors, then someone who is monitoring the node should be able to tell whether it's starting to run out of file descriptors or not. So I could see it both ways, but I was just looking to raise it and see if there were any opinions about it.
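For readers following along, here is a minimal sketch of what such an internal kubelet stats interface could look like. This is purely illustrative — every name below is an assumption, not actual kubelet code:

```go
// Hypothetical internal kubelet interface for feeding per-resource stats
// to the eviction manager without publishing them in the summary API.
// All names here are illustrative assumptions, not real kubelet types.
package eviction

// ResourceUsage reports consumption and capacity for one node-level
// resource, such as PIDs or open file descriptors.
type ResourceUsage struct {
	Used     uint64
	Capacity uint64
}

// InternalStatsProvider would be implemented inside the kubelet and
// consumed only by the eviction manager, keeping these resources out
// of the public summary API.
type InternalStatsProvider interface {
	// PIDUsage returns node-allocatable process-ID consumption.
	PIDUsage() (ResourceUsage, error)
	// FileDescriptorUsage returns open-FD consumption on the node.
	FileDescriptorUsage() (ResourceUsage, error)
}
```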
B
I want to ask: I suggested before that we make the summary API the monitoring API for the kubelet, and then we have a separate API for the control path, like the Kubernetes controllers. I think that's basically the approach I originally suggested, and then we move forward. But we never really finished that; we should still move toward it, I'd say.
B
That's the piece: before we talk about it, we would only have the core metrics for the control API, the controller API, while the summary API as the monitoring API could be exposed or exported by some other daemon. For example, some other plugin could be run there as a DaemonSet. So, just like what you said right now, some APIs serve a dual purpose: it's still half control management, even in the kubelet.
C
I could definitely see a world in which the kubelet is managing resources that it doesn't expose metrics about, and doesn't necessarily collect metrics with the same pattern that it does today, which is a 10-to-15-second interval. It very well could, for example, only periodically check the node- and allocatable-level PID usage or file descriptor usage, and save querying for container-level PID or file descriptor usage for just when it's doing the eviction ranking.
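A rough sketch of the collection pattern described here — a cheap periodic node-level check, with the expensive container-level collection deferred until eviction ranking is actually needed. It reuses the hypothetical InternalStatsProvider from the earlier sketch; the structure is an assumption for illustration only:

```go
// Illustrative only: poll node-level usage on an interval and trigger
// container-level collection and pod ranking only under pressure.
package eviction

import "time"

type nodeChecker struct {
	interval  time.Duration
	threshold float64 // fraction of capacity that counts as pressure, e.g. 0.9
}

// run polls node-level PID usage; rankPods is expected to gather
// container-level usage and rank pods for eviction when called.
func (c *nodeChecker) run(stats InternalStatsProvider, rankPods func()) {
	for range time.Tick(c.interval) {
		u, err := stats.PIDUsage()
		if err != nil {
			continue // skip this round on a collection error
		}
		// Only pay for per-container collection when the node is
		// actually approaching its limit.
		if float64(u.Used) >= c.threshold*float64(u.Capacity) {
			rankPods()
		}
	}
}
```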
B
Next, we might start to violate some of those requirements, so I think there's a middle ground and we can meet each other there. Starting from this discussion, I think we can push to reach an agreement. So, okay! Let's move on to the next topic. The next one is the topology management enhancement; we are going to revisit last week's discussion. Sorry, I didn't get your name correctly. Yes, hello.
D
There are some issues that can also be solved by this solution, some extra information in the appendix, and then some Q&A time. We have a lot of time, so I think there will be time for discussion. There were discussions in the community, on the SIG Node Slack; in the presentation you can find the link. There was also discussion in a Google Doc with comments, and that document presents the same idea as this proposal. In this presentation there is also a discussion of some alternative ideas.
D
So let's start with the limitations. Right now, when the kubelet calls the Admit function on the topology manager, the topology manager takes a container from the pod, gets hints from the hint providers for the resources that this container requests, and then uses the policy to merge those hints. Right now only one policy from the list above can be selected, and this is the first limitation: pods requiring different resource topologies cannot be bound to the same node if they require different policies. The policy merges the hints, generating the best hint and an admit message, and provides them to the topology manager. Then the topology manager, based on that message, allocates resources or not, and steps two to six are performed for each container.
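As a reading aid, here is a simplified sketch of the per-container admit flow just described: gather hints from each hint provider, merge them with the selected policy, and admit or reject. The types are simplified stand-ins, not the real kubelet topology manager implementation:

```go
// Simplified stand-ins sketching the admit flow: hints are collected per
// container from all hint providers and merged by the active policy.
package topologymanager

// TopologyHint is a possible NUMA placement for a resource request.
type TopologyHint struct {
	NUMANodeAffinity uint64 // bitmask of acceptable NUMA nodes
	Preferred        bool
}

// HintProvider is anything (CPU manager, device manager, ...) that can
// propose NUMA affinities for one container's resource requests.
type HintProvider interface {
	GetTopologyHints(podUID, containerName string) map[string][]TopologyHint
}

// Policy merges provider hints into one best hint and decides admission.
type Policy interface {
	Merge(hints []map[string][]TopologyHint) (best TopologyHint, admit bool)
}

// admitPod runs the merge for each container; one failure rejects the pod.
func admitPod(podUID string, containers []string, providers []HintProvider, p Policy) bool {
	for _, c := range containers { // the per-container loop described above
		var all []map[string][]TopologyHint
		for _, hp := range providers {
			all = append(all, hp.GetTopologyHints(podUID, c))
		}
		if _, ok := p.Merge(all); !ok {
			return false
		}
		// On success, the best hint would drive the actual allocation.
	}
	return true
}
```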
D
And this is the second limitation: the calculation is done for each container in the pod individually, and this gives us some challenges. The first challenge is to deploy different topology requirements within a common worker node. So let's take an example: here we have a pod A that requires allocation of resources with the single-numa-node policy, and a pod B with the restricted policy, and we want to schedule them to the right node.
D
So the worker node on the right will be overcommitted and fully used, while the other nodes will be nearly empty; the deployment tends to be inefficient right now, because only one particular policy can be applied within a worker node. And the second challenge right now is to ensure the allocation of resources from the same NUMA node for multiple containers defined in the same pod.
D
So if we have a pod that has three containers, and the first two containers get resources from the first NUMA node but there are not enough resources left for the third container, the third container will get resources from a different NUMA node. In such a situation, when it wants to communicate with the first container, they will have to communicate through the UPI channel between the sockets, so there will be a degradation of performance.
D
So there are the solutions we propose. The first one is an extension of the pod spec: to add a new field in the pod spec that would specify the topology requirement, so that the topology policy is determined by this field and not by the kubelet configuration of a node. In this field the following policies could be specified: none, best-effort, restricted, single-numa-node, and, presented later in this presentation, the pod-level single-numa-node.
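To make the proposed pod spec extension concrete, here is a hypothetical API-type sketch. Neither the field nor these exact constant names exist in the Kubernetes API; they only illustrate the proposal as presented:

```go
// Hypothetical sketch of the proposed pod spec field; not a real
// Kubernetes API type.
package api

// TopologyPolicy names the per-pod topology requirement.
type TopologyPolicy string

const (
	TopologyPolicyNone           TopologyPolicy = "none"
	TopologyPolicyBestEffort     TopologyPolicy = "best-effort"
	TopologyPolicyRestricted     TopologyPolicy = "restricted"
	TopologyPolicySingleNUMANode TopologyPolicy = "single-numa-node"
)

// PodSpec fragment: the new field lets each pod carry its own topology
// requirement instead of relying on the node-wide kubelet flag.
type PodSpec struct {
	// ... existing fields elided ...

	// TopologyPolicy defaults to "none" when unset, preserving
	// compatibility as described in the talk.
	TopologyPolicy TopologyPolicy
}
```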
D
Also, we would like to introduce a new policy on the node level, called dynamic, that would coordinate the assignment of resources according to the topology policy specified in a pod spec. So to use this field and the topology policy specified in it, the dynamic policy has to be selected on the node level by a kubelet flag; if any other policy than dynamic is specified, the pod-spec field would not take effect.
D
Also, the kubelet flag for the topology policy will be deprecated in future versions, and the default policy of the topology manager will be replaced with this dynamic policy. There will not be an issue with compatibility: as I said previously, the default value of this policy will be none if it is not specified in a pod spec. And the second policy that we want to introduce will be on the pod level, specifiable in the pod spec, and this will be the pod-level single-numa-node policy.
D
This is the solution for the second challenge: it would support pod-level resource alignment to the same NUMA node, and, as I said, it will only be applicable through the pod spec. So, as you can see in the picture, if we provide our solutions, each worker node can support different topology policy requirements.
D
So the plan is to leave the default policy value, which is none, for the 1.19 version of Kubernetes; to utilize the topology policy specified in the pod spec, it would be required to set the dynamic policy on the kubelet at startup. And in version 1.20 the topology manager default policy will be dynamic and cannot be changed at the node level, so the configuration flag will be deprecated and the topology policy for the pod will be specified only in the pod spec.
F
We've been working on the topology manager — probably Kevin longer than anyone — and he can also comment and probably provide some feedback. But in general, we had already been discussing and thinking about, you know, having a policy that we could apply to a pod, and it sounds like you've already taken a look at that, and it looks very interesting. But I think if you can give us some time to look at that and review it sort of offline, that would be great. Okay, yes.
B
Actually, last week we talked about this, but at a really high level, among other things, and we'll also cover another topic next week. Sorry I didn't mention your name, and the other people's names too, because I'm also looking at the people who are working on the topology manager and who in the past contributed a lot to memory management, the CPU policy, and also the topology manager. Because this is kind of a big change, last week I also mentioned a couple of things.
B
One thing I think you'll cover next week: there is going to be a new proposal, because so far we only talked about the NUMA node and memory; we didn't really talk about the CPU, which is the CPU manager. So last week I suggested we connect these three rather than presenting them separately, like what was presented here, but that's just my view.
B
If you really care about supporting high-performance worker nodes and HPC-type workloads, you have to take all three into consideration, and we want to know how. Another thing, which I think you didn't touch on earlier but Alex asked about a little bit: especially when you add these multiple per-pod-spec policies per node, I worry that the scheduler will make more conflicting decisions, which may be a concern.
B
The scheduler doesn't know the topology, and you add more topology to support a finer-grained decision, so there is more chance of conflicting decisions between the node and the scheduler. You end up with more rejections — "oh, I cannot satisfy that request" — and then you could end up with a ping-pong or a cascading issue with the scheduling. I don't know whether we want to talk about all of this today, okay.
B
So today the scheduler is not topology-aware, right? It used to be that we applied a single policy to a node, and even in that case the node will basically — although it is unlikely — reject the scheduling decision and reject the pod, saying: oh, I cannot satisfy this policy. And now we allow multiple policies to apply to a single node; of course I can see that you really want to increase the utilization, but the potential downside could be this:
B
The scheduler schedules more pods, each with a different pod-level topology policy, and then you end up with the node not being able to satisfy the requirement, so it rejects. So you end up with more disagreement between the scheduler and the node. Have you looked into those kinds of things, with some examples to look into?
D
Okay, I'm not sure if I came through clearly, because I was talking about this in the KEP. In the KEP updates, in the PR, we added a chapter called practical challenges, where we address this issue and present two types of mitigation. The first one is a maybe temporary or custom solution, where you can create a new workload type — call it a topology set — or a CRD; and there is a maybe long-term solution here, to make the scheduler topology-aware.
D
For making the scheduler topology-aware, there are two ways to do it that we described in the KEP. The first one would be to use detection of node resource topology by node feature discovery, and the second one is to enhance the scheduler with the scheduler framework. So we've got this in mind, and we know that there will be rejections right now, but it is still to be solved in the future.
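To illustrate the second mitigation, here is a heavily simplified, filter-style check of the kind a scheduler framework plugin could perform, using node topology advertised by something like node feature discovery. The types are stand-ins, not the real scheduler framework API:

```go
// Illustrative stand-in for a topology-aware scheduler filter: reject
// nodes whose advertised per-NUMA free resources cannot satisfy the
// pod's requested topology policy. Not the real framework interface.
package topologyaware

type podRequest struct {
	TopologyPolicy string // e.g. "single-numa-node", from the pod spec
	CPUMillis      int64  // requested CPU in millicores
}

type nodeTopology struct {
	FreeCPUMillisPerNUMA []int64 // free CPU advertised per NUMA node
}

// fits reports whether the node could place the pod under its policy,
// letting the scheduler skip the node up front instead of ping-ponging
// on kubelet admit rejections.
func fits(p podRequest, n nodeTopology) bool {
	if p.TopologyPolicy != "single-numa-node" {
		return true // other policies are not checked in this sketch
	}
	for _, free := range n.FreeCPUMillisPerNUMA {
		if free >= p.CPUMillis {
			return true // one NUMA node alone can hold the request
		}
	}
	return false
}
```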
B
So, last time I looked it didn't have those; maybe you updated it after we talked last week, or maybe I just missed it, because I raised this problem and I think that week we didn't settle it. But the problem is: can we treat this as the complete solution? It sounds like we just say, oh, it's only a node-level problem first, instead of a Kubernetes cluster-level problem. So can we have some way to think about that?
B
Because if you want to make the scheduler topology-aware, you need to figure out what the API is. Now you propose the pod spec extension, and I worry that in the future you will have to make more changes at the pod level. So can we have a high-level picture first of how we are going to solve this at the cluster level?
B
If I am the user — say I am the administrator for a cluster — how am I going to configure my cluster in a certain way, and then present that capability to my users? And then there's the other side: I am the application developer and I have an HPC workload — how am I going to deploy it?
B
So can we solve that problem? I'm okay with not having a complete solution on the scheduler, but at least we need to put something in and say: well, we know there's this problem; we put it there as a shared understanding and then we come back later. My concern is that we are changing the pod spec API, and in the past there were actually many proposals for how to support this.
B
I may be missing something — we can continue the discussion on how to support those things — but before we go further, I'm worried that we oversimplify these kinds of things when they get put into the cluster-level API and communication, because the previous ResourceClass proposal was pushed back. It tried to solve that problem, and I mean, we need to think about what we have to do.
B
That solution solved the problem, but it got a lot of pushback because it's too complicated, and there was no good way to support that complex scenario. That's why my concern is: even if we single this piece out, once we have this one, at the end we still don't have the cluster-level solution. So I think we should start thinking about what the cluster-level picture is and how we are going to expand to it; then we can focus on the node solution.
B
No, actually, my concern is that with this new enhancement we are allowed to apply multiple different policies — for example single-numa-node and others — to improve or increase the node utilization. Previously we only had a single policy, so even though the scheduler doesn't have the topology knowledge, the chance that the scheduler and the node make different, conflicting decisions is low.
B
It happens less frequently, so I'm not that concerned about that case; we talked about topology-aware scheduling before, and we didn't block on it, just because it was a single policy applied to a single node. But now we have multiple policies applied to a single node to increase the utilization, so the scheduler could still schedule — because it doesn't have the topology knowledge — but the node will reject more often.
B
So topology-aware scheduling is more key to me right now, and I'm kind of cautious about that one, because this is a promising direction. I suggest we at least have some understanding of how we're going to address that problem at a high level; of course we cannot solve all the problems at the same time, but then we can start to focus down as well. I would only be uncomfortable if we just say: oh, we are going to build it later — because we know it takes years to build this topology-aware scheduler. We used to have the ResourceClass proposal; I think you may have heard of it through the resource management workgroup, and an engineer from Google actually put a lot of effort into that direction, and the blocker actually was not on the node side.
B
Victor and Alex — and I think last week some of you were here and some were not — we talked about having another working group; this is a big change and will change how we manage CPU, memory, and also NUMA. So I hope we can have some next group, like the previous one we formed, to review this one. Who is going to participate? I don't know; I haven't talked to Derek yet, and I also believe Derek has people from Red Hat.