From YouTube: Kubernetes SIG Node 20181030
A
Good morning. We have three topics to discuss today. The first one is the demo we missed last time, which David is going to give, and another one is the topology manager. Maybe, David, you can start?
B
Sure, I can start sharing.
Okay, so let's see. When I last presented, I presented this here (I'll make this a little bit bigger), which is the sort of plan for how we're tackling device monitoring: you'll have a workload pod that's using some set of GPUs, and this monitoring daemon set is able to figure out via the kubelet which pods are using which devices, by querying this kubelet UNIX socket to get that information. I just wanted to give a quick demo of that today.
I can't seem to get the pod to actually do any work, so all of the usage metrics are zero right now, but I believe it's at least able to get the total memory from the device, so I'm pretty sure everything is working. Then I'll dive just a little bit deeper into the monitoring aspect. So if I describe this pod, I can see that it has a host-path mount to wherever I put the socket (right now it's here, but I would actually move that to a subdirectory), and it should have the mounts for running NVML. And if I look at the logs for this pod, I'm currently printing out the blob that it gets back from the kubelet.
B
Yeah, I'll skip over this part, because I should have formatted it in some way, but if we look really closely, we can see that there are a whole bunch of containers, and somewhere in here is a non-empty device list: cuda-vector-add. Right at the bottom there, we can see that this particular one has a set of container devices, and it actually points to the device IDs of those devices.
B
So that's how the monitoring pod is actually able to consume this metadata, and in the future, if we needed to, we could use this model to monitor other things as well. The most immediate ones that come to mind would be attaching information about the network namespace to pods, or adding the host path to volumes, so that we could have, for example, an NFS monitoring daemon set that understands how to monitor all the intricacies of NFS and can identify which NFS volumes are used by which pods based on path.
B
And I should say that I'm running a Prometheus server, but that is not required for this. While these daemons produce Prometheus metrics, there are a lot of different monitoring pipelines that you can use, and you can usually take things from Prometheus format and use them with other systems fairly easily.
C
So David came up with this solution of using this socket and pushing the device IDs in JSON format, so I'm working on using those tags in our monitoring solution. Hopefully I'll have something working, and maybe I can also present it in the next few weeks, so that whatever solution is presented works with our monitoring style.
A
So just like with the device plugin effort, what we are doing is making monitoring more extensible. This is the first way we make this extensible, so other things, like the network monitoring that people have been talking about, could use the same model in the future. If you have more questions, please share them with David and he'll reach out. So let's move to the next topic.
E
Definitely. Let me just throw something on the screen; I didn't make slides or anything, but I'll just put the proposal up there. We've all talked about it. Yes, so this is the topology manager. I put something in the doc to TL;DR what this is about: it's to align topology-dependent resource binding within the kubelet. Practically, that's talking about two scenarios involving the CPU manager and the device manager.
E
So
scenario,
one
is
where
you
have
a
request
for
an
accelerator
device,
plus
a
request
for
exclusive
CPUs
and
that's
using
the
aesthetic,
CPU
manager
policy,
and
the
second
scenario
is
where
you
have
a
request
for
multiple
devices
and
you
want
those
to
be
socket
aligned
as
well.
So
our
current
goal
is
to
merge
this
proposal
so
that
we
can
review
patches
to
be
merged
after
the
114
code
saw.
E
In
terms
of
history,
this
was
kind
of
this
proposal
covers
a
gap
that
we
left
when
we
scoped
the
CPU
manager
in
the
device
manager,
but
now
that
both
of
those
are
in
beta,
we
think
that
it's
a
good
time
to
address
this
missing
functionality.
So
this
this
particular
proposal
was
originally
posted
in
January
18.
E
It's
the
end
of
October
now
that
part
of
that
was
due
to
some
plan
delays
due
to
prioritization
within
the
resource
management
working
group.
But
you
know,
we've
had
a
discussion
within
this.
This
forum,
where
both
you
know,
Derek
and
John,
have
agreed
to
prioritize
design
in
this
quarter.
We've
got
some
stakeholders
lined
up.
E
You
know
us
from
Intel
I'm
speaking
on
behalf
of
numerous
groups
here
at
Intel
Nvidia
has
done
some
reviews
here:
Nokia
Red,
Hat,
flash
IBM
and
in
terms
of
reviews
we
don't
have
any
known
blockers
at
this
point
and
we've
also
given
a
demo
to
this
forum,
Louise
made
a
pretty
awesome
demo
using
a
dummy
device.
Plugin
there's
been
some
further
work
involving
a
PCI,
SR
iov
device
again
as
well,
so
yeah
I
think
we're
basically
at
the
point
where
it's
kind
of
a
speak
now
or
forever
hold
your
peace.
F
So my recollection from the demonstration was that it seemed pretty compelling, and the plugin points seemed sensible enough when we talked through it at the time. Admittedly, I've had a bit of a weird week, so I will have to refresh on the actual design; I haven't devoted time to reviewing it. But, like you, I would like to see us be able to move forward on this. So I guess that's my perspective.
D
Connor, I have a question related to the performance benefit from this PR. I know you guys have done some performance benchmarking, and you've also helped take in some of my benchmarks in the node performance tests. So do you already have some example workload that can help us evaluate the potential benefit from this feature?
F
The only thing I was thinking about with this, Connor, was that I know there's been some discussion on vertical auto-scaling, and so the only thing that came to mind was that if I tied a vertical auto-scaler to this feature, I would want to be able to auto-scale stepwise in full increments. I don't know if we've done much investigation or analysis on how we could advertise that, but given that that discussion is still very early, it's just something to keep in the back of our minds.
E
There are a lot of performance features that we've been discussing, so maybe that's a topic we could start in the resource management working group: get some of those folks in and have them present to us how they see that feature working with things like CPU pinning or huge pages. Actually,
F
Maybe, you know, Dawn, it seems like there are two competing efforts being discussed in that space, so I actually can't say I have a clear read on which one; I know I read one last weekend and commented on it. But I don't think we should block progress on what you have here on the outcomes of those discussions, so I don't want to give that fear in any way, because we've discussed this topic for a long time, Eric.
A
I've also been involved recently in some of those discussions, like the in-place updates and also the vertical pod autoscaler. I think those are still at an earlier stage, and for all of those kinds of discussions, I think what we're doing actually helps, because once we merge this, we can make a more concrete proposal, and the earlier we talk about the performance, like the benchmarks, the better.
A
Then we can measure how those things work together with CPU pinning, huge pages, and NUMA, so that we could have a shared understanding for our customers, because there are a lot of questions about how we're going to handle NUMA. So I think, at least at this moment, as a SIG and also as the Kubernetes community, we cannot give a concrete recommendation to our customers; once we have those things and our performance benchmarks, we can demonstrate them to the community.
F
I'll take a pass at closing on that design topic in the next, let's say, before next week's SIG Node meeting. And then I don't think I've seen the PR, so I don't know how big the PR was looking. I think a branch was sent out, but I never had time to go review it, so apologies on that, but obviously we wouldn't get things into the 1.13 cycle.
H
Two weeks ago, they had recommended that we take the approach of using a hybrid cluster, where we would have both Windows and Linux nodes in the same cluster and then use node selectors. That way, the tests that apply to Windows run there, and the rest are node-selected to run on Linux. We've got a work-in-progress PR on that that we're taking through discussion with SIG Architecture and SIG Testing.
H
That way it can, you know, create clusters on Azure, run the tests there, and so on. And then I think Peter and Yu-Ju also got a cluster up and running on GCE as well, using one of those scripts that's already there in the kubernetes repo, but I can go into more detail next week.