From YouTube: Kubernetes SIG Node 20190716
A
From July 9th: there are some label deprecations on cAdvisor, and while they are deprecated we're shipping duplicate sets of labels. You can watch the recording, I guess, if you want to see that whole discussion, but basically we asked SIG Node to give us its blessing so we can remove the duplicate labels, the old names that don't follow the instrumentation guidelines, for the 1.16 release. So we're back asking for an answer.
B
Thank you for bringing this up and raising the ask. Within the SIG we talked about this one and thought about it. We do agree, as the SIG, for the open source project. But we don't have an agreement to come back with yet, because many people in the SIG also ship Kubernetes in production in their own products, so each of them needs to go back and figure out whether there is any question for their production. So far we haven't received anything, at least that I know of.
B
Derek is not joining today's meeting, so I haven't heard from the OpenShift side, and from the Google side we also have some follow-up, but we haven't figured out the final answer yet. So, unfortunately, we cannot give you the final decision here, but to be upfront again, from the open source perspective that's actually not our concern.
A
What I will probably do is put up a PR, and I think Frederic has also been mentioning this in multiple different community meetings, so I think we've shopped it around as best we can. If anybody comes up with a major objection, they can do so at that point. Cool, great, thanks so much for your time. Thank you.
C
Yeah, so for context: I had proposed that the container spec be extended with an API key which would allow specifying a list of alternative registries on which an image could be hosted and eventually pulled, in case the kubelet cannot pull from the registry defined in the image key, for example because that registry might be down. Dawn, you posted a comment on my GitHub issue saying that a container runtime should be able to do this already, so could you explain that a bit more?
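For context on the shape of that proposal, here is a minimal sketch of what an extended container spec could look like; the ImageMirrors field and its name are purely an assumption for illustration and are not part of the Kubernetes API.

```go
// Purely hypothetical shape for the proposed extension; the ImageMirrors
// field is an assumption and does not exist in the Kubernetes API today.
package main

import "fmt"

// Container mirrors only the fields of a core/v1 Container relevant here.
type Container struct {
	Name  string
	Image string
	// ImageMirrors (hypothetical) lists alternative registry locations to
	// try, in order, when pulling Image fails.
	ImageMirrors []string
}

func main() {
	c := Container{
		Name:         "web",
		Image:        "a.io/acme/web:1.2.3",
		ImageMirrors: []string{"b.io/acme/web:1.2.3"},
	}
	fmt.Printf("%+v\n", c)
}
```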
D
Sure, I can talk about the CRI-O side. In CRI-O we added support for repository mirroring. So, for example, you have two different registries, say a.io/acme/your-image; you can set up a rule saying: okay, my backup is b.io/foo/bar for that repository. If an image fails to be pulled from the first location, it will try another location. That's just one example; you can have various different mirrors set up for your repository.
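A minimal sketch, in Go, of the fallback behavior just described (try the primary location, then each configured mirror in order); the function names and references are illustrative, and this is not CRI-O's actual implementation.

```go
// Illustrative only: the mirror-fallback logic described above, not CRI-O code.
package main

import (
	"errors"
	"fmt"
)

// pullImage stands in for a real registry pull; here it always fails for the
// primary reference so that the fallback path is exercised.
func pullImage(ref string) error {
	if ref == "a.io/acme/your-image:1.0" {
		return errors.New("registry unreachable")
	}
	return nil
}

// pullWithMirrors tries the primary reference first, then each mirror in
// order, returning the first reference that pulls successfully.
func pullWithMirrors(primary string, mirrors []string) (string, error) {
	refs := append([]string{primary}, mirrors...)
	var lastErr error
	for _, ref := range refs {
		if err := pullImage(ref); err != nil {
			lastErr = err
			continue
		}
		return ref, nil
	}
	return "", fmt.Errorf("all locations failed: %w", lastErr)
}

func main() {
	ref, err := pullWithMirrors("a.io/acme/your-image:1.0", []string{"b.io/foo/bar:1.0"})
	fmt.Println(ref, err)
}
```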
D
However, one thing to point out here is being careful about making sure that you are pulling the same image. If you are pulling by tag, then you cannot be sure that it's the same image; the only way to ensure that is to pull by digest (SHA), so you can actually cryptographically verify that your contents haven't changed. So in CRI-O we allow you to limit such mirroring to only pulling by digest.
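To make the tag-versus-digest point concrete, a small sketch follows; the digest check is deliberately simplified (a real implementation would use a proper image-reference parser), and the reference strings are made up.

```go
// Simplified illustration of tag vs. digest-pinned image references.
package main

import (
	"fmt"
	"strings"
)

// isDigestPinned reports whether a reference is pinned to content by digest
// (e.g. "...@sha256:<hex>") rather than by a mutable tag. Simplified check.
func isDigestPinned(ref string) bool {
	return strings.Contains(ref, "@sha256:")
}

func main() {
	byTag := "a.io/acme/your-image:1.0" // a tag can move; a mirror may serve different bits
	byDigest := "a.io/acme/your-image@sha256:4b8c1f3a9d0e5b6c7d8e9f0a1b2c3d4e5f60718293a4b5c6d7e8f9a0b1c2d3e4"
	fmt.Println(isDigestPinned(byTag), isDigestPinned(byDigest)) // false true
}
```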
B
The container runtime side also has a way to do this: you can configure Docker with additional registries, and then if one fails it can switch to the secondary and pull the images. I think there is also an example, and a link to an example, of how you configure that. So I'm not sure whether this is satisfactory for you, because from your doc it sounds like you want something more. That's why I wanted to bring you to the SIG so we can discuss more.
C
Absolutely. What I had intended with the KEP was to resolve an issue we had at pusher.com, where our CI pipeline had essentially blown up because a registry was down. I understand that the container runtime can resolve this, but I had also noticed that some people like the idea of putting this into the spec of a deployment. So I think the only question I have right now is: is this at all interesting at that level, or is the SIG just saying this belongs in the runtimes?
D
One thing I will note: this is at the repository level, not at the image level, and I think we still have an issue of providing authentication to CRI-O. So if we make this something more first-class in the kubelet, then runtimes don't have to worry about authentication; the kubelet can potentially give us the authentication for the mirror we are trying to pull from. So there might be advantages in figuring out a design.
B
So, originally I also thought this was about image-based content in the spec, but it looks like it is actually covered by today's container runtime of your choice, which addresses your problem, your concern. So I think there's no need to hoist those things into the container spec and the pod level; that would actually introduce a lot of complexity there and make management much harder. So it looks like that is where we are right now.
H
I think the summary is: last week we discussed two things. One is potentially moving the container status field, resources allocated, into the spec portion, so that we have a way to reliably recover from any restarts. The other option was to store this information node-locally, like which container has what requests allocated, and when an update request comes in, after we complete the update, then we update it.
H
So at any given time we can read it back, just like today: when the kubelet restarts it discovers the current pods that are there, it sees what the API server says the pods should be, and then it drives towards the desired state. We're going to drive towards the desired resources state as well. So there are two ways to do that.
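A minimal sketch of the node-local option described above, assuming a hypothetical checkpoint file of per-container allocated requests; the struct fields and file path are assumptions, not the kubelet's actual checkpoint format.

```go
// Hypothetical node-local checkpoint of allocated resources; field names and
// the file location are assumptions for illustration, not kubelet code.
package main

import (
	"encoding/json"
	"log"
	"os"
)

// allocatedCheckpoint records, per pod, the resource requests the kubelet has
// actually admitted (allocated), keyed by container name and resource name.
type allocatedCheckpoint struct {
	PodUID    string                       `json:"podUID"`
	Allocated map[string]map[string]string `json:"allocated"` // container -> resource -> quantity
}

func main() {
	cp := allocatedCheckpoint{
		PodUID: "1234-abcd",
		Allocated: map[string]map[string]string{
			"app": {"cpu": "500m", "memory": "3Gi"},
		},
	}
	data, err := json.MarshalIndent(cp, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// Written after an update completes; read back on kubelet restart to
	// reconstruct what was allocated (hypothetical path).
	if err := os.WriteFile("/tmp/pod_resources_checkpoint.json", data, 0o600); err != nil {
		log.Fatal(err)
	}
}
```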
H
The second part of it was to see if we can keep the pod condition, which I'm still favoring; give it a little bit more thought on that, and I see that it is generated.
H
I think there were some questions about whether this is keeping state. It's not keeping state, as far as I can tell; it can be generated. If the kubelet restarts, it can be regenerated. It's just an observation made based on the current state. Once we get the reliability of the requests, we can see: okay, the requests match the desired, we're good, everything is fine, or they do not.
H
Can we fit it? If we cannot fit, we fail the pod resize; if we can, then we work towards it and set the state to pending or in progress. So I don't know if you and David had a chance to interact, or a chance to look at it. I think the plan was to review the KEP over the last week and then see if we can drive it to closure, so I just wanted to bring this up. That's the summary of where we are.
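As a concrete illustration of the generated condition being discussed, it could be expressed with the existing core/v1 PodCondition type; the condition type, reason, and message below are assumptions, not what the KEP specifies.

```go
// Hypothetical pod condition for an in-place resize; the Type, Reason and
// Message values are assumptions used only to illustrate the idea.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cond := corev1.PodCondition{
		Type:               corev1.PodConditionType("ResizeInProgress"), // hypothetical type
		Status:             corev1.ConditionTrue,
		LastTransitionTime: metav1.Now(),
		Reason:             "ResizingContainers",
		Message:            "kubelet is applying the requested resource update",
	}
	fmt.Printf("%+v\n", cond)
}
```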
H
Well, it is not redundant, in the sense that once the kubelet reconstructs it, it knows whether it's possible to resize or whether it cannot because of capacity, and that information is critical for the initiating actor, like the VPA. For example, let's say you have one single pod on a node, just a simple example, and the node has four gig of RAM. The pod is currently using three gig, everything is good, and the pod wants five gig. We know that this is not possible on that node.
H
It can set that condition to failed. Since we use the max of resources and resources allocated, the scheduler is not assigning any new pods, because the desired there is five gig, so it's blocking off that capacity. Now, if the VPA knew that this is not possible, and the policy allows it to reschedule to a different node which has that capacity, and the pod can come up on a new node, then that capacity, the whole thing, can get freed up immediately for new pods.
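The accounting described here can be sketched with the numbers from the example (a 4 GiB node, a pod allocated 3 GiB asking for 5 GiB); the helper names are illustrative and this is not scheduler or kubelet code.

```go
// Illustrative sketch of the "max of desired and allocated" accounting from
// the example above; not actual scheduler or kubelet code.
package main

import "fmt"

const gib = int64(1) << 30

// reservedFor returns the memory to treat as reserved for a pod while a
// resize is pending: the max of what is desired and what is currently
// allocated.
func reservedFor(desired, allocated int64) int64 {
	if desired > allocated {
		return desired
	}
	return allocated
}

// resizeFits reports whether raising a pod from allocated to desired fits on
// a node, given the node's allocatable memory and what other pods have
// reserved there.
func resizeFits(nodeAllocatable, otherReserved, desired, allocated int64) bool {
	return otherReserved+reservedFor(desired, allocated) <= nodeAllocatable
}

func main() {
	nodeAllocatable := 4 * gib
	allocated := 3 * gib // what the pod currently has
	desired := 5 * gib   // what the pod now wants

	fmt.Println("fits:", resizeFits(nodeAllocatable, 0, desired, allocated)) // fits: false
	// While the resize is pending, max(5Gi, 3Gi) = 5Gi stays reserved, so the
	// headroom seen for new pods is gone until the resize fails or the pod is
	// rescheduled elsewhere.
	fmt.Println("reserved:", reservedFor(desired, allocated)/gib, "GiB")
}
```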
H
Otherwise, it will have to wait for a certain duration, 30 seconds or whatever it is, and for that duration that one gig is blocked away, and then the VPA decides: okay, this is not happening within the duration that I expected, so I'm going to reschedule it. Now add this up cumulatively across the cluster and you would potentially end up blocking away and wasting resources. So this condition seems to be in no way against any API design constraints or design principles that we have read so far, and it is reconstructable.
E
I think that sounds like the correct behavior to me. The point I just made was more about whether this is entirely reconstructed from the resources that have been allocated, the resources that have been requested, and the pods currently assigned to the node. The only issue I see is that we now have a condition that we have to keep in sync with all the other declarations.
H
It's in the pod status, it's not in the spec; it's an observation, yes.
H
Let's give it a little bit more thought before completely rejecting it. This is, again, one of the things we can always add on later; it's not a deal breaker. But I see that there is potential here. This was a single-pod example that I took; now, if multiple resizes were going on and each pod blocks off a little capacity, then the VPA is holding on to that, essentially, for a duration of time.
H
You pick a number, say 30 seconds or one minute, within which it generally expects things to sync; we would be pulling a number out of the air, and for that duration we will be under-utilizing the cluster. So that's one of the reasons I felt this condition is useful. I see there is a benefit to doing this and I don't see any detriment, so I've been in favor of keeping it.
B
Earlier you described the use cases, and also previously described the cases; a lot of the time we add that complexity just because the scheduler, or whatever controller made this decision, like the VPA, doesn't know the node condition, right. They basically really don't know what the current node looks like, they don't know the limits and what the maximum assigned is, and they also don't know the usage.
B
So you end up making the decision based only on requests, because the scheduler only knows the requests, and that's the basis it uses to make that decision. So, to interrupt: you have allocated the pod or container on a node, but then you couldn't fulfill your desired state for that pod. Have we thought about the potential for a bigger problem, like one pod or one container ending up in a cascade, with the kubelet sending it on to another node in the end to fulfill that?
B
Does it then reschedule to another node, or fail? If we think about it, we could change that if we taught the scheduler more intelligence, to gather and know some usage here. We could define some rules for the scheduler so that it doesn't just take the requests and simply place those things there. We could have the node export an estimate, or something predictive, some usage based on the containers assigned to the node, and then, based on that, make a better decision.
H
I believe the scheduler already knows that: it caches the nodes, it knows the node's capacity, the node informer, I believe, gives it that information, and then, as every pod comes in and it assigns the pods to a node, it caches the pods that are assigned to a node and then it sums them up.
H
In the case of resize, in the current proposed design we're saying: okay, the pod that is taking X wants X plus Y, so the scheduler is going to say: I see that the pod wants X plus Y and I believe the kubelet is working towards it, so I'm not going to take away that Y. That's the current proposal.
B
The scheduler only knows the requests, and it knows the capacity and the node's allocatable, and which pods are assigned. So it sums those requests, and with the VPA you then also have this additional request. But the problem is that a pod on a node could be using way more than its request, and the scheduler has no idea about that. So, to interrupt, the actual behavior of the node, or of one single node, could be the worst-case behavior.
B
You might have current pods on one node that aren't using many resources, and the scheduler has no idea about those things, while on another node everything is at a minimum. So, first of all, it can happen that the scheduler keeps sending pods to the first node with the new requests, and in the end no request can really be satisfied on that node, and the existing running pods could be affected.
B
Performance could be bad because they all compete for compute resources on the node. I remember a recent case, and it also took a long, long time to change: a company tried to introduce, I forget the details, some really complicated logic to try to bypass the scheduler, and I also asked whether we could just have the scheduler know the usage, because it was doing kind of a bad job. So I wasn't sure this is covered in your cases, whether you could have something like this; this is just based on David's proposal.
H
Yeah, I think I know what you're saying. However, I don't know if it's pertinent to the resizing KEP; this may be a separate KEP, where we want the scheduler to make smarter decisions on scheduling new pods. When pods are scheduled and they want to be resized in place, that information doesn't have any bearing, even if the scheduler knows that the current usage is reaching a threshold. In this proposal we're saying the scheduler is just an observer; we don't want it involved in the resize once the pod is bound.
H
We do it without preemption first, and then we see how it works, and then we see if we want to do preemption from the kubelet, as long as it can happen with the same semantics as the scheduler, which has a global view; then it's fine in my mind. So yeah, your point is a very valid one; it's just something that I don't know applies to the in-place resize KEP.
B
I don't insist that we have to do that as well, but I disagree on a couple of things. One thing is: I did agree before to make the kubelet do the preemption for the initial version, and that is because, in that kind of situation, the scheduler cannot, since it is not aware of the node's real condition.
B
So,
for
me,
neat
accumulate,
no
matter
what
you
have
to
do,
that
it's
just
self-defense,
because
I
don't
want
to
lose
the
in
half
note
so
cubed
it
have
to
do
if
the
scum
upstream
there
don't
do
that,
work
and
the
coupon
it.
It
is.
The
last
nine
of
the
cultural
plan
last
nine
of
the
scheduling,
so
it
hasn't
heard
of
the
work,
but
it
at
disagree.
The
ad
is
the
same
effect.
This
kiln
I
don't
have
the
global
knowledge
schedule
Naidu
contradicts
so
it
could
end
up
OneNote.
B
You could end up with one pod, one container, one really critical service, where because of these kinds of things you basically keep killing it on many, many nodes, one after another. That is a real problem in production, because we have no idea how critical this pod is; we basically just treat this pod with the only simple information we have, which is the pod's priority class, but that by itself is too coarse-grained.
B
It's not giving you all the detail, so you could end up with a critical job among all of these, and this mechanism keeps killing it, and then it gets scheduled to another node and the same thing happens; think about that wrong trade-off, with the nodes not able to absorb it all together. That's a really, really big problem for customers. Another thing is, I do agree this is a secondary issue, but I do see that because the scheduler knows nothing of the usage, of the real situation, we end up adding a lot of extra complexity to many designs.
Then
we
here
it
is
one
of
the
design
here,
because
a
lot
of
the
I've
been
observed
and
asked
the
men
discuss
it
because
I
do
read
the
latest
of
the
PR
the
last
time
I
read
the
PR.
It
is
that
before
they
hand
over
to
you,
so
I
can
say
that
many
many
use
cases
is
came.
Father
because
we
are
thinking
of
the
information,
so
we
want
to
capture
some
information
and
the
checkpoint,
some
information
and
take
the
reaction
mode.
So
that's
kind
of
from
my
observation,
but
anyway
it's
just
my
observation.
H
Now, the kubelet or the scheduler or the VPA, when they evict, as far as I know, if they were to evict a service that's critical, they will make sure: they all use the eviction API, and that will either succeed, in which case it will delete the pod, or it will fail for some reason, and one of the reasons is that the pod disruption budget doesn't allow it. So that ensures that a minimum available set of services is running.
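For reference, a minimal sketch of going through the eviction API (the pods/eviction subresource), which is the path that honors PodDisruptionBudgets; the namespace, pod name, and grace period are illustrative, and the non-context Evict call matches older client-go releases (newer ones use a context-taking variant).

```go
// Minimal sketch of evicting a pod via the eviction subresource so that
// PodDisruptionBudgets are respected; values below are illustrative.
package main

import (
	"log"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	grace := int64(30) // illustrative grace period, in seconds
	eviction := &policyv1beta1.Eviction{
		ObjectMeta:    metav1.ObjectMeta{Name: "example-pod", Namespace: "default"},
		DeleteOptions: &metav1.DeleteOptions{GracePeriodSeconds: &grace},
	}

	// Evict goes through the eviction subresource, so it fails (rather than
	// deleting the pod) when a PodDisruptionBudget would be violated.
	if err := clientset.CoreV1().Pods("default").Evict(eviction); err != nil {
		log.Printf("eviction rejected or failed: %v", err)
	}
}
```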
E
Yeah, I think my question was about whether the grace periods are respected, but it sounds like pod disruption budgets are also respected. My basic question was whether the eviction API actually gives us a time bound by which a pod would be removed, just because, in past discussions with Derek and Dawn at SIG Node, we said that one useful way to think about this feature would be as being able to provide access to compute resources within some period of time. So that was what I was mostly trying to figure out.
H
My observation is half-baked, in the sense that when I experimented with it, I saw that when the pod disruption budget allows it, it immediately evicts; the grace period is not honored, it's as if you called a pod delete. That kind of suggests that we would have to put that grace-period hold-off in before calling evict, so that is a bit of complexity that would be added. I haven't looked into the scheduler code to see if the scheduler does it explicitly, but that was my observation on eviction.
E
Yep, and I've actually remembered what my concerns were, so I can put them a little more eloquently. My main concern was the pod condition, because it is a sort of aggregation of other state, and the key piece, or one piece, of that state is actually node state, and my concern was mostly about consistency between, say, the pods assigned to the node and this condition. I know that currently it sounds like the proposed flow is mostly linear, in terms of: your pod gets scheduled.
But if, for example, we ever wanted to do something where it could get scheduled and then the VPA could decide to abandon the resource update, or something like that, and leave it as is, I would be concerned about the condition becoming out of date, or it starting to look more like a state machine, because it can go in many different directions, if that makes sense.
H
It would not become stale, because, let's say, in the previous example your node capacity is 4 gig, the pod is at 3 gig, and the request is 5 gig. You cannot satisfy it, so the pod condition goes to failed, and then the VPA says: okay, I remember what the previous request value was, I am requesting 5 gig, I see that it failed on node capacity, so my choices are to reschedule the pod or abandon the resize.
E
One of the reasons why I thought moving to a node condition would be more helpful is that what we're actually trying to get at is describing the state of the node at any point in time, not necessarily the state of the pod. For the state of the pod, the important thing we care about is: has it been reconciled to its desired state? But the main other piece that we care about is sort of: has the resize been rejected by the node or not, right?
H
Maybe it can compute which pods it can choose. The granularity is much finer if the pod conditions are known, which one is failing and which one is in progress. If we did not have that information, then let's say there are two pods: one of them is being resized, the next update comes in, and it cannot be met because the node capacity has been reached.
H
Now, both update requests came in at the same time, at t0, so there is a 30-second window during which it's going to wait and see if the pods are being resized. The first one we are resizing; we can do it, we are working towards it. The second one we cannot, because capacity has been reached. Now the VPA does not know that; it sees that a node condition is set, okay, the capacity is oversubscribed, but the VPA cannot tell which of the pods is causing that or which one is being resized.
B
I think we already agreed before that we would spend five minutes on this topic. Maybe we can move this discussion, it's a really, really good discussion, and continue it through the PR, because we have so many topics today and we have to move faster, otherwise we cannot cover everything. So, next one: the topology manager updates and review, and I think this was proposed by...
G
Thanks. So basically, what I wanted to flag is that there are six PRs still open for the topology manager that we'd like to get reviewed and merged before code freeze for 1.16, and a lot of them have had a fair few review cycles. Some of them are just waiting on an approve. I can link the six that are remaining; they do depend on each other, so they do need to be merged in order.
G
Yeah, and just on Kevin's follow-up, the next item is based on the topology manager, so some of the PRs may need tweaks based on what we decide after the discussion of Kevin's proposals. But I would like to keep a lot of them; I think a lot of them have the base code there, and if we could merge them mostly as they are and then make changes based on what Kevin proposes and what we decide, that would be great. I just don't want to leave them hanging open for much longer, if possible.
I
Yeah, so the next item on the agenda here was just about some updates to the current implementation that's in those PRs that Louise was just talking about. I definitely agree with her that they're in a state that's good enough; if we can get a last set of eyes on them for review, we could merge them mostly in the state that they're in right now, and then, you know, in the background we could talk about some of the proposals I have in these documents here once they merge.
I
Now, with, you know, the limited time we have left in the call, I can't actually go through all of the different points that I have in these proposals, but we've been talking about them offline. The first link that I have there is a document that lays out in detail some of the tweaks that I'd like to see made to the topology manager.
I
At least, you know, some short-term, some long-term, and I've been talking with Louise and others at Intel, and others at different places, about this, because at least internally at NVIDIA we would need these changes to be made for the topology manager to actually be something useful for us, and that's kind of the motivation behind these different proposals that I'm adding here. So I think that's kind of it on the topology manager.
I
If we have extra time, I think we can probably come back to it and talk through some of the details, but I don't want to use up too much of this call going into detail. If people could just review these documents and look through the enhancement PRs I have there, I think that's the best first step.
J
So from my side, this is again a ping for a PR which I opened; I was here also weeks ago, so I'm just waiting for a quick review. Hopefully the change is... in my honest opinion, I still believe it's a bug; it might be a feature, I don't know, so that's up to you. But hopefully, if you can get some reviewers on it, it will be ready.