From YouTube: Kubernetes SIG Node 20201027
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
B
Okay, hello everybody, welcome to SIG Node. It's October 27th; happy to see everybody here. We have a lot of PRs coming in. We have many newcomers, and thank you, everybody, for mentoring people to start contributing to SIG Node. It's very exciting.
B
We have — let me open it, somehow it got closed — we have 24 PRs created this week, which is quite a high number. Some of them were just opened by mistake or by new contributors, so they could be closed almost immediately, but we still merged fewer than we created. So we are still not reviewing PRs fast enough; the pace of creation is much higher.
B
I hope that we will have more reviewers from Google. We are trying to onboard more people to review, starting with smaller, easier PRs. So I hope we'll have more participation here, and then once we fix the LGTM problem, we will need to start tackling the approval problem and start approving faster.
B
I also noticed we have a lot of test-related PRs stuck in the LGTM state, and for those we have fewer approvers, and the regular approvers — Dawn, for instance — are not there. I don't know what to do with that. I will start pinging people more aggressively, but if you are an approver for this area and some PR is assigned to you, please take a look. It's important to keep things rolling, especially in the test area.
A
Thanks for the great summary to start this week's SIG Node meeting. I have two topics, and they are a follow-up to the offline email. For the last couple of weeks at the SIG meeting, people have been asking who the new owner is for node-problem-detector, and also who the new owner is for cAdvisor, because the regular Kubernetes release has been blocked on cAdvisor since David Ashpole changed teams.
A
So we actually identified a new owner — but it's not like there is only one owner; we also want to call out to the community: please help with cAdvisor and node-problem-detector. But we did actively identify some owners, so I want to introduce David Porter. I think many people already know David; he has been working on the node shutdown design and has contributed a lot. So, David, do you want to talk?
C
Hi everyone, yeah — thanks, Dawn, for the introduction. I'm happy to be starting to work on cAdvisor. I'm starting to work with David Ashpole a little bit, just to figure out how to transition a lot of his work in the short term. It seems like the biggest thing for Kubernetes itself is to unblock the release.
C
So there's a new cAdvisor release for every Kubernetes release, and we need to make sure we stay on track with that. Then, longer term, there are some plans to potentially refactor parts of the kubelet integration with cAdvisor and figure out what to do there. So I'm excited to start diving into this area, and also really excited to get help from the community — and from Dims.
A
Yeah, in the past I asked David Ashpole many times: can we take turns? He would always say he wanted to take care of it, and he already did. And now, because he's moving on — he has new assignments in open source and a new task — all of a sudden we realize how big a gap it is for us to fill here. So David, thank you for all your contributions to the SIG and the Kubernetes community. And Dims —
A
Thank you for recognizing this problem — this is a big loss — and for calling it to our attention. Dims, do you also want to share something? We've been talking offline about what we are doing, so that you and David Porter can work together and we can move forward on a long-term plan for monitoring on the node.
D
Yeah, so I just want to lay out the problem a little bit. The main issue here is that cAdvisor is both a binary as well as a library, and not all of what is required for the binary is actually vendored by Kubernetes — just a subset of it.
D
David Ashpole, Jordan, and I have been working for a while to reduce the footprint of the things that get dragged into Kubernetes through cAdvisor, and we've reduced it quite a bit. So now the problem becomes: do we want to keep these two in the same repository? Can we put them in two repositories — the library and the binary — and can the library be part of, say, kubernetes-sigs or something like that?
D
Those are the things that we need to explore a little bit. The hard questions will be around: okay, if it is a library and we want to make changes to the library, then how do we test it, right? And having the library and the binary in two different GitHub orgs under two different owners is going to make the problem even harder. So those are the kinds of things we need to work through.
D
The short-term thing is to get the release done before we do the code freeze — so that's the plan for 1.20. For 1.21 we'll try to think a little bit more about how we test it — how do we test the library if we have two separate repositories, and things like that — so that will be the longer-term work. I will try to work with David Porter offline and put together something that the two of us can bring back to this group.
B
It may be a good idea for people who use cAdvisor as a binary — not as part of Kubernetes — to speak up and help shape the proposal: what exactly is needed, and how we can move it forward.
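(Aside for context: standalone users typically query the cAdvisor binary over its REST API rather than importing it as a library. A minimal sketch, assuming a cAdvisor binary listening on localhost:8080 and the github.com/google/cadvisor/client package; exact method sets vary by release.)

```go
package main

import (
	"fmt"
	"log"

	client "github.com/google/cadvisor/client"
	v1 "github.com/google/cadvisor/info/v1"
)

func main() {
	// Talk to a standalone cAdvisor binary, not the copy vendored into the kubelet.
	c, err := client.NewClient("http://localhost:8080/")
	if err != nil {
		log.Fatal(err)
	}

	// Machine-level facts: core count, memory capacity, topology.
	machine, err := c.MachineInfo()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cores=%d memory=%d bytes\n", machine.NumCores, machine.MemoryCapacity)

	// One most-recent stats sample for the root container and its children.
	infos, err := c.SubcontainersInfo("/", &v1.ContainerInfoRequest{NumStats: 1})
	if err != nil {
		log.Fatal(err)
	}
	for _, ci := range infos {
		fmt.Println(ci.Name)
	}
}
```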
D
True, yeah. One more flip side to this: David Ashpole had a KEP in progress on how to finally eliminate cAdvisor, I think, and switch to something like kube-state-metrics or something like that. So that is an even longer-term process that we need to think about.
D
So, basically, the ask is: people who are interested in this problem space, please reach out to David Porter and me, and we can start working together and figure out how to do this right. David Ashpole has been single-handedly taking care of this for a really long time.
D
So we don't want to make the same mistake again. Even though David Ashpole did a really good job, we don't want to get back into the position where we have very few people doing heroes' work — let's spread the load a little bit.
A
Yeah, I think David's intention was never to be the hero. It's just that he became the hero, because we had been calling out for community help on cAdvisor for a long time, and he was the one who stepped up and handled all those painful points. I also want to point out that even today a couple of SIG Node efforts actually depend on cAdvisor — even the standalone binary.
A
In the SIG Node e2e tests there are things like performance measurement — a lot of places where we use cAdvisor to measure, for example for the node-perf tests we built. That was built by my intern, and after it was handed over to the community, nobody took it over; you could call that heroes' work as well, and it still provides valuable performance measurements for many people today.
A
So I want to call out again: it's not that the individual contributors want to be the heroes. It's just that when we called out for help, they had to take on those jobs because nobody else helped. And now, because they have relocated to new jobs, they want to move on. That's why we recognize these problems now.
A
But it's not like we only recognized this problem today — it has been called out many times. So next I want to introduce our new owners for node-problem-detector, who are not really new: they have been contributors for many months, continuing to contribute after Lantao and I founded that project, and many people are using it. So let me introduce Varsha, and also Hanfei — did I pronounce your name correctly? I know you have recently been active on the project and contributed a lot.
E
Thanks for the introduction, Dawn. I recently started contributing to NPD and, as you know, NPD is about detecting problems and reporting system stats. I've been working on adding a few more system stats, as well as detecting a few more problems from the Docker logs and the kernel logs. I'm still working on consolidating all the different issues that we face on VM instances — like I/O errors or memory-related issues — which I'm planning to add this quarter.
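(For reference, NPD log monitors of this kind are driven by JSON rule configs. A minimal sketch in the style of NPD's shipped kernel monitor; treat the specific condition, reason, and pattern here as illustrative rather than a complete shipped config.)

```json
{
  "plugin": "kmsg",
  "logPath": "/dev/kmsg",
  "lookback": "5m",
  "source": "kernel-monitor",
  "conditions": [
    {
      "type": "KernelDeadlock",
      "reason": "KernelHasNoDeadlock",
      "message": "kernel has no deadlock"
    }
  ],
  "rules": [
    {
      "type": "permanent",
      "condition": "KernelDeadlock",
      "reason": "AUFSUmountHung",
      "pattern": "task umount\\.aufs:\\w+ blocked for more than \\w+ seconds\\."
    }
  ]
}
```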
A
Thanks. And Hanfei, do you want to say hi to the community? You're also helping look into some node-problem-detector issues, and you plan to contribute more there.
F
Yeah, hi, I'm Hanfei — you can see my GitHub handle on the agenda doc, though not my name. Other than the things Varsha mentioned — the Docker part, and the memory part she has been improving — what I have done is migrate NPD to the new image release pipeline, so that people can download new NPD images from the public registry. I also want to mention some future directions for NPD.
F
One thing that may come out soon is adding NPD support on Windows nodes. Currently NPD doesn't have the ability to run on Windows nodes, because the system level on Windows is much different from Linux and COS, so there's a lot of work to do there. There are also other things still in the brainstorming phase that we haven't dug into very deeply yet. One is having NPD add taints to nodes, so that pods can avoid being scheduled on nodes that have certain conditions. Another is adding more signals to detect node failures during the node startup process. Yeah, thank you.
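(A rough illustration of the taint idea — this is a hand-rolled sketch using client-go, not anything NPD ships today; the node name and the taint key are hypothetical.)

```go
package main

import (
	"context"
	"log"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the node an NPD-like agent has diagnosed (name is illustrative).
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node-1", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical taint: NoSchedule keeps new pods off the node while the
	// detected condition persists. The key is illustrative, not shipped by NPD.
	node.Spec.Taints = append(node.Spec.Taints, v1.Taint{
		Key:    "npd.example.com/kernel-deadlock",
		Value:  "true",
		Effect: v1.TaintEffectNoSchedule,
	})
	if _, err := cs.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```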
A
Thanks, Varsha and Hanfei. Besides those two contributors from the Google side, I also want to mention that node-problem-detector previously had another contributor who contributed hugely to the re-architecture we did — which is also what Varsha is building on now. The main re-architecture removed node-problem-detector's dependency on the kube-apiserver, so that it can keep reporting problems
A
even when the API server is not available — this is a really, really useful feature for us. And I want to share: when we did that re-architecture, I invited the author to present to the community, and I believe we have the doc shared, and we also have a recording saved somewhere. If people are interested in those kinds of things, we can re-share those design docs. So that's all — any questions?
G
A question — hey Dawn, I have a question; you may have already answered it. Basically, when some customers use NPD, they also deploy their own NPD in a pod. Are we planning to resolve that issue in the long run?
A
I don't quite understand — what customers deploy is up to them; we have not unified everybody's NPD setup, right? But maybe we're talking about the same thing. Okay, yeah.
G
Go ahead — yeah, I want to clarify a little bit. Because NPD today is not extensible, right? Customers cannot extend it to support their own events or messages. So basically, if a customer wants to use NPD to detect their own customized messages or events, they are doing that themselves.
A
Okay, I just want to say that from day one the NPD design is extensible. It is designed so that a customer — which could be each vendor — can plug into it and introduce their own problem detectors. But it is up to each vendor or user to decide which problems and issues they want node-problem-detector to detect. Take GKE as an example:
A
I know GKE today publishes all of its node-problem-detector rules, contributing them back to open source, but I do know a lot of people didn't publish theirs and keep their own versions. By design we allow that, and there's no single goal: with different operating systems, different distros, and different versions of the kernel, nodes may have different problems, and different versions of the container runtime may also have their own problems. So, from day one.
A
And also, please follow up with Varsha and Hanfei on those NPD issues. Thanks. Oh —
H
Yeah, exactly. So I made a PR a year ago, as a vendor, and it was kind of a nice-to-have that I thought people would need; now I'm kind of on the other side, where I actually need it for myself.
H
So, basically, much of the visibility and usage statistics assume that what you would do is go through all the containers that someone is using in their namespace and add them up, never considering the pod cgroup instead — and, first of all, the pod cgroup is always going to be more accurate than the sum of the containers.
H
So, based on that, as a user I want to be able to accurately account for the usage that a pod takes on the system. To facilitate that, I looked at the metrics/resource endpoint provided by the kubelet: it already exposes container-level stats and everything else, and it already loops over all the pods and everything else.
H
It's just a matter of outputting the pod-level cgroup information as well. Sure, you can keep the container ones too, but I won't scrape those — that's not interesting for me; the pod is the most useful thing, and it's what's missing and necessary. So I rebased and sent a PR for this late last week, and to me it seems non-controversial.
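(A quick sketch of what consuming such pod-level series could look like. The read-only port 10255 and the pod_ metric-name prefix are assumptions for illustration — the read-only port is often disabled, and the exact metric names are whatever the PR defines.)

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Kubelet's Prometheus-style resource metrics endpoint, via the
	// unauthenticated read-only port for brevity; production scrapes go
	// through the authenticated port (10250) instead.
	resp, err := http.Get("http://localhost:10255/metrics/resource")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print only the hypothetical pod-level series, skipping container ones.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "pod_") {
			fmt.Println(sc.Text()) // e.g. pod_memory_working_set_bytes{...}
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```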
H
I hope it's not, but if it is — basically, I just want eyes on it, because I really need it and would love for it to get into 1.20. So, any feedback: if this isn't the right mechanism, is there a different mechanism?
H
That's kind of where I'm at. The request is just for feedback on it — whether online now, if people have thoughts, or otherwise just comment on the PR. Relatedly, I know that Clayton, I believe, has a KEP around pod-level resource stuff in general, from an accounting standpoint. So maybe this is somewhat related to that; maybe I'm helping him out by getting one of his PRs out of the way for that KEP. I'm not sure, but either way.
A
It doesn't sound controversial — I like the idea, and I do think we need the pod level. I totally agree the pod level may be more accurate: if a container dies and you restart it, you only measure the newly created cgroup's resource usage, but certain slab usage goes to the parent, which is the pod-level cgroup. So this is a way to have a more accurate measurement, obviously for memory — the usage rolls up from the original container cgroups through the different layers of the hierarchy to the pod parent cgroup. So please share your concerns, if you have any, or any disagreement.
I
Hello, yes — I'm new to the SIG Node meeting, so thanks for having me. I'll drop a note in chat with a HackMD file, if you want to open that. We actually have the expert who worked on this before, Stephen Heywood. Stephen, do you want to walk through the arguments that we have?
J
Hi — it's in the document, if you're able to follow along or share it on screen. We're working with the SIG — sorry, SIG Architecture — and the conformance subproject on conformance for Kubernetes across the board.
J
We've got a number of endpoints that we're dealing with as part of 1.20 around proxy. The endpoints we started looking at were for node proxy, and there are issues around them at the moment. There's a generic redirect happening for the base API endpoints; once it's removed — at the moment, as part of my own internal build of k8s — it returns a 404 error: basically, there's nothing there. So I'm just wondering: is there anything suitable that is actually likely to be there and able to be tested, or is it just not a feasible endpoint and something that needs to be looked at as part of this?
J
And the second request is around the node proxy with an appropriate path. I've already done some initial checks where I've been able to get some responses using the configz or metrics endpoints. So it would be great to find out if there's a way the nodes could have a standard endpoint that can be used as a health check without directly relying on the kubelet — I'm just not too sure about the kubelet internals and how the node does a lot of these things.
J
As part of doing the work on the pod proxy and service proxy endpoints, I've been looking at getting a custom pod behind the scenes to use as part of the conformance tests; I'm just not too sure what's able to be part of GA long term.
A
So I can fill in a little bit of the background context. The proxy endpoints were introduced a really long time ago, at a very early stage of Kubernetes, when folks were worried about how to debug the system. At that time we hadn't really discussed it openly, but a lot of people already had ideas of that sort.
A
So we introduced those proxy endpoints mostly to expose some of the information on the node and to improve some of the node-level debugging that was needed. That's the original reasoning. Over time, obviously, people started using them for different reasons, but for a long time now — for architectural and also for security reasons — we have wanted to deprecate those endpoints.
A
We want to deprecate those endpoints, but we also know some customers and users are still using those kinds of things and have automation built around them. So maybe talk to Tim Allclair — in the past he looked at this from the SIG side, and he also represented the GKE team on the lower levels of security.
A
I remember he looked into those endpoints and came up with a plan to deprecate them, so maybe we can carry on the discussion from that. Can you send an email? I can share his email with you — or maybe ping him through Slack.
J
Yeah, that's okay — I was just wanting to get a little bit more context on those endpoints. I can understand part of the reasons for wanting to deprecate them; it's just about hopefully getting a solid answer.
I
If we can get a solid answer that we will deprecate those, it will also help us know how to move forward, and we can actually remove them from eligibility for conformance, which would also give us a better indication of where we are with conformance coverage overall. So yeah, it would be nice if we can get a clear answer on whether we deprecate them or not, and then we can mark them.
A
Yeah, thanks. I think there are many ways to move forward, at least for the conformance tests. We could just call out that by default this is not supported for conformance; we could introduce a feature gate that is disabled by default, and customers could still run a non-conformant Kubernetes if they have the dependency. There are many ways. We can also call this out publicly: we are going to deprecate, announce the deprecation, and give the community some time.
A
That way users can prepare to move and get their automation ready to drop the dependency on those endpoints; then, after some time, we can remove them.
A
That's why I suggest talking to Tim: he had a plan to remove many of these dependencies for security reasons in the past, so we can start from there. But we can definitely move forward, I believe — and I'm not sure I answered your question fully here.
A
Thanks. Next topic — Sergey, do you want to talk about yours?
B
Yeah, there is a small PR — it's a new one — that introduces a new header for all the probes, namely Accept with */*. This is similar to what curl does by default, so I don't see any problems with that. The only question for me is that this PR adds 11 bytes to every liveness and readiness probe, and I wonder if anybody has experience with the performance tests and an understanding of how much this may affect high-density nodes.
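(For scale: the header in question is the literal bytes `Accept: */*` — 11 bytes of name plus value per probe request. A minimal sketch of a probe-style request carrying it; the target URL is illustrative.)

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
)

func main() {
	// Build a probe-style GET request; the target URL is illustrative.
	req, err := http.NewRequest(http.MethodGet, "http://10.0.0.5:8080/healthz", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Accept", "*/*") // the new header under discussion

	// Dump the wire format to see exactly what each probe now carries.
	raw, err := httputil.DumpRequestOut(req, false)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", raw)
	fmt.Println(`len("Accept: */*") =`, len("Accept: */*")) // 11
}
```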
A
Looks like nobody has. At least, I didn't look at that PR, and I also didn't look at what problem it tries to address. So we may need to follow up offline — at least I need to figure out what problem it's trying to solve.
B
Okay, makes sense. Do you want to do it now? I think we're short on time, so we can do it offline, or I can quickly introduce the problem.
B
A reminder to everybody about the timeline: the enhancement deadlines are coming up, and for every enhancement that is approved for 1.20 we need to have code merged by code freeze, and we need to create the placeholder PR for documentation — ideally, the documentation should be updated fully.
B
So, if you own one of these enhancements, please try to move it forward and react to the release managers' pings. I'm curious specifically about the CRI alpha-to-beta migration — Mrunal, are you on the call? Yeah.
K
Yes, okay. So Sasha just opened a PR — mainly we need the seccomp changes; I'll paste a link to it. Once that is merged, Mike Brown and I will open PRs to clean up comments and documentation, and then it should be good to go. Just a sec.
B
Cool, okay, yeah. I just wondered: we're progressing with dockershim removal, and we said that CRI graduation is one of the requirements to do that deprecation, so we need to make sure that it fits into 1.20.
K
Yep, of course. I put a link in chat; I can add it to the doc as well.
B
Thank you. I also started marking enhancements that have merged in green in the enhancement tracking document that we had before — a feature health check. So if you own an enhancement, please update your own entry, so we'll have a clear picture and, by 1.20, a good understanding of where things stand.
B
Then the last question I wanted to ask: some features were marked as desired to be deprecated — specifically dynamic kubelet config — and I'm not sure about the process: does deprecation also need to be approved, and have we already missed the deadline? Or is it something we can do now, at least marking some features as deprecated so that going forward we can start removing them?
B
I can help follow up with the release team about what they think about the process and bring it back to the next SIG Node meeting, if nobody has the context here.
B
Okay, thank you. And I believe there are a few more topics about feature requests further down the agenda.
M
Hi, Swati here. We have kind of a question in relation to topology-aware scheduling, particularly about where we place the NodeResourceTopology CRD definition. Just to refresh everyone's memory, we have two KEPs in question: one is the scheduler plugin KEP, which proposes that a topology-aware scheduler plugin live in-tree in Kubernetes, and the other component is the exporter, which exposes per-node CRD information corresponding to the node's hardware.
M
So the main glue between these components is the CRD API definition: the scheduler plugin needs to import the API definition, and NFD needs to do so as well. We had a couple of asks in relation to NFD — what its release cadence is, and things like that — and we followed up on that: NFD's release cadence is ad hoc. So, basically, we proposed three options.
M
One was to have this API definition in NFD, but that was not a viable option because of circular dependencies. Another option was to have it in an external repository, say in kubernetes-sigs; I had a conversation with Dims about that, and he explained that it again has issues because of circular dependencies — Kubernetes would be trying to import the API definition while the API definition needs to import Kubernetes. So that leaves us with the only viable option: having it in staging.
D
So I don't have any other solution at this point. But, you know, I can talk to Jordan and we can think about a few things. At this point we should just go with the pattern that is already there and not try to invent something new — so I'm okay with starting with staging.
D
I also talked with Nikhita this morning about whether, when the API server starts up, it could pick up CRDs from somewhere — you know, trying to come up with some other solution — but anything we can think of is not going to be for 1.20; it's going to be for later. And at this point, even SIG Storage ended up having some definitions in-tree: I remember PVCs and stuff like that ended up in-tree from SIG Storage as well.
D
The first question I asked Swati was: does the in-tree scheduler need to import this, and is it going to be on by default? The answer was: it is going to be imported, but it's going to be off by default, at least for the short term.
D
So yeah — if the answer had been a little bit different, then I would have said to just go off and open a separate repository. But the key here is that it needs to be built into the kube-scheduler, and there's just no other way around it.
M
Okay — so, typically speaking, there is the option of making it out-of-tree, but given that there are limitations within Kubernetes itself, that is what leads to the problem we are trying to address. Yeah, so we have two KEPs: one for the scheduler plugin, and one for the exporter component, which will be in NFD.
D
One other thing you can do, Swati, is just summarize the conversation here and send it to both SIG Scheduling and SIG Node.
D
That way there is a track record of what we ended up deciding and who needs to be involved, right — and of the problem statement as such: that there is a circular dependency and there is no good solution to it.
L
Yeah, hey, Chris here. One thing is just a reminder for review — I mentioned it last week, so no more about that; I'm just worried about whether we can fit in before code freeze, and that's the reason I keep reminding so often. And my question is about the issue tracker on kubernetes/enhancements: I know we made one for the memory manager, because that was a new KEP and a whole enhancement of its own, whereas the topology manager scope is a feature of the topology manager.
N
My guess is that, because of the nature of the change for the scope, we can probably just continue tracking it as we have been, whereas the memory manager was a much bigger change — it was its own hint provider and all that — and it wasn't just a change to the topology manager itself. So that's why I think it made sense for it to have its own tracking issue.
A
Thanks. We normally just leave it up to the feature owner to decide whether we need a separate tracker; the tracker just helps our communication and helps us check progress on those features. So there's no strong preference, unless we think it would be harder for the community or for the group to understand the progress, or there's a big change.
A
So mostly the feature owner makes the decision — at least that's been the case in the past — but sometimes we may suggest you have a separate tracker if the work is complicated or harder to track, or there's obviously one enhancement under which we're doing a lot of things. So the opinion is a little bit based on our own judgment here. And I saw your PR, and Derek is on it — I saw Derek reviewed all the things.
A
Thanks. Next one — Francesco, do you want to talk about the pod resources API?
M
I can speak to this — I don't know, there might be some audio issues on Francesco's side. This is in relation to the pod resources API. As part of the Kubernetes enhancement proposal we had three components: one was introducing the topology information — the CPU IDs; the second was introducing GetAllocatableResources; and the third component is the Watch implementation.
M
We've essentially completed the first two parts. The third part is work in progress and might not be able to get into 1.20 before code freeze, so we're trying to figure out what to do in this situation: do we modify the KEP, or do we just capture somewhere that this part of the KEP is being addressed through that PR?
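(For orientation, this API is the kubelet's PodResourcesLister gRPC service on a local unix socket. A minimal sketch of a List call — the socket path and the k8s.io/kubelet/pkg/apis/podresources/v1 package are the usual defaults, but treat the details as assumptions for this point in the KEP's life.)

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	pr "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The kubelet serves PodResourcesLister on a local unix socket,
	// so no TLS is involved here.
	conn, err := grpc.DialContext(ctx,
		"unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// List reports, per container, exclusively allocated devices and
	// (with this KEP's additions) the CPU IDs assigned by the CPU manager.
	resp, err := pr.NewPodResourcesListerClient(conn).List(ctx, &pr.ListPodResourcesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range resp.PodResources {
		for _, c := range pod.Containers {
			fmt.Printf("%s/%s cpu_ids=%v devices=%d\n", pod.Name, c.Name, c.CpuIds, len(c.Devices))
		}
	}
}
```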
A
So I personally don't want the process to block your effort, but once we are done, I want the KEP to be kept up to date with the latest status. That's just my personal preference, because a lot of the documentation and a lot of other things will reference the KEP as the source of truth — and also during development time, right? The KEP is used for developer communication, review, and all those kinds of things. So we also don't want that process to slow down your progress.
M
So, with the two components we've landed, we keep the KEP as it is with the three components — that's what you're saying — and eventually, when Watch is published, we update the KEP accordingly as well. Yeah.
A
Slightly different, maybe: you could send a PR now to update the KEP, but we keep it as approved — it's just another enhancement PR pending there, keeping the KEP aligned with the implementation details and progress, so the KEP stays up to date. I think at the end, when everything has merged, the KEP status should represent the current status. That would be best.
K
Hey, a quick one — that PR I pasted earlier from Sasha, right: it's changing how seccomp works, but now we have a chicken-and-egg problem, because if we change this, the runtimes are not updated — containerd tests will fail immediately. So we're thinking of doing it in two steps: first we just add the new field, we update containerd, we update CRI-O, and then we make another PR to remove the old field. So right now it's just being marked as deprecated.
A
So, you know, you can always ping me if you want. Yeah — thanks, thank you, everyone. Lastly: we had a great meeting today; we talked about a lot of topics, and I'm looking forward to next week — oh, we canceled next week, sorry. I proposed to cancel it; if anyone has a concern, please ask now. The current plan is that next week — next Tuesday — is the voting day.
A
So that's why I plan to cancel, so people can plan around it. But please, if you have a topic you want to discuss or get attention on, you can always ask through Slack or send an email to the kubernetes-sig-node group.