From YouTube: Kubernetes SIG Node 20220920
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20220920-170440_Recording_1920x960
C: I just want to make sure that all of the KEPs that were worked on in 1.25 will be worked on in 1.26 as well. So if the owners can just mark this column with yes, then we should be good, and we will know that we'll be working on those KEPs.
D: There was one about CRI metrics right before that we missed; let's go back to it.
B: Either Peter or David, do you guys want to give a quick update on what you're planning to do?
F: Can I chime in? We are planning on working on this. We're retargeting it to alpha, or redoing alpha, and we're planning on extending the CRI to account for the metrics cAdvisor endpoint, as well as adding Windows support to all of it. Those two things are the criteria for alpha, and we just have to add a KEP update for them.
B: All right, so the next one in that list is dynamic resource allocation. That work is going to continue, right? We have a group of folks working on it. Does anyone want to add anything to it?
G: Yes, Patrick and Kevin are working on that, plus a few other engineers on our side. The pull request is supposed to be moved out of draft in the next few days.
B: Okay, all right. So the next one is splitting the standard out and standard error log streams. I know we had a pull request out for it; I'm not sure if the author is on the call today.
B: All right, the next one is evented PLEG. Does someone want to talk about it quickly? I know it's in progress. Do we have a rep?
I: Yeah, sure. They're going over the API, doing prototypes back and forth. The KEP probably needs a quick little update based on that.
C: There is some issue, that's right; I was just raising my issue. We discovered one on the kubelet side, and we're trying to figure it out, but yeah, we'll be working on this in 1.26.
B: Thanks, Mike. So the next one I see was just added to the list: retriable job failures.
H: Yeah, this was, I think, trying to have a way of denying pod admission on the kubelet side to protect certain classes of workloads. We would need the original folks who presented it to let us know if this is still a thing they want to pursue, or whether they have cycles for it this time.
H: I don't think the KEP was ever actually fully resolved.
B: Yeah, we can poke them offline and see if anybody has Saran's email. Otherwise, I can chase it down.
B: Okay, so the next one is marked as not active, so I assume we just skip over those. Okay, so the next two are not active. The next KEP is fine-grained kubelet API authorization; that's a deferral, and I'm not sure whether that's a recent update or it's from the 1.25 timeframe.
B: So the next one is node swap. Ilana is not around, so is there anyone else interested in picking this up in 1.26?
D: I'll see if we can pick it up, depending on what other things we'll be doing.
D: Yeah, it's all about just removing this flag. I haven't really checked the status of people transitioning, but I know we recently had this issue again with some Windows customers, so people still rely on this. One second; I mean, the timeout is never enforced today, and when this flag is enabled, people, I mean workloads, may suffer. I think for now we can just keep the logic there, with the flag, for a while.
I: I think we might want to look at this to see if we can add some additional test cases. We've got another probe KEP lower down that we might be able to include test cases in for the exec probe timeout; we probably already have additional buckets.
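For context, the flag discussed here appears to be the kubelet's ExecProbeTimeout feature gate. A minimal sketch, assuming that reading: a pod whose exec probe runs longer than its timeoutSeconds, a setting that historically was not enforced for exec probes unless the gate is enabled (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: exec-probe-timeout-demo   # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/busybox
    command: ["sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        command: ["sh", "-c", "sleep 5"]  # deliberately slower than the timeout
      timeoutSeconds: 1  # only enforced for exec probes when the
                         # ExecProbeTimeout feature gate is enabled
      periodSeconds: 10
```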
B
All
right,
anyone
can
add
a
note
there.
We
can
move
to
the
next
one.
Okay,
Swati,
are
you
on
the
call
yeah
I
am.
K: Let me take a closer look into this; I haven't looked in a while. Maybe you can put it as pending or something? Let me think. Okay, there's some interest in this. Okay.
C: Yeah, the KEP was merged and the code implementation was done in 1.24 as alpha. I'm just not sure if there's any plan to bring it to beta in 1.26, so yeah, I was just curious.
C
The
call
by
the
way,
okay,
okay,
no
then,
let's
see.
C
We
can
book
Kevin
and
see
right
all
right
right,
but
anyways.
This
is
not
it
it's
Alpha
already.
So
if
we're
going
to
work
on
it,
it'll
be
just
a
brain
into
beta.
I
will
keep
it
here
and
in
Kevin
later
on.
Okay,.
B: All right, the kubelet metrics endpoint.
C: I don't think so. There doesn't seem to be any update on this KEP for two years or so.
H: I'm trying to remember the history on this one. This was done originally with SIG Network, if I recall right, yeah, and I don't think it was a regular contributor who drove it originally.
C: Yeah, the last update on the alpha PR was two months ago. I can go try pinging Shimane, the owner.
C: Okay, yeah, and see if I can get any update. If not, I will just mark this as not ready for 1.26 again.
B: Okay, sounds good.
B: All right, the next one is graduating the CPU manager from beta to GA. Francisco or Swati?
H: I guess one thing I'd like to clarify is that I don't want to end up in a state like we did with device plugins; we should see something through to completion before doing the dynamic resource stuff. I would actually find it frustrating if we can't move the existing stuff to GA before starting a new alpha thing. Perma-beta is a bad thing.
L: Yeah, it's the same category. If there is interest, I can do my best, but really, no promises; it's best effort. I need to pick one, and I'm in a slightly better position on the CPU manager. But really, if there is interest and reviewer availability, I will try to help push this within the priorities I set. However, if anyone wants to join the fun, be my guest.
D: Yeah, I haven't looked at this; I just glanced over a thread on Slack saying that we want to change the semantics of device management. Is that part of it? Do we need to do it before GA?
L: Right, so, okay, good points. Basically, there is a bug in which a device plugin broke, but after a first investigation it seems that the behavior which is now enforced by the device manager in 1.25 and above is actually the documented one. So despite my initial feeling that this was a breakage, it turns out the device manager is following the documentation, the documented sequence, so the device plugin seems to be at fault.
L: In that case, it seems this bug is a documentation bug: okay, starting from this version, we are really enforcing what is supposed to be the expected flow. I need to double-check, but it really seems to be the case, so it doesn't seem we need an urgent fix in this area, does it?
I: Sure, we think this one's ready. We did have a SIG Node review last year when it was first ready, and there was a concern brought up about probe charging: that it's charged to the container, and if you do too many probes, using sub-second periods, it would be too much of a performance hit and too expensive.
D: Okay, I'm just curious if streaming, some sort of streaming API, might be better in this case. For gRPC, streaming can be implemented, and for HTTP we can just listen on an open connection for a while. I'm curious if this was considered.
I: A redesign of the probes, that wouldn't be this, but yeah, we could do a faster implementation of probes, which would make doing sub-second probes cost less, right? That's a different issue, I think.
D: Yeah, I'm just worried about how much we can protect customers from shooting themselves in the foot, like if they...
I: ...will do sub-second, yeah, exactly. That's why we're saying we can make the scaling, and we can provide a lot of documentation to help explain to them how not to do that, and we can stop them from doing it by only allowing it to check once quickly and then slowing it down tremendously in the next few times.
I: I don't think we want to hold this up for a redesign of probes entirely.
C: So Marcus says yes, it'll be worked on in 1.26. By the way, the KEP is still marked as draft; if it's ready for review, can we just undraft it?
B: And the one after that is QoS-class resources.
B: Maybe Ryan, do you think you can help?
O: Sure, okay. So this is the feature about retriable and non-retriable pod failures for jobs, so it's owned by SIG Apps, and it's already in alpha.
O: But now we want to extend the scope of the feature to also modify the kubelet. So let me briefly introduce what we did in alpha. In this feature we want to give a little bit more control to the users for handling pod failures; the standard job configuration for now gives just the number of retries, called backoffLimit.
O: But often some failures can be categorized as non-retriable because, for example, it's known that a given exit code is just a user error or misconfiguration. On the other hand, some pod failures are caused by disruptions, which should just be retried for free, without even incrementing the counter towards the limit of retries. The implementation of this feature lives in the job controller.
O: First of all, we respect the configured pod failure policy, which is a list of rules that determine how to handle the pod failures. Another part of this feature is to unify the pod end state so that the pod failure policy can easily match against it. What I mean by that is that we add pod conditions when evicting a pod.
O: They indicate what the reason for the pod failure was. For now, in alpha, we introduced DisruptionTarget, which is a new pod condition type that we add to annotate pod failures caused by taint eviction or preemption, for now. But there are other scenarios, and in beta we want to extend the set of scenarios to kubelet-initiated evictions; in particular, we want to cover the shutdown case, graceful node shutdown, which is of particular value to users.
O: So again, for evictions initiated by the kubelet, we want to give users this ability for free. Among other evictions, we also want to cover limits, such as memory limits and ephemeral storage limits, and also evictions due to admission failures.
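A minimal sketch of the alpha API described above, using the Job podFailurePolicy field; the exit code, image, and names are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pod-failure-policy-demo   # illustrative name
spec:
  backoffLimit: 6            # the plain retry counter mentioned above
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: registry.k8s.io/busybox
        command: ["sh", "-c", "exit 42"]
  podFailurePolicy:
    rules:
    # Non-retriable: a known user-error exit code fails the Job outright.
    - action: FailJob
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    # Disruptions (taint eviction, preemption) are retried without
    # counting against backoffLimit.
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget
```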
B: That brings us to the end of the table, folks. If you have anything else, please let us know so it can be added there.
D: There are also the gRPC probes that went beta in 1.24. I'm not sure if you want to push them further; I looked at the usage, and we have very little usage for now, even though the implementation is quite straightforward and everybody seems to want it. But nobody can actually use it in production, because version 1.24 of Kubernetes is not widely distributed yet. So the question is: do we want to push it to GA even though we don't have too much usage, or do we want to wait another release? Any opinions?
D: Yeah, and I presented these gRPC probes to the gRPC community at their meetup, and they seemed to be happy with that. But again, they don't have too many people to try it on, because that version of Kubernetes is not widely in production.
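For reference, the built-in gRPC probe under discussion (beta in 1.24 behind the GRPCContainerProbe feature gate) is configured roughly like this; the port and image are illustrative, and the target must implement the grpc.health.v1 Health service:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo   # illustrative name
spec:
  containers:
  - name: etcd
    image: registry.k8s.io/etcd:3.5.1-0   # any server exposing grpc.health.v1
    command: ["etcd", "--listen-client-urls", "http://0.0.0.0:2379",
              "--advertise-client-urls", "http://127.0.0.1:2379"]
    livenessProbe:
      grpc:
        port: 2379
        # service: ""   # optional grpc.health.v1 service name to check
      initialDelaySeconds: 10
```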
P: The side related to SIG Apps is pretty much approved, because it was approved for alpha. What we need from SIG Node is approval for any changes related to the kubelet, so pod admission failures, graceful shutdown, and OOM kills.
P: So we need an ack.
P: Because from the kubelet we're going to be inserting pod conditions based on those errors.
P: Yes, yes, the feature is already tracked because, I mean, it's one feature and the KEPs are already approved. But yes, I'll clarify what the request for SIG Node is.
C: Add them to the end of the first table, just for better visibility, and yeah, then that'll be great.
P: All right; sorry, can you clarify what you meant about the first table?
C: Oh, because we currently have two tables, one from 1.25 and one for the KEPs cut from 1.25. You can add the KEPs that need a SIG Node review to either table, but preferably the first table, for better visibility, so that we can all see them.
E: A comment on the PodHasNetwork condition feature that we worked on for alpha in 1.25: do we want to track things that we want to let sit for a release before moving them, or do we just leave them off the table if we don't plan to work on them actively in the next milestone?
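For context, the feature mentioned here appears to be the PodHasNetwork pod condition (alpha in 1.25 behind the PodHasNetworkCondition feature gate), which the kubelet reports once the pod sandbox is created and networking is configured. An illustrative fragment of pod status:

```yaml
# Fragment of `kubectl get pod -o yaml` output; timestamps are illustrative.
status:
  conditions:
  - type: PodHasNetwork     # set by the kubelet after sandbox creation
    status: "True"          # and network configuration
    lastTransitionTime: "2022-09-20T17:04:40Z"
  - type: Initialized
    status: "True"
  - type: Ready
    status: "True"
```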
D: Yeah, another small thing I want to bring to attention: I saw two PRs talking about the external cloud provider. We have duplicated settings in the kubelet to mark the cloud provider as external, and we want to move to the config file, mostly because we want to have a taint on initialization. Somebody suggested that we may want to redesign it and just provide a set of taints on kubelet start, so it's not specific to the cloud provider, and we get rid of the cloud provider notion completely. Any thoughts?
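For context on the initialization taint mentioned above: when the kubelet runs with --cloud-provider=external today, it registers the node with a well-known taint that the external cloud-controller-manager removes once it has initialized the node. A sketch of that taint on a Node object (the node name is illustrative):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1   # illustrative name
spec:
  taints:
  # Applied by the kubelet at registration when --cloud-provider=external;
  # removed by the cloud-controller-manager after node initialization.
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule
```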
D: Yeah, I'm thinking, I don't know, we can just convert everything and do minimal changes, but I understand there are desires to change more of the semantics of it.
A: Okay, thanks. Please just follow up with us, either through this doc's notes or through Slack; we are finalizing the 1.26 planning. I also see that for certain things we have the author saying yes, but they may still not have the reviewer and approver marked there. So maybe we need to finalize those things too.
B: Sasha or Marcus, do you have folks from Intel to talk about that one?
M: Cool, so yeah, a quick update on the QoS-class resources. We renamed it since the 1.25 cycle; it used to be class-based resources, and now we think the best name is QoS-class resources, though I don't know if that makes it any easier to comprehend. Anyway, a little bit of context to remind people what this is about: we're trying to improve the quality of service of applications, and we are targeting certain kinds of resources that are inherently shared by workloads.
M: But there is no support in Kubernetes for handling these resources at the moment. We're talking, for example, about cache, memory bandwidth, and disk I/O, where there are existing methods and technologies to control them, but not in Kubernetes.
H: The other thing that I'd look at with Kubernetes is that we are steering you to patterns; you know, when you need to use disk, you stick your state on things not bound to the node, right? So what I was wondering is, for the resources that are shared by workloads: disk I/O, to me, is basically fair sharing of logging, right? But to me, in some ways it's an anti-pattern for pods to be abusing the local disk.
H: So do you have particular use cases for each of these, where you could express, beyond just "hey, the Linux host offers this," how adding this along with typical Kubernetes usage patterns would make the two work better together? I just pick on disk I/O a little bit here because we were having this discussion last week, actually, at Red Hat, where we had a user go and stick a disk-abusive pod on the same cluster node.
H: And we were having to remind folks that that would generally be a bad use case; people should be writing to remote volumes. The only time this would be happening is if the kubelet itself was pulling images, not necessarily the workload, or the workload was being abusive in how it writes logs. So for each of these, I guess, I was wondering if we could tie it to the perspective of a particular pod, or a more specific case.
G: The initial use case for it was a streaming application, which was streaming large files from local storage, from local RAID.
H: Helping us compare it to a particular workload is probably cool, right? Because the most common thing I see with kube right now is how often a pod writes to its local log file, which the kubelet is actually writing; that is probably the biggest area where I see disk I/O contention right now, or the kubelet even deciding how to do fair sharing of pulling images, or the runtime. I'm sorry; anyway, thanks for the clarification.
M: Let's go forward here, so yeah. This is what we are targeting at the moment, and where support in the container runtime already exists. So what are these QoS-class resources? We think that this should, or could, be modeled as a new type of resource in Kubernetes. A property of this kind of resource is that multiple containers can share the same resource, or class.
M: So for a container or pod you assign a class ID, or identifier, instead of reserving an amount of some capacity that can be counted; they're sort of infinite, or non-countable, resources in that sense. And why? Well, again, some resources are inherently class-based by design in the hardware, for example, and another use case would be simplifying the user interface for some controls.
M: A little about the KEP: we have now split the KEP into multiple implementation phases. The first phase, which we have done and I would like to get included in 1.26, is a really small first step: update the CRI protocol and API to be able to communicate resource assignments to the container runtime, and then use pod annotations as the initial user interface for users to communicate resource assignments.
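A heavily hypothetical sketch of that annotation-based interface; the annotation key and class name below are invented for illustration only, since the actual scheme is whatever the KEP settles on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-class-demo   # illustrative name
  annotations:
    # Hypothetical key and value; the real annotation format is defined
    # by the QoS-class resources KEP, not by this sketch.
    qos-resources.example.com/container.app: gold
spec:
  containers:
  - name: app
    image: registry.k8s.io/busybox
```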
M: The implementation is really, really small at the moment; excluding generated code and tests, the diffstat is really small. The blog post about this is already in the works, in a kind of draft state. So if we get the okay to get it merged, then I think we can quite quickly proceed with the implementation PR and a blog post explaining the enhancement to the wider public.
H: Let's say we were to do something like this, though, right. How would you recommend segregating the resources that a pod can claim from a class, from the management components on the node being able to use those resources? For example, if we had a class-based resource for disk I/O, or block I/O, we'd still probably want to be able to ensure, for example, a quality-of-service guarantee around pod logs.
H
How
would
you
want
us
to
evaluate
future
choices
because,
right
now
it's
like?
If
we
have
this,
then
at
some
level
we
don't
know
what
we
should
or
shouldn't
take
on
itself,
inherently
in
Cube.
M: At least one aspect of this is that I think most of these resources are not guarantees of anything; they're more like throttling, in that sense. So, for example, with disk I/O you cannot really, in any meaningful way, guarantee any bandwidth, or anywhere close; it would be more of a throttling thing, yeah.
H
So
if
I
wanted
to
guarantee
a
higher
amount
of
disk
IO
to
The
Container
runtime,
who
was
pooling
images
or
starting
containers
relative
to
the
workload
that's
running
on
that
node,
do
you
have
thoughts
on
how
this
class-based
pattern
would
help
or
hurt
that
going
forward?
I'm
just
trying
to
think
like
as
a
node
planner
I
went
on
Twitter.
H: Yeah, yeah, but I still need kind of a fair-sharing constraint among them, right? So at the slice level of some node in that hierarchy, I'll have to set a weighting between the container runtime process, somewhere under system.slice, versus something under kubepods.slice, and so I was just trying to think that through.
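On a cgroup v2 host with the io controller enabled, that kind of weighting can be sketched with systemd resource-control drop-ins; the values and paths are illustrative, and this assumes the runtime lives under system.slice and pods under kubepods.slice:

```ini
# /etc/systemd/system/system.slice.d/io.conf (illustrative path)
[Slice]
IOWeight=500   # favor management components (runtime, kubelet) under contention

# /etc/systemd/system/kubepods.slice.d/io.conf (illustrative path)
[Slice]
IOWeight=100   # workloads get a smaller share of disk time
```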
H: Yeah, so I bring this up because we have system reservations today, right, for CPU and memory and PIDs and some other resources, and...
A: Sorry, I have to interrupt for timekeeping. Marcus, can you attach the link to your slide deck to the agenda, so people can follow up more with you? You also have the KEP PR we can follow up on. I also need to refresh my memory; I know you talked about this in the past, but it's been so long. So yeah.
M: Yeah, sure, I will do that.
I: That's a great point, Derek. I think we need a node set of resources, as a class or in subsets, right, so we can manage them. Yeah, that makes sense.
N: Yeah, Derek, I think you have a great point. Actually, we got some customer use cases that require guaranteed disk I/O for their critical workloads, so I think we need to think about that. The way this KEP is class-based, it doesn't provide guaranteed service; these aren't guaranteed like other resources. I think we need to think about that together with this, you know, non-guaranteed model.
K: Yeah, I just wanted to add that on our side we've been looking a little bit into this issue too, with different I/O schedulers like BFQ and those types of things. So it would be interesting to understand what the underlying mechanism would be, like the I/O scheduler or some type of cgroup v2 I/O controller, to do some of this disk isolation work. I think that would also be interesting; I'm not sure if that's covered somewhere.
N: On that point: this is not an easy problem to solve, you know, to provide guaranteed service, but we're working on it, and we may come back with some updates or some new proposals on that, yeah.
A: Oh, by the way, last week we did agree to have the working group, right, but that's for dynamic resource allocation. Is that working group only for dynamic resource allocation, or is it broader?
G: It was not about dynamic resources; it was about kubelet resource plugins. That's the model that was presented, and the people who responded with interest have been selected and it's being scheduled already.
A: I just hope, since there are so many resource-management-related topics, that we can take a holistic view and have the working group take the wheel, instead of going through individual proposals one by one. Derek also already mentioned that we've started so many new efforts; we wanted to finish existing ones before moving to the next. Instead, we have a lot of alpha proposals.
A
Try
to
address
the
different
problem,
but
without
of
the
holistic
wheel
for
the
holder.
No,
the
resource
management
I
understand
resourcement
is
always
complicated,
never
finish
those
work,
but
this
deal
I
hope
like
every
time
we
discuss
and
there's
the
review
at
the
same
time
to
say:
oh
what
it
is
with
for
this
proposal
and
another
purpose:
how
at
the
node,
like
the
as
the
node
manager
earlier
I,
like
what
directly
say
a
load
planner
and
we
are
going
to
manage
it
at
the
pronoun,
all
those
kind
of
resources,
yeah.
A: Yeah, and think about the people operating the node. Not even all of us understand the full depth of those proposals, so how can we say we're using them right? A user could maybe understand one proposal, because that's exactly what they asked for, but remember they have all kinds of workloads with all kinds of resource management requirements. So yeah, exactly.
Q: Yeah, and that is the meat behind the discussion, John, that I would like to have: basically, come up with one model, so it's easy for us to do different resource management models without having to shove everything through upstream and bog down the maintainers, who are oversubscribed here, and the rest of the community, so people can go do their custom stuff without having to go through the kubelet source.
H
And
I
think
one
thing
that
upsets
me
that
I
wish
we
were
smarter
about
in
the
past
and
then
we
think
about
in
the
future
is
having
ways
to
identify
workloads
that
run
on
a
node
that
are
just
management
components
versus
like
distinct
workloads,
I
guess,
I
wish.
We
had
that
separation
from
the
start,
but
particularly
as
we
make
like
workload,
components
more
flexible,
like
the
reason
I
just
keep
bringing
up
like
needing
to
reserve
resources
for
the
management
stuff
is
like.
H
We
just
want
to
make
sure
that
node
always
stays
ready,
even
when
these
other
workloads
are
consuming
things.
So,
whether
that's
like
protecting
the
runtime
protecting
the
keyboard
or
even
like
protecting
like
a
metrics
exporter
or
the
cni
plug-in
like
trying
to
think
about
ways.
We
don't
ignore
these
management
components,
I,
guess
yeah.
H: Anything that contributes to the node being reported as ready, I would say, or letting you know the health of that node. So typically I would include stuff that's commonly deployed as DaemonSets today, which would be your CNI plugin, or maybe a metrics exporter; your node isn't ready until your CNI reports ready. So, I don't know, I bundle the two, saying they're both management components at some level. Yeah, I guess that's that.
R: I think at this point we're mainly looking to see when we can have containerd come in. Mike, do you have any sense of what a release date might be for the next release, which contains these changes?
R
That
would
probably
be
the
next
trigger
for
making
changes
the
CI
to
pick
it
up
and
I.
Don't
know
when
that
might
happen.
I
also
tried
tried
out
Creo,
and
currently
it
is
not
supported
in
Cuba.
If
there
is
any
interest,
I
could
add
that
I
try
it
on
the
local
cluster.
It
seems
pretty
simple
to
do
at
least
for
Ubuntu
David
Porter.
If
you
have
any
concerns.
K: Yeah, I was just going to say we should look into whether we can test it on the master branch of containerd currently, so we're not blocked on a release.
R: I did, for the in-place update; yeah, I tried the latest containerd from the master branch, and it works end to end. I tried CRI-O as well, just the latest release, on a local cluster, and that also worked, but the support for CRI-O is...
R: ...not there in kube. I was wondering if it would be helpful to bring it in.
D: From a CI perspective, once we can merge the main PR, which we're going to call the mothership PR, yeah, you may have CI running on master of containerd; that may be sufficient, yeah.
I: This is maybe the first time we've added features to containerd first, so we're getting some learnings here. Great.
A: Okay, sorry about that, and I have to go to another meeting. So thank you, everyone, and yeah, thanks.