From YouTube: Kubernetes SIG Node 20210420
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Okay, okay, yeah! Let's start. Today is April 20th, 2021, and this is the second meeting. Let's start with Sergey and Alailah updating us on the PR status and the backlog. Then, I noticed that last week there were two topics we didn't cover; let's move those to the beginning so we give them enough time, and then we can follow with the rest. So, okay — maybe you want to start.
B
Yeah, this week there were a lot of cherry picks. We released a few versions and now we're trying to fix them as fast as possible. So yeah, I think most of the cherry picks are LGTM'd, and some were already approved — so if approvers could look at the other cherry picks, that would be awesome.
C
Yeah, I noticed there were tons of cherry picks sitting in the triage column, and then we had another urgent one come in yesterday, which I think is good to go. So I don't think we're going to have another patch release until next month, because the cherry-pick deadline for April was a couple of weeks ago — so we'll see, but yeah, those have gone through. Other than that, I definitely think the PRs are growing right now, but we will get through them.
A
Thanks,
let's
move
to
the
first
topic
from
cisco:
do
you
want
to
start
talk
about
the
cpu
man
manager.
D
Okay, hopefully you can see my slides. Okay, so thanks. The work we would like to propose is basically a new policy for the CPU manager, and this sparked a lively discussion about how we extend the CPU manager in the first place — but I will get to that in a minute. First of all, I would like to cover what this new policy is about.
D
We need a bit more background. The biggest thing we want to avoid is the noisy-neighbor scenario, in which a physical core or a mid-level cache is shared among different containers. So let me quickly illustrate the case. Let's consider a very simple machine with, say, eight physical CPUs, each of them with two virtual CPUs, and we allocate with the static policy. We allocate a guaranteed quality-of-service container which requires five CPUs. Of course you get five virtual CPUs, but you also get three physical cores, one of them being shared because it's only partially allocated — so other containers can land there, because under the other policies everything in the shared pool can run there. You can have noisy neighbors, so you can have latency spikes, for example in latency-sensitive applications — worst case, or well, not really worst case, but in many different cases.
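The arithmetic walked through above — five virtual CPUs on a machine with two hardware threads per physical core touches two fully owned cores plus one shared core — can be sketched as follows. This is a standalone illustration, not the actual CPU manager code; the function name is my own:

```go
package main

import "fmt"

// coresTouched reports how many physical cores a guaranteed container
// occupies fully, and how many it occupies only partially (and thus shares
// with the common pool, creating the noisy-neighbor risk described above).
func coresTouched(vCPUs, threadsPerCore int) (full, shared int) {
	full = vCPUs / threadsPerCore
	if vCPUs%threadsPerCore != 0 {
		shared = 1
	}
	return full, shared
}

func main() {
	// The example from the discussion: two threads per core, and a
	// guaranteed QoS container requesting five CPUs.
	full, shared := coresTouched(5, 2)
	fmt.Printf("full cores: %d, shared cores: %d\n", full, shared)
}
```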
D
So in this sense this is an extension of the static policy — an additional guarantee. We believe the most natural extension is a new policy, because we want to preserve the current guarantees of the static policy; it's unlikely that every workload already using the static policy could benefit from it. So we propose an extension: it could be a new policy, or it could be, for example, an extended behavior of the static policy that pods could opt in to, or even a kubelet option — but the point is still to obtain this behavior.
D
So we need to reserve the sibling virtual CPU as well and make it unavailable from the shared pool.
D
How do we do that, and why is it a problem? Because, for all intents and purposes — let me actually go back — the workload requests an amount of CPUs. If you have a workload which requires an amount of CPUs such that every physical core is fully allocated, no big deal — no issue at all, actually. But if that is not the case, how do we handle it? To prevent that, we said, we would need to over-allocate — let's say, allocate whole physical CPUs.
D
The fix is to add an admission handler which enforces the requirement — and this is another reason, by the way, to do it as a separate policy — then an admission plugin enforces the requirement and rejects the admission, much like we already do for the topology manager. If this policy cannot guarantee that the workload gets CPUs such that all the physical cores involved are fully occupied, it rejects the workload.
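The admission rule described here — reject a guaranteed workload whose exclusive-CPU request cannot be satisfied with whole physical cores — amounts to a divisibility check. A minimal sketch (the function name and error text are mine, not the actual kubelet handler):

```go
package main

import "fmt"

// admitFullCores sketches the admission check described above: a guaranteed
// container's exclusive-CPU request is admitted only if it can be mapped onto
// fully allocated physical cores, i.e. the request is a multiple of the
// number of hardware threads per core.
func admitFullCores(requestedCPUs, threadsPerCore int) error {
	if requestedCPUs%threadsPerCore != 0 {
		return fmt.Errorf("request of %d CPUs cannot be mapped to whole physical cores (%d threads per core)",
			requestedCPUs, threadsPerCore)
	}
	return nil
}

func main() {
	fmt.Println(admitFullCores(6, 2)) // admitted
	fmt.Println(admitFullCores(5, 2)) // rejected: one core would be shared
}
```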
D
There are alternatives to this approach that we considered but found less suitable. For example, we could have an extended resource to convey the fact that you want physical cores — much like, by the way, CPU-Pooler is doing. But first of all, it clashes with the existing resource: you would need to specify both, and if you don't specify the CPU resource you can't land in the guaranteed quality-of-service class, which is a nice property we want to keep. So this does not seem ideal.
D
Another option could be to let the CPU resource itself specify that we actually want physical cores. A physical core being capable of hosting two or more virtual CPUs, we could maybe stretch the definition and treat it as a multiple — a physical core could be seen as a multiple of virtual cores — but this relationship again depends on the hardware settings, on the specifics of the hardware, and it really doesn't feel like a good direction at this moment.
D
When I started this conversation it was to get feedback about how the community feels about the noisy-neighbor problem: which other consumers can we see, besides the very specific low-latency scenarios like telco workloads or even high-frequency trading, or something like that? And the very first question — which is actually the very first question I'm going to raise — is this.
D
How do we even extend the CPU manager? Because the very first answer we got was: hey, let's consider adding external policies and making the CPU manager work much like the external plugins — much like what we have with the device plugin, but for the CPU manager. So this is actually the very first question I believe we should consider in the first place: what can we do?
D
Okay: just add a new built-in policy, or make changes to the existing policy, or we can have external policies that are not hosted in the Kubernetes tree. We kind of covered this last week: we had an initial discussion about having, let's say, those external policies, but that approach was not really gaining traction last time.
D
In this context, while working on that, we identified a few topics around implementing an external plugin. Some are things we hit ourselves while experimenting in this area, and others are things raised by the CPU-Pooler maintainer, which is very close to this — CPU-Pooler is a device plugin plus external components, made by Nokia, which implements something very similar — and they are listed there. For example: how do we declare the resource?
D
If we move out of tree, how do we still use the core CPU resource? Do we use an external resource, like CPU-Pooler does? Again: which API can we use, and who owns the cgroups — who sets them? But again, the very first question I think we should discuss is: what's the preferred way, going forward, to extend the CPU manager — to enable, for example, the policy we are looking to implement, or policies in general — or does it depend, for example, on the magnitude of the policy?
A
Some guidance, Francesco: in the past — maybe four years ago — we did actually heavily discuss an external API to support CPU, and even some memory, and NUMA was also being talked about, if I recall correctly.
A
I think at least at that time I tried to push forward on the external API — that's why we have the device plugin — but there was a lot of concern. Let me share here what it was from the community.
A
I think you also mentioned it in your slides: who owns the state, among other things. People are concerned because CPU and memory management is first class — it's something we support and every single worker node has to handle — and they worry about the potential latency and also the discrepancies introduced by an external daemon, and about conflicting decisions.
A
So that's why, in the end, we decided to start with the built-in approach — but again, we didn't completely rule out the external daemon side. For example, Intel has resource-management work which basically follows that idea: they have their own external resource management that even collaborates with the built-in CPU and memory managers. So yeah, I just shared some background context here. Does anyone else have comments on this topic?
E
Well, my comment will be more generic. If we're talking about extending the CPU manager, we cannot do it alone. If we really fight the noisy-neighbor situation, CPU is not enough: you always need to consider how the memory is connected, what kind of memory is used, what kind of buses are connected, and so on and so forth. So if we really do external APIs, it means a full resource-management — or topology-manager — API, which needs to be taken into account.
D
Thanks, Alexander. I have just a follow-up question, because I was following — even though not as involved as I would like — the efforts around, for example, the Container Device Interface. My question is: I understand the long-term goal of having all the resource management be modular and external in some way, but what's the migration path? Because, in my opinion, having external policies for the CPU manager could be a nice middle-ground step towards that direction. I was under that impression.
E
It's a nice step, and I fully agree. Actually Christian, who is probably also on this call, was participating in the early discussion a few years ago, when this was proposed the first time. One of the problems —
E
What I recall from those past discussions of external APIs was the question of maintaining those APIs: if we create a new endpoint with a new plug-in mechanism, some people will start to use it, and then it becomes a question of how we graduate it and how we deprecate it in the future.
E
So that was one of the roadblocks to external APIs, and there was also the complexity of what happens if the resource plug-in crashes — what should we do in that scenario?
E
So yes, potentially it can enable some of the use cases right now. But if we do APIs just for the CPU manager right now, it means that in the very near term we will need to revisit them — because of memory, because of devices, and because of the overall alignment of resources.
D
I understand that we will not have all resources handled in a modular fashion, but my question was just: okay, I get that a CPU-manager-only external policy could cause issues in this case, but granted that, what would the transition look like? If going modular means making small changes — let's say starting with the CPU manager, for example, and then maybe extending to the memory manager — is that not a reasonable direction?
F
Kind of a question, if it's okay: have you prototyped this? It doesn't seem that different from the existing static policy, other than being hyperthread-aware, and I imagine the code looks remarkably similar.
D
Yes, we do have a prototype, and yes, we are in the process of testing it. But the reason for asking in the first place is that Kevin's first answer was: hey, let's evaluate the external-policy route again. So I'm asking the community, as much as I can. You see? Yeah.
A
I agree with Alexander — earlier I just wanted to say the same. So I agree with Alexander, because when we talk about an external resource API, this is what we had in the past, and we passed on that decision. The reason is use cases: back then we didn't have rich use cases — customers and users hadn't gotten to this stage. Obviously we predicted we would get to today's stage, but we knew that Kubernetes adoption there was not that high yet.
A
So we didn't have those real use cases yet, and even for the topology-aware use cases — the low-latency workloads that would use this — customers and users hadn't gotten to that stage. At the same time, we also realized one thing, which is exactly what Alexander mentioned earlier: they have the background from Intel, we also have our own background, and Red Hat, with OpenShift, has the background from the past. So we know those resources have to —
A
Several resources have to work together to meet customer requirements, but settling on that resource model — all three types of resources, and even some disk-related stuff — was a hard decision for us at that point. Now, what you propose is still only about CPU. If we go with an external API, the concern is maintenance; it's not that we cannot maintain it.
A
It's that once you focus only on CPU, a lot of vendors may just go implement their own thing based on that API, and it's really hard for us if we then have to revise the API. If you look at CRI — the Container Runtime Interface — today, even though it's really widely used, evolving the API poses problems; same thing for today's CSI storage API: every time there's a promotion, and you're using different vendors' implementations of the CSI —
D
Okay — so that works for me. I really felt it was the first question to ask, and I got a good answer, I think. I would just like to wrap up. My takeaways are: it seems we still prefer to have a built-in policy, and to evaluate whether the actual policy we would like to propose can fit.
D
So I think we will just post the PR to let people review and make comments, and polish the KEP, which is already almost done, so people can review it. Regarding the external policies, Kevin seems to have some ideas, so I will reach out to him and, in parallel, work with him to see what it could look like. But for the real deal — the policy we would like to do — we'll just keep it built-in. Does that seem fair to everyone?
E
Francesco, I also want to participate in that discussion — I think other people from our side will as well. But in reality, for the question of external policies for the CPU manager, I would still suggest rephrasing it as external policies for the topology manager — so, practically, the API.
E
What we really need is an external policy which takes information about what kind of workload is about to be started and what kubelet-managed resources are available on the system, and which can reply with an assignment of resources — be it CPU, memory, devices, or whatever else. So, practically: make the topology manager policies external.
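The suggestion above — externalize the topology manager's placement decision rather than the CPU manager's — implies an interface shaped roughly like the following. This is purely a sketch of the idea; the type and method names are hypothetical, not an existing Kubernetes API:

```go
package main

import "fmt"

// ResourceRequest and Assignment are hypothetical types for what an external
// topology policy would receive and return: the workload's demands plus a view
// of kubelet-managed resources, answered with a concrete placement.
type ResourceRequest struct {
	CPUs     int
	MemoryMB int
	Devices  []string
}

type Assignment struct {
	CPUSet   []int
	NUMANode int
	Devices  []string
}

// ExternalTopologyPolicy is the hypothetical externalized policy: given a
// request and the available CPUs, reply with an assignment or an error.
type ExternalTopologyPolicy interface {
	Allocate(req ResourceRequest, availableCPUs []int) (Assignment, error)
}

// firstFit is a toy implementation that just takes the first N free CPUs,
// standing in for a real NUMA-aware policy.
type firstFit struct{}

func (firstFit) Allocate(req ResourceRequest, availableCPUs []int) (Assignment, error) {
	if len(availableCPUs) < req.CPUs {
		return Assignment{}, fmt.Errorf("need %d CPUs, only %d free", req.CPUs, len(availableCPUs))
	}
	return Assignment{CPUSet: availableCPUs[:req.CPUs]}, nil
}

func main() {
	var p ExternalTopologyPolicy = firstFit{}
	a, err := p.Allocate(ResourceRequest{CPUs: 2}, []int{0, 1, 2, 3})
	fmt.Println(a.CPUSet, err)
}
```

The point of the interface shape is that the reply covers all resource kinds at once, which is what distinguishes a topology-manager-level extension point from a CPU-manager-only one.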
D
This
is
a
good
point,
so
I
I'll
just
talk
with
kevin
in
the
public
channel,
so
I'll
just
make
sure
to
tag
you
and
you
can
add
the
tags
for
be
part
of
the
conversation.
So,
okay,
one
taken.
I
Just before we wrap up, Francesco, since the prototype came into the discussion: we do have a prototype, and maybe next week we can come and show a demo. We have it already working, and even the implementation is pretty much complete, so it might give people clarity as to what we are proposing, and it might help with the reviews as well.
F
We could evaluate any externalization approach against the present state, but I just want to make sure that this isn't the only use case that would allow us to iterate and evolve, and I wasn't sure if that was understood. I look at this more to ask: was it an error, when we did the static policy, that we didn't ask —
F
— is it hyperthread-aware or not? And is it an error that when we defined the policy flag we only did it with one flag — which was static, none, or whatever — and we didn't provide a secondary flag which could say something more nuanced, like hyperthread-aware, or, you know, some opaque string or something? I don't know — that's kind of how I look at it.
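As an illustration of the two-knob idea being floated here: as far as I know, a secondary knob of roughly this shape did eventually ship in later kubelet releases as `cpuManagerPolicyOptions` (with a `full-pcpus-only` option), though that postdates this meeting — so treat the snippet as a sketch of the concept rather than something available at the time:

```yaml
# KubeletConfiguration sketch: the existing one-flag policy ("static") plus a
# secondary option expressing the hyperthread-aware, full-physical-cores-only
# behavior under discussion.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  full-pcpus-only: "true"
```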
I
Yeah — one comment I have on that: the CPU manager does a best effort of first allocating on the basis of sockets, then tries to get full cores, and then the remaining threads. The decision made back then was that it does that best-effort job, but in scenarios where it's not able to, it wouldn't fail. Whereas here, what we are trying to emphasize is that if the request cannot be fulfilled with whole cores, it ends up in an error.
F
Or
its
best
effort,
and
basically
should
we
have
a
policy
knob
on
cpu
manager
that
we
didn't
have
in
mind.
That
would
let
us
express
this
without
having
to
feel
we
need
to
get
all
externalization
perfect.
Is
my
my
thought
like
we
can?
We
can
learn
as
we
go.
It
seems
like
we
would
want
maybe
two
pieces
of
information
where
we
had
one
now,
but
anyway,.
I
And one more thing: in the implementation, what we are doing is not exactly copying what the static policy already does. We are trying to reuse static and add, essentially, an admit handler with the additional check that we care about — so it's not replacing the entire policy, or entirely replicating static just to make a minor change; implementation-wise we've kept that in mind. But again, I think with the demo, and with the KEP polished a bit more, it will become clearer, and we can follow up next week on this.
A
Thank you, Francesco, for this one. So let's move to the next topic — we can follow up next week in even more detail, with the demo, with Swati, Alexander, and Francesco. The next one is another item carried over from April 13th. Artyom, do you want to talk about this one: extend the pod resources API with the memory manager metrics?
J
Yeah, hi folks. In general, as part of the promotion of the memory manager to beta, Derek requested that we provide some kind of metrics, and I believe the best way is to provide such metrics under the pod resources API, because we already have the endpoint and we already provide some CPU metrics under it — regarding the node, the pod, and the container.
F
Gate,
I
don't
think
we
need
a
new
feature,
yet
I
would
just
edit
the
existing
cap.
K
Yeah, sure. So last week we went through a first round of planning — we talked about all the features in the document — and one thing we wanted to do is prioritize all of them. So I was thinking maybe we can quickly go through the list and attach a high/medium/low to each item as a group, so we know what we want to focus on first and what we can drop if we don't have enough bandwidth.
K
All right — can you see my screen?
K
Okay. In terms of effort, probably medium, if we talk about those two items. Okay — node graceful shutdown. We're —
K
Yeah, but I think we still want to target adding some phases for shutdown based on priority class, if possible, during this release.
K
So I think we need to clarify: from a priority perspective, are we talking priority not just for stuff that is graduating, but also in terms of the bandwidth we'll assign to work that's ongoing — like cgroups v2, where the CRI graduation is high priority but may not make it? So I don't know — Derek, Don, what's your perspective: do we want to only talk about things whose level we want to change, or also about ongoing work?
K
You know, for this one what we said was: we'll stay beta, but we'll probably add more features here, like acting on the value of the priority class. So what do we want to do? Do we also want to give a priority to everything whose level is not going to change but that we're still going to work on, or not? And then Sergey suggested we could do a beta 2, which doesn't sound bad to me.
F
I don't see any issue with having a new gate, right? It's easier to manage than —
K
So, cgroups v2: we want to target an alpha, and I think probably medium to high from our perspective. Do folks on the call have any input on that?
A
What
means
high
or
what
mean
media
media
highs
mean
like
the
okay.
If
we
want
to
meet,
we
want
to
meet
our
target.
So
if
we
couldn't
do
it,
then
we
will.
We
will
shift
some
engineering
resources
to
make
sure
finish
this
one.
Otherwise
that
is
a
little
bit.
It's
like
the
cases
like,
for
example,
cri
graduation,
it
is,
I
know,
open
source
is
hard
to
shift
to
internal
resources
to
do
something,
but
at
least
let's
kind
of
indicate
our
our
intention
right.
So
I'm
not
sure
this
one.
K
Maybe it's just an ordering — just ordering this list; right now this list is unordered, Don. In our planning we also have a blocker category, which means we absolutely won't ship without it. But, as you said, in open source that's going to be hard, right? We can't, like —
F
The
community
to
kind
of
express
our
our
interest
in
the
shared
outcome
right
and
obviously
some
things
that
folks
have
higher
interest
in
than
others
individually.
But
I
I
feel
like
parker
on
this
list
was
just
like
a
grab
bag
from
a
wide
variety
of
areas,
and
maybe
it's
not
people's
immediate
interest.
Right.
C
Yeah, from my perspective, I would read low as things that are safe to drop from the release; medium as things where I'm going to rely on the owners to drive; and high as things that we as a SIG have decided on — and therefore, if I see one lagging, I should jump in to try to help. I don't know if we have anything that could be considered a blocker for this release.
C
But
if,
for
example,
we
had
say
like
a
bunch
of
things
that
all
relied
on
one
particular
cap,
then
we
could
consider
and
like
that
kep
had
to
land.
We
could
consider
that
too.
That's.
K
So the next one is memory QoS for cgroups v2. From what we said earlier, the author is interested and present. So should we put this in the medium bracket? We still need to get the KEP merged, so this would definitely not be high — a low to medium, depending on what folks think.
K
Okay, so user namespaces: Michael Dawson attended last time, and it looks like no one really has the bandwidth to work on it — unless folks can somehow get more resources committed from their respective companies. If you're interested in moving this forward, I —
C
I would not rate that higher than medium, but I'm planning on getting it done.
A
I
think
the
we
are
going
to
this
one
will
stay
in
the
alpha,
but
we'll
have
the
more
enhancement,
because
here
he
changes
api,
so
will
be
medians
the
end
small,
relatively
small,
I
believe
so
yeah,
but
just
stay
on
the
alpha.
Okay,.
C
I put large on there when I t-shirt-sized it, just because there's so much alignment to get — but yeah, the implementation itself might not actually be that large, and I've been treating it like it's high priority. But I'm fine with you marking it as —
K
So,
second,
by
default,
sorry,
anyone
who's
understood
something.
K
So I think Don said last week we're going to target alpha and things are lined up. So, okay. Okay —
F
Sorry, I wasn't actually following along — I just stepped back in.
A
Yeah,
it's
okay,
the
yeah
I
mean.
B
The cross-port PR is almost ready. It was almost ready before the previous release, and now I think it's about to be merged.
K
That's good. And Sergey, what about pod overhead?
B
Yeah, it's a small effort, but the problem is that to GA it, I really want to have well-defined usage.
F
Yeah — I don't know if Andrew's here, but if I were to reflect what was there: I think the priority is probably high, because this was a blocker to doing the cloud-provider externalization, and the effort is pretty low.
O
Yeah, so put my name on it for now, please. Okay.
K
Sure
so
whether
you
want
to
be
added
dims
in
the
reviewer
up
to
your
site,.
O
Yeah — I might delegate it to Aditi or somebody, but please put my name down for now as the one doing the work. Okay.
K
Thanks
james,
thank
you.
Thanks
teams,
right
node
service
log
viewer.
K
cAdvisor-less CRI full stats: Don, David, Peter, and I met last week and had some discussions — maybe David and Don can speak to what priority —
K
Yeah — so the next one, huge page storage size, is sized as XS, and —
K
All right, Sergey, the next one is yours: dynamic kubelet configuration deprecation, yeah.
K
And the next one is the in-place VPA. This is the one, Derek, that triggered your comment on —
K
— priority. So I'm assuming it should be high, yeah?
D
Yeah. From the t-shirt-size side it's smallish, because the code checked in right now is about one thousand lines, ninety percent of it tests — so it's really S, effort-wise. Priority-wise — sorry — for us, we have use cases, so I'd say high; it's important for us, and it's self-contained. So —
K
Any
any
objections
to
marking
it
high.
D
Yes, that's fair, that's fair. But I have a follow-up question, though, regarding the — one, two, three — fourth column: alpha. The CPU manager is stable and this is a new policy, so how should that look? I —
F
Yeah,
so
can
we
go
back
to
the
in
place
vba
one
I
just
want,
we
don't
have
we
got
through
the
cap
reviews
and
that
was
a
good
team
effort.
Are
we
sure,
does
anyone
know
if
annay's
here
don?
If
he's
available,
do
the
implementation
or
who
would
be
available
for
review.
A
Last
time
I
talked
to
vigne
and
I
believe
he
is
available
for
implementation,
but
that's
like
a
couple
weeks
ago,
so
I'm
not
sure
things
might
change,
but
if
he
is
okay
to
do
this
one
I
I
also
can
be
part
of
the
reviewer,
but
I
don't
know
I
cannot
complain.
I
mean.
A
So for the new CPU manager policy we also need a reviewer, and that one is also medium — it's not high — but maybe we should engage —
F
Yes, as well — but honestly, Kevin is amazing here, so actually, if he is able to review it, I'm perfectly fine with that.
H
So, should I explain this one? So, now —
H
This is somewhat similar to the user namespace topic, but it's different. The user namespace proposal is about just running containers as a non-root user; rootless mode, on the other hand, means running everything — including the kubelet, the container runtime, and runc, as well as the containers — as a non-root user. So it's different from the user namespace proposal, and they do not conflict. And actually the project is very, very small, because almost all the stuff required by rootless Kubernetes is already shared with Docker and Podman, and containerd —
H
It
runs
the
already
supports
rules
mode,
so
the
parts
on
the
keyboard
is
really
really
small.
So
the
cube
route
needs
to
be
patched
to
ignore
some
errors
during
setting
system
records,
and
we
also
need
to
ignore
everything
setting
our
limit
values
so
that
the
changes
on
humans.
It
is
very
small,
just
just
just
a
few
lines
of
the
course
and
actually
a
kind
keyboardist
in
darker
already
supports
wilderness
mode
without
patching
puberties.
H
But
the
kind
has
some
ugly
code
for
breaking
by
breaking
sorry
breaking
slash
processes
with
bind
amount.
So
so
the
code
is
not
that's
beautiful.
F
Yeah,
so
I
just
one
question
I
had
here
was
as
a
goal.
I
I
wanted
to
make
sure
that,
like
the
cube
was
always
set
up
to
be
able
to
pass
conformance,
I
know
in
earlier
iterations
on
the
cap
I
had
asked
if
we
felt
that
running
rootless
would
actually
require
us
to
change
how
we
evaluate
conformance
tests.
F
I
think
in
a
past
comment,
you
noted
that
nfs
and
block
and
other
storage
tests
might
prove
problematic.
Yeah
note
you
had
on
cis
controls,
given
that
I
know
cis
controls
just
got
promoted
to
conformance
in
their
last
release
kind
of
gives
me
a
similar
pause.
So
I
guess
what
I
was
wondering
was:
if
we
proceed
on
this
work,
how
do
we?
How
do
you
guide
us
to
ensure
that
we
can
be
conformant
with
that
word.
H
So
I
think
we
can
use
sono
boy,
sorry,
I
don't
know
how
to
pronounce
but
there's
you
know,
there's
almost
tesla
switch,
and
so
I
think
we
need
to
skip
tests
related
to
cctls
and
individuals
anyways,
but
other
tests
should
work
and
we
can
use
a
kind
for
testing
lutely
mode.
So
we
use
a
router
stroker
or
user
sportsman
as
the
provider
of
kind,
and
we
can
run
kind
of
for
running
most
of
performances.
F
Yeah
I
just
this
is
the
thing
that
I
think
is
like
an
unclear
cost
so
like
when
we
size
this.
I
think
we
need
to
make
sure
that
we
have
those
testing
things
called
out
and
I
wasn't
sure
them
as
if
you
wanted
to
approach
this
from
the
perspective
of
like
a
different
conformance
profile.
You
know,
I
think
about
earlier
conversations
we
had
where
I
think
jack
came
forward
to
the
sig
and
was
like.
I
want
to
change
the
visibility
of
mount
propagation
points
and
we
had
a
lot
of
back
and
forth
on.
O
Yeah, Derek — conformance matters when we go from beta to GA, so we can get to alpha, try out the concept, and see what breaks, right? That will give us a good set of things to worry about, to see if we can even move from alpha to beta.
F
Yeah
thanks
and
then
the
last
question
I
had
was.
I
thought
this
is
still.
Resource
management
was
turned
off
when
this
was
enabled.
H
Also,
we
need
to
use
a
c
group
version
2
and
with
a
signal
version.
2,
it
supports
redress
mode,
so
we
can
enable
resource
communications
using
c
groups.
H
Yeah — so yeah, for conformance we need to have cgroup v2 support in place for this first, yeah.
O
Yeah — the way I see it, Derek, anything that this depends on has to be at the same level or higher, right? If this depends on something else, that something has to be either alpha or beta, because this is going to alpha, right? Yeah.
F
So
just
from
that
regard
I
was
wondering
like
do.
We
want
to
give
this
a
high
versus
the
secrets,
v2
worker
medium
or
vice
versa,
but
yeah
I
was
just
trying
to
say
like
it.
It
should
have
equivalent.
K
So the next one: Mike Brown, ensure secret pulled images. Mike?
G
For the most part, there was one review request for an additional minor sub-feature, and I put up a demo of what we could code for it. I could use a little help on the test buckets at the integration level; the unit buckets are all there.
G
The idea is that, in a secure-by-default context, you would think that when pod A pulls an image that required a secret to get, pod B would not be able to use that image if pod B didn't have that secret, right? But unfortunately we're currently in a situation where you can configure things so that pod B will use — you know, never pull, or just use the image that's already on the node.
G
So we need to keep some state in the kubelet, even after we've done garbage collection, so that we know that when we pulled an image using a secret, we won't allow some other pod to use that particular image without first checking that its credentials can pull it, or that it has a secret that matches.
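A toy version of the bookkeeping described here — remembering which credentials an image was pulled with, so a second pod must present matching credentials before reusing the cached image — might look like this. The names and the hashing scheme are assumptions for illustration, not the actual kubelet implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// pulledImages maps image name -> set of credential hashes that have
// successfully pulled it. The kubelet would persist something like this
// across image garbage collection.
type pulledImages map[string]map[string]bool

// credHash reduces a secret to a fingerprint so we never store it verbatim.
func credHash(secret string) string {
	h := sha256.Sum256([]byte(secret))
	return hex.EncodeToString(h[:])
}

// recordPull remembers that the image was pulled with the given secret.
func (p pulledImages) recordPull(image, secret string) {
	if p[image] == nil {
		p[image] = map[string]bool{}
	}
	p[image][credHash(secret)] = true
}

// mayUseCached reports whether a pod holding `secret` may reuse the cached
// image without re-authenticating against the registry.
func (p pulledImages) mayUseCached(image, secret string) bool {
	return p[image][credHash(secret)]
}

func main() {
	state := pulledImages{}
	state.recordPull("registry.example/app:v1", "pod-a-secret")
	fmt.Println(state.mayUseCached("registry.example/app:v1", "pod-a-secret")) // pod A: allowed
	fmt.Println(state.mayUseCached("registry.example/app:v1", "pod-b-secret")) // pod B: must re-auth
}
```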
K
Do we have reviewers? We can do it offline if you want.