From YouTube: Kubernetes SIG Node 20210601
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Good morning, everyone. Today is June 1st, so it is our first SIG Node weekly meeting in June, and summer is officially here. Let's start with our normal routine. Anna, do you want to share with the team the PR status and also about the triage?
B
Yeah, we are growing in PR numbers, but I think mostly because the week was slow: there was a weekend plus about an extra day off in the US, and the weather is good outside. So I guess that's the reason everything is slow.
C
Yeah, agreed that things have been a little bit slow, but I think the board is mostly caught up. One of the things I wanted to mention was a reminder that code freeze is coming sooner than you think: it's July 8th, which is about five and a half weeks from now.
C
So it's coming, and in terms of the PRs I've seen coming in, I know we have a lot of KEPs for this cycle, and a lot of those KEPs have not had any code PRs yet, so please make sure you get that up early. We don't want a big last-minute thundering herd of changes, so you've got almost six weeks; get at it.
A
About the meeting: yeah, we can do June 22nd or June 28th, I think maybe something like that, before code freeze.
B
Yeah, we have some contributors from APAC who want to join CI meetings, so we're thinking of starting, once a month, an APAC-friendly time for the CI and triage meetings. It will probably start next week; I'll update the invite, so it will be 9:00 pm PST.
E
Yeah, so I had been investigating the Pod Resources API, kind of for the use case that we have. We are trying to work on Node Feature Discovery to expose resources per NUMA node, so we've been heavily invested in this.
E
While working on this I came across a few things, so I created two issues pertaining to the two items I identified. The first one, as you see linked in the agenda doc, is the ability, or rather the inability, to account for available CPUs, because guaranteed pods could belong to the shared pool. The primary goal of the Pod Resources API, which has been captured in the Kubernetes enhancement proposal itself, is that you should be able to account for the available resources on a certain node.
E
So the Pod Resources API gives information about the available resources on the node: the resources known to the node, which could be CPUs, devices, and memory (memory support is underway, with the memory manager moving to beta). The List endpoint gives us information about the pods that have been allocated certain resources, so that basically gives you the ability to identify what the available resources are.
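For reference, here is a minimal sketch (not code discussed verbatim in the meeting) of how a monitoring agent such as NFD might combine the two endpoints just described. It assumes the v1 podresources client package and the default kubelet socket path; both are assumptions that may differ by version or configuration.

```go
// Sketch: account for CPUs by combining GetAllocatableResources and List.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumed default kubelet pod-resources socket path.
	conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := podresourcesv1.NewPodResourcesListerClient(conn)

	alloc, err := client.GetAllocatableResources(ctx, &podresourcesv1.AllocatableResourcesRequest{})
	if err != nil {
		panic(err)
	}
	list, err := client.List(ctx, &podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}

	// Subtract every CPU id reported for containers from the allocatable set.
	// The catch raised in the meeting: containers of guaranteed pods with
	// fractional CPU requests also report shared-pool CPU ids here, so this
	// naive subtraction over-counts "used" CPUs.
	used := map[int64]bool{}
	for _, pod := range list.GetPodResources() {
		for _, c := range pod.GetContainers() {
			for _, id := range c.GetCpuIds() {
				used[id] = true
			}
		}
	}
	free := 0
	for _, id := range alloc.GetCpuIds() {
		if !used[id] {
			free++
		}
	}
	fmt.Printf("allocatable CPUs: %d, not reported as assigned: %d\n", len(alloc.GetCpuIds()), free)
}
```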
E
But the challenge that we have, which is captured in this issue, is that guaranteed pods which are non-integral belong to the shared pool, and the Pod Resources API exposes those too. Essentially that becomes challenging because we don't know what exactly the shared pool is: it shrinks and grows depending on how the exclusive CPUs are allocated.
E
So that was the primary issue, and then we had a lively discussion on one of the proposed fixes. It ranged from maybe exposing a simple flag, to a new endpoint which exposes the shared pool, all the way up to even a generic endpoint that exposes all the pools that could exist. We then had a follow-up discussion with Intel on this, and the idea that came up was that we could, as part of the Pod Resources API, expose resource requests and limits as well.
E
But I wanted to use this forum as an opportunity to see what others in the community think: what seems to be the right option? So if you have opinions, please go to the issue or the discussion that we had and express what you prefer and what is probably the ideal solution in this case to tackle that.
A
I think it depends on what the proposal is and also what the problem scope is. I read your background in the issue you filed before this meeting. I have to say that I know this is a problem; that's why, when the first exclusive-CPU proposal came up, I think I gave a warning.
A
I said, if you want to reserve CPUs and be sure they are guaranteed, please make sure you use an integer CPU request with the Guaranteed QoS class, which means you always get exclusive reserved CPUs assigned; that is simplified. But a lot of users still want fractional CPU requests, and so we even discussed whether we want to round up or scale down to an integer.
A
I remember we had that discussion a while back, but anyway, there was no conclusion, so we basically do best effort: if you ask for a fractional CPU, then you end up in the shared pool. My question is: I read this issue, and you did list the problem; when we first designed this, I did raise this problem, because I saw it in the past. So my question is, I'm a little bit confused.
A
What do you propose, and what kind of problem do we want to address here?
E
Yeah, please let me clarify that a bit. So, like you mentioned, the CPU manager currently just allocates exclusive CPUs to pods that are guaranteed and request integer CPUs; all the others belong to the shared pool, so that includes your best-effort pods and guaranteed pods with non-integral resources. In the case of the Pod Resources API, it's an endpoint that monitoring applications could use.
E
We use it for accounting resources in the case of NFD, and it exposes information pertaining to pods that are allocated devices, and guaranteed pods in general, so that would be pods with integral requests and pods with non-integral requests. Currently the Pod Resources API exposes both of these kinds of pods, and the challenge is, when you query the endpoint, you end up seeing pods that belong to the shared pool as well, and there's no way to know whether a pod was actually allocated those CPUs because it was an exclusive CPU request, or because it was shared.
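To make the distinction concrete, here is a small illustration (not from the meeting) of the two request shapes the CPU manager's static policy treats differently, expressed with the core/v1 Go types; the values are made up and only the integral-versus-fractional distinction matters.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Integer CPU request == limit: with the CPU manager static policy this
	// container gets exclusive CPUs and leaves the shared pool.
	exclusive := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("2")},
		Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("2")},
	}

	// Fractional CPU request == limit: still Guaranteed QoS, but the container
	// runs on the shared pool, which is where the List ambiguity comes from.
	shared := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1500m")},
		Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1500m")},
	}

	fmt.Println("exclusive:", exclusive.Requests.Cpu().String(), "shared:", shared.Requests.Cpu().String())
}
```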
E
So yeah, in terms of my question: initially we thought that it could have been almost a bug fix, but it's almost going towards, you know, changes in the API. So I kind of know the answer, but I just want to make sure: is there any scope that we could maybe handle it in the 1.22 time frame, or would it have to be pushed to 1.23?
C
So I would say, I mean, I just took a very quick look at the KEP. If the graduation guidelines for beta are not being met, it can't graduate this release, but it's already targeted for beta this release. So as long as you fix that, there's no issue with trying to still graduate it for 1.22; it's just a matter of whether or not that gets done and whether the graduation criteria are being met.
E
So essentially we are making changes to an API which has already graduated, I believe; the Pod Resources API has already graduated to beta, and we are planning to make changes to that.
C
There is a feature gate, so, I mean, maybe the right thing wasn't linked.
G
Here, on the two concerns raised with the new endpoint: additions were made to both the List and GetAllocatableResources endpoints to represent the concrete resources. So it went like this: changes were made to List to represent the concrete resources, namely the CPUs, and then Pod Resources went stable; the cycle after that, we introduced the new endpoint, which has its own feature gate.
C
Yeah, I would say, in terms of API changes: if you are making a compatible API change, it should be fine; if it's not a compatible API change, that's where the issue comes in, and I don't know that it's a matter of a single cycle. I think you'd have to talk to an API reviewer.
F
The changes we are trying to propose are additional fields. The background is this: if we need to properly calculate the state of the kubelet, accounting for what the kubelet is doing, we need to know not only where the CPUs are assigned, but how much was actually requested, and the limits. That will allow us to properly calculate the size of the shared pool, how much it can be shrunk or expanded, based not only on guaranteed containers but on the rest of the containers.
A
Now I understand the proposal, and I think what you describe could be treated as a bug fix for an existing feature; but because it requires an API change, if you can get an API review, let's just proceed, because it is indeed a bug in the feature as we designed it.
E
Thanks, Marcus. And then I have a second issue that is kind of related. This one is about clarifying the behavior of GetAllocatableResources, so again, no major change; it's just an update to the docs and an update to the KEP to clarify and emphasize the capability of this particular endpoint, and I don't think this needs to be handled in a separate cycle. So if people could take a look at this, so it can make its way in, that would be really helpful.
A
Swati, thanks for raising this. This one we will process offline, and just like I said earlier, it depends on what you propose, the scope, and also compatibility. So for the small bug fix with the API change, I think you really need the comments of the API reviewer, and SIG Node will support it if you can convince them, to make this move forward; but if we can't, let's just follow the process, not stop the work, right?
A
Yeah, exactly. That's also what I was trying to say here, because the two are not treated the same, and then it just kind of gets down to the one thing. So that's why we will support the first one if the API reviewer supports fixing that bug, and the second one needs to follow the process, like what we do; we do our best, but if we can't, then it follows the process, yeah.
E
Okay, sure, thanks. Thanks, John, thanks.
J
Oh yeah, hi everyone, I'm Vinayak, and I just wanted to briefly introduce this KEP that we're proposing for ambient capability support.
J
I talked about it in the SIG Security meeting, and there are a lot of people who want this to land. In the KEP we're proposing some changes to the CRI API, and we were looking for someone from SIG Node to sign up as our volunteer approver and reviewer for this change, and I think in the doc someone did so, which is cool. It's just that we're proposing a change.
J
So we'd like someone from SIG Node to look at it and approve the change. Also, if someone from containerd is here and would like to look at it, because we are also proposing that containerd make changes once the CRI API changes land, so that they can start setting the ambient capabilities.
J
Yeah, Tim already reviewed the KEP; I think he did one pass over it, so I think he knows, and I know he's super heavily involved in the Pod Security Policy replacement stuff. Cool, that was quick; thank you, thanks everyone. Can I check whether Mike Brown is here; can they add their GitHub tag or something?
J
Oh great, awesome, that was great. Thank you, everyone, and I'd love to hear everyone's comments on the KEP. Thanks so much for your time.
A
Yeah, and also, I talked to a couple of folks earlier, before this meeting, because I saw Mrunal already there, representing SIG Node and also CRI-O, and I also see someone representing containerd and the CRI as well, because there's CRI-related stuff here too; and you also have Tim, if Tim is still active in SIG Node.
K
Yes, hi everyone, I'm Niraj. I am a first-time contributor to the Kubernetes community, and the issue which I am working on basically asks to redirect stdout and stderr to some files instead of the predefined log files.
K
So a couple of questions come to my mind. One of them, which Elena has confirmed, is: does this require a full-fledged KEP, because it kind of touches upon the Pod API? That was one, and the second one was: while I'm working on this KEP or the technical solution, could there be someone, you know, who I can reach out to to get initial design feedback, or technical feedback?
D
Hey Neeraj; so also, Dawn, Derek and I have been having some internal conversations around the CRI logging format and some of the challenges we are seeing in production.
D
The way things are right now, one of the things that we see missing is a way to do throttling. In the past we've had customers that are using the Docker journald logging driver that we contributed, and journald is able to put back pressure on the logs and so on. So I feel that this kind of fits in this category, but just looking at the description, Neeraj, I'm not clear what you are suggesting, because today the CRI log format is going to a file.
K
Right, so what I actually understood from this task is that it needs to, you know, send output to a very specific file; not an additional file, but just one file. You specify that this is where stdout should go, and this is where stderr should go.
K
Yeah, so basically, generally people modify the Docker image and specify where stdout should go and where stderr should go, but there are a couple of scenarios where people don't have control over the images. So what they're asking is: can we have a specific feature in the pod spec where we can control this?
A
Actually, this topic was discussed when we first designed CRI. I saw Lantao joined today's meeting; maybe Lantao can fill in the gaps here. So the initial proposal had attaching an additional file, and also redirecting to different files for stdout and stderr, and we didn't do that. I think that's the main reason.
A
We think that even today, using journald and many other logging mechanisms, we have already covered most of the cases. I know your case is not covered, but there are other ways to do it, like a different logging driver.
A
We also talked about this; I think there are some people in production already using additional logging drivers, and so that's why we didn't do it, and there were also complexity concerns and performance concerns raised by people. So that's why we didn't move forward. Just like Mrunal said, this involves a lot of the CRI design; this is why we went back and forth on the design a couple of times and didn't do this work in the past. So I think this needs a real design doc sent to us.
A
I just saw the issue, and it also requests a KEP; let's start from the problem statement, because I believe, for what you state here, there's actually a different way to address the issue. I'm not sure all the possibilities cover your cases here, but at a high level, from what I just heard, I think there are some workarounds, and I want to know why the workarounds don't fit your cases here.
A
That's why we didn't really pursue this further. So I wanted to understand what the problem is, which use cases indeed cannot be covered, and why the workaround doesn't work for your cases here.
H
Yeah, actually, this is Lantao. Actually, I think I read this issue a while back. If I remember correctly, what you want is: you have a container image that's legacy, you don't know the code, and you don't want to change it, but you want to parse its output from another container.
H
That's my understanding, and I think, as was mentioned, we already output stdout and stderr to the log file, and if there is no security concern, I just want to know what your concern is, because if there's no security concern, you can actually just mount that log file into your other container and parse the output from the other container. The file is already there; it's just that it's on the host instead of in the pod environment or the container environment.
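A minimal sketch of the workaround described here, built with the core/v1 Go types: a sidecar container mounts the host-side log directory so it can parse the legacy container's output. The host path, image names, and container names are assumptions for illustration; the actual CRI log location depends on the kubelet and runtime configuration, and, as raised next, mounting host paths may require extra privileges.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// sidecarWithHostLogs builds a PodSpec where a sidecar mounts the host-side
// container log directory so it can read the legacy container's output.
func sidecarWithHostLogs(app corev1.Container) corev1.PodSpec {
	hostPathDir := corev1.HostPathDirectory
	return corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "container-logs",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{
					// Assumption: default CRI log root; varies with kubelet/runtime setup.
					Path: "/var/log/pods",
					Type: &hostPathDir,
				},
			},
		}},
		Containers: []corev1.Container{
			app, // the legacy container whose image cannot be modified
			{
				Name:  "log-parser", // hypothetical sidecar that tails the log file
				Image: "example.com/log-parser:latest",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "container-logs",
					MountPath: "/host/pod-logs",
					ReadOnly:  true,
				}},
			},
		},
	}
}

func main() {
	spec := sidecarWithHostLogs(corev1.Container{Name: "legacy-app", Image: "example.com/legacy:1.0"})
	fmt.Println("containers in pod:", len(spec.Containers))
}
```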
K
So I think it requires a privilege escalation or something, and the person who filed it initially...
A
You don't need to modify the image, at least; you just... okay. Maybe how about this, Lantao: I haven't read the proposal yet, so maybe you can give a suggestion on how to pursue this. Okay?
C
So, Dawn, just for clarity, because I'm reading through this issue: the statement is that some applications, specifically with CRI, do not write to /var/log; therefore, some result values have to be collected from standard out.
C
So basically, I guess there are concerns about these things not going to standard out for some reason, and then apparently you can redirect them to a file according to this, but that involves changing the entry point of the container, which in a lot of cases is not desirable. So I would say, yeah, as it currently is...
C
I would almost wonder if this is a documentation issue and not an actual API spec issue: we don't have documentation telling you how to work around this or do things for this, and so you've got a lot of people who are like, oh, I want this, but you can already do this; we just don't tell you how.
C
That's possibly something that we could consider as well, because making API changes to the pod spec is quite involved and usually needs pretty strong arguments behind it.
A
I agree, but I found that the problem described is something I think we already handled. I don't disagree that maybe somewhere the documentation is falling apart, because our system is so complicated, and occasionally people remove our good documentation, thinking it's stale or duplicated or just too much detail; a lot of people remove things they think are too detailed, which messes things up. So we can fix those problems. But again, I want to say that maybe we still didn't solve your problem.
A
I want to understand your problem in more detail, because I think we covered that a long time back. So let's come back to the problem: first try the workaround and see if it solves your problem; if it still doesn't, then we'll come back to what your problem is and see how we are going to solve it. Is that okay?
H
Yeah, I will comment. I will comment on your issue and tell you what the workaround is, and feel free to reply if it doesn't work or if you have any concern about it.
L
So, I wanted to raise this issue and wanted to know how to proceed further. A few days back there were efforts going on to migrate flags to component config, and lots of kubelet flags have been marked as deprecated without any timeline for removal, and all those flags show as deprecated; and whatever new flags and new options are being added to the kubelet are added the same way.
L
So it looks confusing from a user's perspective: new options you are adding look deprecated as soon as you have added them. So what should we do in this regard? How should we proceed? Should we stop adding flags altogether, or...
L
Should we stop marking them as deprecated? Particularly, we were getting problems from SIG Cluster Lifecycle, from kubeadm; it is getting confusing and causing a lot of usability problems for them. So we want, you could say, clear guidance from the SIG on how we should go ahead.
C
So I'm just digging through this, Aditi, and I looked at the email that was linked in the issue, and it was quite old; it was from 2019. So I'm looking at the Google group; this looks like it's owned by a working group, and I would expect that the working group would be responsible for coordinating this, but according to their mailing list, both of the chairs of that working group have stepped down. So the working group is effectively defunct, and I'm not sure what we should do. Please, turn on your camera.
M
No, I leave you the honor. Thank you. So yes, the working group is dissolved and there's nobody taking that up. But the working group was trying to change the whole world, kind of; they had taken up all components, so it was going to be hard for them to reach their goal.
M
I think the blast radius is smaller if it's just the kubelet, and the first thing that we need to do is stop the bleeding, in the sense that if we're going to add a command-line parameter, then it should be in the config also; and ideally we should avoid adding command-line parameters, and if we do want to, then it should be in the config too. So we would set up some unit tests to make sure that people trip on it.
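For context, a minimal sketch of the flag-to-config parity being asked for here: the same knob that exists as a (deprecated) kubelet flag expressed as a field of the KubeletConfiguration object. The --max-pods/MaxPods pairing is used purely as an illustrative example and the exact mapping should be treated as an assumption; field names follow the v1beta1 types.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfigv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletconfigv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Config-file equivalent of the deprecated --max-pods flag (assumed mapping).
		MaxPods: 110,
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints the config file a user would pass via --config
}
```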
M
So we stop people from adding things, and remind reviewers, using unit tests, that things can't be added; and then we have to figure out a timeline for when we would remove the deprecated flags and publish it, so people know when to expect this. Those are the two things that I would want us to do. Elena, Dawn, Aditi: does that reflect what you were thinking about?
L
Yeah, so actually, from the PRs I have seen, you have also raised the issue that the option should be added to both the flag and the config, and most of the PRs, almost all, are following this consensus that it is added in both places. But the problem is that the new flags are deprecated; all the flags, everything: if you grep all the kubelet flags, everything looks deprecated.
M
Yeah, so what I would say there is, we should have a page somewhere, maybe in the community repo, or maybe update the KEP or add a new KEP, that says this is the date when the flags are going away. We should have that information somewhere that's easy to look up.
C
I think that we just need to ensure that it's written down and announced; they've already been deprecated for goodness knows how long. So I would say the blocking thing here is probably not that we haven't given enough time or notice for the deprecation.
C
It's that nobody is owning and coordinating this effort, so everything is getting done piecemeal and there are no comms going out saying, by the way, stop using flags. It would be best if we did that all in one fell swoop, as opposed to, oh okay, we got rid of this flag; oh okay, we got rid of that flag. That's probably the thing that is driving downstream consumers like kubeadm nuts, because how do they know which flag went away when?
C
I would say that would be a bad user experience. So if we could just do it all at once, that would be great. I don't think we should do it for this release, given there's not enough notice, but we could have a KEP that says: okay, here's the state of the world, here's the implementation history, we know we want to get rid of this stuff, and here is the design proposal for how we're going to do...
C
...the deprecations, here are the dates, and give everybody an opportunity to give feedback, and then coordinate that all within node. I think that would be totally doable and would probably be the right amount of comms, and it would ensure that we get it done.
A
I want to explain the component config status, because SIG Node was the first one to follow the call-out when we deprecated and moved a lot of the flags to the config, and that enables a lot of things for applications. Other components didn't follow for more than one year. This is also why we paused, because we cannot just make a community announcement based on the kubelet alone; it is not only SIG Node that manages components.
A
As for the process, I can share some background context. We basically went through all the kubelet flags and identified which ones are going to be permanent, and for those we immediately took action through the deprecation effort and moved them to the component config. Some were left as flags just because either the feature is going to be deprecated, or the configuration that the flag controls is going to change; that's why we marked them deprecated.
A
So, thinking about it, a lot of the flags map to features that are already deprecated, or the feature has already graduated to GA, so the flag can be deprecated safely, no issue. But there are certain flags called out as deprecated, for example dynamic kubelet config, and some other flags listed there, where we called them out as deprecated but didn't directly move them to the component config; they are still there, not removed, so the flag is still there.
A
So we could remove them; we got help this quarter to deprecate that feature and remove that feature, so the related flag can be completely removed, not just deprecated. So the feature can be deprecated and the flag removed from the source code entirely. There's a mixed situation.
N
Yeah, so I agree with this concern. The situation across Kubernetes is confusing for the user and also for the implementer, or for tools on top of Kubernetes. The problem is raised here in SIG Node because the kubelet is the only component that has already issued a deprecation notice, and people and tools are asking: okay, but we have to go to component config, but there is not...
N
...they are not implemented in the same state across components and stuff like that, so the situation is really complex. What is really important for a tool like kubeadm is that we have at least something consistent. So if the kubelet wants to use component config, we should be able to basically pilot each knob, each existing knob, through the component config, and then maybe there are also other ongoing efforts that are related, like, for instance, the instance-specific component config.
N
We have to, let me say, try to find a way forward out of this mess, because it is really confusing for the user and for us implementing the stuff on top of it. So it is not, let me say, a problem only of the kubelet; I agree this is cross-Kubernetes, but in the kubelet it is kind of more pressing, because people look at these deprecation notices and they get worried, and stuff like that.
M
Yeah, and ideally I wish there were people continuing to work on component config, but we know that there aren't, and the kubelet is special in the sense that the rest of the things run as containers, so it's less of an issue for them. The kubelet is the one...
M
...where there is a lot of magic going on between kubeadm and the kubelet, and then the kubelet starting everything else as static pods, so the kubelet is a little bit more special, and I'm hoping that the people that are working on this component config just for the kubelet will be able to help the rest of the components over a period of time.
M
Instead of trying to put everything in the same bucket and then making progress nowhere, right; just like we did for structured logging, we were able to convert the kubelet first. So I think we should lead the way here, Dawn; that's the way I look at it.
A
So we have to look at the scope of what kind of things are involved. At the end we basically decided, okay, finish every single component we own and make sure that, for anything integrated with us, there's at least a rule, a policy, right. So we follow all those kinds of rules and policies, and sometimes even our rules and policies are disregarded by the rest of the community, because they don't understand the complexity on the node side, and the node side's complexity is different from other components.
A
Other components don't have our challenge: it is the scalability of a single node that has to run our whole system. We also carry a lot of legacy stuff, right, like different kernel versions, different systemd versions, different container runtimes; the others are actually more centralized and so could have one successful path. So for us, we have to follow the rules, but on the other hand, we have done the most, just like what you say...
A
We have basically already made ourselves as flexible as we can, as much as possible already, and we basically lead the community by example, instead of just calling things out and not executing. I think we do call out and execute, and I think that's the best leadership from my perspective, instead of just opinions; at least that's the way I see leadership: set it by execution and follow through. So I do think we could have our internal rule, and we could call out to kubeadm...
A
...how it will integrate with us. We do mean it about the deprecation of the flags, right, and I also agree with you; like Elena and others mentioned, even with the deprecation, maybe we did a poor job of setting the timeline, but the deprecation really means it: this community means it's deprecating, and we won't walk it back. We can start from there and then we can fix the problem; for all the deprecation things we give the timeline, and we try to do our best on that. That's all I can do, I think.
C
I don't think that's a realistic option. I think that this needs to be done; I agree, that's the whole reason that we had a working group for it. I think I agree with Dawn in that the kubelet is really overloaded right now; we have a lot of stuff going on; it's one of the largest SIGs in terms of code that we own, and this is the sort of thing I think... oh, and the kubelet is also a weird special case, right.
C
That's not really the case for the kubelet. So just given how much stuff we have going on, it honestly might be easier for other components to lead here, ones which are smaller and can, you know, figure out the standardization. I would say it might make sense for someone else to pick up the effort, or at least try to have some sort of coordination, in terms of something like structured logging.
M
Elena, then basically what we are ending up saying is that we won't do anything, and I don't believe anybody else will pick it up either. So if we want to do it for our own good, we do it. Let's not put the burden on somebody else, because nobody else is going to do it, for any other company; I can guarantee you that right now. If we want to do it because it makes sense for us, we do it; otherwise we don't do it.
M
So Aditi wants to do it, and she'll be rounding up a few more people, hopefully, to help with this work.
A
I just want to say, for SIG Node, the reason we called that out and did it four years ago, as the first SIG, is because our users and everything around them have the need for this. Earlier I mentioned that flags are definitely not flexible enough to satisfy all those requirements; that's why component config satisfies this, and it's also why we have dynamic kubelet config, right. So I believe everyone here who is a vendor providing a Kubernetes offering will understand.
A
Actually, the component config may be more flexible compared to flags, but anyway, there are many ways to solve that problem. For us, we have the need, but I believe other components also have the need; let's just say that need is not as strong as ours. But for overall Kubernetes health and ease of use, all those kinds of things, actually unifying into one consistent approach is definitely a benefit for the users.
A
So again, because we do have user and partner-team engagement, they need a signal: we will keep track of the current status, make sure of all those kinds of things, the component config and the deprecation of the flags, and we continue to do that work because it benefits the user.
A
I do think the community should pick up this work; if they didn't pick it up, maybe it is just because the priority is not very high for them. But for our users, the Kubernetes users who engage with cluster admins to manage their nodes and workloads, we need to keep our standard: component config is important. Switching back to flags, with so many knobs, is just impossible for those management systems. At least we stop that problem.
C
I think the other thing that we could do right now that would be very helpful is going through and at least doing the inventory: I know that there was an issue that was linked, so making sure that's up to date in terms of the status of all of those flags, so at least we've communicated, you know, here's the state of things.
C
We may not necessarily make any progress on it, but, like, here's where it's at; long term our goal is XYZ. At least ensuring that it's well documented and up to date: I think those are two things that we could definitely do right now. Beyond that, I don't know what value there would be in, for example, going and making a KEP for this, just compared to the other sorts of technical debt that we have in the kubelet.
M
Okay, I think, because there is an interconnect between the kubelet and cluster lifecycle, that's why it's pressing, Elena; it's not pressing because of just the kubelet itself, but it is crossing a boundary here, and that's why Lubomir is here and Fabrizio is here.
O
Yeah, I think I can say that this is very confusing on the user side, and I think we should treat the users with high priority. Users don't know what to use, and especially we also have cases where a certain flag is present but the mirror option inside the config is missing, which is even more confusing, though at least some of these flags are not applicable.
O
We should get the story for the kubelet at least consistent, in my opinion: mirror the flags so they have options in the config, start removing flags that we don't need, and, quite frankly, I think we should actually create a proposal that is signed off by SIG Architecture, to have something horizontal across the project. Kubeadm is following the equivalent approach, but kube-proxy and kube-scheduler are not.
C
I don't know if it's important enough; I don't want to say it's not important enough, I just genuinely cannot prioritize it among the other, literally twenty-some KEPs that we have ongoing, and this one's a tech-debt thing, which is important for us to be addressing. It sounds to me like... I would hesitate, Lubomir, to try to pass the ball to SIG Architecture, because I think that was done: we got a working group, and then the working group died before it finished its job.
C
It does sound like, potentially, and this is not to push work onto other folks, but if SIG Cluster Lifecycle is the one that's feeling the most pain from this, then being the sort of voice of the user and saying, okay, here are our needs, here is the pain, can we do something...
M
That's exactly what the three of them are here for: Lubomir, Fabrizio and Aditi. That's what they are here for, Elana.
C
Yes, so I'm trying to call that out and say I don't think that we should then try to push it again to somebody else. We should keep this within the group, and if there's a specific proposal... I don't know; Dawn, what do you think?
A
So how about this: like I suggested earlier, we treat the deprecation and the component config seriously within SIG Node, and we don't add new flags, things like that.
A
No, we focus on the documentation and the existing stuff, but we just cannot get rid of all the deprecated flags in one go. I didn't mention that behind the deprecation there are legacy features; a lot of this needs more engineering resources. This is why some things are missing from component config: just because that flag needs to be deprecated and removed eventually, but we already marked it deprecated, which gives you a strong signal, strongly suggesting you never use it, because the feature is already implemented another way...
A
...unless you want to override something like that. So there are case-by-case situations. We could do more documentation, more of those kinds of things, and each quarter, each release, we can plan to bring more and clean up those kinds of things, but we just cannot do a one-time fix-all, yeah.
M
Absolutely, Dawn. We are not advocating for cleaning everything up at one time, you know. What I want to do, really, is, like Elena was saying: we take stock of which flags are still there, are they deprecated, when were they deprecated, and come up with a new timeline for each of the remaining ones, preferably in buckets of a few each, not just one after another, and, like you said, do it every release: we're going to get a few off each time.
M
So, Elena, one specific thing that I found is, if you publish a schedule, there are people who jump in and clean things up. You know, especially there are folks from India and China who are very good at spotting these calendar entries, and they come and help out with the PRs. It's just that we haven't done that, and I would like to do that, sure.
A
Okay, thanks, team, for picking up this work and initiating the effort. We've run out of time. Okay, maybe we can quickly... do you think you can quickly announce something, or is it something we need to discuss? Then we'd have to punt it to the next meeting. All we...