From YouTube: Kubernetes SIG Node 20220125
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Hello, hello. It's January 25th, and this is the SIG Node weekly meeting. Welcome, everybody.
A
We have a few items on the agenda today, but let's get started with the usual format, in case you're afraid you missed something this week: what happened on the code development front? We didn't have much happening. We only had 21 PRs created this week; we merged 11 and closed three. Most of the three closed were just work-in-progress PRs.
A
As we said, there was a lot of effort last week to close all the PRs and clean things up. That's why the numbers are so low and closed is so near merged, but yeah. We grew a little bit on the open PR front, but the numbers are still very low; 166 is an unusually low number for SIG Node, and this is great progress. Thank you, everybody. Now, Elana, do you want to make an announcement?
B
Yeah, welcome everyone to the meeting. It's me here to pester you about upcoming deadlines. This week we have our upcoming production readiness review deadline, which is not a hard freeze or anything like that. But if you don't get your PRR questionnaire completed and a reviewer assigned, we might not get to it in time to approve your thing for the release, so it's kind of like our SLA. So if you could, please ensure that you get that filled out by January 27th.
B
I went and did a pass of all of the node KEPs, and I sent an email based on the ones that, at the time, either didn't have one, or did have it but didn't have a person assigned, or something like that. You can pick anybody on the list. I think me and Wojtek might be a little oversubscribed, so maybe consider assigning things to either John or maybe David, and I think Matthias is also shadowing this release. Thank you, Matthias. But yeah, we've gotta...
B
We
gotta
get
that
stuff
reviewed
to
make
sure
that
your
questionnaires
are
done
and
filled
out,
so
we
can
get
them
reviewed
and
then
similarly
enhancements.
Freeze
is
february
3rd.
If
you
are
on
my
side
of
the
globe
february
4th,
perhaps
if
you
are
in
europe
so
or
even
further
east
of
there,
so
please
ensure
that
you
get
your
caps
complete
and
approved
and
merged.
So
you
can
work
on
your
features
this
released.
A
Thank you. Now let's go to the exciting topic of kubelet tracing. Do you want to speak about it?
C
Let's see, I can share my screen... I don't have to; I can just talk about it. I don't have the ability to right now.
C
Yeah, you can go to the readme if you want, but basically I need a node review of this PR. It's been reviewed by SIG Instrumentation.
C
It's been around for quite a while; it's kind of been back and forth on the back burner of my plate, and so now I'd really like to get it in for this enhancements freeze. If you click on the file, you can see it rendered... you know, it doesn't matter, I guess. Thanks. We'd like to add tracing to the kubelet. Currently there's OpenTelemetry tracing in etcd, in kube-apiserver, and in CRI-O as well, so adding instrumentation to the kubelet would logically come next.
D
And then you received a review from David Ashpole. David Ashpole is from the node side, and right now he is also in SIG Instrumentation, so he's been working on tracing for a while. So I think you basically already got to know the reviewer. Okay, cool.
B
I'm one of the chairs of SIG Instrumentation, and David is our tech lead. So him and I have been reviewing this, and I even did a little bit of API review on this one as well. But we needed to make sure that, you know, we need node approvers to be able to merge any code changes here, and I don't think it's limited to the parts of the code that David has approver on. So I just wanted to make sure that everybody...
B
...knows about it, and that we can proceed with this, because we saw some demos and it looked really cool. Yeah.
C
Yeah, thanks. I had every intention of setting up the demo again, but I do have screenshots of it that I can link in the chat. A few things I wanted to note, now that I have my voice connected to my brain: on the tracing configuration, currently David is using a tracing configuration in the API server, and here I would add one for the kubelet. So we'll move that to component-base, you know, to a central location.
C
So
that's
that's
one
main
thing
in
addition
to
you
know
enabling
tracing
and
cubelet
that
will
happen
with
this
enhancement,
and
it
will
be
experimental,
of
course,
I'm
really
looking
forward
to
experimenting
with
it.
You
know
seeing
it
run
at
scale,
seeing
where
it
makes
sense
to
add
spans
where
it
doesn't.
You
know
all
of
those
things
they're
hard
to
do
just
running
locally
and
without
a
real
environment
up.
C
I am going to send a link here to a repository that has some screenshots that you can check out. Plus, it's a reproducible environment that you can follow, if anyone else wanted to try it out. So here that is, in the chat.
C
And again, I don't know why I came there... we do have tracing enabled in CRI-O, just experimentally as well, and the link that I just sent explains how you can enable it in CRI-O per node.
C
So what I haven't figured out yet, in the implementation PR: I have it all working, as you can see, but I don't think I have the places where it makes sense to be generating spans. I have added spans to the CRI.
A
Okay, tracing is great. I mean, I have a lot of stake in the game as well. I will definitely review that; thank you for bringing it up. So you said what's important for SIG Node to review is where to put spans, and when, specifically, to generate spans, and you already figured out the propagation of context.
A
Yeah. And just for everybody: in Slack, somebody was complaining about building the container runtime and Kubernetes together, and OpenTelemetry turned out to be one of the dependencies that we currently have across many projects. So this will be a go.mod nightmare at some point. Hopefully we can just stabilize OpenTelemetry and it will solve all the problems.
D
Years ago, one of my interns, not using OpenTelemetry, just wanted to build tracing for Kubernetes, but we paused that project. The reason is that we wanted to use the industry standard, OpenTelemetry. Since that time, I think they started working on those things, and participants from SIG Node and from the SIG Instrumentation effort tried to help push it. So I'm glad to see this has made some progress.
C
Oh great, yeah, so thank you. Thanks. Yeah, I've been working with someone from Tufts University who, you know, has his PhD in telemetry and doesn't know that much about Kubernetes, but he has a lot to say about where collecting traces makes sense and where it doesn't.
A
So the next one is the batch update, the batch working group.
F
Hi, yeah. So, just following up on last week's meeting, and hopefully this is going to be quick; I'm not going to take a lot of your time. We had a discussion with the topology-aware scheduling working group. I think Swati was in the meeting as well and can corroborate my understanding here.
F
So the group that is working on topology-aware scheduling would prefer to continue to organize as their own separate group, but with Swati as a co-chair on the batch working group, to ensure a smooth transition to, hopefully, the wider umbrella, which is the batch working group. The idea here is that they are concerned that the topology-aware scheduling effort might not get too much traction if we merge it, and the proposal here was, okay...
F
You
could
continue
to
organize
as
you're
doing
right
now,
but
we
would
be
getting
updates
in
the
in
the
batch
working
group
and
hopefully
eventually
we
will,
we
will
merge
swati.
Do
you
want
to
add
something
yeah.
G
So I think the concern is not essentially that we won't get enough traction; the concern is that the way we've been operating, the flow, would probably not continue as we have been working, probably because the scope of the batch working group is going to be much more than just topology-aware scheduling. So, in favor of that, we just decided that we'd continue as we've been operating, and reevaluate in two or three months' time and see.
G
Obviously, we'd join the batch working group meetings in due time and relay the information back to SIG Node, if that's what is needed, as well as to the topology-aware scheduling working group, and, in due time, just re-evaluate whether it still makes sense to continue those meetings. If not, we can...
G
We
can
just
kind
of
fully
converge
with
batch
working
group,
but
I
think,
as
things
become
clearer,
we'd
like
to
take
that
decision,
then,
as
opposed
to
now,
because
the
working
batch
working
group
is
kind
of
would
be
at
an
early
stage
of
formation
and
how
things
are
going
to
work
would
become
clearer
in
their
time.
G
I left kind of a comment summarizing all this, but at the end I did defer the final decision to Dawn and Derek, you know, given that they're the SIG Node chairs.
D
Yeah, Swati, I read your comment, and I also agree. Basically, we are all concerned about whether it converges, right, and also whether it is continuous and sustainable long term. So I agree with you, and I have no objections to this batch...
D
...working group. And I think last time I already stated the only concern: it is just worry about not converging, and also about scope beyond SIG Node. But I do see the work group perhaps adding value here, because in the past a lot of the topology-aware scheduling and CPU pinning work, all those kinds of things, actually involved a lot of effort beyond SIG Node. And in general, what made for little progress is just that we start from the bottom up and then we try to... so.
H
The comment thread that I had on the Slack, which I guess I didn't get to talk much about last week while I was out, was... which is what I thought, Swati, we were on the path to doing: trying to make a larger philosophical change of approach to this. And I still have that concern, which is why I asked what the outcome of the work group was, whether it was a technical recommendation, or what the actual deliverable was going to be, because it just felt very open-ended. And then, when we ran the resource management work group... I think we ran that twice...
H
...each time we made a lot of progress, but then they do kind of sputter out in some instances, and it becomes very fragmented. So anyway, I'll catch up on the discussion. But I don't know, Swati, when you all have met, or whoever is proposing where the batch work group goes here: is there a genuine intent to continue the path we're on, or is there a more radical reformulation that's maybe being indirectly assumed by the formation of this, right?
G
I
believe
the
intent
is
to
continue
at
the
path
we're
on
and
then
figure
out
if,
if
there
are
radical
changes
that
have
to
be
made
and
that's
when
we
bring
back
that
information
back
to
us,
ignored
and
figure
out
like
strategy
again,
but
for
now
that's
what
I'm
trying
to
convey
here
that
for
now
we
just
continue
operating
and
continue
operating.
Based
on
the
decision
that
we
made
previously.
G
Yeah. I think, in terms of topology-aware scheduling, at the moment we're not considering that, but I believe the batch working group is considering that as something they'd be looking at, and that's one of the reasons we are concerned that it might impact our velocity. So we want to continue kind of operating the way we do.
F
So let me just answer this at a higher level, and I'm here because I initially proposed the formation of the working group; I'm Abdullah. The way that we're thinking about it is that we want to have, let's call them, pillars in the working group. One pillar is related to trying to improve the Job API; another one is trying to introduce job queueing, job-level management; and then the third one, the one we were trying to get Swati on, is related to NUMA-related, high-performance computing...
F
You
know
works
we're
not
here
to
propose
a
radical
change,
we're
actually
trying
to
solve
the
problem
of
fragmentation
as
people
who
are
interested
in
batch.
I
want
to
have
these
discussions
like
related
to
topology
awareness
scheduling
in
a
place
where
I
could
potentially
relate
to
other
batch-related
efforts,
and
that's
why
I
was
hoping
that
swati
will
join
us
as
a
co-chair
as
well.
F
Just like, okay, she's the point of contact, she's the one that is trying to advise, and basically potentially having part of those discussions in the batch working group, to see how it would relate to job-level management, for example. So we're not trying to change course; we're actually trying to defragment batch-related discussions a little bit and have some higher-level context that can group them. So that's, I guess, the intent.
F
And
we
completely
acknowledge
that
there
has
been
a
lot
of
progress,
not
that
there
isn't
and
I'm
sorry
if
I
like,
conveyed
something
else.
It's
just.
We
want
to
understand
how
that
relates
to
higher
level.
Big
picture
right
if
we're
trying
to
develop,
improve
the
job
api
itself
or,
for
example,
like
improve
job
job
level,
management
and
job
level
scheduling.
F
How
can
we
integrate
that
with
dynamic
resource
management?
How
can
we
integrate
that
with
numa,
aware
scheduling,
so
that
is
the
the
pitch
here
for
the
proposal
and
that's
why
I
wanted
I
was.
We
were
hoping.
That's
like
someone
from
node
who's
interested
in
badge
would
be
on
the
on
the
working
group
in
the
world.
D
I
think
I
I
see
the
difference.
This
is
the.
Let
me
let
me
give
me
a
little
time.
First
thing.
Let
me
clarify,
I
think
that
direct
earlier
I
said
that
we
didn't
make
a
pro
magic
progress.
It's
not
going
to
know
that
didn't
make
much
progress,
but
I
can't
see
the
how
painful
for
the
node
feature,
because
it's
bottom
up
and
then
we
try
to
influence
other
like
the
application
application
level.
Api,
not
powered
api
part
apis,
but
whatever
it
is
other
like
the
higher
level
of
the
concept
it's
so
hard
and
how
much?
D
How
hard
do
I
try
to
influence
the
sky
donor?
Six
guy
donor,
since
this
is
father,
sick,
scheduler
and
from
the
sikh
eyebrows,
and
I
think
this
is
the
good
things
for
us
to
converting
and
to
help
us
or
a
lot
of
feature.
But
I
do
see
the
difference
with
the
signal
when
we
try
to
solve
problem.
D
We
first
try
to
abstract
the
problem
based
on
the
efficiency
based
on
the
performance
based
on
the
powder,
and
I
can
see
that
the
other
and
I
could
see
the
app
and
see
the
scheduler
when
they
try
to
solve
problems.
The
first
thing
abstract
it
is
from
the
higher
level
of
the
object
concept.
This
is
a
this
is
what
I
think
about
this.
What
group
can
help?
D
D
D
But on the other hand, last week I did ask: how are we going to call it off, or when are we going to dismiss this work group? What is the stop condition? And I think the argument that I got, which I'm buying in the end, is that this is the first thing they need to figure out: how to dismiss it, what the goal is, and once they achieve that goal, they will dismiss it. But they need to have that work...
D
Group
and-
and
I
have
represented
from
the
involvement
of
the
seeker
and
to
help
define,
what's
what
it
is,
the
exit
criteria
and
I
think
that's
argument
at
the
end
convinced
me.
So
maybe
we
should,
I
think,
that's
ready
missing
like
the
three
months
after
three
months,
and
we
give
we
basically
decide
decided
what
what's
next,
I
I
buy
that
term
and-
and
just
so
then.
H
I
think
if
it's
okay,
I
think,
to
cut
in
there.
I
think
what
you
just
said
is-
maybe
me
being
out
of
date
with
latest
updates
on
the
pr,
but
it
hits
the
core
concern.
I
had
right,
which
was
having
done
the
resource
management
work,
which
was
the
first
work
group
in
kubernetes
like
it
reached
a
point
where
the
scope
was
so
unbounded
and
eventually,
then
we,
we
said:
okay,
we're
going
to
focus
the
scope
on.
H
But
the
parts
I
had
read
prior
was
basically
taking
elements
of
individual
sig's
scope
of
responsibility
and
leaving
it
open-ended
for
basically
what
I
first
read,
and
this
could
be
an
error
on
my
part
as
in
perpetuity
and
if,
instead,
it's
been
updated-
or
I
I
misrecall
to
say,
like
we're
going
to
define
a
road
map
or
recommend
something,
I
think
that's
that's
a
clear
end
state
and-
and
I
say
this
abdul
just
in
the
experience
of
having
done
this
before,
like
you
will
you
will
be
happier
for
having
had
that
than
than
you
otherwise,
because
that
that
was
our
prior
experience
so
I'll
catch
up
on
the
pr.
F
Just
please
comment
on
the
pr
if
you,
if
you're
looking
for
a
specific
language
that
basically
captures
what
you
had
in
mind
and
we're
happy
to
update
it
and
make
sure
that
it
is,
it
is
taken
into
account.
Okay,
thank
you
very
much.
Yeah
around,
like
the
defragmentation
that
the
goal
okay,
we
should
get
these
groups
into
working
groups
into
one
working
group
with
explicit
verticals,
each
working
on
that
specific
feature,
etc.
Yeah.
H
The struggle I sometimes have is that I don't always equate a working group to execution, in the sense that it's still the SIG that is supposed to own the code. And so the deliverable I was tending to look towards was a concrete recommendation, or maybe an enhancement, right, and not necessarily an unbounded, open implementation.
F
Yeah,
I
think
that's
a
better
language
of
framing
like
recommendations
on
how
to
handle.
You
know
topology,
aware
scheduling,
end-to-end,
for
example,
that
could
be
like
one
strong
outcome
out
of
the
working
group,
but
I
I
I
think
already
the
topology
of
scheduling
is
making
progress
towards
that.
F
We
just
try
to
formulate
it
in
the
context
of
again,
as
I
mentioned,
an
interesting
journey,
how
it
would
look
like
intermittent,
for
example,
and
and
and
in
the
working
group
we're
proposing,
we
want
to
propose
you
know,
job
level,
management,
job,
probably
queuing
again.
We
could
have
discussions
there,
okay,
how
how
we
can
represent
new
methodologies
in
in
the
queue,
for
example.
F
That's what we have been trying to do for the past year. We contributed a lot of improvements to the Job API. We formed some sort of good collaboration with Maciej, who was the co-chair owning that package, and he's a co-chair in this working group.
H
And
that's
that's
sounds
positive,
it's
just.
It
could
be
that
I
was
out
of
sync
with
what
the
sig
had
felt.
Maybe
prior,
which
was,
we
were
not
trying
to
make
core
kubernetes
a
a
full.
H
You
know
feature-rich
job
scheduling,
system
versus
leaving
that
to,
I
guess
other
projects
that
might
have
been
building
around
the
core,
but
either
way
it
sounds
like
I'm
behind
a
little
bit
and
I
will
catch
up,
but
I
I
would
just
want
to
keep
it
focused
on
a
clear
outcome
or
deliverable.
D
Derek,
I
don't
I
don't
think
about
you
are
behind.
I
think
all
your
concern
raised
here
also,
I
totally
agree.
I
reached
the
similar
concern
this
also.
Even
I
agree
when
I
talk
to
the
talk,
the
talk
to
team
and
then
I
agree
to
this
work
group,
but
I
also
have
the
same
concern
as
you.
So
this
is
why
I
risk
of
the
when
to
dismiss
this
work
roof
and
what
is
your
concrete
goal?
D
The
proposal
there's
the
there's
this
right
man
selected
three
months,
but
I
don't
think
in
the
in
the
proposal
itself
did
literally
give
the
timeline,
and
we
need
to
include
that
one
in
the
proposal
and
the
newton
a
say,
or
at
least
if
we
cannot
give
like
the
oh
define,
what's
the
clearly
exit
criteria
and
we
need
to
say
our
first
month
decided
those
kind
of
things
something
like
that
so
yeah
that
will
be
make
us,
then
we
can
reiterate
those
things
right.
D
So
then,
and
just
like
the
our
experience,
it
is
the
group.
Actually,
you
not
just
fragment
the
six
and
also
not
on
that
execution,
so
in
the
app
each
sig
have
to
own
the
execution,
but
not
really.
If
we
don't
execute
well
well,
not
really
fill
the
gap
about
the
communication,
but
if
we
can
execute
where
well
definitely
it
is
negative.
Yeah.
F
I think a really good way of saying it is: recommendations on Job API improvements, recommendations on job queuing improvements, and maybe a discussion of whether that should be in core or outside core, or, for example, what kind of hooks we need to add if we decide to do it outside. And then the third pillar is, again, how to make Kubernetes, you know, friendlier for HPC, with the work that we've been doing for NUMA-aware scheduling and CPU manager and whatnot.
H
So let me send that to you afterwards, and then, if you want to make an update to the PR in light of today's discussion, feel free, and then I'm happy to review. I'm hopefully not coming across as negative; I'm just trying to give you some tips, having run something similar in the past, on what you might actually find will work best.
F
No,
that's
actually
great,
like
again.
Our
goal
is
just
to
improve
batch
experience
and
we
really
need
all
the
help
here
and
from
previous
experiences
on
how
to
actually
navigate
that
in
the
community
and
what
actually
is
practical.
What's
not
so
so
that's
great
yeah!
Thank
you.
So
much.
A
Great. If there are no more topics on that, let's move on to Vinay. All right, Vinay, you wanted to update us on in-place update.
K
Yeah, hi. So I think the PR is still in a holding pattern. There is the enhancements PR, the enhancement request, 3153. I think it's more of a formality at this point, so Dawn and Derek, if you can, please take a look at this and get it merged before the freeze. I just don't want this to slip through the cracks.
H
I'll
do
this
down
today,
and
I
saw
you
saw
the
comment
on
there
yeah.
I
was
out
last
week
and
I'm
getting
over
known
this
right
now,
but
I
I
am
going
through
your
pr
and
yeah.
I
will
get
that
done
so.
Yeah
apologies.
K
Yeah
I'll-
and
I
think
the
timing
works
quite
well-
I'm
still
neck
deep
in
my
company
project,
the
ebpf
networking
stuff,
but
I'm
hoping
that
we'll
get
the
release
done
in
the
next
couple
of
weeks
and
then
I'll
have
some
time
to
address
any
major
issues,
but
I'm
still
gonna
seek
out
help.
I'm
gonna
create
issues
for
most
of
the
things
and
we'll
fix
them
in
the
124
milestone
as
much
as
possible.
K
Does
that
I
think
that's
pretty
much
it
from
me
we'll
follow
up
next
week.
A
Okay, the next topic is the file size of logs. [Name unclear], if you're here?
L
Yeah, hi. So yeah, this is regarding the log issue related to container log rotation. Here I just wanted to get some leads or inputs from your side that could help us in examining the issue, investigating, and resolving it.
K
I have a question about this. In in-place vertical scaling we had one issue, I think Derek brought this up, where, with memory-backed emptyDir file systems, if the containers exit, then the quota that's used by the containers goes to the parent, and we sort of lose accounting for it. I'm wondering if there is a common root cause here that can be addressed.
H
Yeah,
so
that
was
only
for
memory
backed
volumes,
yeah,
something
that's
on
temp
fest.
So
for
this
this
is
just
a
regular
file.
That's
not
in
a
memory
back
volume,
so
I
trust
that
peter
menaul
can
help
chase
this
down.
H
In
general,
though,
we've
had
I'm
trying
to
recall
on
the
ephemeral
storage
accounting
should
be
covering
the
writable
layer
of
the
container
it
should
be
covering
logs,
but
there
were
a
couple
edge
cases
depending
on
how
your
mhfs
was
structured
or
even
how
your
root
fest
was
structured,
where
there
might
have
been
gaps
and
incapabilities.
So
I
I'm
I'm
sure
peter
or
manola,
and
anybody
wants
to
jump
in
can
help
chase
that
down.
H
But
ephemeral
storage
is
not
as
rich
in
capability
as
say
what
we
have
with
cpr
and
memory
and
others
so,
okay,
so
unlikely
that
it's
a
common
root
cause
yeah,
not
not
likely
at
all,
and
then
this
was
related.
I
know
there
was
a
previous
pr
a
couple
weeks
ago
where
someone
was
proposing
to
enhance
the
xfs
quota
implementation
or
try
to
help
that
get
over
the
limit.
H
I
thought
in
that
that
the
log
the
logging
doors
weren't
being
tracked
under
xfs
quota,
so
maybe
of
above
and
peter
minol
when
you
do
explore
this.
If
we
can
see
that
we
all
get
to
the
same
accurate
level
of
understanding
or
help
me
understand.
If
I
was
wrong
on
that,
that
would
be
helpful,
but
I
was
pretty
sure
that
it
was
not
included
in
that
as
well.
A
Hey
next
epic,
next
epic
is
by
me.
I
created
this
issue
once
we
get
into
like
we
start
using
v1
of
ci
api
and
it
was
introduced
in
120
and
now
in
123.
We
started
using
it
and
it's
already
diverged
a
little
bit
from
the
moment
it
was
introduced.
So
we
have
very
simple
versions
of
like
same
version
of
sierra
api,
v1
being
already
released
with
different
set
of
features
that
that
is
a
question.
How
do
we
emerge
here?
Api?
Do
we
version
it?
A
Do
we
do
anything
about
it,
and
I
started
this
document
explaining
requirements
so
far
about
here
api.
I
linked
it
in
the
meeting
notes
and
this
issue.
So
if
you
can
start
commenting
on
requirements
like
what
we
need
for
cri
api,
then
we
can
form
how
to
achieve
that.
Just
to
give
a
brief
overview
on
a
single
node,
you
may
have
so
many
components
implemented
extra
api
you
may
have
coupled
that
calls
into
container
runtime
and
separately
goes
into
image
service
proxy.
A
We
have
open
source
projects,
doing
like
image
service
proxy,
to
enhance
how
images
being
pulled,
and
this
two
components
like
container
on
time
typically
would
be
versioned
with.
I
mean
not
typically
like
container
on
time.
Sometimes
versions
is
os.
A
Kublat
is
versioned
by
a
system
administrator
or
cloud
vendor
image
service
proxy
may
be
versed
by
some
certain
party
vendor
and
they
all
implement
in
different
versions
of
cri
api
and
they
like
code
into
each
other
and
image
pro
service
proxy.
A
It
is
a
proxy,
but
it
will
call
to
continue
runtime
back
typically
and
then
there
may
be
cri
tools
that
version
by
end
user.
So
some
system,
let
me
know
like
somebody
who
want
to
troubleshoot
something-
may
install
c
right
tools,
bring
their
own
version
of
it
and
it
will
also
be
compiled
with
its
own
version
of
sierra
api,
and
the
cri
tools
will
run
container
on
time
as
well
as
third
party
demon
sets
that
are
doing
some
sort
of
monitoring
or
security
analysis.
A
They
also
can
be
brought
by
end
users,
and
they
also
implement
a
version
of
cri
api,
and
this
version
may
be
different
and
the
single
cluster
may
have
like
so
many
verses
of
that
implemented
anyway.
That
just
points
at
the
like
problems
that
so
many
versions
being
used
and
if
you
look
at
graph
of
invocations,
we
get
into
the
requirements
that
versions
of
sierra
api
needs
to
be
backward
and
forward
compatible.
A
We
need
to
declare
the
minimal
kubernetes
version.
This
couplet
supports.
So
let's
say
we
working
on
124
and
we
may
declare
that
it
supports
all
the
all
zera
api
versions
down
to
120
when
it
was
introduced
and
then,
if
container
time
decided
to
implement
a
sierra
api
for
120,
couplet
can
still
work,
and
I
propose
we
do
like
conformance
tests
on
that
and
then
opposite
is
also
true.
A
If
coblet
123,
that
we
just
released
will
be
used
by
customers
and
they
will
decide
to
go
with
container
geo
of
1.6
that
is
1.7
whatever
that
is
released
and
compiled
with
the
latest
version
of
kubernetes,
a
healthy
like
in
maybe
a
year
and
a
half
is
126
and
it
still
needs
to
be
supported,
and
we
probably
want
to
have
this
forward
compatibility
as
well.
So
those
are
requirements
that
I
listed
here
in
the
reasoning
for
that.
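[Editor's note: the backward/forward requirement above amounts to a version negotiation rule. The sketch below shows one simple policy, picking the newest CRI API version both sides support; the version strings and the "newest first" ordering are illustrative assumptions, not the policy the linked requirements doc has settled on.]

```go
package main

import "fmt"

// negotiate returns the newest CRI API version understood by both the
// kubelet and the container runtime. kubeletSupports must be ordered
// newest first; the version names are illustrative.
func negotiate(kubeletSupports, runtimeSupports []string) (string, bool) {
	offered := make(map[string]bool)
	for _, v := range runtimeSupports {
		offered[v] = true
	}
	for _, v := range kubeletSupports {
		if offered[v] {
			return v, true // first match wins because of the ordering
		}
	}
	return "", false // no common version: the pair is incompatible
}

func main() {
	// A newer kubelet talking to a runtime that only speaks v1alpha2.
	v, ok := negotiate([]string{"v1", "v1alpha2"}, []string{"v1alpha2"})
	fmt.Println(v, ok) // prints "v1alpha2 true"
}
```

A conformance suite, as proposed above, would then pin down what "supports v1" actually means feature by feature, since the same version name has shipped with different feature sets.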
A
Yeah
there
is
no
quick
feedback,
yeah.
M
I think, sorry, it makes sense to solidify this, for sure, and to have a common guideline. And I think maybe we can also look across the project at what other components using gRPC are doing. I think Tim Hockin pinged some folks from other projects, right?
A
Yeah
yeah,
I
already
went
to
see
architecture
meeting
and
what
else
does
a
project
and
so
far
those
feedback
that
it
it
was
never
formulated.
So
there
is
no
formal
definition.
How
versions
needs
to
be
done
on
grpc
apis,
so
that
would
be
first
yeah.
Okay,.
H
No, not storage. I'm just saying I'm thinking of other hooks we have in the kubelet, and I was just wondering if Sergey wanted to grow this to be a kubelet gRPC project statement in general, and not necessarily specific to the CRI, because we have other gRPC endpoints, I thought.
N
Thank you, that makes sense. We're also talking about moving CNI to gRPC with v2 of CNI, so we'd have the same problem there. We just have to have a common way to define these.
A
Demanding
to
the
hope
for
everything
anyway,
thank
you
next
one
is,
I
don't
know
what
this
is
about.
I
Marlo here. So the quick summary is: we put in the CPU use cases doc a while ago, and we've been asking for comments as far as clearing up the various questions we have on items; we have a bunch of them requesting some last looks. You know, if people could read through this and give any guidance on what the kubelet currently can and cannot do, where the technology is, and what parts of the documentation aren't accurate. Because we'd like to start clearing up the documentation and start figuring out where we can put in resources, because we have use cases we care about too, right, as far as getting CPU management, you know, bigger, better, faster... maybe not bigger; better, faster.
H
Yeah
and
then
abdul,
I
encourage
you
if
you
haven't
seen
this
talk
to
review
this
as
well,
because
it
seems
potentially
related
to
what
we're
discussing
earlier
on
the
batch
item.
A
Thank you, Marlo. And Mike?
O
Yeah, so I didn't actually start the Slack thread, but we're running into the same problem at Adobe, where we're trying to switch to cgroups v2. And basically what happens is, with the cAdvisor that's running in Kubernetes 1.21 and 1.22, the node metrics for CPU and memory are missing, because of a bug inside of cAdvisor that isn't fixed until 0.43; Kubernetes 1.23 has 0.43 in it.
A
Do
you
have
a
backpack
in
that
beyond
a
slack
message.
O
I
don't
joe
do
you
happen
to
remember
on
that.
I
know.
There's
we've
I've
seen
some
tickets
about
it
kind
of
saying.
Oh,
it's
metric
server.
I
know
it's
the
advisor
no,
it's
this.
I
don't
know
that.
There's
anything
specific
to
the
findings.
That's
been
from
this
slack
thread,
I'm
happy
to
create
one,
but
I
didn't
see
one
specifically
calling
out
secret
v2s
and
this
just
more
vague
closed
or
not
our
problem
sort
of
stuff,
yeah.
H
Mike,
I
think,
we'd
love
to
be
able
to
help
you
out
here.
I
guess
what
I
was
maybe
not
aware
of
is:
can
you
what
are
your
expectations
on
121
when
running
with
secret
speed,
2,
and
I
just
want
to
make
sure
they
were
in
line
with
present
capability
and
support
levels
from
the
project
and
if
you'd
worked
with
maybe
marinol
or
just
sepia
or
any
engineers
who
had
been.
O
So I'm new to a lot of this and to cgroups v2, so I'm not super familiar with it. What we were trying to do was upgrade to the latest Flatcar image, you know, a newer Flatcar image, to get some CVEs fixed and patched, and they switched to cgroups v2 by default. And we noticed that, when running that, the node metrics were missing from metrics-server, and we were trying to run cAdvisor as a DaemonSet as a workaround to that.
O
When
it
read
memory
and
cpu
pressure
implied
to
us
that
it
may
be
with
those
metrics
missing
just
running
the
c
advisor
as
a
damon
said,
it
may
not
be
a
feasible
work
around.
D
I
just
want
to
say
that
single
version
2.
Actually
we
are
not,
we
even
know
the
kubernetes
is
not
support.
I
mean
signal,
do
not
claim
if
I
support
the
first
sql
version
2
in
the
agnes
1.21.
If
I
remember
correctly,
there's
the
minimum
alpha,
it
is
the
1.22
menu.
Please
correct
me
and
the
david
yeah.
M
Yeah
yeah,
I
think
121
is
too
old
because
fixes
have
been
going
into
it
and
like
like
just
yesterday,
david
porter,
and
I
were
talking
about
how
we
can
use
the
new
psi
for
eviction.
So
we
haven't
even
like
started
handling
that,
but
I'm
not
sure
whether
you're
talking
about
like
v2
specific
eviction
or
just
the
existing
eviction,
not
working
because
of
the
way
you're
running
c
advisor.
M
And
I
think
that
we'll
just
have
to
drill
down
into
like
what
exactly
you're
you're
hitting
in
your
setup.
O
Yes, the basic setup is: when you try to hit the cAdvisor report from the kubelet, the node CPU and memory metrics don't come back, or come back with zero.
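To make that concrete, here is a self-contained sketch of what "coming back with zero" can look like when inspecting the kubelet's cAdvisor endpoint. The sample data and the focus on root-cgroup (id="/") series are illustrative assumptions, not output from a real node; on a live cluster the raw text would come from something like `kubectl get --raw "/api/v1/nodes/<node>/proxy/metrics/cadvisor"`:

```shell
# Trimmed, made-up sample of the kubelet's /metrics/cadvisor output.
cat > /tmp/cadvisor-sample.prom <<'EOF'
container_cpu_usage_seconds_total{id="/kubepods"} 123.4
container_memory_working_set_bytes{id="/kubepods"} 5.6e+08
EOF

# A healthy node also exposes root-cgroup (id="/") series, which is
# what node-level CPU/memory aggregation relies on; if those series
# are absent, node metrics show up empty or zero downstream.
if grep -q 'id="/"' /tmp/cadvisor-sample.prom; then
  echo "root cgroup series present"
else
  echo "root cgroup series missing"
fi
# prints: root cgroup series missing
```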
H
Yeah, I guess, Mike, I'd recommend following up; there's a KEP related to cgroups v2 in the kubernetes/enhancements repo, which would tell you what the capability was for a particular release and at what level. Right now we're trying to graduate support for cgroups v2 to a higher level, but we hadn't reached that.
H
So there are a number of limitations you might see, and if you were having a near-term issue, I don't know if it's possible in your environment, but I would probably see if you could override that default your distribution shipped and go back to v1. If it were me, and that was possible, that's what I would do. And then, as a SIG, there's a group of us here who are interested in trying to get v2 support smooth out of the box.
O
So, to kind of rephrase what I heard: there's a KEP for cgroups v2, it hasn't gone beyond alpha or been fully implemented yet, and so the recommendation is, until that's done, stay on cgroups v1.
H
Which runtime were you running? Okay, yeah.
H
That one is further along, at least the one I can speak to, I know, has moved further along, but there's just a lot of complexity in this one. So at least catch up on the KEP, and if you wanted to help us drive this even further forward, you can see the authors on that KEP and reach out, and we'd be happy to help, because we do want to get the project to move forward.
O
Yeah, we're working towards that. We also have EKS dependencies, which limit what versions we can run as well. Okay.
P
So yeah, I just wanted to follow up and say the same thing, because we have been fixing some issues with cgroup v2, and it's kind of challenging, because we can't just backport some of the cAdvisor versions. They have, like, a lot of changes between versions, so it's not a simple thing to just backport the whole cAdvisor release to an older Kubernetes version. So the recommendation is to try the latest version, and we did put out some smaller patches for older versions.
H
And that setting needs to be configured consistently in your runtime, whether that's Docker Engine or CRI-O or containerd. But then, for cgroups v1 or v2 enablement, there's no special flag in the kubelet; it just looks to see what the cgroup subsystem is on the host and will try to work appropriately.
H
It's
just
right
now.
The
project
itself
doesn't
have
a
beyond
alpha
support
for
supporting
a
secret
suite
two
configured
hosts.
D
I think, Tony, maybe this is the problem: it is the systemd configuration you have. systemd can boot with cgroup version one or version two, and the kubelet actually figures it out dynamically. So I think, you know, maybe during boot time you need to configure systemd for cgroup version one or version two.
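For reference, the boot-time knob being described here is usually the systemd kernel command-line parameter below. Treat this as a sketch of the general mechanism, since how kernel arguments get set differs per distro and bootloader (GRUB, Ignition on Flatcar, and so on):

```shell
# Kernel command-line argument that makes systemd mount the legacy
# cgroup v1 hierarchy instead of the unified v2 one; it takes effect
# after regenerating the bootloader config and rebooting the node.
systemd.unified_cgroup_hierarchy=0
```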
O
Yeah,
there's
we
have
some
hacky
workarounds.
We
can
do
to
default
it
back
to
sql
v1.
We
were
just
hoping
that
it
was
a
c
advisor
patch
and
we'd
be
able
to
back
court.
You
know
cherry
pick
that
back
and
then
you
know
be
good
to
go,
but
it
sounds
like
there's
a
lot
more
involved
in
it
than
that,
and
so
we're
gonna
have
to
take
a
different
approach
than
what
we've
been
hoping
for,
which
is
fine.
It's
totally
understandable,
but.
M
I think, based on this conversation, it feels like maybe we should update the documentation or write a blog on where things are at, so the user community knows where we are and what the next steps are.
A
Okay, then, thank you. And Mike, if you want to continue the conversation, perhaps it should be in a bug; bugs are typically easier to track. If you're satisfied with the answer, yeah, it's fine, we can just close the conversation.
O
Okay, yeah, I mean, it was pre-1.23. That's why I didn't think of opening an actual bug for it, because I know that in 1.23 that patch is already in there.
O
So
do
you
think
it's
still
worth
opening
a
jet
issue,
whatever.
A
Is there anything else? We're over time, one minute over time. If nothing else, thank you, everybody, for attending. Bye.