From YouTube: Kubernetes SIG Node 20220802
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20220802-170644_Recording_640x360
B
Hi everyone, welcome to the SIG Node meeting, August 2nd, 2022.
B
So, first off we have Vinay, with the in-place pod vertical scaling PR.
C
Yeah, hi everyone, thanks. I think the status update on this is that last week I had to rebase it after some other changes came in, and when I rebased it over the weekend, like on Friday, I found that the resize E2E test just failed. At first I was nervous, seeing something broke, and then I realized that the test is doing exactly what it should.
C
What happened was that last Monday they switched from the old COS version to COS 97, which is cgroup v2 by default. That caused the test to fail, because when it does a resize, the test goes and looks at the cgroup values in the pod, and it looks for the specific paths that are cgroup v1, like memory.limit_in_bytes, that don't exist in v2. I adapted the test to work for both v1 and v2.
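For reference, the v1 control files mentioned here have differently named v2 counterparts, so a test has to resolve the file name by cgroup version. A minimal sketch of that idea, with a hypothetical helper name rather than the actual test code:

```go
package main

import "fmt"

// cgroupFile maps a cgroup v1 control file name to its v2 equivalent,
// so a check can read the same limit on either hierarchy.
// The helper and mapping here are illustrative, not Kubernetes' real code.
func cgroupFile(v2 bool, v1Name string) string {
	if !v2 {
		return v1Name
	}
	v1ToV2 := map[string]string{
		"memory.limit_in_bytes": "memory.max", // memory limit
		"cpu.cfs_quota_us":      "cpu.max",    // CPU quota (v2 combines quota and period)
		"cpu.shares":            "cpu.weight", // relative CPU weight (different scale)
	}
	if name, ok := v1ToV2[v1Name]; ok {
		return name
	}
	return v1Name
}

func main() {
	fmt.Println(cgroupFile(false, "memory.limit_in_bytes"))
	fmt.Println(cgroupFile(true, "memory.limit_in_bytes"))
}
```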
C
It can now verify the initial config is fine for v2 as well as v1, but we need the implementation for cgroup v2 support in the core code, which I'm very nervous to make at this point, although it doesn't look so bad. So the risk of doing that change was one aspect of it. The other part is that, with the scheduling tests coming in so late, they found some issues.
C
One of the issues was that when you resize the pod down, we expected the scheduler to go to its queue of pending pods and then re-evaluate them to see if, you know, there's a fit, because one of the pods has shrunk.
C
So that was not happening until the timeout value, and then the scheduler folks gave us a change. They feel it's low risk. So this is a change to core code, in the scheduling code, not the test. There's a test change as well, which we iterated on over the past couple of days and then fixed a bunch of issues, and I think it's working well. But overall, my sentiment is that, you know, we're playing cowboy so close to the release, especially for a PR this big. So... but the good...
C
A
I think so. I had reviewed it with Mrunal again this morning, and I think you might have seen my comment there. But if we just made the CRI change in a separate PR, I'd be happy to merge that, like, immediately, because it does make it... I saw Peter from CRI-O commented that it would make it easier for them to iterate, and I assume Mike and Rowan would find it easier on the containerd side.
D
A
A
Having a CRI-only change, I'd be more than happy to push that through, and I think it would be low risk and help ease, I guess, us, until we felt more confidence, yeah.
A
I don't think it matters; I don't think you need a new KEP. Yeah, I think we have a KEP that describes the scope of the feature. I don't even think the CRI change needs to be tied to a feature gate, but I think as a community we understood we wanted this operation to be atomic, and the operation needed to be there. So I don't see an issue. I don't know if anyone...
B
C
Can take a look into it? How about merging this really early in the next release?
C
B
D
C
A
A
Get a CRI-only change in first, and then that would ease things for our implementers; we can get something into CI earlier. So I'll...
C
Truly up to you. I'll take a look at it. So this is going to be a new PR for the CRI only, which is extracting whatever is currently there. I don't think I have to invent anything new or do any thinking; it's just blind drag and drop as far as I can tell, and...
G
A
I think this is a lesson, that we need to do the CRI usage in separate PRs, yeah.
C
This is why I had a separate KEP, and then at some point I thought, you know, that it would just be like a third wheel, just sitting there without anyone using it, so I did it all in one go. And now we realize that the separate one was probably the right thing to have done. Okay, let me take a look; if it's a blind drag and drop, I don't have to do any mental gymnastics for it. I'll see if we need to get an exception for that. Can we get a separate exception?
D
A
C
So what I'll do is... if I can't get it done by afternoon... I think the code freeze is at, what, five o'clock today our time, or nine? I don't know; I thought it was tomorrow.
C
C
This release... so okay, if it's 8 P.M., then yeah, I think that leaves enough time. I do want to be able to do a full build and see that nothing is broken.
C
I broke the test this morning playing cowboy, so I don't want to waste time on that. Okay, so: a new PR that contains only the CRI changes. Hopefully it's just a handful of files, along with the generated changes, that go in, and that's going to make it a lot, lot easier for me to, you know, do this invasive cgroup checking. We can still do it as an option, but I mean, it's going to give us data, so it doesn't hurt, but it won't work for non-Linux.
A
C
Easier for us to coordinate this, yeah. And can we please commit to getting the rest of it, the big elephant, in early, as soon as the code window opens, as soon as they throw it open? I'll be a lot more comfortable if, you know, in 1.26 it goes in on day one, and then we, you know, slowly find issues throughout the release, rather than it coming in like...
A
The kubelet code I thought looked generally good; the problem was I didn't have a way of affirmatively knowing it was good, and so that was what I was trying to think about: how to responsibly stage it. Like, I don't expect to be pushing you on, like, hey, random questions or random refactors, or that type of feedback. I feel pretty good about the general shape for now. Yeah.
G
A
I just don't have a way of knowing if it tests positively, and so I was even trying to think through, like, if a CRI had implemented the method and then we updated CI, would CI magically appear broken at that moment in time, versus, like, the other direction. Yeah.
C
I think I can't be sure anymore. As of December it was good, because we had dockershim, and I had implemented dockershim support and it was doing the full end-to-end. But since dockershim got deprecated and removed, I lost a wheel there.
A
Okay, well, it sounds like we have a plan for our next step here, and thanks for, yeah, your responsiveness.
C
Sure, I'll get working on it around lunchtime.
A
Cool. Anything else?
H
C
B
A
G
I linked to the release page, where they've got the times listed too. Okay.
A
Right then. Next up, I guess, Harshal, you want to talk through the evented PLEG feedback?
I
Oh yeah, so thanks, thanks Eric. So, sorry, while we are able to see the container lifecycle events fine wherever the runtime is supporting the events, the way it's run in CI right now is it uses...
I
A runtime that obviously doesn't have the evented PLEG implemented. Under those circumstances, for a reason unknown to you and unknown to me yet, that test particularly fails, and that's the reason I'm splitting this PR into a CRI PR, which will be followed by the actual implementation. So it's a non-blocking test; right now it only runs alpha features, and so the combination is the alpha feature enabled for evented PLEG and a runtime that doesn't have any evented support. It...
E
I
Kind of leads to this long wait of ContainerCreating messages, and then the test fails. So anyway, I have mentioned the link in the doc. The second one I'm still preparing, which has only the CRI changes, and if possible, I'd like that to get in.
A
Okay, that makes sense. So this is basically the same issue that Vinay was running into as well.
F
A
A
Cool. And...
A
The PR, as you showed it: it was like, with the gRPC call, when it would time out, we were going to have to re-list and then start a watch. I don't see that that would have resulted in any needed update to the container events call, unless we needed something like a token that said "start watching since some period." Is there a thought that we might need something like that?
I
We can use context there, but... or if it's going to return the error, then, based on your feedback, I already added a retry mechanism. So what we do now is we retry getting the events, and if we fail, we fall back to the generic PLEG. So that's, that's what I'm...
A
I mean, I was just trying to think: like, in Kube controllers that talk to the API server, right, they establish... they do a list and then a watch, right, and that meant that you knew all state up to that time, plus the diff from the point you started the watch. I wasn't sure if we would have potentially had a gap between our list and events call if we didn't combine them, so, right.
A
Yeah, and so the only thought there is, like: do we want to update the CRI to be like a "list and then events" type of operation, or do we have some way of communicating the events from a particular period?
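The "list, then events since a point in time" idea being discussed can be sketched like this: the list returns a snapshot plus a cursor, and the event stream resumes from that cursor so nothing falls in the gap between the two calls. All names here are illustrative, not the actual CRI API:

```go
package main

import "fmt"

// store is a toy event source standing in for a runtime.
type store struct {
	events []string // all events, in order
}

// list returns the current snapshot and a cursor marking "now".
func (s *store) list() (snapshot []string, cursor int) {
	return append([]string(nil), s.events...), len(s.events)
}

// eventsSince returns everything recorded after the given cursor,
// so a watch resumed from the cursor misses nothing.
func (s *store) eventsSince(cursor int) []string {
	return s.events[cursor:]
}

func main() {
	s := &store{events: []string{"created:a", "started:a"}}
	snap, cur := s.list()
	// an event arriving between list and watch is not lost:
	s.events = append(s.events, "stopped:a")
	fmt.Println(len(snap), s.eventsSince(cur))
}
```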
B
A
B
F
Oh, I thought it was five. Did we move it?
D
C
A
Right, well, then I think that's still okay. I was just thinking about what to watch out for, so...
I
A
You raised your hand on this, Harshal?
D
B
H
I can hear you... oh okay, sorry. I was saying we were talking with Mrunal on Slack. We filed an exception, because we're kind of close to the freeze, and we asked for four extra days. But yeah, Mrunal told me to also join this call, because there were some last-minute concerns that would be better discussed here, to see where we are at.
A
Yeah, so maybe I'll go first, and then I know Hemant was catching up with us from Storage on some stuff, and I don't know if that got onto the PR. But yes...
A
When we do PVs, I was trying to recall what approach we were going to have in phase two to ensure that the kubelet on node 1 and the kubelet on node 2 gave the same mapping for a given pod. So I'd asked that question on there; I think that was Jordan's primary concern. Maybe he chimed in otherwise, but so...
H
Yeah, yeah, so that is not in the KEP, because we haven't decided on the kubelet config field names; it seemed distant in the future. But the idea is that you will configure which range to use for pods with volumes, and that configuration needs to match across different kubelets, the same as when, I don't know, the cloud provider needs to match across different kubelets, and things like that. But...
A
It didn't need to necessarily be deterministic, I guess. What I was unsure of was what was going to ensure that the range that got allocated for a pod that used two PVs was going to be the same range independent of the node it ran on.
H
A
H
A
Common shift, yeah. So, like, the UID space is 32 bits, right? So we divided it into ranges of 16 bits. So the first range, for example, is exclusive for the node, so node processes won't overlap in UIDs.
H
The second range can be configured in the kubelet in the future, but it will be for pods with volumes, like with real volumes, with persistent volumes, and the rest of the ranges are used for pods without volumes or with ConfigMaps, and so we allocate non-overlapping ranges. So basically at startup we just read from the config which range we need for pods with volumes, and we just always answer that.
If the pod has these volumes, that part should be easy, I think.
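The division being described, a 32-bit UID space cut into 65536-ID ranges, with the first range reserved for the node itself, comes down to simple arithmetic. A minimal sketch, with illustrative names (the actual kubelet code and range assignments may differ):

```go
package main

import "fmt"

// The 32-bit UID space is divided into ranges of 2^16 IDs each.
// Per the discussion: range 0 is exclusive to the node's own processes,
// a configurable range is for pods with persistent volumes, and the
// remaining ranges serve pods without volumes.
const rangeSize = 1 << 16 // 65536 IDs per range

// uidRange returns the first UID and the length of range i.
func uidRange(i uint32) (start, length uint32) {
	return i * rangeSize, rangeSize
}

func main() {
	for _, i := range []uint32{0, 1, 2} {
		s, l := uidRange(i)
		fmt.Printf("range %d: UIDs %d..%d\n", i, s, s+l-1)
	}
}
```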
A
A
A
H
Yeah, yeah. When we implement phase two we can do it however we want. Right now, just being cautious, we are reserving one block just for phase two in the future, so there are no pods using those ranges whenever we upgrade. But yeah, we can do it however we want when we add the configuration options for that phase.
B
So maybe... I think it will be easier for Jordan to review if we can, like, capture that. Maybe, Rodrigo, do you think we can open a KEP update, being more explicit about the options we have for phase two? Oh...
A
A
That is pretty overcomeable, yeah. The other part was maybe the nuances that exist with particular volume types or particular CSI drivers, which could be harder, so...
D
H
What he posted might be an issue for phase two, but I think there should be a way out: either a way to fix it, or a way to disable this feature, in combination with setting it to false, or even, in the worst, worst case...
H
I think this should not be an issue with ID-mapped mounts, that is, the kernel support for doing ID shifting on mounts. But he also mentioned that someone else explored ID-mapped mounts a while back and found some limitations, but wasn't specific about the limitations, or whether they are real. I have also explored those mounts, and at first sight they look just like what we need, but I don't have background on CSI.
H
So it's just a bind mount, basically. It's a bind mount using the new mount API, where you shift the UIDs, and the UID written on disk is the same as, like, if you run as root in the container: whatever it is mapped to, whatever user in the host, the files on disk originally are with UID 0. So it should get rid of all the issues that I can think of, but yeah, there might be some issues to overcome there.
A
So I think, for my part, Rodrigo, it's like: the implementation looked generally okay, and I saw the feedback on the, like, cleanup of the mapping files, which was great, thanks. I think, per your comment earlier in here, maybe Jordan's fear is on: can the feature ever actually achieve beta, and if we don't have a way of achieving beta, then we shouldn't do alpha. I'm not sure if we can get him over that hump, but...
A
A
But they seem legitimate, so maybe some more time to parse through them with him would be good.
H
Yeah, I think there should be two possible paths forward. One is maybe more involved: in the very worst case we need to modify the CSI interface to add some more communication about the mappings the drivers need to implement, and if they don't implement it, then we reject creating the pod with the CSI volume, or something like that. And the other path, the even worse worst case, is, I think, even if there is no reasonable way to support pods with volumes and user namespaces...
B
A
It's like: we support stateless pods with user namespace remapping, and that's awesome. The KEP as written, I think, is confusing folks now, because they don't feel as confident that we can get to beta status.
A
So we could get two feature gates: we could do a "user namespace remapping for stateless pods" feature and a "user namespaces for stateful pods" feature. You know, we could do those two, but I didn't know enough to know if that was truly what was needed. I was just... like, Hemant was giving us feedback right before the meeting, based on his own review, that I just want to make sure we all absorb, because I do agree.
H
Yeah, but what I am not sure I follow is: if we know that we want support for stateless pods, even if it's impossible (which seems unlikely) to support stateful pods in a reasonable way, do we need to define whether we can or cannot support them now, in order to merge the stateless part that I...
A
User namespace remapping for stateless pods: no one could question that. There's not a clear path to beta for that feature in your present PR, and it would be much easier to get shared understanding across, you know, SIG Storage, SIG Node here, and API review. I think it's the stateful thing that is causing questions. I think stateless user namespace remapping for pods is very useful; others can speak up and maybe share the same.
B
H
D
H
Even if it's in the same KEP, and we can achieve beta with volumes, I think it makes sense to have them as separate feature gates, because basically stateless pods are very easy to support, and stateful pods are where things get more complicated. So, to gather more feedback, it would be nice if users can enable one thing without the other, and we can collect more feedback.
B
D
H
H
Should I open a PR to modify the KEP, or what is the simplest way forward? Because, from my point of view, it doesn't really matter if beta is with or without volumes, because the alpha support will not change because of that. Like, what we've written so far, I don't see it changing.
H
A
My recommendation right now would be to update the exception based on today's discussion, which is: we're going to reduce the feature scope to just stateless pods, and there's a clear understanding of a path to GA for just that capability, and let's iterate on that. I think that would be fine, and we will defer a feature that's specific to stateful pods to, you know, the next turn of the crank. The stateless stuff... am I correct, Mrunal?
A
A
We find value, at least with my Red Hat hat on, right now, in supporting certain classes of applications with user namespace remapping that are stateless, and, like... so there was capability added to the runtime for...
D
A
A
Totally useful, and so I don't think it makes sense that we don't make that value available to the rest of the broader Kubernetes community, because, like, at Red Hat we were using it for container builds and a number of other useful scenarios. So functions... a number of things that would be useful with that. So, yeah.
A
That is all I was going to say, so if you're...
D
D
H
H
B
H
A
So thanks for your patience on this, because I know we've talked about this for a long, long time, and I looked at it yesterday and was like: oh, I had opened the issue on this, and it had a three-digit number, 127. It was from 2016, so...
H
And one more question, just trying to think about potential issues that other people might think of if we reduce the scope to stateless pods: would it maybe be a concern for someone that we don't want to tackle stateful pods, that there is this difficult problem? Or maybe they just reject moving forward with that because it seems like a partial feature, or something like that?
B
D
A
Then, Mrunal, you can read through it; I can't see the Google Doc.
B
Tab... okay, no worries. So the next one is from Jing. Are you on the call? Do you want to talk to your item? Let me open it: resource quota should enforce limits and requests on ephemeral storage.
A
A
A
On the instrumentation side, yeah.
B
Yeah, I guess that's okay, yeah. We can move on to the next one: David, on the cgroups v2 GA update.
E
Yeah, sure, yeah. I chatted a little bit about this during the last SIG Node meeting; I kind of wanted to just give another little update here on the status. So yeah, we're working through the cgroup v2 GA, aiming for this release. What we did is we updated the CI a couple of weeks ago to cgroup v2 images, and, sorry, I know it caused some issues for your E2E test, so sorry about that, but...
E
Sure, yes. We've updated all of the tests in CI to cgroup v2 images, across COS and Ubuntu, so everything is running there; including the blocking jobs, all are on cgroup v2 by default now, with the systemd cgroup driver enabled. So we have pretty good signal there, and haven't seen any test issues. And we've also added cgroup v1-based tests there in CI as well, because there was a concern that, you know...
E
Once we switch over to cgroup v2-based images, we still want to support cgroup v1 and want to continue to have test coverage for, you know, folks who are running older OS images. So we have that, and that's going well in terms of test coverage. And then also I've done a little bit of work on customer outreach, like reaching out to folks about their cgroup v2 deployments and such, and got some feedback there; a couple of folks gave feedback.
E
There was one from Tencent, actually one of the folks who worked on the memory QoS feature; their whole company is running on cgroup v2, it's been running for a while, and we've got some pretty positive feedback from them. Additionally, in GKE we recently released a feature where customers can opt into cgroup v2 to kind of test it out and play around with it, and I reached out to some of our customers there and got some good feedback from them as well.
E
So we got feedback from customers that it's working as expected. And for GA we don't have any major code changes or anything like that planned. The main thing we have planned is some doc updates around, you know, how to ensure your OS distro is on cgroup v2, and the recommended setup, like specifically enabling the systemd cgroup driver across both the kubelet and your container runtime, and a blog post and such.
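The recommended setup mentioned here, the systemd cgroup driver enabled in both the kubelet and the container runtime, looks roughly like the following fragments (kubelet `KubeletConfiguration` and containerd's CRI plugin config, shown as a sketch of the relevant fields only):

```yaml
# kubelet configuration file
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

```toml
# containerd config.toml: matching driver for the runc runtime
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

The key point from the discussion is that the two settings must agree; mismatched cgroup drivers between the kubelet and the runtime is a common misconfiguration.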
E
So people are aware of this. So yeah, just wanted to give an update there. Are there any comments, any questions?
C
Yeah, I think you partially answered my question; that was going to be one of my questions, whether you're still running the CI jobs on v1 as well, for ensuring backward-compat support. I breathed a sigh of relief when I heard that, because that means, I'm presuming, that the alpha jobs, all the alpha jobs, also run on v1 as well as v2.
E
Yes, yes, so I think we have parity with almost all the tests. The v1 ones are on the containerd tab; the node containerd ones are running on containerd. So yeah, I think we should have a cgroup v1 job for everything; if not, specifically for alpha, we can add it. But we have it for Features and cluster E2E, node E2E, and so forth. Okay.
D
A
I've received similar feedback from our users, and I appreciate Google sharing their experiences. Everyone thinks it's going to work until it doesn't, with something specific to their workload. And so the one thing I was curious if you'd had feedback on was anything from the monitoring ecosystem.
E
E
E
I think doc updates, blog posts, etc. will help, because people do need to update to the later versions of these agents and so forth, right: monitoring agents, security agents that may have underlying cgroup dependencies. But we have reached out to them too, and most of them have been updated or are in progress.
A
C
Because the first thing on my to-do list for the in-place update... one ask is: is it possible to have, like... is there a system API, a standard way, to determine if you're on v1 or v2? Currently I'm checking under the cgroup folder, looking for the controllers file; I don't know if that's the best way to do it. Yeah.
E
Yeah, we have other E2E tests that actually look into the cgroup file system when they need to check, and basically... I can send you an example, but yeah, there's an import you can make, and it'll check everything for you. If...
A
C
All right, so I was in the process of changing these files. It looks like a handful of files, no feature gate involved. Am I reading this right, or am I missing something with the CRI? I'll send you the PR on Slack; I think we can carry the discussion there.
C
Okay. And in this case, if CI is enabled with cgroup v1 already... then let's hope that once 1.26 opens up, we can get much of the code in. I do plan to do some carnival barking, if you will, about this feature at KubeCon, trying to get contributors to sign on and help drive it to GA, so it'd be nice to have the code in there by that time.
D
A
Very cool. All right, well, I think that was the last topic for today. All right...
D
A
Yeah, so Sergey, even though he's on parental leave, he reached out on the 15th. So...