From YouTube: Kubernetes SIG Node 20220726
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20220726-170429_Recording_2354x1440
D: Yeah, sure. So, just a quick reminder on the coming code freeze on the 2nd of August. I wrote up a doc summarizing the KEP status as of yesterday, so, yeah, I don't know if you can share your screen and open the doc, or you can give me the host and I'll give it a second try. Oh wait, I think you just gave it to me. Yeah, okay, cool, because I used to have some trouble sharing the screen. Hopefully it works this time.
D: Yeah, so here is a table of the current status of the KEPs. Basically I just have this "staus", sorry for the typo, then a colon, and I marked all of the KEPs with an open PR in yellow, and most of the KEPs currently still have open PRs. The In-place Update of Pod Resources KEP has an open PR and I think it's close to getting merged, and the user namespaces KEP also has an open PR. cgroup v2 only needs a doc update, but the PR is open. The forensic container checkpointing PR is merged, so it looks like we are all good on that one. cAdvisor-less container and pod stats: PR open. Dynamic resource allocation: also PR open. Yeah, and pod conditions for sandbox creation also has a PR open. Kubelet OpenTelemetry tracing: PR open. Configurable grace period has multiple PRs, and all of them are open. I believe the kubelet credential provider, this one is up for GA graduation and only needs the tests; the PR is coming very soon. Ephemeral containers: PR open. And seccomp by default: the PR is merged, so we're all good on that one. New CPU manager policy: PR open. Split the log stream: PR open. And I think Paco added the last one, quotas for ephemeral storage; yeah, the PR is also open for that one. I think that's... yeah, that's all of the KEPs. We have a bunch of PRs open; hopefully we can get as many as we can merged. Yeah.
H: Yeah, I've spent all of today going through the SIG Node PR triage board, between triage, waiting-on-author, and needs-review; it's over like 200 PRs right now.
G: Yeah, I think I should be able to make another pass. User namespaces, I can cover the v2. I can also review the cAdvisor one, and I think I'll be able to review most of the ones that are assigned to me, but I think we'll also need approvals done. So I may ping you for approval, because Derek is out this week. So, yeah.
C: Also, I will finish whatever is assigned to me, including the ephemeral containers one, and I think the seccomp one is already merged. And what about dynamic resource allocation? I think that needed some more work, and so...
F: I don't know how far Derek has got. He mentioned that he wouldn't be able to get to it till Thursday. As of last week, that was the status; I just haven't seen any further updates.
F: I know Mike and I went back and forth as far as the guidance to runtimes is concerned, and we've come up with something that's specific enough but doesn't micromanage the runtime implementation. More details and examples of why we are giving that guidance will be in the documentation, because I don't want to clutter the API code. But I think Derek has already taken a pass at it. I just don't know whether he'll have the bandwidth to, you know... he mentioned that he'd take another pass at it if it was ready.
F: He'll be coming back next Monday. Yeah, I'll reach out to him on Slack next Monday and make sure. That's the week... I was hoping they'd push out the enhancement freeze by one week, right. Right, when they push this out by a week... schedule chicken, never mind.
F: Worst case, we'll have to ask for an exception, and we'll have to justify it, right. The code hasn't changed a lot, and this one is getting so close. I wanted to split it up into multiple PRs, but with it getting so close, it's easiest to chase it down as is. I think we need Tim; we need one more. All of them are green on the current code status. The way to do it is to merge it, then bring in fixes for all the other issues that have been raised and track them later.
F: I know Wong Chen is working on getting the tests that the scheduling team asked for, and she's pretty close to ready. We'll do some debugging either tonight or tomorrow night. I'll look at some of the issues there with her, but we should be pretty close on that. That's not necessary for the PR to merge, but we'd like to have that scheduling test in there.
G: Also out, right. So yeah, I think I made a couple of passes, and I'll make one more pass today, and I think approval is where we'll need folks from different areas to finally approve it. So, yeah.
G: It's in the meeting notes; there's a sub-item that says we have an LGTM from Rui, but we need approvals from a lot of paths, and if you follow that link, it's a link to the GitHub comment from the Kubernetes bot saying so.
G: I know, yeah. Giuseppe told me that you told him to ask him, and we asked him, but Tim is out until, I think, today or tomorrow.
G
Then
we
can
I,
think
ping
specific
folks
like
paying
specific
owners,
I
think
on
volumes,
and
we
can
try
to
get
as
much
as
we
can.
C: Okay. Also, since Tim is not available for the user namespaces one, let's get Jordan. How about that? Yeah.
C: Yeah, let me show them, and then at the end... since Derek is not here, I can be your approver. Yes.
I: Yeah, yeah, I was seeing them, and thank you very much, because he also put suggestions, so it's very easy to fix. I'll fix it right away. Also, unrelated to getting things merged, and probably adjacent to that: maybe user namespaces is a good topic for a blog post on the Kubernetes blog. Definitely.
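As possible blog-post material, the user namespaces alpha mentioned above is driven by a single pod-spec field; a minimal sketch, assuming the alpha feature gate (UserNamespacesStatelessPodsSupport at the time) is enabled on the cluster:

```yaml
# Pod opting out of the host user namespace (alpha feature).
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false   # run the pod in a new user namespace
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
```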
H: They started asking a couple of weeks ago, okay, but we never gave them a response. So, okay.
I: Okay, so the procedure is that someone from SIG Node, in this case probably you, Daniel, will let them know? Or should I write on Slack to someone, or...?
K: Hey, sorry, this is just a quick update on the new pod condition PRs. So basically, I think... thanks, Ronald, for taking a look; it looks like you started.
K: I think Danielle had a suggestion to add a node e2e test, so I'll be looking at that today and getting that added. I was also curious, since pod conditions are pretty visible: do the blog entries happen when something goes beta, or is that something... wait, is this relevant enough to blog about, or maybe when it reaches beta? We can think about that.
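For context, the sandbox-creation condition discussed here surfaces in pod status alongside the existing conditions; roughly what the alpha looked like (the condition was named PodHasNetwork at the time and later renamed, so treat the exact shape as illustrative):

```yaml
# Excerpt of `kubectl get pod ... -o yaml` status with the alpha condition.
status:
  conditions:
  - type: PodHasNetwork   # set once the pod sandbox has networking (alpha)
    status: "True"
  - type: Initialized
    status: "True"
  - type: Ready
    status: "True"
```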
C: The cgroup v2 one is just because it is a kernel feature we support; it's not that visible to too many people. To a real user, it doesn't matter what's running on the node. That's why we always want to call it out loudly, to make people notice those changes, and if there is a certain performance difference, or any difference, they can maybe link that to the underlying infrastructure.
C
A
user
naming
space
because
it
is
such
a
big
feature
is
in
will
impact
a
lot
of
the
production
from
day
one.
So
that's
why
we
also
want,
and
but
a
lot
of
people
might
not
really
using
that
feature.
So
this
is
what
we
want
to
call
out
that
feature,
so
so
there's
just
different
different
stage
at
a
different
time.
We
want
to
be.
We
want
to
call
out
those
things.
Yeah,
foreign.
F: I think we're pretty well covered on that with the summary I gave earlier. Let me take a look... yeah, I don't really have anything to add to it besides the outstanding e2e test. I think there are two main pieces of feedback in there: one is adding scheduling, and the other one is Daniel's feedback to move it from its current location to e2e node, and hopefully we'll get to that next.
F: I believe so. The code freeze is the 3rd of August and test freeze is the 10th of August, so that gives us a little bit more breathing room when it comes to the test code. It's mostly there. Daniel, please let me know if that's an alpha blocker.
H: Yeah, pulling it from the infrastructure is hellish to work with when testing across different node configurations. If it's in e2e node, we can very easily run it everywhere.
H: Okay, and it also gets run as part of, like, containerd's PR process, and I think the same might be true for CRI-O, so putting it in the right place just makes it easier for everybody.
H: If you can't run it as part of that, it is already running... no, it won't be running as part of the containerd or CRI-O test suites otherwise.
A
So
so
Daniel,
what's
your
suggestion,
because
we
also
have
a
test
for
schedulers
and
then
I
see
the
the
node
test.
Part
can
I'll
only
work
with
cubelete
and
container.
F
Okay,
so
currently
it
is
where
it
is
located.
It
runs
as
part
of
the
all
Alpha
features.
Is
that
not
good
enough
for
Alpha.
H
It
is
actually
fairly
well
maintained,
looked
at
by
people
who
work
on
CRI
implementations
and
also
like
has
me
basically
looking
at
them
all
of
the
time
and
so
actually
gets
like
a
decent
amount
of
Maintenance
and
is
good
signal
for
both
kubernetes
and
Cris.
Okay,
and
also
like
useful
for
like
finding
issues
and
distributions
and
like
various
configurations
on
like
gke
and
whatever
else
as
part
of
just
the
regular
test.
Suite.
It
doesn't
because
those
things
aren't
important
to
the
rest
of
like
kubernetes.
Generally
speaking,.
F
Okay:
okay,
let
me
try
and
see
if
I
can
prioritize
that
to
this
week
and
see
I
might
pick
you
on
slack
if
I
run
into
issues
moving
it
to
the
different
location.
Okay,
thanks:
it.
C: Okay, it looks like they couldn't join, so please join the meeting and propose this topic. Actually, on this one, I'm really interested in this topic, because I suggested this, probably, a while back, and we wished to drive it at the time, but we didn't get much attention from the open-source community. So then we dropped that idea and moved forward. If people feel there's still value, I want to talk about how to address that problem.
C
We
can
come
back
and
we
can
discuss,
but
the
content
is
both
cap
and
also
the
proposal
has
been
closed
because
due
to
the
no
activities
for
last
I
think
four
years
almost
it
was
originally
when
we
proposed
this
one
that
time
the
one
was
look
at
the
gke
I
want
to
do
the
more
fully
managed
note.
C
So
that's
why
I
want
to
give
more
signal,
so
so
I
think
that
there's
the
problem,
but
next
up
for
all
the
system,
demons
and
but
the
whole
world
with
the
efficient
aggregate
need
to
show
this
node
is
ready
or
not
ready,
especially
right
now
we
have
a
lot
of
proposal
like,
for
example,
deadline,
resource
allocation
and
other
more
extensible.
We
make
the
node
more
extensible
we
may
need
it's
hard
for
us
to
figure
out
is
node
is
ready
or
not
ready,
not
like
all.
O: Yeah, sorry about that, I thought this was tomorrow, so clearly I'm super organized. Hi everyone. I added this, and, yeah, like Mark mentioned, hopefully Anish can join; he's got more context. But I think he was just curious about whether the SIG would be willing to revisit that KEP and potentially put it forward for one of the upcoming releases. I'll see if I can get Anish on here as well, and he can add some more detail.
N: It looks like there's a recently linked issue, too, for some of the CSI secret store stuff that maybe isn't getting cleaned up properly. He should be on in a minute too. Yes, he's joining.
B: Yeah, so I'm Anish. I'm actually a maintainer for this project, the Secrets Store CSI Driver, which is a sub-project of SIG Auth. Basically, it helps mount secrets that are stored in an external secret store, like HashiCorp Vault, Azure Key Vault, or Google Secret Manager; it gets those and then mounts them into the pod in a tmpfs, so they're accessible by the pods, right. And then, as part of this, one common thing that we've observed is the CSI driver...
B
Pods
need
to
be
running
on
the
Node
before
the
workload
part
gets
deployed
so
that
if
it
tries
to
request
a
volume,
it
needs
to
already
be
up
and
running
right
and
then,
on
the
other
hand,
when
a
node
is
being
scaled
down.
If
the
CSI
driver
pod
gets
deleted
before
the
unmount
operations
are
done
for
the
volume,
then
the
pods
basically
just
get
stuck
in
that
state
where
they
never
get
deleted,
and
then
that
causes
the
node
to
also
be
in
the
deleting
state
are
not
completely
deleted.
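For readers unfamiliar with the driver, the mount described above is an inline CSI volume referencing a SecretProviderClass; a minimal sketch (the class name here is hypothetical):

```yaml
# Workload pod mounting secrets from an external store via the
# Secrets Store CSI Driver; the driver pod must already run on the node.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: secrets
      mountPath: /mnt/secrets
      readOnly: true
  volumes:
  - name: secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: my-provider-class   # hypothetical name
```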
B: So there are these two scenarios, one during a node scale-up event and another during a node scale-down event. We were looking at this from the perspective of the Secrets Store CSI Driver, and then, when we started looking more and more... we have started moving towards using out-of-tree volume plugins, so everything for volumes now is done using CSI drivers. There's Azure Disk, Azure File, Google's disk and file drivers, and all of those. This seems to be a common pattern that can cause issues.
B
Even
though
there
are
retrial
Logics
in
cubelets
to
mount
volumes.
And
then
there
is
a
way
to
reach
a
state
where
the
desired
state
is
equal
to
active
state.
It
can
take
like
a
couple
of
retries
and
then
it
makes
it
really
hard
for
users
to
see
like
why
it's
failing
and
all
of
that
so
I
found
this
Gap.
That
Xander
also
shared
on
the
dock.
Where
there
was
this
idea
about,
maybe
marking
the
node
ready.
B: ...only when certain critical add-on components for the node have already been deployed and are running, and I think CSI is one of them. CNI, because it runs as a binary, is a little different, but what if CNI also adopts the model where they say, hey, we want to run as a pod, and then we want to mark the node ready only after the CNI is initialized.
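Outside the KEP, some add-ons approximate this today with a startup taint: the node registers itself tainted, the agent's DaemonSet tolerates the taint, and the agent removes it from the Node object once its own readiness check passes. A sketch under that assumption (the taint key is illustrative):

```yaml
# KubeletConfiguration fragment: register the node with a taint that
# keeps ordinary pods off until a critical agent clears it.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
registerWithTaints:
- key: example.com/agent-not-ready   # illustrative key
  effect: NoSchedule
```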
C: Before you... and Zach is joining. I just shared the background, because in the past, since I wrote the initial node readiness proposal, at that time we only had the Kubernetes and Docker networking, right, and the then-current version, so it's definitely not enough.
C
Csi
driver
that's
good
example.
In
the
past
we
also
have
the
device
plugin
and
it's
ready
right.
So
so
the
TPU
driver
is
really
properly.
The
driver
actually
is
properly
installed
and
is
ready,
served
there.
Also,
it's
not
that
that
they
also
have.
We
have
like
the
monitoring
pipelines,
really
really
critical.
C
It's
in
Crash
Loop,
so
we
do
so
part
of
those
problems
through
the
MPD
know
the
problem
detector
and
because
that
is
also
is
extensible
plug-in
so,
but
only
because
that
one
only
didn't
really
get
a
secret
or
from
the
demon
right.
So
so
the
proposal
we
just
wanted:
okay,
how
to
define
a
common
way
mechanism
need
to
those
systems
demons
to
give
us
the
signal.
You
are
writing.
You
are
ready
to
serve
then
so.
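For comparison, NPD's existing mechanism is to publish extra node conditions alongside the kubelet-managed ones; something like this shows up in node status (the condition type is from NPD's default configuration):

```yaml
# Excerpt of `kubectl get node ... -o yaml`: an NPD-managed condition
# next to the kubelet-managed Ready condition.
status:
  conditions:
  - type: KernelDeadlock        # set by node-problem-detector
    status: "False"
    reason: KernelHasNoDeadlock
  - type: Ready                 # set by kubelet
    status: "True"
```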
C
So
that's
why
I
the
which
one
I
discuss-
and
we
just
think
about
the
there's-
the
many
ways
to
do
this,
so
we
could
have
a
matrix,
but
that's
not
enough
suffixes,
you
want
to
say:
oh
I'm,
really
not
just
alive,
I'm,
really
writing
State
and
we
could
have
like
npt,
but
MPD
only
can
detect
of
the
finger
really
finger,
but
there
are
certain
things:
still
it's
like
the
product
ready
State.
We
cannot
detect.
So
this
is
why
we
have
that
proposal.
C
But
the
reason
not
many
people
really
not
get
the
signaled
I
mean
I,
have
this
idea,
but
to
the
province
which
will
have
this
proposal
and
we
you
can
see
that
from
the
original
cap
they
have
the
several
mechanism.
At
the
end,
we
consolidate
as
the
the
plan
B.
You
can
see
that
I
think
that's,
but
it
didn't
move
forward.
Nobody
really
got
much
of
attention,
much
of
the
things
and
also
probably
is
way
more
complicated.
C
Like
the
you
don't
want,
like
the
one
demon
side,
one
demon
have
the
issue
and
you
mark
the
entire
of
the
node,
not
the
work
right.
So,
especially
when
I've
been
here.
A
lot
of
Provider
move
forward.
We
come
to
separate
rule
out
like
the
those
system
demons
owned
by
different
team
and
the
separate
lot.
You
don't
want
that
to
rule
out
until
the
entire
cluster.
C
You
could
have
some
problem
for
the
cannot
support
the
new
certain
work,
node
or
support
the
new
work.
Node
new
scheduling,
but
you
just
don't
want
to
hear
that
make
the
other
node.
So
that's
why
we
never
really
reach
consensus
on
network
and
but
instead
in
the
production
like
even
like
what
I
talk
about.
This
is
for
the
gke
Fleet
Management
node
I
want
to
have
certain
features,
but
the
filament
won't
load
already
draw
out
for
a
long
time.
We
still
didn't
really
react
on
this
feature.
So
that's
why?
C
But
I
totally
understand
where
you
came
from
CSI
driver,
so
I'm
totally
open.
We
open
this
one,
but
just
want
to
share
with
you
back
what
what
do
we've
been
discussed
and
then
looking,
and
we
call
out
at
least
I,
think
wish
and
I
called
at
the
signal
the
couple
times
and
not
much
of
attention
at
one.
So
basically,
we
just
let
go
because
it
looks
like
it's.
Not
the
production
is
done
to
really
have
that
really
needed
this
way.
So
yeah.
K: Yeah, I did want to add one point about something Anish just mentioned, which is that for regular CSI plugins we had run into a similar problem, and what we did find is, like, yeah, the node overall is ready and it can take on stateless pods. But let's say the pod needs a specific PV from a storage class whose CSI plugin is not up; in that context, the scheduler shouldn't schedule those pods onto that specific node. So, yeah, it seemed a little more subtle than the overall node being not ready, which is why NPD couldn't quite provide the semantics we were looking for.
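Part of this case is handled today by delayed volume binding, which makes the scheduler take volume topology into account when the PV is provisioned, though it does not cover a driver pod that is simply not up yet; a minimal sketch (the provisioner name is hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: example.csi.vendor.com      # hypothetical CSI driver
volumeBindingMode: WaitForFirstConsumer  # bind only once a pod is scheduled
```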
B: But that is also something that we have been seeing come up more and more, because if it just fails to unmount, that's fine, but it basically just blocks the node from being deleted completely, because all those volumes are just stuck in that state. And then, if you have a cluster autoscaler, this can just be really bad, because users have to manually go in and, like, force-delete pods and do all of that.
L: Thanks. So we have a project called Telemetry Aware Scheduling, where we go through and mark nodes with various labels for the scheduler to schedule according to various properties on the node. So I'm wondering if we could do something like that via scheduling, if that would help this sort of use case. Because then you can have something that is pulling in information and exposing it as telemetry, and a scheduler that says "if...". And this addresses your case where sometimes CSI is right on these nodes but not those nodes, right: go through and look at that, and then choose scheduling accordingly, according to internal parameters.
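The label-based approach can be sketched with plain node affinity: an external agent labels nodes according to telemetry, and workloads select on the label (the label key and value here are hypothetical):

```yaml
# Pod that only schedules onto nodes an agent has labeled healthy.
apiVersion: v1
kind: Pod
metadata:
  name: telemetry-aware
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: example.com/csi-healthy   # hypothetical label set by an agent
            operator: In
            values: ["true"]
  containers:
  - name: app
    image: busybox
```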
L: So, if you want, I can share that project. We're currently trying to move it over to a plug-in model; it's currently built on extenders, and, as I understand it, scheduling extenders are different from plugins, so we're trying to move it over to a plug-in model and upstream it. So if you want to go and check that project out, it's open source, but it's not upstreamed yet. I can work with you and show you what we have.
C: So then, Anish, I hope it's okay to reopen that one, and we can continue discussing. Can you write down your use cases there at the end of the original KEP, and we can continue discussing? We just need people driving those things; originally I really thought this was useful, that's why. But it looks like...
C
The
open,
a
community,
don't
think
that's
useful,
so
they'll
always
have
some
other
way
walk
around
so
at
least
so
far
all
the
products
you
need
that
we
can
say
that
is
walk
around
so
fast.
So
that's
why
nobody
really
take
on
that
project.
So
we
just
move
on
yeah,
but
if
we
can
like
what
I
say
earlier,
also
because
it's
really
complex
problem
and
to
build
this
abstract,
like
I
said
so.
This
is
why
it's
also
hard
and
the
people
more
is
like
the
when
here
is
the
problem.
C
I
solve
that
particular
use
cases
for
my
production
and
and
so
far
those
varieties
is
still
is
under
control,
because
original
I'm
thinking
about
this
is
will
be
too
complicated.
It's
out
of
control,
so
we
got
to
have
the
mechanism
to
Upper
director
defined
and
a
mechanism
how
to
demon
register
itself.
Oh
I'm,
so
critical,
you
got
to
take
care,
take
my
opinion,
but
the
next
idea
is
under
control
so
far
after
couple
years
and
nobody
really
pay
attention
until
you
have
this
more
concrete
use
cases
here:
okay,.
B
Yeah
yeah
for
sure
I'll
add
that
over
there
and
then
so,
hopefully
start
coming
to
more
signaled
calls.
So
we
can
click
this
one
up.
M: Just a quick, a quick note to Anish on this: we're doing some related activities in eventing, and, again, I've got a friend that's been doing some work in, you know, networking; his name's Michael Kappa. We might want to link up; he's been doing some traces on this and trying to figure it out, and I think it works a little differently than you might expect. We don't create a pod, we don't have it completely started, without a CNI network. So what does it really mean?
P: Yes, so I think... I discussed with Ben, Benjamin, about the automatic detection, you know: if the system cannot completely find the rootfs information, then kind of silence this feature, right. But I think there were some good concerns, which were also put in the issue comments.
P
So
we
did
some
like
search
and
investigation
I
think
so
far.
He
found
a
number
of
systems
right
to
have
to
disable
this
feature
and
then
most
of
them
using
the
root
list.
So
ruthless
is
the
the
one
seems
most
be
affected
and
we
are
not
sure
they
are
in
production.
Any
system
like
used
in
production
or
not,
but
we
don't
have
that
information,
but
considering
if
we
do
that
we
write
automatic,
detect
and
then
silence.
This
feature
a
thing
about
a
system.
P: So we consider that this poses a risk if done that way, and we think it's safer, actually, to still have a certain, like, you know, config, right, so the user, the customer, needs to explicitly configure it and mark it as disabled. Then they understand what this feature is about, and if it's not working right in their system, they can turn it off.
P
So
we
still
think
yeah
the
post,
it's
quite
risky
to
do
automatic
and
silence
the
feature.
P
If
there's
I
checked
yeah,
it
has
a
lot
of
you
know
config
options,
but
if
it
does
not
like
add
too
much
like
a
one
more
like
a
certain
knob
seems
not
making
this
any
worse
right.
P
Right
right,
right,
okay,
it's
similar
to
other
like
we
do
have,
for
example,
enable
attach
the
touch
controller
by
default
is
true,
but
for
backward
compatibility
with
will
allow
some
system,
like
a
user,
to
disable
it
so
similar
that
way
yeah.
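The precedent mentioned here maps to an existing kubelet knob; roughly:

```yaml
# KubeletConfiguration fragment: defaults to true, but a user can opt
# out of controller-managed attach/detach for backward compatibility.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
enableControllerAttachDetach: false
```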
P: Okay, then I will... I think I can make a PR ready, hopefully by tomorrow, and I will send it for review.
J: Yeah, I just wanted to provide a little update on this. So we've been working to make cgroup v2 GA in this cycle, and kind of the main things we wanted to do... so from the code perspective, everything's there and everything's supported today, but we wanted to work a little bit on the test side of things, and also on getting feedback from people who are already using cgroup v2. So, a quick update for the test side of things: we wanted to ensure that we have...
J: Okay, sorry, so yeah, we just wanted to ensure that we have test parity across cgroup v1 and cgroup v2, because as cgroup v2 becomes the default on the new OS images, we wanted to ensure that, you know, we continue testing on cgroup v1. So, the tl;dr: we now have test jobs that test cgroup v1 and cgroup v2, and the default...
J
Now
I
see
groovy
too,
because
I
upgraded
the
test,
OS
image
to
use
the
latest
version
of
costs,
cost
97
just
to
groovy
to
by
default
and
then
also
upgraded
the
Ubuntu
images
to
use
the
latest
Ubuntu
images
by
default.
So
the
default
kind
of
blocking
pre-summits
are
actually
going
to
be
running
on
secret
V2.
So,
let's
see
groovy2
by
default.
Now,
so
that's
going
to
update
from
test
side
and
then
from
the
kind
of
feedback
side.
J
I've
done
some
a
little
bit
of
work
on
my
side
to
reach
out
to
some
folks
who
are
using
cigar
V2.
Actually
on
gke,
we
just
released
ability
to
opt
into
C
group
E2
for
customers,
so
they
can
try
it
out.
So
there's
been
some
customers
to
try
it
out
as
well
as
we
reached
out
to
actually
some
of
the
OS
kind
of
not
OS
but
SAS,
vendors
that
develop
good
monitoring
agents
and
security
agents
and
those
type
of
things
and
they've
kind
of
done.
C: It gives us a little something to show how serious we are about cgroup v2, and not just that: we did our part of the work before we promoted that feature, right. Because, again, kernel features, people really take them for granted a lot of the time, but they actually affect their production workloads a lot, a lot. So we should highlight what we introduced, what the benefits are, right, that it is feature parity, but at the same time seriously show our signal: what kind of tests we did.
G: Okay, go here and you'll get it, you'll get other exciting features coming. Exactly.
I: If you need more feedback or something, we're also here; we run Flatcar Container Linux, and during the bump we ran into several issues, I think, and we had to enable an option for people to provision nodes with cgroup v1, due to some... well, I'm not on that team, so I'm not very familiar, but if you want to dig deeper into something, I can put you in contact with the people from that team. Yeah.
I: Yeah, yeah; if you want, I can write to you on Slack about it, or if you have enough feedback already, whatever you prefer.
H: Yeah, yeah, we don't have any non-systemd tests today, right. I started trying to figure out how to reasonably do that, and it turns out that every reasonable distribution now uses systemd, for obvious reasons, right. And so it's, like, either Gentoo, potentially, and I'm like, no.
J: Yeah, I think it also points out that a blog would be useful, to point out that, you know, using the systemd cgroup driver is kind of what we test and what we recommend, especially for cgroup v2. So that's something I think would help communication-wise.
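The recommendation above maps to one kubelet config field (the container runtime has a matching setting on its side); a minimal sketch:

```yaml
# KubeletConfiguration fragment: use the systemd cgroup driver,
# which is what upstream tests and recommends, especially on cgroup v2.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```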
C: Yeah, Daniel, maybe next week you can give us a quick update about the bug triage status. And next week, sorry, I'm going on vacation, finally, finally, after 10 months. But I think that's really worth discussing, because for a while we haven't talked about the bug triage status and reviewing those things. Yeah, yeah, that's...