From YouTube: Kubernetes SIG Node 20230815
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230815-170507_Recording_1920x936.mp4
A
Hello, it's the weekly SIG Node meeting; it's August 15th, 2023. We just entered the second half of August, so participation is not that huge, and we're doing mostly a retrospective. Today we have a couple more topics, but I think the retrospective for 1.28 will be the biggest topic for today. So let's get going. Raven, take it over.
B
Yeah, thanks. Okay, so as before, I put together all of the KEPs that we tracked for 1.28 and marked them as merged or not merged, so we have an overview of how many KEPs we merged and how we're doing, and we have some history data. So first we will go through all the KEPs, and then we will do some retro on what we did well and what didn't go too well. So let's start with the KEPs.
B
Here is a list of all of the KEPs; let's just quickly go through them. We have 31 KEPs tracked for 1.28, and out of them we have 16 merged. I have to say that some of the KEPs are marked as tracked on their KEP issue, but I dug into the comments a little bit to see if they really merged or not. So if I misunderstood the status of any KEP, just let me know as we look through them. Yeah, the first one: support user namespaces.
B
This is an existing KEP; it's staying in Alpha in 1.28, and this one is merged, so.
B
Gotcha, okay, thanks; all right. Moving on to the next one: support memory QoS with cgroup v2. It's promoting to Beta, and it is not merged here. Next one, yeah.
D
Sorry — so we won't be able to move ahead with the Beta, because we found an issue in how we were understanding the usage of memory.high. If we use memory.high, it will actually block the processes forever, and hence we don't want to use it. I'll also go ahead and summarize it in the KEP and as a bug comment.
B
And moving on: pod conditions around readiness to start containers after completion of pod sandbox creation. This one is merged, as Alpha, too.
E
Yeah, I mean, actually, yeah, that one is good to go. The one thing I had to do was — the original author, we weren't able to get a hold of them, so I had to kind of create a new ticket to track this and get this more up to date.
B
Gotcha, thanks. Okay, moving on: improve multi-NUMA alignment in Topology Manager. This one is promoting to Beta, and it is merged in 1.28.
B
And the next one — well, this is my KEP. The limit on parallel image pulls is trying to be promoted to Beta, and it is not merged. I didn't get the time to write the tests; there was some change in my schedule, so yeah, that's that. I'll try to do it in the next cycle.
A
I think for this we have a documentation change that accidentally got merged. We need to revert it.
Next,
one
is
the
node
memories.
Web
support
is
promoting
to
beta1
and
it
is
merged.
B
And then sidecar containers: there are some changes, but it's still Alpha, and the changes merged. The next one, Evented PLEG (CRI event-based container/pod status), is trying to be promoted to Beta, and it's not merged in 1.28.
B
And then fine-grained SupplementalGroups control: yeah, attempting Alpha — again, not merged. Next one, retriable and non-retriable pod failures for jobs: there are some changes, but it's still staying in Beta, and it is merged. And then in-place update of pod resources is staying in Alpha for 1.28, and I don't think there's any change merged for it. I might be wrong on the status of this one.
A
Okay, we wanted to support Windows for that, but we failed to merge that.
B
Okay, next one: CDI devices in the device plugin API — Alpha, and it is merged. And then split stdout and stderr log streams: this is not merged. I don't think the author made too many comments or too many updates on the issue; yeah, it looks like there just isn't much recent update. And the next one is ensure secret pulled images — yeah, targeting Alpha, and it is not merged.
B
And then introducing a sleep action for the PreStop hook: targeting Alpha, and not merged. And then discover cgroup driver from CRI: Alpha launch, and merged. And the next one is support for a drop-in kubelet configuration directory: Alpha launch, new feature, and it is merged.
B
Next one is a field, status.hostIPs — add a new field, hostIPs, to pod status: targeting Alpha, and it's merged.
B
The next one is graduate the kubelet pod resources endpoint to GA. I think this is an old Beta feature; it's targeting stable, and it is merged for 1.28. The next one is — well, this is more of a policy change — to support the oldest node with the newest control plane, yeah, and that is merged.
B
Right, next one: extend the pod resources API to report allocatable resources — also a long-standing feature, to be promoted to stable, and it's merged. And the next one is non-graceful node shutdown, promoting to stable, and it's merged. And the next one, configurable grace period for probes, trying to be promoted to stable, and it is not merged.
B
All right, cool, thanks. Yeah, and the next one, DRA — dynamic resource allocation: Alpha, and I think the changes are merged, yeah. And the next one is support for third-party device monitoring plugins; this is also a long-standing feature being promoted to stable, and it is merged. Next one, the kubelet resource metrics endpoint, promoting to stable — and this one is not merged. And the next one is sub...
B
Sub-second, more granular probes: new feature, Alpha, not merged. And the last one is QoS-class resources — also Alpha, a new feature, and it's not merged. So yeah, these are all the 31 KEPs we tracked for 1.28, and we actually have 17 of them merged, okay. And yeah, to compare, here is some of the history data: we had way more KEPs tracked for 1.28 compared to previous cycles.
B
Yeah, on how we want to do this: can we maybe first take a quick look at what we found out in the previous retro, for 1.27, and then we can sync on whether we have achieved those in 1.28.
B
So this is what we found out in the 1.27 retro. Things that went well: we did in-place pod vertical scaling; we had some good progress on graduations — KEP graduations; and yeah, in 1.27 we had more KEPs tracked and merged, and now we have even more tracked. Better overall planning: everything goes on time, we did a good job applying milestones, and yeah, we had more chairs. And yeah, we decided to not take sidecar for 1.27.
B
Things that could have gone better in 1.27: we merged in-place late, due to the review; found some bugs, and there was not too much going on for fixing the in-place bugs, right; and we broke standalone kubelet, yeah; and also we did most of the reviews right before the freeze, yeah. Okay, so that's what we found out in 1.27. So let's talk about 1.28, then.
A
If we're talking about what could have gone better: we're still not very good at merging early. Sidecar is an example where we wanted to merge as early as possible, and we were ready, but we were pretty late.
B
Okay, anything we did well for the last cycle? Trying to think... well.
B
We tracked more KEPs than before. I think this is the most KEPs we have ever tracked for a single cycle. I'm not 100% sure if that's, like, a totally good thing or not, but I just want to mention it in the "things that went well" part.
F
But at the same time, we need to note also the size: most of the KEPs were small and straightforward ones, and most of the big ones we just postponed.
F
But actually, a repetition of what came up in previous ones — like, what we can do better: we can use the time between our releases to do reviews for the ones which did not fit into previous releases, so, like, not to postpone it until the last moment.
H
One area I think we struggled — and maybe this goes back to the "big ones" comment, or maybe what you're alluding to here — is that some of the KEPs, I think, that get presented are not fully thought-through ideas; maybe they have, like, 80% of the idea thought through, and I think we struggle to get a broader set of folks engaged in helping to flesh out that final 20%. I think we're all pretty strapped in that area, and I think that's where we seem to break down a little bit: like, the more the idea is presented and covers all the use cases, the better we are at reviewing and closing it. But helping close those gaps in ideas that are presented...
H
...a pathway to close the 20% that could be missing in something. Or, you know, the chicken-and-egg there: they sometimes, like, get presented with, like, bootstrapping problems, versus everyone having the time to maybe collaborate on big ideas.
H
At a faster cadence, or what — I know personally I struggle with that; I don't know if I'm the only one. And I know there's a few others: we read some KEPs and we're thinking what to do on this, and there's probably some guilt on trying to figure out how to close some questions. And sometimes the KEPs present things as Alpha and then defer the resolution of the big problems until Beta, and I...
H
I don't like that. So yeah, just food for thought.
C
The deadline also acts as a motivating factor: if you don't review it — we have only a week left! So that's when everyone gets lit up to go and review and try to get things merged. I don't know, like, what other motivation... I think we need to come up with other motivating factors to kind of spread the reviews out across KEPs.
I
Yeah, it almost feels like for some of the KEPs we need a different track — an architecture track — where we're deciding what the total contents should be, the end game, you know, and then the parts that we can make progress on moving forward, right. I think sometimes we're too concerned about putting something in that we have to support forever — but... even if it's just an alpha.
H
Can I ask a question on that? I want to make sure I was understanding: what was the behavior of memory.high?
D
So I think when memory.high was launched, the documentation was incomplete, and then eventually there were a lot of people who had questions around the usage of memory.high, and that's when it was clarified by the kernel folks, I would say. So memory.high can essentially be used to throttle the workloads, but there needs to be an external workload that should react to the throttling — basically, say, it can be used in a feedback-loop mechanism.
D
Basically, when the workloads are throttled, there needs to be something external that takes care of moving memory.high to a higher level, and sort of helps in increasing the resources that the workloads would need. Does that make sense?
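To make that feedback loop concrete, here is a minimal sketch of a user-space agent of the kind described above, assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup. The cgroup path, starting limit, polling interval, and grow-by-25% policy are all illustrative, not anything the kubelet does today.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// cgroupDir is a hypothetical cgroup v2 directory for the workload.
const cgroupDir = "/sys/fs/cgroup/example-workload"

// readHighEvents returns the "high" counter from memory.events, i.e. how
// many times the cgroup hit memory.high and was throttled into reclaim.
func readHighEvents() (int64, error) {
	data, err := os.ReadFile(cgroupDir + "/memory.events")
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "high" {
			return strconv.ParseInt(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("no high counter in memory.events")
}

func main() {
	limit := int64(1 << 30) // start memory.high at 1 GiB (illustrative)
	var last int64
	for {
		cur, err := readHighEvents()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// The workload was throttled since the last check: this agent
		// reacts by raising memory.high by 25%. A different agent might
		// instead kill or migrate the workload (a user-space OOM killer).
		if cur > last {
			limit += limit / 4
			if err := os.WriteFile(cgroupDir+"/memory.high",
				[]byte(strconv.FormatInt(limit, 10)), 0o644); err != nil {
				fmt.Fprintln(os.Stderr, err)
				return
			}
		}
		last = cur
		time.Sleep(5 * time.Second)
	}
}
```

The sketch illustrates the point being made here: memory.high only throttles, so without some external component closing the loop (raising the limit, or killing the process), a workload that hits it just stays stuck below memory.max and is never OOM-killed.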
H
Basically, the expectation is memory.high should halt your process, but then, because you're halted, you can increase it and, you know, control the knob at that level — or you could have written your own OOM killer to do an action at that...
D
...time. That's good. So I was thinking: maybe when we have, like, our user-space OOM killing in place, maybe then we can use it; but yeah, it cannot be used for just throttling.
D
I did not check whether there was memory pressure before memory.high, but yes, there was memory pressure after it reached memory.high, and there were kernel reclaims beyond that point — but it never reached the memory.max level that would have resulted in kills. It always stayed below memory.max; that's why it was just stuck.
H
I appreciate you sharing all those details. I know there's other issues — at least here, with my Red Hat hat on — where I would foresee, as we try to start using more cgroup v2 features, the kube community finding maybe unexpected behavior in the cgroups, or vice versa. I know there's an issue right now around maybe some CPU sub-behaviors that we're trying to think through ourselves.
H
So maybe that's a good role model on just the importance of testing. So I agree with Mrunal.
C
Yeah, I think we'll need something like oomd integration here for this to be more useful. And I think I mixed it up a bit, all right: the clarification in the kernel documentation only happened, like, a month or two ago, when people started asking more questions about the behavior. So even within the kernel community, it seems like not everyone was on the same page on how this behaved.
A
Yeah, I'm asking if we want to do that. So if there is a reviewer, an approver, who clearly committed to spend time early in the cycle to review KEP PRs — and we don't accept anything without that — will it help, or would it make it worse?
H
If we can expand the approver powers after this release as a separate action item — I don't know if there's any interest in folks looking to help expand the roster there, but I think anything we do to improve, like, the social contract for people is a good thing.
A
Being explicit next cycle — so we take it a little bit loosely, just, like, put in any name if nobody committed; like, I assume I put Mrunal's name very often, and the approvers', if nobody else signed up to review. But maybe we need to be more explicit there.
J
I also find myself wondering, like, what is the definition of success here? Like, you know, the objective of trying to reject KEPs unless they have clear approvers implies that, like, our success metric is planning the amount that we feel like we can take on. But also I would consider, like, you know, 17 KEPs in a cycle — or whatever the number is — that's a lot...
J
It is, like, really good, and kind of, like, the best that we've done; and if we had been more conservative, we might not have been able to get that much done, right? So, like, is it necessarily that bad of a thing for us to have the ratio be bad? Or is it, you know — how do we want to define success here?
G
I agree, actually. I think we did a good job this cycle: we tracked more; the ratio maybe dropped, but look at how much we merged and how much made progress. I still think the community here actually did a good job.
A
I kind of agree — that's why I asked: will it hurt more, or will it help, to have clear approvers for reviews?
I
Well, to the point being made here: I think we've got two things going on, right — there's two goals. One goal is to have a long-term-support, very stable release, and another goal is to have all these, you know, great new features that we think are important for future releases. And because we're only, you know, really having one release — one main release — we're not doing any, you know, out-front experimental releases, and we don't have an LTS release per se.
A
I think the release could still be really good. Like — everybody — we still have at least two other features, from the 1.10 and 1.12 releases, which are not being looked at, and nobody wants to either remove them or continue with them.
A
I think another point — I'm arguing with myself here — if we start rejecting more KEPs and concentrate on fewer KEPs, we may end up with our own consensus here: we accepted a KEP, and we try to force ourselves to merge it no matter what, because we already made a big commitment; and even if we see a big problem with it, we're still trying to merge it, because there is nothing else in the pipeline, because we rejected everything else. I think the incentives may be a little bit off if we start doing that.
A
So, in a sense, the incentives would be a little bit off here.
J
Yeah, that's another reason I kind of feel like the definition of success is important to define. Like, you know, you mentioned we have features that have, you know, been stuck since, like, 1.10 or 1.12 or whatever. Like, if we reduce the set of things that we aim to work on, the life cycle of the KEPs that we've already kind of committed to will extend, because we have, you know, this set of KEPs that we've already agreed to work on.
J
But
if
we,
if
we
cut
down
the
number,
then
like
those
could
maybe
not
make
progress
in
a
way
that,
like
over
time,
will
kind
of
lead
us
to
having
Perma
betas
and,
having
you
know,
things
kind
of
languishing
in
Alpha.
So
I
I,
wonder
if,
like
really
what
we
should
be
aiming
for
is
you
know,
emerged
caps,
like
you
know,
emerged
like
little
pieces
of
work
that
incrementally
move
on
kind
of
like
what
Mike
was
talking
about.
J
You know, our small little changes over time: taking on, like, not more than we can chew per little change, but, like, aiming to take on as many of those little changes as possible — because those don't, like, threaten the stability of the code base as much, but also, like, bring little improvements over time.
A
Are there more ideas and opinions on these — our metrics for success, or what went well or badly?
E
Excuse me, this is Kevin. I think one thing that was confusing for me was: if there's no owner on the issue, or they're hard to get a hold of, it was not clear to me, like, what the progress is for that. And I noticed, as I was working on the KEP, that having your KEP issue up to date really helps the release team actually know what's going on. So towards the end of this release...
E
I
just
ended
up
basically
copying
the
issue
and
not
getting
it
up
to
date
and
that
made
honestly
stuff
a
lot
clearer.
But
I
guess,
like
an
action
item,
is
making
sure
that
the
the
cap
posting
owner
is
available
because
one
of
the
issues
I
found
is
if
the
the
person
that
creates
the
issue
is
only
able
to
edit
it,
and
that
was
kind
of
a
little
bit
of
annoying
one.
E
Yeah, I tried to ask the GitHub admins — or whoever, whatever the organization is called — to see if there was an option, and it really was just that the owner of the KEP — the owner of the issue — should be at least involved; otherwise you will have to kind of create a new issue.
A
Yeah, I think I, as a chair, have permission to edit the issues, so I try to keep them up to date, especially if there is a comment on the issue asking for that. But I agree with you: it's not scaling that well, so maybe some automation may help here.
E
Yeah, I didn't even know; I asked on the SIG Node channel, and then I was told that the only way to do that is to create a new issue. I didn't know who had access to those issues, but yeah, it's probably better to make sure the issue owner is actually creating it, because it is crucial for the release team to know what steps are there; and yeah, there are a few items there that I think are important to keep up to date.
I
A couple of items I thought went really well. One, the refactoring of infra onto the CNCF side — I thought that went surprisingly well. The other one that I thought went really well this release, more so than prior releases, was all of the help the SIG Node team has been doing to push down dependency requirements to the container runtimes and runtime engines. A little bit of a shout-out: you guys did a great job, you know, helping push us to get the changes we need into the runtimes and engines.
A
Okay, if there are no more opinions and things to share: I think an extra item would be to better understand what our goal is, and better planning would be an interesting challenge — like, we need to be careful how many things we commit to and how many things we want to take on, at least.
B
All right, so that is the retro for 1.28. I will organize my notes a little bit after this, okay, and do another wording pass.
A
Cool. We have two more items on our agenda today; first is Peter.
J
Yeah, speaking of taking on more things and features and stuff: so, a couple of — what was it, months ago, weeks ago, I don't know, sometime in the past — right at the beginning of the 1.28 cycle, we were talking a little bit about, you know, some reworking of kubelet image GC.
J
You know, it's kind of been a long-standing idea that people have had, to, like, have different schemes. There's two kinds of priorities — like, putting my Red Hat hat on — there's two kinds of priorities that we kind of have related to the kubelet GC, and I wanted to see if other people had other sorts of perspectives. So, like, one of the things that we want to push for is alternate schemes for, like, garbage collection: right now, part of it is just disk percentage/space, and I think we...
J
We have a use case where it would also be useful to have some sort of, like, time-based garbage collection, which kind of implies the existence of maybe a plug-in system — which could overcomplicate things, depending, if there's, like, no one else with other kinds of ideas; but also, if other people have sorts of garbage-collection schemes that they can imagine they would want, like...
J
Maybe we could — so I'd like to, in part, discuss that. And then there's, like, a separate one, which is: we'd like to be able to separate the — basically, the way that the images are mounted, or the container...
J
The images are mounted to create the containers — to separate the image store, which we're, like, also calling the read-only layer, from the, like, writable layer: separate disks, so that we can have a separately mounted image disk. And I met with Derek and some folks in Red Hat internally yesterday, and it sounds like that second use case might actually be possible, but there might be a couple of things that the kubelet is assuming about the way that the containers are mounted...
J
...that will not actually be the case. So we're going to investigate that second case. That's kind of, like, a separate — there are two kind-of-separate but related things, and I think they'll end up ultimately being two different KEPs, but I wanted to surface them both, because they're both kind of related to each other. Beginning with the first question, like, for the image garbage-collection schemes: are there other use cases that people have thought of, or that people are looking for, with, like, you know, cleaning up the images? Are there other ways — other triggers — that people want to kind of see happen in the kubelet for starting the garbage-collection process?
B
Pinning images, to protect them from being garbage collected by the kubelet: we can do this already on the container runtime side — we have that pinned flag. It's just, if we have a more, I don't know, systematic way to do this, that would be better. Just one, yeah.
J
And kind of — the scope of the container runtime piece is more like "I want to reserve images that I care about, as the runtime"; that's less of, like, "I'm a user and I want to define, like, these sets of images are ones that I just never want to garbage collect." I mean, you can achieve it with that, but, like, in terms of, like, an exposable API, for instance — they're kind of different use cases.
J
So I can imagine that — if we came up with a plug-in scheme, then we could tie that sort of use case in there. But it could be, like, a plug-in system that ignores certain images and just doesn't remove them.
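As a rough illustration of where such a plug-in could hook in, here is a sketch of a time-based collection pass over the CRI image service that skips pinned images. The lastUsed map and maxAge threshold are hypothetical stand-ins for bookkeeping the kubelet's image manager keeps internally; there is no existing kubelet or CRI API for pluggable GC policy.

```go
package imagegc

import (
	"context"
	"time"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// collectByAge removes images that have not been used for maxAge,
// skipping images the runtime has marked as pinned.
func collectByAge(ctx context.Context, images runtimeapi.ImageServiceClient,
	lastUsed map[string]time.Time, maxAge time.Duration) error {

	resp, err := images.ListImages(ctx, &runtimeapi.ListImagesRequest{})
	if err != nil {
		return err
	}
	for _, img := range resp.Images {
		// Pinned images (e.g. the pause image) are never collected;
		// this is the CRI "pinned" flag mentioned above.
		if img.Pinned {
			continue
		}
		used, ok := lastUsed[img.Id]
		if !ok || time.Since(used) < maxAge {
			continue
		}
		_, err := images.RemoveImage(ctx, &runtimeapi.RemoveImageRequest{
			Image: &runtimeapi.ImageSpec{Image: img.Id},
		})
		if err != nil {
			return err
		}
	}
	return nil
}
```

A plug-in system like the one discussed would essentially let operators swap out the two `continue` conditions in this loop — e.g. an ignore-list plug-in that never removes certain images — alongside the existing disk-usage trigger.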
I
And Derek brings up a good point in the chat: some of these policies and things need to be multi-tenant based — or, you know, isolated — so that one user is only allowed to access this privately pulled image. We can't — and that's part of that "ensure secret pulled images" stuff we're working on — but without the policy, it's sort of hard to...
I
...know to match that to the runtime handlers in the pod specs; we're sort of guessing a little bit too much. If we had the right policy definitions for security, and for caching — you know, performance purposes — then we could do that. We also have a use case in that specific — so far just in containerd today: because of our snapshotters, we can set them on a handler basis.
H
Appreciate you chiming in there, Mike. Do you remember — one of the things I wasn't sure about, when you were pushing for, you know, "ensure secret pulled images": can't they — are images always re-entitled, I guess, or revalidated, when already cached on a node? What I was...
H
...hoping to think through is: have you heard — for those who are using Kubernetes in, like, maybe highly regulated environments, or, like, different compliance environments — if anyone has had a requirement that says: this pod that's using this secret-pulled image — the secret-pulled image must be garbage collected, or share the same fate, as the pod? Like, does the life cycle of the application image need to match the life cycle of the runtime itself? So, yes, the...
H
Okay — but the point being: you folks have heard requirements around image GC, or when it should be done or not done, and if that image was entitled or not, where the policies might vary or need to vary. Whether...
I
So you're onto it there: if we have the right policy set, as Peter was talking about, then we could also cover multi-tenancy issues with respect to the security, right, in that policy — and, you know, the caching policy — and then you could pass that down to the runtime, and we could go ahead and clean it up. Right now the garbage collection is done by the kubelet; maybe that's the right place, but I'm not sure yet — I'm not convinced.
I
I think for two reasons. One, the security information needs to be kept inviolate, right — we shouldn't be passing secrets, the credentials, down over the APIs. I don't think — we should probably come up with a cache policy that says "use, you know, resolver-level keyrings only", and then the kubelet would know: okay, I'm not going to try to use the old model of using image pull secrets, right.
I
So that's one part of the security thing, when you clean up. The other part is: yes, if you pulled it from one person, you have to pull always, if you're in a multi-tenant scenario, today — and we can fix that, but yeah, it's going to require some more work on that "ensure secret pulled images" — I'm sorry, yeah.
A
Thanks, yeah. Well, in the same vein, if there are no discussions — one of the things we discussed internally: I wanted to entertain an idea of whether we can put a mirrors config into the kubelet. Like, do we have to have the mirrors config in the runtime? So, one of the reasons for a mirrors config — like, the problem with the mirrors config in the runtime:
A
If you have a pull secret for a specific mirror that you use, you maybe want to use this pull secret when you use this mirror, especially if it's some kind of ambient pull secret — like, in the Google example...
A
...gcr.io: like, you may want to use your ambient secret with gcr.io if it's configured as a mirror for a specific image or, like, a specific registry. And there are other use cases that may be useful, like if you can reuse certain images and stuff like that. And also it would be useful for end users, because, like, they don't need to learn how to configure mirrors in different environments, in different runtimes, and the logic is very much the same across different runtimes.
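Purely as a thought experiment — nothing like this exists in the kubelet today — a kubelet-side mirrors config of the kind described here might pair each mirror with the credential to use through it, so the kubelet could rewrite the image reference and pick the matching pull secret before calling the CRI. All type and field names below are hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// MirrorRule is a hypothetical kubelet config entry: pulls for images
// under Registry are routed to Mirrors, using the named pull secret.
type MirrorRule struct {
	Registry  string   // e.g. "gcr.io"
	Mirrors   []string // tried in order, e.g. ["mirror.corp.example"]
	SecretRef string   // pull secret (or ambient credential) for the mirror
}

// resolveMirror returns the rewritten image reference and the credential
// to use, or the original reference if no rule matches.
func resolveMirror(image string, rules []MirrorRule) (string, string) {
	for _, r := range rules {
		if strings.HasPrefix(image, r.Registry+"/") && len(r.Mirrors) > 0 {
			// Swap the registry host; keep the repository path and tag.
			return r.Mirrors[0] + strings.TrimPrefix(image, r.Registry), r.SecretRef
		}
	}
	return image, ""
}

func main() {
	rules := []MirrorRule{{
		Registry:  "gcr.io",
		Mirrors:   []string{"mirror.corp.example"},
		SecretRef: "corp-mirror-pull-secret",
	}}
	ref, secret := resolveMirror("gcr.io/project/app:v1", rules)
	fmt.Println(ref, secret) // mirror.corp.example/project/app:v1 corp-mirror-pull-secret
}
```

This captures the point made above: the routing logic is the same across runtimes, but today each runtime keeps its own mirror config format, and the kubelet has no way to attach a specific pull secret to a specific mirror.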
I
More registry mirror-routing information in the pod spec, as opposed to less, and passing the policy for pulling the image down to the runtime — you could go both ways. But then, are you really wanting the kubelet to do the resolve and the pull, and then push it into, yeah, a location for the container runtime to use? That's...
A
No — the API would definitely be more elaborate between kubelet and runtime.
J
Well, I wonder — like, you know, kind of a pattern that we've been leaning into, but not necessarily, like, committing to, is, like, the opposite direction, where we're moving more housekeeping knowledge into the runtime instead of the other way around; and the motive — the runtime is the one that's ultimately responsible for those operations, so, like, you know, it's closest to it, it knows about...
J
...you know, the confidential VMs, in a way that the kubelet will, like, probably never be taught how to know — or, like, it's not in the scope of those enhancements. I wonder — I do agree that it's awkward right now; basically, the way that you have to do mirroring in the runtime...
J
...is you basically lie to the kubelet and say, like: "you asked me to pull this image, but I'm just going to ignore you and do something different." So, like, reconciling that sort of, like, weird, like, sleight of hand would be good. But I'm not sure — I think talking through the details of, like, who's ultimately responsible for that — I think we might need to do some more discussion on that.
F
Well, the user interface of fetching with a secret and passing it down to the runtime, or, like, a mirror configuration and passing it down to the runtime — it's probably solvable; it's not a big deal. But, like, moving into the kubelet the logic of how images should be fetched, or, like, teaching the kubelet, like, different runtime handlers and what their specifics are — it's not really a good one, in my opinion.
I
And I'll tell you why this is important: this is only going to get more complicated, and more urgent, as we start doing artifact support for scan results or, you know, signatures, SBOMs — requirements that you're going to need to have in the pod spec, I think, going forward, right — that it required, you know, all images to have been signed.
J
Yeah, and I think it's especially complicated because we have the split-brain mode, where, like, currently the kubelet knows half of the things and is telling the CRI — yeah, so, I think, it knows, and the CRI is either listening or not, depending on, like, how obscure the configuration is. And I think, with these kinds of more obscure use cases, we're going to just keep hitting these walls of, like: who is actually responsible? Like, why — like, who...
J
Who is the entity that should actually know about this, and who should just follow along? We're coming up against time. I am wondering — like, part of my wanting to bring this up was just to see, like, who's thinking about this: does anyone want to be involved in the conversation? It sounds like yes. The second question would be, like: what is the appropriate forum to continue this conversation?
J
Should we have a working group that is talking about it, or should we, like, just continue in, you know, this meeting for a little bit, until it seems appropriate? This is something that we want to, like — we as — well, you know, like, my team at Red Hat wants to, like, pursue the garbage-collection piece at least in 1.29, but I want to make sure all the other voices are heard.
J
Should we set up a separate meeting, or should we just reconvene next week and, you know, see if we can, you know, come up with a sort of direction — maybe with a proposal that's written out a little more concretely?
J
Right, okay. Well, I can investigate sort of starting, at least, like, a working group in the intermediary — intermediate — time between, like, now and when we have to start thinking about the KEP process, so that we can have larger discussions like this and kind of fine-tune all of our thoughts before bringing it to the larger forum.
A
Thank you — cool, thank you. There's a last item: I wanted to remind you that the sidecar working group is happening. We paused the meetings, but we'll restart them starting next week; it's 9 A.M., just an hour before this meeting. If you're interested in sidecars' evolution, please join that meeting. And with that, we've reached the end of the agenda. Are there any last-minute notes?