From YouTube: Kubernetes SIG Node 20200922
A: Okay, Sergey, can you start today's meeting with the PR status review?
B: Okay. This is the SIG Node weekly meeting, and it's September 22nd. This week we're still making progress: we're still trending down on PR count. I just checked the statistics on open PRs over two years, and we're almost at the level of the PR count from last year. We're not even close to two years back, because that level was around 40-50 PRs. Last year the active count was around 148, so we can get close to that.
B: I hope we can. We have four more weeks, and if we have a trend of decreasing by at least 10 a week, that will be very, very good. I also looked at the number of inactive PRs in the same devstats, and the inactive PR count is very low, so we're trying to poke all the PRs very actively. Hopefully that will help decrease the number as well, and we can close all the inactive ones and get to good numbers.
B: Yeah. This week I looked at the rotten PRs. There are two that seem useful, so if you have the energy to just revitalize them, just do it. Thank you.
C: Yeah, so I know much of today's agenda is tied up in KEP reviews. The ask I have is: if we can get every KEP that we want reviewed this week enumerated, ideally in this agenda doc, that's great, and we can ensure we pair up a reviewer with each KEP.
C: The freeze is, oh, I think October 4th. Maybe it's the 2nd; it's somewhere around there.
C: It's the 6th, oh, the 6th, yes, thank you, Kirsten. And so I think, if we use this week to get some reviews done, we'll have time for iteration and still meet that October 6th deadline, which is two weeks or so out: the enhancements freeze.
C: I just want to make sure that people are aware of the upcoming date. Absent that, I guess we can go on to individual KEPs. For the items that we raised last week, that I said I wasn't able to review until this week: the sidecar container KEP pages I haven't read yet; I did get through the memory manager KEP, and so that sidecar one is next. But if you want to go through the other remaining KEPs now, I think maybe others have had a chance to read and comment, or want to talk them through.
D: Would it be okay if I just said something really quick? I'm Kirsten and I'm the enhancements lead for 1.20, and we're basically trying to be super proactive this cycle, so that things don't get stuck towards the end. So we started pinging all of the KEPs early, and a lot of people have been super responsive.
D: If you have something that needs to go in, then it'd be great to respond to any pings that are on the issues, because that's how the team tracks it. We've also been emphasizing to people opening KEPs that they need to talk to the SIG first, and we've actually added that into the issue template, so that we just don't end up...
D: ...with people who thought that they had a great idea without actually talking to the sponsoring SIG. So if you have any questions, totally reach out to me; if you have any problems, reach out to me; and I'm sure you'll hear from me again on all of your PRs and issues. So that's all.
C: So I think we should make sure, as we go through the rest of the agenda, that we have a clear reviewer, at least for the ones that are listed here now. Both Kevin and myself were reviewing the memory manager KEP, but of course we'd love more eyeballs on it. And then the sidecar KEP I had said I would take shepherding on. So it's the remaining KEPs.
C: If we go through here, obviously in general, and make sure we have a clear assignee, or an understanding of whether it's something we can actually implement in this phase or not, or what our goal would be with the KEP. So maybe we want to go to the next topic, but that's okay. Dawn, the user namespaces KEP is the...
F: Can I ask you a question just before that? Yeah, I'm not sure I follow. So you will review soon, and if it all looks okay, maybe it gets added to the 1.20 milestone? Or how do we want to do that?
C: Right. I haven't read your latest update to the sidecar KEP, but it was my agenda item this afternoon to do, and so I'm assuming, if that KEP was agreeable, we'd want to proceed on alpha implementation in 1.20, if you were able to do that, yeah.
C: But I guess I would say: if there are KEPs that we're just trying to get design consensus around, but we don't necessarily want to make implementation progress on in 1.20, we should just call that out as we get through some of these topics. Thanks.
C: Is there somebody available that wants to talk through the user namespaces KEP?
G: I don't want to go into all these issues, but these are some of the CVEs that could have been mitigated if user namespaces were used. So what we are proposing, or what we would like to propose, is to extend the pod specification to have a new user namespace mode that has three different options: three different ways of enabling user namespaces.
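As a rough illustration of the idea (the field name and values below are a hypothetical sketch, not the API text of the KEP), the proposed pod-spec extension might look like:

```yaml
# Hypothetical sketch only; the field name and values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: userns-example
spec:
  # One of Host (share the host user namespace, today's behavior),
  # Cluster (all pods share one ID mapping), or Pod (per-pod mapping).
  userNamespaceMode: Cluster
  containers:
  - name: app
    image: registry.k8s.io/pause
```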
G: This case (the host mode) is useful for applications that don't work with user namespaces. The Linux kernel performs capability checks taking user namespaces into consideration, so there are some special capabilities that are not compatible with user namespaces; those applications have to share the host user namespace to work, and the same happens for privileged containers. This is the current behavior we have in Kubernetes, and we think this should be the default mode for the time being, to avoid breaking system workloads.
G: We have a second mode that we would like to propose, and this is cluster. So in this mode, all the pods share the same ID mapping.
G: This ID mapping is defined in the kubelet configuration. And yes, just a detail: the pods are in different user namespaces, but they are actually using the same ID mapping, so this mode is useful to allow sharing volumes. What happens here is that if there is a volume that is shared by different pods, those pods will be able to share files on that volume, because the effective user ID and group ID on the host is the same.
G: This mode is suitable for stateless workloads, because those workloads don't have volumes, so we don't have to handle the volume access issues with different mappings. And we think that this should be the default mode in the future, in the long term. What I mean is that, once the Linux kernel has all the features that we need for the UID and GID handling and so on, this should be the mode that is the default for all the pods.
G: We are proposing these changes to the CRI, so we will have to extend it to contain the information about the user namespaces. We want to make two specific modifications. The first one is to add a new user field using the NamespaceMode type. This user field will indicate whether the user namespace to use for the container is a different, new one, or whether that user namespace is the same as the host's; so the two values for that will be pod and node.
G: This is just a detail, important to notice: the cluster mode that I explained for the pod specification is not present at this point in the CRI, because it is just a special case of the pod mode. The only difference here is that all the pods using this cluster mode are going to share the same ID mappings.
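As a rough sketch of what that CRI change might look like (the field numbering and comments here are illustrative, not the merged CRI API):

```protobuf
// Illustrative sketch only; the "user" field is what the proposal
// would add to the existing CRI namespace options.
enum NamespaceMode {
    POD       = 0;  // a namespace new to, and shared by, the pod
    CONTAINER = 1;  // a namespace private to the container
    NODE      = 2;  // the host (node) namespace
}

message NamespaceOption {
    NamespaceMode network = 1;
    NamespaceMode pid     = 2;
    NamespaceMode ipc     = 3;
    // Proposed addition: POD requests a fresh user namespace for the
    // pod's containers; NODE shares the host user namespace.
    NamespaceMode user    = 4;
}
```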
C: And, of course, in the cluster mode: where is that ID mapping stored, and then what would be the impact for somebody running cluster mode?
C: Where I think you said the behavior was intended to let you reuse volumes: I'm just thinking about situations with tools like Velero that allow you to maybe swing a volume from one cluster to the other one. So I'm just curious where the mapping is persisted.
G: Yes, the idea is that this mapping is a cluster-wide configuration. We are still not sure whether the configuration should be on the kubelet or whether we should configure this globally, but the idea is that the whole cluster is using the same value. But yes, for handling volumes between different clusters, we haven't considered that case yet.
A: I may have a network problem, so your sentence is a little bit broken for me, but I'm still confused about the cluster mode. You said that every pod shares the same ID, so all those pods that are in this cluster mode, they don't have any isolation provided? I'm confused by that cluster level of the mode.
G: Okay, so yes: in cluster mode, all the pods that are running in the cluster mode share the same mapping. So there is something like an isolation between those pods and the host, but yes, there is no isolation between the pods using the cluster mode, because they will all be using the same mapping.
C: We previously explored this with Vikas's work around user namespace remapping. I guess I'm kind of reading the cluster mode as: there was a capability in Docker that allowed you to do a default remapping that all containers got, and so it was opaque to the cluster what that mapping was, back when we explored this. But this is basically saying: rather than pick up a default remapping that might have been configured at the runtime level...
A: It sounds like this is kind of a static deployment to all the kubelets, and so they all understand the same mapping. But from what was just explained, it could still be configurable at cluster provision time, or management time, or whatever operational time; so it still could be reconfigured, but that authority is assigned to the kubelet.
G: Yes, you are right. Actually, to be honest, this is an idea; we know that somehow the pods will have to share the same mapping. But yes, it is an open discussion where to configure that mapping and how to configure it. One possibility would be to make this static and configure it just at cluster creation; that could be an option. Another option is to have that configuration on each kubelet instance, but in this case there is a risk.
F: Hi, sorry, I'm Rodrigo; I'm working with Mauricio also. And maybe something that wasn't stressed enough is that each pod will have, even in cluster mode, a different user namespace; but the mapping, so the effective user IDs resulting from the host point of view, will be the same.
F: So there will be some isolation; it's just that from the host point of view it would be the same mapping. And basically this simple mode achieves a lot from the security point of view, because there are different user namespaces, and root in the container is not root on the host, and those things.
F: So, answering your question, Dawn, on what the algorithm to pick a mapping would be: it would be to select the mapping, and in the Linux kernel you just need to pick the container ID, the host ID and a length.
F: So, for example, 0 in the container IDs will map to 1000 in the host, and then the length; it will be a linear mapping, and that's how the kernel allows you to configure it. So it can be just 0, 10000, or whatever is configured in the kubelet, and whatever length you want. Something like that would be the algorithm, if I'm not missing something.
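The linear mapping described here is the same shape the kernel accepts in a uid_map entry ("container-base host-base length"). A minimal sketch (the 1000 and 65536 values are just the examples from the discussion, not kubelet defaults):

```python
def map_to_host(container_id: int, host_base: int, length: int) -> int:
    """Linear user-ID mapping: container IDs [0, length) map to
    host IDs [host_base, host_base + length)."""
    if not 0 <= container_id < length:
        raise ValueError("container ID outside the mapped range")
    return host_base + container_id

# Container UID 0 appears on the host as UID 1000 under the
# mapping "0 1000 65536", as in the example given.
print(map_to_host(0, 1000, 65536))     # 1000
print(map_to_host(1000, 1000, 65536))  # 2000
```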
C: Yeah, sure; actually, yeah, also go ahead. So right now this is being presented as a field on the pod spec, and I was curious if you intended this to get moved under the pod security context.
C: And then I was curious if there was a rationale for why we may or may not want a setting per container.
C: There's a pod security context block that lets you configure, for the whole pod, SELinux modes and everything else, and I would have expected this to fall in there; so that was just a minor nit. But then, generally, the second question I'm asking is more interesting to me, which is: do we see a need to have a different remapping on a per-container basis?
C: So one of the use cases for sidecar containers, if I recall, aside from Istio, was being able to have a common logging sidecar, or a core dump sidecar, right? And I could see, as sidecar container use proliferates, that...
C: ...some of those things might bring in both a container and a PV binding that's not known to the original pod author. And it could be that I'm overcomplicating; I'm just trying to think through this proposal with sidecar containers, and whether any sidecar containers would be confused by how user namespaces are set up.
H: And there is a restriction in the Linux kernel: if the different containers have the same network namespace, which is how Kubernetes works, then, if you want to share the network namespace, it's difficult to have a different user namespace for the different containers of the same pod, because then /sys cannot be mounted. So that's...
C: Okay, yeah, that'd be awesome if that's clarified in the KEP; I hadn't tied those two things together.
G: Okay, so if there are no other questions, I will continue with this; I'm just going to finish. So, of course, we will have to extend the CRI to contain the mappings to use. I want to emphasize here that in our proposal the kubelet will be the one making the decision about what the mappings to use are.
G: There are some problems with Docker, because Docker doesn't support containers with different ID mappings, so that's something that we will have to discuss. I don't know whether there is a plan to deprecate the dockershim. So, yes, we will have to discuss with you in more detail how to handle the Docker use case for this proposal. And this table is just summarizing the different modes we have in the pod spec and what is sent over the CRI; so basically the pod mode assigns pods different, non-overlapping mappings for each pod.
G: Finally, we would like to tackle this problem in different phases. So we would like to try to iterate over the whole design, to discuss the three different modes and define something like a long-term design for this; but for the implementation itself, we would like to propose implementing this in different phases. So for phase one, we would like to introduce the host and the cluster modes.
G: I welcome feedback. We have done some meetings with Red Hat internally, and we want to start discussing this with the whole community, so it will be great if there is feedback on this. I think the next phase, the next task, will be to update the KEP with this new proposal and to have this discussion.
C: So I think the thing we need to figure out here is who can pick up the baton to re-shepherd this scenario. In the past we had Vikas Choudhary, who was working on this, and I was assisting, and we got far; and what you're presenting here is a slight variation on that past iteration. I'm wondering if there's anybody that would want to volunteer as a primary reviewer on the KEP, or a potential approver.
C: I mean, I think that's fine with me. Let's just make sure we have the right level of kernel and runtime awareness, right? So having experts review the work is the right thing. If anyone else wants to pair up with him or not, I think that'd be good.
C: Just for level setting, then: when you talk about phases, are we wanting to get agreement on a KEP in a 1.20 time frame? Were you looking to make an iteration towards any one of those phases in this release?
G: The problem here is that the whole implementation is quite big, but we also know that we should discuss the whole design first, before going into the implementation of the first phase. It is not possible to do the design for the first phase, implement that, and then start discussing the second one, because the design could change. So we have to define the design at the beginning and then start implementing in different phases.
E: Okay, so I think we wanted to do this initial presentation to just get a feel for what SIG Node thinks of these fields and this overall approach. At least for phase two, we still have to figure out some of the details.
C: One: I know you and I were talking a bit in the background, but I'm still trying to think through whether there are any peculiarities around accounting for the image effects when this is enabled or not. So maybe some discussion in the KEP that talks about when and how remapping occurs when the image is pulled, and what happens if more than one pod references the same image but uses a different user namespace mapping.
E: Right, yeah. So I think that was one of my comments on the last iteration, and I think the way (Alban can correct me) containerd works is that it's pulling as root, and there is a one-time cost to do the snapshot to a different user namespace root, whereas classically in Docker we already had a different root into which you're pulling. So on the CRI-O side, we still need to figure out how we want to handle it.
E: Ideally, if we have the mapping already at pull time itself, we can just pull into the separate root, and then any...
E: Right, exactly. So if the runtime knows the root, and at pull time itself we are just creating that one rootfs for an image, and if we know that it's never going to be used by root and it's only going to be used by the pod, then we can save that cost, versus always pulling as root and then doing that one-time conversion.
A: Okay, well, that's exactly what I tried to figure out earlier. Because for a given pod, based on what you said, they may have the base image, and then, based on what you just described, it's kind of cluster mode, right, so they all can share. But on top of that, when there's the delta built on top of those base images, then they become the pod mode. So that's why I don't know how we are going to manage that.
A: What you describe, that's kind of the question I have. And also, this all goes to another mode; it goes to the pod mode, right? But at the bottom now you only have the one mode associated, so in the end you'd have to either choose cluster mode for everything, or always have to choose the pod mode for everything.
C: I guess the thing about the pod mode that I was concerned about is: if we end up getting to a state where everyone runs pod mode, and then everyone has an Istio sidecar container injected into their pod, then the cost of that sidecar container is duplicated per container running on that host to do the remapping. That's the type of thing I was just trying to work out, what the gotchas were here or not, particularly when you get to the pod mode. But I think the TL;DR:
C: The behavior of image garbage collection and image accounting is probably the best area to explore next on this.
E: Yeah, I think the sidecar one was a good issue to call out.
E: So people like Kinvolk would want to implement, ideally, the cluster mode, which is phase one, if we don't have any big issues with the overall design; the risk being that we start implementing phase one and then realize: oh, no, we didn't think of a blocker which will prevent us from implementing phase two and the features down the line.
A: Shouldn't the host mode be basically just the same behavior? So basically, in phase one we are kind of looking into the host plus the cluster mode, right? Yeah.
C: Yeah, if we can follow up, or let us know if you want to take review on that, that'd be great. And then it's going to require API review, so it will probably be a little bit longer; and then I do think the field should move to the pod security context, and that will save you a round of API review. And just make sure in the KEP we define what the feature gates are to enable this. Okay: I think image garbage collection is the main thing I'm concerned about on this.
E: Yeah. I guess Mauricio and Rodrigo and Alban, we can work together and come back next week. Sure, sure.
C: Okay, yeah. I was actually curious about Kata as well. It seemed like Kata didn't really have a use case for host mode, and so if anyone on the call could represent the Kata community, that would be useful.
J: I'll need to look at it closer, but yeah, we don't do host networking, if that's the question about host user namespace support. So it seemed like... I think there...
A: Well, I think with Kata it might be okay, maybe, because they are already hardening, and it's just a certain mode that they, by default, disable and don't support. But the virtualizer is a process on the kernel, and it's also on the same host, so there may be a compatibility issue. So... so can someone send me...
A: Oh, do you have an email you can share, so I can introduce people and follow up with you offline? Of course, in discussing this we'll come back to SIG Node, but I just want to say that, from an efficiency perspective... so can you share with me your email address, so I can introduce the people from the Kata side?
I: Oh, there is a typo in the email, but we can share it in the session here, on the SIG Node Slack channel.
L: Okay, yeah. So it's my first time here, so I don't really know the process, but I opened this issue because I think it would just be useful for containers to be able to report and log the ID and digest of the image they are running, both for debugging and other purposes.
L: For my personal case, it is reproducibility of scientific computation inside of a job. So we want to report and log what exact image was used. So I opened the issue, and a comment there suggested that I should go with a more formal process to move it further. I'm not sure if it requires a KEP, or, you know, it's pretty...
C: Yeah. So I'm trying to think back whether there's a historical enhancement that described the downward API generally, given its history, but...
L: The main idea is that the code running inside of a container should know what exactly the version of the code is that is running; and the easiest way for that is to report the Docker image digest, or, more generally, the image ID. And then we can do that both to report to end users, you know, which version of the code served your page.
C: I think, like... if I run in other runtimes, can a Java application know what version of a jar file was given to it? I'm wondering about the boundary there. And then, can't SHAs vary across the images? I'm trying to think through all the cases we run into where the value can differ, and I swear that we have one, but it's escaping me at the moment. So...
L: I mean, the image ID is already available from outside, so you can log it from outside, for, I think, all the same reasons. I don't see the reason why the internal application shouldn't know what is being run; it's just the question of where you want to log. What's the use case here? Okay, so, for example: you can imagine a website where, in the footer of the website, you want to show which version of the Docker image served this...
L: ...served this request, and that's currently not really possible. For us, the Docker image is the unit: not really just the git repo, but really what you want to reproduce is exactly the particular Docker image, and to know which one was running something. And I think the image ID is the closest we have, and we just want to expose it, but couldn't.
C: I'm still kind of missing it, it feels like. Maybe I'm alone on this, though; I'm just kind of missing why the binary cares, right? The image defines my operating environment for my application, and, you know, when you package your image, if you had application-specific versioning semantics, wouldn't you put that in your container image when building your image?
L: So when you run a job where you get data in, where you want to run a Docker image: if somebody wants to rerun those, they have to know which Docker image version you used, and so having the digest, I mean the image ID, allows them to pull exactly that Docker image. Having something internal does not map easily to what exact Docker image they have to use to reproduce the thing. So you can log what was used, but it's not possible... like, for example, I can store the git commit hash used to...
L
You
know,
build
the
docker
image
and
so
on,
but
if
somebody
wants
to
rerun
that
finding
which
docker
image
included
that
git
and
you
know
exactly
version
because
it
might
be
multiple,
lock
images
built
from
the
same
git
but
different
other
dependencies
on
it's
hard,
so
you
know
we
already
have
a
readily
available
way
for
both.
L
You
know,
logging
and
also
pulling-
and
that's
I
think,
is
the
important
thing
here
is
that
you
know
having
an
image
id
allows
you
to
pull
exactly
the
same
version
of
image
later
on
again,
and
so
that
allows
you
to
reproduce
this
and
and
debug
or
you
know
whatever
use
case
you
you
have,
you
know
so
far.
As
you
know,
debugging
reproducibility
is
the
the
main
use
case
here.
So.
C: Maybe another way you could approach this is: can you just run two containers in your pod, and then pass into the second container the image of the first container that you want to know you were running? Why does this need to be a... I'm kind of curious, one, if there's a security isolation boundary we should think through on letting an application environment know which container image it was running from, I mean.
L: It's allowing it to know; and like the downward API, you have to enable it. It's not that all images will now know; but if I want to know in my container what is being run, I'm able to configure it at the Kubernetes level. It's not that we are giving everyone this information.
C: Yeah. So maybe the next step on this would be to write a KEP that gives your use case and motivation a little more detail, and then that would explain why it needs to be generic to Kubernetes; versus, if it's specific to a particular class of pods you're creating, you could pass this information in a container co-located with your primary container that said: this is the image I'm actually running.
K: I think it could be easier than that. We have something similar to this when we build our operators, in that we pass the operand image path to the operator as an environment variable. So if you have something, some template, that's rendering out your deployments or whatever, you can just have the template render out the image and then copy that same value into an env var that you inject into your container, right?
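The workaround described here can be sketched as a pod fragment (the image reference and variable name are illustrative): the template renders the same image reference twice, once as the container image and once as an environment variable the application can read and log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-aware-app
spec:
  containers:
  - name: app
    # Rendered by the deployment template...
    image: registry.example.com/app@sha256:0123456789abcdef
    env:
    - name: RUNNING_IMAGE
      # ...and the same rendered value copied into an env var.
      value: registry.example.com/app@sha256:0123456789abcdef
```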
L
Yeah
I
mean
there
are
many
workarounds
here.
I
I
completely
agree.
I
I
I
mean
I
mean
the
same
calls
for
everything
provided
in
the
download
api.
So
you
know
the
whole
point
of
download
api
is
that
you
don't
do
that,
but
yeah
like
I
mean
donut
api,
the
same
thing
like
if
you
want
to
pass.
You
know
the
the
amount
of
resources
you're
giving
to
the
container.
L
You
can
also
have
use
a
template
and
you
can
pass
it
as
environment
variable
and
copy
the
value
from
the
you
know,
resource
field
to
the
environment,
variable
and
so
on.
You
know
here
the
idea
here
is:
it's
really
like
you
know
not
have
to
repeat
yourself
and,
and
that
you,
you
know
you
minimize
the
operation,
operation,
complexity
here
and
also
the
other
issue
here
is
that
you
don't
really
know
exactly
which
version
of
the
docker
image
the
the
docker.
Will
I
mean
the
kubernetes
will
pull
so
you
can.
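The existing downward API mechanism being referred to, copying a resource field into an environment variable, looks like this in a container spec (assuming a container named "app" in the same pod):

```yaml
env:
- name: CPU_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: app
      resource: limits.cpu
```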
L
You
can
specify
you
know
image
and
tag,
and
then
it
pulls
a
particular
version
of
it,
and
you
cannot
obtain
that
unless
once
it's
running
you
can,
of
course
you
know
exactly.
L
You
know
use
a
docker
image
with
the
whole
digest
as
a
link,
but
if
you
are,
if
you
don't
care
in
that,
you
know
exact
moment,
you
know,
for
which
particular
version
is
running,
but
you
just
say
you
know,
use
the
latest
work
latest
stack,
but
when
it's
you
use
the
latest
like
I
want
to
know
which,
which
one
particularly
you
you
put
you
pull
yes.
So
here.
G
It's
really
about
being.
G
A: Can we follow up through the KEP? I hope the proposal can capture all the use cases; I don't know whether all the use cases are described here. We can carry the discussion there; that's more organized. And we also only have six minutes left; we have so many topics and I haven't gone through them yet, so can we carry on from that one? So thanks for the proposal, and please send us the KEP; we will carry on our discussion there.
A
So,
let's
move
to
the
next
topic:
we
don't
have
much
time
so
we
know
do
you?
Do
you
want
to
talk
about
tonight?
The
power,
the
resource
api.
N: I won't present the KEP, but I will talk about it very quickly. The KEP linked in the notes is really just a reformat of the existing KEP to the new process.
N: What is discussed here is moving from beta to v1. So this is a feature that's been in beta since 1.15; it's been used in production by a number of customers; and overall, the work that would be remaining, at least in our opinion, to move from beta to v1 is really: create the v1 API, maybe add a small metric, and make sure that the gRPC config options are set in such a way that they don't allow misuse. But that's really what this KEP is about, or what this PR is about.
A: Okay, I will take a look at this one, and I believe we already talked about this one; we all agree, and we will look at the details. I know David Ashpole is also still on this one as the reviewer, so yeah. Can we move to the next one, so we can carry on offline? And please, everyone, if interested in this one, please review it. And the next one is the memory manager KEP.
C: Yeah, so I had reviewed this. The main issue I was hitting on this, just to summarize, was: one, wanting to have better metrics for knowing whether the actual feature is functioning as designed. So if the kubelet could report its node-local topology view and some understanding of how pods were bound to it, that would be useful as a person trying to investigate the system.
C: The other one was, and this is too long of a topic to dive into here, but we've been exploring issues around kernel memory accounting, and Mrunal and Seth and myself have talked in more detail. But since this tries to further elevate what it means to be guaranteed memory, particularly to users that expect ever greater guarantees, I wanted to poll the community on whether there was any guidance for how people handle...
C
Group
k,
memory
accounting,
because
what
we
see
at
red
hat
is
that
the
charges
per
c
group
can
vary
depending
on
the
number
of
cpu
threads
available
to
that
container,
and
so
just
accounting.
For
that,
when
we
provide
ever
and
ever
creating
greater
accounting
guarantees
in
the
cable
it
is.
I
felt
like
something
we
should
define
as
part
of
a
notes
back
on.
C: Now that the comment is public on that Bugzilla, which was the case of slab accounting based on CPU threads, no one needs to read the internal analysis of how we ultimately found these new issues. But in general, I think if we can come back with a recommendation on how to set up cgroup kmem accounting when using this feature, that's kind of the main thing worth calling out.
M: Yeah, for sure, and like I asked, let's discuss it next week, because it's a really interesting topic: it can be related to the hardware topology, and it can also be related to specific system calls, if you initiate them. So yeah.
A: And I want to note for myself to take a look; and anyone else who wants to review, please take a look at that one. This one has also been talked about many times and come back many times, joined by SIG Storage initially; and finally we have the SIG Node people who really own those things, so please take a look at that one.
O: Given the time, I'll just go over it very briefly. So there's a new KEP that Mrunal and I put up. The idea here is that the kubelet should kind of be aware of the underlying machine shutting down, and be able to actually capture that event and, during the machine shutdown, terminate pods gracefully. Because currently, when a machine shuts down, all the pods are just kind of deleted by the underlying init system and just killed, so preStop hooks and SIGTERM and all that stuff are not honored.
O: So that's kind of the proposal here. We can go into more detail at the next meeting, but I would appreciate anyone who's interested taking an initial look and providing some feedback.
A: So we have the privileged containers on Windows one, and the last time, I think it's because last time we went over time... so can we make this, at least next week, the first topic to talk about?
H: So we did raise this with SIG Auth, and they didn't have any problems. I think we can talk about this next week, but would we be able to get anybody to take a look at the KEP before next week, so we have enough time to kind of react to feedback before the freeze?
C: It seemed like there's a discrepancy between the behavior and what the pod spec shows, and so the more we could do to make what the pod spec shows map to the actual user behavior was the main gist of my feedback. I haven't seen whether that's been addressed yet, but yeah, I tried to be proactive in getting a review to you there.
H: Yeah, thank you, yeah. I think we did address a number of those comments, but yeah, I think moving to next week is fine. I'm just worried that we won't have enough time, if we discuss next week, to be able to reach a consensus by the sixth.
A: Okay, so I will move it to next week, and the ask for everyone is to take a look at the doc. And there are other things, like the container notifier KEP; I think maybe just call out for reviewers. And Sergey, you have a couple of questions on certain things; maybe people can answer through these meeting notes. Is that okay? Because we don't have the time right now.
B: Yeah. So there is a KEP that we discussed in one of the previous meetings, about timeouts. It would be really great... it's very simple, it's a one-pager, and it will be great if you can approve it. And then RuntimeClass: we have a document collecting all the feedback, so if you want to read it... I think we need to start moving RuntimeClass into GA as well.