From YouTube: Kubernetes SIG Node 20200923
A
So hello and welcome, everyone. This is a public SIG Node meeting to discuss the requirements, issues, goals, and blockers around promoting what we have today in the Kubernetes CRI from an alpha API status to beta and eventually GA form.

Mrunal and I had been chatting a little bit last week, along with Dims — I don't know if Dims is here today — about what we can do to evolve the present status quo in SIG Node. We have the in-tree Docker shim, which I think as a community we would recognize has been largely put under maintenance, whereas a lot of both community and vendor adoption has shifted towards using the CRI implementations in practice. So we kind of have the CRI in this quasi state: it's alpha, but everyone treats it as production-level status. That is both good and bad. It's good in that I think many of us represent users who have been using a CRI implementation in production now for well over a year, which means we've learned enough to say what we'd want to tweak before promoting it.

And so Mrunal, since you organized this meeting, do you want to go first on that one?
A
Yeah, so my perspective on that one — and it's just my perspective, so I'm happy to hear if others feel differently — is that part of the reason we developed the CRI itself, the motivation, was that we wanted an interface that met the needs of Kubernetes at that particular release.

In the early days of containers we had a fast-moving container runtime and a fast-moving orchestration service, and sometimes there was an impedance mismatch between what the orchestration platform desired and what the container runtime met, across many dimensions. Sometimes the runtime was ahead: things like user namespace remapping, which we talked about this week, might have appeared in the runtime before it was actually understood how it would be leveraged in the orchestrator. So when I think about what it would mean to go from alpha to beta, I think the first question is: what does a CRI version mean? For me, if we go from an alpha to a beta, it's not that we would say "this is CRI version 1.0"; we would say "this is the CRI definition that satisfies Kubernetes 1.20 or 1.21", in the same way that we roll out feature gates within Kubernetes that say: hey, you have a feature that's now alpha, beta, or GA.

The idea would be that you recognize the CRI as a supported interaction protocol between the kubelet and a runtime, but the actual API definition of the CRI doesn't need an n-minus-2 or n-minus-3 version skew in the same way that we have with kubelets and API servers. It's more a recognition that we have something tested in CI, and as consumers we signal to the community that this is the API we as a community are aligning against, and you can expect iterations of that API in the same way you expect iterations of Kubernetes. So that versioning statement, I think, is the first thing we need to reach consensus on as a community or a group: whether people are comfortable saying a CRI version maps to a kubelet version.
B
Yeah, I think that makes sense — if we always tie the kubelet version to the CRI version and don't make any guarantees about backwards compatibility.
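A minimal sketch of the kind of check this implies on the kubelet side: the kubelet asks the runtime which CRI version it serves, and only proceeds if it is a version this kubelet build speaks. The names here (`SUPPORTED_CRI_VERSIONS`, `runtime_compatible`) are illustrative assumptions, not the real client code; the response of the actual Version RPC is modeled as a plain dict.

```python
# Hypothetical sketch of a kubelet-release-pinned CRI version check.
# Each kubelet release carries its own set; bumping the release simply
# changes the set in lockstep, with no cross-version skew guarantee.

SUPPORTED_CRI_VERSIONS = {"v1alpha2"}  # what this kubelet build speaks

def runtime_compatible(version_response):
    """Accept the runtime only if it serves a CRI version this kubelet knows."""
    return version_response.get("runtime_api_version") in SUPPORTED_CRI_VERSIONS
```

Under this model, "beta" is a statement about the protocol's stability per Kubernetes release, not a standalone version number that runtimes track independently.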
D
What is the story today for Kubernetes and containerd? Do you have a list of supported containerd versions, or how does it work? Sorry — it may be a newbie question.
A
No, that's great, Sergey. That's part of the tension I see with what the CRI had intended to overcome. With the in-tree Docker shim we used to have to say "we support Docker 1.9, 1.10, 1.11, 1.12" — if I recall, during the early iterations of the project we supported three levels of Docker at a time. The statement of "do you support containerd", or "do you support <insert anybody else on the producer side of the CRI>", I would like to completely disassociate from being the realm of responsibility of the Kubernetes project, in the same way that the CNI, and the individual versions of a particular CNI provider, are disconnected from anything we think about. It would be up to the containerd community to say "we conform to this particular version of the CRI" or not — or the CRI-O community, or any of the other runtime communities and variations.
B
Yes, and in CRI-O we are tying the version of CRI-O to the version of Kubernetes, so it's kind of: 1.16 is 1.16, 1.17 is 1.17, and so on. I'm not so clear on containerd versioning, but there are folks on the call from that community — they've been pretty challenged.
A
Wanting to cut new releases of containerd to align with Kubernetes releases, yes. And then, if we think about forward evolution of the API in terms of alpha/beta/GA-style feature gates in kube: when we roll something out in an alpha phase, we've been pretty good as a community about not saying something could go to beta until it was satisfied by, ideally, more than one CRI provider. So I guess what I'm curious about is: is there a real tension here if we just—
F
I don't think I can speak to that — I've been more on the runtime side than the specific CRI integration for containerd — but I can say the project defines a release support scope: it's either a year, or whenever the version of Kubernetes that was out at the time is end of life, whichever of those two is the maximum. That's how it's been defined, and I haven't seen tension around that.
E
I can't comment on the actual support, but to answer your earlier question about the CRI being embedded into containerd: we have been testing both directions. You can build the complete set of binaries from containerd/cri and get the latest version of whatever CRI API they have vendored in there, along with a vendored known version of containerd; or you can build from containerd/containerd, which will vendor in a known version of the CRI. And so for some testing—
F
And, as somebody pointed out in chat, the README on containerd/cri actually defines the Kubernetes versions that are supported and how the support scope works.
A
So I'm trying to think — I'm not hearing major objections to the original spirit of the CRI, which was: define a runtime interface that meets the needs of Kubernetes, which meant the versioning was oriented towards Kubernetes, in my view, and not the inverse. It seems like as a broader community we've been successful with that to date. Some things have evolved since the CRI itself was originally done — Dims isn't on here, but the CRI itself is now externalized from the main Kubernetes tree for vendoring and dependency management and such, so things are simpler there. So from a pure versioning standpoint, it sounds like in going from alpha to beta or GA we don't feel a strong need to change our definitions of skew or anything right now, which is good to hear.

So then the next level of question is probably: are there issues the community has faced with using the CRI in production that we feel we want to close before moving out of our alpha state? Maybe to set some historical context or thoughts on this: are there other issues people have hit that we'd want to raise as real blockers to their particular runtime implementation being confidently used in production, due to a gap in the API?
B
So I think one issue: we have been using cAdvisor in production, and we did a quick test with our perf team and found some regressions. We are not sure whether it's the way we implemented it in CRI-O, or whether the behavior change — where the kubelet now talks over the CRI to get the stats — has made it worse. So we need to investigate and get to the bottom of that.
A
Yeah. And the other area I'll just call out here — and I don't necessarily view this as a blocker versus maybe a future evolution — is nodes with ever greater density of containers. I think we at Red Hat, at least, have had experience where there might be issues in the current CRI definition around supporting very large numbers of containers on a node.
E
Yeah, I was going to say: I think some users at Microsoft have wanted to have paginated query operations. I don't think it's been a blocker, but it has been identified that as we go to super scale we might start running into some issues with that.
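The CRI list calls return everything in one response today; a paginated variant, as hinted at here, could look roughly like the sketch below. The `page_token`/`page_size` fields and both helpers are hypothetical — the actual CRI ListContainers RPC has no pagination — so this only illustrates the shape such an API could take at high container density.

```python
# Hypothetical paginated list call: the runtime returns one page of results
# plus an opaque token the client passes back to fetch the next page.

def list_containers(store, page_size, page_token=None):
    """Return one page of container records plus a token for the next page."""
    start = int(page_token) if page_token else 0
    page = store[start:start + page_size]
    more = start + page_size < len(store)
    next_token = str(start + page_size) if more else None
    return page, next_token

def list_all(store, page_size=500):
    """Drain every page, as a kubelet-like client would."""
    out, token = [], None
    while True:
        page, token = list_containers(store, page_size, token)
        out.extend(page)
        if token is None:
            return out
```

The point of the token-based shape is that each RPC stays bounded in size even when a node runs tens of thousands of containers.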
G
One concern — not really a blocker, but an overall concern about the current state of the CRI — is the duality of who owns what, in terms of controlling cgroups on the Linux side. The kubelet does some direct modifications, and the runtimes do some modifications. In my understanding, the CRI interface was actually supposed to be the ultimate controller of how the container is run, so the kubelet shouldn't be assuming things; it should give the runtime the power to create all the needed infrastructure — the cgroups for the pod and for the containers, where all the limits and so on live.
A
It's a fair piece of feedback, right: we didn't call it the pod runtime interface, we called it the container runtime interface. So dealing with things like more than one container in a pod, and init containers, and all the discussions we had around sidecar containers and that type of thing — for today's discussion I don't think I'd want to evolve the meaning or role or scope of the CRI to ask "how come you don't handle all orchestration of a pod definition, excluding storage maybe". That's a future project we can take on in the SIG, but for the scope of what the CRI is today, I viewed it as the imperative API the kubelet calls to invoke creation and destruction of a container.
G
But, as you mentioned, things have evolved from the time the CRI was defined, and so has the current implementation. Practically, for VM-based runtimes you really need to care about the sandbox creation: you need to prepare the right size of VM based on the number of containers inside the sandbox and all the resources needed for those containers.
F
The CRI was designed more for a traditional Linux container, and not around some of the things we've since made happen, like VM-based containers, and it kind of shows in the interface. Some of its shortcomings, in terms of adding those kinds of things, stand out when you look at that stuff.
A
Yeah, so Brian and Alex — plus one on those comments. I think the thing I fear is that part of this discussion is part of that symptom, right? The CRI formed in what, 2017, maybe 2016 — my own memory is failing me — and by not defining a checkpoint of progress, to some degree we might have held back progress on those other areas.

I don't see there necessarily being an incompatibility in us saying the CRI, as a communication protocol between kubelets and runtimes, is the recommended path forward for the Kubernetes project, while at the same time evolving the definition of the CRI to expand use cases — in the same way that people respected the protocol between the Docker API and the kubelet as just the accepted thing to use, independent of any knowledge of how the Docker API was versioned and whether the kubelet could take advantage of it.

So I kind of feel like if we reach a checkpoint of saying the CRI is now beta or GA, it can give a clearer signal to the broader user community: we could deprecate the Docker shim and have more users adopting runtimes like containerd or CRI-O, which might be getting deeper investment, or any of the other VM-oriented ones that were discussed. Right now we're kind of doing a disservice to the community by keeping it in a quagmire. So my hope here is that we can make a decision point to say we've reached satisfactory progress, we have enough user experience to say we're comfortable with this — and then that lets us shift attention to what's the next thing after that, for the things we've learned.
G
That's practically my real question related to it. If we call it beta right now, will we be able, in this beta stage, to evolve the APIs to expose a bit more information about pod and container resources down to the runtimes?

A
Yeah — so I think the goal here is: it helps us remove the Docker shim from the core kubernetes/kubernetes repo, clean up our dependency management within that repo, and signal to users that you should move to one of the broadly supported container runtimes. So really the motivation is: how do we clearly signal it's time to get off of the Docker shim?
A
And that's partly why we're saying there's an unknown unknown, or a hidden truth here, which is: I think the majority of commercially supported distributions have moved to a particular CRI choice, and there's a cognitive dissonance in the open-source community where we may be sending the wrong signal to stick with the in-tree Docker shim.
G
Yeah, I totally agree about the Docker shim — for our customers, my first comment is: if you're running the Docker shim, get rid of it, switch to containerd or CRI-O. But the Docker shim deprecation is one part of the story. The second part of the story: in the beta stage, are we going to make any compatible or incompatible changes to the CRI API to get future development of runtimes, or of this interface at all?

A
Yeah, I would definitely. All that we'd be saying here is that our energies to support those capabilities would be 100% directed towards doing it in, ideally, a forward-compatible manner with the CRI. We don't want the CRI to change so drastically that no one implements it between particular versions of kube — I don't think that's a real fear — whereas we could definitely add new operations, or new arguments to those operations, to meet new use cases, for sure.
F
I don't know exactly what the difference between alpha and beta would be, but especially for GA it's a declaration of backwards compatibility — like, kube 1.20 is not going to break stuff that worked in kube 1.19, or something like that.
A
Well, I largely agree with that; that's a fair statement. But it would mean you'd be expected to upgrade your CRI implementation to keep pace with any feature that a future version of kube evolves. So I would still want to keep CRI versioning semantics tied to kube itself. Otherwise, one of the main value propositions of the project itself would have been lost, which was: have an interface that meets the needs of the present release of Kubernetes.
E
I thought of a potential CRI deficiency that I'd like to also bring up — and this is not entirely well formed, I haven't had enough time to think it through — but for Windows, specifically with Windows containers: the kernel version of the container that you're trying to start is very important, and all of the containers within a pod must be started with the same kernel version. We are kind of struggling to figure out the best way to support that.
E
You can also run with Hyper-V isolated containers — and this may digress into the VM-based containers discussion.
E
But we have the ability to start different pods targeting different kernel versions on Windows, and with the CRI API as it is today we're having a hard time doing list operations and understanding exactly which container images are on the machine — especially around multi-arch images, where a multi-arch image definition can have multiple image versions for multiple Windows containers that all target different OS versions. We're kind of hacking around it by passing around some annotations today.
E
It's just that plumbing — and I know Windows is kind of breaking the expectation that you can start any container for the platform in any of the pods.
A
Okay, trying to think about how to best approach that, because the expectation today is — maybe, if I probe a little bit — do you have concerns with the kubelet acting as the garbage collector of truth?
E
For images — well, today the kubelet issues the list images command, and the list images command takes a container image and a tag, or just the container spec; and if that happens to be a multi-arch image, the behavior or workflow around that is undefined. So the CRI can say: yes, I have that image — but it may have the image for a different platform, or a different OS version. And then, depending on what the behavior is — to either pull or skip the image — we could end up not having the image when we then move on to try and start the sandboxes for the pods.
A
Yes, they do. One of them — I thought EmptyDir volumes, one of the volume types, was a gap, but—
E
The EmptyDir volume support was added. I know there's a deficiency right now around single-file mappings into containers, which I believe was addressed at the OS level with the Windows security updates, or the cumulative updates, that got released this month — I haven't had time to validate that.
H
Yeah, so on that front: when I was initially involved in the list manifest work, at that time we only had architecture and OS in the image spec. So I think, rather than the kubelet handling it, it would be better from the Windows point of view to add another field in the image spec itself that can help you identify these images.
E
Right — so we have an example today where you might want to run a container on, let's just say, Windows version X and Windows version Y, and you have a multi-arch image that contains an image for Windows version X and an image for Windows version Y.
E
And if it's a multi-arch image manifest, the list image call doesn't know which particular Windows version it needs to query, and then pull image doesn't really know which particular version it needs to pull onto the node. To work around this, I believe there were some talks — unfortunately between Justin Terry and Lantao from the CRI side, who are both no longer really involved in the project.
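The core of the problem can be sketched as the selection step a runtime would need: given a multi-arch image index whose entries each carry a platform `os.version`, pick the digest whose Windows build matches the host, since process-isolated Windows containers must match the host kernel. The data shapes below are illustrative assumptions, loosely following the OCI image index layout, not any actual containerd code.

```python
# Hypothetical manifest selection for Windows: match on the first three
# components of the build number (e.g. "10.0.17763"), ignoring the
# revision, as Windows compatibility is decided at the build level.

def pick_manifest(manifests, host_os_version):
    """Return the digest whose platform os.version matches the host build."""
    host_build = ".".join(host_os_version.split(".")[:3])
    for m in manifests:
        if m["platform"]["os"] != "windows":
            continue
        if m["platform"]["os.version"].startswith(host_build):
            return m["digest"]
    return None  # no entry for this host build: the pull should fail early
```

With only the image name and tag in the CRI call, the runtime cannot make this choice per pod; that is the gap the annotations discussed below paper over.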
E
But we've got annotations that get added to the list and pull calls, where we can specify either the runtime handler name — and then containerd needs to do some mapping from that — or a particular kernel version, to make sure that the list image or pull image call honors a particular OS version. And that's working today, right — or did you...
C
Try it out with the annotation; if we like it, write a KEP and let's extend the CRI API to support the field directly.
E
I think there were also some discussions about what it would take to move some of those changes into containerd, which I think motivated some longer-term architectural decisions there, and those conversations are still ongoing — because containerd itself kind of assumes there's one image store based on a single platform, architecture, or OS version, and in order to plumb this through more cleanly there are going to be some bigger changes.
A
Okay — we could pause for a second and go back to the macro goals, then. As a community here, I haven't heard any objections to clearly signaling to open-source Kubernetes community users that we want to steer our user community towards a CRI choice, rather than signal usage of the in-tree Docker shim. I think as a maintainer community we would like to focus on the CRI path rather than on the in-tree Docker shim as a parallel, which I think many of us have trouble keeping together in our heads. So from that standpoint it doesn't sound like there are any macro objections to promoting the API out of alpha and giving a clear beta statement on it; and if we think about the larger goal of clearly signaling Docker shim deprecation — no objections on that one either. So maybe we can think through—
A
—the iterative API tweaks we want to make; it sounds like on Windows and image management maybe there's a couple we could suggest. But maybe we could pause here and ask: what are the mechanical steps we think we'd have to complete in order to move the CRI definition out of alpha to beta, and how many do we feel really matter? Even at the most basic level: if we go to the API now, we see the API is versioned v1alpha2.
A
So I think, if we try to strip responsibility for, say, pod-level cgroup management or pod-level metrics gathering out of the kubelet into the remote interface, to me that's a clear distinguishing point where it's not quite the container runtime interface but more of a pod engine interface — and that would be a new effort, a new activity, that might want us to have a v2 version of the protocol.
G
A good example of this dual responsibility is huge pages: the kubelet is responsible for setting everything at the pod level, and the actual runtime does it at the container level — sorry, via the container resources structure and its fields.
A
Actually, my memory is escaping me on that one, Alex — I thought I had merged a change in 1.19 to allow container-level isolation of huge pages.
G
That is one thing, but for the overall limit setting it's still the kubelet doing it at the pod level.
A
Sure, but the kubelet's doing that for CPU and memory at the pod level too, and it's doing it at the QoS level. In general, if we talk about management of QoS getting handed off to a runtime engine outside of the kubelet, that's more than just container or even pod: it also takes ownership of the QoS hierarchy itself and its relationship to the peer system services. It's a bigger problem than even just pod, right?
A
I guess, with all respect, Alex, I'm trying not to boil the ocean here. We have a current intersection point that we've had production usage on for well over a year by many parties, and if we define that checkpoint, it doesn't prevent us from iterating or evolving going forward.
A
From a node-to-pod boundary, or even a pod-to-pod boundary, right now the kubelet is taking on many of those responsibilities, and honestly I think more is involved in that: if you have that discussion, then you start questioning who should handle eviction and scarce resource management — and today that's the kubelet.
A
So I want to be careful in this discussion; that's a larger thing to chew off, I guess.
C
Right, and then it would be up to the CRI implementers — the integrations — to decide if they want to support two APIs at once, you know, with two different services.
G
Right, yeah, it's bigger. But my question — what I was asking — was more: if we do the beta now, do we have a path to evolve it in the future? Something like what was said — would it be a v2, or a v1beta2, or something?
B
Before talking about that bigger example, I have like three smaller items we can quickly discuss. One of them is charging image pulls to a pod sandbox; then cgroups v2; and user namespaces. Where do these land?
B
We don't have the pod sandbox information as part of the image pull, so the kubelet would have to create the pod slice and hand that information to us — either all of the pod requests, or just the pod cgroup — so we can charge it.
A
That actually brings up a good point: if there are discrepancies in where charges are happening — I think we would like these things to be charged within their bounding container as much as possible.
B
Okay, yeah — we do have that in already, I think, if anyone knows off the top of their head.
A
But only if the pull policy was Always, right? And so, yeah — the existence check from the node to the registry, to say "pull again", means that subsequent pull in the Always case should still be fine, going to the right place.
B
I think it's still okay if the first user pulling the image is charged for it. So I guess it's not really a gap — we just need to implement it in CRI-O, at least. I'm not sure if containerd is doing the pull inside of a cgroup. Mike, do you know?
A
And actually, let's just take a note — I think, Mrunal, you have a good list. I'm trying to look at the image puller right now in the kubelet to verify we are actually sending that to you, even though it's in the API. So that's one to talk through. The second item you raised — I'm sorry?
B
So, the cgroups v2 changes — and I'm talking about the non-converting case, where we want to figure out how the QoS handling will change, and how your pod specs will change, with all the changes in cgroups v2, where we can take advantage of the kernel improvements in v2.
A
Right now, for cgroups v2, we have the code on the kubelet side doing the conversion between the kube resource model and the CRI model, which right now is cgroup v1 oriented, and then we're depending on the runtime to detect that it's on a cgroup v2 host and convert it back to a v2 form.
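For CPU, that on-the-fly conversion is concrete arithmetic: v1 `cpu.shares` in [2, 262144] maps onto v2 `cpu.weight` in [1, 10000]. A sketch of the mapping, assuming the formula runc applies (the point above being that today the kubelet sends the v1-shaped value and leaves this step to the runtime):

```python
# cgroup v1 cpu.shares -> cgroup v2 cpu.weight, linear mapping of the
# v1 range [2, 262144] onto the v2 range [1, 10000] (runc's convention).

def shares_to_weight(shares):
    return 1 + ((shares - 2) * 9999) // 262142
```

A v2-aware CRI message would let the kubelet send the weight (and memory fields) directly instead of relying on every runtime to repeat this conversion.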
A
I'm not sure we have any complication where the kubelet would see it's on a v2 host and not want to send v2 resources, because the kubelet itself is also orchestrating the v2 taxonomy. So from my perspective: we have cgroup v2 in flight right now in the kubelet, and I would love to see an iteration on the CRI API definition that says "I see I'm on a v2 host, and we can just send v2 messages."
A
Whether it's as simple as doing a LinuxContainerResources v2, or adding the new v2 fields and only sending one or the other — that's fine too, yeah.
A
So the kubelet needs to support v1 hosts for a very long time going forward, and v2 hosts in parallel, right? I don't see a case where in Kubernetes we would say everyone must move to v2 now. So it's a matter of the kubelet having to support v1 and v2 forms, and I think that might mean particular CRI implementations would have to support v1 or v2 forms, with the kubelet just calling the right one for what it sees as the state of the host.
B
Yeah, I think that makes sense. I don't think we have to move the CRI to v2; my question is just: does that change any arguments, one way or the other, about jumping from alpha to v1 or v1beta?
A
I don't see an issue with us being able to roll that out, and I do like the idea of having a LinuxContainerResources v2. Part of the gap on that, though, is that I thought we tried to keep everything in line with the runtime spec, and we still need a v2 runtime spec.
A
I think we'd have to explain the differences. This goes back to things like: we still don't do kernel memory accounting properly.
A
And so I guess I'm comfortable with the fact that we don't have every unknown unknown solved. If we wanted to stage getting cgroup v2 alignment with the latest state of the runtime spec in the CRI, and have a LinuxContainerResources v2 as a prereq in that API definition, that seems fine to me. But we will reach a state where the kubelet is going to have v1 and v2 parallel support for a long time to come — that's even more controversial than runtime choices,
A
honestly, to say what your OS version is. I'd like to have as few jumps as possible, as minimally disruptive as possible, to recognize what feels like an accepted status quo now. And for the cgroup v2 stuff, it seems like we could just update our LinuxContainerResources to map to what I see in the runtime spec. I don't know if we'll get all the answers on today's call, but we've raised three topics, or at least four — the Windows image listing, and we can iterate on that asynchronously.
B
The third was the user namespaces changes, which are not as disruptive in my opinion; if we broadly agree on having a pod security policy knob, then it should be okay.
C
So if we decide that, you know, version 1.0 has the capabilities we've already validated, with no extensions, that's fine. And then if we add a new feature, we can just say that feature is in this particular version, and we can respond with that tag when we've got it in that version of our implementation.
A
That's it. I want to be respectful of time — and actually I need to check whether I have to join something else. I think we've reached some clarity of consensus that there aren't major objections to providing a signal. I think there are about three or four small items that we can chase down asynchronously, to say: do we think there is a backwards-incompatible change?

I think it's fair to say that if we declared a stable point — even if we said it was v1 — the kubelet wouldn't want to regress on that, but the community expectation would be that CRI implementers would still maintain versioning consistent with any evolution that might happen within a particular kube release. We would not try to break backward compatibility, obviously, in the same way that we haven't to date.

So maybe, for a couple of these items that were raised — the Windows one on image management, the three Mrunal raised — if we can do some investigation over the next day or two on what we would have wanted these to look like, let's get those identified, and then maybe next week we meet again to see if there were any other unknown unknowns. And the question I would have is: do we feel we need to go to even a beta stage, or would we just want to go straight to some statement of "this is generally supported"? I can't think of any beta characteristics I'd want to capture before making a GA statement, given that it's widely understood in the community that everybody is treating this as if it's GA today, and I don't want to do ceremony just for the sake of ceremony. But if people can think of good reasons—
I
Yes, we are — we didn't use to, two cycles ago. We moved a bunch of the CI jobs away from Docker to containerd, at least, and the CRI-O folks have been adding tests and CI jobs, and adding to testgrid as well. It's probably going to take a month or two more to get something solid under the belt for CRI-O, but containerd — yes.
A
We're covering the various paths, but the question is: who else do we have to ask? Is there anybody else we need to go ask for test coverage — any other CRI implementations?
A
It's really the variants, right: CRI plus Kata, CRI plus gVisor — those various communities. We could always have more, but I don't think we would view that necessarily as gating, so none stand out to me, Dims, right now as things that we need to do more on. Okay, then we are covered. And then the last question:
A
If we want to queue one other meeting up on this for next week, then we need volunteers to actually, mechanically, make it a reality — and let's queue that up for discussion on Tuesday next week, hopefully time permitting, or asynchronously. And that comes down to: I don't think the work is significant, but it probably touches crictl and the cri-api, and a minor tweak to the kubelet, as we get through these.
B
Right, yeah, I can do that once we reach agreement on the next steps.
C
Yeah, same here for the containerd side — I'll do a pull request for the package main changes.
A
So maybe it's a beautiful thing, actually. Mrunal or Mike — do you guys want to work together on just the KEP for the plan on this, as the outcome of today's discussion? It doesn't sound like we had any major things raised, and it would represent two voices from at least two implementing communities. Yeah? Sounds good.
I
Yes, Mike, I can help you review. Awesome.
A
I didn't want to put Dims on the spot anyway. I have to stop, but I appreciate you guys getting together, and I think this is important for us today. So—