From YouTube: Kubernetes SIG Node 20200519
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Okay, all right — apologies for issues with us figuring out how to use our meeting software. This is the May 19th SIG Node meeting. For today's agenda: is Javier available to talk through his first item?
C
This feature is about allowing users to set the FQDN of a pod into the hostname field of the kernel. Currently, Kubernetes sets the short name; when you decide to get an FQDN by setting the subdomain field of a pod spec, it will create an FQDN for you and it will configure /etc/hosts, but the hostname in the kernel will be the short name. So, with this feature, you will be able to set the full FQDN into the hostname field of the kernel.
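The behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual kubelet code; it assumes the standard `<hostname>.<subdomain>.<namespace>.svc.<cluster-domain>` layout for the pod FQDN, and the Linux limit of 64 bytes (63 usable) for the kernel hostname.

```python
def pod_fqdn(hostname: str, subdomain: str, namespace: str,
             cluster_domain: str = "cluster.local") -> str:
    # Assumed layout: <hostname>.<subdomain>.<namespace>.svc.<cluster-domain>
    return f"{hostname}.{subdomain}.{namespace}.svc.{cluster_domain}"

def fits_kernel_hostname(fqdn: str) -> bool:
    # The Linux utsname nodename holds 64 bytes including the trailing NUL,
    # so only FQDNs up to 63 bytes can be set as the kernel hostname.
    return len(fqdn.encode()) <= 63
```

With the feature off, only the short name (`web-0`) would land in the kernel even though the FQDN exists in /etc/hosts; with it on, the full string has to fit the 63-byte limit.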
C
This is something that traditionally Unix systems have been doing, and it has been encouraged since way before Kubernetes. So we think that with this feature we can increase the interoperability of Kubernetes, and all these people with legacy software built making assumptions about what the hostname is can basically use Kubernetes without making deep and risky code changes.
C
So, one thing that I couldn't verify yet is if it works on Windows. I have an initial approach for this where we are basically setting the full FQDN string in the sandbox. On Linux it works.
I had some colleagues help me do some tests. The behavior on Docker will basically be like `docker run -h`: in there you can put the full string, and on Windows he was able to see the full string that he put in there as the computer name.
I'm assuming that we will do some kind of integration tests or e2e tests that will include Windows. I couldn't find any CRI runtime that is specific to Windows, so I'm assuming it just relies on Docker, containerd, or whatever is underneath — correct?
D
C
So I created a KEP proposal, and I have an unresolved note about Windows to make sure we address that, and yeah, I was planning to drop by SIG Windows after talking with the SIG Node people. Okay, perfect, thanks. As I said, I talked with SIG Network last week; they were okay with this feature, and they basically had the same question: will it work on Windows right away?
E
Let me ask some questions. Even today, not all the pod features work on Windows. So, what I suggest — I briefly read that the KEP suggests introducing a flag to enable the user to decide whether it is the full domain name or not, which is good. Of course, we need to figure out whether it is working on Windows or not and analyze it; and if it's not working on Windows — if it is not compatible with Windows — is that acceptable?
E
Is this only limited to pods or containers running on a Linux box? I thought we agreed in the past that, as features converge, they should also support Windows containers and Windows hosts, so I just want to ask: did we change that?
D
With dockershim, for a couple of different reasons, it's been very hard to get any sort of changes around Windows containers pushed into the Docker runtime. So we've been, yeah, taking the approach of documenting it and, wherever possible, trying to add switches to make sure that, if it is a feature that can't run on Windows, users can either configure it or, you know, acceptable behavior works on Windows, as we're working towards containerd.
F
C
D
Are we saying running Linux containers on Windows? No, this is kind of out of scope for the work we're doing with containerd and Hyper-V isolation. We're focusing on just supporting running different OS versions of Windows, or containers targeting different OS versions of Windows, on those nodes. We'd have to look into that, and I'd probably need to poll some people.
C
D
C
And it will stay in ContainerCreating forever. It just will log events saying, 'hey, your pod's FQDN is longer than 63 bytes, we cannot create a sandbox,' because that's the message that is failing underneath when Docker tries to do that. So I feel like this proposal is mostly to set the mechanisms. In the two or three discussions that we had so far, we couldn't come up with a good solution for this. There are several ideas. One is: we do nothing and leave it to whoever wants to use this feature to create an admission controller that will control this for them. This seems to be the Kubernetes kind of governing approach: 'hey, let's use admission webhooks to make sure users don't do silly things, or we control our users better.' Another approach is creating an admission plugin, but that will have to...
C
...kind of break the different layers, right? Yeah, we would have to make assumptions about the underlying layers if, for example, you want to reject the creation of a deployment based on this, which is not so great. And another way that came up during the SIG Network discussion was: we can have the node...
C
...allow us to say, 'hey, this type of error is not recoverable,' and then the pod changes into an error — I mean, a failed state. That would also be much nicer than the current behavior of the pod staying in ContainerCreating forever. So I don't know if there are thoughts in this regard in SIG Node — what do you guys think?
A
Sorry, I'm just reading your KEP for the first time, so I'm not as far along on the questions as you might be on wanting to get answers. So, the current behavior for fully qualified domain names is basically: your default DNS name is your pod name, and then you can give an optional subdomain on the existing pod spec — except here it's adding a boolean that does...
C
A
I'm not questioning any of it; I guess I'm trying to map story one, story two, story three in the KEP. Story one was: I get foo back; I didn't set this field. Story two said: I set the name and the subdomain and didn't set the field, but I still got the same FQDN as story three. And so that's what I'm getting confused on.
F
The difference between hostname and hostname -f? Both hostname and hostname --fqdn return the same result — so that's what I was getting confused about. Yeah, hostname -f will always do the right thing; it does the right thing today and it'll do the right thing. I think the problem here is that there's a body of users who assume that hostname without the -f would give them the FQDN, and that's true on some distributions and not true in Kubernetes.
A
F
C
There are cases — as I have it right now, I basically check if the FQDN is longer than 63 bytes, and if it is, I just return an error. If you go a little bit down after the user stories, you will see how the pod stays in ContainerCreating, and then, when you describe the pod, you will see errors saying 'failed to create the sandbox: the FQDN is longer than 63 characters.'
C
If you create a deployment — well, you cannot create a deployment name longer than 63 bytes, but you can create a deployment that is 60, and then the replica set and the pod hashes will go over 63. So it seems that there is a different way of generating the pod names when this situation is hit.
F
C
I created one with sixty characters; it created a replica set by appending a bunch of stuff, and then the pod name truncated itself to around 55 characters: it removed two characters from my deployment name and then added five random characters to the pod name. So it was a bit strange, I don't know. Okay.
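The truncation just described can be sketched like this — an illustration of the observed behavior, not the real apimachinery code; the exact cut points are assumptions:

```python
import random
import string

MAX_LABEL = 63   # the 63-character object-name / DNS-label limit discussed above
SUFFIX_LEN = 5   # the five random characters the speaker observed being appended

def generate_name(base: str) -> str:
    # Truncate the base so that base + random suffix still fits MAX_LABEL,
    # which is why a long deployment name loses its trailing characters.
    base = base[: MAX_LABEL - SUFFIX_LEN]
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits,
                                    k=SUFFIX_LEN))
    return base + suffix
```

A 70-character base, for example, gets cut to 58 characters so the generated name lands exactly on the 63-character boundary.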
C
Yeah, it seems like... so that's the main thing. I don't know if we should try to solve this, or if we can just make it clear in the documentation, saying, 'hey, if you are doing that, you may hit this; maybe you want to use an admission controller' — and maybe we can even share the code of the admission controller for people to just copy.
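The admission-controller idea floated above could look roughly like the following. This is a minimal sketch of just the validation rule, not a real webhook: the function name, parameters, and the `set_hostname_as_fqdn` flag are illustrative stand-ins for whatever the KEP ends up naming the field.

```python
def admit_pod(name: str, subdomain: str, namespace: str,
              set_hostname_as_fqdn: bool,
              cluster_domain: str = "cluster.local") -> tuple:
    """Return (allowed, reason): reject pods whose derived FQDN would
    exceed the kernel's 63-character hostname limit."""
    if not (set_hostname_as_fqdn and subdomain):
        # Feature not requested: nothing to validate here.
        return True, ""
    fqdn = f"{name}.{subdomain}.{namespace}.svc.{cluster_domain}"
    if len(fqdn) > 63:
        return False, f"pod FQDN {fqdn!r} is longer than 63 characters"
    return True, ""
```

Rejecting at admission time surfaces the problem immediately, instead of the pod sitting in ContainerCreating with sandbox-creation events.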
F
Yeah, I mean, there really isn't a good answer here that crosses all those different layers, right? This is fundamentally the problem with the error handling in this sort of patch. There isn't a good answer short of every controller publishing how many characters it's going to add to the end, so that at the top of the stack you can subtract and subtract and subtract — but, wow, yeah.
C
A
In general, I've come to terms with the idea that Kubernetes is a controller that just keeps on trying, as long as we give the response that we're not succeeding. I've come to accept that computers are there to burn CPU cycles and it's okay to keep trying. Maybe I've been... [inaudible].
F
The problem with marking it as permanently failed on one node is that the replica set is just going to try to create it on another node, right? That's right. So you would have to do that all the way up, through all the layers, again. So I kind of lean towards what Derek is saying: you're not actually going to get much better. I think the most important part is to surface it in the most obvious place possible, which is probably the pod, honestly, right?
C
A
I think we can help update the contents of the KEP, if this is your first journey through the KEP process, yeah — and so some of the questions may or may not apply. I think, looking at the KEP roughly here, we have to ask ourselves whether this requires a feature gate, and all the other things that go along with changing the pod spec. Yes.
A
F
A
C
Yes. So the current implementation I shared uses a kubelet flag, but it was brought to my attention that maybe using a pod spec field is much more flexible and nicer to integrate with, so they don't have to sustain more changes. So, you know, so.
E
I want to see that, because many customers are asking for those kinds of things — a combination of opt-in or opt-out; it may be useful for the user to be able to decide. So I think the reason we would accept this is just because it's a reasonable request, and also to support existing applications. I think that's the key part here.
I
Okay. I think the natural next step — I was implementing the kubelet part of it. I did the API part of it, and I think both Tim and David had a look at it and gave me some feedback on it; thank you very much. The next part that I'm working on is before I send the PRs upstream in quick succession: first the API, and then, a few days later, the kubelet code.
I
The next step is to update the limits, if required, and that goes through UpdateContainerResources in the CRI API. And now we have the resources field also in the container status, to inform the user: okay, these are the new requests applicable to the pod reservation, and these are the new limits that are enforced on the pod, or the containers of the pod.
I
One thing I've discovered while going through this — especially implementing the dockershim side of the API — is that, on the client side, there is a way to query the memory: the stats give me the memory limit in bytes, the field that goes into the cgroup. But there is no way that I could find to query the CPU CFS quota or, for example, CPU shares — those three CPU fields from the cgroup — and this kind of got me thinking.
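The alternative the speaker alludes to later — going around the Docker API and reading straight from the cgroup filesystem — would look something like this. It is a sketch only, assuming cgroup v1 file names under a per-container directory; it is exactly the workaround the speaker says he would rather avoid.

```python
def read_cgroup_value(path: str) -> int:
    # Each cgroup v1 control file holds a single integer value.
    with open(path) as f:
        return int(f.read().strip())

def container_limits(cpu_dir: str, mem_dir: str) -> dict:
    # The three CPU fields mentioned above, plus the memory limit.
    return {
        "cpu_shares": read_cgroup_value(f"{cpu_dir}/cpu.shares"),
        "cpu_quota": read_cgroup_value(f"{cpu_dir}/cpu.cfs_quota_us"),
        "cpu_period": read_cgroup_value(f"{cpu_dir}/cpu.cfs_period_us"),
        "memory_limit": read_cgroup_value(f"{mem_dir}/memory.limit_in_bytes"),
    }
```

On a real node, `cpu_dir` and `mem_dir` would be the container's directories under /sys/fs/cgroup/cpu and /sys/fs/cgroup/memory.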
I
Okay, if there is another implementation — maybe containerd, I'm just taking it as an example here; as of today they're not implementing it, but once they implement it, I expect them to return all those three CPU fields in the Linux container resources, and the memory field. So this comes from the extension we made to the CRI API — let me share this.
I
This is a CRI KEP where we decided: okay, we're going to extend UpdateContainerResources to carry the support for Windows as well, and it has the Linux container resources; and then we extend the ContainerStatus API to return container resources, which is going to give us the memory and CPU information. This is the actual information — the actual limits that are applied for the CPU and memory settings in the runtime — and that's what we would ideally like to report to the user in our container status resources field.
I
If they error out — let's say we call UpdateContainerResources and we give it new limits, and they error out — then we keep trying; we don't report that we have updated it in our container status. But the question is: let's say it succeeds. There are two ways to go about it. One is, we can assume that both limits were configured —
I
— the memory and CPU limits that we applied are configured — and we go with that. The ideal way is to query this information from the CRI, which is the ContainerStatus message. So, let's say the kubelet were to restart: it's got to discover this information, and the best way to do that is to query it from the CRI. If that is not available, then we can do UpdateContainerResources with what the spec tells us —
I
— what the limits should be — and, if it succeeds, we report that. That seems to be the viable path at this point, because I don't see a way we can get this information if the CRI is not supporting it. But the issue is, with dockershim, as I was implementing the support to return —
I
— the information, I found that there's no API on the Docker API side to query it, unless I go around the Docker API and read the cgroup information directly from the cgroup filesystem, which I don't want to do. I have partial information — the memory limit is there — but I don't have the CPU information. So, on the kubelet side, how do we handle this? Let's say the CRI does not yet support it.
I
The options that I was looking at: potentially add a flag — one second. So, one option is to assume that zero means no information has been returned, and then we publish the information that we got last. If the container was created, then we can assume that the limits that we initially set are the limits that we are using; and if the kubelet restarted, we call UpdateContainerResources, and, if that succeeds, then we set the limits from that.
A
...the individual that is configuring their cluster to ensure that their runtime of choice has the capability needed. Like, I don't want to spend too long trying to think about ways of working around when a runtime doesn't have a capability, versus just, you know, requiring that capability in the environment.
I
Okay, so when we roll this out — let's say in 1.19 we get there — and we have this feature in alpha and people try it out: maybe dockershim is there, but other CRIs are not there, and dockershim is returning only memory information, because I don't have a way to get the Docker API to give me the CPU information. What's the best way we can handle this?
E
So basically, what do you want? You want the actual... what is configured in the runtime — is this the status — and what is on the host, the real state, right? So the source for this is particularly the cgroup: what the configuration actually is. Yes. In that case, I think that's pretty straightforward, if I understand. So, if it's zero, it's basically nothing; for CPU and memory in the kernel, you cannot set it to zero, right? So...
E
I
So that was the assumption that I was trying to make, which I was unsure about — which is the question I had here. So, if we get back zero, can we assume that the CRI does not support it, and we fall back to a second option that we have? Derek has been suggesting that, if we don't have that, we should fail the start. That's also a reasonable way; I didn't think of that, but that could also work.
E
A
This was a new feature gate to support updating container resources, and so there was an option that said: we could say that, if this feature gate is enabled and the container runtime configured is dockershim, then we just don't start the kubelet, right? And that's one of the reasons why the feature would still be in alpha, right? That's all I was suggesting. Okay.
I
Okay, my thought was: if dockershim does not support this container status — the new extension that we did, adding container resources — then we can work around dockershim, since it's in the kubelet, so we can do something to fix that. But another CRI that doesn't support it — how do we handle that? So we are...
E
Then the zero means something: it is not supported. But another thing I want to mention — I wanted to bring this up because last year we talked about the deprecation of dockershim and the switch to using containerd, and now we still have dockershim built in-tree, like the in-tree things. And also, next, I just saw that Alex Turner also brought this up, yeah.
E
I
They can do it in both cases. So it's a CRI implementation-specific detail: let's say they're trying to configure — in the case of cgroups — they currently write to the file, and if that fails, that's an error; and if they did write to the file and they have a way of detecting that the limits are not being applied, that's an error.
A
The CRI is going to apply a value, but is the expectation that the CRI provider is then reading back the value to make sure it matched what you had specified? If it would not have been applied, should they return an error, or should they return a non-error and just the latest resources that are read back from the kernel itself?
A
K
Reading is not always an option, because you can have a VM-based runtime; so, likewise, containerd or CRI-O itself is not able to read something from inside the VM. Also a question, actually: what happens if the runtime partially applies the resources? I don't know what this would be — for example, runc: if you update container resources with, let's say, five parameters, and only one was really applied? Yes.
A
J
B
I
There's an echo here. From the kubelet side, what we see is that, if we have both CPU and memory being updated, we only do it one at a time, so we can say: okay, the CRI will not run into a situation where it applied memory and could not apply the CPU limits for a particular resize. And the reason I need to split it out that way is the user could be increasing the memory and decreasing the CPU at the same time.
A
But I'm just trying to think: this operation can exist for a long time into the future, right? And so there'll be other resources that we'll want to update in the future beyond CPU and memory, yeah. And so I'm just trying to clarify what we think the behavior should be. My current understanding of the behavior was that, if the update couldn't be applied, then the CRI author returns an error.
J
I
If it cannot be fully applied, they should return an error. Actually, we should keep it that way; and when we send an update from our side, of course, we're only updating one resource at a time, and there is a good reason for me to do that, which is that I need to order the resizing. Let's say the overall limit is changing — say you're increasing the CPU: the first thing I do is increase the pod-level cgroup, and then go and increase the individual containers. And I sorted it so that I do the decreases first and then the increases, so that I don't, at any point in time, exceed the pod-level cgroup. And I can only do that — if there are multiple types of resources being updated, like CPU, memory, and something else tomorrow — if I handle one resource at a time.
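The ordering constraint just described can be sketched for a single resource type as follows. This is an illustration only — the step names and the dict-based representation are invented for the sketch; the invariant it encodes is the one stated above: per-container decreases first, then the pod-level cgroup, then per-container increases, so the containers never exceed the pod-level limit mid-resize.

```python
def plan_resize(current: dict, desired: dict) -> list:
    """Order a resize of one resource type across a pod's containers.

    current/desired map container name -> limit. Returns an ordered list of
    (action, container, value) steps that never exceed the pod-level cgroup.
    """
    steps = []
    # 1. Shrink any container going down, freeing room under the pod limit.
    for name, new in desired.items():
        if new < current[name]:
            steps.append(("decrease", name, new))
    # 2. Move the pod-level cgroup to the new aggregate limit.
    steps.append(("set-pod-level", None, sum(desired.values())))
    # 3. Only then grow the containers going up.
    for name, new in desired.items():
        if new > current[name]:
            steps.append(("increase", name, new))
    return steps
```

Handling one resource type per call is what makes this ordering well-defined; mixing CPU and memory in one plan would force cross-resource assumptions.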
I
So we could say that, you know, you will not be asked to change more than one resource at a time — which is what we're going to do — but I'd like to keep that closer to our chest for now and say: okay, if you are asked to update more than one resource, and if any of them fails from your point of view, you just tell me, 'I can't do it.'
I
The whole thing has failed, even though you updated partially. And then are we going to roll back? No — no rollback, because rollback would risk... okay, if the update failed for some reason, there's a good chance that rollback could also fail. So the best thing to do — the most robust thing to do here — is to just say: okay, you failed; I'm just going to keep trying until we get there.
L
...that we are changing the response of the UpdateContainerResources call, so that, after this protocol change, it will have a new field. So, when you update the container, the runtime will read out the current value for all the resources, and it will return it in the reply. So, in the case of a partial failure, you will get back what the current situation is. So we will not force the CRI to choose between either a rollback or a non-rollback.
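The protocol shape being proposed — the update call always reporting the values actually in effect afterwards — can be sketched like this. The function and callback names are hypothetical; this is not the real CRI proto, just the contract in miniature.

```python
def update_container_resources(apply, read_back, desired: dict):
    """Attempt to apply the desired limits, then always read back and return
    the limits actually in effect, so a partial failure still reports the
    real state instead of forcing a rollback decision on the runtime."""
    try:
        apply(desired)
        ok = True
    except OSError:
        # The update failed (possibly partially); no rollback is attempted.
        ok = False
    return ok, read_back()
```

The caller (kubelet or anything else) can then decide what to do with the reported state — retry, reconcile, or surface an error.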
I
We can certainly return the current state in the response; I'm all open to that. We extended ContainerStatus for that reason, essentially: okay, we can query it. So, once we do this, we'll query it and then — and this is kind of what I'm doing here — I call UpdateContainerResources, then I call ContainerStatus, get back the current information, and update the cache, which propagates over to the status — the v1 API status.
I
Yeah, I think we had discussed a while back whether we wanted the update response, and then we figured: okay, since we're going to be querying anyway — let's say the kubelet were to restart; to populate the status, it needs to discover the information currently on the node through observation — so we need that extension for status. So adding it to the response of UpdateContainerResources is a nice-to-have, but it's not required.
L
So I think what you should do is: if you cannot update everything atomically, you should fail; and then, if you return in the response the current resource state, you leave it up to the caller what it will do. So I think that's what I would do in this kind of API. That's why I brought this up, but yeah, you answered my question.
I
Yeah, the caller is just going to retry; the current status is going to be updated when it does the query, so I want them to be very independent paths. It's not like the update is dependent on this query call — the value propagation entirely comes from the CRI, and it gets reflected in the status anyway. I think, yeah — to summarize, Derek, I believe we had this discussion earlier.
I
K
B
There shouldn't be, but let me think about that — I think that's on purpose. It's the same thing that we do for, for example, creating a container: we have one control loop that tries to take the action to resolve the difference between what we want, which is a container, and what we see, which is simply no container; and then, later on, we have a separate loop for observing the actual set of containers running on the node and updating the status accordingly.
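The two-loop pattern just described can be sketched in a few lines — an illustration of the design, with invented names, not kubelet code: one loop acts on the difference between desired and observed state, and a separate loop only observes and reports.

```python
def actuation_loop(desired: set, observed: set, create) -> None:
    # Act on the gap between what we want and what we see;
    # never assume the action succeeded.
    for name in sorted(desired - observed):
        create(name)

def observation_loop(list_running) -> set:
    # Separately observe what is actually running and report it as status.
    return set(list_running())
```

Keeping actuation and observation separate is what lets the status always reflect reality, even when an action silently fails and is simply retried on the next pass.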
L
Right — I'm sorry, but why are you assuming that the client is always the kubelet? I mean, we are talking about a generic API that could be used by something else than the kubelet, and I think that we should not assume that. So I think how the kubelet behaves should not be reflected in the design of this API, because that would be an API design mistake.
I
K
I
Let me think about this and see where it goes. And to summarize, Dawn, what you mentioned does confirm my suspicions. If there's any other new information — that, okay, these fields can be 0 in the kernel — please let me know, because, as far as my observations go, when we query the memory limit, CPU shares, CPU period, and CPU quota — the CFS CPU parameters — we should never get back zero. So, if we do get back zero, we can safely assume that the CRI does not support it.
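The fallback rule just summarized can be stated as a tiny check — a sketch, with field names chosen for this illustration rather than taken from the CRI proto: since the kernel never reports zero for these cgroup values, a zero is read as "this runtime does not return resource information yet."

```python
# The four fields the speaker lists: memory limit plus the CFS CPU parameters.
CPU_MEM_FIELDS = ("memory_limit", "cpu_shares", "cpu_period", "cpu_quota")

def cri_reports_resources(status: dict) -> bool:
    # A zero (or missing) value in any field means the runtime has not
    # implemented the resources extension, so fall back to spec-derived limits.
    return all(status.get(field, 0) != 0 for field in CPU_MEM_FIELDS)
```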
E
B
E
...that you can set that to zero, but the kernel will convert it to some nonzero value. In the older kernels, if you set it to zero, the kernel — if I remember correctly, there were support cases about those kinds of issues in the past — but in fact the latest kernel basically rounds it to some minimal value: there is some minimum number for CPU, and for memory it is four megabytes, if I remember. They all round it to some real number.
D
E
I also want to follow up on what was mentioned about some other implementations — kata. I don't think this feature will be really meaningful for the kata situation, because there the pods are in a box — a hypervisor, or any other hypervisor solution — and it is not just the hypervisor, because of how the devices actually work.
E
With cgroups you can still read those values — it is more lightweight — but if you are using a VM, basically it is a dedicated VM for the whole group of containers. So here it is more about how you have this group of containers as the pod, and how you dynamically share the resources with the other pods on that host. So what I described here, for this particular feature — when I think about it — is not really useful.
E
E
I
Right, that's in fact a feature we've been trying to solve; we're looking at ballooning as one of the potential ways to reclaim unused VM memory and then share it — do better resource sharing. We don't have a lot of progress on that yet; hopefully by the next KubeCon we might have something. But right now we were just sticking to containers. Kata, as far as I remember, does cgroup mapping, so it's able to change that and read it — it's a lightweight VM, Clear VM.