From YouTube: SIG Node Sidecar WG 2023-01-31
Description
Meeting notes and agenda: https://docs.google.com/document/d/1E1guvFJ5KBQIGcjCrQqFywU9_cBQHRtHvjuqcVbCXvU/edit#heading=h.m8xoiv5t6qma
GMT20230131-170434_Recording_1482x1120
A
Hello, hello. It's the last day of January 2023, and this is the SIG Node sidecar containers working group meeting. We have some agenda items today, so let's get right into it. The first agenda item is one I added: I made a presentation of the sidecar KEP to a security team internally at Google, and generally they received it very well.
A
One thing they said may be helpful, even though it will likely only be useful in a very big organization, is the ability to specify some kind of no-exec policy on init containers. If we could create a rule for init containers specifically and apply a blanket policy like that, it would be very useful. I'm not sure; I didn't dig deeper into how that would work. Is it already possible?
A
Or do we need to do something about it? But this was one of the ideas for how we can have more privileged sidecar containers, which are defined as init containers. Today cluster administrators need to pick specific individual containers to disable exec on, but with sidecars you could probably just apply a blanket rule to all init containers, which may help in certain scenarios.
A
Does anybody know more about how this policy works, and whether it is possible today?
B
We'd have to make the case that no-exec is useful in that context; there are enough of the flags already there. I think it'd probably be good to at least get a pretty good capsule definition, but it'd be a good topic.
B
I don't think anybody would really have an issue between init containers and regular containers, and this would probably also apply to ephemeral containers, because the classes of use case for ephemeral containers might lead you towards needing some of that. In init containers you'd be doing the classic Linux thing of trying to drop privileges, and the regular containers would be denied that.
A
Okay, yeah. I'll probably put it with the beta targets, by the goals, so we're not gating the KEP on it. Because I think whatever decision we make, it's additive; it's not a requirement.
A
The next thing I wanted to discuss: I think the KEP will go in pretty well, and we don't have any large concerns any longer. The only question left is what we use to indicate that a sidecar container has already started.
A
There are a few options for that. We can rely on the readiness probe: we wait for the container to become ready and then decide that it has started, and start the next init container. Next, we can wait for the startup probe to complete; once the startup probe completes we decide the container has started and we continue initialization of the next init container.
A
And lastly, we can do the same as we do for regular containers: we can just wait for the postStart hook to complete (it's synchronous), and once it has completed we can proceed with initialization of the next containers. PostStart is very easy to implement; it's one synchronous call, and basically when you start the container you just wait a bit longer for postStart and you are done with that.
A
The problem with this approach is that people will likely need to implement more in their postStart hook, so it will contain common logic very similar to a startup probe: waiting for something, maybe polling some endpoint, waiting for an endpoint to come up. Also, the postStart hook is very hard to troubleshoot today; while it runs we don't change the status of the container.
A
We just kind of keep it in limbo: something is happening, and we have no visibility into what's happening. That might be fixed, but this is the current state.
A
Okay, so yeah, I agree; it's not an argument against, it's just an argument to fix it and make it better. Next thing: startup probes. Startup probes are harder to implement, and I think what makes them harder is that we need to have some state for a container that has started but whose readiness is not initialized yet.
A
There is no state like that today. Today the startup probe ends, readiness kicks in, and readiness marks the container as ready, so it would be extra work. And the difference between the startup probe and the postStart hook is that for very small and trivial sidecar containers that don't need to wait for any initialization, startup will become slower.
A
You have this jitter before we start probing, then the probe kicks in, then the state changes, and only after that will the next container start initialization. So for the very trivial case we will slow down, but in general, for the regular case, I think it will be very similar to postStart. And lastly, the readiness probe, which we went with initially, is quite limiting, because we tie together readiness as "I'm ready to serve traffic" and "I'm ready for initialization to continue".
A
We eliminate the scenario where the sidecar container decides it has started and it's fine to proceed with initialization, but it's not yet ready to serve traffic: it wants more of its initialization to happen in parallel, it doesn't want to delay anything else, it just wants more time for itself.
A
This would be limiting for that scenario. And it also delays things even further: between the startup and readiness probes, when the startup probe completes you need to wait for readiness to kick in, and then readiness needs to succeed at least once, so it's extra time spent on initialization.
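For illustration, here is a rough sketch (not from the meeting or the KEP) of a sidecar container that uses both of the existing gating mechanisms discussed above, a startup probe and a synchronous postStart hook, expressed with the current core/v1 Go types. The container name, image, port, and wait command are made-up placeholders.

```go
package sidecar

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// proxySidecar sketches a sidecar that gates its own startup two ways:
// a startup probe that the kubelet polls until it succeeds, and a postStart
// hook that runs synchronously right after the container process starts.
func proxySidecar() corev1.Container {
	return corev1.Container{
		Name:  "mesh-proxy",                            // placeholder
		Image: "registry.example.com/mesh-proxy:latest", // placeholder
		// Startup probe: readiness and liveness probing only begin after
		// this succeeds (or the failure threshold is exhausted).
		StartupProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(15021),
				},
			},
			PeriodSeconds:    1,
			FailureThreshold: 60,
		},
		// postStart hook: a single synchronous call; the kubelet waits for it
		// before treating the container start as complete.
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.LifecycleHandler{
				Exec: &corev1.ExecAction{
					Command: []string{"sh", "-c", "until nc -z localhost 15021; do sleep 1; done"},
				},
			},
		},
	}
}
```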
B
Can
you
can
y'all
hear
me
yeah
sorry,
yeah
I'm,
just
driving
and
it
was
a
bad
signal.
B
Looking for what is the clearest indication of a gate: if we have overlapping constructs, and in this case, so far at least, we've talked about startup probes and postStart hooks, both of those qualify, because those features exist to gate startup. And because the construct we're basically offering here with the sidecar is the ability to fail initialization if a series of startup criteria haven't been met, I would argue that the presence of both of them has to be taken into account.
B
The
post-start
by
itself
has
a
pretty
clear
argument
that
it
has
to
be
succeed
to
be
considered,
started
and
so
really
I.
Think
I
would
break
this
down
into.
Do.
We
need
anything
within
the
container
or
not,
and
if
the
case
that
we
have
to
have,
it
is
strong
enough,
we
should
we
I
think
we
have
to
support
both
mechanisms,
and
if
the
user
chooses
to
use
both
of
them,
we
need
to
have
the
clear
rationalization
for
what
their
behavior
is.
B
While there is a gap, as you noted, in indicating that the postStart hook has succeeded, we still have a linear sequence. Try to keep as few things as possible from behaving differently: if you've added a probe or a hook to the spec, you are indicating that you wish to gate startup. So just from a consistency and user-expectation point of view, I'm buying the argument here that both of those are blockers to moving past the init container.
B
But
you
can
choose
not
to
use,
you
can
choose
not
to
use
that
probe.
I.
Think
is
what
the
argument
would
be,
and
if
you
have
no
probes,
then
we
just
go
right
away
the
moment
that
container
is
started
like
either
it
starts
in
parallel,
or
we
just
start
it
and
move
on,
because
the
implication
of
restart
always
would
be.
If
there's
nothing
gating
you
moving
next,
once
we
understand
that
we
can
start
like
we
can
get
to
the
point
of
calling
start
container.
A
I,
don't
know
it's
fine.
This
eliminates
this
argument,
so
yeah
I
remember
now
that
another
argument
again
startup
props
was
that
regular
containers
doesn't
behave
the
same
way.
So
you
like,
we
change
the
behavior.
If
we
need
containers
to
delay
for
startup
probes
to
block
on
Startup
probes
and
it's
different
and
different
is
always
puts
on
the
question.
B
I
and
I
don't
either
I
have
to
go
back
and
re-review
it.
It's
been
a
while,
since
I've
looked
at
startup
probes,
but
certainly
if
an
end
user
would
believe
that
they
block
startup
and
that
we
communicate
that
in
other
ways,
I
think
you
know,
my
argument
would
always
be
try
to
integrate
the
other
existing
features
in.
We
don't
get
to
ignore
existing
features
in
a
design.
A
You're
breaking
up
again
so,
let's
okay,
so
feedback
here
is
SATA
probes
generally
good
thing.
Maybe
we
need
to.
We
likely
need
to
account
for
both
data
props
and
post
start
cook.
Is
there
any
other
opinions
on
them
call
today?
A
Let
me
move
past
it
so
yeah
with
this
two
I
think
we
like
I,
don't
see
any
other
concerns
on
the
cap,
so
cap
seems
to
be
not
have
any
other
unresolved
questions.
I
think
we
can
write
down
what
we
want
on
the
startup
hook
and
like
remove
on
the
result
here
and
once
we
do.
That,
like
should
be
fine.
C
Hey
Sergey,
okay,
this
is
vide.
Oh
I
took
a
sorry
I've
not
been
part
of
this
group
for
a
little
while-
or
this
is
the
first
time
I'm
coming
in
so
I
might
have
missed
a
lot
of
context,
but
I
added
a
couple
of
comments
to
to
the
cap
and
in
particular
about
the
resize.
The
having
the
resources
allocated
reflected
in
the
Pod
I
am
not
in
favor
of
doing
that.
The
it
seems
to
be.
C
There
is
another
effort,
that's
going
on
to
bring
pod
level
resources
as
well,
so
it's
going
to
conflict
with
that
and
I
don't
readily
see
what
benefit
they
get
from.
How
do
we
justify
that
being
in
the
API
when
you
can
compute
them
with
a
helper,
and
the
second
thing
is
when
a
pod
is
not
scheduled
for
the
reason
of
resources
not
being
available.
It's
already
reflected
in
the
Pod
conditions,
if
I
remember
correctly
out
of
CPU
or
out
of
memory
conditions
show
up
there.
C
So
the
information
is
already
present,
so
I'm
struggling
to
see
why
this
is
needed
and
even
if
we
were
to
add
it,
it's
probably
going
to
collide
with
the
future
potential
use
case
for
having
resources
for
the
Pod
level
itself.
C
There
is
one
the
Pod
overhead,
that's
a
different
thing,
but
there's
what
we're
talking
about
here
is
a
pod
level
resources.
There
I
believe
you
pointed
me
to
a
cap.
A
To give more context for everybody on the call: as part of the KEP we changed the formula for how we calculate the resources needed for a pod, and the feedback was that we need to expose it somehow, be it a metric or some other way, and I think another piece of feedback was that pod status seems to be a good fit for this field. I'm sorry, I didn't see your comments. And you're saying that you're not in favor of having it in pod status?
C
Yes,
I,
don't
see,
I,
don't
really
see
why
we
need
that
when
we
can
provide
an
API
to
compute
that.
B
We
already
actually
have
a
beta
level
metric
that
represents
this,
and
that
is
that
uses
the
same
helpers.
That
was
part
of
the
reason
to
add
those
metrics
for
future
change.
I
will
say:
that's
a
really
really
good
point
that
I
had
not
considered
and
actually
needs
to
be
completely
discussed
in
the
cup.
We
are
changing
the
observed
resource
model
through
the
addition
of
a
new
feature,
and
there
are
migration
implications.
People
forget
to
implement
it.
What
are
the
impacts?
Etc.
A
I'm
not
sure
I
follow
part
of
forget
to
implement
it.
The
support
status
and
we
will
implement
it
right,
so
it
will
be
implemented
like
there
are
like
proposal
to
changes
into
places
like
that.
One
in
schedule,
while
both
is
still
painting
can
then
couplet
will
update
it
whenever
it
has
a
change
right
or
we're
only
scared
of
it.
C
Yeah,
so
another
sort,
I
think
this
probably
has
already
been
discussed.
The
idea
of
introducing
another
container
class
like
sidecar
containers,
just
looking
at
the
service
mesh
example,
it
looks
like,
or
one
thing
that's
common
to
all.
These
is
that
you
want
to
have
a
init
container
that
doesn't
terminate
that
continues
to
run
and
to
accomplish
that,
and
you
want
you
may
want
to
start
this
before
the
unit
containers
happen,
which
essentially
brings
the
question
of
okay.
C
There
are
some
containers
which
have
a
ordering
requirement
where
you
want
to
start
this
and
then
it
may
it
may
you
want
to
keep
it
running,
but
if
you
have
the
restart
policy,
that's
always
instead
of
having
it
just
always,
why
don't?
You
have
always
and
never
and
then
have
a
container
successfully
exit?
It
will
accomplish
the
same
tasks
and
then
you
are
giving
the
sidecar
container
class
the
full
full
treatment
of
the
resources.
C
A
Yeah,
that's
the
idea,
and
the
next
step
for
us
was
that
the
formula
that
we
will
end
up
is
quite
complex,
so
we
need
to
like
iterate
through.
You
need
containers
and
calculate
all
the
maximums
like
sliding
window
maximums
and
then
sum
up
of
the
regular
containers
and
then
add
product
ahead.
So
it's
quite
involved
computation
and
to
well.
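As a rough sketch of the formula being described, for a single resource in millicores (illustrative only; the real calculation would also cover limits, every resource name, and overhead from the RuntimeClass), assuming sidecars are restartable init containers:

```go
// effectiveCPURequestMilli sketches the calculation described above.
// initReq lists init-container requests in declaration order, restartable marks
// which of them are sidecars, mainReq lists the regular containers, and
// overhead is the pod overhead. Illustrative only, not the real implementation.
func effectiveCPURequestMilli(initReq []int64, restartable []bool, mainReq []int64, overhead int64) int64 {
	var sidecarSum, peakInit int64
	for i, req := range initReq {
		if restartable[i] {
			// A restartable (sidecar) init container keeps running, so its
			// request stays reserved for every later initialization step.
			sidecarSum += req
			if sidecarSum > peakInit {
				peakInit = sidecarSum
			}
			continue
		}
		// A regular init container runs on top of the sidecars started so far.
		if use := sidecarSum + req; use > peakInit {
			peakInit = use
		}
	}
	var mainSum int64
	for _, req := range mainReq {
		mainSum += req
	}
	// Steady state: all sidecars plus all regular containers.
	steady := sidecarSum + mainSum
	if peakInit > steady {
		return peakInit + overhead
	}
	return steady + overhead
}
```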
B
There are the ones that run for the lifetime of the pod, and then there are ones that run to completion before the lifetime of the pod. Every container that can run for the lifetime of the pod adds to the pod's total resources, and every one that doesn't contributes to the peak. I think that's basically what we need to formalize, and that is the strongest argument I've heard so far for a different struct, for introducing a new type of container.
B
That would be an additive construct that we add into the other calculations. Because my previous point, where I got cut off, and I'm sorry, was just that any time we add a new behavior, there's the obviousness of the change that clients have to take into account; in this case custom schedulers, the core scheduler, anyone who has written custom logic out there to calculate resources.
B
That's a more complicated process when it's integrated into existing types than when it's a net-new field or something. So I don't think the argument is that we can't do it in init containers, but this is one of the best arguments I've heard for at least considering the total implication for resources and what's easy for people to react to. That's another factor: it's not just what's easy for the end user, it's also what's easy for existing consumers of those APIs to react to.
A
Okay,
I
I
bought
it.
Everybody
need
to
change
things,
I
I,
just
trying
to
understand.
How
is
it
affecting
the
fact
that
we
will
We
expose
this
as
a
calculated
value,
or
we
don't
expose
this
calculated
value.
B
So
we
expose
the
metric
today
that
exposes
the
calculated
value
already.
That's
a
part
of
you
know
the
metrics
of
the
scheduler
disposes
that's
intended
for
consumers,
who
are
more
capacity
planning
schedulers
themselves
will
have
to
react
to,
and
you
know
add
this
feature,
that's
something
that
they
would
have
to
do.
Normally.
That's
an
understood
part
of
it
I
think
the
argument
that
we
need
to
expose
the
calculated
value
on
the
API.
B
Driving
that
metrics,
for
precisely
this
reason,
it's
hard
for
end
users
to
calculate,
and
we
made
an
argument
there.
We
we
would
not
put
it
in
the
API
at
the
time
like
we
would
not
have
calculated
metrics
values
the
API.
We
always
reassessed
that
I,
don't
think
we
have
three
now,
for
this
particular
use
case.
C
You're
breaking
up
quite
a
bit.
Could
you
please
repeat
that
pattern
I?
Could
we
couldn't
hear
everything
at
least
I
missed
I,
don't
know.
Maybe
the
problem
is
on
my
end.
Did
everybody
get
that.
C
A
Yeah, I followed all the way to the statement that we didn't expose it before and we don't want to expose it specifically for this feature, but I didn't get the reasoning.
C
Right
yeah,
my
my
reasoning,
is
a
little
different
from
that
I
see
that
there
is
a
potential
for
conf
conflict
with
another,
odd
level,
specification
of
requests
and
limits.
C
Besides
that,
the
defined
value
of
resources
allocated
at
least
using
that
same
terminology,
it's
currently
defined
value,
is
very
different
from
what's
being
proposed
here.
The
main
argument
against
it
for
me,
is
that
this
is
set
as
the
as
the
amount
of
resources
allocated
to
each
container
at
the
container
level
by
the
kublet
when
it
admits
the
Pod,
so
pod
is
admitted
in
an
All
or
Nothing
deal,
so
all
containers.
If
their
requests
are
met,
then
it's
going
to
get
admitted.
If
not
it's
going
to
be
rejected.
C
The
Pod
get
gets
that
the
kiblet
gates
keep
that
does
the
gatekeeping
for
that
now
introducing
this
at
the
Pod
level
as
a
summation
seems
redundant
and
I.
If
we
can
get
that
from
the
API.
The
consumers
of
this
is
users.
We
can
have
an
API
and
have
that
via
Cube
CTL
describe
part
describe
pod
and
programmatically.
If
the
API
is
available
like
a
metrics
consumer
or
something
can
call
that
and
get
the
information
and
I
would
if
the
yeah,
because
sorry
go
ahead,.
B
I
I
was
gonna
say
like
stepping
back
like
we
have
previously
multiple
times
made
the
argument
that
the
calculated
values
should
not
show
up
in
the
API,
so
I
don't
know
that
the
bar
is
that
high
to
say
we
shouldn't
do
it.
We
do
have
a
a
API
in
the
form
of
a
metrics
endpoint
that
provides
it.
I
wasn't
clear
whether
you
were
arguing
for
a
third
type
of
API
or
whether
you
were
justifying
that
you
know
more
justification
that
we
shouldn't
add
it
to
pod
status.
C
Two
things
yeah
one
is,
we
should
not
add
it
to
part
status
and
I
was
also
arguing
for
a
third
type
of
a
third
container
type,
so
you
already
have
or
fourth
in
this
case
we
already
have
init.
We
have
the
normal
containers.
C
We
have
ephemeral,
containers,
add
a
sidecar
container
type,
the
overall
just
reading
this
I
I,
don't
know
the
historic
discussions
behind
this,
or
if
this
was
a
intentional
choice,
but
reading
the
kip
I
went
through
it
yesterday,
I've
been
a
bit
sick,
so
I
really
haven't
grasped
everything
with
a
clear
mind,
but
going
through
it.
It
felt
like
we
are
introducing
the
restart
policy
to
differentiate
kind
of
allude
to
this
sidecar
container's
existence
by
overloading
the
adding
restart
policy
and
kind
of
overloading
the
meaning
I
just
feel.
C
Why
not
be
direct
and
explicit,
and
just
say
yes,
this
is
an
industry
accepted
industry
evolved
and
accepted
thing.
That
concept
is
there,
let's
bring
it
in
and
follow
the
common
rules
and
then
sorry
and
that
I
might
be
of
my
reservation
here.
B
But
there
are
some
other
things
that
are
clear,
so
we're
basically
talking
about
that
trade-off
and
it
would
probably
be
use
case
driven
if
we
have
a
strong
set
of
ordering
use
cases
that
we
can,
that
we
can
argue
for
that,
might
have
more
weight
than
this
and
I
think.
The
resource
argument
is
an
example
of
the
intuition
of
how
a
user
and
a
system
consumer
approaches
the
intermixed
in
it
containers
increases
the
amount
of
confusion
that
all
integrators
will
have
leveraging.
This
feature
correct.
C
Yeah, that's primarily it; that seems to be the reason why I would want the container to be a separate class. And at least for initialization, when you start it up, you go in the order in which it's specified in the YAML, and then have a restart policy with not just Always but Always and Never; that gives flexibility. Let's say you have sidecar container one and sidecar container two, and sidecar container two wants to use a connection that was established by sidecar one, some kind of service mesh connection, to get some information and then exit. That could be done in the application container, unless it needs extra security privileges; that could be a reason. But it can exit successfully, and that's a perfectly legitimate thing to do, isn't it?
A
It's
not
exactly
yeah,
you
need
container
section
into
like
infrastructure,
container
section
or
whatever
outside
Car
Country
intersection,
so
it
will
achieve
exactly
what
you're
saying
so
it
will
intermix.
You
need
containers
inside
car
containers
and
it's
already
exists
like
it's
just
coding.
It
containers
for
some
historical
reasons,
mostly
so
yeah
I,
think
we
went
through
this
argument
and
we
end
up
implementing
the
same
thing
as
we
Implement
today
already.
A
But I want to return to this resource calculation. I'm not super attached to exposing it in pod status; I just think it's a very natural way to expose things, mostly because we have two KEPs: this KEP will change the way we calculate, and then there is another KEP that Renee is working on, dynamic resources.
A
With that, the pod request is somewhere between what is requested and what is about to be applied, so you need to pick the maximum of these two values. So the formula will get more and more complicated, and a metric wouldn't know about that status: the metric doesn't know whether we already applied a new request or are about to apply it. So when do you report? You wouldn't be able to get this precise information from the metric itself.
B
Yeah,
you
would
effectively
know
after
the
cubelet
has
made
the
update
and
then
there's
a
period
where
it's
been
requested,
but
not
applied
and
and
to
be
fair.
The
scheduler,
like
all
schedulers,
need
to
know
the
effective
calculation.
The
cubelet
also
needs
to
have
a
similar
calculation.
There
is
an
argument
that
why
is
the
cubelet
doing
a
calculation
that
it
doesn't
need
to
if
other
consumers
have
to
do
that
exact
same
calculation,
so
that
that's
certainly
a
good
argument
for
status
in
a
larger
context
has.
A
No, not that I know of, and that's why I came here, and I really appreciate you reading through it and giving feedback. So I'm trying to understand how we can resolve it. One way is to make it a beta requirement and say that in beta we'll decide whether we want to expose it. It may work; I don't know how many people will try to implement it right away and really, really need this, but you can always give them some helper methods.
B
Well,
honestly,
I
I
would
probably
say
I
think
even
based
on
this
discussion,
like
beta
going
into
data
for
me,
is
going
to
be.
Consumers
of
this
have
demonstrated
a
number
of
patterns
and
have
given
feedback
on
which
patterns
are
frustrating.
So,
like
some
of
the
arguments
here
are
like
the
ordering,
it's
not
clear
to
me
that
we
have
a
really
broad
canvas
of
ordering.
B
You
know
examples
and
use
cases
called
out
yet
because
we're
still
kind
of
working
through
it.
We
can
imagine
some
a
really
good
analysis
of
people
trying
to
use
this
and
saying
we'd
be
okay
with
you
know,
a
net
container
and
then
a
regular
container
or
people
saying
you
know
we
have
to
have
some
of
these
more
complicated
constructs
and
then
there's
people
who've
done
10
15,
minute
containers,
they're
kind
of
the
outliers,
but
I
have
seen
decomposed
logic
and
I've.
D
So one of the uses for the status API was just that I've seen users currently struggle with trying to figure out how many resources a pod needs in order to schedule. If you have a pod that's pending, not all of them know that, well, you need to go look at your init containers and take the max. I think we sort of get by with the fact that, for the most part, workload pods end up having larger resource requests than their init containers.
D
So for the most part it doesn't matter, but I can see that if kubectl describe pod does show that value, that also solves a problem for the end user, because they can just describe a pod and get the value. I guess what would be frustrating to a user is a pod that fails to schedule where the condition just says "I don't have enough CPU on a node", and now I need to go look at this multi-step formula, manually look at the pod spec, and calculate these values to figure out exactly what size of node I need to launch to replace this node with, to actually get my pod to schedule. But yeah, if kubectl describe pod shows it, and you have some sort of helper methods within the Kubernetes code that other schedulers can use, then that also seems to solve the problem without exposing it as a status.
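To illustrate the multi-step arithmetic being described, here is a made-up example reusing the effectiveCPURequestMilli sketch from earlier; the numbers are invented purely to show what a user would otherwise have to derive by hand.

```go
package main

import "fmt"

func main() {
	// Made-up numbers: a 500m sidecar declared first, then a 2-CPU init
	// container, then a 1-CPU app container, and no pod overhead.
	//   peak during init  = max(500, 500+2000) = 2500m
	//   steady state      = 500 + 1000         = 1500m
	//   effective request = max(2500, 1500)    = 2500m
	req := effectiveCPURequestMilli(
		[]int64{500, 2000},  // init containers in order: sidecar, then regular init
		[]bool{true, false}, // only the first one is restartable
		[]int64{1000},       // app container
		0,                   // overhead
	)
	fmt.Println(req) // 2500
}
```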
B
We could also expose a virtual sub-resource on a pod that would convey some of that information. I think this is also a good example of, and I appreciated it, Todd, how it would help to look at the use cases holistically, like the three or four different types of use case. I'm a system administrator wanting to do capacity planning; that was actually what the original metrics were intended for.
B
So: ask the scheduler, same model, same code. Sergey brought up the one where you have a multi-stage process whereby you set an intent and then the kubelet realizes it and reacts, and needs to convey that correctly. And then you also have the impossibility aspects: I'm trying to schedule something, it's a dynamic system, and some of the information is pretty far downstream; having a place to coordinate and report what resources are needed is actually very useful as well. And then that gets into: do you need to do it in bulk, which is kind of the capacity-planning use case? Is it more of a diagnosis? Is it part of fundamentally centralizing the calculation so that fewer components of the system have to do it? Because I think another point, as we're bringing this up, is that the resource model is generally defined by the API in Kube. We might have extensions that have opaque resource models outside of the core pod model, but the expectation would be that the definition of the pod model is what defines the resource model: the KEPs that are approved, the feature gates that are on, and the expectation that all of the kubelets and all the schedulers on the system respect those values is kind of the default assumption we have, the one most users work within.
B
Is it unreasonable of us not to try to summarize that, versus pushing it into all the different places in the code? If we make an API change that says this field should exist, we're effectively saying that if you're looking at this API object, you should assume it works this way. Why are we making everybody go recalculate it themselves, describe included?
A
So, Clinton, I see what you're saying, that this may be one way to implement it and there may be other ways to expose the same information beyond pod status. Do you have any recommendations for how to resolve it? You're saying beta is not an answer, right, so you want something decided, certainly for alpha?
B
My argument was that I think we should look at getting the core mechanism in for alpha, and it would be okay if a criterion for beta was to assess how much information we need about the resource model to decide whether to enter beta. Do we need the status API? If so, that should be a prerequisite for entering beta, as we go through that process. The same thing for the ordering problem: how much ordering complexity do we need, and are sidecars used often enough, and likely to be a large enough portion of all init containers, that separating them out is going to be better for end users in the end? So I'm almost arguing that we need to be getting experience with hands-on usage of the init container patterns as a prerequisite for entering beta anyway. Can we use that to potentially do a little bit more work? Can we use that to defer some of these questions, or put them down explicitly in the KEP as "we want to answer this question with actual hands-on feedback from use cases"?
A
Okay, Vinay, do you have a lot of concerns about this pod status? How do you see it?
C
We can do it; in fact, I remember that in a previous iteration of this implementation we made some changes to the describe output to show the requested, which is the desired resources, and allocated, which is what's actually there for the pod at the time, as it is running, and then the status resources; we made those extensions. So if all we need is to get that information out via describe, I think we can do that. As for having it in the pod status, for me I think we need a really high bar to justify that. How does it equip the user programmatically? How is it actionable? What can the user do with that information? That's kind of unclear, so any details on that might help resolve the "okay, yeah, we really need this". Because on the sidecar container, yeah, I'm seeing both sides of the argument now, but having a new class felt like the instinctive choice.
C
There
are
projects
out
there,
which
you
know
have
you
do
the
birth
and
death
and
then
order
it
in
some
way
give
an
order
and
give
the
restart
policy
so
that,
in
that
order
you
can
have
some
containers
come
and
run
forever
the
cycle
containers
and
if
some
container
needs
to
start
and
then
do
some
work
after
the
sidecar
communication,
like
the
service
mesh
proxy
on
wire
or
whatever
is
started,
it
needs
to
do
some
extra
work
to
get
some
at
higher
privileges
and
then
exit.
C
Let it do that. I'd have to go through all the use cases and problem statements once again to be sure, but this seems to be a fitting way: having the restart policy at the per-container level. Yeah, I'm a go for that, to summarize. And the resources allocated at the pod level, at this point I feel we shouldn't do it; we need strong justification to have it in the pod status.
A
Okay, so I think the summary is that we want to get more feedback, or at least make the scenarios a little more explicit, and in order to not block this KEP, I would probably set it as a goal for beta and keep it tentative for alpha.
A
Okay, I also want to get into specific implementation thoughts, things you can start on already. One of the tasks I stumbled on right away is de-sharing the init container type from the container type. If you haven't seen this: what we discussed is that we want a restart policy and we want to make a minimal API change, so we wanted to add the restart policy only to the init container type, and not to regular containers, not to ephemeral containers.
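For reference, a hedged sketch of the API shape under discussion: a per-container restartPolicy on init containers, where Always marks a restartable sidecar. At the time of this meeting this is a proposal rather than a shipped field; the sketch assumes a k8s.io/api version that carries Container.RestartPolicy, and every concrete name and image is a placeholder.

```go
package sidecar

import corev1 "k8s.io/api/core/v1"

// sidecarPod sketches the proposed shape: an init container marked with
// restartPolicy: Always keeps running while later init containers and the
// main containers start.
func sidecarPod() corev1.Pod {
	always := corev1.ContainerRestartPolicyAlways
	return corev1.Pod{
		Spec: corev1.PodSpec{
			InitContainers: []corev1.Container{
				// Runs to completion, as init containers do today.
				{Name: "init-schema", Image: "registry.example.com/init:latest"},
				// Proposed sidecar: started in order, then kept running (and
				// restarted on failure) for the rest of the pod's lifetime.
				{Name: "mesh-proxy", Image: "registry.example.com/proxy:latest", RestartPolicy: &always},
			},
			Containers: []corev1.Container{
				{Name: "app", Image: "registry.example.com/app:latest"},
			},
		},
	}
}
```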
A
Unfortunately, that's not possible with the type system we have today, because an init container has the same type as a regular container, and the ephemeral container extends it, with one more field than a regular container, but it has all the fields of a regular container as well. So if we add a restart policy without any change to the type system, it will automatically be added to all types of container, including ephemeral.
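A minimal sketch of the current core/v1 shape that causes this (simplified; see the actual k8s.io/api types): initContainers and containers share the same corev1.Container struct, and ephemeral containers embed EphemeralContainerCommon, which mirrors Container field for field, so a field added to Container surfaces on every container kind.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			InitContainers: []corev1.Container{{Name: "init"}}, // same Go type...
			Containers:     []corev1.Container{{Name: "app"}},  // ...as regular containers
			EphemeralContainers: []corev1.EphemeralContainer{{
				// Mirrors Container's fields, plus TargetContainerName.
				EphemeralContainerCommon: corev1.EphemeralContainerCommon{Name: "debug"},
			}},
		},
	}
	fmt.Println(len(pod.Spec.InitContainers), len(pod.Spec.Containers), len(pod.Spec.EphemeralContainers))
}
```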
A
This is not ideal, so what we've been discussing is: let's de-share init containers from regular containers by forking the types, and I wrote a proposal for how the type forking can be done. Unfortunately, it's not as straightforward as it turned out; there are some concerns, and mostly the concerns are about external validators and external processors of the container types.
A
If we split it into multiple types, it can confuse all of these validators, and Jordan pointed out three different kinds of them. There are ones that use the Go types directly; I think those are the easiest to handle, because we can provide some helper methods and they will just reuse those helper methods.
Then
there
is
a
some
other
language
or
some
like
own
implementation
of
the
same
type,
so,
basically
just
getting
like,
maybe
the
generating
other
wrappers
themselves
and
they
just
Implement
their
own
visit
containers
pattern
in
their
different
language.
And
lastly,
there
are
some
untyped
validators,
like
some
recax
rules
or
whatever.
They
also
assume
some
types.
I
think
last
one
is
easiest,
one
likely
easiest
one,
because
it's
just
look
at
yaml
and
apply
yaml
rules.
A
And we wouldn't change the YAML definition, at least for the common fields, so those shouldn't be affected. But the first two would be, and as I said, the first one is maybe not as affected, because they need to change code anyway; they will just need to use the new helper methods. The most critical one is the second pattern, where people generate the wrappers themselves and implement their own visiting, and I'm not sure how many of those there are or how to resolve that problem. So de-sharing the types will definitely break a lot of consumers. We have a choice: not breaking them and instead confusing people by adding a field that is not used by all container types, or breaking those consumers and having less confusion for the end user. This issue still hasn't been resolved; it's still open.
A
Don't
know
like
if
we
want
to
discuss
it
on
this
forum
or
you
want
to
just
read
it
offline
and
comment.
There.
C
A
There is a proposal, which may or may not go into 1.27, but whatever happens, there is already a proposal that creates a new gRPC API, and that gRPC API assumes that all containers have the same type. So you just pass the Container object through this API, and since it's the Container object, you're already stuck; de-sharing the types in this gRPC API will be yet another task we'll need to do when we decide to de-share those types.
A
Okay, yeah. If you have comments, please comment on this KEP, and if you have experience consuming these types... I thought you said that you wrote some schedulers before? How is that scheduler written, is it Golang, do you use the Go types directly?
D
Yeah, so I work on Karpenter, on Karpenter's scheduling and consolidation, and we use the Go-based types for all of these. So yeah, it would affect us; it's a change, but there's also a change related to scheduling in every Kubernetes version anyway.
A
Okay. And I wanted to highlight that Todd is working on what we discussed before, the resource calculation logic centralization. I didn't find your PR quickly, Todd, so if you want to highlight it, please share it with me or send it in chat.
A
Yeah,
yeah
I
thought
that
working
in
economics
and
computer
type
can
be
easily
done
before
we
start
implementation.
Apparently
it's
not
easy.
It's
more
thinking.
A
Logic
centralization
is,
we
have
many
places
in
qualities
with
a
resource
usage
by
of
pod
and
Todd
is
working
on,
combining
them
all
together
and
before
the
discussion,
we
thought
that
we
will
expose
it
as
a
port
status
also
before
we
even
start
campwork,
but
apparently
we
need
more
approvals
for
that.
A
Then, yeah, this is another question now. I also put down some information about end-to-end tests: we don't have end-to-end tests for all init container behaviors, and we need to get going on describing and writing those, so we'll have a pattern for what an end-to-end test looks like, and when we add the KEP it's just an additive change rather than re-implementing everything.
A
So
this
is
another
task
that
we
can
pick
up
and
after
today's
discussion,
I
think
we.
What
we
need
to
do
is
started
paid
for
container,
so
we've
been
discussing
that
it
will
speed,
startup
probes
and
the
Readiness
probes.
Then
you'll
need
to
have
a
new
status
for
containers
and
I.
Much
as
you
know
this
logic
very
well,
is
there
any
problems
implementing
the
status
for
all
containers
right
now
and
then
just
inherit
it
from
indicators
or
you
wouldn't
suggest
to
implement
it
for
regular
containers.
A
Okay, so just thinking generally: if you implement the status, you probably want to have it for regular containers as well, because before the readiness probes kick in you would already see that status change, right? So it may be a good idea anyway.
A
This is another task, and that's all I came up with in terms of tasks we can already start doing. If anybody is interested in taking any of those, help is welcome; please take one and start going with it. I think all the other tasks need the restart policy flag on the init container before we can implement the logic.
A
If you want to take any of the implementation tasks, maybe we can synchronize on Slack, or just create a GitHub issue for it. With that, I want to give you a couple of minutes back before the SIG Node meeting. Thank you everybody for your attention, nice chatting with you, bye.