From YouTube: SIG Node Sidecar WG 2022-12-13
Description
Meeting notes and agenda: https://docs.google.com/document/d/1E1guvFJ5KBQIGcjCrQqFywU9_cBQHRtHvjuqcVbCXvU/edit#heading=h.m8xoiv5t6qma
A
Hello, it's Tuesday, December 13, 2022, and this is the SIG Node Sidecar working group meeting. We had two items on the agenda; I can start with the second one. We had an overall proposal for how everything should look if a sidecar continues to be interlaced with init containers, changing the logic of init containers a little bit, and I sent this out. What we want to do first is understand whether, if we implement just this, it will satisfy most scenarios, and whether we are missing some very important scenario or some important drawback of this proposal. I sent an email but haven't had any replies yet, so I'm assuming that everything is working.
A
Okay. Beyond that, another thing we wanted to do is to keep filling out the open questions, so I can go through the list right now, and if you have any more open questions you want to add to the list, please do so. Naming is obviously a big one.
A
Termination ordering is something we want to discuss today. Joe was working on that; I know we discussed it offline and he prepared a document. I think I can share it; I hope Joe doesn't mind, that's fine.
A
We can go through this document right after listing all the open questions. So, termination ordering and termination scenarios are something we want to discuss, and then sidecars crashing during termination.
A
Yeah, this is one important scenario that we also discussed with Joe, and we need to understand how termination will behave with regard to the graceful termination of regular containers. Today, when a pod enters termination, it will not restart any containers any longer. With sidecars the situation is a little bit different: sidecars may be needed for the graceful termination of regular containers.
A
So if sidecars crash or exit during termination of a pod, then we probably need to restart them again. But the situation becomes more complicated when we start thinking about sending a TERM signal to all the containers on termination: we already sent you the TERM signal, but we expect that you are not going to stop.
A
This is a very strange proposition for sidecar authors. Typically when you receive TERM you need to stop, but in this case we send a TERM and we don't want you to stop, which is very weird. And then, do we need to double-TERM you, somehow send another signal? That will be another question. So yeah, that's something that genuinely needs to be discussed as part of the termination discussion. Then backward compatibility is a big one.
A
We need to understand what the implications will be if we just add these containers to the init section, or whether we need a different section that is called something like init, but "new init" or some other name; that comes back to the naming question. But yeah, backward compatibility, and understanding what can be broken by sidecars having this different behavior, is important. Then resource managers and sidecars: this is a question we need to understand.
A
Today we already have some problems with init and regular containers: we try to allocate resources for them, but they will not necessarily be allocated properly, and with sidecars the situation may be even more complicated, so that is something we need to discuss. I started a conversation with Francesco; let's see where it goes. I'll report back for sure, and if needed we'll just bring more people into this meeting. And then Tim suggested this topic: we need to explore everything.
A
Every scenario for how users can abuse this, intentionally or otherwise: people mistakenly putting this attribute on a regular container, or people changing the behavior of pods drastically in a way we didn't anticipate. These are the kinds of use cases we want to investigate, to understand how this new setting can be abused, because it's obviously quite powerful. And finally, I want to understand SIG Scheduling concerns, so I added a topic to this Thursday's SIG Scheduling meeting.
A
If you want to join, please do. I think it's at 10, but I may be wrong, so check the calendar; it's somewhere around the half hour.
A
Okay, if you remember anything else that we need to clear up before KEP writing, please add it to this list. Timing-wise, I think it would be great to have the KEP written down by the middle of January. Before that we will definitely talk to more people, trying to explain to them what we are trying to achieve and why we came to this conclusion.
A
But before that we need to keep answering the open questions, and generally we need to have the KEP written so it can be approved.
B
Do we need to present something to SIG Node before writing the KEP, or do we do it after?
A
Yeah, I think we can give a presentation before that. There will be a SIG Node meeting today, but the meetings in the next two weeks will probably be canceled, so the first meeting will be just before the middle of January, and we will probably present a draft in good shape there.
A
So to answer the question: yes. And we will probably get an API review; it will be Tim, probably, and people recommend asking more API reviewers to do this review, because it's obviously a big change, which means it needs a lot of attention.
A
Okay, Joe is not here yet, so here is what I propose to do.
A
Do you want to discuss the resource managers topic? I think you may have time.
C
I'm here to listen; I don't want to steal time from other topics, but if you folks want to discuss this topic, I'm more than happy to do so.
A
Yeah, it's quite important. I'm still waiting; maybe Joe will read the message and join us, because he said he'll be here, and I don't want to go through his document without him. Resource management is a very important topic, so if we can talk about it, that would be great.
A
Most of the discussion and most of the document will be describing why we didn't go with a different approach. The actual approach we picked is quite simple to explain.
A
The idea is that among the init containers there will be a special type of container; currently we are thinking of restartPolicy: Always. That would mean that these containers will survive the initialization stage. So instead of waiting for completion of such a container, we will be waiting for its readiness, and then we will not touch these containers until the end of the pod's lifetime.
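To make the shape concrete, here is a rough sketch in Go against the k8s.io/api/core/v1 types. The per-container RestartPolicy field on an init container is the proposed, hypothetical field being discussed here (it did not exist in the v1 API at the time of this meeting), and the names and images are illustrative:

```go
// Sketch only: the RestartPolicy field on an init container is the
// hypothetical field being proposed in this meeting, not an existing
// v1 field at the time of this discussion.
package sketch

import corev1 "k8s.io/api/core/v1"

func sidecarPodSpec() corev1.PodSpec {
	always := corev1.ContainerRestartPolicy("Always") // proposed marker value
	return corev1.PodSpec{
		InitContainers: []corev1.Container{
			// Ordinary init container: the kubelet waits for it to complete.
			{Name: "migrate-db", Image: "example.com/migrate:latest"},
			// Proposed sidecar: the kubelet waits only for readiness, then
			// leaves it running until the pod itself terminates.
			{
				Name:          "proxy",
				Image:         "example.com/proxy:latest",
				RestartPolicy: &always, // the field under discussion
			},
		},
		Containers: []corev1.Container{
			{Name: "app", Image: "example.com/app:latest"},
		},
	}
}
```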
A
That's the idea, and it obviously changes how we calculate the amount of resources needed. Before, it was the max over the init container resources combined with the sum over all the regular containers, taking the larger of the two. Now these sidecar containers will be included in the sum of regular containers, but they also participate in computing the max for the init containers. I hope I'm making sense.
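A small sketch of that accounting for a single resource (say, CPU in millicores). The helper below is hypothetical, and the exact treatment of sidecars that start partway through the init sequence was still under discussion at this point, so the prefix-sum handling here is an assumption rather than the settled design:

```go
package sketch

// container is a simplified stand-in for a pod's container spec.
type container struct {
	request int64 // e.g. CPU request in millicores
	sidecar bool  // proposed restartPolicy: Always init container
}

// effectiveRequest computes the pod-level request as described in the
// meeting: today it is max(max over init containers, sum over regular
// containers); with sidecars, the sidecar requests join the regular sum
// (they run for the whole pod lifetime) and also stack under any init
// container that starts after them.
func effectiveRequest(inits, regulars []container) int64 {
	var initMax, regularSum, runningSidecars int64
	for _, c := range inits {
		if c.sidecar {
			runningSidecars += c.request // stays running from here on
			continue
		}
		if v := c.request + runningSidecars; v > initMax {
			initMax = v
		}
	}
	for _, c := range regulars {
		regularSum += c.request
	}
	regularSum += runningSidecars // sidecars live until the pod ends
	if initMax > regularSum {
		return initMax
	}
	return regularSum
}
```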
A
Yeah. I remembered there being some complications between init containers and regular containers, but Swati corrected me; I was wrong in my understanding of when we calculate hints for allocating containers to specific CPUs and such.
D
Yeah, my understanding is that, like you said, we calculate the max of the init containers or we add up all the containers, and we identify the maximum based on that. That check, if I remember correctly, happens at admission time. So if you have a pod with an init container that does not meet the criteria (you know, there's not enough room on a single NUMA node and the node has been configured with the single-numa-node policy), it would not be admitted. That's the current way we are doing things, as I understand it. If we were to change the init container in a way that makes it long-running, along the whole lifecycle of the pod, then we definitely have to take the resources requested by that init container into consideration.
A
Okay, so if we calculate resources correctly, the next question is about when we actually pin containers to specific CPUs. Will this be affected by the fact that some containers will be pinned during initialization, and some will need to be pinned to the same CPUs later? Are there any complications here?
D
I want to understand what happens there. When we say admission, my understanding was that initialization comes after admission: the pod has been admitted, and now we are trying to initialize this particular container with, say, network requirements or whatever it needs. Is that assumption correct?
A
Yes. At admission we will calculate that we have enough resources, but now I'm wondering what happens when we actually start allocating resources in the presence of sidecar containers, and whether changing the behavior will make things more complex somehow.
D
Yeah. So in the case where the pod is restarting: I see the two init containers in the example you're sharing right now, where one restarts on failure and the other one always stays up. If the pod fails, then for the first one we just have to take into consideration the maximum amount of resources that the pod needs, and this could influence that part; but for the second one it could change, you know, if we are using the checkpointing file.
A
Okay. And today, when we pin regular containers to specific CPUs, do we do them in a batch somehow, or whenever they come up?
D
Yeah, whenever they are sent to the node and the kubelet picks them up; that's when it starts processing.
A
Okay, so potentially this will be fine without any additional changes. We don't need to somehow mark sidecar containers in a special way for all the resource managers so that they approach them differently; they can approach them as regular containers and it will be enough.
D
Yeah, but the resource allocation logic would probably have to change: we'd now have to consider, for init containers that are long-running, that their resources will be occupied throughout the lifecycle of the pod.
D
Yeah, the only other thing I can think of that we should make sure of is whether the current framework handles init containers differently. If an init container is requesting exclusive CPUs, does it get that allocation, at least initially? It goes away, of course, after the app containers start, but does it initially get the allocation?
A
Okay, let's dive into the termination document. I shared the link with everybody, and I can also send it in chat if you're interested.
A
The use cases we have been discussing for termination are these. First, we want sidecars to survive longer, so that after all the containers have emitted their logs, there is enough time to upload those logs to whatever backend they have. The second scenario is graceful termination, graceful shutdown: if during graceful shutdown we lose our proxy, then we lose all the network connections, so we don't want to lose it.
A
So we want to restart the sidecars over and over again. ("Hey, sorry I'm late.") Oh hi, we just started on your document; your timing is exactly perfect. I went through the motivating use cases, so if you can take it from here, that would be great. ("Sure.")
E
So the prompt, and I think Sergey may have covered this, has been to look at the termination use case in more detail. We knew generally what we thought we wanted, but we wanted to break down the specifics. So these were the two motivating use cases; there are probably more.
E
One is mostly the logging and monitoring one; the other is the proxy sidecar. Both of them have a similar requirement: you want your sidecar's lifecycle to be wider, starting before the primary container and ending after it, right? Because if you're trying to capture all its logs and you terminate before it does, you can't possibly do that. And the same goes if you're a proxy trying to handle the primary container's entire lifecycle, including the end of a healthy container's lifecycle.
E
From that I captured a couple of main goals, which I think I already alluded to, but I did want to talk about the most basic things that are going to happen. Today you can think of two major classes of workloads: there are jobs and there are services.
E
A job is interesting because a job in Kubernetes terminates once all the containers terminate, so sidecars are really annoying: the sidecar has to figure out that the primary containers have terminated and then terminate itself, otherwise the job basically hangs. We can probably do better than that pretty easily: once all the primary containers are done, we can say, okay, termination is real, we are ending this pod, and we'll just start stopping the sidecar containers.
E
That's really clean, because the primary containers have already stopped, so we naturally get the lifecycle that we want: all the primary containers have stopped because they terminated themselves, and for the sidecar containers we just need to make sure they get shut down and cleaned up. In a service it's a little more complicated, because there's an external request to stop the pod rather than the containers terminating on their own.
E
So now you have to decide what kind of ordering you're going to have, and the rest of the doc actually just covers that main case: do we need to do something special around ordering for services specifically? My proposal, at least for this first pass on the feature, is that we don't do anything special, instead of trying to do something really sophisticated with service ordering.
E
When we get a request to stop a pod that has sidecars, we just send a SIGTERM to all the containers, including the sidecar ones. If the sidecars want to live longer than the primary containers, they have a grace period: they get a SIGTERM before they get a SIGKILL, maybe 30 seconds by default.
E
They need to do what they do today, which is basically try to stay alive until the main containers have gracefully shut down, and then gracefully shut themselves down. We could try to do something more complicated; I looked into some of the things you could do, and it gets pretty complicated to find a clean API for it, so I listed out two or three alternatives.
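For reference, the "what they do today" pattern looks roughly like the sketch below: trap SIGTERM, keep running until the primary container stops answering, then exit. The localhost health endpoint is a hypothetical stand-in for however a particular sidecar detects that the main container is done:

```go
package main

import (
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	term := make(chan os.Signal, 1)
	signal.Notify(term, syscall.SIGTERM)
	<-term // pod termination has begun, but don't exit yet

	// Keep serving until the main container has finished draining.
	// The whole pod still has only its grace period before SIGKILL.
	for {
		resp, err := http.Get("http://localhost:8080/healthz") // hypothetical endpoint
		if err != nil {
			break // main container is gone; now it's safe to stop
		}
		resp.Body.Close()
		time.Sleep(time.Second)
	}
}
```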
E
I did prove one thing to myself, I think: if we don't do anything really sophisticated now, it doesn't prevent us from doing something more sophisticated in the future. For all the alternatives I listed, we'd have to make some kind of new field to say what we're doing, and it wouldn't be any harder to add that later than it would be to do it now.
E
I thought of this as a risk assessment: there's a risk of not doing enough now and making life more difficult in the future, and there's a risk of trying to do too much now, creating too complicated a feature up front and making very specific design decisions before we have a lot of information.
E
That's how I saw the decision here. I do have an alternative that I think could maybe be made to work, but it's pretty complicated, and it might be better to build the basic feature out first. We would be no worse off than we are today: jobs would still be a lot easier to handle, and any sidecar that's already written today is going to continue to work, because it already has to monitor the primary containers.
E
Then in the future, if we still think there's a problem, we could address it. When I talked to some people, I think John Howard (I don't know if you're on the line), jobs were one of the problems, and there were a bunch of alternatives considered for how to help sidecar containers know that they should shut down. But we're always going to be sending them a SIGTERM, so we're already in a better situation than we were before. Does that make sense?
E
Yeah, the catch there is that if we try to do a reverse ordering, we end up violating the graceful shutdown period that's set on pods.
E
I should have mentioned this before: on pods there's a graceful termination period that defaults to 30 seconds, and if we wanted to do really strict ordering, we would have to decide how to accommodate that pre-existing field. We could have all the main containers shut down in parallel and then violate that 30-second limit by giving each sidecar its own 30 seconds.
E
But then, if you had four sidecars, shutdown could take two minutes, which clearly violates the intent of that field, which is that the whole pod should shut down in 30 seconds. Or we could do an alternative: give the main containers 30 seconds, because we guaranteed that to them before, and then take whatever time is left and try to do an ordered shutdown of the sidecars. That's problematic, because now the sidecars only get what's left, right?
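To make the trade-off concrete, the arithmetic of the two options, assuming the 30-second default and strictly serial sidecar shutdown, works out as in this small sketch:

```go
package main

import "fmt"

func main() {
	const podGrace = 30 // seconds: the pod-level terminationGracePeriodSeconds default
	const sidecars = 4

	// Option 1: main containers shut down in parallel within podGrace,
	// then each sidecar gets its own full grace period. The pod-level
	// budget is exceeded: 30 + 4*30 = 150 seconds in the worst case.
	worstCase := podGrace + sidecars*podGrace

	// Option 2: main containers keep their guaranteed podGrace, and the
	// sidecars split whatever is left of the pod budget: here, nothing.
	leftForSidecars := podGrace - podGrace

	fmt.Println("violating the pod budget, worst case:", worstCase, "s")
	fmt.Println("honoring the pod budget leaves sidecars:", leftForSidecars, "s")
}
```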
E
One of the alternatives I listed (alternative one) was that sidecar containers that need an additional grace period would have their own grace period on their specific container, and then we would deliberately violate the pod-level grace period by having container-level grace periods in addition to it. But you would have to opt into that, and if you didn't opt in, you would be grouped with the main containers or something.
E
So you can see why this gets really complicated. That isn't to say we can't do it; I'm just proposing that we don't take all of that on now and defer it as something we could do later. I think that only makes things better, and I don't think we make ourselves worse off by not doing it now.
F
To add on to that: the kubelet goes through all of the pod's sandboxes and basically sends a SIGTERM to the sandbox, so the container runtime is managing the stopping of all the containers. The kubelet today doesn't do any ordering of the shutdowns, so we might need a change to the container runtime if we want to order those shutdowns as well, and that also complicates the termination process.
F
I have a question: could we go back to the top of the document, to the motivating use cases, starting with the first one? I'm not convinced that either of these use cases can't be solved with what we have today, using no sidecars, and I was wondering if we could clarify how current containers fall short for these use cases.
E
These use cases weren't intended to be motivators for sidecars; they were intended to be motivators for the behavior of termination. This doc was meant to be purely focused on termination: not "should we do sidecars", but "if you do sidecars, how should termination work".
F
Gotcha. I think we should try to make these use cases line up with how we want sidecars to work as well; that might help us flesh out the document some more.
E
Okay, yeah, please put comments on the doc about how I can best do that; I'll be happy to make some adjustments. Thanks.
A
Can we discuss the crash-during-termination problem? Okay, yeah.
E
This is an interesting problem that a couple of people brought up, especially for the proxy case. Say your sidecar crashes, for whatever reason. During the normal, main part of the lifecycle, where everything is just supposed to stay alive, the kubelet would restart the sidecar once it crashed, so you would get a blip, but that's the best you can do. The harder case is if the sidecar crashes while the main containers have already started their termination phase: they've already gotten SIGTERMs and they're supposed to be shutting down, but maybe they're still clearing out in-flight requests, still draining.
E
They're trying to do a really nice graceful shutdown, and the proxy is probably still helpful during that shutdown period. So I tried to figure out whether there is something we can do to make that better if the sidecar crashes. It's a little tricky. The proxy case is really interesting, because if the sidecar crashes after you start draining, I don't think you can recover the in-flight requests anyway: you've got a connection from a client to the proxy, and that is gone.
E
For monitoring and logging it is potentially a problem, right: if the sidecar crashed, it could fail to ship off all the data from the host. For that one there might be some opportunity to do better. I'm not super convinced that's the use case to optimize for, but generally you want to build sidecars to be super stable services; if they're flapping a lot, you're going to have a lot of problems.
A
Yeah, I think this is one of the features I'm thinking about. We need to understand whether this feature would later require adding a flag to our sidecars, or whether it should be built into sidecars from the get-go. Another related problem is whether we apply the crash loop back-off timeouts, or whether sidecars are exempt from the back-off logic.
A
The back-off logic may be problematic: it goes all the way up to five minutes, and then sidecars will essentially never restart, and the pod will render itself unusable because of this back-off.
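For context, the kubelet's container restart back-off roughly doubles per crash up to the five-minute ceiling mentioned here; the initial delay and reset behavior are kubelet internals, so the constants in this sketch are assumptions for illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second       // assumed initial back-off delay
	const ceiling = 5 * time.Minute // the five-minute cap mentioned above

	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("crash %d: wait %v before restarting\n", crash, delay)
		delay *= 2
		if delay > ceiling {
			delay = ceiling
		}
	}
	// A sidecar that keeps crashing during pod termination could sit in
	// back-off for minutes, which is why exempting sidecars from this
	// logic (or resetting it at termination) is an open question.
}
```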
E
Yeah, that's a really good point; the back-off goes pretty high. I actually didn't address that here, so let's get some comments in. I did look at some of the other things that happen during termination. There is a pre-stop phase that normal containers have, and I was proposing that we keep all of that behavior the same for sidecars.
E
PreStop has a lot of nice properties, and I think you can use it for sidecars in the same way if needed.
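For reference, the preStop hook being kept unchanged is the existing v1 lifecycle API; a minimal sketch, where the drain command and admin endpoint are purely illustrative:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// withDrainHook attaches a preStop hook that runs when termination
// begins, before SIGTERM is delivered to the container; this existing
// mechanism would apply to sidecars unchanged under the proposal.
func withDrainHook(c corev1.Container) corev1.Container {
	c.Lifecycle = &corev1.Lifecycle{
		PreStop: &corev1.LifecycleHandler{
			Exec: &corev1.ExecAction{
				// Illustrative drain command for a proxy sidecar.
				Command: []string{"/bin/sh", "-c", "curl -s localhost:15000/drain; sleep 5"},
			},
		},
	}
	return c
}
```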
E
There are also the probes themselves: the readiness probe and the liveness probe have their own grace periods, so I was proposing that we keep those working more or less the same. I didn't try to get into startup too much here, but my understanding was that when you start a sidecar, since it's in the init containers, it's going to block any subsequent init containers from starting until it's ready.
E
I'm kind of happy and sad here, and I'm kind of with Sergey on this. I'm happy that I feel pretty comfortable that if we do nothing extra, we don't make things worse for people than today, we do make things better for users of jobs, and I feel pretty safe about being able to do more detailed work on this in the future, which may happen. And I like the idea of being able to have a release that doesn't have to do all of this just to get something into people's hands in alpha.
A
Okay, thanks everyone. We still have all these open questions; some of them we will be discussing at the SIG Scheduling meeting. I'm not sure how many people will be around next week.
A
So maybe we can keep the discussion offline and asynchronous. I don't think we have a very big topic to discuss next week specifically; mostly we should start writing, I think. Beyond that, naming is maybe the one topic where we need to get all together, in person, and try to brainstorm, but I think we can do that in January; I don't think we need to have that decision made right now.
A
So what do you think about skipping the next meeting and starting to write the KEP document? Okay, I see nodding heads.
B
Actually, it might happen in February; we'll see.
A
Okay, we'll have a special edition of the sidecar meeting then. For me, "in person" just means I can see your faces; at least that's good enough. So if you are interested in participating in writing the KEP: I will start the draft and invite everybody to write in it. We can start with a skeleton and copy-paste a bunch of the text we already had in the previous KEPs. I suggest we do it as a Google doc initially and then keep working on it there.
A
Okay, then I think we can cut this meeting short again. SIG Scheduling: if you want to tell them about sidecars, please join on Thursday. Otherwise we'll keep working offline. Cheers, everybody.