From YouTube: SIG Node Sidecar WG 2023-01-03
Description
Meeting notes and agenda: https://docs.google.com/document/d/1E1guvFJ5KBQIGcjCrQqFywU9_cBQHRtHvjuqcVbCXvU/edit#heading=h.m8xoiv5t6qma
GMT20221206-170520_Recording_1920x1120
A
Hello, it's January 2023 and we're starting the sidecar working group's 2023 meetings. Yes, and we started this meeting very late because I was late, sorry about that. Nevertheless, we're trying to flesh out what's left in this KEP and trying to understand a game plan for the next few days, maybe months, at least until the KEP closes. And we started looking into open questions. So we said that we have a document that we need to fill out.
A
It's a KEP document that gets transformed to markdown after you fill in the tabs here, and we're looking through the open questions now. So this scenario seems to be important. I got a few indications that this is an important scenario, because it's impossible to implement outside of the kubelet, so nobody can code it nicely any other way. So this may be an important scenario, but John Howard from the Istio team says that something like this is not as critical as just having sidecars.
A
So yes, it's an edge-case scenario, and I think we need to think about it very carefully from the get-go, from the beginning.
A
If external controllers are built in such a way that they will not fail on new fields being added, then it will be really hard to make them aware of it; they need to understand that the new field came up, the situation changed, and now scheduling needs to change as well. So yeah, I think there is no good answer. That's why we probably don't need to spend too much time on it.
A
We just say that this is a new field and it needs to be treated very differently. We of course need to look at a few edge cases like this one, this item that Tim suggested.
A
We need to understand what the side effects will be if customers use the field incorrectly, or intentionally or unintentionally try to break something in the kubelet. So what if they put...
A
...this field on by mistake on one of the existing init containers, something of that sort? Or how can they break the kubelet by setting a very high graceful termination period and then, I don't know, terminating the sidecar over and over again and causing some sort of resource starvation, or anything like that? I'm just brainstorming right now, it's just a brain dump, but yeah, we need to think about these scenarios.
A
Resource managers, topology manager: from the early conversations with Francesco last meeting and with Swati, I think we're good on this front. So nothing is supposed to be broken.
A
There are just changes that need to be applied to the CPU and topology managers, but other than that it should be okay. Generally, I'm still waiting for final confirmation; we just got it very late in December, and we just need to double-check that the topology managers will indeed have no issues with this. Yeah, and SIG Scheduling was fine with what we're proposing, and the only thing they recommended is to somehow combine the code that calculates the needed...
A
...resources. When you calculate the resources needed for a pod, the logic may be very straightforward: you take the maximum of all init containers' requests and limits and the sum of all requests and limits of the regular containers, and then take the maximum of those two. But with sidecar containers, we now need to add all the sidecar containers to all the regular containers as well.
A
The trick here is that the sidecar can be defined later in the list. So if there is a very heavy init container that runs before the sidecar containers, its requests and limits don't necessarily need to be combined with the requests and limits of the sidecar containers, because that container will finish before the sidecar containers start. So we need to calculate a running maximum over all init containers to understand how much the init containers will be consuming in terms of requests and limits.
A
You only need to add the sidecar resources to the init containers that run after the sidecar was initialized, and that's where the tricky logic comes into play. And it's not that tricky, it's actually very straightforward; it just needs to be carefully coded and tested.
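To make that concrete, here is a minimal sketch of the running-maximum calculation under a simplified single-resource container model; the `Container` type and `podRequest` helper are illustrative stand-ins, not the actual Kubernetes API or scheduler code:

```go
package main

import "fmt"

// Container is an illustrative stand-in for a Kubernetes container spec:
// Request holds a single resource value (say, CPU millicores) and
// Restartable marks a sidecar-style init container that keeps running.
type Container struct {
	Name        string
	Request     int64
	Restartable bool
}

// podRequest computes the effective pod request as discussed above:
// a running maximum over the init phase, where each regular init
// container's cost includes only the sidecars started before it,
// compared against the running phase, where all sidecars run
// alongside all regular containers.
func podRequest(initContainers, containers []Container) int64 {
	var initMax, sidecarSum int64
	for _, c := range initContainers {
		if c.Restartable {
			// A sidecar starts here and never stops, so it adds to
			// every step that follows.
			sidecarSum += c.Request
			if sidecarSum > initMax {
				initMax = sidecarSum
			}
			continue
		}
		// A regular init container runs alongside the sidecars already
		// started, then exits before the next init container begins.
		if cost := sidecarSum + c.Request; cost > initMax {
			initMax = cost
		}
	}
	runSum := sidecarSum
	for _, c := range containers {
		runSum += c.Request
	}
	if initMax > runSum {
		return initMax
	}
	return runSum
}

func main() {
	initContainers := []Container{
		{Name: "heavy-init", Request: 500},               // exits before the sidecar starts
		{Name: "proxy", Request: 100, Restartable: true}, // sidecar
		{Name: "late-init", Request: 200},                // runs alongside the sidecar
	}
	containers := []Container{{Name: "app", Request: 300}}
	// Init phase: max(500, 100, 100+200) = 500; running phase: 100+300 = 400.
	fmt.Println(podRequest(initContainers, containers)) // 500
}
```

Having one shared helper like this, used by both the scheduler and pod admission, avoids the mismatch risk mentioned next.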
A
That's why SIG Scheduling said: if you do something like that, do it in both places at the same time, so there wouldn't be situations where we have one logic in the scheduler and another logic in pod admission, and they don't match each other.
A
So, since sidecar containers need to be running all the time, we want to minimize the number of times they get killed by the OOM killer. That's why we probably need to adjust how we calculate the OOM score for sidecar containers. Today, sidecar containers typically have very few resources requested, compared to regular containers I mean, and that's why they will have a very low score.
A
Sorry, not low, high. Yeah, a high score is bad, so they'll have a high OOM score adjustment; I think most of the time it will be 999, which is the maximum score possible. And with this score, the OOM killer will literally just kill them right away, all the time. And this is not ideal, because we want these containers to run through the whole lifecycle of the regular containers. So probably they need to share the same score adjustment, at least to be fair.
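For context, a rough sketch of the burstable-pod heuristic being referred to; the exact kubelet constants and clamping details are an assumption here, so treat this as an approximation of the behavior, not the real implementation:

```go
package main

import "fmt"

// oomScoreAdj approximates the burstable-pod heuristic: the smaller the
// memory request relative to node capacity, the higher (worse) the OOM
// score adjustment, clamped just below the best-effort value of 1000.
func oomScoreAdj(memoryRequestBytes, nodeCapacityBytes int64) int64 {
	adj := 1000 - (1000*memoryRequestBytes)/nodeCapacityBytes
	if adj < 2 {
		adj = 2 // stay above guaranteed-pod adjustments
	}
	if adj > 999 {
		adj = 999 // stay below best-effort pods at 1000
	}
	return adj
}

func main() {
	capacity := int64(16 << 30)                // a 16 GiB node
	fmt.Println(oomScoreAdj(64<<20, capacity)) // small sidecar request: 997
	fmt.Println(oomScoreAdj(8<<30, capacity))  // large app container: 500
}
```

With a typical small sidecar request, the adjustment lands right near the 999 cap, which is why sharing the pod-level score adjustment is being floated.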
A
And another question that came up again is backoff.
A
So this question came up again in one of the documents from Joe; we discussed it there, and I think we left this question open.
A
Sidecars will be restarted with a backoff timeout, so it will be very short initially, but then it grows all the way to five minutes. And that may be problematic: if all the regular containers need the sidecar containers, it may be problematic that the sidecars are not active for that long a time after they were killed by mistake or somehow. But the other way around, it's also problematic when we keep restarting something that will never start, something that is broken to the point where it will never come up, so we waste a lot of resources just restarting it over and over again.
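The shape of that restart delay, as a small sketch; the 10-second base and five-minute cap follow the commonly documented kubelet crash-loop defaults and should be treated as assumptions here:

```go
package main

import (
	"fmt"
	"time"
)

// backoffDelays lists the successive restart delays for a crash-looping
// container: start at base, double each time, cap at max. The kubelet
// hard-codes these values today, which is exactly the concern above.
func backoffDelays(base, max time.Duration, restarts int) []time.Duration {
	delays := make([]time.Duration, 0, restarts)
	d := base
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return delays
}

func main() {
	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
	fmt.Println(backoffDelays(10*time.Second, 5*time.Minute, 7))
}
```

Both constants are fixed in the kubelet, which is why there is no knob to turn, as noted below.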
A
So we need to decide what the best position is here. The worst part is that there is no control over it. So even if customers want to control it somehow, there is no way to configure the backoff timeout; it's hard-coded today.
A
So if they know that there are situations when the sidecar may crash and needs to be restarted right away (let's say it's a metrics thingy with a big cache that sometimes grows out of limits, the developer knows about it, and they just want to restart it over and over again, which typically helps them), then a progressive backoff timeout may not be ideal for them.
A
Maybe they know that it recovers at some point anyway. Those are the questions that we need to answer, and one answer, maybe, is that we can come back to this question later, because progressive backoff timeout configuration is a requested feature in its own right. Maybe we need to have this feature as a separate KEP for all the containers.
A
Yeah, okay. So I think what we can do is finish up this write-up with some of these questions left open and keep answering them. Some of them, like this question, I think Joe is working on; I will double-check that he's on task. And we can check whether Francesco and Swati can help with this, to completely verify that it would indeed not be a problem, backward-compatibility-wise.
A
I think we need to have an API review with somebody, but I think this API review may only happen after we've completed the write-up. So once we have the write-up, we can go to API review and get it reviewed by more people.
A
Yeah, I think that's all I wanted to discuss this meeting: just refresh the memory, come back into work mode, and get back together. And sorry I was late; I think we missed a lot of people because of that, at least Sunny.
B
Yeah, I think it's next week, yeah, absolutely. How long have we got? The mail about 1.27: when is the KEP deadline closing?