From YouTube: SIG Node Sidecar WG 2023-01-24
Meeting notes and agenda: https://docs.google.com/document/d/1E1guvFJ5KBQIGcjCrQqFywU9_cBQHRtHvjuqcVbCXvU/edit#heading=h.m8xoiv5t6qma
A: Hello, hello, it's January 24th, 2023! It's the sidecar working group; welcome, everybody. We have a really short agenda today. I wanted to just go through the KEP and some comments, and I see Clayton joined as well. Clayton gave many interesting comments, so maybe we can dig into those. And then I started preparing some slides; I wanted to show the slides at the SIG Node meeting that will be an hour later, and you can use the slides as well.
A: If you want them for any reason. With that said, let's get into it. Is there anything else on the agenda? If you want to add something, please do it here.
A: Yeah, let's go into the KEP. So the KEP is up and running, and I think maybe the better way to review it is to look through the comments.
A: So one big comment was regarding init containers and regular containers sharing the schema. This sharing of the schema makes you make decisions and then try to fit every decision you made for one type of container onto the other type of container. In this particular case, we're introducing a per-container restartPolicy for init containers, and it automatically creates the same restartPolicy field on regular containers, and we don't intend to implement any override possibilities there today. So one of the questions is whether we need to split the init container and regular container schemas.
A: I think many people think we should split them, but look at the amount of work that needs to be done. It's all doable work, but it's a lot of work.
A: Yesterday, I just renamed init containers into... I mean, I just created a new type for init containers, and it immediately shows the many places where we do a visitor pattern, where we just go through all the containers and try to calculate QoS, for instance, or some other things. It's not an overwhelming number of places, but it's enough to make it complicated.
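As an illustration of the coupling just described, here is a minimal sketch (an assumed shape, not the actual kubelet source) of the kind of visitor-style helper that walks both container lists as one type. Splitting init containers into their own Go type would force every call site like this to handle two types:

```go
package podutil

import v1 "k8s.io/api/core/v1"

// visitContainers applies visit to every init and regular container.
// Today both lists share the v1.Container type, so aggregations such as
// QoS class or resource accounting can treat them uniformly.
func visitContainers(spec *v1.PodSpec, visit func(c *v1.Container)) {
	for i := range spec.InitContainers {
		visit(&spec.InitContainers[i])
	}
	for i := range spec.Containers {
		visit(&spec.Containers[i])
	}
}
```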
A: I think what this exercise gives me is a clear understanding of what places I need to revisit when implementation starts. So when we start implementing this KEP, by looking at the places where init and regular containers intermingle, we can clearly see what needs to be adjusted for sidecars as well.
C: I don't know if this has already been discussed, but ephemeral containers were introduced with a new type. That would be different than if we took existing init containers and then moved them to a new type, because they're already out there. If we introduce a new, like, sidecar container with a new type, that would probably be easier. I don't know what the implication on clients would be of switching a type.
C: Yeah, that's the only complication I could see with this. I think I generally do agree that having its own type is probably better; I'm just trying to think through the mechanics as best I can.
A: Okay, yeah. We went through renaming handlers: we used to share one type between probes and handlers, so what is now an HTTP probe was an HTTP handler shared with the lifecycle handler, and we split those, I think, a couple of releases back. It was quite an exercise: basically renaming the type first, then going to every place that constructs or somehow references those types and adjusting it one way or another. It was quite straightforward in that case, because the logic is not that intermingled. In this case, it's a little bit more involved, I think.
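For reference, the split being recalled here is the one that separated the shared probe/lifecycle handler type in the core API (around v1.23, into ProbeHandler and LifecycleHandler). A simplified before-and-after sketch, with the action types trimmed down for illustration:

```go
package api

// Before the split: one type served both probes and lifecycle hooks.
type handler struct {
	Exec    *execAction
	HTTPGet *httpGetAction
}

// After the split: two distinct types that can evolve independently.
// For example, gRPC checks exist only on the probe side.
type probeHandler struct {
	Exec    *execAction
	HTTPGet *httpGetAction
	GRPC    *grpcAction
}

type lifecycleHandler struct {
	Exec    *execAction
	HTTPGet *httpGetAction
}

// Trimmed-down action types, for illustration only.
type execAction struct{ Command []string }
type httpGetAction struct {
	Path string
	Port int
}
type grpcAction struct{ Port int32 }
```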
A: Okay, so I feel there is general agreement that we may want to take this work on in this KEP.
A: Yeah, so another thing there are comments on; I think, Clayton, you brought it up again. We discussed it already, but it's worth rehashing so we can understand what we can do about it. If we introduce restartPolicy Always, it may be a little bit of a confusing semantic for people for init containers.
A: It's unclear; it may not be very easy to grasp for people used to regular init containers that Always will not be the same as OnFailure. They're used to init containers being restarted whenever they need to be restarted, so an init container is always restarted unless it has completed.
B: Can you guys hear me clearly? Yes? Okay, sorry, it's cutting out a little on my end, so I didn't catch all of it. Init containers are not regular containers, and there is confusion on the developer side, right? We've had accidents inside the kubelet from thinking of them as the same. So there are good reasons to split them and to treat them differently, but for all the places where they aren't different, we want to clearly communicate that they're the same. I feel like we're under-documenting how init containers start, restart, and behave today. People have incorrect assumptions and they go through trial and error. Restart policy is so fundamental to the difference between init containers and containers that, even though today we don't allow it to be configured, they are not the same and they behave very differently.
B: Reusing the same constant in two closely related places is just going to be a potential point of confusion. So I understand there's a deeper discussion here, and I may not be able to hear all the arguments for or against, but I think, just based on that, no one should ever be surprised when looking at the restart policy and figuring out what it means, and I think that means emphasizing the difference between containers and init containers. If there's ever a field to do it on, it's going to be this one, because they don't even have similar semantics, right? Like, we do restart init containers if the container runtime is refreshed; that's an implication that no one would get or expect to happen unless they thought about it, but it's something we'd need to communicate. So I'm a little worried about any reuse of constants on this type of field.
A: I don't think it's that different between init containers and regular containers. If the pod restartPolicy is Always, then init containers will be restarted and the regular containers will be restarted. The only difference is that once an init container runs to completion, its lifecycle is done, right? So it will stop being restarted. So it's kind of similar, once you understand that an init container's lifecycle is limited.
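A toy sketch of the semantics just argued (an illustration of the argument, not kubelet code): under pod-level restartPolicy Always, the only divergence is that an init container that exits successfully is done and never restarts again:

```go
package sketch

// shouldRestart models the behavior described above for a pod whose
// restartPolicy is Always.
func shouldRestart(isInit, exitedSuccessfully bool) bool {
	if isInit && exitedSuccessfully {
		// Init lifecycle completed: it stops being restarted.
		return false
	}
	// Everything else keeps being restarted under Always.
	return true
}
```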
A: And I think the confusion here is: if you put restartPolicy Always on a regular init container that you already have, then the worst that will happen is that it will keep restarting over and over again. So you introduce this noise, and, I mean, nothing is wrong with this noise; it's just that nothing will likely break. On the opposite side, you're unlikely to make a mistake by removing restartPolicy Always from a sidecar container. If you introduce a sidecar container, you likely intentionally put restartPolicy Always on it, and you're unlikely to remove it, so the confusion wouldn't go in the opposite direction. So what I'm trying to say is: even if there is confusion, this confusion will not break anything.
A: Yeah, when I'm doing this logical exercise in my head, I always get to the point where we would need a new field expressing this behavior, because, I mean, if you want to make it 100% clear, you would not reuse restartPolicy. If you want to make it very close to clear, while maybe introducing some confusion, then restartPolicy is a good enough word, and it doesn't introduce new concepts.
C: Yeah, I think I'm in agreement that restartPolicy is probably a good place for it. I've mostly gone back and forth: sometimes I see Always and it makes sense to me, because it's container level, and I know that container level is different than pod level; and sometimes, when I think about both init containers and main containers, I ask whether Always is really the same on them. I think I can't quite convince myself it's exactly the same, but it's really close. So yeah, I don't think I would challenge it either way right now; I don't know.
A: Yeah, and unfortunately, this kind of decision is really hard to take back. Once we go with it, it's not easy to walk back and change it later.
A: Yeah, this is a big problem. So, future use of the restartPolicy field: again, I introduced this section just to make sure we're all on the same page about why we would use the restartPolicy field rather than introducing a new field. And I feel that with any amount of detail I put there, it will get into an argument about the specific scenarios we're going to support and what those scenarios will look like. So I want to put in enough detail to make it clear that those scenarios exist, but at the same time make it uncontroversial, so that I don't put in details of how exactly they will be implemented. So I think what Tim wanted with this comment... oh, he replied.
A: Yeah, so Tim's suggestion is that maybe we can just list a few scenarios here and then, with those few scenarios listed here, we can put all the other details in a later section. I think I agree with it, and I tried to put the scenarios in the slides as well, to make it a little bit smaller.
A: Yeah, two scenarios that I came up with which would justify restart policies. If you have a Job with restartPolicy Never and you have initialization containers that may be flaky, you can mark the initialization containers with OnFailure while the Job itself keeps restartPolicy Never. I think it's a reasonable scenario that people may want to implement.
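A sketch of what that Job scenario could look like with the proposed per-container field. This assumes the container-level restartPolicy field from the KEP (it exists in k8s.io/api v0.28+, where it only accepts Always); the OnFailure value at container level is exactly what is being debated here, so treat it as hypothetical, and the image names are placeholders:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical container-level value under discussion in this meeting.
	onFailure := v1.ContainerRestartPolicy("OnFailure")

	spec := v1.PodSpec{
		// Job-level semantics: the pod as a whole is never restarted.
		RestartPolicy: v1.RestartPolicyNever,
		InitContainers: []v1.Container{{
			Name:          "flaky-init",
			Image:         "example.com/init:latest", // placeholder
			RestartPolicy: &onFailure,                 // retry only the flaky init step
		}},
		Containers: []v1.Container{{
			Name:  "worker",
			Image: "example.com/job:latest", // placeholder
		}},
	}
	fmt.Println(spec.RestartPolicy, *spec.InitContainers[0].RestartPolicy)
}
```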
A: Another scenario is if there are two containers: one runs to completion and cannot tolerate restarts, and another container tolerates restarts. Why wouldn't you just mark the run-to-completion one with restartPolicy Never, and the other one you can keep restarting, keep it killed, or anything like that.
A: Okay, yeah, Tim suggested to move it out, and I will do that. Clayton disappeared.
A: Okay, formulas: this is about the formulas for the calculation of resource usage and resource limits that we will expose, and Tim suggested putting it in the status and not in a metric. I'm not sure how that works. Do you know?
A: I'm not sure what the status part is. Can you explain, maybe?
C: Oh sorry, yes. Every Kubernetes resource, every YAML, has two halves: the spec and the status. The spec is the declared state of what you want the pod to be, and the status is feedback from the system telling you what the actual state of the underlying resources that the YAML represents is. So I think he's just recommending that we put a field somewhere in the status.
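A minimal sketch of the two halves being described; the field names here are illustrative, not the real PodSpec/PodStatus definitions:

```go
package sketch

// Every Kubernetes object carries a declared half and an observed half.
type object struct {
	Spec   spec   // what you want the resource to be
	Status status // feedback from the system about the actual state
}

type spec struct {
	DesiredReplicas int
}

type status struct {
	ObservedReplicas int // written by a controller, not by the user
}
```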
C: I don't know if Tim had a strong opinion there. He says not a metric, but we could maybe follow up on that.
A: Yeah, I don't think we want a metric, because the value will never change; it's basically a constant that we would expose.
C: If you have a metric that the system doesn't provide and you need it, there are things like kube-state-metrics; there are basically controllers out there that will aggregate data and provide metrics for you. If we think it's something a lot of people need, and it should be a normal metric, and we're willing to use up a little bit of memory for it, we could make the argument that it should be a metric. Sometimes even something that is static per resource is worth knowing, because then you can count it in an aggregate against all the resources and see what percentage of your cluster has some particular value. So if you see an argument to be made here for metrics, definitely make it. I'd have to think about it more, but I do agree it should be in the status.
A: So do you know when the status would be calculated? Is it when the pod is created, or when it's scheduled?
C: It depends. For a lot of resources, there's a controller that basically is actuating your resource: it looks at whatever the underlying actual resources are, tries to converge them with your desired state, and then updates the status. I'll be honest, I don't know as much about node, so I don't know exactly how some of these get updated. I don't know if there are controllers involved with some of these, like the scheduler; I think it's probably a mix. My guess is that some of this stuff is done by the scheduler and some of it is done by the kubelet, but I don't know for sure. We should clarify that, though: what's going to set the status.
A: Because I'm pretty sure node name would be in the status, right? I would expect node name to be in a status.
E: So, since this formula is fairly complicated, we would ask that it's in the docs as well; otherwise it seems you have to go dig through the code to find it. I work on Karpenter, a different cluster autoscaler. In particular, in the DaemonSet case, it's pretty important to know exactly what resources kube-scheduler is going to assume the DaemonSet needs at a point where your pod doesn't exist yet. So there is no status to look at, so you actually need to calculate it sort of a priori.
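For context, the "fairly complicated formula" referenced here, before sidecars, is usually summarized as: effective pod request = max(largest init container request, sum of regular container requests). A sketch in plain integers (think milli-CPU), leaving aside how sidecars would change it, which is what the KEP has to spell out:

```go
package sketch

// effectiveRequest computes the resource amount the scheduler assumes for a
// pod today: init containers run sequentially, so only the largest counts;
// regular containers run concurrently, so their requests are summed.
func effectiveRequest(initRequests, containerRequests []int64) int64 {
	var maxInit, sum int64
	for _, r := range initRequests {
		if r > maxInit {
			maxInit = r
		}
	}
	for _, r := range containerRequests {
		sum += r
	}
	if maxInit > sum {
		return maxInit
	}
	return sum
}
```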
A: Okay, so what we ended up with is naming.
A: I think everybody is on the same page about restart policies, though, and I think it feels good that at least this one is agreed on.
C: Sure. There just seemed to be a theme about documentation, so that's really just what people said.
A: Okay, I think we went through all the comments, and now I just want to go through the slides really quick. I want to talk about the pattern, like what the sidecar pattern is. Then I have some motivation for why this pattern doesn't work easily today, with some scenarios. Non-terminating Jobs, I think, is the biggest problem; after that is service mesh, where we need containers to ensure that every communication is mTLS, for instance; and then logs from initialization and startup are hard to collect.
A: Typically, people would dump logs somewhere, and then a log collector container will pick them up from that place. But that's a whole new path for picking up logs that needs to be implemented, or some people just want gRPC or whatever.
A: Anyway, this goes into the stages of what a service mesh is doing: it's getting some configuration first, then it goes through the initialization stage, mostly changing iptables, and then starting the proxy.
A: If it fails, it will be restarted, but while it's being restarted it will be not ready, so incoming traffic will not be sent to the pod, and outgoing traffic will not go through because the proxy is not running, so regular containers will need to do something about it. And then, on termination, the sidecar will ignore the SIGTERM, and it will be killed once the pod is completely terminated.
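A tiny sketch of the termination behavior described, as today's mesh proxies approximate it (an illustration, not any particular proxy's code): swallow the first SIGTERM and keep running until the runtime force-kills the container at pod teardown:

```go
package main

import (
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)
	<-sigs // first SIGTERM from the kubelet: deliberately ignored

	// Keep serving traffic; the SIGKILL at the end of pod termination is
	// the only thing that stops the proxy.
	select {}
}
```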
A: Then I went through the future scenarios for the restartPolicy, why we use the restartPolicy versus something else, and went through some alternatives that were rejected for the sidecar implementation. And then I'm going into what a sidecar container is not: we don't want to guarantee any interdependence between the sidecar and other containers. The example here is that when a sidecar goes down, we don't want anybody to attempt to make outbound connections, and one way to implement that, by the way, is to rewrite liveness probes for regular containers to point into the sidecar.
A: So instead, in a pod, you can point to a different port, and this port may be exposed by the sidecar. The sidecar will in turn call into the regular container, kind of playing like a proxy. That's maybe one way to implement this: if you want a guarantee that the regular container is killed when the sidecar is not running, you can do that, but it also wouldn't be guaranteed to happen within, like, a one-second delay or something.
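A sketch of that probe-rewriting idea (all ports, paths, and the health check here are assumptions for illustration): the app container's livenessProbe targets a port the sidecar exposes, and the sidecar only forwards the check to the app when the sidecar itself is healthy, so a dead sidecar eventually fails the app's probe. As noted above, the reaction time is bounded by probe periods, so any kill is best-effort, not within a fixed delay:

```go
package main

import (
	"log"
	"net/http"
)

// sidecarHealthy stands in for the proxy's own readiness logic.
func sidecarHealthy() bool { return true }

func main() {
	// The app container's livenessProbe points here instead of at the app.
	http.HandleFunc("/app-healthz", func(w http.ResponseWriter, r *http.Request) {
		if !sidecarHealthy() {
			// Sidecar is down: fail the app's probe so the kubelet reacts.
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		// Forward the check to the app's real endpoint (assumed address).
		resp, err := http.Get("http://127.0.0.1:8080/healthz")
		if err != nil {
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		defer resp.Body.Close()
		w.WriteHeader(resp.StatusCode)
	})
	log.Fatal(http.ListenAndServe(":15021", nil))
}
```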
A: So we don't do any work to make this really guaranteed. You can build your own tools to do that, but they're not ideal tools. And finally, sidecars and security boundaries: it's also something that Clayton suggested, and we discussed it at our first meeting. There are desires to make the boundaries between containers more precise.
A: Specifically, like other containers that can only read and write to a disk, so they cannot mount or unmount it; they have fewer privileges. Same with modification of iptables: only the sidecar can modify iptables. There are ideas on how to implement that and how to enforce it.
A: Some people are suggesting they want to split a pod into multiple parts, a multi-part pod, where some containers are defined in one object and some containers in another object, and then they need to be merged on the node, and each object can be somehow access-controlled differently. So only one object can have some privileged containers and stuff like that. But we don't plan to implement any of that, mostly because this problem goes beyond sidecars. Sidecars typically need this kind of thing, but other containers may also want the same. So if we tackle this problem, we'll probably tackle it for both regular and init containers and for sidecars together. And that's all; I think that explains the KEP in a little bit more visual form.
A: If there is nothing else, let's call it a meeting. I don't think Clayton is joining back. Let me check messages, maybe.
A: No, I don't have any messages from Clayton, so I will try to get his attention later, and let's see how we can resolve this naming thing.