From YouTube: SIG Node Sidecar WG 2023-02-21
Meeting notes and agenda: https://docs.google.com/document/d/1E1guvFJ5KBQIGcjCrQqFywU9_cBQHRtHvjuqcVbCXvU/edit#heading=h.m8xoiv5t6qma
A
Sidecar working group meeting, welcome everybody. I wanted to start with what we wanted to discuss. First item: sequencing of work. I started looking at the implementation, and if you implement it all in one shot, this will be a huge PR, not because it's a lot of functionality. In fact, it's not that much functionality; looking at all the places I need to change, it's not that much. But at the same time it's touching so many pieces of code, so I wanted to break it down a little bit, and in fact we already started this work. Todd here on the call is already doing one of those actions. So yeah, I wanted to discuss a breakdown, and maybe we can brainstorm more ideas. Is this font good enough? I copied it from some markdown thingy; I can make it bigger.
A
So I want to break the work down into pieces: what's needed before we change the API and add the flag, what we can do in terms of refactoring and unifying code; then what's the minimal work we need to do with the API change, a single PR that brings the necessary functionality; and then what we can do after the API change, adding extra pieces of functionality that are needed but are not the absolute minimum.
A
I think where we can start is: let's discuss what we can change with the API change, the minimal things we need to do. First, at startup, we need to wait for Started, not for completion, for those init containers that are sidecars. Then we need to keep reconciling the status of the sidecars, so it wouldn't be unexpected: every time we reconcile, it's expected that these containers are in the init container collection, not the regular container collection. On termination, when we look at containers being terminated and we still have a sidecar running, we need to terminate the entire pod, all containers, anyway. That is the third piece we need to do. And then the resource calculation update is also a small piece; I don't think we can take this PR without the resource calculation update. I'm not sure about the topology manager and CPU manager changes; they're not absolutely necessary.
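The minimal startup rule described above (wait for Started on sidecars, wait for completion on regular init containers) can be sketched roughly like this. All field names and helper functions are illustrative, not the actual kubelet code:

```python
# Hedged sketch of the startup sequencing rule: a sidecar (a restartable
# init container) only needs to reach Started, while a regular init
# container must exit successfully before the next one may begin.

def ready_to_proceed(c):
    if c["restart_policy"] == "Always":   # sidecar-style init container
        return c["started"]               # wait for Started, not completion
    return c["exit_code"] == 0            # regular init container: completion

def next_init_container(init_containers):
    """First init container the pod is still waiting on, or None."""
    for c in init_containers:
        if not ready_to_proceed(c):
            return c["name"]
    return None  # all init containers satisfied; main containers may start

pod_init = [
    {"name": "setup", "restart_policy": "Never", "exit_code": 0, "started": False},
    {"name": "proxy", "restart_policy": "Always", "exit_code": None, "started": False},
]
print(next_init_container(pod_init))  # proxy: still waiting on its Started field
```

Once the sidecar's Started field flips to true, the function returns None and the main containers may start.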
A
From my perspective, we may decide to put that into the "after" bucket, but we may want to tackle it in the same PR as well. I'm saying that because Todd is working on unifying the logic for resource calculation among multiple pieces, and the topology manager was one of those pieces, so maybe it will be a no-op and I'll handle the two in the same PR. I'm not sure yet.
A
I didn't look very deep into this code, so I think this is the absolute minimum we need to do in a single PR with the API change. It means that logic like restarting the sidecar on failure we can do later. I mean, the sidecar will be started in sequence properly and it will keep living, but if it fails, it will fail.
A
We can do that part later, after we complete the first PR. And then we can enable probes on sidecars also later. Matthias is already looking into the logic of making sure that the Started field will be set on init containers properly. It will be set before any probe, so it will be set unconditionally, so that sidecars can initialize properly. But it's not super critical for this PR; we can always say it's a separate PR that will come later, and it's not very complicated.
A
It should be quite targeted, in a specific area. Then pod readiness questions can also be postponed. We want to make sure that sidecars can affect pod readiness, but we can do it later; we don't have to do it before. And I looked at that logic; it feels to me that it will already work out of the box once we enable probes, but we still need to write some tests.
A
I think, just to make sure it's working properly. Lifecycle hooks can also be enabled later. It may happen that it will be easier to handle them along with this, but I don't think so, because to enable lifecycle hooks the biggest change should be API validation. Whenever we admit the pod, right now we don't allow init containers with a lifecycle hook, so we need to start allowing them.
A
So API validation will be fixed here, but implementation-wise I think it will just work, because the start-container function doesn't discriminate; it does the same logic for all of them. It's just that init containers don't have any lifecycle hooks today, which is why it does nothing for them, but it's ultimately the same logic, as far as I can tell by analyzing the code. And then the OOM score adjustment is a minor feature that we can postpone to the very end as well. So this is, in terms of the API change, the minimal change and what we can do after. In terms of what we can do before, we already discussed whether we want to fork init containers. I think we don't want to fork them; there was lots of feedback that it would complicate adoption quite significantly.
A
So what Todd is doing is pod resource calculation centralization, so there will be a single function that we need to update; that will make our PR much easier. And then Started: Matthias is looking into that. We want the Started field for init containers populated properly, and I want to discuss this more later in this meeting.
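As a rough illustration of why centralizing this matters, a sidecar-aware request calculation has to count restartable init containers in both the init phase and the long-running phase. This is a simplified sketch (it ignores the ordering of sidecars between regular init containers); the numbers are made up, in CPU millicores:

```python
# Simplified sidecar-aware pod CPU request: sidecars (restartPolicy Always
# on an init container) keep running, so their requests are added both to
# the init-phase peak and to the sum of the main containers.

def pod_cpu_request(init_containers, containers):
    sidecar_sum = sum(c["cpu"] for c in init_containers
                      if c["restart_policy"] == "Always")
    init_peak = max((c["cpu"] for c in init_containers
                     if c["restart_policy"] != "Always"), default=0)
    long_running = sum(c["cpu"] for c in containers) + sidecar_sum
    return max(init_peak + sidecar_sum, long_running)

init_containers = [
    {"name": "setup", "restart_policy": "Never", "cpu": 500},
    {"name": "proxy", "restart_policy": "Always", "cpu": 100},
]
containers = [{"name": "app", "cpu": 250}, {"name": "web", "cpu": 150}]
print(pod_cpu_request(init_containers, containers))  # 600
```

Having this in one function means the scheduler, the topology manager, and the kubelet all pick up the sidecar-aware rule from a single place.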
A
Okay, and then I will probably need to create a CI job for pre-submit and have it ready, so whenever we have end-to-end tests (I have some ideas for end-to-end tests as well) we can just run them on any PR and it will already be prepared for us. This is something I want to consider. Maybe we can begin next week, so we will have some time for all these changes later.
C
To understand: "before", "with", and "after". Are these three PRs, or different PRs that must merge together?
A
I think this will be a single PR, because we wouldn't be allowed to do the API change without any implementation. This is how Kubernetes operates. You cannot just say "I have this plan, all these PRs are lined up, I want to do the API change first"; it doesn't work this way. So we will need to, I mean, we can try, but likely we will be asked to do it all together. Yeah.
D
Yeah, I was going to add one thing to that: Sergey is absolutely right. There's a pretty strong line right now that we don't merge API changes without their corresponding implementation, so you usually end up with a pretty big PR. However, if you need to, you can use a branch, somebody's personal PR branch, and you can combine commits from multiple people there and then merge them, keeping the commits separate to retain attribution.
A
It's a good point, and...
D
And we can merge it anytime between now and freeze, which is, what is it, mid-March? Something like that. I don't want to say a date unless I know; let me find it.
A
Yeah, I hope as soon as possible. One problem right now is that one of the approvers is on vacation, a very relaxing one, and we will need to find some approver to get our changes in. It's not trivial these days, yeah.
D
Code freeze is March 15th, so if you assume you're giving your reviewers at least a week for the big PR, maybe more, we've got two to three weeks of development.
A
Yeah, so hopefully we can get this PR in by the end of next week. We can try, and again it all depends on whether we can merge this one as well, and the after changes. Do you agree with this split into the "with change" and "after change" buckets, or does it not make sense?
C
Sure. And then how do we line up with the one from Vinay? Because we are touching the same areas, right?
A
Yeah, I don't know. I really hope... we've been discussing since New Year that we want to merge it "tomorrow", and it's still not merged. I don't want to block ourselves on that. I think it changes very similar places, but not exactly the same places, since we don't touch the runtime at all, like the CRI; we only touch the lifecycle, mostly syncPod and such.
E
I was gonna ask: if you get the API change PR in but don't finish all of the "after", would you have to revert that API change?
D
Basically, the decision the approvers are going to make is exactly that: they're going to want the API change to be enough so that you wouldn't have to revert it. They're going to try and define that bundle such that if nothing else goes in, they're okay with it.
C
We can tag Jordan or our team and they can comment on it.
A
Makes sense. Okay, yeah, I will do that, and if you have any other ideas for more preparation work, that would be perfect. So what I wanted to talk about, yeah, this one, is the new Started field on init containers. Today there is a function called findNextInitContainerToRun; basically, that is the function we need to change.
A
We need to make sure that it is okay to start the next init container when the previous sidecar container is in the Started state. And I was mistaken: Matthias started this PR to initialize the Started field on init containers, which is great, but I didn't realize that findNextInitContainerToRun works on the runtime status, the kubecontainer pod status, rather than the v1 pod status, while the probe manager operates on the v1 pod status.
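The gap can be pictured with two simplified status shapes (field names here are illustrative, not the real Kubernetes structs): Started and Ready exist only on the v1 side, where the probe manager writes them, so sequencing code that sees only the runtime side cannot consult them:

```python
# Runtime-side view: only the container state and start time are known.
runtime_status = {"name": "proxy", "state": "running", "started_at": 1676999654}

# v1-side view: the probe manager fills in Started and Ready.
v1_status = {"name": "proxy", "state": "running", "started": True, "ready": True}

def sidecar_is_started(status):
    # Sequencing logic that asks this of the runtime status always sees False.
    return status.get("started", False)

print(sidecar_is_started(runtime_status))  # False: the field simply isn't there
print(sidecar_is_started(v1_status))       # True
```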
A
So the flow goes like that: there is syncPod, which calculates the changes needed to implement, and one of those changes is the next init container to run. It's all calculated on a struct, I think it's called PodStatus, which is the runtime representation of pod status. Then it calculates all the changes and starts applying them, and then there is a function to report...
A
...these changes back to the API server. I think it's called something like convert runtime status to API status. Either before that function or inside it, we call into the probe manager and ask it to update the pod status, the v1 pod status. That means this function doesn't have information about the Started or Ready state of a container.
A
So here's the question: to make findNextInitContainerToRun know about the Started state, we may want to refactor the probe manager completely to operate on the runtime status instead of the v1 pod status. It would mean that when we calculate the changes needed, we call into the probe manager to update the runtime status, and then we may even convert the runtime status into the API status without calling into the probe manager at all. So all the logic would move into syncPod rather than staying in the probe manager.
A
No, it doesn't even need to update the v1 status. We can move the logic of updating the v1 status into that convert-runtime-status-to-pod-status function, so it can be done outside of the probe manager.
C
Yeah, but there is a philosophical difference: the v1 status is the status regarding the probes, so in the v1 container statuses Started means the probe succeeded, while the runtime status is really the runtime status: the container started, but we don't know if the application has started inside the container.
A
Yeah, the runtime status just has container statuses; each container status has a State, and State is one of created, running, exited, or unknown. It has StartedAt, which is the time when we called into the CRI; we don't even know whether it actually started, we just know that we started it at this time. And we don't have any probe-related statuses there. No, no.
A
Right, yeah. So what I suggest: maybe we can refactor the probe manager, add Ready and Started state here as boolean variables, pass this status into the probe manager, and do that before we call into findNextInitContainer.
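The suggestion above might look something like this sketch (hypothetical shapes, not the actual kubelet types): copy the probe outcomes onto the runtime-side statuses before the sync loop asks which init container to run next:

```python
# Annotate runtime container statuses with probe results so that
# findNextInitContainer-style logic can consult Started and Ready.

def annotate_with_probe_results(runtime_statuses, probe_results):
    """probe_results maps container name -> {'started': bool, 'ready': bool}."""
    for s in runtime_statuses:
        outcome = probe_results.get(s["name"], {})
        s["started"] = outcome.get("started", False)
        s["ready"] = outcome.get("ready", False)
    return runtime_statuses

statuses = [{"name": "proxy", "state": "running"}]
annotate_with_probe_results(statuses, {"proxy": {"started": True, "ready": True}})
print(statuses[0]["started"], statuses[0]["ready"])  # True True
```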
A
Otherwise, I don't know how to get the probe status from the v1 pod status and pass it into the syncPod loop before we calculate the changes needed.
A
Are there any opinions here? I know, I looked at this code very recently, just because of this change, and I expected that the change wouldn't be significant, but now I realize that we need to educate syncPod about probe status; before, we never needed probe status for any lifecycle calculations.
A
Okay, but do you want to drive this and create an issue, or do you want me to create an issue?
A
So here's what I tried: I formulated the issue in very vague terms in ChatGPT, "create me a good first issue for Kubernetes", and it produced something completely broken, so it didn't work.
A
Okay, so I didn't look into termination yet. I think I looked at it partially when we wanted to implement the terminate-pod logic; that logic was such that when containers stopped, we wanted to kill the entire pod. So I looked briefly at that logic at the time. I haven't refreshed it in my mind yet, but I think it will be pretty straightforward to understand when we need to terminate.
C
It will be very similar to what Aditi did a long time ago when we had the main-container proposal, but I think we just apply that logic to init containers instead. I don't think it's difficult.
A
No, in the first stage we didn't want to order termination; we didn't want to complicate the logic in any way. What we want to do in the first stage is make sure that when all other containers have finished, we kill the sidecars.
A
So this is the key lifecycle behavior. What we discussed for sidecars is that we not only kill the sidecar containers, but we also wanted to kill them with a different grace period, or kill them in order. So if you have, say, a service mesh and logging, you can define what goes first and what goes last; logging may depend on the service mesh, so logging needs to be killed first and the service mesh very last. That kind of logic will be postponed to beta, but we still need to terminate the pod when all containers except sidecars are stopped.
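The alpha-stage termination rule described above reduces to a small predicate; ordering and per-sidecar grace periods would layer on top later. A minimal sketch, with made-up field names:

```python
# Terminate the pod (sidecars included) once every non-sidecar container
# has exited; sidecars alone should not keep the pod alive.

def should_terminate_pod(containers):
    non_sidecars = [c for c in containers if not c["sidecar"]]
    return all(c["state"] == "exited" for c in non_sidecars)

pod = [
    {"name": "app", "sidecar": False, "state": "exited"},
    {"name": "proxy", "sidecar": True, "state": "running"},
]
print(should_terminate_pod(pod))  # True: only the sidecar is still running
```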
A
Yeah, so I think the pod resource calculation is already, is it LGTM'd, at least? So we need to find an approver.
E
Yeah, there are two PRs. There's one for the calculation, where we're still waiting for a review, and there's another one that adds the total resource requests field to the pod, which is the summation of all the requests. I think that one could still use help from someone.
A
Okay, I will try to find somebody. Okay, the last topic I want to discuss today: I was looking for ideas on how to write a good lifecycle test. I was looking at some existing tests; most of the tests look at the status of pods, so they do some logic and then check the status of a pod. Well, thank you, I'll paste it.
A
Sorry about that interruption. Anyway, yeah, lifecycle tests. When we write lifecycle tests, we typically do some actions, like start a pod with a probe, and then look at a result: maybe whether the status updated to Started, or the status updated to Running, that kind of test.
A
When we start looking at more complicated logic, say I want to make sure that a lifecycle postStart hook will block execution of the next container, this is quite tricky to implement. So I was thinking, yeah, even a very simple test: let's say we have an end-to-end test with a sidecar container and two containers.
A
One of the test cases will be that the next container starts only once the sidecar has started completely.
A
So I was thinking how to implement this test. If you only look at pod status, we either need a timeout inside the containers that slows a container down for a specific period, and we make sure to check the status of the container during that window, or we need to send some message down to the container saying "I already validated the status, now you can proceed and be started", that kind of thing. Another option would be, inside the container...
A
...we write some file; every container in the pod writes to the same file or the same log somewhere, and then we read this log file and validate the order of entries. So in this case we could have the sidecar be active and write something for maybe 30 seconds, and then validate that the main container, once it starts, writes something in the log saying it started.
A
We should never see any other "started" entry until after the sidecar container has its entry in the log file. So either we send a signal to the container, or we write a file from within the container and then validate it, or we do some very crazy timing calculations with timeouts and make sure that we check status inside the timeout window. I didn't find many good examples; I found many examples of the first approach.
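The shared-log idea could be checked on the test side roughly like this (a sketch, with a made-up log format where each container appends a "<name> started" line):

```python
# Validate startup ordering from a shared log: the sidecar's "started"
# entry must appear before the main container's.

def startup_order_ok(log_lines, sidecar, main):
    starts = [line.split()[0] for line in log_lines if line.endswith("started")]
    if sidecar not in starts or main not in starts:
        return False
    return starts.index(sidecar) < starts.index(main)

log = [
    "proxy started",
    "proxy heartbeat",   # the sidecar keeps writing while it runs
    "app started",
]
print(startup_order_ok(log, "proxy", "app"))  # True
```

The appeal of this shape is that the assertion is about ordering, not wall-clock timing, so it avoids the flaky timeout arithmetic discussed above.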
A
I don't really like that approach, but it's what we have. So I was wondering if anybody has a suggestion for how to write a good end-to-end test, with either signaling into the container, or writing some files from within the pod, from all containers in the pod, and then consuming that file in the test. If you have any ideas on the best way to do that, it would be great; we can start writing all the tests this way.
E
Either appending to the same file, or writing to different files in the same mounted directory, so each has an individual track. Yeah, lots of the same issues with timeouts: they just end up being flaky, and then you have to add artificial delays; it seemed like a lot of the flakes are from timeout-related things.
A
Maybe. With the file approach, you will have a buffering issue when...
E
Well, if you write to different files, so your sidecar writes to one file and your primary container writes to a different file, at the end you can check across them, or even have the container check that the sidecar's file exists, or else fail.
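The per-file variant might look like this sketch on the test side: each container drops a timestamped marker file into a shared directory, and the test compares the timestamps. The paths and file format are made up for illustration:

```python
import os
import tempfile
import time

def write_marker(directory, name):
    """Record the moment a container reports itself started."""
    with open(os.path.join(directory, name + ".started"), "w") as f:
        f.write(repr(time.monotonic()))

def started_before(directory, first, second):
    def read(name):
        with open(os.path.join(directory, name + ".started")) as f:
            return float(f.read())
    return read(first) <= read(second)

d = tempfile.mkdtemp()
write_marker(d, "proxy")   # the sidecar would do this on startup
write_marker(d, "app")     # the primary container, afterwards
print(started_before(d, "proxy", "app"))  # True
```

Separate files avoid the interleaved-buffer problem of a single shared log, at the cost of the cross-file comparison noted above.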
A
Yeah, with a single file I don't think there will be many buffering issues. Say one container wrote something, then the next container wrote something, and then the first container's buffer flushed late; I think we can avoid that by flushing promptly. With different files, I agree that it may be tricky, yeah. And even with the file approach, I don't know what the best file to write would be and where the file should live. We can try to write somewhere on the host, mount some host folder, but then we would require all the pods to run in privileged mode, which is not ideal either.
A
Okay, yeah, if you find any examples of good lifecycle tests... I will also ask around if anybody has ideas on how to write them well. I think we can implement these tests without complications like advanced techniques, but if you can find a good technique to use, that would be perfect; I think that would be what we need.
E
Like updating a config map and then using resource versions, so you know... you avoid a false positive that way, but then everybody needs permissions at the...
A
Yeah, it makes it complicated, yeah. I was also thinking about, what is it, opening some socket back to the test.
A
Yeah, and I was also thinking about agnhost. Do you know the agnhost container that we use in many tests? It's an image that we create, and this image has different functions: it makes HTTP requests, it may expose a gRPC server, it may expose different things for different purposes, and you can call whatever you want to do with it.
A
Maybe we can have some command server there, and when we start a container we can send some HTTP signals there. But it will be a little trickier when you check, for instance, some lifecycle hooks, because during lifecycle hooks there is no port to connect to. So it would be better if the container connects back to us rather than us connecting to the container, and yeah, that creates another complication.
A
Someone would know: is there a good way to test it?
D
Yeah, agnhost was the only thing I knew of that was useful. We used it for webhooks, but I don't know anything about the startup probes.
G
Hey guys, I was just gonna quickly chime in. My name is Ahmad, DevOps infrastructure engineer, big fan of the CNCF, user of most of the products, contributor of not nearly as many as I would like to, and that's like one plus, I would say. Anyway, I'm trying to make my rounds to get a feel for the different SIG groups, really just kind of being open.
G
There are some that obviously all of us are more interested in than others, but you never really get a good feel for how things are running and what challenges each one is facing until you jump in. So it's awesome that they're open to the public, just general open source, the SIG groups across Kubernetes, and maybe some other projects as well.
G
I'll see you guys again in this one, super cool stuff. So yeah, looking forward to being able to contribute at some point, but it's been great just to get a feel for how you guys are operating this SIG and seeing some of the work that's going on.
A
Yeah, it's current technology, you wouldn't know, right. Yeah, I just want to say that the SIG Node meeting is in 20 minutes. This is a working group for a specific stream of work, so it's not the main SIG Node meeting, but you're welcome to attend either one, welcome.
G
I got it, okay: it's the sidecar working group, a part of the Node SIG. Yes, yeah.
G
Awesome, awesome, thanks for the clarification, you guys. Cool, cool. And that one's coming up, you said?
D
And if you ever get stuck on next steps or how you can help, always ping people on Slack; there are huge work streams to be...
G
Done, no doubt. Got it, will do, thanks.