From YouTube: SIG Node Sidecar WG 2023-02-28
Description
Meeting notes and agenda: https://docs.google.com/document/d/1E1guvFJ5KBQIGcjCrQqFywU9_cBQHRtHvjuqcVbCXvU/edit#heading=h.m8xoiv5t6qma
A
Hello, hello. It's the sidecar working group meeting. Welcome, everybody. It's Tuesday, February 28th. I have the agenda up first.
B
Yeah, so some background for me: my name is Dave Protasowski, I work at VMware. I've been contributing to the Knative project for the last few years, and I'm kind of the Serving working group lead, which is kind of where people come to bring their containers to run in, like, a serverless way. We make extensive use of a sidecar, so I'm just coming here to highlight what we use it for and to make sure that whatever changes you're proposing help us, which is pretty much...

B
...why I'm here. I'll just go through the things quickly. What is Knative Serving, specifically? It's request-driven compute: requests come in, things spin up, they serve the requests, and they spin down. And then we also abstract away a lot of networking details, so users sort of just define traffic-splitting rules and other routing things, but they don't actually have to worry about, like, how do you program Istio to do this stuff?

B
How do you program Contour and Kourier? So we provide a thin abstraction over that. All that stuff predated Gateway API, but eventually we'll switch to that when it matures. So in terms of our CRDs, we kind of have this top-level Service — the name's not great, you could call it application or whatever — but essentially that boils down to a Route object and a Configuration as you make edits.

B
We produce Revisions; that allows users to roll back to a working revision, do traffic splitting across them, et cetera, et cetera. So, as I mentioned, you can set up routing this way, as an example.
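For context, a traffic split across two Revisions of a Knative Service looks roughly like this (the names and percentages here are illustrative, not from the meeting):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello            # illustrative
spec:
  traffic:
    - revisionName: hello-00001   # previous, known-good revision
      percent: 20
    - revisionName: hello-00002   # latest revision
      percent: 80
```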
B
So we have a sidecar. What I highlighted here is: we eventually create a Deployment from the Revision, reconciling it all the way down, and we configure a sidecar with it. And what does it do? Well, it does a bunch of things.

B
So requests are routed to the sidecar, obviously. The reason we do this is to offer up metrics to our autoscaling system; it uses sampling and things like that to figure out the load, and we'll spin up new pods when there's more. And probably the big one, too, is we kind of want...

B
...to have this concept of concurrency, like: hey, this function or pod or user container can only handle one request at a time. You can set it to five, ten, a thousand, et cetera, et cetera, depending on your need. This sort of enables, like, the FaaS use case, and the sidecar is what enforces that. This one's a bit interesting, as part of performance.

B
There are other nuances with this one, because we might report ready, but then there's also the node control loop — I forget what it's called — and if the timing of that doesn't line up, then you can still get delays in readiness.
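The concurrency limit being described corresponds to the `containerConcurrency` field on the Knative Revision spec; a minimal sketch (image name is illustrative):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello            # illustrative
spec:
  template:
    spec:
      # The queue-proxy sidecar enforces at most this many
      # in-flight requests per pod; 0 means unlimited.
      containerConcurrency: 1
      containers:
        - image: example.com/app   # illustrative
```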
B
That's why we kind of hope that subsequent probing would be helpful for us in the future — then we could hopefully drop that from our sidecar, because we really want aggressive probes on start. And probably the big one is: our sidecar is responsible for draining requests during shutdown. We do this even when the user's application or container doesn't know anything about how to do a graceful shutdown, and I want to dig into that a little bit.

B
So when you drain a request: a request comes in, we've proxied it to the user container, and now we have a deletion for whatever reason. What we actually do is rewrite the preStop hook on the user container to hit our sidecar, and what that means is it prevents the TERM signal from going to the user container.

B
And then this lets our sidecar not trigger the shutdown of the user container until we've actually properly drained all the requests. In addition to draining, our queue-proxy will fake readiness, because there's a delay between your pod being not ready and it disappearing from the network layer.

B
First, it has to show up in the service endpoints, and then whatever networking layer you have, like Istio or Contour — it takes time for them to update their routing rules to not include it in the load balancing. So eventually, after the requests go away, we return from the preStop hook, then the TERM hits the user container, and then eventually the whole pod will just shut down gracefully. So I'm kind of here to highlight what I would say are our sidecar asks.
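A rough sketch of the rewritten preStop hook being described, assuming a hypothetical drain endpoint on the sidecar (the port and path here are illustrative, not confirmed in the meeting):

```yaml
spec:
  containers:
    - name: user-container
      lifecycle:
        preStop:
          httpGet:
            port: 8022              # sidecar admin port (illustrative)
            path: /wait-for-drain   # blocks until in-flight requests finish,
                                    # delaying TERM to the user container
    - name: queue-proxy             # the sidecar that proxies traffic
```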
A
So, can you go back — do you already set the termination grace period seconds for both containers, or...?
B
Yeah, we do as well. So we actually set our termination grace period. In our Revision, at the top level, we have a timeout for the request, and we kind of set our grace period — I think, at least; I have to double-check this — to at least what the timeout is.

B
So if the user expects their request to be handled within, say, a timeout of 60 seconds, we have to set the termination grace period to that 60 seconds — maybe even double it, because a request could come in after the deletion timestamp because of the lag in the network programming.

B
So that's why our termination grace period is set much larger: to handle a late request potentially hitting a pod that's terminating, because it could be in flight. And then we want to give the request its time to succeed and have it handled during the drain.
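As a config fragment, the relationship just described looks roughly like this (the numbers are illustrative):

```yaml
# Revision-level request timeout (Knative)
spec:
  timeoutSeconds: 60

# Corresponding pod spec: grace period >= timeout, ideally ~2x,
# to cover requests that arrive after the deletion timestamp
# because of network-programming lag
spec:
  terminationGracePeriodSeconds: 120
```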
A
Is
there
some
Can
customer
configure
their
termination
grace
period
like
for
additional
cleanup
all.
B
Right. So I guess one thing I didn't really highlight is: even though we expose the pod template, we're not giving people the full pod spec. We don't want them to do that, because the idea is we'll do all these defaults for you, and then, as a user, I would probably just specify — I think maybe even here, like...

B
This is why we fake it — don't deliver the TERM signal, let it drain, and then do the TERM signal — because, in theory, we add a bit more robustness to user containers, as long as we can do the shutdown ourselves. Does that sort of answer your question?
B
Yeah, so what I've written down as some asks — what prompted me to come to this is that I remember in the doc there was a recommendation for the signal order, and then I didn't see it in the KEP. So I would want to highlight: hey, the order of TERM signals is important for us; we currently rely on it.

B
Thankfully, the preStop hooks existed, so that we could actually halt the TERM on the user container, handle it in our sidecar, and have some mechanism for when TERM signals hit certain containers.
B
This is why I kind of fleshed out: hey, this preStop hook is very useful. It'd be cool to provide additional context, like: are we restarting because the pod is shutting down, or are we restarting because liveness is failing, and things like that?

B
We don't have a way to do that. There are some recommendations like, oh, you could just hit the API and read the pod object when you get a TERM signal, but at a large enough scale that doesn't really work, especially when there's a whole bunch of things shutting down. I think the other one...

B
...that's probably important for us is migrating from our old-style deployment, which has our sidecar container in the containers list, given that now, I think in the proposal, the sidecar is in the init block.

B
We need a way to migrate our existing users from that older deployment to the newer one. I don't know how — maybe it's more like: can I take an old deployment and just literally shift that one container into an init container with the restart policy, and does that work? I hope it does. Maybe that's something that should be part of, I think, a beta or GA criteria. I just don't recall — I haven't read the KEP in full detail.
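The "sidecar in the init block" shape from the KEP under discussion looks roughly like this (container and image names are illustrative):

```yaml
spec:
  initContainers:
    - name: queue-proxy
      image: example.com/queue-proxy   # illustrative
      restartPolicy: Always   # marks this init container as a sidecar:
                              # it starts before the main containers and
                              # keeps running alongside them
  containers:
    - name: user-container
      image: example.com/app           # illustrative
```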
A
This
last
point
is:
can
we
can
you
stop
on
here
for
a
moment
so
you're
saying
about
deployments
so
sidecars,
mostly
solving
jobs
problem
when
I
restarted
policy
is
never,
but
in
your
case
it
feels
like
you
have
a
deployment
when
nobody
starts
so,
like
I
mean
obviously
starts.
So
what
does
Sidecar
solves
to
you
like?
What's
the
biggest
problem,
you
want
us
to
solve.
B
And then this last one's not really an issue, but just something I've noticed.

B
Sometimes users need to have their pod configured a certain way. For example, I think in order to use IAM on AWS to reach out to some services, you have to set, like, fsGroup in the pod security defaults. But the challenge there is that we have this split: this sidecar is really sort of a hidden component to the user; they're not meant to go and poke at it.
B
But
setting
these
like
pod
security
context
properties
can
influence
that
that
that
sidecar
I'm
not
just
highlighting
that
as
a
as
a
thing
I,
don't
know
what
the
solution
there
is.
We
haven't
really
been
bidding
it,
but
I
could
imagine
like
if
this
could
be
a
problem
in
the
future,
but
it's
not
like,
like
maybe
the
other
example,
is
istio
needed
to
do
IP
tables
rerouting
in
the
past
and
for
their
Sidecar,
which
I
think
needed
potentially
like
some
privileged
access.
B
So
if
you,
for
example,
in
your
pod
security
context,
said
like
disable
all
root
users
or
something
like
that,
I
don't
know
if
you
can
do
that
the
Pod
level,
but
then
that
would
probably
break
the
SEO
sidecar
as
an
example.
So,
like
I,
don't
know
of
a
way
to
kind
of
say
like
hey.
This
pod
and
its
group
of
containers
is
controlled
by
the
user,
but
then
there's
sort
of
this
like
sibling
pod,
that
it
is
like
a
side.
A
As
you
pointing
out
this,
is
it's
a
vague
idea
that
needs
to
happen,
but
then
you
don't
have
any
specifics
yet
yeah.
B
My only thing there is: our only use case is for routing traffic. So then really what we just need is two pods in the same network namespace — but I don't think that's a sidecar thing, that's something else. So that's it, that's all I have.
A
So
you
you,
don't
let
your
customers
control
any
initialization
ordering
right,
so
installation
doesn't
really
matter
for
you.
B
No, not really, but there have been requests for init containers, so in theory we would probably want to place our sidecar at the top. It'd be interesting to see, because right now we also let users bring multiple containers as part of Knative — like, bringing two containers. I don't really know what the use cases are there, because most people just want to run a single container, and that will work for the majority of workloads, with the exception of — actually, no, I don't have a good example there.

B
But there are some semantics we need to figure out, like: how do we interact with other containers as well? What I mean by that — why I'm mentioning it — is, to your question: what if a user then brings another — not a container, but another sidecar for some reason, like Dapr as an example, in Knative, for whatever reason? I don't know if that works or not; what are the implications? I don't know.
B
I
would
then
hope
that,
like
hey,
maybe
our
we
can
position
our
sidecar
first
and
then
their
second,
maybe
that
all
works
I,
don't
have
a
good
answer.
I
guess
to
your
question:
okay,.
A
Startup is one, but the termination is also important. Like, even now, we are not sure how to solve the logging and service mesh problem: how to express that one should go before another, and how to do it generically, so you can inject each of them by themselves, but both of them together as well — and ideally the same configuration should work for both of them. Yeah.
B
Yeah, that might be okay. I think as long as we start before, and can manage the container that's not the sidecar and not an init container, then we should be okay. So I don't think our use case is that complex.
A
Thanks for coming — okay, thank you. Yeah, we'll go into the KEP and implementation. If you want to, you can stay, but feel free to drop off.
B
I
get
I
guess,
as
decisions
are
made,
what's
the
best
way
to
track
like
new
things,.
A
We
generally
will
maybe
you'll
announce
like
I,
really
hope
we
will
get
to
implementation
this
at
least
and
then
we'll
announce
it
on
slack
for
sure,
as
Matias
said
in
one
of
the
PR's
code,
freeze
is
already
pricing
on
us
very
very
firmly,
so
we
may
there
is
a
chance.
We
can
sleep,
but
I
I
think
I
hope
that
we
can
push
through
okay,
cool
yeah.
B
And I guess the other question I have — because I skimmed the KEP before the call — is: in the doc you had an order for the TERM signals, but in the KEP there isn't one. Are you going to firm up an order, or provide some optionality there, prior to, I guess, alpha, or...?
A
Is
that
something
yeah
Alpha
only
concentrate
on
Startup
ordering
can
shut
down
for
jobs,
so
this
is
a
main
promise
for
Alpha
and
shutdown
ordering
between
containers
and
ordering
among
side
cars.
This
will
be
will
be
in
beta
I,
see.
Okay,
unfortunately
I
mean
maybe
it
will
help
you
maybe
like
not
but
you'll,
be
you'll
appreciate
any
feedback,
especially
when
we
start
bait
and
like
termination
we'll
have
a
design
fleshed
out
completely.
Then
we'll
appreciate
your
feedback.
Yeah.
A
All
right,
thank
you.
I
will
go
ahead
and
share
my
screen
now.
A
Yes,
we
see
okay,
yeah,
so
yeah,
following
up
from
last
week,
I
was
foreign
extremely
torn
into
a
different
direction,
so
I
didn't
participate
too
much,
but
I
felt
all
the
all
the
issues
that
I
promised.
So
this
is
a
sidecar
branch
and
I
know
that
it's
already
have
a
few
commits
for
I
mean
it
has
a
br
for
API
change.
A
Let
me
check
really
quick
yeah.
There
is
a
PR
yeah,
so
yeah
I
I'll
probably
go
ahead
and
nurse
this
and
rebase
a
branch.
So
this
is
just
introduction
of
the
restart
policy
field.
I
need
to
go
through
this.
A
It
should
be
all
good
I
glanced
over
it.
It
was
good,
so
I
will
probably
just
merge
it
in.
If
you
want
to
review,
please
go
ahead.
The
CPI
change.
A
Then
this
is
Uber
issue
just
to
get
through
and
Tim
Hawkins
was
on
vacation
last
two
week,
so
I
will
poke
him
and
I.
Think
Ronald
was
on
a
vacation.
He
may
still
be,
but
he
said
that
he
may
come
up
come
back
this
week
for
just
couple
approvals,
so
I
will
see
if
I
can
get
anybody
approve.
This
proposed
course
of
action.
A
I think the dedupe one is still in progress, or whatever — yeah, so we need approvers, right. So this one, I think, was LGTM'd at some point. Yeah, this one is LGTM'd, and this one, I don't — I thought... oh, I was waiting for this one. Yeah, fair point. Okay, yeah, this addresses all the concerns, I think, that Francesca had at the moment when I reviewed it, and it seems that Francesca agreed with this change. Actually, we found a bug, right? Which is cool.
E
I,
don't
think
so,
so
they
can.
They
can
both
go
in
yeah.
The
D
Duke
Pottery
sources
is
going
to
save
a
lot
of
work.
There's
like
calculations
all
over
the
place,
so
it'll
save
time
for
both
of
this
and
then
the
in
place.
Pottery
sizing
one
as
well.
D
So
on
yeah,
this
one
was
just
like
modifying
the
the
problem
manager
to
always
work
on
the
Cube
runtime
status.
Something
like
this.
Instead
of
using
the
the
the
API
V1
status.
D
So
the
the
pr
looks
good
now
from
me.
The
only
issue
I
have
is
with
the
the
unit
test,
yeah
the
problem
manager
test
and
that
one
needs
some
refactoring,
but
every
time
I
try
to
I
try
something
it's
not
quite
working
and
every
time
it
takes
me
like
half
an
hour
to
enter
into
code,
then
I
try
something
it
doesn't
work
and
then
I
tried
the
day
after
and
I
forgot
everything
that
I
that
I
tried
so
I'm
a
bit
struggling
with
it.
D
Now
the
the
the
thing
is
so
the
test
is
based
on
you
set
some
some
statues
like
a
preview
status,
an
existing
status
and
then
a
modification
on
this
status,
and
then
the
expected
one-
and
all
of
this
is
based
on
the
API
V1
status.
D
But
now
that
the
problem
manager
is
working
on
the
coupon
time
status,
so
I
need
to
to
change
the
the
input
for
the
for
this
method
to
make
something
relevant
and
I'm
always
struggling
to
to
make
the
right
changes.
So
yeah
I,
don't
know
if
someone
can
wants
to
have
a
look
or
ping
me
at
the
I
will
still
try
for
the
next
few
days,
but
yeah
for
the
moment.
I'm
kind
of
stuck
okay
and.
D
The worst thing is that if you run the real tests, where you have the prober manager and the kubelet running and everything, it works, because the mechanism of using the kube runtime status to set the API status is working well. So it's only this unit test that needs some refactoring to take the new logic into account.
A
And
in
terms
of
changes
here,
did
you
change
the
place
we
update,
so
it
used
to
be
in
this
generate
API,
Port
stats
right.
So
now
now.
A
Before
file
next
init
container
right-
yes,
okay,
so
how
this
status-
okay,
yeah
I,
will
see
like
as
I
said
last
week
was
like
super
crazy
for
me
and
I
I
plan
to
spend
more
time
on
sidecars
this
week.
So
hopefully
we
can
pin
me
on
stock.
If
you
have
a
continuous
trouble,
I
will
try
to
help.
D
Okay,
but
at
least
if
someone
wants
to
review
everything
except
the
the
program
manager
test,
it's
it's
it's
ready
now.
I
just
need
to
tweak
this
test
to
have
something
meaningful,
and
the
problem
is
that
I
I
wrote
also
to
to
Clayton
in
the
in
the
comments.
Is
that
every
time
I
change
it?
It
seems
to
me
that
the
test
is
irrelevant,
so.
A
Yeah, absolutely — okay, yeah. If anybody wants to help with unit tests, please help Matthias.
D
Okay, okay, thank you. You can ping me tonight — no, not too late; if you ping me in the next four or five hours, I will probably reply, but not later than that. But I can definitely follow up tomorrow morning.
E
Yeah, sure, yeah — just paste it there in the chat.
E
So this was just taking your idea from last week of having the containers basically write to a common log file, and then just doing an analysis of the log at the very end to figure out what went on. So I worked that up in this one, and it turned out the tests are a lot easier to write. So basically you have some container configs; you just sort of configure what kind of container this is.

E
How long is its delay, what's its exit code, et cetera, et cetera. And then, if you scroll down a little bit farther, there's a sample of the output file it generates — yeah, so that's kind of what the output file looks like. It's just a single file in a shared volume that's mounted to all the containers, and then you can just do some tests, like, you know, ensure that init...

E
...one starts before init two, and then init one exits before init two, et cetera, et cetera, all the way through. And then, once you have sidecars in there, you can sort of imagine: okay, ensure that your sidecar starts, you know, before the init containers that come after it.
E
Scroll down — yeah, so these are the pods. So there's that shared volume. I think I did a — yeah, it's a hostPath directory, so I think I ran mktemp to create a shared directory on the host and then mounted that into the pod.
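A minimal sketch of the shared-log-file setup being described (the paths, image, and container names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ordering-test        # illustrative
spec:
  volumes:
    - name: shared
      hostPath:
        path: /tmp/ordering-test   # created with mktemp on the host
  initContainers:
    - name: init-1
      image: busybox
      # Each container appends timestamped start/exit markers so the
      # test can verify ordering by parsing the one shared log file.
      command: ["sh", "-c",
        "echo \"$(date +%s) init-1 start\" >> /shared/log; \
         echo \"$(date +%s) init-1 exit\" >> /shared/log"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: main
      image: busybox
      command: ["sh", "-c",
        "echo \"$(date +%s) main start\" >> /shared/log; sleep 5"]
      volumeMounts:
        - name: shared
          mountPath: /shared
```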
A
So
you,
in
your
case,
like
oh
test
and
no
training
on
the
same
cost
right,
I'm,
sorry,
test
and
node
are
running
on
the
same
host.
That's
how
you
do
it.
E
I'm trying to think — yeah, so yeah, it assumes that the...
E
I did think of one way of working around that — and this gets a little harder to work out — which is basically to have some pod that cats that file off at the very end of the test, so that you can then read it, somehow transfer it. But yeah, if you're trying to launch pods on a different host from where the tests are running, yeah, you can't share the same file.
A
Okay,
but
I
think
this
may
work
for
etoe
node
holder,
because
that's
how
we
run
CI
at
least
yeah.
E
And
even
in
the
remote
runs,
your
tests
are
still
running
on
the
Node,
so
that
should
be
good.
A
But
not
for
other,
like
Slash
node.
In
that
case,
different
machine.
C
A
No
I
think
it's
perfect
for
this
folder,
it
should
be,
should
be
working
fine.
Does
it
run
normally
like
did
you
check
the
test
actually
shows
up
in
this
CI.
E
A
I'll
yeah,
probably.
E
A
A
Yeah
I,
don't
remember
which
one
we'll
have
it,
but
if
it's
not
serial
or
anything,
it
will
be
fine.
Yeah.
D
I
I
have
to
say,
I
had
a
look
when
you
you
pushed
your
latest,
commit
like
yesterday
or
Sunday,
and
don't
remember
it's
it's
just
great
what
you
did
like
really
nice
interface,
everything
cool.
Thank
you.
Thank
you.
A
Okay, and I think what also needs to happen — I will try to do this: I'll try to do the CI for sure, and maybe I will try your approach with end-to-end tests, to write some end-to-end tests, so we have some framework before we send the big PR here.

A
Okay, then I don't have anything else. I think I will try to find approvers for the PRs and start sending my PRs, so maybe by end of week we'll have more progress.