From YouTube: Meshery Development Meeting (Dec 22nd, 2021)
Description
Meshery Development Meeting - December 22nd, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A
Hello, everyone. We'll get started shortly, so let's take some time to add our names to the attendees list. Let me share the meeting minutes — and, yeah, nice. It's also a good time to add any topics that you have to discuss on the call, or work you have been doing that you would like to show on the call. If you have any, feel free to add them to the meeting minutes. Let's give it another minute to let others join.

A
I think we have a couple of new folks on the call. So, Abhijit Jain — is this your first time on the call?

A
Nice, nice to see you again.

A
All right, so we can start off this meeting with a progress update on the refactoring Meshery UI initiative.

A
All right, okay. Then, related to that, maybe Richa can start us off with some of her work. Richa?
D
I started out on good first issues and took three issues, which are all merged, just to get familiar with the code base. After that, I got introduced to the PDF/Excel work, and then I took the calendar component and made it an independent calendar component, which is also merged. As of now, I am working on the autocomplete — adding an autocomplete component using the downshift dependency.

D
That is also complete, but I am facing some issues with the autocomplete, so I am working on it, and there are also a couple of comments on that PR that I need to address. And this morning I hit a very weird issue: it says that a migration is needed, so I'm in touch with Nitish. I think I will be working on the migration stuff as well, because right now I am not able to develop Meshery locally on my system.
B
Yeah, so Aditya or Monov, you guys might have some advice there. So, migration in terms of the front end — oh, by the way, Richa, nice to talk to you, nice to meet you.

B
Not wasting any time — yeah, cool. So, migration, though: certainly we have data migration that happens for info that's persisted to Meshery's database, but this migration is maybe because you're working with a newer version of Next.js, a newer version of React, a newer version of Material UI, maybe.
C
Because MUI v5 is probably removing the breakpoint system and the Hidden component, and there are a lot of components that have been changed or completely removed. So you have to do a migration for it. There is a tool for it — you should go through the v4-to-v5 migration.

C
So you should visit the MUI v4-to-v5 migration page. All of these points are written down there — all the considerations you should be aware of while doing it. Most probably it is because theme.breakpoints has been deprecated, or maybe removed.
A
But this issue seems to be quite new, because I was working on the refactoring yesterday and we've been using MUI v5 from the beginning, with breakpoints. So I don't know how this came up right now.

C
So the issue says that theme.breakpoints is undefined. So probably the theme is not... you know.

C
Breakpoints should be available there, so you should probably file an issue with the complete code where it is originating from, so that we can collectively take a look at it.

E
I hope so, yeah. Yes, he would.
D
So on the second PR — the dropdown — there is a comment, and I have not addressed it yet because I was working on the calendar component and the autocomplete component. Alongside that, I am also looking into the loading spinner animation, using SVGs. So I have not fully gotten into that PR, but I will do it as soon as possible.

A
All right, so — yep, so Richa will file an issue for that.

A
All right, we'll come back to the other updates on the Meshery UI after Nitish joins, so moving over to the next topic.
B
So, on one of the calls we were on previously, Mr. Jared had mentioned that he bumps into the occasional bug, and as we get over the hump on v0.5 and head into landing v0.6, it's time to put more structure in place around quality — around how tests are done: unit testing, integration testing. I was getting a lesson from Mario last time on...

B
...integration testing and functional testing and the difference. And so there's lots of testing that we need to do. There are a number of tests that happen, but many more tests to be built out.

B
He's going to show us how far along we are in terms of coverage for integration tests, in accordance with the v0.6 test plan, to help bolster this effort and make sure that not only can the contributors understand where we're at in terms of what's being tested and what the status of those tests is, but also users and contributors can see...
B
...what's the current compatibility of a given Meshery release with a certain Kubernetes version, a certain version of each individual service mesh, a particular cloud — there's a long list of things to be compatible with. So the project can use a compatibility matrix. A compatibility matrix and more tests are things that we've been looking at — and acknowledging the need for — for a long time, and the need for them increases as we get closer to a 1.0.

B
So we want to put in some framework and structure early on. Ashish has been championing the creation of integration tests that run for each individual Meshery adapter.
B
So, if you'd like to look at how those integration tests work or how they run: if you take a random example and go to — well, I guess this won't be random, I'll specifically go to meshery-istio — these tests... he should be propagating these to the rest of the adapters today, Mr. Tiwari, I hope. But the way that the integration test for each individual Meshery adapter is being run right now is through a GitHub workflow. We use a lot of GitHub workflows here.

B
These are structured in two separate workflows, which I question — I know there's a logic behind it, but I would still question that logic; ideally there's one set of tests. These tests run every time that someone submits a pull request, and these tests run after the pull request merges.
B
We're always running checks and tests. We're always doing unit tests, we're always making sure that things build, and then we're verifying that — since this is golang-centric here in the adapters, you can look at any of the other workflows for what things are checked: what code is reviewed, how it's linted, how security is checked, what static analysis is going on. All of those things have been in the project for a long time.

B
What's relatively new are these integration tests. So once the code is compiled, and the static analysis and security analysis and other things have passed — now that there's a statically compiled piece of Go — let's go ahead and run it. Let's get a Kubernetes environment, let's bring up Meshery server, let's run this thing and test its functionality, and let's do an integration test. That's mostly what these two new workflows are about, and they'll go through and test a number of things.

B
Do you want to talk about how many integration tests — using this Istio adapter as the example — are being run, and whether we've got anything left that's uncovered, that's not tested?
C
So, as these are end-to-end tests: one of the core capabilities of any Meshery adapter is to deploy a service mesh along with a bunch of add-ons. The core workflow is pretty configurable — it has a bunch of inputs and a bunch of outputs — and currently we're only testing adapters with the help of that core workflow. So it takes a bunch of inputs about what we are testing.

C
The version is fetched dynamically: each time we run these tests, we fetch the latest version of that particular service mesh — in this case, Istio. So we set all these input parameters, and then the output parameter is just a JSON which has the test result for us — a bunch of metadata and a bunch of assertions that we make — and these assertions are also configurable.
C
So, for example, in the case of the Istio adapter, the assertions are: hey, is the istiod control plane there? Do we have the egress gateway? Do we have the ingress gateway? Do we have the Grafana add-on, and the Prometheus add-on, and the istio-system namespace? If yes, then basically the tests are passing. If only some of these are passing, it is considered as the tests partially passing — it's not actually passing, it's partial; otherwise, it's failing.
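For illustration only, here is a minimal Go sketch of how a result like that could be modeled and reduced to passing / partially passing / failing — the field names and assertion set are assumptions for this sketch, not the adapter's actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TestResult is a hypothetical shape for the JSON a workflow run could emit.
type TestResult struct {
	Mesh             string          `json:"mesh"`
	MeshVersion      string          `json:"mesh_version"`
	MesheryComponent string          `json:"meshery_component"` // e.g. "meshery-istio"
	ComponentVersion string          `json:"component_version"` // "edge" on merge, a tag on release
	RunAt            string          `json:"run_at"`
	Assertions       map[string]bool `json:"assertions"`
}

// Overall reduces the individual assertions to passing / partially passing / failing,
// mirroring the green/yellow/red semantics described above.
func (r TestResult) Overall() string {
	passed := 0
	for _, ok := range r.Assertions {
		if ok {
			passed++
		}
	}
	switch {
	case len(r.Assertions) > 0 && passed == len(r.Assertions):
		return "passing"
	case passed > 0:
		return "partially passing"
	default:
		return "failing"
	}
}

func main() {
	r := TestResult{
		Mesh:             "istio",
		MeshVersion:      "1.12.1",
		MesheryComponent: "meshery-istio",
		ComponentVersion: "edge",
		RunAt:            "2021-12-22T10:00:00Z",
		Assertions: map[string]bool{
			"istiod-running":         true,
			"ingress-gateway":        true,
			"egress-gateway":         false, // one failed assertion => partial
			"grafana-addon":          true,
			"prometheus-addon":       true,
			"istio-system-namespace": true,
		},
	}
	out, _ := json.MarshalIndent(r, "", "  ")
	fmt.Println(string(out), "=>", r.Overall())
}
```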
C
So, basically, every time a pull request is made there are three stages. The first stage is to actually create the pattern file. In case I glossed over it: a pattern file is what we are using because we kind of want everything in Meshery to use pattern files. A pattern file has a bunch of services, and it can be used to deploy your service mesh — or anything that you can deploy with your normal Kubernetes and service mesh manifests.
C
You can deploy that with a pattern file, so we are using a pattern file in this particular workflow. This workflow is very pattern-file specific: it will only run your tests if you have a pattern file to deploy and a bunch of assertions to make. In the pattern file we're only deploying service meshes and a bunch of add-ons for now, for these tests, so it will deploy those. So, for a normal pull request, what's going to happen is it will create that pattern file.
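As a rough illustration of the idea — the schema below is a simplified, hypothetical pattern structure, not Meshery's actual pattern format — a pattern file is declarative YAML naming the services (here a mesh and an add-on) that the workflow would deploy and then assert on:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

// Hypothetical, simplified pattern structure for illustration only;
// the real Meshery pattern schema may differ.
type Pattern struct {
	Name     string             `yaml:"name"`
	Services map[string]Service `yaml:"services"`
}

type Service struct {
	Type     string                 `yaml:"type"`
	Settings map[string]interface{} `yaml:"settings"`
}

const examplePattern = `
name: istio-deployment-test
services:
  istio:
    type: IstioMesh
    settings:
      version: 1.12.1
  grafana:
    type: GrafanaAddon
`

func main() {
	var p Pattern
	if err := yaml.Unmarshal([]byte(examplePattern), &p); err != nil {
		panic(err)
	}
	// A test workflow could iterate the declared services and assert on each one.
	for name, svc := range p.Services {
		fmt.Printf("would deploy %s (type %s) and assert it is healthy\n", name, svc.Type)
	}
}
```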
C
It will make all these assertions, and if the tests pass, it means that this pull request didn't actually mess anything up when it gets merged. And that means, hey, we should actually publish these results somewhere. In that case, those results get published to the UI — I think Nitish already showed you, or you could go ahead and... yeah.

C
This can become the compatibility matrix: yellow means the tests are partially passing, and green means the tests are passing. The Istio rows that you're seeing there are actual data. So, for example, whenever a PR is merged in this Istio adapter, the component version is counted as edge.
C
So you're getting this extra data about: hey, we ran this thing on Meshery server version v0.6 rc2. And if you click on any of these entries, you'll actually get what assertions were made. So if you click on the Istio one: Istio was running, the egress gateway was running, the Grafana add-on was running, the Prometheus add-on was running, and you have a bunch of extra metadata, like the time it ran on, etc.

C
Whenever a release is made, instead of the component version being edge, you'll actually get that particular release version there. And after incorporating this thing into all of the adapters, this matrix can actually be used as a single source of truth. If something fails, we can direct people to go look at this compatibility matrix, and because the entries are timestamped, even when a PR gets merged we can go back and see which PR messed things up.

C
So this will kind of help us make the whole system a bit more resilient. Considering we have a lot of adapters and it gets hard to manage them all, this will help us make sure that each adapter is doing what it is supposed to do. So yeah, that's kind of everything about these integration tests, in a nutshell.
B
Thoughts? This is a start. There are a few things that are maybe wrong about this approach, and there are a few things that can be enhanced about it as well. Feedback and thoughts?

G
Well, with regards to the compatibility matrix, what is the plan for support? Are you doing, like, n minus one, or what's that?
C
To add here: as we get more and more test results, the table will be huge. So we have to group things in a smart way. We have to find the parameter on which we should group these things, because there are a bunch of them. Should we group them on the basis of the service mesh, should we group them on the basis of the service mesh type, or should we go by the Meshery component version type?
C
So, for example, every time a PR is merged — if 15 PRs are merged, you'd get those 15 results. The most reliable source is the most recent one, so we can have the recent one displayed here, and when we click on the recent one we can get all the past results in another table, maybe. So basically the idea is to break one table into multiple tables, and this original table can hold the most recent data.
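A minimal sketch of that grouping idea in Go — assuming each result carries the mesh name, an overall status, and a run timestamp — where the summary table keeps only the most recent row per service mesh and older rows move to a drill-down view:

```go
package main

import (
	"fmt"
	"time"
)

type TestResult struct {
	Mesh    string
	Overall string
	RunAt   time.Time
}

// latestPerMesh keeps only the most recent result for each service mesh,
// which is what the summary table would show; older rows go to a detail table.
func latestPerMesh(results []TestResult) map[string]TestResult {
	latest := make(map[string]TestResult)
	for _, r := range results {
		if cur, ok := latest[r.Mesh]; !ok || r.RunAt.After(cur.RunAt) {
			latest[r.Mesh] = r
		}
	}
	return latest
}

func main() {
	now := time.Now()
	results := []TestResult{
		{Mesh: "istio", Overall: "partially passing", RunAt: now.Add(-48 * time.Hour)},
		{Mesh: "istio", Overall: "passing", RunAt: now.Add(-1 * time.Hour)},
		{Mesh: "linkerd", Overall: "passing", RunAt: now.Add(-24 * time.Hour)},
	}
	for mesh, r := range latestPerMesh(results) {
		fmt.Printf("%s: %s (as of %s)\n", mesh, r.Overall, r.RunAt.Format(time.RFC3339))
	}
}
```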
G
Regarding the compatibility matrix — you said n minus 50? In regard to what, I guess? I should clarify that a bit: are you talking about the Meshery version, or the service mesh version?

B
Yeah, okay. Actually, there's a reason why Jared is asking again, because of the answer to his first question — if I were Jared and I heard that response back, I think I would laugh and walk away, like, you're insane. Either you can't keep up with that many, or you can't remain that compatible. So it really requires some explanation. To your second question, Jared: that n minus 50 — that focus is really about...
B
...and this will get really hairy the further along the project goes. Actually, let me not dig into it, because I think it'll confuse some folks and it's probably not worth it. But to Jared's point, let me give an example of the focus of that n minus 50 answer, because some of us are still saying: what do you mean? What are you guys talking about?

B
Here's an example of what Jared is referring to. He's saying: you've got a lot of service meshes to test, and they each have their own version. So you've got this exponentially complex set of permutations that you're going to go through, where you'll say: great, so you're running Meshery v0.6, and you want to make sure that it orchestrates this service mesh at version 1.99 across each of these things.
B
If you just stop the sentence there, the number of permutations is quite high — there are a lot of combinations. And then you say: well, but not this type of service entry — like a private IP address versus a public one — and the number gets large. Then you put the current Kubernetes version in across all of that again, and all of that again, and all of that again, and it's like: wow, you're barely even scratching the surface — like one one-thousandth of a percent.

B
How much are you covering in terms of the potential combinations of what someone might actually be running? So there are a few things here. One: the fact that there are that many combinations really speaks to the importance of a strong framework of automation around these types of tests — testing in layers. Just as there's defense in layers, or security in layers, so too is there kind of a defense of quality in layers.
B
...and pulling that test result info into a single pane — a single view — to the extent possible. If you look at — I can't remember the name — the Kubernetes dashboard for tests, test-infra or something, but if you look at the test dashboard results — the compatibility matrix that they have for Kubernetes and for OpenStack and other big projects — first of all, they're horrifically ugly, and that's in part because they're trying to deal with so much data and slice it in so many ways. Meshery is up the same creek — in the same boat — it has the same problem.
B
We do need to cover quite a few of them, and one of the reasons why we say, hey, we're looking to support a lot of these versions going back, is because of the way that the software is built in the first place. A lot of it — the components that we're using — is auto-generated. We're programmatically generating the way that we're supporting these service meshes, and we're also programmatically generating the way that we support Kubernetes and its versions, and so long as that code is done...

B
...well, we shouldn't be experiencing a lot of hiccups, a lot of bugs, in the execution of that code. If the code is auto-generated and it's generated well, then when you execute that auto-generated code it generally should be working. We have to verify, though — and that's what we're talking about now. So, okay, fine, we're talking about verifying adapters right now and having them run some very high-level tests — really, what Ashish was pointing out was lifecycle management of standing up different components of Istio.
B
In this case the ingress gateway, the egress gateway, istiod, etc. Good — but do they actually do what they're supposed to do? Are they passing traffic? Are they rerouting traffic? Are they configured? When you really get into all those combinations, it's possible for us to dig into a number of those by way of service mesh patterns, because if you take this as a pattern and you begin to express some things — to really test it, you need...

B
You need an application deployed, you probably need some load generated, you probably need Kubernetes up and running, you need the service mesh deployed, and then you need a way of defining what tests you're going to do — what config you would have — and then a way of asserting whether or not the system behaved like you wanted it to. And we already have basically all of those components.
B
Think about SMI conformance testing — that's what that is. And so that brings up the other issue I was going to mention before, which is that we're doing some of the same things in different areas. SMI conformance testing — as Jared had pointed out recently — is using, in part, kuttl as a library to define its assertions and then verify whether or not those things are true.

B
So it's using a piece of tech there; we're using GitHub workflows here, and some GitHub Actions that have been created to do some of this; and we're using some BATS in some other places.
B
That needs to be considered as we go to heavily invest in a few different frameworks — each needs to comprehend the others, like the build-and-release and the test. There's a master test strategy — a design document in the Google Drive — that is essentially a blank document. It really needs to itemize things like I just said: there are these types of testing that need to be done, these are the tools that are being used in those areas, here's how they underlap, here's how they overlap.

B
Here's how that's going to be reported out to you guys. Like — yeah, this very quickly becomes extraordinarily unwieldy. It's pathetically naive in its approach.
B
Moreover, these are integration tests — where's the compatibility matrix? That's a whole different section. The info from these needs to be pulled out and then laid out very clearly, so that for any of Meshery's integrations, or its dependencies like Kubernetes and these service meshes, there's a much different way of navigating the info, plus a bunch of summarizations. Jekyll is going to be... I was working on grouping by what Ashish actually intuitively mentioned, which was grouping...

B
...these results by service mesh. In this case, Jekyll as a framework only takes things so far, but that's an aside — we will get the presentation of it figured out. The testing of it, though: hopefully, as soon as I'm done haranguing everyone, he's going to say yeah — he's going to bring up the test case spreadsheet and show us what integration coverage is going on for Meshery server and what's being tested with it.
B
We don't want to be investing in having to sustain two different approaches in this regard, because it's really just the same thing. Very soon — I don't know, Jared, we might see this in the next quarter — especially if Cilium service mesh comes over to say their data plane is better than everyone else's data plane, or more performant, we're going to see a lot of people wanting to use the SMP action and publish their results, because otherwise it's going to be a war of "he said, she said."

B
He said, she said — like, "this one's faster." Yeah, of course it is, in their environment, because they're not using a standard measure to do that. The thought being that Service Mesh Performance, as a standard, independent measure, is hopefully going to help level — level-set — the discussion and make some of that fair. Anyway, my point is that that GitHub Action might be used more and more.
G
This is something that I've been trying to deal with on our side as well, thinking about it from a test perspective — because we have to worry about, okay, what are all the different platforms we support: managed cloud environments, besides...

G
...manually created clusters using, like, kubeadm or something like that; and then also Kubernetes versions, the versions of NGINX Service Mesh that we support, backwards compatibility, things like that. And I've been trying to focus more on: what are we actually wanting to test here? In this case, we're wanting to test Meshery and the Meshery adapters.

G
So maybe we could just think about that when we're talking about, or brainstorming about, these tests: where can we get the most value for our time, to catch as many things as quickly as possible, so that you could have less time spent on chasing down all these edge cases, and make sure that the core product is very stable?
B
Yeah, that's great — you're actually the perfect individual on the call to say something like that. It's such a great thing to say. There are kind of two things at work here. The first one is: hey, why repeat the testing if others have already done that testing? It's like, wow, yeah.

B
We should be very aware of that and just try to marry that up to the extent possible, so that there isn't just a bunch of duplication. And you specifically are the perfect individual to be able to identify that line very clearly — not just for NGINX Service Mesh, but potentially for many of the other ones as well — and to uplift those tests, if some of them are public, and to point to them and say: if you really want to know more, go see those. And if there's a shared framework at all...
B
...like — I was using the SMP action, the GitHub Action, as an example — if performance testing could be done for each of the service meshes using some common framework, that would be really helpful toward one of the goals in Service Mesh Performance, which is to have, basically, a global dashboard that says: here are the speeds of these different service meshes.
B
Oh, and — the management software, like Meshery, has to walk kind of a fine line: if there is something wrong with Istio and Meshery isn't testing it, a Meshery user might experience it and blame Meshery, even though it was an Istio bug. That's something to hold in mind as the approach...

B
...as the approach goes forward, you have to kind of hold some of that in the back of your mind as well. That's not a justification or an excuse for going and testing a bunch more things — no, no, that's not it. It's to help draw the delineation, so that both the user and the individuals that are maintaining all of these projects are able to...
B
...direct people the right way as to where to go to figure out an issue. And so, Jared, it was really cool that you just said that, because it's a great principle to hold in the back of your mind as well: hey, look, why are we starting here? We don't have all of our unit tests done — why are we starting way out here, at this high level, on integration tests?

B
Well, it's for the principle that Jared just said, which is: let's get these core flows down, let's test a bunch of code in one fell swoop. And actually, what these integration tests are doing is, generally, what a user does: a user wants to stand up a mesh, make sure that the mesh components are doing okay, and then maybe use them to do something. That's what these initial integration tests are doing. To Jared's point, these initial integration tests are really focused on that.
B
Can you — like the other examples that I was giving — can you tell Istio, or any of these meshes, to apply a traffic routing rule, and does that rule work like it should? Well, that's good if Meshery can verify it, sure, but also, isn't that kind of the responsibility of the mesh itself, to make sure that that's happening? And it's like, yeah...
B
Yeah, so we should go write it down, we should go clarify our terminology — not just so that we can... Yes, actually, these tests are doing just that: these tests are provisioning a Kubernetes cluster, deploying a Meshery server, and then deploying the point-in-time build of this particular adapter, and then doing the tests from there.
G
That sounds like an accurate nomenclature for that. What about the entire thing — so, from a user perspective, I install mesheryctl and do a system start, where it goes ahead and spins up all the adapters and everything, just like a user would. Is that done as well?
A
So I think I linked the workflow in chat. In that end-to-end test we use a published version of mesheryctl, and we deploy it both inside a cluster as well as outside a cluster, and we run through some scenarios testing various aspects of both mesheryctl and Meshery server. So currently we have things like running a performance test, deploying service meshes, applying service mesh patterns to test some of the adapter capabilities — all those things.

A
I have been trying to work on advancing this test to cover more usage — use cases — both for mesheryctl as well as the core Meshery server functionality. So yeah, I think this is what an end-to-end test is, basically, since we are walking through it from the perspective of how a user would use Meshery.
G
Okay, cool. I am a bit confused — it says this is a manual e2e test, line 23 there.

B
Yeah — oh, it should be... yeah, maybe that's a good point. It should be the case that you can manually invoke the test, so that anyone can just come over and run the thing, but it is also set to run nightly.
A
Oh, I was just pointing out that if you're running it manually, we can just do a test with the pattern file we want, the performance profile we want, and all those things; but in the scheduled workflow we run through different combinations — different permutations — of all of those as well.

A
Oh, Jared, go ahead — you were saying something.
G
Oh, I was wondering if there's a way to integrate a subset of these tests into each component — into the GitHub Actions there that are run as part of the PR checklist.

G
Based on the number of tests here, if they're running in real time, I'm assuming that it'll take a while to run through all of these, and you probably don't want that — but just one or two basic tests to ensure everything spins up and runs. That might help with preventing things from getting merged that break something.
G
You know, a couple weeks ago I found that issue where I couldn't deploy Meshery at all using the latest — or the stable — build. Maybe it's just my system, I don't know, but I was a bit surprised that something that critical was able to get past the checks and get merged, and that's why I was thinking along these lines.

G
If you had just a single one of those end-to-end tests that checks the build before merging, you could catch these before they get merged. That makes sense.
B
Totally, yeah — some of it, yeah. I think it was on a specific adapter, but basically — if you take a look, here's the main Meshery repo. I think this is the last PR to have been submitted, and when you take a look at the checks that it's running today — now, bear in mind:

B
Different checks will run based on different labels that are assigned to a PR, so we use the heck out of GitHub Actions and we try to do it intelligently, doing different tests for different things. In general — one of the things that Jared... he's used this word in the past, and I think he's the only person I've heard use it, and it's a good one — is smoke testing. And it's important. I think kind of what he's referring to here is just like: hey, you know, before...
B
You know — it's going to sound really funny when I say it; he's trying to say it in the politest of ways — which is: hey, before a PR is merged, wouldn't you just sort of stand the thing up, kick the tires, and make sure that at least it stands up and doesn't just fall over? That's kind of what smoke testing is about. And so when you look at it, it's like: okay, well, so what are we testing today?

B
I can't remember if this was a golang change or what — yeah, it must be, and that happened in here — so okay, for golang, then, what are we doing? Well, unit tests — yeah, good — and some integration tests. So this term "integration" there — we're probably not... we do need to go write down what our terms mean. For my part, it doesn't really matter to me quite how we define them.

B
We just should try to define them so that we're all saying the same thing when we say something. But anyway — let's not walk through this whole thing — to Jared's point, here's the right place to look: are we doing that high-level test of standing...
B
...the thing up and making sure it doesn't fall over? Or those e2e tests that we were just looking at for the Meshery adapters, where it is just basically doing that: standing up Istio — because that was the example — did these components start running? Okay, great, they did. Okay, great, let's not take the test further at this point, but that was enough. So, to Jared's point — yes, the answer is yes: all of those workflows are highly reusable.

B
You can call one from the next, and so, to his point: can we be doing that on a per-pull-request basis, so that nothing breaks the build? Having broken builds is a really ugly thing. More than that, can we be layering on top maybe some nightly builds? The reason to do nightly things — and here's where regression testing comes in — is to retroactively go back and test to see if anything was broken.
B
Why aren't we just doing that at the point in time before we merge the thing? Well, in part because of what Jared was saying earlier, which is that there's a massive set of permutations — all these things to cover. So how many of these, how long do you want this thing to run, and how much of that do you want tested here? And what we have is not a hot mess — it's not; there's actually been a lot of effort put toward it — but it's actually not written down.

B
I mean, it's kind of written down in one place — it's written down what we're doing, some of the strategy behind it. What we need to do next, though — I'm looking right at Mario, right at Jared — you two gentlemen have some of this... it's like taking candy from a baby in terms of the type of impact you could have on the project, in terms of the things that Jared is saying right now. So I'm going to stop talking.
B
I'm going to drop a link to the master test strategy. It really needs to be fleshed out better — it needs to account for the things that Jared is bringing up — and it's highly appropriate. We need for the project to never have a reputation of being of poor quality, and I think before we ever get there — and we're kind of in the danger zone with some of the experiences that Jared's had —

B
let's overcome that. This is a great place for those that want to learn DevOps-type stuff, or things that fall into that — whatever DevOps means — it's a great place to go dig in, and it's a great opportunity, because you really do have to reflect on the whole thing, because there's more opportunity for reuse than we're necessarily taking advantage of at the moment.
G
Yeah, yeah — and hopefully I don't come across as critical or anything like that. Everyone's doing an amazing job, and this is an amazing project, and it's a complex project — there are so many moving parts. You know, I'm just working on the service mesh aspect of it, but you're abstracting that a layer up, which adds an almost infinite amount of complexity.

G
On top of that, with the compatibility and all that — like you were saying, so many different versions, different configurations of Kubernetes, of a service mesh — and then you're trying to wrangle all of that on top of it. So there's a lot going on here, and I just want the project to succeed, and that's why I'm raising these things. Hopefully it doesn't come across as critical or anything like that — everyone's doing great.
G
In regards to the tests, though: what you're saying — smoke test — that's exactly the point I was trying to make. For us, we have a subset of our end-to-end tests... sorry, let me make a slight digression. At least in my mind, when I think of an end-to-end test: it's the experience that a user would run into, so it's running on the live system, with everything set up the exact way that a user would have it.

G
An integration test — there you're testing, well, integration between components, and you may not have to have the whole live system set up; you might be able to somehow mock that up, let's say, or something like that. And then obviously there are unit tests. But at least when I say end-to-end tests, what I mean is the full live system — nothing is mocked or anything. That's what I mean by that, and if we want to rediscuss terminology later, that's fine. Anyway — so, for NGINX Service Mesh...
G
What we do for every PR is we have a few end-to-end tests that run as quickly as we can but, like you said, just kind of kick the tires and make sure that nothing major has been broken or managed to slip through. And then we have a nightly pipeline that runs to test everything — run through all of our tests, all our environments — and that might take a couple of hours.

G
You know, it's not very livable to have all of your PRs blocked for a couple of hours based on tests, but that's where we have the nightly come in, like you're saying, and then every morning one of us will be checking those to see the result. I'm not sure what your process is for reviewing those nightly runs — is someone taking a look at them to catch any bugs that show up?
B
I think the nightly runs just started — no, we need that. I think the nightly runs just began a few days ago, like a week ago. So yeah, no, Jared, that's another thing that needs to happen, and there's automation for it, too, like... I don't know that...

B
I'm not sure if everyone needs another email — but that's why there are email rules. But yeah, Jared, for my part, what you're saying is exactly what needs to be done, one; and two, it's coming from the best of places — if you didn't care, you wouldn't be spending the time saying this stuff.
I
Can I share something? (Yeah, please, yeah.) So, this is Mario. I think this is a great discussion, and about the results: I think the best way would be to have a Slack action that notifies a specific channel whenever there's a failure, so that it's visible. You know, if it's an issue with something not being configured correctly, it'll be a bit bothersome to some people to just receive a lot of notifications — hence specific channels.

I
Even if it's not an actual failure, it'll help the community collaborate and make those kinds of quality signals stable, and really increase the confidence, right?
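As a sketch of that suggestion — assuming a Slack incoming webhook URL kept in a CI secret and called from a failure-only step; this is an illustration, not an existing Meshery workflow:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

// notifySlack posts a simple message to a Slack incoming webhook.
// SLACK_WEBHOOK_URL would come from a CI secret; the destination channel is
// fixed by the webhook itself, keeping the noise confined to one place.
func notifySlack(webhookURL, text string) error {
	payload, _ := json.Marshal(map[string]string{"text": text})
	resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("slack webhook returned %s", resp.Status)
	}
	return nil
}

func main() {
	url := os.Getenv("SLACK_WEBHOOK_URL")
	if url == "" {
		log.Fatal("SLACK_WEBHOOK_URL not set")
	}
	// Hypothetical message; a real workflow would interpolate the run URL.
	msg := "Nightly E2E run failed for meshery-istio — see the workflow run for details."
	if err := notifySlack(url, msg); err != nil {
		log.Fatal(err)
	}
}
```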
I
That's something I wanted to share — and, of course, try to reuse as much of our testing infrastructure as possible. But also, for example...

I
I couldn't help noticing this: we have GitHub checks on pull requests, but my question would be, if a check fails, would that block a maintainer from merging? Like, is there some kind of restriction?

I
Maybe I'm not that familiar with GitHub Actions — maybe this is a baby-step question — but I've seen merged requests with failed checks. So that's why I'm asking: what's the point of having those checks if there's no way of blocking a manual action from merging that request, if it contains potentially unstable code — you know, if something's broken?
B
Yeah, yeah — the answer is yes: GitHub can be configured to disallow people from merging if there's a failed check.
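For illustration, branch protection with required status checks can be set through GitHub's REST API (it's more commonly done in the repository settings UI); the repository, branch, and check names below are placeholders, not Meshery's actual configuration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

// Enables branch protection so the named status checks must pass before merging.
// EXAMPLE-ORG/EXAMPLE-REPO and the check contexts are placeholders for this sketch.
func main() {
	payload := map[string]interface{}{
		"required_status_checks": map[string]interface{}{
			"strict":   true,
			"contexts": []string{"build", "lint"}, // placeholder check names
		},
		"enforce_admins":                true,
		"required_pull_request_reviews": nil,
		"restrictions":                  nil,
	}
	body, _ := json.Marshal(payload)

	url := "https://api.github.com/repos/EXAMPLE-ORG/EXAMPLE-REPO/branches/master/protection"
	req, _ := http.NewRequest(http.MethodPut, url, bytes.NewReader(body))
	req.Header.Set("Accept", "application/vnd.github+json")
	req.Header.Set("Authorization", "token "+os.Getenv("GITHUB_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("branch protection update status:", resp.Status)
}
```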
B
Now, on the surface — what a beautiful thing, let's go turn that on. And then, actually — I think this will take us past our time — the problem is it ain't so simple. And yeah, without explaining it...

B
...I don't know if people can readily think of reasons why that wouldn't be the case, but there are a bunch of reasons why you should push forward anyway. We actually had a great example here just last week: there was a failed UI — I'm sorry, user preference — test in Cypress. We had changed something recently and the test needed to be fixed. The subsequent 20- or 30-something pull requests had nothing to do with user preferences.
B
They all would have been held up for a week, or whatever — a couple of weeks — while that other test was still failing, which had nothing to do with them. So if you enforce the rule that you can't merge with a failed check, then the whole project stops. Okay, well, maybe that enforces the right amount...
I
My suggestion there is to quarantine those kinds of tests. You know, of course, that adds a certain level of risk of something getting past those checks if we're disabling a test, but the idea would be that we would assume that risk in the scenarios where we would still want to merge. Yeah.

I
I think that the risk of allowing a pull request with failed checks is even greater, right? Because it could be okay, but then there could be some scenarios where, okay, just before approving it, the checks seem okay — but then again, there are scenarios where someone pushes a commit and then you're just approving or merging; those kinds of scenarios, or maybe just a common human error.

I
You know, you just merge the pull request because it looked good, but then again, somehow the check is not okay. These kinds of things — we would need more discussion to find out what's the best setup for us to get to higher quality, in that sense, right?
B
Yeah, let's see if we can — I mean, since it's here: there are two documents. One is the test strategy, the other one is the build and release strategy. They're very related.

B
So it's just this one here that we need to flesh out a bit, with a lot of the things we've been discussing — and I don't think I've heard anyone say anything that isn't agreeable, that isn't needed.
B
We'll just get there over time. I loved seeing — I love the fact that Jared specifically had pointed out an issue last time we met, and we're seeing action taken on that issue to help improve the quality of the project. Maybe a last question for me before we end the call here, and that is: will you be able to propagate those end-to-end tests today across the adapters?

B
I swear I heard a yes in there, so I'm going to take it as a yes — awesome, good. Anyway, I think we're just about out of time. There are a couple of things we didn't cover; please, let's raise those in Slack and help make sure that no one's blocked, or that people get the feedback they need. Otherwise, we meet on Friday for the community call — we're going to have a year in review on Friday.

B
We might call on a few of you to gather up some info about things that have happened over the last year, some things that have changed — so it would be great if some of you were presenting what that is. All right — anything else from anybody before we head out?