From YouTube: Meshery Build & Release Meeting (Dec 23rd, 2021)
Description
Meshery Build & Release Meeting - December 23rd, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A
Okay, anyway, we're live, so great. Hey, thanks all for coming. I need to stop watching myself on the live stream... there we go. It's December 23rd, almost the end of the year. This is our last Meshery Build & Release meeting of 2021, and I think it's going to be a good one. We've got a few items lined up on the agenda, and for those of you on the call who haven't dropped your name into the meeting minutes just yet, please take a moment to do so.

Here's a quick preview. Mario has been getting an environment set up and working on building out more Cypress tests; we'll take a look at his environment and make sure he's good to go. Nivendu, you've been working on making sure that we've got end-to-end tests — well, whatever end-to-end means, but we generally all agree. I'll call that out as something we need to do: it would be helpful for all of us to write down what the terms are. You're going to talk about the test plan: what coverage we've got, what ongoing workflows are being written, and an opportunity for Vishal, Aditya, and others on the call to jump in and help out. We should also do a quick review of the new — well, I'll call it an integration test for now — the new compatibility matrix dashboard and how that plays into this.

Cool, all right. Well, with that, Mario, you're up first.
B
Okay, so I'm having some issues with accessing Meshery. I followed the steps that we talked about. Let me just show mesheryctl system status.

Both are failing — they just time out — so I'm thinking it's related to this apparent platform issue. It says that Docker doesn't expose the Docker network to the host, so containers will only be reachable from the host via port forward. Is that what this guide is helping us do, or is it just letting us know that this won't work on Mac or Windows? I'm just kind of confused there.
A
Yeah, and either will work, actually. I'm unfamiliar with MetalLB. It might be that MetalLB itself is just doing port forwarding — in essence, the same thing that happens with an ingress; it's just doing port forwarding, the difference being... The nice thing is there are like 12 different ways to solve this, but the confusing thing is: okay, well, which one should we use?
B
I think the external IP — sorry for interrupting — the load balancer seems to be accessible in the Docker network, but it's not being exposed to the host. But you're saying that using MetalLB would be something similar.
A
Yeah, there's a diagram on this. I think it made its way into the Meshery Docs just because so many users will hit this issue.
B
Exactly, yeah, and I think it's a great deterrent for newcomers, right? It's been a bit frustrating, so maybe we could suggest, I don't know, some other approach. At least for what I'm trying to do, which is Cypress scripting, I'm thinking — you know, now I have a cluster.

Maybe I could just do an out-of-cluster deploy and then try again, because I had some issues with minikube, but maybe with kind it'd be easier to do that — like just run Meshery in Docker. But again, I don't think we have specific instructions for that over here, right?
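[Note: a minimal sketch of the out-of-cluster deploy Mario describes — running Meshery on Docker instead of inside the kind cluster. The flag usage reflects mesheryctl's documented platform option; treat the exact invocation as an assumption rather than verified guidance.]

```sh
# Assumes Docker Desktop is running and mesheryctl is installed.
mesheryctl system stop                 # stop any in-cluster deployment first
mesheryctl system start -p docker      # -p/--platform selects docker instead of kubernetes
mesheryctl system status               # confirm the containers came up
# The UI should then be reachable on the host, typically at http://localhost:9081
```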
B
Totally, yeah. Actually, the steps seem very simple. You know, with this kind of instructions people might struggle with the amount of documentation, but at least for my part it seemed very simple. Besides creating the cluster, I just followed the specific steps for mesheryctl and just specified... yeah.
A
Note what you see on MetalLB, where they're saying: look, those networks just aren't exposed to your host — and that's where you're sitting, Mario, as the user. So just to recap, by the way: you're on a Windows machine? Mac — oh, Mac, okay. On Mac, Docker Desktop, okay, and then right now you're running kind. Yep, okay. Can you do me a favor? Since, like I said, there's more than 12 — there's like 25 different ways to solve this.

Yeah, so it helps get some of those ports exposed to the host, and so, yeah — if you also stop kind.
A
And you're not, though — you're not running minikube either, I assume.

B
Nope, no, not at the moment, just kind. So I'll just stop it over here, or maybe from the... yep. This should be okay, right, once this is done. Oh.
C
Perfect, so I'll do that right now. Okay, what about this one?
A
In your other environment, one of the quick ways to solve that is to use kubectl and do a kubectl port-forward — that's one way; it's just a kubectl command. Another way is — and there's some of this in the instructions, like that MetalLB instruction was saying — oh, you could use an ingress, and here's an example ingress that you can use to expose Meshery's service, like an NGINX one. Another one is what we were saying earlier.
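[Note: a quick sketch of the kubectl port-forward approach mentioned here. The service name, namespace, and port are assumptions based on Meshery's defaults.]

```sh
# Forward the Meshery service to the host; assumes the default "meshery" namespace,
# a Service named "meshery", and the default UI port 9081.
kubectl -n meshery port-forward svc/meshery 9081:9081
# Then browse to http://localhost:9081 while the port-forward is running.
```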
A
I was saying, hey, expose it as a NodePort. This might be an issue for you with the use of kind — I'm not sure — but yeah, this is another approach as well. Some of the docs need to be cleaned up. Some of the docs — we just don't necessarily want to direct people to use Helm, or to use... We generally want the flow for people who are deploying Meshery to be: hey, come here. Like, there's a lot of...
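[Note: a sketch of the NodePort approach, again with an assumed service name and namespace. With kind, NodePorts are typically only reachable from the host if the cluster was created with extra port mappings, which may be the issue alluded to above.]

```sh
# Switch the Meshery service to NodePort (assumes namespace/service "meshery").
kubectl -n meshery patch svc meshery -p '{"spec": {"type": "NodePort"}}'
# Find the assigned node port:
kubectl -n meshery get svc meshery -o jsonpath='{.spec.ports[0].nodePort}'
```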
A
Yeah, and it accounts for a lot of things. It really tries to help overcome this — for those that are really very well seasoned with Kubernetes, it's like, hey, they understand; they've had to suffer through this with something else. It's not a Meshery thing. Sure, for some, when they're using managed Kubernetes, it's some...

All right, so good. Nivendu, you're up.
D
So, just to give you an update on that: we are tracking both the integration tests as well as the end-to-end tests — or however we define those tests — in here, and basically we also track the overall coverage. All of these are automated, so if you change the values in these particular cells, this gets updated. So it's a view of how we are progressing with the test coverage. And yep — so, for the end-to-end tests...
D
I have been writing some for mesheryctl, and I guess Mario will be writing some with Cypress for the UI as well. So I have 17 test scenarios covered and 10 in progress, which are related to lifecycle management in Meshery — so we'll be testing some of those here. I also mentioned testing these on multiple environments — multiple Kubernetes versions and multiple operating systems.
D
Yep, so currently we only have the mesheryctl end-to-end test, which I made a PR for a couple of days ago. It basically just uses mesheryctl: it spins up a cluster, deploys Meshery, and runs some mesheryctl commands, so it covers some of the areas in Meshery. The other end-to-end tests will be from the Meshery UI, so we are testing from the perspective of both clients — that should cover these test cases.
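[Note: a hedged sketch of the kind of mesheryctl end-to-end run described here — spin up a cluster, deploy Meshery, exercise a few commands. The exact commands the PR runs aren't shown in the meeting, so these are illustrative, not the actual workflow contents.]

```sh
# Illustrative end-to-end smoke run for mesheryctl (not the actual CI script).
set -e
kind create cluster --name meshery-e2e     # fresh Kubernetes cluster
mesheryctl system start -p kubernetes      # deploy Meshery into the cluster
mesheryctl system status                   # basic health/assertion step
mesheryctl system stop
kind delete cluster --name meshery-e2e
```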
A
Okay, they're both going to use this, so there's some strategy that needs to be written down here. I've kind of thought about that. The end-to-end test that you have right now — it looks like it builds mesheryctl and only fires when there's a change in mesheryctl, and that makes... well, I guess you've got a couple of different ones. Maybe the one that you're showing here — is it this one?

D
This particular one, yeah. This was basically scheduled to run, so we don't actually run it when PRs are made.
A
Good, okay. So this particular test essentially becomes — a full regression is what it can be built into, which is to say: hey, once a night or on some basis, build all the latest components — mesheryctl, Meshery UI, Meshery Server, the Meshery adapters, all of them, Meshery Operator, build them all, build MeshSync — and then it would go through and deploy Kubernetes, deploy a... what do you call it, and then run Meshery, and then take Meshery through a number of sequences of things, using, say, pattern files. The high-level workflow that I'm describing there — that full regression test — to your point, if it's designed well, it would really just be an index of any number of other workflows, like one that's dedicated to just building mesheryctl and then executing a number of mesheryctl commands.
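[Note: a rough sketch of the nightly full-regression flow being described. Component build steps and the pattern file name are placeholders; the real thing would be composed of the existing per-repo GitHub Actions rather than one script.]

```sh
# Illustrative nightly full-regression outline (placeholders, not Meshery's actual CI).
set -e
kind create cluster --name meshery-nightly
# 1. Build (or pull edge tags of) the latest components under test — placeholder steps:
#    Meshery Server, Meshery UI, mesheryctl, Meshery Operator, MeshSync, each adapter.
# 2. Deploy Meshery into the cluster.
mesheryctl system start -p kubernetes
mesheryctl system status
# 3. Drive Meshery through a sequence of operations, e.g. applying a service mesh pattern
#    (file name is hypothetical).
mesheryctl pattern apply -f ./patterns/nightly-smoke.yaml
# 4. Assert results, collect reports, then tear down.
mesheryctl system stop
kind delete cluster --name meshery-nightly
```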
A
Those workflows perform different integration tests and do different things. As you assert what each test is — whether or not a given individual test was successful — we would want to have commonality across the terms we're using and what we're looking for: how do we verify that each thing was performed correctly, so that when we report back we can say whether it passed, partially passed, or failed, and what the results were. That would be the overall set of results, and then for each individual test within mesheryctl, for example, which ones passed and which ones failed. We don't have to go off and build a whole — we're not necessarily trying to build a massive framework around all this — but we do need to enforce some consistency and ultimately bubble up some of these reports, and if we're using the same nomenclature in terms of what's passing, what's partially passing, and what's failing, that'll really help. And so then, for each component — like building a Meshery adapter and testing it — this particular file should just be referencing those.
A
Part of what I'm saying, I think, probably makes sense in general; part of it doesn't. Really, we have to go write down what component, what workflow, is accomplishing what task, and when, and why there are multiple different workflows needed. Sure, some of it is maintainability — one workflow for a specific purpose. Also, because there's a workflow for a specific purpose, it can be reused. So it's like: when someone submits a PR, it gets used; when the PR merges, it potentially gets used; when the release happens, it potentially gets used; every night for full regression, it potentially gets used. Some of the service mesh — I'm sorry, the GitHub Actions — potentially get reused through here as well, but they would be an even smaller subcomponent of some of the other tests. So really, there's a ton of reuse that can go on — a fairly intelligent series of interactions, like one repo calling the next.
A
The other one calling that one, and then potentially running in parallel, and so on. And that's not us necessarily building out a massive framework of our own — it kind of becomes that — as much as it is us just using GitHub workflows very well. And I think this particular thing is helpful and offers some coverage. The mesheryctl smoke test that you're creating offers some coverage right now.

Those are entirely duplicative of the Golang unit tests and integration tests that we're running for mesheryctl, but I understand that the workflow properly sets us up for an additional set of tests, and so this is good. I think the notion is — hey, right now that spreadsheet is manually updated, and that's fine; it's not meant to be a status of, like...
A
Oh, let's list out a bunch of tests, and then let's make sure they have OS coverage, and then let's make sure they have different Docker-versus-other-platform coverage, and then let's make sure we have different component coverage, and then let's work toward automation. And toward that automation — the things that you'd said about testing from both ends, with Meshery UI as a client and mesheryctl as a client.

The nice thing about hitting those two clients, from a functional perspective, from an end-to-end perspective, is that they flex Meshery Server and they flex the adapters as well. They also inherently flex the REST APIs and the GraphQL APIs, to the extent that we have tests that assert certain responses for certain things.
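[Note: a tiny sketch of what "asserting certain responses" could look like as a workflow step. The endpoint path is a placeholder, not a confirmed Meshery route.]

```sh
# Assert that the server answers a request with HTTP 200 (placeholder endpoint).
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:9081/api/placeholder)
if [ "$STATUS" -ne 200 ]; then
  echo "assertion failed: expected 200, got $STATUS" >&2
  exit 1
fi
```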
A
So it's a beautiful — it really is a juicy thing to spend time doing, and there's so much success to be had, because this isn't like when we were looking at testing adapters previously. Then it was like, oh, one of the maintainers wanted to use BATS — it's this Bash automation testing framework or system, I don't know — good, but we've basically displaced that with GitHub Actions; we've displaced it with unit tests and integration tests. And what we've described here is so achievable.
A
It's like, actually, we're very well set up to get a massive collection of test coverage fairly quickly, based on all the work that's been done in workflows today. And this will be an ongoing focus for us: ultimately we really will be looking at these reports and we'll be verifying. We should be able to catch things pretty easily. What Nivendu is showing is that, hey, when Kubernetes 1.24 comes out and they change, say, the API version of a certain API from v1beta1 to v1 — like what we experienced with the deprecation of the CRD spec — we can test that in advance of doing a release. The output from here, too, is very helpful in terms of what passed and what failed under these different platforms.
A
So we're really set up, you know, very well. Look, a lot of the groundwork has been done to be able to come up with an overarching set of tests, but we do need to rationalize what's happening at the different strata of tests — and I don't mean pedantically, like what's a unit test versus what's an integration test — but I mean: what frameworks are we using at various levels, what is the value of those, when do they run and how frequently, how do we centralize as much of that output as we can, and how much of that is just going to be thrown away based on the history of what gets tracked in GitHub — which is actually really important when you do see a high-level red mark and a failing test case, if you're going to fix it.
A
Those can be left aside for now from this focus. Cypress and the tests that it runs have output as well. So, Mario, as you look at those: it's not necessarily the case that we have to do a lot with integrating with Cypress itself — in terms of going over and talking to its API to get the test results. In the workflow itself, we should be able to grab the results.
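[Note: one hedged way to "grab the results" in the workflow without touching Cypress's dashboard API — have Cypress write a report file that a later step can pick up. The `ui/` location is an assumption about where the specs live.]

```sh
# Run Cypress headlessly and emit a JUnit-style report into the workspace.
cd ui                       # assumption: Cypress specs live under the UI directory
npx cypress run --reporter junit \
  --reporter-options "mochaFile=../results/cypress-[hash].xml"
# A subsequent workflow step can parse or upload ../results/*.xml as the test report.
```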
A
I talked for a long time. Nivendu, where are you — did I leave you confused as to what would be appropriate to do next?

D
Yeah, so, to be clear: the scope of these end-to-end tests would be to test Meshery in its entirety, and it would actually work with the edge releases of all the components — so Meshery, Meshery Operator...
D
We build the Meshery Operator, we build the adapters, we build the Meshery Server and all that, and then try to test it, right? Instead of just doing the CLI — testing from a user's perspective.

A
Actually, both are valid and both have their place. Something that gives us hope of being able to achieve either one is that we have workflows today, generally, for either: the GitHub Actions are generally focused toward user-centric testing.
A
That is, testing of published artifacts and published releases, whereas a lot of the workflows that we have in the repos today are about building something actively and testing that. And they're both helpful, because with a full regression test, sometimes you're going to want to build things actively, and other times you just want to say: no, look, the thing that I'm trying to test is just that there's a new Kubernetes version; I want to make sure that what we've released to date works with that, so don't build anything.

D
Yep, yep, you've got it. So essentially we would be having multiple tests for, let's say, all the components, and we bring together all of those under one test, and then it's like plug and play, right?
A
Yeah, yeah, hopefully. If we're using a kind of common vernacular in our tests and their results, then it kind of becomes plug and play like that. If we're reusing these workflows and amalgamating them into higher-level, full regression tests, then yeah, it is kind of plug and play in that regard.
A
Some of you have seen this, but to tie this concept together: there's an early but forthcoming compatibility matrix and, sort of, test status, and over time this will mature quite a bit and probably become quite the resource — we'll probably have Slack notifications and other things going on. Part of the way that this has taken shape is that, if you think about it — Nivendu was describing an overarching end-to-end test for fully regressing things, and that's good. If that's running every night, wouldn't the output of that be nice to see here? Like: is the tip of every branch still good under some certain scenario? That's one question that people have.
A
Another thing is, hey, there's a lot ongoing — across Layer5 there are like 70-something repos; there's a lot going on. Just across Meshery there are like 30-something repos. So where do you go to ask: is Linkerd still working? They just had a new release — does Meshery work with that new release? Well, in the meshery-linkerd repo, it's running tests fairly frequently, and right now we take the output of those tests.
A
Like, this was the version of OSM. If you click on the row, it will expose just a very nominal set of info today: what platform it was done on, and what the end-to-end tests were that were run. Right now, these are simple lifecycle management tests of saying: can the Meshery OSM adapter deploy these OSM components, and did they stand up — the assertion that's being done here is, did they achieve a Running state?
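[Note: a sketch of what that "did they achieve a Running state" assertion might look like if expressed directly with kubectl. The namespace is an assumption (OSM's default), and the actual adapter tests may assert this differently.]

```sh
# Wait for all pods the adapter deployed to become Ready, failing the step on timeout.
kubectl -n osm-system wait --for=condition=Ready pod --all --timeout=300s
```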
A
If so, then this particular test passed — that's what Ashish Tiwari had determined. So those tests can be refined over time: we can change the criteria, we could add additional tests other than the lifecycle management ones, we can add additional platforms, and so on. The report format here needs to be enhanced, and we need to be able to slice this in different ways, but you can.

Exactly, yeah, totally — exactly that style of test. It's not even necessarily "are you going to test whether mesheryctl context switch works as a command," just as a random example — it's much more than that, at a high level. It's like...

B
Testing and then looking for compatibility — like reporting and sharing that.
A
Yeah, and to layer the tests on one another, such that if there's a basic smoke test that happens — which in some respects is kind of what we were just looking at here; a really simple "did they stand up" — then okay, great. From there, that same workflow can be built upon: you could add to that workflow applying a service mesh pattern, and the pattern might be quite sophisticated, with a bunch of other...
A
So, in the meantime — this is the challenge, and this is actually where I get myself in trouble sometimes. It's like, okay, great: if you walk away from the meeting right now, what you understand is, all right, as a contributor, as a community member, what should you do? Nothing — just wait for the design spec to come out that explains all this? No, no, no. For any of you that want to sign up to write down the strategy:

We will seek out those that will, and we will hold their feet to the fire and make sure that they do write it — or at least one person writes it, and then everyone else can review. But in the meantime, it's very clear that this is a beneficial test.
A
To orient you very quickly: this runs when someone opens a pull request or when a pull request gets merged. What it does is check out the adapter code and build it, and then it applies a pattern file — a simple pattern file that just tells the adapter to deploy that particular service mesh. And the way that Ashish Tiwari has done this is he's basically written down assertions here — which, actually, I'm seeing for the first time. So he's saying, like...
A
It goes over to the main Meshery repo, where the docs are — this robot account, this service account, is used to do that. And so, actually, in this way Meshery becomes self-documenting, right? Every time this is run, it's now self-committing, updating those tests. And guess what: Jared Byers, from F5, from NGINX, has been asking around about some of this recently, and what a beautiful thing for him to do — it's very encouraging.
A
You know, Mario specifically and Jared are both here — actually, they're both at F5. I don't know — so F5 produces some good engineers, apparently, or good engineers join F5; I don't know which. But Jared asked: hey, how many versions of each service mesh should Meshery be compatible with?

You know, one, two, four? And the response was: well, like 20, 30, 50 — a lot, most of them up to a certain point. And you couldn't see his eyes on the call, but they probably popped out of his head — he probably rolled his eyes, saying, yeah, well, good luck with that, right? And if I were him, that's what I would be saying. But the thing is...
A
We can flip that on its head. If you get this right, you can get really close to — or you can get all the way to — being really confident that, yes, in fact, all those versions work. It's a massive matrix if you really get into it in detail; we talked about this yesterday, all the possible permutations.
A
What else do we have today? Oh, Mario, should we circle back to you real quick? Is it working — thumbs up? All right, yeah, it's super.
B
But this is basically helping us use the latest stable release of Meshery, and then you can just get the local UI running. This will help anyone get started with Cypress testing — at least have everything that they need — and actually, doing a deploy this way closely resembles an end user's environment, so pretty cool.
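[Note: a hedged sketch of the setup Mario is describing — the latest stable Meshery release plus a locally served UI for Cypress work. The UI commands are assumptions about the repo's npm scripts, not confirmed targets.]

```sh
mesheryctl system start               # deploys the latest stable Meshery release
cd ui && npm install && npm run dev   # serve the UI locally (script name is a placeholder)
# Cypress specs can then be pointed at the locally running UI.
```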
A
So Prasad is with us, and Honshu was earlier, but I guess we lost him — and Sayantan and Vishal and Nivendu and Mario and Aditya and Medina and the other Aditya.

Prasad, do you want to say hi real quick? You can do so in chat if you want to. It's just nice to get to know folks on the call.
B
Yeah, yeah, I think I can script what's pending pretty easily, so I'll get this going — it shouldn't take much longer.
A
Yeah, it is. I think long term we might want to consolidate, but — yes, when we're done we'll see whether consolidating into the build-and-release doc makes sense or not. Yep, yeah, totally.