From YouTube: Meshery Build & Release Meeting (Nov 12th, 2021)
Description
Meshery Build & Release Meeting - November 12th 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/meshery
Twitter: https://twitter.com/mesheryio
LinkedIn: https://www.linkedin.com/showcase/mes...
Docker Hub: https://hub.docker.com/u/layer5/
B: Just curious, so what you shared in the thread, that's separate from the other area of interest for testing, right? I think that one was about the pattern, so that's a different area. Correct, okay, got it. Okay, yeah, I'll take a look at both and, you know, try to make some progress.

A: Lee and Ad have just joined, so we were just talking about the tests defined in the Meshery test plan and how Mario can help us.
C: Yeah, Mario, I'll belabor the point by saying it's a big old need. It's really important with every release as we get to a 1.0, and the more code that we bring in, the more anxiety I have as I hover my mouse over the PR merge button and look to the green check marks like, okay.

B: A false sense of security, because, you know, the coverage of the integration and end-to-end tests isn't getting bigger with every commit. Yeah, I get you.
C: Yeah, one of the things, and I'll again belabor the point by just saying it: it's a little bit unnerving as we go to fix some security vulnerabilities that are known inside of Meshery's dependencies.

C: You know, every time that we accept a new dependency, like an upgraded dependency, it's like, yeah, those occasionally break, or there are breaks, and the problem is you get...

C: You know, you'll get like 10, 15, 20 of them at a time, and then you're going through and manually building and checking extensions. So, yeah, there are a number of willing contributors in the community ready to follow your lead with respect to having an understanding of what Cypress is and where to go. They don't need to be Cypress experts; they don't need to, like...
C: We don't want to propagate a bunch of, I'll use that word again, fragile test cases. But so long as they're adhering to what you know to be best practices, yeah. I guess the message I'm trying to deliver is: hey, myself and others are clearly stoked that you're here, and so as we make asks of you, the intention isn't to saddle your shoulders with...

C: You know, more stress and responsibility, but rather to empower you, and to have others kind of follow suit. And actually, one thing that I had mentioned before but didn't follow through on, in part because I wasn't quite sure of this: the cypress.io account that you have today...

C: Is there a particular email address or username that we can associate to it, and then see if we can add you to the team?
B: I have experience. You know, I have gone over all the Cypress docs, so I know what features are included. But do you want me to create a Cypress account, or should I just share an email with you? Yeah.

C: Nice, okay, yeah. I just didn't want to have you roll your eyes as we sent you, like, you know, your...

C: All right, all right! Sorry, just back to you.
A: That was actually the first thing we had to discuss on this call. I'd also like to point out that we have a column here that says whether the tests are automated or not. Our ideal goal is to have all of these tests automated. That might be pretty obvious, but, Mario, when you work through this, you can use this spreadsheet to keep track of which things you have automated and which you haven't. All right.

A: All right, so the next topic is reusing the GitHub Actions for e2e testing. So, Rudraksha, do you want to start this off, or should I just...
A: Okay, so for some context: we have an action in our flow in the Meshery repo that basically uses our GitHub Actions. We have the Meshery SMP GitHub Action, and we also have the SMI conformance GitHub Action. We also have one GitHub Action being built which wraps around Meshery patterns. So these tests use this action to run a couple of workflows across multiple Kubernetes versions and multiple environments, that is, deploying Meshery on Docker and deploying Meshery on Kubernetes.

A: The Meshery SMP GitHub Action, so there must be... oh, it's actually merged. So what this does is update the action to actually deploy service meshes, onboard applications onto the service meshes, and run a couple of workflows. That's basically end-to-end functionality, so we might use this updated action in the workflows as well. And one thing that we need to do is finish off the service mesh patterns action and use the patterns action in the end-to-end workflow too.

A: What the pattern action can do is, say, deploy a service mesh, or onboard an application, or configure a service mesh. Basically, that will cover a lot of areas in mesheryctl as well as Meshery Server, so that is still yet to be done. So, if Santan is on the call... I guess he's not, so I'll...

A: We need to follow up with Santan to get the action completed quickly and use it in these places. Rudraksha, anything else to add?
D: I guess the workflow that was showing on the Meshery repository, it's not running, so we even need to put in the appropriate secret, the infinite token, for this as well. Since you can see the secrets, just a reminder to update the secrets with the token.

A: I actually have the action running on a repo of mine; maybe you can take a look. So basically, you can run this action manually, or, these are also scheduled to run daily.
A: We can change that to whenever we want. Let me just try to show you. So basically, we run tests across multiple load generators, multiple service meshes, and multiple configuration files. Let's say we have two different tests defined in two different configuration files, say a load test and a soak test. We can cycle through all the possible combinations of load generator, service mesh, and the tests defined, and this will actually run on a schedule.
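The cycling through combinations described above is just a Cartesian product of the three dimensions. A minimal sketch, where the specific generator, mesh, and config names are illustrative placeholders rather than the workflow's exact values:

```typescript
// Sketch of the scheduled workflow's test matrix: every combination of
// load generator x service mesh x test configuration file is one run.
type Combo = { generator: string; mesh: string; config: string };

function testMatrix(
  generators: string[],
  meshes: string[],
  configs: string[],
): Combo[] {
  const combos: Combo[] = [];
  for (const generator of generators) {
    for (const mesh of meshes) {
      for (const config of configs) {
        combos.push({ generator, mesh, config });
      }
    }
  }
  return combos;
}

// Two configuration files (a load test and a soak test) against two meshes
// and three generators yields 3 x 2 x 2 = 12 scheduled runs.
const combos = testMatrix(
  ["fortio", "wrk2", "nighthawk"],
  ["istio", "linkerd"],
  ["load-test.yaml", "soak-test.yaml"],
);
```

In the real workflow this product would typically be expressed as a GitHub Actions `matrix`, but the enumeration is the same idea.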
C: Did you... okay, yeah, I'm sorry, I totally missed this. You did just invoke those manually. Okay, great. But we've yet to include Nighthawk in these tests.

A: Yeah, we can include it; we can just add it to the... yeah.
A: And we will actually have a separate account to run the scheduled benchmarks. This account will be a special account, and it will have an infinite token. I also tried to name the tests such that they are traceable, so we can trace back to what configuration we used, even the actual configuration file name.
A: That also makes it pretty easy to navigate in the Meshery UI and try to look at those results.
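A traceable test name like the one described can be produced by encoding the mesh, load generator, and source config file into the name. This is a hypothetical naming scheme, not the exact convention the workflow uses:

```typescript
// Hypothetical sketch: encode mesh, load generator, and source configuration
// file into the test name, so a result seen in the Meshery UI can be traced
// back to the scheduled run and config file that produced it.
function traceableTestName(
  mesh: string,
  generator: string,
  configFile: string,
  startedAt: Date,
): string {
  // Timestamps make repeated scheduled runs distinguishable.
  const stamp = startedAt.toISOString().replace(/[:.]/g, "-");
  // Strip the extension so "soak-test.yaml" reads as "soak-test".
  const config = configFile.replace(/\.[^.]+$/, "");
  return `${mesh}_${generator}_${config}_${stamp}`;
}

const name = traceableTestName(
  "istio",
  "nighthawk",
  "soak-test.yaml",
  new Date("2021-11-12T10:00:00Z"),
);
```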
A: Currently both of these are the same, but I added these two workflows to point out that we can have multiple configuration files and run different tests differently. In performance-testing terms, a soak test is a test run that goes for a longer period of time.
A: Yeah, so, Lee, what we need here is to define some actual tests that need to be run; these are just sample tests. Basically, we need to rewrite what this is going to be. I was trying to get Sungoon involved here, so you can look into this, and one thing...
C: Yeah, actually, maybe this is debatable, the definition and distinction between a generic load test and a soak test specifically, but I'll suggest that there's a difference here. And as I do, it raises the question as to whether or not Nighthawk is capable of a soak test, and if it is, whether or not we need to expose support for it in SMP. So briefly, and people can disagree...
C: I don't think these are necessarily industry-hardened terms, but a load test is mostly what you generally think of, which is: hey, you've got some business requirement that says you want certain performance characteristics of a given service, and so you ramp up load on it, you're watching it, you're looking at it, and over whatever amount of time you say, yeah, it looks good, or it doesn't. That was the load test. And then the soak test is a bit more like, hey...

C: Let's run this for an extended period, which is one difference. And the second difference is: let's go ahead and load it up at the same load that we would have used in a load test, but let's also include some variability. Let's run it for an extended period, have it hammer on it for a bit, give me a spike, but then come back down, do some other stuff, then spike again, and, you know, sort of... yeah.
C: There's a blog post out there on this that covers it a bit. But then the question becomes, or what I start to acknowledge is, that in SMP we don't have variability over time as a configurable item. So there's an open question as to whether or not Nighthawk supports variability.
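The load-versus-soak distinction above can be made concrete as a QPS schedule: a load test holds a steady rate for a short window, while a soak test runs much longer with baseline traffic plus periodic spikes. All numbers here are illustrative assumptions:

```typescript
// Sketch of the two profiles as functions of elapsed time, returning the
// target queries-per-second at second t. Durations and rates are illustrative.
function loadTestQps(tSeconds: number): number {
  // Steady 100 qps for a short 30-second run, then stop.
  return tSeconds < 30 ? 100 : 0;
}

function soakTestQps(tSeconds: number): number {
  const durationSeconds = 10 * 3600; // extended period: 10 hours
  if (tSeconds >= durationSeconds) return 0;
  const baseline = 100;
  // Variability: every 10 minutes, hammer at 5x baseline for 60 seconds,
  // then come back down to baseline.
  const inSpike = tSeconds % 600 < 60;
  return inSpike ? baseline * 5 : baseline;
}
```

Expressing variability this way is exactly what the discussion notes SMP does not yet model, and what would need checking against Nighthawk's capabilities.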
B: Cool. I have a question about testing the performance management, but I can wait until there are no more topics.

A: Oh yeah.

B: Okay, sure. So Navendu shared that in the Meshery test plan, a desired, high-priority item is the performance management dashboard, right?
B: So the test is in the spreadsheet; it's 167. Is this correct? Is that the one you were talking about, Navendu?

A: There may be some scenarios that might not have been defined in this test.
B: Maybe, you know, I'll do my homework and go over the documentation to get a better idea of the functionality. But right off the bat: is there any setup required in order to run that test? Just trying to think, from what we have in the workflows, there's some end-to-end already going on with Cypress. So are there additional steps we would need to script in order to get the conditions ready for the test to run?
B: I'm thinking there should be some additional configuration for that, right? Or would it do a performance test against Meshery itself? Maybe that's something I'm missing.

A: We can run tests; there is this configuration. Basically, we can paste in a URL where we have a workload that we can run tests against. Other than that, there isn't much configuration.
B: Okay, so then, once it's installed, do you have the end-to-end workflow, I mean the GitHub Actions workflow? Just trying to understand: if we add a Cypress test in the end-to-end folder, what...

B: What would the performance test run against? I was trying to understand what's being tested at that moment, right? Or where would any configuration be, though you say it's not needed. The workflow would install all the Meshery components and services, right? But then, what would it test? Is there anything deployed at that point that it would do performance testing against?
B: Like me? You mean me? Okay, yeah. But I'm just asking: where can I go over... where are those workflows? I'm assuming it's one of these, but it's not obvious to me. Okay, maybe CI. I'll take a look, just a sec.
C: So you guys can answer it yourselves, which is: look, yeah, the Cypress testing could invoke, or could benefit from, a GitHub Action, maybe just in terms of understanding what's going on, sure. But actually, the point of running the Cypress tests in addition to running the SMP GitHub Action that Navendu just kicked off is that they will do somewhat duplicative coverage, which is good. So, in the ongoing GitHub Actions...

C: Arguably, we could focus just on Cypress and get all the same stuff that we otherwise got out of the GitHub Action, because Cypress is higher level and starts from a higher starting point, which is the Meshery UI as the client, whereas the GitHub Action starts from mesheryctl as the client. They're both hitting it from two different vectors, and so it's good.
C: This one helps people pipeline, it's convenient, and we're going to pump out, hopefully, a bunch of reports and data, and that's great and good, etc. Over here in the Cypress UI we're going to test that all of our JavaScript is happening, the login is going on. There's a basic set of Cypress setup that's going to be reused across... oh, I'm assuming. I'm a Cypress ignoramus, but I'm assuming, like, you'll...

C: You know, you develop a couple of reusable functions or reusable tests: you perform this test, and then, if that's successful, you build off of it. So it's like the login test: did that work? Yes? Okay, you go to the next one. And so, to what Mario is asking: okay, great, I'm ready to chow down on this first, kind of the highest-priority item, which is this performance management area. It's the deepest area that Meshery has, and as such...
C: It's critical. A lot of people come in to use Meshery to run these tests. So, great: where do we get started, and as we get started, what configuration is needed? There are a couple of easy answers and a couple that get into it a little bit more. One of the easy answers is more or less where I think Mario was about to land, which is: okay, so from within Cypress...

C: The pseudocode for one of the first tests to write is... well, actually, now that I think of it, sorry, I forgot that Cypress runs inside of a GitHub workflow as well. So it can benefit: alongside those initial tests that Cypress runs, there can be some infrastructure set up, a local Kubernetes instance with a pre-deployed Meshery, and then, whatever that repetitive pattern is...
C: Then we'll get into this Cypress area, which basically expects: okay, there's a running Kubernetes and there's a running Meshery; now it's time to perform the login test, which I think we have. Great, okay. So then the starting point for these next performance management tests is: you have all that infrastructure...

C: You have just vanilla Kubernetes with Meshery sitting there, and you're logged in. Now, there's no service mesh and there are no workloads, so we do ultimately want to deploy a service mesh, deploy a workload, and hammer on that workload, generate some load.
B: Then, of course, we have the load generators. I'm not that familiar with them, but I know they're Fortio, wrk2, and Nighthawk. But the question would be: what do we need prepared in that workflow? Or maybe it's just something we can invoke.
C: A couple of answers here. One is that, okay, I don't know that we want to have tests... yes, it would be great if we had tests that cover all of these permutations, but as we go through it, the more comfortable all of us get, the more opinionated we will be about just how many tests are needed versus one test covering five other potential individual tests.
C: So one of those things is that you can go into Meshery without using a remote provider, so without signing in. You're basically an anonymous user; you're using Meshery as a generic tool. Meshery doesn't know who you are and doesn't save any of your data. And so you can go in, and you'd have no...

C: There would be no pre-configured performance profiles that you could just invoke an operation with or run a test from. Which means that, okay, it's really easy to sign in, because you're technically not signing in; you just go in as no user. Now, the Cypress test would need to supply its own configuration, so it would need to say: here's the endpoint I want to hit.
C: You know, in this case it's okay if it were to hit itself, or hit something locally, or send something out remote; either way, whichever one is most reliable, so we're not getting false positives or false negatives from the fact that the GitHub runner tried to send some packets out to the internet but, for whatever reason, they didn't make it back, and so the test fails...

C: ...even though it wasn't a Meshery code issue; it was just an environment issue for the workflow. So choosing localhost-something-something is probably a good idea. And then, yeah, for the other parameters there are system defaults. There's actually only one parameter that the user must specify, and that is the URL. We might or might not want to specify the duration; the default is 30 seconds. Maybe that's okay, but you can set it to something else.
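The configuration shape being described, one required URL and system defaults for everything else, can be sketched as follows. The field names and default values here are assumptions modeled loosely on SMP-style options, not Meshery's exact schema:

```typescript
// Sketch of a performance-profile config where only the endpoint URL is
// required and everything else falls back to a system default (duration
// defaulting to 30 seconds, per the discussion). Field names are assumed.
interface PerfProfile {
  endpointUrl: string; // the one parameter the user must specify
  duration: string;    // e.g. "30s"
  concurrency: number;
  qps: number;
  loadGenerator: string;
}

function buildProfile(
  endpointUrl: string,
  overrides: Partial<Omit<PerfProfile, "endpointUrl">> = {},
): PerfProfile {
  return {
    endpointUrl,
    duration: "30s",
    concurrency: 1,
    qps: 1,
    loadGenerator: "fortio",
    ...overrides, // caller-specified values win over the defaults
  };
}

// Hitting something local avoids false negatives caused by the runner's
// network, per the point above about localhost targets.
const smoke = buildProfile("http://localhost:8080/productpage");
const custom = buildProfile("http://localhost:8080/productpage", {
  duration: "2m",
  qps: 50,
});
```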
C: The rest you could leave, because, I mean, you could set the concurrency to one and the queries per second to one. Though, actually, I think the point is that you probably would set a value or two, because that's part of flexing the UI. You know, part of...
C: Over time these smoke tests become part of a larger regression safety net, of course, yeah. Which then is to say: well, hey, if this is just a smoke test to verify that performance tests work, maybe we don't need to supply it anything other than the default values. But in the regression test, yeah, let's give it a couple of variations, a negative value, and stuff like this.
C: Of course, I think... I don't know. The verification that the test ran successfully, the assertion that we would give to Cypress, would be... I'm not sure. It's an asynchronous invocation, because you might have a 10-hour-long test, and so we don't force users to sit there and wait for a synchronous message to come back and say, here are your results. So, from...

C: Or, well, in the Meshery UI there isn't... it's really, so...
B: Then there's a, what is it, a connection that... is it keep-alive? I can't remember the term, but then it's just waiting for a message back.
C: Yeah, exactly. It's just a perpetually open socket that events are pushed to. In this case it's a GraphQL subscription. So you're right that the UI is a client of Meshery Server, and in the UI this JavaScript library is sort of sitting there saying: you know, server, I'm subscribed to this topic on this socket, so whenever you're ready, just push stuff over to me and I'll show it to the user.
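The subscription pattern being described, a long-lived channel the server pushes events into while the client just renders whatever arrives, can be modeled minimally. This stands in for a GraphQL subscription over a websocket; the topic and event shapes are illustrative placeholders:

```typescript
// Minimal model of a server-push subscription: the client registers once,
// then passively receives events whenever the server publishes them.
type Listener<T> = (event: T) => void;

class SubscriptionChannel<T> {
  private listeners: Listener<T>[] = [];

  subscribe(listener: Listener<T>): void {
    this.listeners.push(listener); // client: "push stuff over to me"
  }

  publish(event: T): void {
    for (const l of this.listeners) l(event); // server pushes when ready
  }
}

// The UI subscribes once, then renders performance-result events as they come.
const results = new SubscriptionChannel<{ testId: string; status: string }>();
const rendered: string[] = [];
results.subscribe((e) => rendered.push(`${e.testId}: ${e.status}`));
results.publish({ testId: "t-1", status: "running" });
results.publish({ testId: "t-1", status: "complete" });
```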
B: Yeah, you know what, I think this is a good discussion, and I found the guides, thanks for sharing that. There's a guide for performance management, so I'll try to get a local instance running and try to come up with a really basic test that will help us take off here, right? Just a starter; then we can go from there, right?
C: Mario, it should be the case that when the performance test is submitted, when the Run Test button is clicked, there should be an acknowledgement back from Meshery Server to the UI saying, you know, thank you, and here's the name of your performance test, which might be a bit of gobbledygook; it's just sort of a unique identifier for the performance test that you just invoked. And so with that, it might be that you wait 45 seconds, or whatever, and then request the results of that.
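The asynchronous flow just described, submit, get back an identifier, then later fetch results by that identifier, can be sketched like this. The server interface is hypothetical, standing in for Meshery Server's actual API:

```typescript
// Sketch of submit-then-poll: submitting returns an acknowledgement with a
// unique test id immediately, and the client asks for results by id later
// instead of blocking on a synchronous response to a possibly hours-long run.
interface PerfServer {
  submitTest(profileName: string): string; // returns a unique test id
  fetchResult(testId: string): { done: boolean; p99Ms?: number };
}

async function runAndAwaitResult(
  server: PerfServer,
  profileName: string,
  pollIntervalMs: number,
): Promise<number> {
  const testId = server.submitTest(profileName); // immediate acknowledgement
  for (;;) {
    const result = server.fetchResult(testId);
    if (result.done && result.p99Ms !== undefined) return result.p99Ms;
    // Not finished yet: wait, then ask again by the same id.
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
}
```

A Cypress assertion would sit on top of something like this: assert the acknowledgement (the id) arrives promptly, and separately assert the eventual result.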
C: It has a couple of pre-existing performance profiles, like the quick test, or whatever it's called. And so in that instance the Cypress test wouldn't have to supply the configuration so much as it just has to identify and select one.
B: So yeah, sure, there could be separate tests: one for actually running the profile, where we could do what you just said, and another one just to try to add a profile. They don't have to be together, right, as long as we go over both workflows.
C: Yeah, yeah, the run time is free for them, or we're not exceeding our...

C: So there is a slight issue in terms of our GraphQL subscription here: it should have gone to result one, and I think if you refresh the screen it will say result one, but we're not flushing as we necessarily should.
C: So the YAML that we see inside of the GitHub Action, that's, like, extraordinary: it's representative of exactly what you're typing into the UI. It's the same fields. The reason that we can guarantee it's the same fields, well, there are a couple of reasons, but largely it's because those fields are based on the Service Mesh Performance specification. And so over here it's in YAML, when we downloaded it from the UI.
B: It is failing, yeah? Maybe... could you open your console in the browser, please? So, okay, can you try again? Let's see the request.
A: We have a couple of other items to discuss, and we only have about 15 minutes left, so maybe we can get to those topics. Rudraksha, do you want to go ahead with this?
D: Yeah, so I can share my screen and explain the problem.

D: So basically, the thing is that mesheryctl now installs using Helm charts, and the Helm chart manifest doesn't have this label. Let me just close this. Right now in Meshery Server we fetch this from that label, so, yes, there is space for improvement. And this one is fetched from the container's tag itself; I don't know if we should do it here. Other than that, I'll probably need to catch up on how MeshSync can send this version. So, yep.
C: And so, to your knowledge, if you query the... I don't know if you can. Are you able to... does MeshSync expose an API system version?

C: All right. I do think that NATS, the NATS subscription port, is exposed, but that may not be serving MeshKit's API system version. So, okay, all right, some investigation needed then. Yeah, good. This is just a small item, important to the extent that, you know, we're trying to troubleshoot things.
C: Right, yeah. I don't think we can speak to the seed content thing just yet, but on the Helm-versus-manifest item, Rudraksha, I missed quite what you said. There's an update in that PR on mesheryctl for when you invoke system stop, and I think what's going on there is that Darren would have included the removal of what are otherwise orphaned artifacts.
A: So I guess we have a couple of release blogs pending. I'm not talking about the main release blog, but the release blogs for the particular features that we are planning to add in this release. So we need to actively try to get these written down.
C: I forgot, there was one other item. Oh yeah, there's one other item I'd like to make sure that you all are at least aware of. There might be some new contributors that come, or you might meet some that want to do this; this would be an ideal test case. The Service Mesh Performance tests are now being scheduled through that action.
C: It should ultimately be publishing these results and saying: for this service mesh, on this version, here was its response time under the soak test, or under whatever. And we need to have a dashboard, a public-facing dashboard, that people can just come refer to and compare and contrast. And so, for anyone who takes this on, it's relatively straightforward to do, based on the exposure of those metrics from Meshery Cloud.
C: Can you go to meshery.io/smi? So Meshery runs conformance tests for these service meshes, and it does it with different test assertions, using different versions, that kind of thing, and we publish the results. These results are run and sent back to Meshery Cloud; Meshery Cloud exposes the results, and then meshery.io has some JavaScript that just grabs that JSON and, you know, shows the world the dashboard of which service meshes are conformant with SMI. In a similar way...

C: We need an SMP dashboard that shows the world the performance of different service meshes under different configurations. And we can very quickly expose that performance data from those scheduled tests that we were just looking at earlier on the call. So, very quickly, we can toss up a table that grabs some JSON and shows people who's fastest.
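The client side of that "toss up a table" idea is small: fetch a JSON payload of published results and sort it by who's fastest. The result shape here is an assumption for illustration, not Meshery Cloud's actual API:

```typescript
// Sketch of the proposed SMP dashboard's rendering step: reduce a JSON
// payload of published results to table rows sorted fastest-first.
// The PublishedResult shape is a hypothetical stand-in.
interface PublishedResult {
  mesh: string;
  meshVersion: string;
  p99LatencyMs: number;
}

function fastestFirst(results: PublishedResult[]): string[] {
  return [...results]
    .sort((a, b) => a.p99LatencyMs - b.p99LatencyMs) // lowest latency wins
    .map((r) => `${r.mesh} ${r.meshVersion}: ${r.p99LatencyMs} ms p99`);
}

const table = fastestFirst([
  { mesh: "istio", meshVersion: "1.11", p99LatencyMs: 18.2 },
  { mesh: "linkerd", meshVersion: "2.11", p99LatencyMs: 9.7 },
]);
```

On the real page this would hang off a fetch of the published JSON, the same pattern the SMI conformance dashboard already uses.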
E: Yeah, so basically I have to look at the SMI page, how the data is shown there as a dashboard, right, and then do the same for SMP. And if there is a need for optimization of the designs, then definitely, yeah.
E: This is great. So basically it needs a back end also, right? The data may be coming, but does it need a different type of data format, you know?
C: There is no back end for it right now, other than... Navendu, if you navigate to the main Meshery page, underneath the splash, where SMP is, it says "5096 performance tests run". That is real; that is real time. That number is actually pulled from Meshery Cloud whenever someone loads this page, and so if you run a performance test right now, the count will go up.

C: So there is some of the data; it's just a single number that's exposed, and what we need to do is expose the rest. So yeah, the back end isn't there. I'm just... so now you have full context for what we're going to try to do later; when we talk about it, you'll know what the goal is.
C: And then, okay, at risk of going over time, Adina, really quick, almost last chance: do you have any other questions, anything that still doesn't make sense about that mesheryctl patterns GitHub Action?
C: Okay, cool. As they go forth, this would be a great one for you to watch and maybe add reviews on as they do it, because your context will come as they go.
G: All right, okay. Navendu, anything else on the call?

A: Nope, that's pretty much it for today, so we'll meet back in two weeks, hopefully with Meshery v0.6 released and the beta programs being active. So I'll see you guys then. Bye, thanks.