From YouTube: Meshery CI Working Group (June 1st, 2020)
Description
A demo of @LitmusChaos github action with @udit_gaurav15 and @ksatchit today.
A
It's hard to get a good screenshot that highlights Meshery as the project you're focused on, just because the projects that are inside the CNCF get a bit of extra elevation. But that was good.
E
Hey, yes, Lee. I've actually figured out that every Monday morning I add something regarding Meshery to the LinkedIn update, so people get to know that we are working on it and making some progress. I have very little time, since I'm working on a private project, so I just throw it out there so that people know we have achieved a lot in a little...
D
...amount of time, yeah, really. I totally thank you for that. It looks good. It looks great.
A
Well, now I realize maybe let's just start the call; we'll talk about that a little bit more after we get rolling. And I see Karthik here. I owe Karthik a response on some things, so Karthik, we can probably talk about that. Actually, it's a great topic, Karthik, for this meeting.
F
Hello, yeah. Thank you. This is a small PoC that we've worked on with a sample app. I just wanted to know how we can use that in the Meshery context. Yeah.
A
Okay, nice. So let me share the meeting minutes, and let's put it on the...
A
Okay, so it's assigned. Just to finish up that last conversation.
A
That sounds good. That's great that you find just a quick minute to put something out; that's enough. That's good. Just seeing your update was a good reminder to me of a couple of things: one, that we need to go write that up a little bit and talk about all of last week's news, which was quite a bit, but then also, specifically, I opened up this issue 428 on the...
A
Well, good. So I asked Nippor if she wanted to do it, but actually, if you're inclined to, you know, please grab it.
E
All right, yes. Last time we were talking about that: if somebody hits the Meshery website and then doesn't come back in a month, or three or four months, we could have them subscribed and show them updates. That encourages them to revisit our website, or encourages them to come to these calls. So I am pretty much inclined toward that. And also, I think we put together some other updates as well, like the Consul partnership with HashiCorp.
E
With what Consul is updating, people get to know that we are pretty much in the business now of keeping ourselves updated on what other people are doing, so I'm pretty much inclined toward that. And really, I think everybody on the call is encouraging that, so we can show them a useful link for Meshery weekly to learn about what we have done this week, like the update we did last time.
A
There have been a few of us in the community that have contributed blog posts, and a few of us, like Karthik and Udit, who are here, have very directly helped contribute to Meshery landing inside the landscape, the CNCF landscape. So thank you for that.
A
Also, we'll put a small piece of news on the site, which will probably be nothing other than just a redirection that shows people the fact that it's there. And I guess, to that point, Karthik, as we discuss some of the items with LitmusChaos today, if we do end up being successful with anything, really, whatever that is, it's a...
A
Okay, well, this is actually a good next topic, by the way, Karthik. Last week was really something of a blur for me. I know in my mind that there was a message you had written, and I'd read like the first sentence, but that's as far as I got. So don't assume that I have all of the context, I guess.
F
No problem, yeah, no problem. The context was, I think, some of the calls in the preceding weeks where we discussed the possibility of running a chaos action: you could bring up a kind cluster as part of the CI pipeline through a GitHub Action, and probably deploy an app that's part of the Meshery suite, and then run a simple chaos test against it.
F
So, as part of the CI, we could launch a kind cluster on the VM that is provided by GitHub Actions itself, and this is post building the Docker image. For that particular application, you could run any unit tests or any such static checks.
F
Maybe run those before doing the image build. And after that, once the kind cluster is created, you could do a `kind load` of the Docker image so that the recently built image is available on the cluster, and then probably launch Kubernetes manifests such that they use the image that has just been built as part of the CI. Once that application is up, there could be a liveness check, which is basically just trying to say that the application is, you know, alive and healthy, some kind of an indicator, followed by a chaos action, and then checking whether the chaos action has succeeded.
F
Now you could consider this like some kind of an e2e test. If that has succeeded, then you go and push the image into the repository once this has passed. So this is the flow that we sort of tried to work out with a sample application, not really a Meshery app, but just a hello-world app. Otherwise, maybe Udit can just share the screen, if that particular flow is handy.
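The flow described above could be sketched as a single GitHub Actions workflow, roughly like this. This is a hypothetical sketch: the image names, manifest paths, and the LitmusChaos action reference and its environment variables are illustrative assumptions, not necessarily what the demo used.

```yaml
name: ci
on: [push]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Unit tests / static checks would run before this step.
      - name: Build image
        run: docker build -t hello-app:ci .
      # Bring up a single-node kind cluster on the runner VM.
      - name: Create kind cluster
        run: kind create cluster
      # Make the freshly built image available inside the cluster
      # without pushing it to a registry.
      - name: Load image into kind
        run: kind load docker-image hello-app:ci
      # Deploy manifests referencing the just-built image, then
      # verify the app is alive before injecting chaos.
      - name: Deploy and liveness check
        run: |
          kubectl apply -f manifests/
          kubectl rollout status deployment/hello-app --timeout=120s
      # Run a simple chaos test (e.g. pod-delete) against the app.
      # Action reference and env names are illustrative.
      - name: Chaos test
        uses: litmuschaos/github-chaos-actions@master
        env:
          EXPERIMENT_NAME: pod-delete
      # Only push the image to the registry if chaos passed.
      - name: Push image
        run: docker push hello-app:ci
```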
H
Yes, so thanks, Karthik, and hello, everyone. As Karthik mentioned, this is a similar type of workflow YAML. I will go through this. Let me trigger it first, so that the creation time and all those other times get accounted for.
H
Yeah, so this is the workflow YAML which is going to be triggered, and the first step is basically building a Docker image. The build context here is the root directory, which contains a hello-app application, a sample one; it can be different according to our need. After building it, we will just see whether it's up and there, and then we'll just make a kind cluster.
H
So this ubuntu-latest image comes with kind installed; we just need to bring up the cluster. I have chosen the default cluster name, but we can give one as well. After having the kind cluster up, we just need to do a `kind get kubeconfig --internal`; it will make the kind cluster accessible to our GitHub Actions run, and after that we're just checking the nodes' state.
H
Right now the image is not pushed, and we will just use it to create the application; we wait for it to come up and check its status. Then we will create a liveness pod, and we'll wait for that also to come up. So this is the workflow. And finally, we come to our chaos action, which right now is pod-delete.
H
So this is the run. It built the Dockerfile, and now it's just checking whether the pods are present or not, and the whole workflow runs like that.
A
Yes, okay, yeah. This is pretty exciting. About how long does the full workflow take for this particular sample app?
H
Yeah, so it takes about seven to eight minutes, depending on the Docker build, so it can vary with the build; if it is a heavy Dockerfile, that step alone might take some four minutes. But a successful run takes seven to eight minutes; the last successful one took eight minutes and 13 seconds.
A
I'm curious about Docker's, I'm sorry, GitHub's infrastructure and what they provide to open source projects. I don't recall if it was on this call or a different one that we kind of looked at the limitations on GitHub workflows, how many you can run and that kind of thing for open source projects. Clearly, whatever those limitations are, they aren't an issue for what we're doing here. I guess the question I'm trying to get at here is about the kind environment.
H
Yes, so for the kind cluster which I'm using right now, I just took the defaults, so it will just build one master node. We can customize it: we can give it some number of nodes, like two or three, whatever we want, and we can do it that way also.
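A multi-node kind cluster like Udit describes is configured with a small YAML file passed to `kind create cluster --config` — a minimal sketch:

```yaml
# kind-config.yaml: one control-plane node plus two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Then: `kind create cluster --config kind-config.yaml`.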
A
Okay. Do you know if there is a limit? Like, if you were to try to put up a hundred-node cluster, I'm assuming at some point there's...
H
I haven't really tried that, but I tested it with two or three cluster nodes. I think it depends on the VM, whatever the VM configuration is; if that allows it to take a hundred nodes, it will run with that.
F
There was an earlier conversation, but I have not really gone back and checked. There are these runners that we can probably configure, so that's a VM environment which is not the one provided by default, but we could set up one that sort of fits our requirements and get these actions done there.
A
Okay, all right, another question here, or actually another comment here, and that's for Kaneshkar and then anyone else who's focused on or familiar with the SMI conformance tooling.
A
Just a little bit of context for everyone else.
A
There's a cross between a CommunityBridge and a Google Summer of Code project for Meshery to be used as a validation tool, a conformance tool, to assess whether or not individual meshes are in fact compliant with the Service Mesh Interface, the SMI spec. We're on the forefront of executing that project, which means that we're working out a little bit of the architectural approach, the tooling, and how we would get it done, and it strikes me that one of the particular approaches includes, well...
A
It strikes me that a lot of the time we'll be doing those tests in the context of an event, that event being a new version of the SMI spec, you know, like a PR merge or something similar in the SMI project, which would kick off a set of validation tests, and these validation tests are a bit like what we're looking at here.
A
They are, in the end, multi-system integration tests, and so I thought I would bring this up and ask: so, Kaneshkar, are you taking that into context, taking in the context of what...
B
I don't, I mean, nothing as of now, but yeah, we could discuss this. I think we should discuss this.
A
Yeah, I mean, it may not be that this would be the only way that Meshery would be used to validate SMI conformance, but it might actually be a relatively more expedient way, because of LitmusChaos, and not just the chaos part, but also this additional project that Karthik would need to have...
A
And clearly, the chaos action that you guys have created has built in, I'm assuming, sort of first-class knowledge of the type of experiment. Like, there's the type of experiment you want to run; the experiment name is pod-delete.
A
What I'm trying to get at is: are we going to have to define particular integration tests, or particular chaos, if you will?
F
Okay, so this is basically, as you can see, these are all parameters that we are going to use to precondition the experiment YAMLs, or artifacts, which are sort of going to be pulled as part of the chaos run itself. So there is this install-litmus step that you see here; it's actually going to pull the experiment YAML and then precondition it with the details that you've provided, like app namespace, label, experiment image, etc., and then the experiment is going to be run according to that particular manifest.
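As a sketch of what Karthik describes, the preconditioning details would be passed to the chaos action step as environment variables. The exact input names below are assumptions for illustration, not necessarily the action's actual interface:

```yaml
# Illustrative step: run a LitmusChaos experiment from CI.
# The env names below are assumptions based on the discussion.
- name: Run pod-delete chaos
  uses: litmuschaos/github-chaos-actions@master
  env:
    INSTALL_LITMUS: true                    # pull and install the Litmus infra
    EXPERIMENT_NAME: pod-delete             # which experiment YAML to pull
    EXPERIMENT_IMAGE: litmuschaos/go-runner # image that runs the experiment
    APP_NS: default                         # target app namespace
    APP_LABEL: app=hello-app                # label selecting the app under test
```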
F
Now, this is one particular experiment and, like you said, this particular action, with the experiment name set, knows exactly to run only that experiment. But you could write the actions to include custom experiments for a particular app, so you know what variables you want to pass to it from outside.
A
Oh, just to get more familiar with the actions that are invokable: like the experiment image listed there, this could be any custom experiment? That's cool. And the way to integrate with the chaos action that you guys have created is to have your experiment available in a runner-friendly format.
A
Got it, okay. So there's a couple of other contributors who are focused on the project that I was just talking about, SMI conformance, that aren't here, but we'll be meeting this week about our approach and the action items that the team has.
A
So we're going to spend some time looking at this, looking at whether this can accelerate things. I think there are a couple of questions in that collection: one is, is what's being shown here valuable in the context of the SMI conformance effort; another is, is what's being shown here valuable in terms of just Meshery's own platform...
A
...compatibility testing, because Meshery needs to be compatible with any number of environments that you'll be running meshes on, and any number of service meshes, and so we've got kind of a gaping hole around ongoing integration tests, you know, compatibility tests. I think that was maybe the initial context in which we were talking about this. So there's a couple of avenues to explore.
F
Right. I think probably also as part of the discussion we could just fit in the chaos workflows. I think in 1.4.0 we sort of tried to write some more documentation around how you could use Argo to run parallel chaos. If I understood right, I heard you talk about performance testing, so you could have a performance workload triggered off and then, in parallel, have some kind of staggered chaos while the performance run is going on.
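The performance-plus-staggered-chaos pattern mentioned here can be expressed as an Argo Workflow where two steps share a parallel group. This is a rough, illustrative sketch under assumptions — the images, service URL, and engine file name are invented for the example, not the documented 1.4.0 sample:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: perf-with-chaos-
spec:
  entrypoint: run
  templates:
    - name: run
      steps:
        # Steps in the same inner list run in parallel: the load
        # generator and the staggered chaos overlap in time.
        - - name: performance-load
            template: load
          - name: staggered-chaos
            template: chaos
    - name: load
      container:
        image: fortio/fortio        # illustrative load generator
        args: ["load", "-t", "300s", "http://hello-app.default.svc"]
    - name: chaos
      container:
        image: litmuschaos/litmus-checker   # illustrative
        args: ["-file=pod-delete-engine.yaml"]
```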
A
Yeah, that's right. There's another project that is about to endeavor into kind of distributed performance testing, and so, like we were talking about before, we're kind of missing a scheduler, a centralized scheduler if you will, something to kind of coordinate and orchestrate the tests. So that's another avenue, another use case, and that one in some respects is kind of...
A
I think conceptually that one is even maybe the most aligned with the use cases that LitmusChaos was sort of created for, that being, whether it was deleting a pod or saturating the network with a bunch of load, that one aligns really well.
A
Let's think about that. For an app, Image Hub is nice and lightweight. At this very point in time, there's an enhancement that needs to be put into the mesh adapter for Consul to really make that a smooth integration. I think it's a good app in general, but at the moment it's a bit broken; we'll have to fix that.
A
And so this would be a good... there could be some hiccups, I hope not, but the reason I call out Natish and a couple of others is because they're looking at mesheryctl as a client to invoke those same operations, like, you know, to spin up a service mesh, to deploy a sample app. And we haven't gotten there yet, but had we already been on the other side of that, I think we would have worked through potential kinks.
F
Is there an application which you would, and forgive me for lacking this knowledge on the Meshery side yet, is there an application which can be created directly via Kubernetes manifests, for example, and is in need of some kind of sanity tests right now?
A
I guess, actually, in that regard, considering a couple of things: how sexy Image Hub is, and by sexy I mostly mean in terms of the technology that it's using, how it's teaching people some new things, and how lightweight it is, I think it might be the right place to start, because we will eventually fix the issue with the Meshery adapter for Consul, and because, to your point, there are Kubernetes manifests for laying down Consul and laying down Image Hub.
A
Even setting Meshery aside for the moment, you would be successful with deploying Consul using a Helm command, and deploying Image Hub by directly applying those manifests to Kubernetes. So, okay, yeah, that is the right way to go forward.
A
The
instructions
on
the
readme
for
image
hub
are
accurate.
Today
that
you
should
have.
You
would
have
success
in
following
those.
A
Okay, yeah, so this is good. Assuming there aren't comments from others on this topic, the thing, Karthik, that we'll do this week is we'll be meeting on SMI conformance, talking about the current approach, and then reflecting on this possibility as well.
A
Okay, and then my hope would be that when we meet next week at this time, we would be able to review those notes, or rather, kind of review the reflections of the others that are not on this call, on how we might be able to move more briskly with LitmusChaos.
A
Yeah, this is good. Thanks for continuing on this, and yeah, a great demo.
A
There's a lot of promise here. It really is an area of need for us.
A
Back to the meeting minutes. Anyone else have any comments on what we were just talking about?
E
So, with that in mind, I think we have two approaches: one is kind and one is k3s. As I have written about and been learning, k3s is a cluster built for production use cases. It's a small and very lightweight cluster, so you can test things out, run it on edge devices, and other things as well.
E
So I think, if I understand the flow of that, there's the SMI conformance instance, or some kind of Meshery commands, that we're testing, and we need a cluster there to test those commands. As part of that, I think we have these two options: either go for k3s or for kind. To my knowledge, k3s is very lightweight compared to a kind cluster.
E
So k3s has an option for production use cases, and also k3s is from Rancher Labs, so they have a production setup in which you have persistent volume claims, so you can run stateful applications on it. I think most people are going for k3s because of its lightweight production use cases. So I think that matters when we integrate the continuous integration testing.
E
So we have to evaluate these two clusters side by side, either go with kind or with k3s. I'm not quite sure; I'm drawing out some diagrams, but there's some hindrance there for me: how do we test that? How do we test Meshery in particular? What are the things that we're going to test?
E
I am really stuck right now. I have drawn the development workflow, but moving to the continuous integration workflow, I'm just missing what kinds of things we are testing, and once this testing is done, we have to go back to the other production concerns and code for that. So those are my two recommendations; I think we should check out k3s as well.
F
Yeah, I think k3s is also pretty popular, like Simon is saying, and there's a lot of production usage of k3s from what we've seen, which is one of the reasons we sort of went with kind. I think maybe it's a good exercise to see how k3s goes as well; that will be new learning for us too. kind, I think, is being used a lot in CI environments.
E
Adding to Karthik's point, I think one use case that excites me is that when we go for the kind cluster, people have to know something about how Kubernetes operations work. If we go with k3s, as many people are saying, they don't need to wonder how Kubernetes is working; k3s is built on top of Kubernetes and does things on your behalf, so even if you have very little knowledge of Kubernetes...
E
...you can go with that. With a kind cluster, when I've worked with it in the past, I think you have to know some particular things about Kubernetes. So I think it matters whether the person testing our Meshery knows those things or not.
A
That gave me a moment to catch up on all the notes that we had from Karthik and Udit's presentation. That's good. The next topic that we had up was release versioning. It's a topic that we've had since we initiated this working group, and it's sort of the item where we were saying...
A
Our highest priority within that is getting a hold of control over Meshery's releases, Meshery Server releases, and to do that, we were saying, well, Meshery Server needs to first be self-aware. It needs to be cognizant of its own version. And so, with this last week, I think, we produced an updated workflow to include a properties.yaml in the file system of the Meshery container image, and now I think we're ready for these next two steps.
A
Yep. Well, a couple of reasons, but maybe the initial reason for the REST API to expose Meshery's properties is so that a client like mesheryctl, when you run `mesheryctl version`, could return on the command line the version of mesheryctl as a binary and then also the version of Meshery Server, so basically return the versions of the client and the server. And the way that mesheryctl interfaces with Meshery Server is through REST. And so, I mean, that's...
A
Nice, okay. Natish, since you're on the call as well and have, I think, you know, entirely wrapped your arms around mesheryctl, I'd like your thoughts on this issue.
L
So, what is the question? Sorry, I didn't completely follow. This command exists right now, and does it give you the client version, or does it give you the server version? It doesn't give you the client version, does it?
A
My question is almost rhetorical, in that, basically, we have a user experience today which says: here's your client and server version.
L
I feel like a top-level version command is more intuitive, just based on other CLIs, where everyone either passes the `--version` flag or just a `version` command. But we could still have a `system version` that replicates the same behavior; there's no reason why we couldn't do that.
A
Commands, okay, yep. That makes intuitive sense to me: like, hey, most of the time you use a tool, that's what you do. As we think about it, just reflecting on it for a moment, all of the other actions that you invoke against Meshery Server, they'll be there under the `system` command.
L
Right
right,
yeah,
I
I
was
thinking
about
the
same
under
the
mesh
command
yeah.
I
guess
I
guess
that's
that's
that's
where
we
can.
We
can
have
the
version
under
each
of
these
parent
commands
and
we
have
the
regular
dash
dash
version
that
just
tells
you
from
a
messy
ctl
perspective.
What
the
client
and
server
version
are
exactly
like
cube
ctl
for
that
matter.
A
Nice, okay, yep, very good, makes sense. And it's probably one of the very few global commands that actually kind of constrains itself. I don't know, another good thought here is: assuming that we already had `mesheryctl mesh version`, would it in the future be desired behavior that this would spit out...
L
Yeah, I think that's a legit thought too. Rather than having to figure out what the subcommand versions are, you just have one top level that gives you all the versions. I think that's honestly a better experience than going into subcommands, unless we were doing something really different in those version commands.
A
Okay, next I'll try to type this up to be a little more understandable.
L
That
isn't
yeah
we
could,
we
could
break
it
down,
though,
like
we
could
have
a
instead
of
a
global
flag.
You
have
a
root
level
flag
and
then
you
have
sub
flags
within
sorry
flags
within
every
sub
command.
That
only
tells
you
about
the
version
for
that
that
command
like
if
you
say
system,
it
only
shows
you
the
client
and
server.
If
you
say
mesh
dash
dash
version,
it
shows
you
everything
about
the
mesh,
but
if
you
do
it
from
the
the
root
level,
it
shows
you
all.
The
information.
A
It
sounds
like
one
of
those
things
we
should
put
into
the
cli
doc
yeah
yeah.
Anyone
else
have
an
opinion
in.
A
More intelligently written, but yeah, that sounds okay. That was the last item of the day. Anyone have anything else?
A
No? Okay, good, all right. Happy Monday, everyone! We'll speak a couple more times this week, I'm sure, Karthik, at the community calls minimally. If not before, I guess we'll speak at the same time next week.