From YouTube: Meshery Development Meeting (Nov 24th, 2021)
Description
Meshery Development Meeting - November 24th, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A: So welcome, everyone, to the Meshery development meeting. Today is the 24th of November, and we can kickstart this meeting with an update from Sayantan on the refactoring Meshery UI initiative. So, Sayantan, the floor is yours.
B: Hello, am I audible? Yeah? Yep. So I would like to share my screen, so, yeah. So basically, we are doing a new structure of the Meshery UI, which is in the Meshery UI restructuring branch.
B: And this is the common tracker, actually, which we are following to track the progress: the things that have already been completed and the things which are in progress.
B: So basically, I think Aditya has already solved two of them, and for the rest I have created some PRs. Right now I would like to show you the work which I have done. So basically, the port has changed from 3000 to 4001, and I just wanted to match this UI, the restructured one, to the original Meshery UI. So right now I think this is looking quite similar to the one at 9081, and I think there were some responsiveness issues also.
B
I
tried
to
fix
some
of
them
so
basically,
right
now,
this
adapters
are
somewhat
responsive
and
also
you
can
see
this.
This
is
going
on
top
of
this,
which
was
not
there
previously,
but
right
now
I
made
this
thing
responsive,
but
the
problem
occurs
when
this
collapses
a
little
bit
now.
Yes,
this
position.
This
is
the
mobile
view.
Actually,
in
this
view,
this
dashboard
should
collapse
on
its
own,
but
it's
not.
I
will
look
into
this
so
yeah
I
made
it's
somewhat
responsive
that
then
it
was
previously
there
and.
B: Yeah, that's basically the update from my side, what I have done.
A: Nice. I have a comment: I don't think we have to make it that responsive, like we were saying about the mobile view, because I'm sure that only desktop users will be using this.
B: No, actually, the Meshery UI at 9081 is that responsive. So basically, if you see this one, this one is more responsive. See? Okay, okay, yeah. So basically I think we should also make this one responsive; that's why I'm working on that.
A: Other comments? Anything to say, Anthony? And any other updates in this initiative?
B: So I think Meghana was working on some of them, like the code editor and big number. Is he present in the call?
B: And I have also added one more component, the progress one, which needs to be written. If anyone would like to take it, they can go ahead with this.
C: Yeah, you know, one of the maintainers of Meshery UI, Nitish Karthik, is out, or he sends his regrets for today's meeting. He'll be back on Friday. But this, what Sayantan is going over, what Aditya has been working on, and Meghana and some others, it's significant.
C
It's
it's
it's
more
or
less
an
overhaul
of
the
ui
forthcoming
in
the
next
measuring
release.
So
there'll
be
a
lot
of
discussion
about
the
the
ui,
but
maybe
I
shouldn't
say
the
discussion
as
much
as
a
lot,
a
fair
bit
of
refactoring
to
account
for
support
for
react.
C: We'll also end up in a place where the components themselves are much more reusable, much better written. A factor of that is also the concept that Meshery is a platform that is extensible, extendable; Meshery supports plug-ins and, as such, part of that refactoring will need to account for extensibility. So there's a lot in v0.6.
C: The next release, with respect to the UI: actually, we'll probably talk about it later in the call. There'll be whole new dashboards, whole new metrics. Not just new dashboards and new metrics, but new custom dashboards and custom metrics: the ability for people to define what metrics they want to see in their panels.
C: I think it was the menu, and that's part of one of the plugins: when someone defines a plug-in, they should also be able to control whether or not they want the menu collapsed.
A: Thank you, Sayantan. So next up, Ashish has been working on some workflows for CI tests in the adapters. So, Ashish.
D: Okay, assuming you can see my screen. So I created this PR in the Istio adapter a few days ago to add CI tests to the adapter, so that every time a pull request is made on the adapter, we can basically do end-to-end testing of it: can we deploy patterns, and can the adapter then go ahead and actually provision the service meshes and all.
D: There was a little bit of refactoring on my side, so I just wanted to go over a little bit of that, to clarify what I'm aiming for right now. So this is my test repository; I've created this to, you know, demo and stuff. So what I'm proposing is that we can have a general file, kept somewhere, which does all the heavy lifting. This would be like the function declaration, and then there's this small little file.
D
We
can
have
this
small
little
workflow
in
each
of
the
adapters.
That
will
actually
call
this
call
this
another
workflow
github
has
the
feature
to
reference
of
the
workflows,
so
we
can
call
this
another
workflow
and
we
can
pass
parameters
like
these
are
the
names
of
the
parts
that
we
expect.
These
are
the
and
we
expect
these
there's
a
one-to-one
relationship
between
expected,
pods
and
expected
namespaces.
D
So
we
expect
this
part
in
this
name
space,
and
then
we
can
pass
the
url
of
the
deployment
url
of
the
service
name
of
the
adapter.
So
it's
like
a
functional
invocation
of
this
entire
workflow
and
yeah.
That
would
run
our
job
and
if,
if
these
particular
pods
that
we
expected,
like
I
I
expected
here-
is
steve
egress
and
steven
grace
and
his
theory
in
this
your
system.
So
if
that
is
satisfied
the
workflow
pass
and
if
not
the
workflow
failed,
one
more
thing
would
be
added
here.
D
Is
that
before
running
the
actual
test
job
I
can,
I
will
run
another
job
that
to
update
the
pattern
file,
the
that
will
be
used
to
basically
to
reflect
latest
versions
in
that
so
that's
kind
of
yeah.
So
that's
the
that's
the
thing
so,
instead
of
creating
an
entire
action,
just
create
a
general
workflow
and
put
it
somewhere
and
then
you
have
keep
referencing
it
in
all.
The
adapters
that
we
create
and
in
future,
is
not
just
about
installing
installation
of
a
service
mesh
that
we
can
text
that
we
can
test.
D
We
can
have
our
have
our
patent
file
do
anything
and
then
we
can
just
keep
expecting
pods
here
and
in
their
in
their
suitable
name
spaces.
So
there
would
be
almost
little
to
no
code
change
if
we
want
to
extend
our
the
functionality
with
which
our
tests
are
run.
So
anyone
has
any
comments
to
it.
D
From
a
different
okay,
so
currently
I'm
referencing
it
from
the
my
current
repository,
but
in
the
docs
it
doesn't
specify
that
I
don't
think
it
specifies
that
it
needs
to
be
in
the
same
repository.
I
think
it
can
be
different,
yeah
yeah
it
actually
can
be
in
a
different
repository.
The
only
thing
is
that
the
repository
shouldn't
be
private.
D: Yeah, so to go over it quickly: it sets up minikube, checks out the adapter code, builds the image locally, and then sets up the Meshery configuration. It starts the adapter, starts the Meshery server, starts the minikube tunnel so that we can reach the Meshery server, then checks the logs and allocates the external IP.
D
Basically,
what
happens
is
that
in
the
config.aml
of
in
in
the
configuration
of
messaging,
we
need
to
specify
the
correct
type
address
that
we
get,
that
we
got
from
the
out
that
we
got
after
we
did
mini
cube
tunnel,
so
we
reset
the
message
address
and
then
finally,
we
do
the
pattern
apply,
which
is
currently
hard
coded
okay.
So
this
we
deploy
underscore
steel.yaml
in
each
of
the
adapter.
We
would
have
deploy
under
underscore
traffic.tml,
but
those
won't
actually
be
static.
D
We
will,
we
would
run
a
job
to
actually
update
these
yamls
to
reflect
our
latest
versions
and
all,
and
then
we
sleep
for
some
time
and
then
we
check
the
logs
adapter
logs
measuring
logs
to
see,
if
I
mean
it
would
be
helpful
in
debugging
and
then
we
ex
basically
do
our
final
thing.
We
get
the
pod
names
and
we
loop
through
all
of
the
ports
to
check
if
they
are
actually
running
in
their
ask
name,
spaces
or
not.
F: Oh, I see. So this is general, to test adapters, right? Really.
F: About the code: could you go to where the code is? Isn't it compiled somewhere?
D: Yeah, yeah. Here, after I check out the code: in each of the adapters we have the Dockerfile in the root, so we just build the Docker image.
F: Oh, I see, cool. Is that "checkout code"? So would it check out the code of the current...?
D: Yeah, it checks out the code of, basically, the latest commit that I'm referencing, like this particular commit that the PR was made with. That commit would actually be checked out and built.
F
So
then,
once
this
is
invoked
from
some
other
workflow,
the
adapter
would
be
already.
What
how
can
you
say
it
installed
or
added
to
the
to
the
measuring
server.
F
Yeah
my
question
is:
let's
say
this
is
part
of
a
some
other
workflow,
where
there
will
be
an
end-to-end
test,
so
the
other
workflow
would
depend
on
like
the
ui
and
the
server
being
there
and
then
running.
This
would
add
a
whatever
adapter
we're
using
this
for
to
our
to
our
mastery
server.
D
This
particular
workflow
is
actually
very
very
specific
to
this
current
use
case,
so
the
only
place
we
would
use
it
would
be
to
the
only
job
we
would
use
it.
To
would
be,
for
testing
would
be
for
testing
the
functionality
of
our
adapters
and
inside
of
the
inside
of
the
inside.
D
In
the
context
of
this
particular
workflow,
messy
server
would
be
running
and
the
particular
messy
adapter
would
be
running
so
if
you're
running
in
in
basically,
if
you're
running
for
istio,
the
only
the
only
adapter
that
would
be
running
would
be
stored
after
and
I
mean
if
you're
the
same
goes
for
other
adapters
as
well,
so
the
only
things
that
would
be
running
would
be
messy
server
and
then
basically
adapt
adapter
and
the
adapter
of
that
particular.
D
F
D
F: I'm just thinking of some testing scenarios where we want to test functionality with multiple adapters. But I'm not that familiar, so that's why I'm asking the bigger audience.
D
So
so
yeah
this
particular
workflow
has
been
made
to
the
one
thing
that
has
been
kept
in
mind
is
that
this
would
be
used
only,
and
only
for
one
particular
adapter,
and
you
can
pass
the
adapter
name
here,
like
I
have
done,
is
to.
D
Yeah
one
at
a
time
so
because
we
are
using,
we
would
be
customizing
this.
These
arguments
in
different,
different,
basically
different
adapters.
For
example,
in
traffic
measure
fee
we
would
be
expecting
different
parts
and
in
different
name
spaces
with
different
deployment.
Url
different
services,
the
adapter
name,
would
be
different
and
this
would
be
not.
D
This
particular
workflow
would
not
be
used
for
any
other
thing
other
than
to
test
the
functionality
of
a
given
of
a
given
adapter
and
to
be
more
specific
to
test,
I
mean
it
uses
my
shop's
v2.
It
doesn't
use
my
shop's
v1,
so
it
does
pattern
apply,
so
it
uses
patterns
so
the
the
functionality
that
would
be
tested
and
it
can
be
extended
in
future
that
anything
that
has
been
given
in
the
pattern.
So,
for
example,
in
this
theo
pattern
we
can,
along
with
that
deploy
issue
service
machine.
D
We
can
add
add-ons
as
well.
So
we
can
add
the
pod
names
of
those
given
add-ons
here
in
specific
name
spaces.
So
the
only
use
case
that
this
would
cater
would
be
to
test
whether
an
adapter
is
able
to
deploy
and
provision
a
service,
mesh
and
add-ons,
and
anything
that
has
been
given
to
a
particular
patent
file
to
that
adapter
and
no
and
this
workflow
would
not
be
used
for
any
other
use
case.
F: Yep, looks good. Looks like it'll be fit for the main use case that is desired right now. Cool, that's all, thanks!
D: And I didn't make it to cater to Meshery itself, because we also need the workflows for Meshery as well. But this specific workflow deploys service meshes, and the way it works is very biased towards that: it deploys service meshes, it uses pattern deploy, and I don't think we would...
D
Maybe
we
would
test
that
in
messy
server,
maybe
not
maybe
we
would,
but
we
would
be
testing
a
lot
more
things
than
just
deploying
service
meshes.
So
that
way
this
particular
workflow
has
been
made
just
for
the
adapters
and
for
all
the
adapters
and
for
in
and
in
each
adapter.
We
would
need
only
this
much
of
stuff.
This
much
of
code.
C
Ashish,
for
my
part,
I
I
have
to
admit
to
not
having
digested
all
of
what
you
had
said
earlier
in
part
the
some
of
mario's
line
of
questions.
C
Something
is
it
reminds
me
of
some
of
the
things
I
would
have
asked
earlier
about
the
reuse
of
these
workflows
so
setting
this
one
aside,
which
is
custom
and
specific
to
only
one
repo,
because
we're
building
a
component
that
repo
and
it's
intended
to
use
the
particular
artifact-
that's
built.
I
guess
this
isn't
that
one
this
this
is
custom
to
the
istio
adapter.
This
actually
isn't.
C
This
is
custom
to
the
deployment
of
istio,
which
it
saddens
me
to
see
that
we're
using
manifest
here,
because
we're
trying
to
get
away
from
manifest
we've
been
trying
to
get
over
to
helm.
This
is
bothersome.
I
think
they'll.
D: As I said in the earlier meeting as well, I do not know whether with mesh deploy we can pass the URL of... basically, I don't think you were in the meeting. Is there a way, using mesheryctl, that I can deploy an adapter with a custom image? If there is, then yes: I can build the image, push it to the registry, and then use it from there.
A: I'm not sure that would... yeah, there is no feature like that.
C
Right
so
the
environment
in
which
you're
deploying
the
custom
built
adapter
or
the
adapter
that's
built
at
at
time.
At
the
run
time
of
this
workflow
that's
being
deployed
inside
of
kubernetes,
not
that
there
is
not
a
docker
environment
in
which
we're
testing
this
right,
yeah,
okay,.
C
Yeah,
as
you
go
through,
this
you'll
find
potential
use
cases
for
enhancements
to
mastery
ctl
or
to
the
customization
of
the
helm,
charts
both
mastery
ctl
for
like
mesh
dip
like
from
mystery,
ctl
mesh
mastery
ctl.
C
You
might
find
it
some
of
the
other
ones
as
well
yeah,
it's
just
it's.
C
Not
good
like
it
would
almost
be
just
as
good
to
not
use
those
yamas
at
all,
because
we're
trying
to
get
you
to
move
away
from
them
and
just
use
a
docker
command
to
deploy
or
a
cube
ctl
command
to
deploy
it
instead,
without
necessarily
referencing
it
yeah
it's
it's
kind
of
a
it's
a
tough
situation
to
be
in
because,
if
you
use
you
could
use
cube
ctl
with
no
yaml,
which
is
not
really
any
better
than
doing
what
you're
doing,
and
these
have
at
least
been
used
for
a
while.
D
So
one
thing
we
can
do
is
maybe
have
some
kind
of
invert,
because
the
based
installation
that
we
currently
have
it
has
a
parameter
for
override
values.
So
we
can
override
the
image
name
there,
but
we
need
to
flow
that
image
name
from
from
our
I
mean
up
to
there,
so
we
can
have
some
kind
of
environmental
variable
while
running.
I
don't
know,
maybe
messy
ctl
that
that
actually
oh,
helps
us
override
some
stuff,
because
other
than
doing
that,
I
don't
see
any
way
to
externally
pass
some
custom
image.
Name
towards
that
thing,
and.
C: Ideally, well, it could be either. Probably, in the end, if mesheryctl were robust enough to allow specification... right now it allows specification of any tag that you want for an image name, but you can't change the image name itself, at least for Kubernetes deployments. For what-do-you-call-it, Docker-based deployments, you can change the image name, because, I don't know that we're rewriting the entire compose file each time anyway.
C
It's
not
a
primary
focus
for
us,
because
it's
just
kind
of
off
to
the
side,
it's
nice
to
have
it's.
It
facilitates
this
type
of
testing.
It
potentially
facilitates
someone
running
their
own
custom,
built
adapter.
I
don't
know
why
they
necessarily
would
it
it.
At
some
point
we
will
be
deploying
any
number
of
measuring
perf
containers
that
doesn't
mean
that,
with
that
a
custom,
image,
name
or
custom
docker
hub
repo
name-
is
our
first
priority.
C: What do you call it, a registry, a Docker registry: then they could use the same names but have totally different images in there. There are ways for people to circumvent it, even though the project doesn't directly support it today. Yeah, probably it's easiest, and first, to start with modifying Helm values, and then from there, you know, providing the ability to support custom image names in meshconfig, which, from meshconfig, would drive Helm values or drive a Docker compose.
C
Having
said
all
of
that,
I
actually
don't
think
it's
worth
it.
I
think
we
should.
You
should
stay
with
the
manifesto
there.
It's
just
my
it's
like
our
duty
to
point
out
that,
like
oh,
that's,
not
what
we
want
like.
We
should
make
a
note
of
that.
It's
probably
not
something
we
go
actively
work
on,
but
we
should
now
be
aware
that,
like
manifests
were
just
deprecated
and
we
weren't
used
and
they're
not
used
anywhere
wait
now
we're
using
them
somewhere.
We
should
like
it
should
be
a
note.
C
That
kind
of
thing
so
you're
supposed
to
so
in
this
this
particular
to
digest
this
a
little
bit
further
you're
specifying
test
assertions
in
here
about
expected,
pods,
great
expected
name
spaces
and
what's
the
difference
between
those
three
expected
namespaces,
is
that
just
an
example
yeah
yeah
just
an
example:
yeah
cool
the
adapter
name
is
istio,
so
you're
like
also
hard
coding
and
appending
meschery.
D
I'm
using
adapter
name
to
you
know
to
basically
grab
that
particular
name
to
check
out
the
logs
and
all
those
things
so
so
that
I
don't
have
to
keep
referencing
that
particular
adapter
name
inside
of
the
workflow,
so
I'll
I'll
be
using
the
the,
for
example,
to
know
whether
these
two
parts
are
stored.
After
I
started.
C: And, by the way, the work you've done, this is great, and you did it quickly. Basically, we should put this into action: if these tests are passing, hey, we need this stuff pretty quickly, and we should propagate it. It's nice that GitHub allows you to reference other workflows.
C
So
if
you,
if
it
was
propagated
across
10
repos
and
then
later
you,
you
swapped
out
a
particular
bash,
you
know
ctl
command
with
a
mastery
ctl
command
like
it's,
probably
not
too
much
work
to
have
that
propagated
and
things.
So,
for
my
part,
it
sounds
like
you're
getting
a
lot
of
negative
feedback
or
like,
but
that's
not
the
case
like
rather
rather,
this
is
great.
We
work
iteratively
all
the
time.
C
Some
of
these
other
points
that
are
being
made
are
like,
oh
yeah,
okay,
there's
a
potential
enhancement
there,
a
potential
enhancement
there.
Some
of
these
aren't
going
to
be
worth
necessarily
doing,
and
some
of
them
probably
are
over
time
because
it'll
just
we'll
have
not
recursive
but
we'll
have
sort
of
while
testing
an
adapter.
We
might
have
regression
tests
of
other
measuring
components
as
well,
which
is
which
is
good
the
pattern
so
yeah.
C
It's
also,
I
mean,
as
you
think,
on
this
and
you've
done
a
good
job
of
like
splitting
out
where
this
is
an
adapter,
specific
considerations.
These
and
then
you
know
we
can
centralize
some
of
these
more
common
functions
about
like
deploying
a
kubernetes
deploying
doing
a
mastery
server
deploy,
and
these
things
that
that's
great,
because
this
form
is
a
foundation
then
for
a
bunch
of
reuse
of
the
of
these,
and
these
may
eventually
use
some
of
our
measuring
github
actions
as
well.
C
In
the
components
where
we're
not
trying
to
test
and
active
like
in
the
builds
where
we're
not
trying
to
test
an
active
artifact
because
yeah
like
the
thing
is,
is
like
in
some
of
those
github
actions
like
yeah
one
for
service
mesh
performance,
it
needs
a
kubernetes
cluster
and
needs
to
deploy
measuring.
Well,
that's
the
same
thing
that
needs
to
be
done
here,
except
you
just
have
the
build
artifact.
C
That's
transient
is
just
this
single
adapter.
All
of
the
rest
should
be
the
same,
and
so
that's
in
part
why
I
impress
upon
all
of
us
like
hey
if,
if
this
script
is
very
long,
it's
like.
Why
is
it?
Why
can't?
Why
aren't
we
we
have
prior
art
here?
How
do
we
reuse
that
some
of
it
is
like?
C
Well,
it
was
written
under
a
different
pretense
under
a
different
use
case,
so
it
doesn't
exactly
you
know
it
doesn't
exactly
line
up
and,
and
so
it's
fat,
you
know
easier
and
quicker
to
write
out
what
we
have
now
long
term.
It's
it's!
The
sustaining
cost
is
much
higher
so
anyway,
yeah
that's
my
feedback.
C
Of
sustaining
two
independent
things,
so,
if
you're
deploying
kubernetes
one
way
here,
deploying
it
one
way
here,
you're
testing,
an
adapter
and
testing
assertions
of
whether
or
not
a
mesh
is
up
over
there
doing
it.
In
that
way.
This
way,
that's
actually,
why
we're
trying
to
get
rid
of
the
manifest,
because
the
sustaining
cost
of
both
manifest
and
home
charts
was
was
both
the
sustaining
costs,
but
also
just
the
number
of
bugs
that
people
were
seeing
so
like
part
of
the
benefit
of
reuse
is
in
concept,
speed
time
to
market
like
speed
of
delivery.
C
Also,
then,
quality
through
reuse,
because
you
hopefully
have
fewer
bugs
and
also
hopefully
lower
sustaining
costs,
because
you're
sustaining
fewer
lines
of
code
with
higher
quality
and
and
so
ultimately
happier
users.
That's
the
concept.
It
doesn't
always
work
out
beautifully
as
you're,
demonstrating
that,
like
some
of
the
messenger
ctl
commands
weren't
written
in
that
context,
they
don't
quite
support
that
thing,
and
so.
D
Yeah,
try
to
replace
these
cube
pedal
commands
with
as
many
mesh
regular
commands
as
I
can,
and
can
you
can
you
give
me
an
idea
of
where
you
really
think
this
central
this?
This
thing
should
be
this
yaml,
but.
C
Yeah
it
might,
it
might
be,
measuring
measuring
it
might
be
the
mystery
server
repo
in
part
because,
like
we
would
really
hope
for
it
to
be
true
that
we
figure
out,
we
don't
like
to
run
mini
cube
or
just
as
just
a
random
example
like
we
get
opinionated
about
some
of
the
common
procedures
that
we
use
to
set
up
these
continuous
integration
environments
and
that
same
process
gets
used
in
the
service
mesh
performance
project
in
the
surface
mesh
patterns
project
in
in
measuring
proper
in
the
adapters.
C
Yes,
but
also,
then,
in
the
other,
externalized
components
like
mesh
reperf
will
be
another
component
and
mesh
free
operator
as
another
component,
and
so
what
I
was
trying
to
say
is
like,
if
you
put
it
in
mesh
kit,
that's
kind
of
a
central,
decent
location,
because
most
components
will
leverage
that.
C
This
is
great
as
you
go
to,
if
you
would
please
add,
an
update
to
the
build
and
release
documentation
that
itemizes
what
you
just
said
today
like
here
here:
here's
what
these
files
are:
here's
their
hierarchy,
here's
how
this
one
is
supposed
to
be
reused
and
then
that
way,
it'll
help
some
of
the
rest
of
us
digest
as
well.
C
The
build
and
release
I'll
put
a
link
in
there
in
mystery.
Docs
part
of
that,
then
is
all
of
us
should
reflect
on
opportunities
for
our
actions
to
be
used
in
some
of
those
workflows.
C
Some
of
them
aren't
appropriate.
Some
of
them
are
highly
appropriate
for
what
what's
trying
to
be
accomplished.
Some
of
them
might
need
to
be
augmented
to
say
when
the
action
fires
like
just
like
when
the
open
service,
the
microsoft
engineers
from
open
service
mesh,
were
saying.
Oh
great,
we'd
love
to
use
this
github
action
for
smi
to
test
the
conformance,
except
this
github
action
is
assuming
that
I
want
to
pull
a
container,
that's
already
out
that
I've
already
built
and
pushed,
and
they
don't,
they
were
saying.
No,
we
don't.
C: Sheesh, I don't know, I haven't had a lot of coffee, so I might not be sounding chipper. From my part, and Mario is giving feedback and stuff, this is fantastic. We really, really, really need this stuff. It's nerve-wracking to make a release of something where you're just kind of keeping your fingers crossed that there isn't something broken inside it.
E: I think we are in a good position now; we have caught all the places we were actually using manifests and replaced them. And the other thing is that there was just this issue that I noticed a little late, but better late than never, and it has been referenced in the past: it's about a malformed version, "latest".
E
It
happens
because
the
library
that
we
use
for
checking
releases
I
mean
latest
release
for
github
repositories.
It
doesn't
treat
its
latest
as
a
new
one.
I
mean
a
semantic
version
and
that's
why
it
gives
an
error,
so
the
upstream
code
for
deploying
operator
might
be
failing
right
now,
and
this
vr
that
I
linked
in
the
chat
would
take
care
of
this
other
than
this.
There
is
a
short
note,
because
how
helm
is
design
helm
doesn't
like
when
your
cluster
already
has
some
objects
which
the
helm
chart
aims
to
improve.
E
I
mean
aims
to
install
so
if
you
have
pre-deployments
like
if
misery
operator
is
deployed
in
a
cluster
before
with
a
using
manifest,
then
the
when
you
upgrade
to
v06
you'll,
have
to
first
remove
the
operator
yourself
manually.
First,
that's
because
helen
won't
actually
recognize
that
installation
as
installed
by
itself.
A
Okay,
I
I
like
linkardi
was
also
facing
similar
issues
with
the
most
multi-cluster
part
of
it
and
what
they
were
implementing
like
what
they
implemented
is
they
started
deleting
resources
with
the
labels
matching
the
labels
in
the
entire
cluster.
A
So
my
suggestion
is
that
we,
after
installing
uninstalling
machine,
we
kind
of
search
the
cluster
with
the
labels
we
have
like.
If
we
are
attaching
label
to
every
hem
chart,
we
like
scrap
them
and
delete
them
manually
manually,
not
manually.
I
mean
progress
programmatically
after
uninstalling
machinery,
so
that
might
clean
out
the
whole
cluster
with
message.
Resources
and
no
user
should
face
this
error
that
something
is
already
present.
E
So,
even
if
helm
discovers
that
object,
like
the
thing
that
you're
pointing
to
is
right.
Okay,
that
should
work,
but
even
if
that
resource
doesn't
have
that
annotation
that
it
is
managed
by
helm
helm
would
a
helm
would
refuse
to
install
it.
Even
then,
because
it's
it
would
say
that
mystery
operator
is
already
present
in
this
cluster.
It
is
not
managed
by
me,
so
I'm
not
gonna
touch
it.
So
a
complete
cleanup
is
the
thing
it's
the
best
thing
to
do
here
and
in
my
opinion
we
should
do
that
right
now.
C: Just a quick recap from the... okay, yeah, yeah, no, good, yeah, good, good. Sorry, bad question.
D: Oh, okay. So the demo is about being able to dry-run, or verify, the designs that we create in MeshMap.
D
What
exactly
that
means
is
that,
let's
say
I'm
not
running
all
of
the
components
in
all
of
the
adapters
and
that's
why
I'm
only
getting
this
one
drop
down
if
someone
is
thinking
why
I'm
having
only
a
single
program,
so
the
demo
is
actually
about
the
ability
to
verify
if
your
deployment
is
if
your
provisioning
process
is
actually
going
to
go
through
without
actually
touching
the
infrastructure.
So
let's
say
I
am
designing
my
infrastructure
here
in
mishma.
D: This field... but definitely a container, or multiple containers, because obviously, most probably, if you are deploying your application, you do want to run a process in there. So let me try to just hit this particular verify button. If someone remembers, and I don't expect everyone to remember, this icon used to be disabled, because this was something in progress.
D
Trident
was
not
something
that
was
available
in
mystery,
but
recently
that
particular
capability
was
added
to
mystery
server
as
an
endpoint.
So
what
this
does
this
button
is
that
it
will,
when
I
would
hit
it,
it
will
actually
reach
out
to
the
mystery
server
and
will
it
will
drive
in
this
entire
thing,
the
entire,
basically,
the
entire
design
that
you
will
create
here.
It
could
be
one
node,
two
node
three
nodes
or
hundreds
of
node,
and
it
will
try
it
on
it.
It
will
it
won't.
D
It
ensures
that
it
doesn't
touches
your
infrastructure
so
that
your
infrastructure
is
safe.
So,
let's
say:
even
if
your
10
nodes
were
fine,
while
11th
one
had
a
problem,
it's
not
going
to
it's
not
going
to
deploy
those
10
things.
After
that,
it's
not
going
to
tell
you
that.
Okay,
I
actually
tested
10
but
11th
one
heart
problem,
so
now
you're
investigators
in
a
stage
where
what
you
wanted
is
not
exactly
the
final
stage.
So,
basically
that's
what
the
purpose
of
this
verification
is.
D
That
is,
that
is
if
this
particular
thing
says
that
this
thing
is
going
to
go
through.
The
chances
are
that,
yes,
it
will
actually
go
through.
The
reason
is
that
it
does
multiple
kind
of
tests,
like
the
query
plan
that
it
has
prepared.
Is
that
going?
Does
it
have
cycle
or
something
like
that?
D
And
it
will
also
check
that
the
fields
that
you
have
filled
in
here
they
are
they
proper
or
not,
and
not
only
if
not
only
if
you
have
filled
the
fields
or
not
also
like
are
they
valid
or
not,
so
you
cannot
actually
type
in
in
replicas
field.
You
cannot
actually
type
in
alphabet
because
that's
not
valid,
so
if
it
will
actually
hand
it
over
to
kubernetes
and
the
deployment
will
actually
fail,
because
definitely
you
cannot
have
yeah.
I
hope
I'm
making
sense.
D
So
let's
try
something
something
fill
in
him
so
because
this
is
a
static
analysis,
so
someone
can
say:
okay,
I'm
filling
in
test
in
the
image
test.
That's
probably
not
an
image,
that's
available.
So
what
is
going
to
happen?
Is
that
right
and
going
to
fail?
Or
is
it
going
to
succeed?
It
will
actually
succeed
because
static
analysis.
It
cannot
actually
verify
if
this
image
already
exists
or
not,
or
it
cannot
verify
that
if
the
image
exists
after
creating
the
container,
is
it
going
to
run
on?
D
That's
not
at
least
what
mystery
supports
right
now,
because,
as
I
mentioned,
it's
a
static
analysis,
so
let's
try
to
hit
verify
again.
It
says
verified
successfully,
so
it's
almost
guaranteed
that
when
I
would
hit
deploy
or
it
will
actually
provision
this
particular
application
for
me-
definitely
like
I'm
not
sure
if
there
is
actually
a
image
called
test.
Most
probably
there
is
not
so
in
the
cluster.
This
deployment
will
probably
fail
but
other
than
that
yeah,
so
the
starting
analysis
portion
is
what
this
particular
try
running
is
sort
of
offering.
D
So
any
questions,
any
suggestions,
one
of
the
things
that
someone
may
have
noticed
that
when
I
hit
verify
okay,
it's
saying
verified
successfully,
but
it
was
seeing
that
the
verification
failed.
It
was
just
saying
verification
paid.
It
was
not
exactly
telling
why
it
failed.
So
that's
something
that's
a
ux
that
needs
to
be
added
and
I
was
not
quite
sure
where
to
exactly
put
it
other
than
so
right.
Now,
there's
a
snack
bar!
D: Yeah, definitely. Actually, this was already verified, so I just didn't change it.
C: But yeah, actually, that's kind of on you. Not to say that I was the one that called it verified.
C
Good
awesome,
I
admit
I
I
apologize,
but
when
you've
done
the
demo
and
you
had
an
invalid
design,
there
was
a
snack
bar
that
came
out
and
told
the.
D
User,
it
just
it
just
yeah.
It
just
said
that
it's
a
verification
field
it
right
now
it
doesn't
tell
why
it
fails
not
because
that's
functionality
is
not
available
in
the
api.
It
actually
does
tell
you
what
exactly
is
failing,
but
because
I
was
not
quite
sure
where
to
show
that
information,
because,
like
sniper,
would
actually
fill
up
completely.
D
C: So for Barak and Aditya, and some other folks that are close to this, and then just for the rest of you who are on the phone: as you think about this from a user's perspective, you know, when you're online, you're in a web page, you're filling in a form, which Raphael just did, which is great, Raphael.
C
You're filling in the text fields and, you know, more often than not, there's immediate feedback from the form validation; a lot of times you're filling in a form and it validates right there. Okay, Karsh, if you don't mind, would you open back up that tooltip? I think I'm walking through this as much for my own understanding as everyone else is.
C
Maybe the point is that there's a distinction between the server-side functionality you're referring to with the validate capability, and kind of form-based validation: the fact that you typed a string into the namespace field that Kubernetes could actually interpret, and then would actually apply, and be successful in applying. There's a difference between that and the validate button, which could do things like...
C
Well, I wanted to give an example using the same namespace example, and it kind of leads me to question how capable this validate function is. It's nuanced, but we're kind of walking through it now. There's client-side form field validation, of, like, you know, did you type in a digit when it was expecting a string?
C
There's the server-side validate that you just demonstrated, whose function is really a static analysis that takes the client-side stuff a little bit further, to say: well, Meshery Server has a registry of these various operations, basically a catalog of capabilities that all the mesh adapters and Meshery Server itself have signed into that registry with, saying: here are the things that I'm capable of. And so that static analysis just says:
C
Well, you know, even if you have this pattern that's asking for all these things: could I even possibly execute that? And I think that's the focus of what you just demonstrated. There's another step and kind of component to this, which is probably this other word, dry run, which is: well, yeah, hey, I've got all those, you know...
C
Mastery
might
have
all
those
capabilities
listed
in
the
registry,
but
it
turns
out
your
particular
kubernetes
environment
may,
just
as
an
example
like
it
can't
accept
any
more
name:
spaces
like
it's
plumbed,
full
and,
and
so
before
you
try
to
do
an
actual
deployment.
You
might
want
to
do
a
dry
run
where
you're
you
know
like
if
this
concept
is
familiar
to
most
of
you,
but
I'll
give
one
other
analogy,
and
it's
like
if
you've
ever
done
a
sequel
plan.
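For the Kubernetes side of this, the API server exposes the same idea natively: mutating requests accept a `dryRun=All` query parameter (the mechanism behind kubectl's server-side dry run), which runs validation and admission without persisting anything. A minimal sketch of building such a request URL with Go's standard library; the cluster address and resource path here are placeholders, and a real client would also attach credentials and a manifest body:

```go
package main

import (
	"fmt"
	"net/url"
)

// BuildDryRunURL appends the dryRun=All query parameter to a
// Kubernetes API request URL, asking the API server to run its full
// validation and admission chain without persisting the object.
func BuildDryRunURL(apiServer, path string) (string, error) {
	u, err := url.Parse(apiServer)
	if err != nil {
		return "", err
	}
	u.Path = path
	q := u.Query()
	q.Set("dryRun", "All")
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	// Placeholder cluster address and path, for illustration only.
	u, _ := BuildDryRunURL("https://cluster.example:6443",
		"/api/v1/namespaces/default/pods")
	fmt.Println(u)
	// https://cluster.example:6443/api/v1/namespaces/default/pods?dryRun=All
}
```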
C
Sorry, yeah, good. I haven't done a whole bunch of them, but one of the things that impressed me, you know, back then, was that the static analysis sometimes included what you would anticipate in terms of performance of that particular query.
C
So
don't
we
do
a
lot
around
performance
here
and
like
like,
maybe
trying
to
optimize
with
that.
You
know
your
optimal
configuration
like
I
wonder.
If,
in
the
future,
we
wouldn't
have
an
optimize
button
or
if
part
of
like
the
dry
run
step
might
be,
they
might
have
phases
where
it's
like
verification
of
capacity
and
then
you
know
sort
of
verification
or
analysis
of
optimal
config,
optimal
performance.
C
So,
for
my
part,
I
I
think
some
really
cool
things
to
include
here.
One
of
the
things
that
I
begin
to
wonder-
and
this
is
what
I
wanted
all
of
you
to
think
about
for
a
moment-
is
when
you're
filling
in
the
form
here-
and
it
says:
oh,
you
can't
have
digit.
You
can't
have
symbols
in
the
namespace
field.
It's
like
okay,
that's
simple,
client-side,
validation,
great,
but
like
this
server-side
validation
that
karsh
was
just
demoing.
It's
like
it
seems
like
a
potential
user
preference
that
some
people
might
want.
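That client-side namespace check can be pinned down concretely: Kubernetes namespace names must be valid DNS-1123 labels, meaning at most 63 characters of lowercase alphanumerics and hyphens, starting and ending with an alphanumeric (so digits are in fact allowed; it's symbols and uppercase that fail). A sketch of that form-field validation:

```go
package main

import (
	"fmt"
	"regexp"
)

// dns1123Label matches valid Kubernetes namespace names: lowercase
// alphanumerics and '-', beginning and ending with an alphanumeric.
var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// ValidNamespace reports whether a form-field value would be
// accepted by Kubernetes as a namespace name.
func ValidNamespace(name string) bool {
	return len(name) <= 63 && dns1123Label.MatchString(name)
}

func main() {
	fmt.Println(ValidNamespace("my-app"))  // true
	fmt.Println(ValidNamespace("My_App!")) // false: uppercase and symbols
}
```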
C
They
might,
they
might
say,
like
look.
This
measuring
system
is
connected
to
my
actual
environment.
I'm
I've
got
this
much
capacity.
I'm
only
running
these
two
adapters,
I'm
only
going
to
run
linker
d
at
this
version.
So
please
don't
let
me
do
your
mesh
map,
please
don't
or
measure
a
ui.
Don't
let
me
design
a
pattern
that
I'm
just
not
going
to
be
able
to
use
like.
Don't
let
me
you
know
get
me
into
that
situation
like
prevent.
C
You
know,
so
the
the
notion
that
there's
continual
static
analysis
being
done
and
the
user
has
been
given
feedback.
It's
not
that
it's
invalid,
an
invalid
string
that
you've
typed
into
the
namespace.
It's
that,
like
your
system,
is
full
so
don't
or
like.
Rather
that's
a
bad
example.
The
capacity
example
is
bad.
It's
more
like
you,
don't
have
that
capability
in
your
registry
you're
trying
to
provision
an
istio
pattern,
but
you
only
have
linker
d
configured
so.
D
So actually, one thing that I haven't yet covered is one part of dry running that is not yet supported, for a couple of reasons, though it will definitely be supported. Right now, what we don't support is dry running stuff on Kubernetes. That is, we do all of the analysis on our part, but we don't yet actually try it on Kubernetes.
D
The reason, actually, is that MeshKit's apply-manifest function doesn't yet support dry run, although client-go supports it, so apply-manifest could support it as well. So yeah, that's actually the reason we are not dry running there; I'm just covering that. Other than that: we are definitely using RJSF forms, and it does have access to the JSON schema. So what exactly is different?
D
What
is
exactly
is
different
from
the
from
the
basically
the
errors
that
you
get
here
or
from
the
server
side
right
in,
and
that
is
actually
first
thing
is
so
right
now.
What
I
just
did
is
drag
the
node
and
drop
in
here.
The
other
things
that
you
can
actually
do
is
actually
you
can
go
to
pattern
section.
You
can
load
the
previous
previous
designs.
I
should
I
should
probably
say
you
can
know
previous
designs
from
here
it
couldn't.
It
could
have
interdependencies,
that's
not
something
that
the
client
actually
verifies.
D
Execution feasibility is something that the server is actually going to verify. The other part of it is that Meshery Server also has the concept of selectors. The dry running actually happens in stages, like provisioning does. The concept of selectors is that Meshery, using its SQL database and all of the data that it collects from the cluster, can actually decide for you, on your behalf, what resource should be used for the deployment.
D
If you have not mentioned a version, or something like that, let's say, that's also something that Meshery decides on its own. Actually, if someone is going to check when it happens, I think it happens in stage three: that's where it actually checks, okay, I think this is the resource that I should pick to deploy, and then it will actually verify it. And that's not something that's available in the client, because that intelligence is built into the server.
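As a rough illustration of the selector idea, here is a sketch in Go. The `Resource` type and the pick-the-highest-version fallback are assumptions made up for this example, not Meshery's actual selector logic:

```go
package main

import (
	"fmt"
	"sort"
)

// Resource is a hypothetical candidate discovered in the cluster
// data that a Meshery-like server might keep in its database.
type Resource struct {
	Name    string
	Version string
}

// Select picks the resource to deploy on the user's behalf: if the
// design pins a version, match it exactly; otherwise fall back to
// the highest version available (an illustrative policy only).
func Select(candidates []Resource, wantVersion string) (Resource, bool) {
	if wantVersion != "" {
		for _, r := range candidates {
			if r.Version == wantVersion {
				return r, true
			}
		}
		return Resource{}, false
	}
	if len(candidates) == 0 {
		return Resource{}, false
	}
	sorted := append([]Resource(nil), candidates...)
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i].Version > sorted[j].Version
	})
	return sorted[0], true
}

func main() {
	pool := []Resource{{"istio", "1.10"}, {"istio", "1.11"}}
	r, ok := Select(pool, "") // no version pinned in the design
	fmt.Println(r.Version, ok)
}
```

A production selector would compare semantic versions rather than strings; the lexical comparison here is only enough for the demo values.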
D
So this is sort of the difference between client-side verification and server-side verification. And definitely, there are a few obvious things that need to be improved, and that is dry running the Kubernetes provisioning. So that's the few things I had to add.
A
Yep. Maybe we can help Pranav with the custom error page issue, so that we can get on.
G
Is my screen visible? Yeah. Actually, I was working on the UI of this particular page, which I was making for that error. Right now it just has these standard strings, which we're using on the other website as well, and this is a button. But other than that, this page looks very empty, so I was also wondering what else can be added to this particular page.
C
You know what a real opportunity this particular page is, both to probably give a little snark, even graphically, potentially, with a little, you know, I don't know, some other thing. But also, that discussion forum concept is quite fantastic. It's like, hey... and by the way, I don't know what language we necessarily...
C
We
all
want
to
necessarily
use,
I
think
of
mesherie.
As
you
know,
more
of
an
app
you
know
web
app
than
you
know
with
the
term
the
term
page
we
might
want
to
think
potentially,
but
but
what
an
opportunity.
This
is
to
do.
A
couple
of
things
that
I
know
piyush
has
been
hot
on
in
the
past
and
some
of
the
rest
of
you
and
that's
somebody
bumps
into
this
they're
like
okay.
Well,
most
the
time,
if
most
of
us
it's
gonna,
be
like
well,
I
don't
know
what
what's
going
on.
C
I
don't
know.
I
probably
need
some
help
if
they're
gonna,
you
know,
if
they're
gonna,
invest
and
get
some
help
like
they'll
hit,
that
discussion
forum
link,
go
they'll
hit
that
and
whether
they
go
open
a
github
issue
or
they
go
open
a
discussion
forum,
whether
they
go
into
slack
or
whether
they
like
there's
a
myriad
of
questions
that
they'll
need
to
bring
or
details.
They
need
to
bring
with
them
in
order
to
get
help,
and
that
is
as
pranav
receives
him
and
says.
C
Okay, let me see if I can help you with that. Hey, what version of Meshery are you running? Oh, good. Is that on Kubernetes? Okay, good. And he's got, like, you know: what does your system environment look like? And so Piyush has been tracking the concept of a diagnostics bundle, like a report: bundle my diagnostics and drop them into a zip for me, so you make it easy for me to attach that to an issue or attach it to a discussion forum.
C
Who
else
is
people?
Anybody
think
that,
like
you
know,
there's
a
little
bit
of
personality
and
and
kind
of
a
preference
that
we
begin
to
set
in
the
tooling
here
it
seems,
like
my
general
experience
has
been
well.
You
know
what
actually
I'll
I'll
say
this
for
a
moment,
and
that
is
that.
C
We're
all
consumers
we
all
we
all
have
some
of
our
favorite
brands.
Think
about
why
it
is
you,
like
those
brands?
What
is
it
you
enjoy
about
them?
C
Who
likes?
I
don't
know.
I
don't
know
this
is
about
anyway,
who
likes
navigating
an
enterprise
site
a
web
page.
You
go
to
cisco.com
and
go
there
and
then
go
to.
C
Something
else
it's
much
much
smaller,
there's
a
difference
between
the
level
of
stock
photos,
the
images
that
make
you
feel
like
it's
totally
and
completely
impersonal,
and
they
have
equal
representation
of
your,
your
asian,
your
indian,
your
caucasian
and
and
you're
you're
like
like
it's
just
it's
so
sanitized
right,
it's
so
impersonal,
which
is
safe
politically,
for
that
you
know
company
that
multinational
company
that
wants
to
try
not
to
use
any
language
that
might
upset
or
offend
you
know
some
other
language.
You
know
anyway.
C
Pranav, yeah, please do raise the question about the thing that you just showed: how would that be hooked in such that any crash will call up, I mean, display that?
C
Rather, yeah, or asking it from a different direction, it's like: how do we ensure that any crash displays that? How do we have a global error handler?
A
Thank you, Pranav. So before we wind up today's call, I'll just point out that we have a Meshery build and release meeting tomorrow. I guess before that we will have some release candidates ready for the v0.6.0 release, so we can look into that in the build and release meeting as well, and probably look into the Meshery roadmap as well.