From YouTube: Meshery Development Meeting (Sept 22nd, 2021)
Description
Meshery Development Meeting - September 22nd, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A
Welcome, everyone, to the Meshery development meeting. Today is the 22nd of September, and, like every other Layer5 meeting, we record this meeting and it will be made public. With that out of the way, we have some newcomers on this call.
A
For anyone who missed your earlier conversation, would you like to introduce yourself?
B
I am currently an undergraduate student in my last semester, studying business software engineering. I have mostly worked on DXs, and I have an interest in cognitive technologies.
A
Sounds good. Thanks for introducing yourself, and welcome to the community. Oh, thank you.
D
Yeah, so hello, everyone. My name is Muhammad Jasir Khan; you can address me as Muhammad. Basically, I'm a computer science undergrad from India, in my junior year. I just came across Layer5, contributed to some good first issues, and am looking forward to contributing to Meshery in the near future.
A
Thanks, Jasir, for the introduction. I think we have one more Muhammad on the call, Muhammad Mosum. Would you like to introduce yourself?
A
Okay, yeah, you can introduce yourself in the chat. No problem, even if you don't have a microphone. Welcome to the community; it's nice to have you here.
E
Okay, wait, wait, really quick before I get confused, for everyone else: the Muhammads have arrived. This is great. Welcome. Just to facilitate communication...
E
So we have Muhammad Pial. By the way, Muhammad Pial, you've got a nice rhyme going on in your name, like Muhammad Ahmed; there are a lot of Ahmeds in there. Anyway, so Pial, and then Jasir Khan, who just introduced himself. So you go generally by...?
D
Yeah, basically it's Muhammad, but people usually like to shorten it to something like Mo.
E
Yeah, most of us are. And you said React.js, that's awesome. We have a program in the community called MeshMates, and the MeshMates are individuals who dedicate time to engaging with newcomers as they come in, so there are certainly MeshMates for you to engage with, which will be nice. But I have a quick story.
E
Someone came onto a call just like this about a year ago and said something really similar to what you just said. He said, "You know, I'm focused on React," and about five people jumped on him, saying, "Oh really? There's a bunch of things for you to do over here." At the time there were very few people focused on the front end, and there's a lot of front-end work to do. So kudos to you for being React-focused; there's a lot of stuff that goes on with React.
A
All right, welcome, everyone, all the new folks. So let's get started with today's agenda.
A
The first topic is about a new meeting, or more like restructuring an existing meeting. We usually have bi-weekly calls for initiatives around continuous integration in Meshery; that is the Meshery CI call. We are restructuring it to be the Meshery Build and Release meeting. The scope of the meeting is to ensure that we test Meshery and make sure that Meshery is ready for a new release.
A
We are at v0.5 right now, so we are about halfway there, or exactly halfway, to a 1.0 release, and we have to ensure quality when we make a new release of Meshery. The upcoming release is v0.6, and it will be out in a couple of weeks. One of our goals in this meeting is to test Meshery.
A
If you look in the meeting minutes, you will find a link to the Meshery test plan. What this basically is, is a series of tests, or complete use cases of Meshery, that tries to touch all of Meshery's components. The reason we have it written down is to make sure that the functionality mentioned there actually works across all platforms and all operating systems, and also to make sure that the user experience while running through these steps is actually good and intuitive.
A
How this is going to work is that people can take up a particular area, component, or test group, test out the actions mentioned there, and write down their observations, so we can have people sign up to test these critical areas. The other thing I wanted to mention is the Meshery testing strategy, as we walk through these tests, most of which are manual right now.
A
One of the other goals of this meeting is to make sure that we are progressing toward automating these as unit tests and integration tests. Essentially, what we want in the future is to automate all of these test cases and hopefully trigger them in the build-and-release, or CI/CD, pipelines.
A
The action item right now is for people to sign up to test these particular areas. What we are looking for is people who have actually worked with Meshery, who have sufficient understanding to carry out these tests, to write good reports on what is actually happening versus the expected outcome, and to raise issues and go fix them.
A
That's pretty much it. Lee, would you like to add something here?
E
Briefly, I'll say that this is an excellent opportunity for Push, actually, with respect to mesheryctl. Push has already been doing most of what these line items cover, and so, Push, this is an excellent opportunity to lead the way. There are right now a couple hundred lines in the spreadsheet.
E
Eventually, we'll want to automate every single one of these. Before we do, there's some amount of human checking. For one, we don't have the automation today. It's not necessarily difficult to get to automation; it's difficult to get to sustainable automation.
E
So automation isn't the immediate goal for the upcoming release. There are kind of two things that are the immediate goals for the upcoming release. One is to make sure that the feature works, that it behaves in the right way, and, something only a human can really look at, that it provides the right feedback. Thinking of mesheryctl specifically, it should give people example usages.
E
It should not just tell them that they're missing a required parameter; it should say specifically which parameter, and here's an example. So that's one thing, and I'm going to call that out for Push. I'll also say there are a couple, and maybe this was just said, but there are a couple of overarching integration tests that are being run, a handful of them, every time a pull request is made to the repository.
E
In addition to that, it's straightforward enough to take the recently written GitHub Actions, there are two that have been written for Meshery specifically, and schedule both of those actions to run on a nightly basis. Those GitHub Actions are broad; they are significant unit and integration tests. They run the full gamut: bringing up a cluster, deploying Meshery, bringing up a mesh, deploying a sample app, testing.
E
I don't know that that's on anyone's plate, so I'll take a note in the meeting minutes; it could be a good opportunity for someone who is DevOps-oriented or build-and-release-oriented. So yeah, it's good to recast that meeting. Quality is not something that we can afford to skimp on any longer.
E
Meshery right now is in its 0.5 release. That means it's over the hill; it's just going to go over the hump and start down the hill toward 1.0, toward people and organizations wanting to use Meshery and all of its capabilities in production. I think a user just asked on Friday.
E
What time is the meeting tomorrow? It's about four hours later than now, I think.
A
Yeah, four hours later than today's call: 12 p.m. Central Time, 11:30 IST. We are also thinking of shifting the timing of the meeting to some time earlier, so this week it will be at this time, and maybe we can reschedule the meeting in the coming weeks to make it a bit earlier.
E
By the way, did you end up having conversations with Jared and Rudraksh or others about the question that you raised in Slack? Try to have those conversations publicly so that all can stay abreast of what's going on.
E
Did you guys converse publicly and I just missed it? No? Okay, please do, because that actually is a good reminder for everyone. It's really easy to go have private direct messages, but stop, don't do it. I challenge you all to have public conversations. You're going to miss out on being able to collaborate with others and learn from others, and you'll do duplicative work, which I think might have just happened.
E
I'm not sure. I just raised a PR on the NGINX repo about converting from v1beta1 to v1, which I assume is what you guys have done. Is that right?
H
Okay, so Jared and Rudraksh added the ability to download the service mesh via Helm. I was talking about that.
E
Yes, no, no, the adapter itself. When you run the mesh adapter for NGINX and tell it to install a service mesh, was it installing from Helm, or was it installing from something else?
I
It's the recent change. I hope I'm audible and my audio is not breaking. The recent change was about normalization. Basically, NGINX recently rewrote their Helm charts, and where the version in their Helm charts previously contained a "v" prefix, it no longer does, which affected the function that was present before the PR was merged.
I
A while ago, it was actually using the NGINX SM CLI, which had some EULA issues, and we couldn't get the NGINX Service Mesh adapter out of beta. Then recently we had Jared joining us in the community, and he informed us that you can install NGINX Service Mesh using Helm charts now, so we moved to Helm charts instead, and now NGINX SM is installed using Helm charts. The PR I was talking about before was on this topic, so yeah.
E
You know, the fact that their Helm chart is using deprecated API versions means it doesn't support Kubernetes 1.22; they're using an old v1beta1 of the mutating webhook configuration. So, Shreyas, I'm still trying to extract what you guys figured out yesterday. How is this working now, given the challenge that you were facing yesterday?
E
Okay, cool, then I think I was confused from the start. Great, so, okay, you're still seeing the error up there. I was confused as to what we were doing. By the way, for everyone else on the call: there's a reason why there's a Meshery adapter for every service mesh. Every service mesh is unique; each provides and runs differently and wants to be installed differently.
E
I have to say, I wish Jared were on, because it's nicer to give critical feedback when someone is there, but I have to say the NGINX adapter's Helm install takes forever, and it actually wouldn't uninstall in my cluster. I couldn't get their namespace deleted; I had to wipe my cluster. So, Shreyas, are you seeing a long time to install with Helm as well?
E
Okay, I mean, in part they're deploying SPIRE; they're actually deploying a few things, like Jaeger and Prometheus and SPIRE, which is pretty cool. They're fully using Service Mesh Interface, that spec and its set of specs; they've got a rate-limiter CRD and I think a circuit-breaking CRD. Those are really cool things, but it does take forever to install, and actually uninstalling...
E
I couldn't get it to install. So, the error that you're seeing right there, with the validating webhook configuration being deprecated in 1.16: I just sent in a PR right before this call started, and the link is in the chat; I'll put it up again. It's not a very big PR. It just switches the webhook configurations off of v1beta1 to v1, because that API version was removed in Kubernetes 1.22.
H
I'm running 1.20, okay.
E
Okay, you're explicitly calling that one out to run, nice. So those are just warning messages that you're seeing, and they're not preventing you from running. Okay, and then do you happen to see NGINX on the dashboard in Meshery now that it's provisioned? No? Okay.
I
So basically, this is about getting Nighthawk out of Meshery's container. There are two steps to this; I mean it's actually one, but we are taking two steps. Basically, Nighthawk should run in a separate container, not in Meshery's server container, and in fact all load generators should run out of Meshery's container, in a separate Meshery-perf container.
I
And you can see that it's sending that event, and the service actually running in the separate container is receiving the test configuration, and we have the graph. Basically, the pull request that is here right now does a few things. It makes sure that Nighthawk, I mean the Meshery...
I
...which adds the ability to natively convert the output produced by Nighthawk to fortio-compatible output, which is used for making this histogram. But I'm still not using it here right now, because it's actually not released, and I'm having second thoughts about using it, because Otto, one of the maintainers of Nighthawk, made a comment on this pull request that if we can use the Nighthawk output transformer binary, we should use that, because it would mean less maintenance for us, so yeah.
E
Using Nighthawk's transform, isn't there a bug in there, that it doesn't support fortio-compatible output?
I
Yeah, actually. The Meshery server that I have deployed on my system right now is basically the old version, from before this pull request was merged. That is actually what it was, plus some hacks, because the result the output transformer returns does not align with fortio's; even the field names have some differences.
E
Cool, good. All right, so, by the way, I think you were just trying to highlight to everyone that you've done some work to be able to take what is currently a built-in load generator, which ships inside the same container as the Meshery server, and show that you can pull that out and run it in a separate container, right? Yep, nice, good. That'll set us up for distributed performance testing, which will be super interesting, really neat, along with this.
E
As a related enhancement, did you tell everyone about the reduction in image size?
I
We were using a pretty heavy image, an Ubuntu image, because we thought it had all the libraries that are needed. Basically, Nighthawk binaries are not statically compiled like Go binaries, so they need some sort of shared libraries in order to work out of the box, and that led us to having a larger image. But then we realized we could use a smaller image, one that comes with Alpine and glibc, and Nighthawk would be satisfied with those shared libraries.
I
So we recently moved to that image. Also, the binaries that were compiled and released at getnighthawk need to be investigated; I also inquired with Otto about this. The binaries released on the getnighthawk repo are quite big, actually 600 MB in size, versus the ones that we have in the container.
E
Does anybody have questions for Rudraksh? By the way, for those who looked at slide two, which I had linked in the chat, you'll note that Meshery supports three load generators, and each of those three is shipped inside the same server...
E
Sorry, the same container image today. Rudraksh is showing that those can be pulled out and put into their own image, their own container, which would basically become very similar to an adapter; I'm sure we would treat them like an adapter. That will enable Meshery to run any number of load generators, so that it can really do intense, high-fidelity performance analysis.
E
You know what, this will kill me, but let me help here and be more concise, and that is to say: people are trying to deploy Meshery into EKS.
E
None of them are necessarily easy upfront, or long-term sustainable. One of those options includes bundling AWS's CLI into the Meshery server container, which is exactly the kind of thing Rudraksh just got done showing that we're trying not to do anymore; we're trying to get away from those things, and it has long been on Hussain's agenda to get us off of those dependencies.
E
So, Hussain, Rudraksh has done a fair bit of work and investigation here that, frankly, will take way too long to explain on the call. But if we can, it's time to hand that off to you, to let you go through it and come back to identify or suggest which route we might take; it's actually probably too deep to go into here.
J
Which is the credentials part, and there are the existing config commands.
E
Hussain, there's eksctl, which is probably worth doing some diligence on, to understand if that utility has made this use case any easier.
E
Yep, but even more than that: not using a command, not using a binary, but, to your point, using an SDK. So either AWS's SDK or a Go client of eksctl, which is a project from Weaveworks.
J
Yeah, when I came across that single command, I tried to find the SDK, but there was not much documentation, or any of that implemented. I can take a look again now.
A
Yep, let's move on to the next topic. Hussain, I will follow up with Rudraksh here. Darren has joined, I think. Yes.
G
Yeah, can I share my screen?
G
The PR here is basically a follow-up to the Helm chart versioning bug that I faced last Friday. While fixing that bug, I realized that the Helm chart is missing something; that's why the Broker and the MeshSync pods are not coming up after using the Helm chart to do the Meshery installation, and this PR is to fix that. Besides that, I also noticed that... let's go to this issue so I can explain it a bit more easily.
G
Besides that, we can see here that the Meshery operator has two containers: one is the manager, and the other one is the kube-rbac-proxy container, while the Helm chart only has the manager container. I didn't really list it out here, but I do have the actual cluster.
G
I don't know why I cannot re-click that tab.
G
So this is the template, basically just a CR instance of the MeshSync and then the Broker. You can see that here is MeshSync as a kind, and then the Broker as another kind, so users can basically do a helm install with these two Helm charts to install the actual instances of the Broker and the MeshSync. I can actually show you in my cluster how it works; on the left you can see here.
G
I have all the other pods that get installed with the most updated Helm chart. That's the command I basically ran, and then I got the system like this on the left. One thing I wanted to show is the operator: now it has the kube-rbac-proxy container as well as the manager in the same pod, and if I show you the logs, it seems it is doing what it's supposed to be doing.
G
I compared the logs with the mesheryctl one as well, and you can see here they match. The manager is not doing anything after starting the controllers and the workers, because we don't have the CR instances for the Broker and MeshSync, so the reconcile functions for both resources are not getting triggered. But if I now do the helm install, let's call this one broker, using the Meshery Broker CR chart, in the meshery namespace...
G
As soon as I did that, we can see here that the operator is able to pick up this resource and start to reconcile the Broker, and if I go back to the cluster, we can see a meshery-broker-0 getting created here in the cluster. It's still trying to set up everything it's supposed to be doing, and then it's basically the same process for the MeshSync.
G
That's basically what this PR is about, but I did notice there are some differences between the existing operator YAML manifest and the one that we use in mesheryctl, for example the ports. Here we only have one port, 1000, for HTTP, but on the other hand, in the one that we use for mesheryctl, we have two ports.
G
One is 9443, for the server, and the other is 8080, for the metrics, but we don't have that in the operator Helm chart. So I was wondering if I should update this so that it is consistent with the one we use in mesheryctl.
E
Clearly, this is great. Actually, the way that you presented it was very nice, because it falls right in line with the way that we commonly present on these calls, which is an acknowledgement that most of us are learning.
G
Okay, so if that is the case, I would like to cover one more thing. The reason that I have these two resources installed after the Meshery installation is that, with the Helm charts under helm/meshery, we have a file called crds.yaml in which we actually define two custom resources: one is called broker.meshery.layer5.io, and the other one is the MeshSync one.
G
For those of you who are not really familiar with it, a CRD is basically an extension of the Kubernetes API. Kubernetes provides a way for us developers to create our own objects.
G
As you may know, there are different objects in Kubernetes; for example, Deployment is one of the built-in kinds. So we use this mechanism to create our own custom resource definitions.
G
One thing to know is that what we're seeing here is just a definition. It describes the fields the object has, but you don't actually have any objects until you create instances. So after the first step, we only have the definition for the object.
G
That's why we need another step, or steps, to do the installation for the Broker and the MeshSync, so that we have the pods in the cluster. If you actually take a look at the Broker and MeshSync Helm charts that I created, they have a kind called MeshSync. This is not a built-in object; it's the one we defined in the CRD, and it's the same for the Broker, whose kind is called Broker, which is what we defined in the CRD.
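To make the distinction concrete: the CRD only defines the custom kind, and the chart's templates then create instances of it. A minimal sketch of such a CR instance, with an assumed group/version and placeholder values (the actual chart fields may differ):

```yaml
# Illustrative CR instance: the custom kind "Broker" exists only
# because the broker.meshery.layer5.io CRD was applied first.
apiVersion: meshery.layer5.io/v1alpha1   # assumed version
kind: Broker
metadata:
  name: meshery-broker
  namespace: meshery
spec: {}   # placeholder; real fields come from the CRD's schema
```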
E
Nice, cool, good, yeah, thanks for that. A couple of items: yep, my understanding is that we probably want to be consistent with mesheryctl's use of those ports. I'd have to go back and check, but certainly with the deployment of the Meshery operator from mesheryctl.
E
It works well outside of the CRs themselves, which is in part what you're addressing. To your point, we do want to set aside the manifests as not the desired way to deploy. We want to go whole hog: in mesheryctl, the way that it deploys, we want for it to use Helm for all of Meshery's components.
E
There are use cases where the Meshery server will also need to deploy the operator, and so it would use Helm as well. And then, in just supporting the deployment of Meshery itself, as people go to initially deploy Meshery, they could use Helm to deploy Meshery and all of its components. So that's really great. There are a couple of objectives, like these objectives written here, that we try to hold true, such as using as few commands as possible.
E
Keep in the back of your mind: one is that, last time we spoke, I thought we were using Operator SDK, but if you're seeing Kubebuilder stuff, then okay, I guess we switched; we're using Kubebuilder. As you dig in, there's a leader election that seems to be going on for nothing; one of the potential future enhancements to the operator is the ability to be highly available and have multiple instances with a leader.
E
I don't know the terminology, leader/non-leader. Anyway, in Meshery's UI, next time you're in there, there's the settings area, and there's a switch you can flip back and forth to deploy or undeploy an operator from a cluster. One of the things that...
E
One thing we're struggling to find the balance on, and something for you to digest and think about as you spend time here: we're having faith that there's going to be value in the operator caring for and feeding those two custom controllers, the Broker one and MeshSync, now that we're correcting the fact that those CRs will be present and the operator can do its reconciliation job.
E
Ongoing, great; once we get to a highly available setup, we'll probably find more value in the operator. There's an open PR on Meshery to support many Kubernetes clusters. Right now, the architecture of Meshery is lined up to support many clusters, but in the current release, Meshery only supports talking to a single cluster at a time.
G
So the end goal of that, in short, is that we want to do everything in one command, right? The flow is that we want to do either helm or mesheryctl.
G
Let's
say:
let's
take
the
hem,
for
example,
we
do
you
know,
ham,
install
and
then
the
path
to
these
you
know
basically
just
the
first
command
here
and
then
we
do
this
and
then
after
the
operator
is
created
in
a
cluster
and
then
as
well
as
we
have
the
crds.
The
operator
will
start
to
create
the
cr
instance
for
the
broker
and
and
then
the
messing,
but
it
does
what
order
here
doesn't
really
matter
because
they're
two
independent
resource
and
then
and
then
the
operator
will
start
reconciling
for
both
resources.
E
I'll characterize two other behaviors that we desire. To your point about single-command install: if you run mesheryctl system start, let me describe it as this: we want for the Meshery server and mesheryctl to greedily, if I can use it this way, deploy the Meshery operator as soon as a user connects Meshery to a cluster, because MeshSync is really the heart of Meshery.
E
It
discovers
like
it
pumps
a
bunch
of
blood
if
you
will
a
bunch
of
info
to
the
various
components
so
that
they
know
what's
going
on,
it
can
take
action
in
the
right
way
and
so
getting
mesh
sync
up
as
early
in
that
deployment
process
as
possible
lends
to
a
ver
a
quick
time
to
value
for
users.
They
sort
of
bring
up
mesher's
ui,
it's
like
oh
wow,
like
it
already
knows
all
these
things,
and
so
immediately
they
can
start
to.
E
Okay, but anyway, those are big, lofty things, but the here-and-now things are something I'd be curious about. Thanks for the education, by the way, on the kube-rbac-proxy. I've come to understand that it's more or less like a sidecar next to the...
E
Just a sidecar container. But as you think about this more deeply, part of my question is: well, hey, what value have we yet to derive from having an operator?
E
What are we missing? Again, these are just things to think about. If MeshSync isn't running and isn't communicating to the Meshery server...
E
You can still use a portion of Meshery's functionality, but more and more, it becomes really painful if MeshSync isn't connected. So if there are certain reconciliation loops, or more robustness, that we can get from the operator, the better.
G
Yeah, I'm just thinking that I'm not really sure I have seen a pattern where the operator is in charge of the resource deployment, because in all the projects I work on, or the open source projects I have seen, people do it separately, because they are two independent things. The operator is not supposed to handle the creation of the custom resource; its main job is to reconcile the resources that you want it to reconcile, right?
E
Nice, okay. I took us over time; I wanted to make sure to give this topic room, so sorry about that. And Darren, this is great. Thank you.
A
We missed a couple of topics today because some folks were not able to join, but I think we covered everything that we set out to discuss. Just a reminder: we will have the Meshery Build and Release meeting tomorrow. You can see the details in the Layer5 community calendar, so join if you are interested in testing Meshery out and being a part of that initiative.