From YouTube: Meshery Playground: Deployment Planning
B
Hi, Ala — hey, yeah, I could hear you a second ago. Hey there. All right, hello.
B
Well, Ashish is on the call — he's got color in his beard too. So I'm kind of the odd man out here, yeah.
B
Okay! So we've got Mario on, Pranav is on, and maybe we'll just take a couple of minutes to try to get organized before we kick off the call, in case a couple of others join. That means we have a couple of minutes just to say hi to Ala. Actually — am I saying your first name right? Right. It's Ala!
B
Oh nice, okay, I got you. Way to break the ice even further, beyond making fun of my beard. Let me ask you this — or, say a bit about yourself, if you would, just in terms of your level of comfort with, you know, Kubernetes and so on.
C
So, actually, a couple of months ago I joined the community and attended some of the meetings — some of the weekly meetings. Then I made a bit of a career detour, and now I'm back to business again.
C
So my background is coming from networking into systems administration. As for my level with Kubernetes: I have about four months of experience with it, and now I'm preparing for my CKA. In terms of deploying clusters, I have done that a couple of times on cloud-based VMs — I have never done it on bare metal, but on VMs I have done it multiple times. So, as I mentioned in my reply on this issue, I could be a good backup for the more experienced people here who have a good understanding.
B
Okay, that's great, yeah! That's all of you in a nutshell — that's all there is? No? Yeah, that's wonderful. I'm glad that you're here. As a matter of fact, I'm glad that there aren't 45 other people here, because there's enough work to be done for a handful of people to get into — but oh man, if everybody showed up it would be a little tough.
B
So that's great. Hey, you know, kudos on starting off with networking — that's a great place to have your initial focus: networking and Linux sysadmin. That's extremely useful knowledge, both of those areas. And yeah, I've been so used to deploying with a minikube or a Docker Desktop that I tend to forget — oh my gosh, how much... how...?
B
Just real quick — Obby, or Pranav, or Mario: do you guys see other people in the Slack, maybe asking around about where to join? Maybe we should just send a reminder to everybody that this is happening.
B
Thank you — thank you, perfect. So this is for Ala, but also for the rest of us, to kind of be on the same page. So, gentlemen: we are — well, we're on a mission to see if we can provide what is essentially the CNCF playground, or the Cloud Native playground, or the Meshery playground — hopefully those become synonymous — to the extent that Meshery, as a piece of management software, allows people to configure, deploy, and manage the ongoing life cycle of all of the CNCF projects.
B
Hopefully it manages that pretty well. In order for us to expose that capability to people — to come and learn and explore all the different CNCF projects — we need to run an instance. We need to get serious about running an instance of this playground: a demo environment, a playground environment. One of the inherent challenges to any sandbox — to running anything openly — is people purposefully or accidentally abusing it.
B
Somebody might want to come over and mine some Bitcoin, or run some things if they can. In the demo environment they can go do some stuff — they might be doing it on purpose, they might be doing it by accident. But as we look to run this in an open way, the damage that people can do is, well, only to the extent of these two bare metal systems — and the bare metal systems that we have are running in the CNCF labs, which is, well, it used to be Packet; they were acquired by Equinix.
B
So there's this Equinix portal that we're looking at here, and there's a CLI for interacting with the Equinix API. The CLI is called metal, because Equinix Metal is the name of their offering.
B
You can use the CLI to spin up and spin down bare metal systems — that sounds kind of weird, to "spin up" and "spin down" bare metal, but anyway — to spin up and spin down bare metal systems, to install OSes of your choice, to pick different-sized physical servers in different locations, and all that. So we'll get into that a little bit more. One of the considerations that I think we can get into somewhat headlong, because Ala is here, is some of the ongoing operational considerations of really any Kubernetes cluster and the things that are running on it. With Meshery as the primary workload: is Meshery healthy?
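A minimal sketch of that spin-up/spin-down flow with the `metal` CLI — the hostname, plan, metro, and OS values here are illustrative placeholders, not the project's actual choices, and exact flag names should be checked against the installed CLI version:

```shell
# Provision a bare metal server (plan/metro/OS are illustrative values)
metal device create \
  --hostname playground-01 \
  --plan c3.small.x86 \
  --metro da \
  --operating-system ubuntu_20_04 \
  --project-id "$METAL_PROJECT_ID"

# List devices to find the new server's ID and public IPv4 address
metal device get --project-id "$METAL_PROJECT_ID"

# Tear the server back down when done
metal device delete --id "$DEVICE_ID"
```

These commands need a live Equinix Metal account and API token, so they are a runbook fragment rather than something runnable here.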
B
Did it fall over? How do you monitor Meshery? How do you tell that it's healthy? Some of that can be answered in a very generic Kubernetes way — nothing specific to Meshery necessarily, just "is the pod up", and so on. There's a lot of that kind of thing. If you go to the Meshery docs, they're more or less absent of all of that stuff, and we don't need to write all of it down.
B
We don't need to write a manual on generic Kubernetes workload availability and monitoring, but we do need to highlight the special considerations that people have for Meshery as an app: does it expose a Prometheus interface? Does it emit logs? Those types of things. As part of the start of this effort, some of the artifacts that will be produced would be SRE-type artifacts — or, yeah, recommendations that the project will have for Meshery users.
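The generic-Kubernetes half of those health questions can be answered with ordinary kubectl checks — a sketch, assuming (hypothetically) that Meshery lives in a `meshery` namespace with a deployment of the same name:

```shell
# Is the pod up, and has it been restarting?
kubectl get pods -n meshery -o wide

# Events, probe configuration, and rollout state for the server
kubectl describe deployment meshery -n meshery

# Recent logs emitted by the server
kubectl logs -n meshery deployment/meshery --tail=100
```

The Meshery-specific part — Prometheus endpoints, log formats — is exactly what the SRE artifacts mentioned above would document.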
B
Yeah — so how do you do it in an HA way, and so on. So, yep: we have these two bare metal systems. We'd like to make sure that there's a Meshery deployment that runs in a two-node Kubernetes cluster, and that we schedule workloads to the control plane as well as to the worker node — because it's just two nodes, we shouldn't waste one only running a control plane. We'll also need to pick a load balancer as we go to expose the service.
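Allowing workloads onto the control-plane node of a two-node cluster comes down to removing kubeadm's default taint — a sketch; the taint key differs between kubeadm releases, so both variants are shown (the trailing `-` means "remove"):

```shell
# Newer kubeadm releases taint the control plane with this key
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule- || true

# Clusters initialized with older kubeadm releases use "master" instead
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule- || true
```

The `|| true` lets the same script run cleanly regardless of which key the cluster actually carries.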
B
We need a load balancer that works in — you know, we're in a bare metal environment, so we can't rely on a cloud provider's load balancer integration. We need to choose Envoy or something to provide our own — Contour, NGINX, or something — and go over that. Each of those areas just needs consideration. For example, there's a public IPv4 address that's assigned to each of the systems. Okay!
D
So, should I start? Yeah, please. Okay, so just to give a brief overview — although you also covered most of the stuff — we have playground.meshery.io running right now, and currently it is running inside of AKS, which is a managed Kubernetes service. Basically, the topology for this thing is pretty straightforward: we have an ingress pod running which serves the requests for Meshery. And we have gotten these two Equinix bare metal servers, and we are in the process of migrating to those.
D
So, as of now, the current status is this. First of all, here is the issue that was created — and I think you're already familiar with the issue, so I don't need to go over what it was. The issue also referenced this Meshery SMP action, and if you go over to this action — you click on Actions — you'll find these tests running, and these tests are actually running on the—
D
If you look at all these tests, these are running on the servers provided by Equinix. The first two things that you're seeing are the newly created servers which would be used for deploying Kubernetes; all the other ones over here are used for load testing in the Meshery SMP action.
D
So if you need to find a way to interact with the Equinix API in order to do things like kill a server, you can come over here and you'll find, inside of scripts like mesheryctl.sh, a good template for how to tear down these servers using the API. We are not using the CLI inside of the scripts.
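The tear-down call those scripts template out is a single authenticated request against the Equinix Metal API — a sketch; the token and device ID here are placeholders:

```shell
# Deprovision a device by ID; auth is the project/user API token
curl -s -X DELETE \
  -H "X-Auth-Token: $METAL_AUTH_TOKEN" \
  "https://api.equinix.com/metal/v1/devices/$DEVICE_ID"
```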
D
We are using the API, with tokens and all. But as the discussion progresses we'll reach a point where part of the discussion would be: do we need to tear down the whole server, or do we need to do a cluster cleanup? So with that — what's the current state of this whole thing right now? Currently, as you can see, there are two nodes. The zero-two node is currently the master node, and zero-one is the—
D
—worker node right now. And currently this is the state of the whole cluster: nothing but a simple NGINX deployment is present. The CNI being used is Flannel right now, because it has only one YAML file — it is pretty straightforward to deploy — and usually you only deploy the CNI on the control plane; you don't need to do it on the data plane. When you use kubeadm join to join your worker node with the other node, basically the whole—
D
—the CNI knows how to make all of those connections. So yeah, Flannel is being used as the CNI. To give even more context: for running the containers, containerd is being used. Although Docker is installed on both of these machines, Docker is not being used as the method of creating the containers — implementing the CRI interface — because before the 1.20 version of Kubernetes that was the preferred way, but now the preferred way is to use containerd or something like CRI-O.
D
So we are using that, and I am telling you all this because it is part of the debugging process. When you start up the cluster using kubeadm init, and then you join your worker node using kubeadm join, there are places where you would need to debug stuff — so you would use systemctl to look at the daemons that are running. For example, I am right now on the root node and I can look at the status of—
D
—I can look at the status of containerd; things like containerd should be running fine. If there is an issue with containerd, if there is an issue with the kubelet — with any of these things — we would run into a problem. So, for those who don't already have this context, what kubeadm does is: it requires you to have a CNI already installed — it doesn't provide the CNI — and it does not provide you the kubelet; you need to have a kubelet already there.
D
So when you use kubeadm, it installs some static pods — like etcd or kube-apiserver, the things required to start up Kubernetes — inside a directory under /etc, and then it asks the kubelet to run those pods. Once those pods come up — and when you do kubeadm init, you provide the — well, all of this is mostly covered inside the doc that was provided in this particular issue.
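The kubeadm flow being described boils down to a few commands — a sketch; the pod CIDR shown matches Flannel's default, and the exact join token and hash are printed by `kubeadm init` itself rather than invented here:

```shell
# On the control-plane node: bootstrap the cluster
# (10.244.0.0/16 is Flannel's default pod network CIDR)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Install the CNI — Flannel ships as a single YAML manifest
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# On the worker node: join using the token/hash that `kubeadm init` printed
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```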
D
So if you go inside of that and click on "Kubernetes on bare metal in 10 minutes", it goes over a very basic procedure. If you were to follow all of those steps after SSHing into both of these servers, you would be able to successfully deploy Kubernetes across both of them. The only caveat in our case was that containerd was failing over — and even for that, I think I haven't sent it out.
D
Basically, the issue was that there was some problem with the configuration of containerd. So, as I said: use systemctl to figure out if containerd is healthy, if the kubelet is healthy — things like that. For example, if anything fails over inside of the cluster, if containers are not coming up, the first place to go would be to run systemctl status on, let's say, the kubelet. The kubelet should be working fine — and in our case the kubelet was working—
D
—fine; containerd wasn't, and there was one part of the documentation that I had missed. So that was that.
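That debugging loop — checking the node daemons before suspecting the cluster itself — looks like this; a sketch of the commands, not a transcript of the actual session:

```shell
# First stop when containers aren't coming up: are the node daemons healthy?
sudo systemctl status kubelet
sudo systemctl status containerd

# Follow the kubelet's journal for the actual error messages
sudo journalctl -u kubelet --no-pager -n 50

# After fixing a containerd config problem, restart and re-check
sudo systemctl restart containerd
```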
So, going from here, what's the next step? The next step is to deploy Meshery — so, how to deploy Meshery. There are two things right now: first, to deploy Meshery, and then to deploy it using some sort of GitHub Action, or maybe an Argo controller, which watches the container registry — the OCI registry where we push our Docker images. As a Meshery release is made — the way—
D
—GitHub Actions work right now, they push a Meshery release image to that registry, and Argo can watch for those images, and based on that it can be triggered to do a rollout — because by default Kubernetes does a rolling upgrade.
D
So we can run four, five, or maybe ten replicas — because we have two nodes, it would be evenly distributed, maybe five pods on node A and five pods on node B — and whatever we use, whether a GitHub Action or some kind of CI/CD tooling like Argo, the final goal should be to have that kind of high availability, so that we can do upgrades with no downtime. So, yeah.
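The no-downtime goal maps onto a standard Deployment rolling-update spec — a hedged sketch; the image tag and replica count are illustrative, not Meshery's actual manifest:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: meshery
  namespace: meshery
spec:
  replicas: 10            # e.g. ~5 per node across the two nodes
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # keep almost all replicas serving during an upgrade
      maxSurge: 1
  selector:
    matchLabels:
      app: meshery
  template:
    metadata:
      labels:
        app: meshery
    spec:
      containers:
        - name: meshery
          image: meshery/meshery:stable-latest   # illustrative tag
          ports:
            - containerPort: 8080
EOF
```

A CI trigger (GitHub Action or Argo watching the registry) then only needs to bump the image, and Kubernetes performs the zero-downtime rollout.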
D
So that's the second part of the discussion — okay, so yeah, that's what we would discuss. For the part about networking, I would suggest that we use MetalLB, which is usually used on bare metal servers. What we would need to do is allocate some elastic IPs — some extra IPs — and create a MetalLB configuration.
D
Once we have these public IPs, we would be able to go inside any of the nodes and run ifconfig, or maybe ip route show. So right now, if I look at all the network interfaces and IPs available on those — as you can see, there is this public IP that is currently exposed.
D
We wouldn't be exposing Meshery through that same public IP — that is the IP that we use to access the Kubernetes API server, so we won't want to use the same IP, of course. We would have another IP, an elastic IP, or maybe — again, that's part of the discussion.
D
There are other network interfaces here as well — here is the Flannel one, which is used for CNI purposes. So, coming back: we would need another public IP, and whatever public IP we create — whether it is static or subject to change — I think it should be static, because otherwise we would need another workflow which makes sure that whenever the IP changes over here, the MetalLB configuration changes too.
D
So we would need to create an IP over here, and based on that IP we would need to create a MetalLB configuration, and then the MetalLB controller would be deployed. It would make sure that whenever we create a Meshery Service of type LoadBalancer, it is automatically assigned an IP within a CIDR — and we would make sure that the CIDR we provide to MetalLB, basically the IPs within that CIDR, are available over here.
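The MetalLB side of that is small — a sketch using MetalLB's CRD-based configuration (older MetalLB releases used a ConfigMap instead, so check the installed version); the address is a placeholder for whatever elastic IP Equinix assigns:

```shell
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: playground-pool
  namespace: metallb-system
spec:
  addresses:
    - 147.75.0.10/32        # placeholder: the static elastic IP
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: playground-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - playground-pool
EOF
```

With this in place, a Service of type LoadBalancer gets its external IP from the pool automatically.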
D
So we have already given it an IP which it can expose to the outside world. That would be the networking part. And about the rollouts and things: again, we can use GitHub Actions or we can use Argo, depending upon what is most convenient. So that is all I had to say as of now.
D
If you have any questions regarding whatever I said, or doubts at any place, you can ask.
B
So, Ashish — there are a number of folks on the call, some of you who are here out of interest, because you're here to learn and kind of be a fly on the wall, so to speak — and that's wonderful. There are a few others that are here to get their fingers dirty, to get their hands onto something; we were just getting to know Ala a second ago.
B
I wonder — yeah, I wonder how we start to break out some of these tasks. One of the open questions for me, in part from not having familiarity with Equinix Metal, is this last subject that you were talking about: public IPs and, more or less, networking — like, how are we—?
D
Yeah — so once we have an IP over here, and there is a running deployment of Meshery which is exposed as a service, then we can start with Let's Encrypt, and it will do the verification on its own with whatever domain name we have added. And I think we should make sure that the IP doesn't change — I think we can do that over here, so that the IP is attached to that machine only.
D
So actually, we would only need one IP per node — basically, we would probably need two public IPs for the two nodes — because Kubernetes networking would internally handle the traffic between the two nodes. We would only need to make sure that traffic can reach one node, which can be the master node — depending upon where we want that ingress of traffic in the first place. So, yeah.
D
So we would just need to make sure that traffic can reach one node. Basically, when we create the Meshery service as a LoadBalancer type, whatever the external IP is, that external IP will match the IP provided over here, and that IP would be pointing to one of the nodes — so the traffic would be served from there.
D
In the current deployment of the playground we were running into a bunch of issues with cert-manager, so we are not using cert-manager — but I think in this case we should use it. Because it was a managed Kubernetes service, I couldn't debug a lot: everything is packed inside of a single binary and not exposed for us to debug.
D
So this is one advantage if we are not using a managed Kubernetes service and have a bare metal thing instead: we have a lot more control over what we can or cannot do.
D
Primarily, it makes sure that — for example, right now these certificates expire every 60 days or so, right? So it makes sure that after 30 days or so — there is a certain time period after which it sends out an issuance request, which runs the whole flow that was used to—
B
All I was trying to say is that it should be a nice-to-have — like, if we go to deploy it and we have any challenge, then we should just, in about one minute, generate a static cert that expires after a year, list it, and then come back to it later. It would be good for us to have it, you know, like you said.
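That one-minute stopgap is a single openssl invocation — a sketch; the domain is the playground's, and a real issuer (cert-manager with Let's Encrypt) should replace the self-signed cert later:

```shell
# Self-signed cert valid for a year, no passphrase on the key
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout playground.key -out playground.crt \
  -days 365 -subj "/CN=playground.meshery.io"
```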
B
So — I'll try to help with itemizing the tasks. What we want to do is break down these tasks and understand them, then understand if individuals are picking them up, prioritize them, and understand who might take them on. Anyone?
C
I have a question here: do we have any high-level or low-level design document, or something where we can see all the components together, so we can understand where to put the service and the load balancers and how to expose things — and when we forget something, we can just go back to the high-level design to see?
D
Yeah, so there is a Meshery architecture deck. It shows the Meshery architecture that we can look at — how all of Meshery's components interact with each other in different deployment models. Let me just paste the link in the chat. The other thing that you might find helpful is looking at a Helm chart — that basically gives you a lot of information about what is deployed, and all of that.
D
So, if you go into the install directory in the Meshery repo, inside of kubernetes/helm, you'd find that we have two charts: meshery-operator and meshery. Usually, when you install Meshery and connect to it, it automatically installs the meshery-operator — so we would just be installing meshery; meshery-operator would be installed by the Meshery server automatically. So here, in values.yaml—
D
—you can look at all the things that are installed by Meshery, and eventually this is the Helm chart that we would use to install Meshery in the cluster. There are other components as well — we have adapters, and all of these adapters are optional. So when you do a Meshery deployment, you can change your Helm chart values.yaml, or use flags, or whichever way, to make sure that certain pods or certain adapters don't come up.
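Toggling adapters at install time could look like this — a sketch; the chart repo URL is Meshery's published one, but the exact values key for each adapter is an assumption and should be checked against the chart's values.yaml:

```shell
# Add Meshery's chart repo and install the server chart
helm repo add meshery https://meshery.io/charts
helm repo update

# Install with one (hypothetical) adapter disabled via a values override
helm install meshery meshery/meshery \
  --namespace meshery --create-namespace \
  --set meshery-istio.enabled=false
```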
D
Ideally, we would want all the adapters inside of this particular deployment. In the vanilla Helm chart installation the default is, I guess, enabled: true for most of these adapters, so we would just spin up a Meshery server with all of the adapters enabled. Using this Helm chart would make sure that everything — like the service account which Meshery requires in order to be able to communicate with the kube API — everything is packed inside of this Helm chart.
C
Shouldn't we adapt this topology into, like, a draw.io diagram or something, to get accustomed to the new server deployment?
C
I said — like, if we take the Meshery architecture — yes, this one — and we remake it with the custom changes that we will do on the bare metal servers. Like here, if we look at this topology: where is the Service of type LoadBalancer here, for example?
D
Yeah — so I think we can add another slide to this architecture deck. This is a high-level thing, so it doesn't assume a lot. For example, it just says "this API", but in actuality that is a Service of type LoadBalancer if you have deployed it in Kubernetes — but since it says Docker or Kubernetes, it's not specific to the platform on which this is deployed.
D
So I know what your concern is, and I think we can have another slide where we expand this thing specific to Kubernetes, so that we can move stuff around there. So I think, yeah, the first task is to have another slide which details how the topology looks for Meshery and its deployments. And to your point: the only Service of type LoadBalancer would be Meshery's, which the clients communicate with; all the adapters also have a deployment and a service.
D
Yeah, okay — I get where you're going. So instead of using draw.io, what we can really use is MeshMap — the very thing that we're trying to deploy, to create the topology with. And I could feel Lee trying to force himself into the conversation just to point out: we have a tool for this; that's what we're building right now! So instead of — yeah, we can—
D
—we can build a design over here and collaborate on that design. For example, it would be easier to explain if I went in here and searched for a deployment — I could have searched... anyway — we can drag and drop deployments and services, group them into node groups, and draw edges between them. So I will draw a topology here, and then, yeah—
D
—we can collaborate on this topology so that we're on the same page over here, instead of in a draw.io. And since all the designs are public right now, if you were to log in, and if I were to change this to, let's say, "meshery playground", you would be able to see this thing — and you would be able to change these things as well.
B
You should be able to go in and actively — if you go back for a second, Ashish, to that design, just to make sure — so yeah, even better than draw.io, well, even better than its native support, you can actively collaborate. It's a little bit of the horse before the cart: this is a designer for Kubernetes workload deployments, and that's exactly what we're discussing — so yeah, we should try to use the thing to deploy itself. If we can — I mean, it's like: well, wait a second!
B
It doesn't mean — yeah, it'd be really interesting, because, you know — yep, there are a lot of nice benefits to that. So, there's a lot of use of the Equinix servers — these bare metal servers that we're discussing. There's a lot of use that the Meshery project and the Service Mesh Performance project make of these servers today. There's a GitHub Action that runs on a schedule — like every 12 hours, or four hours, or something — and it goes over and spins up about—
B
—12 bare metal servers; installs Meshery; Meshery installs a service mesh; Meshery generates a bunch of load; it analyzes that load, takes those performance test results and persists them; and then the GitHub Action deprovisions all those bare metal systems. So this community has already created an automated deployment of Meshery — which is what the Meshery playground is, a deployment of Meshery.
B
It's already provisioning bare metal systems, and already de-provisioning them. We should go look at that, because in order to get there quickly we should take a similar approach — we've already overcome some of these challenges. The approach that's within — oh man, this is awesome. How many people do we have collaborating?
B
The approach used within the existing GitHub Action is, like I said: provision bare metal servers, and when it does, it installs minikube. That's not what we desire for this deployment — we want it to be Kubernetes on the metal.
B
Having said that, it's still a great template — or still a good thing to understand. It deals with: how do you run a GitHub Action on github.com and have it remotely connect to and provision servers in a different data center? It already overcomes a few of these challenges, so using it as the boilerplate makes a lot of sense to me. What are you all thinking?
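Reusing that boilerplate could start from a scheduled workflow skeleton like this — a sketch; the script path and secret name are assumptions, not the existing action's actual contents:

```shell
mkdir -p .github/workflows
cat > .github/workflows/playground-redeploy.yml <<'EOF'
name: playground-redeploy
on:
  schedule:
    - cron: '0 */12 * * *'   # every 12 hours, like the SMP benchmark runs
  workflow_dispatch: {}
jobs:
  redeploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Provision and configure bare metal
        env:
          METAL_AUTH_TOKEN: ${{ secrets.METAL_AUTH_TOKEN }}
        run: ./scripts/provision-playground.sh   # hypothetical script
EOF
```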
E
I've been looking for the specific workflow — there's a couple that run scheduled benchmark tests, right? Do we want the self-hosted one? Like, is that part of the requirement?
B
Yeah, and there are a couple of bash scripts, I think — okay, okay — and it's installing minikube. It's like: well, that would work, but that's not what we're really going for, for a number of reasons; we don't necessarily want to do that. And yes, this particular action runs on a schedule — it's invoked by GitHub's scheduler.
B
There are a couple of tasks in there now. Does anybody — Ala, of what we've discussed, is there a particular item that you have in your mind that you might want, and be able, to dig into?
D
Okay, so I'll send out the kubeconfig to you, Ala, and I'll also send you the SSH keys so that you can log into the servers. For networking, I think the first thing that you can do is start on the MetalLB provisioning. And Lee — I think we should be giving access to this dashboard, right? This particular dashboard, to the people who would be contributing — they would need the access.
B
Yeah — nope, other than the automation, like the script: when we're done with this, at the end of it there should be automation that supports de-provisioning the full metal server and provisioning a new one. In part, the backup plan is sort of implicit in the fact that it's automated. But no — actually, that's a good point. To your point, that's part of what needs to be articulated in the tasks: how—
D
So another thing that you can do is go over this particular blog post and figure out how much of it is scriptable. It covers everything after you have your servers running, and most of these steps are just bash scripts. So all you would need to do inside of those workflows is SSH into one of these servers — actually both of them — and then do, line by line, whatever this thing says.
D
So: converting everything that this blog post says into a bash script, basically, and appending to it — the blog post ends where you have a running cluster, and our workflow will go even further and add more to it, like deploying. First of all, I will note down the caveat of what needs to be done — otherwise, if you just follow this blog post on the Equinix deployment, there is a caveat — so I'll—
D
—add that caveat. That caveat plus this post will get your cluster up, and then in the script come deploying Meshery and deploying MetalLB. So that's the next step. And then, if we have a static public IP, that should be good, so that we can hard-code it — or we'll pass it as an environment variable into the script. So if we have an IP there, we can create a MetalLB configuration, which would again be deployed using kubectl or anything else, once we have SSH—
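Driving that converted script over SSH, with the static IP passed in as an environment variable, could look like this — a sketch with placeholder hostnames and a placeholder IP:

```shell
# Placeholder server list and the static elastic IP destined for MetalLB
SERVERS="playground-01 playground-02"
export PUBLIC_IP="147.75.0.10"   # placeholder

for host in $SERVERS; do
  # Run the converted blog-post steps remotely, line by line
  ssh "root@$host" "PUBLIC_IP=$PUBLIC_IP bash -s" < setup-cluster.sh
done
```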
D
—access. So yeah, that would all be part of the whole script. You can go over this blog post and see if you can convert it into a script, and how feasible that is according to you. I'll take the responsibility of adding the MetalLB and public IP parts — or, if you want to do it, you can do it. So I think, yeah.
C
So if you search my — notice, if you search my email, you should find me here. Maybe in the organization you can add me.
D
I don't know how fine-grained the permissions are inside of this thing. So if we can give people access — because there is a global SSH key that I am using, which can be used across any of the servers — I guess we can generate another one. Okay, I guess I don't need to search over here; there might be some other place where I need to search, or not. I think this global search is for stuff within the org.
B
Yeah — it also may be the case that this particular user that is signed in doesn't have permission to do—
C
—that. Yeah, that's right, because in my test organization I have the options here for payment methods, billing, the team, and so on. Yes.
B
I'm taking that down as a follow-up ask, though. Initially it could be that, yeah — SSH credentials or kubeconfig.
B
Got you — and there it is. If there are other people on the call that wanted to actively pick up a task, you know, now's the time to raise your hand; otherwise we'll just list the tasks in the issues.
F
Ashish, first — this is Saeed. I was just seeing that we have to provision some Kubernetes clusters, right? So I just had one input: why aren't we using tools like Ansible to do so? These tools will make it easy to provision the cluster, configure it, and get the kubeconfig file of the provisioned cluster, which we require.
B
Yeah, it sounds good — that's a great question. I think, well, for my part, it's based on — and that's exactly why you're here — it's based on ignorance of what is really easy within the Equinix environment, and I'm going to write that down as a task. Like, I don't know — they have Tinkerbell; is that what we should be using? There are lots of tenants that Equinix has that have this same use case, so we don't need to go custom-write it.
B
Yeah, or yes, although I'll rephrase that: what would be really helpful to start with is understanding how we don't have to write our own Ansible scripts, that is, what is already available for us to leverage off the shelf.
E
Yeah, I wanted to suggest that. On using Ansible: they do have this Equinix Metal collection, and it looks like it's really declarative, so it's not as if we have to reinvent the wheel. Ideally (I'm going over the "Kubernetes in 10 minutes" guide) all of these commands could be run once we've provisioned the two metal servers from Equinix.
E
You could just use those targets and run these arbitrary commands, or use something declarative that helps us replicate those prerequisites on the Kubernetes clusters that we want to provision, right?
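The Equinix Metal Ansible collection mentioned above can drive provisioning declaratively. A minimal sketch of such a playbook follows; the module name, parameters, and values here are assumptions to be verified against the collection's own documentation, not a confirmed example:

```shell
# Write a sketch playbook for provisioning two bare-metal servers.
# equinix.metal.device and its parameters are assumed names; verify them
# against the installed collection (ansible-galaxy collection install equinix.metal).
cat > provision-metal.yml <<'EOF'
- hosts: localhost
  connection: local
  tasks:
    - name: Provision two bare-metal servers for the cluster
      equinix.metal.device:
        project_id: "{{ project_id }}"
        hostnames: [k8s-master, k8s-worker]
        operating_system: ubuntu_20_04
        plan: c3.small.x86
        metro: da
        state: present
EOF
echo "playbook written: provision-metal.yml"
```

From there, `ansible-playbook provision-metal.yml -e project_id=...` would be the declarative path, rather than hand-running the commands from the guide.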
B
Yep. So, just to clarify, there's a CLI from Equinix, and the CLI is called metal. So Mario's pointing out that you can programmatically interact with the Equinix API using metal, the CLI. And Harsh, and Mario: part of the action item that I'm writing down is to do a quick assessment of the potential use of the metal CLI and how far it goes. Does it stop at just provisioning the metal system and installing the OS, and where would we continue into provisioning Kubernetes?
B
What does Equinix have off the shelf? There should be a myriad of Ansible playbooks or whatever tooling there is to do most of what we're looking to do.
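As a starting point for that assessment, the metal CLI's device-creation command can be staged for review. The flag names below follow recent versions of `metal device create` but should be verified against the installed CLI; the plan, metro, and hostname values are illustrative:

```shell
# Assemble (but do not run) an Equinix Metal CLI invocation for review.
# Flags are from "metal device create"; confirm them against your CLI version.
cat > create-device.sh <<'EOF'
#!/bin/sh
metal device create \
  --project-id "$METAL_PROJECT_ID" \
  --hostname k8s-master \
  --plan c3.small.x86 \
  --metro da \
  --operating-system ubuntu_20_04
EOF
chmod +x create-device.sh
echo "review create-device.sh, then run it with METAL_PROJECT_ID set"
```

This covers provisioning the machine and OS; bootstrapping Kubernetes on top of it is the part the assessment still has to answer.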
B
So I'm just trying to take notes on that, and I'm going to share this real fast so we can sort of land the plane and be done with this call. I'll need help with this, because this is not complete. Does anybody know how to enable others to edit? It would be nice if multiple of you could update this list. I think you have to be a member of the org, and then maybe.
B
So here's what we've got thus far: use the existing GitHub Action as a boilerplate, and the ability to schedule, say on a 24-hour schedule, a reset.
B
Anyway, we start to write this out: provision, deploy Meshery. Basically, part of what we need to do is have a document about the various concerns when you go to deploy Meshery. Meshery as a project is in need of additional documentation around: how do you do an air-gapped deployment, or what are the considerations of a production deployment of Meshery?
B
So one task is to have an entry for Equinix in here. Another task is to have a "how to deploy Meshery to production" document. So I'm trying to.
B
Yeah. Ideally, part of what has been described is... I don't know about that specific point; maybe it should or shouldn't be, or maybe we should have three nodes, though that might be overkill. I do know that what's been asked thus far is to give the master node, the control-plane node, the ability to schedule workloads to itself.
E
Yeah, because there are two servers. I guess one could be the master and also have a worker-node process running; I'm not sure if that's the right term, because one thing is the physical infrastructure, and the other is the Kubernetes node, which is just software running on each of them, right?
E
Looking at the hosts in the "Kubernetes in 10 minutes" guide, there are some instructions for the master and others for the other nodes. But I guess one node could be the master and also run workloads, and the other, the non-master one, would just run workloads.
D
To be honest, yeah. But in our case, and I think it's ideal in this case, in a two-node cluster, we have removed the taint on the master so we can deploy on it as well. So we have one control plane and effectively two data planes, because the master node is also part of the data plane.
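The setup described here, letting the control-plane node also run ordinary workloads, comes down to removing the default kubeadm taint. A hedged sketch follows: node names are illustrative, the taint key differs across Kubernetes versions, and the commands only echo unless explicitly applied:

```shell
# Print (or, with APPLY=1 and kubectl available, run) the commands that
# remove the kubeadm control-plane taints so the master can schedule pods.
untaint_master() {
  node=$1
  for taint in node-role.kubernetes.io/control-plane node-role.kubernetes.io/master; do
    cmd="kubectl taint nodes $node ${taint}:NoSchedule-"
    if [ "${APPLY:-0}" = "1" ]; then
      $cmd || true   # the taint may not exist on this Kubernetes version; ignore
    else
      echo "$cmd"
    fi
  done
}
untaint_master k8s-master
```

The trailing `-` on the taint spec is what removes it; older clusters use the `master` taint key, newer ones `control-plane`.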
D
No, I mean, oh, we do care if the master is down, because the cluster will be unreachable. But in our case, I think, in a two-node cluster, having one node as the master is enough.
B
So one of the action items that I'll take is to make sure that those who are participating have the ability to edit the description (Harsh, Ella, and Mario are already confirmed), so that at least we can fully identify everything that needs to be done. For some of this, people might want to create new child issues; if this were an epic, that would work just fine.
B
Okay, good. And we don't all need to sit here and watch me type. The other thing is about getting people credentialed, getting them access to the system. Right now the idea is that it's a two-node cluster. Maybe that's really silly, and it needs to be four really small nodes or three medium nodes or whatever.
B
But you know, there are a lot of considerations about the health of the Meshery deployment itself. Are there certain dependencies in terms of the order in which these things spin up? If we have Meshery adapters, do those have to come up after the server? A lot of that is known already, but perhaps not written down. The answer to that one, by the way, is no: the adapters can come up
B
first, before the server. Meshery adapters dynamically try to connect to Meshery Server; they will sit there and retry for a little while, then stop, back off, wait a while, come back, and keep doing that. That type of thing needs to be written down, and we need to make sure we're considering it in the context of a Kubernetes deployment. Wait a second: how do you deploy Meshery today?
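The retry behavior described above can be sketched as a generic wait-with-backoff loop. This is illustrative only; the URL, attempt counts, and timings are assumptions, and Meshery's adapters implement their own retry logic internally:

```shell
# Poll a server URL until it responds, doubling the wait between attempts,
# the way an adapter waits out a server that hasn't come up yet.
wait_for_server() {
  url=$1 max=$2 delay=1 attempt=1
  while [ "$attempt" -le "$max" ]; do
    if curl -sf --max-time 2 "$url" >/dev/null 2>&1; then
      echo "connected on attempt $attempt"
      return 0
    fi
    echo "attempt $attempt failed; backing off ${delay}s"
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
  echo "gave up after $max attempts"
  return 1
}
```

For example, `wait_for_server http://localhost:9081 5` would keep an adapter-style client retrying while the server finishes starting.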
B
Yeah, you know, as a third phase to all this: eventually it would be really great if, from one instance of Meshery, you were able to take that MeshMap design and provision the playground.
C
I have a question about MeshMap. I tried to add it in my hosted Meshery as an extension, but it asks me to subscribe, and I don't know how to access MeshMap.
D
You can log back in. If you have the role now, you would be able to see MeshMap instead of the sign-up form.
B
Oh yeah, you're right.
B
Yeah, so at the risk of boring everybody: this is actually a really important thing, Ella. A lot of people, when they see the playground, are pretty intrigued; people sign up pretty fast. As a matter of fact, they sign up so fast that some of the resources on one of the systems that processes those sign-ups kind of ran out. So, awesome; let's get it into their hands.
B
But when we do: you're an intelligent individual, and the current flow is a little bit confusing. In terms of: great, you signed up, you waited patiently, you received an email that said access granted, hopefully.
B
And you were already running Meshery locally, and you're like, okay, great. So you go back to Meshery and you don't see it. And, being a sophisticated user of software, writing software and all that, you might log out and log back in.
C
Yeah, okay, I got it. But I didn't receive an email about access being granted when I filled out the form.
C
It was my first hands-on time on the Meshery platform; I logged in.
C
That's it. And first-impression users will just do what I did. And, for example, here I found another issue with the apps configuration.
C
You see this Bookinfo? It's the application that is featured in the documentation. If someone would like to do a first hands-on, I saw in the documentation to just go and deploy Bookinfo, but here it shows an issue regarding... wait a minute.
D
What you see is a feature. What you see as a bug is actually a feature.
D
Yeah, no: it looks like "If Not Present" to you, but on the backend it is actually corrected properly; a prettified config is shown to you. So what you see in the config doesn't necessarily correspond to what you would type into a Kubernetes config, because that view is designed primarily for the MeshMap client.
D
So that's why there is white-spacing in all of it. But at the time of deployment, if you do not manually change the config in any way, you wouldn't run into that issue.
C
With the spaces, the Kube API will respond and complain that the option is not valid, and that's correct: there is no "If Not Present" (with spaces) in Kubernetes.
C
Right. If I put it back like this, then it works; this way it should work, and this way it shouldn't. Yeah, the Kubernetes cluster will not accept this imagePullPolicy value, because it doesn't exist in Kubernetes.
C
Yeah, but here, this is what Meshery responded with. But if I add spaces, then Meshery initially accepts the deployment, and Kubernetes will refuse it.
D
And this is not a Kubernetes deployment; this is a Meshery design, and there's a difference between the two, although they look exactly the same. We do a bunch of white-spacing and prettification and so on, due to which you cannot take Kubernetes YAML, copy-paste that stuff into a design, and deploy it; and you cannot take a design, pull the Kubernetes stuff out of it, and use it. We do not guarantee that, because...
E
...for Meshery, but not for Kubernetes itself. I guess that's the real problem, even though it's a confusion on the user's side: a user who is familiar with Kubernetes will try to remove the spaces, thinking they're a problem. So that's probably also a UX thing.
D
So the expected value is the one with the space there, and the error you're getting here is a bug; this needs to be fixed.
C
Okay, so should I just paste this information here?
C
So it's the imagePullPolicy field.
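What's being demonstrated here comes down to Kubernetes accepting only three exact strings for imagePullPolicy: Always, IfNotPresent, and Never. A small check makes the point that the prettified, space-separated form MeshMap displays is not one of them:

```shell
# Kubernetes rejects any imagePullPolicy outside its exact enum values.
check_pull_policy() {
  case "$1" in
    Always|IfNotPresent|Never) echo "valid: $1" ;;
    *)                         echo "invalid: $1" ;;
  esac
}
check_pull_policy "IfNotPresent"    # accepted by the API server
check_pull_policy "If Not Present"  # rejected: spaces make it a different string
```

So a design can carry the prettified spelling for display, but anything sent to the Kubernetes API has to use the exact enum value.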
B
Man, that was super important to me. That was really helpful, and had you not shared your screen and walked through it, it wouldn't have been as obvious.
B
We have two UX issues that are maybe more frustrating than the potential bugs. On you not getting an acceptance email just yet: it might be a time-based thing, so it might still show up, because in part that's the way the email system we're using works. But anyway, there are about three items. We need to go consider some UX around the providers, and why or how it matters which one you're choosing.
B
When you fill in a form to sign up for playground or MeshMap access (which are quite similar; the playground uses half of what MeshMap does), that sign-up request goes to Meshery Cloud, and there's just a request queue. So, while we're on this call, I recognize the name and accept, or, you know, grant access, which does one of two things: either Meshery Cloud just sends the email directly, using a Gmail account that's just a service account, or it tags
B
you in Mailchimp, a free Mailchimp account, just to help track who is receiving what emails and when. And the way that we interact with Mailchimp, it usually takes about 15 minutes or so before the person receives the email; and 15 minutes has already gone by, and you're already past that. Pranav owns figuring this out: whether it's even Mailchimp, I
B
don't know; it may be. So anyway, Pranav owns figuring that out, and he can report back in Slack or what have you. Then we've got this UX around "None" and what that means. When you choose None, I think the answer there is probably where it says "sign up for MeshMap"; it probably needs to.
B
Yeah, yeah. Well, it's kind of this weird thing: even if you're using None, it would still be appropriate to show you the sign-up, because we don't know who you are, and maybe you don't have access to it, or maybe you don't know about it. So it should probably be shown to you.
B
So we could put it in, I mean, even if the message right there said, "Oh, as you sign up, make sure that you sign into blah blah blah." I know that I wouldn't read it, because I'm a lazy user and I just click the button. I saw a sign-up button, so I just clicked the button, and I'll read the rest of the text later. I just see an image, think "Oh, I like that," and then sign up.
B
The Bookinfo bug, I see she's going to work through. But beyond the bug, there's a UX concern: again, a user sees a bunch of YAML, and it looks like Kubernetes to me, and it looks like Kubernetes to everybody else. We need to make it really clear what it is, and what you can do to it, or what you should do to it.
B
Okay, do you think you have an audio check again? Okay, well, great. I have a couple of tasks just to tidy up some of the action items and to make sure that people have credentials and can edit the issue. And Ashish was saying, hey, our next touchpoint, in terms of a synchronous discussion on this topic, would be this coming Wednesday at the Meshery development call.
B
So that's it. No doubt this will be a topic there, but otherwise we can just advance progress through Slack and GitHub until we talk again on Wednesday, or using the mailing list if you want to. We might want to use the developers@meshery.io mailing list, though; I think some of the people on the other mailing list may be sensitive to email traffic they weren't used to.
B
Wonderful. Very nice to see you all. Ella, you'll let us know if you don't see MeshMap, because we're going to... yeah. Nice to see you all. I will post this recording onto the GitHub issue and let other people know that we're interested and that they can come check it out. It's nice to have a couple of new friends.
B
Harsh I knew; maybe Ella, I don't know. I need some more white in his beard; I'm not really sure what to think of him just yet. But all right, I'm out of here. No more bad jokes. Thank you all; see you guys later.