From YouTube: Meshery Development Meeting (Dec 22nd, 2020)
A: Good morning — yeah, good. Austin — it feels like you're right around the corner, just seeing the name. Austin feels like that.
D: Welcome, everybody, to the Meshery dev meeting. It's the 23rd of December — merry Christmas if you celebrate it, and if you don't, happy holidays anyway!
D: And that's it — that's all for the outstanding title. Let's start.
D: All right. So, if you want to attend any of the various meetings, there's a handy table with all the links — to the meeting minutes, to the YouTube recordings, to the Slack channels. Go check it out; give any of the links a shot.
D: Oh — is Ruth here?
A: No, she's not, but yep — that conference, FOSDEM, has been a long-standing tech conference focused in our area, and by our area I just mean this technological space. It's a good conference to get the word out at, to share at. So if any of you are thinking of trying to speak at that one, I'm here to help and others are here to help, particularly if you want to talk about any of the projects that are going on here, or the community here.
D: All right, let's start with the agenda, then.
E: Okay, I had some issues on my side previously — I didn't hear anything while he was speaking, so I had to close and restart Zoom.
A: Oh nice, yeah — no problem. We are just getting into your area, your bullet point. I figured you'd done some good work here, and you could maybe walk people through it. I think it's still evolving a little and you'd have some good suggestions, so yeah — please, please take people through what you've done.
E: Currently, this section here creates the draft release — the release gets drafted for every push — and we have modified it so that, after a release, it takes the drafted release notes and creates a release file, a markdown file, under docs/_releases, so that it automatically gets added to the collection.
E: So then we can check the releases page and view every release. There are still some amendments to be done on the release page as well. And in this section we couldn't completely automate the process — there's one manual step left. After creating a release note, we have to create a pull request to push the created release note to the repo. We tried our best to automate it completely, but with GitHub, in the current workflow and the current implementation, it's not possible. So currently it creates a pull request and labels it, and we need to accept and merge the pull request — that's it. We have that after every release. That's the process.
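The automation being described here looks roughly like the following — a workflow triggered on release that writes the drafted notes into docs/_releases and opens a pull request. This is an illustrative sketch, not the exact Meshery workflow; the file paths, the label name, and the use of peter-evans/create-pull-request are assumptions:

```yaml
name: Release notes to docs
on:
  release:
    types: [published]   # fires once per published release

jobs:
  release-notes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Write release notes into the docs collection
        run: |
          # hypothetical path; the real repo layout may differ
          mkdir -p docs/_releases
          echo "${{ github.event.release.body }}" > "docs/_releases/${{ github.event.release.tag_name }}.md"
      - name: Open a PR with the new release note
        uses: peter-evans/create-pull-request@v3
        with:
          title: "Docs: release notes for ${{ github.event.release.tag_name }}"
          labels: release-notes   # a label the drafter could later exclude
```

The one manual step that remains — accepting and merging this PR — corresponds to the last step above.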
A: Some of us — maybe even some of us on this call — have volunteered to tackle this before, but it wasn't until Sudhan came, by the way, that it got done. So this is nice. To reflect on what's written here: actually, the fact that you've pointed out that you can trigger a workflow based on a release —
A: If everyone can see the build-and-release.yaml and the ci.yaml — the two files in the workflows folder — these two, ci.yaml and build-and-release.yaml, are pretty similar, really similar.
A: One has a few more steps than the other, but the primary reason that we have two separate workflows here, between build-and-release.yaml and ci.yaml, was because we — you know, the other contributors in the community — have been trying to figure out how to, well —
A
You
know
trigger
builds
on
when
someone
creates
a
pr
and
when
they
push
to
that
pr,
when
they
create
a
branch,
push
commits
to
it.
It's
nice
that
the
ci
kicks
off
and
builds.
You
know
that
work
in
progress
or
tests
it
out
and
that's
desirable.
A: But anyway, my point is: we have two different workflow files that pretty much do the same thing, and they're separated because we didn't have great control over when they were triggered, so we were triggering them too frequently. Ideally, we would consolidate ci.yaml and build-and-release.yaml into a single workflow and just conditionally execute certain portions based on whether or not —
A: What we're saying is: this is the name of the workflow, and this workflow will be triggered when these things happen — when a push event happens, well, not against the docs directory, but against the master branch and in the presence of a tag. Which makes sense: what we're trying to say there is, hey, this should really only initialize when there's a release tag present, meaning this is intended to only kick off on release. So actually changing it to trigger on the release event might be the right thing to do here.
A: That might save us. I think there's still some repeating of ourselves between the two files, but that's something that we can address over time. Immediately, whenever you do a push to a PR, to a branch, these two workflows kick off, and it takes forever and it's a waste.
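The consolidation being proposed can be sketched like this — one workflow, with the release-only steps gated on the event type. An illustrative sketch, not the actual merged Meshery workflow; the `make` targets are placeholders:

```yaml
name: CI and release
on:
  pull_request:
    branches: [master]
  push:
    branches: [master]
  release:
    types: [published]

jobs:
  build-and-test:
    # runs on every trigger: PRs, pushes, and releases
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: make test   # hypothetical build/test entry point

  publish:
    # release-only portion, conditionally executed
    if: github.event_name == 'release'
    needs: build-and-test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: make release   # hypothetical publish step
```

With a single file like this, a push to a branch runs only the build-and-test job, rather than kicking off two near-identical workflows.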
A: All right, yeah! Well — thanks to you, just after that, I've learned about the release event as well, and now everybody here knows about it, hopefully. So yeah, hopefully we'll be able to be more intelligent and more selective on the triggers. This is good. Anybody else have comments for Sudhan?
A: So, hopefully everyone's got the gist. The summary is: we have Release Drafter as a bot that will capture all of the changelog — all the release notes, all the PRs that have merged since the last time a release was made. That's helpful, but that automation is just putting those release notes into GitHub's Releases section for each of our repos. The documentation for Meshery, in this case — that documentation site is a Jekyll site that runs on GitHub Pages.
A: It has a page called Releases, and it just captures those same release notes in a convenient location inside the docs. So far it has been a manual process: every time a release is made, someone goes over and manually copies the release notes across. What's happening here is that that's changed, so now the docs will be almost automatically updated. What will happen is based on — okay, well, hopefully I'm going to take you through this. Did you talk about that, Sudhan?
A: We're just going to have to give you, like, a tattoo on your shoulder about GitHub Actions. You can find yourself swimming in workflows.
E: I have one more situation — I think I previously mentioned it here. When we use these two labels, this release-note PR will be added to the next draft; this push will be recorded in the next draft. So if we can have a separate label, only for automated release documents, we can exclude it from the release draft. — Nice, yeah, that'll work.
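Release Drafter does support excluding PRs by label, which is essentially what's being suggested — something along these lines in `.github/release-drafter.yml` (the label name here is a hypothetical choice):

```yaml
# .github/release-drafter.yml
exclude-labels:
  - release-notes-automation   # PRs carrying this label stay out of the next draft
```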
A: We can designate one, honestly. The thing about something like that is: it'll work, it would be helpful, but it's kind of a nice-to-have, honestly. What'll end up happening is: you'll spend time working on it, you'll put that logic into Release Drafter somewhere in here, then someone else will review it, and — if you mistype as frequently as I do with my fat fingers — your YAML will be incorrect, we'll merge it —
A: — there will be an issue, then we'll have to go back and fix it, and it'll take longer than it's worth. You should work on something else — that's what I'm saying. And there are actually a bunch of other things, some of the ideas that I was tossing out earlier — it's not worth it; it's just more code to maintain. Forget it — it doesn't really matter if we have an extra pull request every release that says release notes were automatically merged.
A: At least for now — just because actually getting this right, for my part, I think it took me, I'm not sure, 12 PRs, because it's hard to test GitHub Actions, GitHub workflows.
A: Some of you are familiar with utilities that let you run GitHub's runners locally on your machine, although it just hasn't worked for me — the environments aren't the same, the environments aren't the same — but because of that, you can try to test GitHub Actions workflows locally. Sudhan, have you been testing them locally?
A: Yep, but even at that, you didn't have access to the same secrets — which, by the way, is something that we can enable in the repo. The problem with enabling it is that the secrets then become available to any of the contributors, of which there are like 150 members. So anyway, my point in saying all that is that that other enhancement you're talking about — since you can't test locally very well, and since... yeah. But so far, so good here. This is great.
D: Right — instead of a tattoo, maybe a member profile would be the better way here. We should get you one, Sudhan.
D: All right — Drew, you're up. Thanks.
B: We are now using the service mesh proto, which is present in our SMP protocols, where we have defined all the meshes, to define the request for one particular mesh. Before, we were doing it manually using strings, but since we have a place where all the meshes are defined properly, it is logical to use that for the request itself.
B: So now we are using that from the SMP protocols itself, and along with that, there are also a few changes we made to the structure of the data which we are getting from the SMI tests. So the response is a bit renewed now; mostly the changes are in the details which you get while invoking each of the specs. So yeah, those are a few other changes — and also, including that, I have imported controllers from which we could check the current —
B: — what do you say, the current status of the test itself, using the controllers — the info and health controllers, which are part of MeshKit itself.
B: Well, we haven't implemented the logic yet, so we would have to do that, but if you want to see what those protos look like, let me try — yeah. It is a pretty simple and normal one, as you can see: the status itself, and —
A: Are these controllers, and this health, a standard part of the Operator SDK?
A: Well, actually — now that I'm realizing we're talking about MeshKit, it's a bad question.
B: Yeah, yeah — and I guess one thing to add would be: we are also updating the SMP protos, basically the service mesh definitions. We are also adding a few things, like the version of the mesh and stuff like that, which we weren't getting — all we had was a list of all the meshes. So we are now also defining a variant where you can also specify the version.
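As a rough illustration of the kind of change being described — carrying a version alongside the mesh type — the shape is something like this sketch; the message and field names here are hypothetical, not the actual service-mesh-performance proto:

```protobuf
// Illustrative only; the real SMP proto differs.
message ServiceMeshInfo {
  ServiceMeshType type = 1;  // the existing enum listing all the meshes
  string version = 2;        // newly added; optional mesh version
}
```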
A: Yeah — how the heck did we miss that? Do you mind briefly going over to the other proto, the —
A: Well, okay, but I think the intention for the SMP version on line 140 was to track the version of SMP itself — although this variable very well may need to change. And I think, even if the service mesh version was called out in the service mesh in this proto — yeah, there's some work that needs to be done here. It probably does make sense to have that version number called out in the service mesh proto. I think that's a good move.
B: Yeah — it makes logical sense, and I don't think it's compulsory for anyone to use it. It's not a mandatory field. So that's that.
A: Yeah, that build is kind of a question, but — since we know that a portion of this proto isn't being used, it may make sense to... okay, yeah. You know what, this is a good conversation for another call, because there are a few considerations to account for here. But the PR that you have looks pretty good — I mean, I think that's the right direction.
A: The way that we approached it was a temporary approach — yeah, okay. And just for my own, for everyone's edification, it's probably worth looking at it inside of the Meshery code, where we import this spec, this package — oh.
A: Actually, yeah, that last one — oh yeah, that would work as well; that's fine. So, Drew, I think you were conceptually taking the right approach, where you're pinning what version we want to pull in, and that's a valid URL that you have there, but —
A: So, on this final note: not just as a temporary solution, it's actually appropriate to pin to a specific version around protos like that — that's okay. But this particular format, the way in which it was done, was causing the build to fail, and so the solution was to do a `go get` on this package — basically, to specify it in go.mod instead of here.
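Pinning a Go dependency at a specific version, recorded in go.mod rather than in the source import, looks roughly like this; the module path matches where the SMP package lives, but the version shown is a placeholder assumption:

```
go get github.com/layer5io/service-mesh-performance@v0.3.2

# which records the pin in go.mod as:
#   require github.com/layer5io/service-mesh-performance v0.3.2
```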
A: This is good work, though — this helps us move forward. I mean, that's the right thing to do.
B: It was such an easy answer over there — okay, yeah. So that's that, I guess. If anyone has any questions — otherwise we can move on to the next agenda item.
A: Okay, great — thank you, Drew. So, I don't believe he's on the call; I'm not sure that he got the invite today. It doesn't necessarily need to be presented by him. Here's an open question for everyone here and everyone who's watching the recording, and that is: there's consideration to use distroless as the base image for Meshery adapters. Anyone who's run Meshery is both extremely pleased and somewhat dismayed at the same time by the fact that there are — well, soon to be — well, actually, it's a good point.
A: Gosh, there are soon to be 10 adapters — Kush, you've got some news, I think, to share in that regard. But so, they're both pleased about it and dismayed by it, because you have to sit there and watch as container images download and get pulled from Docker Hub. So the smaller those are, the better — in that sense, it makes intuitive sense to go ahead and move to the smallest-size image.
A: The challenge, though — there's a trade-off, like with almost everything in life and all the decisions that we make here — is that when you do use a distroless base image, you lose things like ping or netstat, or a bunch of other basic troubleshooting Linux utilities that you would naturally expect to be available when you're trying to troubleshoot things like, oh, I don't know, connectivity to Minikube from the adapter, and it becomes a little bit bothersome.
A: I don't know that everyone's well oriented to doing something like a docker attach, or to how to troubleshoot running containers that don't have dev tools in them — there are other approaches — but that's mostly what I, what Siddhant, was going to talk about today: just kind of a discussion around proliferating the use of distroless. Comments on this?
F: Hey, just to point out — since the concern is around losing certain Linux commands for diagnostics, I think we can also look at Alpine images, which are smaller but include all these utilities.
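The trade-off under discussion can be sketched as two alternative final stages of a multi-stage build — the image tags and binary path here are illustrative, not Meshery's actual Dockerfiles:

```dockerfile
# Build stage (common to both options)
FROM golang:1.15 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /meshery-adapter .

# Option 1: distroless — smallest, but no shell, ping, netstat, etc.
FROM gcr.io/distroless/static
COPY --from=build /meshery-adapter /
ENTRYPOINT ["/meshery-adapter"]

# Option 2: alpine — slightly larger, keeps busybox utilities for debugging
# FROM alpine:3.12
# COPY --from=build /meshery-adapter /
# ENTRYPOINT ["/meshery-adapter"]
```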
A: So, Ishaan — great suggestion. Also, Ishaan, welcome; nice to see you. Yeah, a great suggestion — and as we would have it, unless I'm entirely mistaken, Alpine is actually the image that we're using currently across the board. So that's a beautiful suggestion, and that's in fact kind of exactly the point: well, here's another base image that's even smaller — shaves off a few megs.
A: There are a few — like three — things to help us overcome this challenge. One: enable users with easy control over not deploying all 10 adapters if they don't want to use all those meshes, and that way they're not as concerned about how long it takes to download them. Okay, that addresses, or helps, one aspect. Another aspect is: well, fine, I'm just using one adapter, but I'm having some trouble with it. I'd like to troubleshoot it, so I'll go ahead and exec in there.
A: Yeah — and we have a little bit of that. An example being: if you've all used mesheryctl, the CLI — if you do a `mesheryctl system logs`, you can see logs from the adapters. Okay, that's how you — like, we would need to bring some additional troubleshooting tools, some additional instrumentation, into the adapters to help overcome the lack of tooling in distroless.
A: So, having said all that — unless others feel differently — it sounds like we're probably on a path to moving toward distroless. There have been about four people who joined recently who said they'd like to work on some DevOps things: a short article, a short troubleshooting guide in the Meshery docs, as to how to approach troubleshooting in a distroless environment.
A: It could be a good thing. And then, with that — Ishaan, thanks for mentioning that. Also, Ishaan, welcome! Shredi will tell you: it's your first time, and so it's time to orient you. Please say hi to everybody — please introduce yourself, if you would, just so everybody can get to know you.
F: Hey guys — I've just joined the community now, and basically I'm working as a product engineer with Gojek, and mostly my interest is around custom operators and the Kubernetes API. So that's something that I'm looking to contribute to the Meshery project, to start with.
A: There's a contributor here, Kush Trivedi, who had, I think, spent a little bit of time at Gojek. Oh — I don't know that his name would ring a bell, but — nice. All right, well —
A: For my part, I promise to follow up with you on an introduction to operators. As a matter of fact, there's a gentleman, Dev — if you haven't been introduced, or haven't had a chance to say hi, poke him.
A: If you would — he's ready and waiting to give you a tour of the Meshery operator and MeshSync. Oh, good.
A: Okay, yeah — and then let me give an update on his behalf a little bit. All of you are aware of our, I don't know, almost two-time running MVP on this call, Mark Martin. Since the last time that we met, since this last week, he has taken it upon himself to help create an easy way to deploy Meshery inside of a VM-based environment, and so he's put together a Vagrant package with an Ansible role for, as near as I can tell —
A
Centos-Based,
vms,
or
maybe
other
others
in
hyper-v,
I
think
as
a
target
environment,
I'm
not
quite
entirely
sure
of
all
of
the
ways
in
which
you
can
configure
this
and
deploy
it,
and
maybe
it's
not.
Maybe
I'm
misinterpreting
what
he
had
said,
and
maybe
this
isn't
specific
to
hyper-v.
He
was
just
using
that
as
an
example
hypervisor
to
be
able
to
take
this
vm
based
environment
and
deploy
using
vagrant.
So
that
was
that
was
pretty
neat,
something
that
well,
if
you're
familiar
with
the
kubernetes
project,
they
have
forget
what
they're
called.
A
Basically,
you
know:
community
plug-ins
and
community
extensions
like
this
may
fall
into
a
category
of
things
that
are
that,
I'm
not
that
I'm
not
quite
sure
how
much
time
everyone
would
be
able
to
how
much
how
much
support
we
would
be
able
to
offer
for
this
ongoing
and
spend
time
on
it.
But
we
do
want
to
support
this.
It's
pretty
fantastic
yeah,
I
guess
yeah.
It
does
support
hyper-v
and
virtualbox
pretty
awesome.
We
want
to
uplift
the
work
and
offer
it.
You
know,
try
to
pull
it
into
the
project.
A
We
could
place
it
into
the
mesh
reinstall
folder
and
and
see
if
it's
a
value
to
people
so
sweetie.
I
think
just
assuming
that
martin
is
going
to
be
desirous
of
that.
Will
you
also
create
a
github
issue
there
as
well?
I
think.
A: All of you ephemeral people — I guess that's the... okay, cool. That's pretty awesome. All right, we've got 17 more minutes. Who's interested in talking about — well, I guess, yeah, this goes without saying: if Sean is interested in some of this. He just said: let's talk about some things that have happened recently with respect to Meshery architecture and some new components. Actually, before we do —
G: So the default file name is config.yaml, and I just run, I don't know, any command, and it creates a config.yaml with the defaults — because the config file did not exist — and it creates a `local` context with the endpoint localhost:9081.
G: You can modify this directly, but I would not recommend that — it's better if you don't mess with this. If you just want to use the CLI: `system context create`, and then, let's just say, `temp` — okay, sorry, I forgot the URL — okay, so I pass in a token and a URL, and create.
G: Okay, so it added a `temp` context — so you have another context here. Okay, it opens — the token didn't get added here. Yeah, you have the endpoint here, and this is currently not working; this is not a valid endpoint. If I put a valid endpoint here, you'll be able to access the Meshery server. Other than that, you have `system context create`, you have `context delete` — or I'll just demonstrate switch first. So: switch — I have `local` here.
G
So
it
now
you
have
switched
to
local,
so
the
current
context
keeps
track
of
which
context
you're
currently
working
on.
Then
we
do
have
adapters
right
now,
but
it
it's
pointless
to
actually
add
any
of
those,
because
you
see
this
adapter
right,
so
the
adapters
right.
This
would
actually
be
the
initial
list
of
adapters
that
you
would
like
to
launch
your.
G
How
do
we
say
it?
The
measuring
server
which,
whichever
adapter
you
want
to
create,
create
while
creating
the
measuring
server?
This
will
keep
a
list
of
that
and
adding
this
on.
The
cli
then
would
be
annoying.
So
you
could
simply
do
like
linkadi
istio
and
I
guess
over
osm
and
if
you,
if
you
do,
system,
start
and
pass
in
context,
so
this
is
not
working
right
now,
so
this
this
hasn't
created.
Yet
so,
and
you
would,
I
guess,
pass
in
context.
G
This
was
the
idea
right,
passing
in
context
or
simply
running
start
and
having
context
being
taken
from
the
whichever,
whichever
is
active.
G: Yeah, okay — so the part on launching some number of adapters based on the context is something that's still to be done. Other than that, we have contexts. So this is `system context`: you have create, delete, switch, and view. It's basically self-explanatory — you create a context, you can delete it, you can switch between contexts, and you can also view which is your current context. The delete one's remaining, so — delete. I guess let's just run it — and I've got some error: invalid command. Delete — okay, `system context delete`.
G
No
context
today:
okay,
it's
deleted,
but
this
is
I'll.
Just
save
this
first,
let's
just
see
what
I
get
and
now
let's
delete
it:
yeah
yeah,
okay,
so
I
deleted
the
context
here
and
so
this
this
would
be
your
main
configuration.
The
other
thing
you
could
also
do
is
system
or
any
command
you
could
pass
in
config
and
then
the
pointer
to
the
config
file.
G
So
you
have
context
and
not
context
conflict
and
your
file
will
be
home
dot
nursery,
and
this
is
conflict..
So
here
it
is,
you
can
pass
in
a
config
file
separately.
I
can
have.
I
can
have
a
separate
configuration,
so
this
would
help
in
cases
where
you
have
where
you
have
already
or
pre-configured
configuration
with
you.
So
let's
say
you
have.
You
are
using
a
configuration
of
another
user
and
you-
and
you
are
like
you-
want
to
test
it
out.
G
You
can
simply
ask
them
the
configurations,
the
config
file
and
you
could
use
the
config
file
to
work
with
it
and
that's
more
on
the
con
context
structure
and
how
things
work
so
I'll
just
make
I'll
like
take
you
through
the
structure
of
the
like
how
the
context
look
so
just
look
configured,
so
you
have
three
properties
con
context,
current
context
and
tokens
and
in
each
context
you
will
have
an
endpoint
token
path
platform.
So
this
this
will
be
gke,
gk,
kubernetes,
local
or
mini
cube
or
anything.
G: — whatever the host Kubernetes environment is. And then you have adapters — the list of adapters for a particular context. Then, for each token, you would have a name and a location, where the location would be the path of the token file that exists for that particular context.
G
So
each
context
would
would-
or
you
can
say,
may
or
might
not
have
different
tokens
so
in
in
that
case
we
could
simply
keep
keep
a
pattern
here
which
so
you
don't
need
to
pass
in
dash
like
you
should
for
now
now
you
will
have
to
actually
pass
in
token
with,
I
don't
know
the
token
path,
but
once
contacts
are
implemented,
you
don't
actually
have
to
pass
in
token
unless
it's
nil
in
the
system
right.
So
if
we
file
like
this,
you
don't
need
to
pass.
G
In
the
token
it
will
simply
be
fetch
from
your
context.
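Putting the pieces just described together, the config file looks roughly like this — the exact keys and values here are an illustrative reconstruction of what was demoed, not copied from the real mesheryctl config:

```yaml
# ~/.meshery/config.yaml (sketch)
contexts:
  local:
    endpoint: http://localhost:9081
    token: default        # name of a token defined below
    platform: docker      # or kubernetes, minikube, gke, ...
    adapters: [linkerd, istio, osm]
current-context: local
tokens:
  - name: default
    location: ~/.meshery/auth.json   # path to the token file
```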
A: Nice. Who has questions or feedback here? Now, here's the thing: generally, most of you get to slide on giving feedback — not this time.
F: I haven't touched mesheryctl — I've just tried the dashboard and the Katacoda scenarios up till now.
F: One thing I had a question about: why do we need a separate config file for mesheryctl? Because I've used istioctl, and it just picks up things from your kubeconfig. Yeah — it just uses your kubeconfig file, whatever it is, and I think the kubeconfig is using a very similar pattern to what we just saw in the demo right now. It's already baked in, and kubectl allows you to configure different —
A: Yeah, that's a good — that's a good set of thoughts that'll probably take us through the end of the meeting. That's great! So, one of the ways in which — and maybe that's the right thing to do, I'm not sure, but talking through it — one of the ways in which Meshery is different from Istio is that Meshery, actually, by default today, deploys to a Docker environment.
A
If
that
docker
environment
is
running
in
kubernetes,
great
it'll
run
there
and
it'll
it's
self-aware
of
its
environment,
meaning
when
it's
mastery
server
boots
up
it'll,
look
to
see
if
it's
inside
of
kubernetes
and
if
so
it'll
you
know
mostly
behave
the
same
but
it'll
it'll
have
a
slightly
different
configuration.
A: I mean, but the thing is, it's still — you know — we're being heavily inspired, intentionally, by kubectl contexts. So it's a great highlight: hey, why even bother, if users are already invested in and know kubeconfig, and you're basically trying to provide that. But, even for my part, I'd like to justify that even further, because just one justification isn't usually enough.
A
It's
that
okay,
so
we
think
about
it.
So
so
you
can,
you
know
you
can
deploy
your
sdo.
You
and
it'll
take
different
configuration.
It's
a
configurable
deployment.
A
You
know
horrific
use
ux,
and
so
I
wouldn't
I
wouldn't
give
them
the
benefit
of
the
doubt
of
this
next
statement,
but
but
maybe
they've
gotten
it
right
in
terms
of
a
balance
between
leaning
into
you
know,
cube
config
in
this
case
and
leaning
into
kubernetes,
while
at
the
same
time
also
providing
an
optional
path
for
basically
what
we're
showing
here,
which
is
the
the
ability
to
configure
istio
and
it's
now
you
know-
and
it's
you
know,
and
so
to
use
the
example
it's
from
mesri
right
now.
A
It's
what
adapters
do
you
want
to
run
with?
No
doubt
other
forthcoming
configurable,
you
know
configuration
parameters,
so
the
istio's
got
profiles,
but
they
lean
into
making
sure
that
you
can
just
like
get
your
kubernetes
connection
kind
of
you
know
immediately
because
it's
referencing
the
same
file.
A
It
might
be
that
there's
a
learning
to
take
from
there,
because
one
of
the
because,
because
measuring
when
you
deploy
it,
it
does
seek
out
your
cube,
config
file
and
attempt
to
just
use
that
and
attempt
to
just
assume
that
you
know
you
might
want
meshweight
to
talk
to
your
to
that
kubernetes
cluster.
That's
your
current
context
and
it
lets
you
choose
between
contexts
but
but
we're
not.
A
While
we
took
a
very
cute
config-centric
approach
to
how
it
is
that
mesh
reservoir
spins
up
we're
not
entirely
doing
the
same
thing
with
measuring
ctl,
and
so
you
know,
as
we
think,
through
this
area
there.
A
This
is
a.
This
is
a
good
thought,
because
one
of
the
things
that
we've
got
going
on
right
now,
it's
a
little
bit
of
a
point
of
confusion
that
needs
cleared
up.
It's
it's
not
the!
A
I
don't
know.
I
don't
know
that
it's
the
best
user
experience
or
or
not,
so
what
okay?
So
what
I'm
referring
to
is
that
when
you
want
to,
when
measurey
server
wants
to
communicate
with
the
kubernetes
it
will,
when
you're
running
meshre
outside
of
kubernetes,
it
asks
for
your
cubeconfig
file
and
it
expects
that
you'll
have
certificates
in
that
file,
so
that
meshrie,
can
you
know,
run
as
that
user
or
mimic
that
user
you'll
give
it
your
now
those
certificates
may
or
may
not
be
inside
of
the
cubeconfig
file.
A: I don't know if you're able to share again, but there's `mesheryctl system config` — you'd say `system config minikube`, and the CLI would help minify — like, flatten and minify, export those certificates and get them into a new kubeconfig.
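The flatten-and-minify step described here corresponds to standard kubectl behavior — roughly the following, assuming the target context is the active one:

```
# keep only the current context and inline the certificate data
kubectl config view --minify --flatten > meshery-kubeconfig.yaml
```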
A: So that then you can pass that to Meshery. In some of what we've been doing there, we've had Meshery server run some of that code to minify, to help prepare environments. So with this `mesheryctl system config`, you can say `system config eks`, `aks`, `gke`, `minikube` — these different Kubernetes environments each have a little bit of a different preparation process for how it is that you would get your kubeconfig —
A
You
know
configured
in
such
a
way
that
you
can
speak
to
those
clouds
or
to
those
managed
kubernetes
systems
and
we're
yeah
we're
trying
to
help.
You
provide
a
nice
user
experience,
helping
people
overcome
those
initial
setup
challenges,
but
some
of
that
code
runs
in
mesri
server,
which
means
when
you
run
mestre
ctl
as
a
client.
A
You
actually
need
to
have
it.
You
need
to
have
a
token
to
be
able
to
speak
to
mesherie's
api,
and
so
you
need
to
stand
up
measury
log
into
meshri
grab
your
token
come
back
to
this.
Cli
run
this.
It's
not
that!
That's
so
awkward
that
you,
you
know,
you
know
if
you've
administered
systems
or
integrated
systems
before
it's
not
awkward
or
uncommon
that
you
go
to
one
system.
You
get
an
api
key.
You
copy
that
you
go
to
the
other
system,
you
paste
it
in.
A
I
think
it's
time
for
an
overhaul
there,
like
that
code,
problem,
that
that
ease
of
use
code
for
configuring,
your
connection
to
eks
or
aks
or
whatever
probably
needs
to
live
within
the
mastery,
ctl
binary
and
not
necessarily
within
mesri
server.
I
think
both
of
those
systems
might
need
to
be
able
to
invoke
those
same
the
same
functions,
and
maybe
we
were
trying
not
to
repeat
ourselves,
but
it's
probably
causing
a
bad
user
experience.
A: Yeah, I mean — Ishaan, another melding of what you were saying is: hey, when you do `mesheryctl system` —
A
Well,
you
know,
you'll,
give
it
a
name
and
then,
like
any
rude,
was
showing
you.
You
give
it
probably
two
other
parameters:
the
url
to
the
mastery
server
endpoint
and
a
token
now.
A
It
might
make
sense
right
there
that
there
that
we
might,
as
we
take
next
steps
on
the
context
command,
we
are
intending
to
say
to
identify
like
onion
root,
had
showed
what
platform
it
is
that
you're
deploying
to
so
is
that
mesh
reservoir
that
context
that
you're
pointing
at?
Is
it
a
docker
environment
or
is
it
a
kubernetes
environment?
So
we
would
use
this
platform
as
a
parameter
to
track.
A: Yeah, thanks — and that goes for you as well, Drew. Anirudh needs feedback — feedback other than mine — because...
G: What we are aiming at here is: once we have the foundation of contexts in place, we can actually automate the part about populating the context. So if I did, let's say, `mesheryctl system config minikube`, it would actually fetch the IP of my Meshery server in Minikube, get the required details, and pre-populate the entire context.
G
So
next
time,
when
I
want
to
work
on
it
or
whenever
I'm
working
on
it
I'll
actually
get
the
required
parameters
already
pre-populated.
So
I
can
simply
work
directly
on
the
mini
game.
Instead
of
you
know
having
to
mess
around
with
the
configuration
on
my
own.
A
Yeah,
you
might
run
into
a
bit
of
it.
That's
great
honey
that
watch
out
for
the
egg
before
the
chicken
before
the
egg
challenge
of
like
trying
to
go,
identify
like
trying
to
communicate
with
kubernetes
to
identify
where
actuary
server
is
deployed,
while
trying
to
configure
the
connection
to
kubernetes
yeah
anyway.
A: This addresses that and a lot more. So, Anirudh, just to confirm — I think there's a stipulation in here about some user acceptance tests; I'm just trying to refresh my memory. Okay: any given context includes an adapters collection with zero or more adapters defined, and if there's — okay, to accept one or more... I guess what I'm trying to figure out, what I can't remember, is what behavior we specified if you don't have any adapters. Does that mean that if you don't have any —
G
I
think
it's
better
to
actually
publish
the
entire
adapter
list,
because
if
they
like
also
passing
adapters
in
a
flag,
it
might
not
be
the
best
way
here,
because
it
how
how
would
the
like,
if
you
have
single
flag
like
adapter,
one
adapter
to
it
after
three,
it
would
be
easier
to
address
those
in
the
cli,
but
addressing
the
entire
array
that
that
to
pass
in
a
flag
is
something
different.
G
So
what
we
can
do
here
is
have
a
separate
command
for
adding
adapters
or
maybe
I'll,
have
to
look
into
if
flags
actually
support
an
array
input.
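On the array-input question: Go's standard flag package doesn't ship a slice flag, but a repeatable flag can be built with a custom flag.Value — a small sketch (the `--adapter` flag name is just for illustration; in a Cobra-based CLI like mesheryctl, pflag's StringSliceVar offers this directly):

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// adapterList collects every occurrence of a repeated --adapter flag.
type adapterList []string

func (a *adapterList) String() string { return strings.Join(*a, ",") }

func (a *adapterList) Set(value string) error {
	*a = append(*a, value)
	return nil
}

// parseAdapters parses args like ["--adapter", "istio", "--adapter", "linkerd"].
func parseAdapters(args []string) ([]string, error) {
	var adapters adapterList
	fs := flag.NewFlagSet("system start", flag.ContinueOnError)
	fs.Var(&adapters, "adapter", "adapter to launch (repeatable)")
	if err := fs.Parse(args); err != nil {
		return nil, err
	}
	return adapters, nil
}

func main() {
	adapters, _ := parseAdapters([]string{"--adapter", "istio", "--adapter", "linkerd"})
	fmt.Println(adapters)
}
```

The alternative discussed — a separate subcommand for adding adapters — avoids the flag question entirely by writing to the adapters list in the config file instead.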
A: Yep, makes sense. Yeah — or this just really doesn't get used, to your point. And this is actually why I was kind of bringing it up: I don't know that we specified it here; maybe we did in the doc. Basically — I think we did, in the doc — it says when you create a context and you give it a name, some of the variables may have —
A: — okay — values that we would take from, basically, a hard-coded config that includes, by default, all the adapters. And yeah, if we do support an adapters flag, that would probably be rarely used — generally only used if someone wants to specify one — or otherwise we just expect people to edit the config file.
G
Then
we
can
just
leave
it
to
that
right.
So,
if
adapters
like,
if
the
adapters
are
empty,
then
it's
basically
zero
running
metric
server
with
zero
adapters.
G
If
there
is
any
adapter
run
those
or
maybe
pass
pass
in
a
flag
that
actually
says
like
push
all
or
something
like
that,
so
push
all
the
adapters
or
something
like
that.
That
would
actually
add
all
the
adapters
to
the
adapters
list
and
then
create
the
environment.
A
All
then
we're
not
meeting
on
friday
have
a
merry
christmas.
Have
some
some
new
year's
in
there
as
well?
Probably
we
will
not
have
the
community
meeting
for
two
weeks
in
a
row,
so
the
friday
community
meeting
the
next
two
fridays,
but
we
will
this.
You
know
on
monday,
tuesday,
wednesday
thursday.
Those
meetings
are
no
I'm
sorry,
monday,
tuesday,
wednesday
of
next
week.
We're
good
thursday
will
be
new
year's
eve.